• gedhrel@lemmy.world
    7 months ago

    Casey’s video is interesting, but his example is framed as moving from 35 cycles/object to 24 cycles/object being a 1.5x speedup.

    Another way to look at this: it’s a 12-cycle speedup per object.

    If you’re writing a shader or a physics sim this is a massive difference.

    If you’re building typical business software, it isn’t; that 10,000-line monster method does crop up, and it’s a maintenance disaster.

    I think the takeaway that “clean code principles lead to a 50% cost increase” is a message that needs a degree of context.

    • sugar_in_your_tea@sh.itjust.works
      7 months ago

      Yup. If that 12-cycle speedup is in a hot loop, then yeah, throw a bunch of comments and tests around it and perhaps keep the “clean” version around for illustrative purposes, and then do the fast thing. Perhaps throw in a feature flag to switch between the “clean” and “fast but a little sketchy” versions, and maybe someone will make a method to memoize pure functions generically so the “clean” version can be used with minimal performance overhead.
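
      A minimal sketch of the idea above, with made-up names (`total_area_clean`, `total_area_fast`, the rectangle data) purely for illustration: keep the readable version next to the optimized one and switch between them with a flag, so the “clean” code documents what the “fast” code is supposed to compute.

```rust
// Hypothetical example: two implementations of the same computation,
// selectable via a flag. The names and shapes are invented for the sketch.

fn total_area_clean(shapes: &[(f32, f32)]) -> f32 {
    // Readable reference version: sum of width * height.
    shapes.iter().map(|&(w, h)| w * h).sum()
}

fn total_area_fast(shapes: &[(f32, f32)]) -> f32 {
    // Stand-in for the "fast but a little sketchy" hand-optimized version.
    let mut accum = 0.0;
    for &(w, h) in shapes {
        accum += w * h;
    }
    accum
}

fn total_area(shapes: &[(f32, f32)], use_fast: bool) -> f32 {
    // Runtime flag for the sketch; a Cargo feature flag would also work.
    if use_fast {
        total_area_fast(shapes)
    } else {
        total_area_clean(shapes)
    }
}

fn main() {
    let shapes = [(2.0, 3.0), (4.0, 5.0)];
    // Both versions must agree; a test like this guards the fast path.
    assert_eq!(total_area(&shapes, false), total_area(&shapes, true));
}
```

      Keeping both behind one entry point means a test can assert the two versions agree, which addresses part of the maintenance risk.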

      Clean code should be the default, optimizations should come later as necessary.

      • coloredgrayscale@programming.dev
        7 months ago

        Keeping the clean version around seems like dangerous advice.

        You know it won’t get maintained as changes and fixes come in. So by the time someone needs to rewrite that part, or the whole application, many years later (think migrating to a different language), it will be more confusing than helpful.

    • bonus_crab@lemmy.world
      7 months ago

      For what it’s worth, the cache locality of Vec&lt;Box&lt;dyn Trait&gt;&gt; is terrible in general; I feel like if you’re iterating over a large array of things and applying a polymorphic function, you’re making a mistake.

      Cache locality isn’t a problem when you’re only accessing something once, though.

      So IMO polymorphism has its place for non-iterative-compute work, e.g. web server handler functions and event-driven systems.
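
      A small sketch of the layout difference, with hypothetical `Square`/`Circle` types invented for illustration: `Vec<Box<dyn Trait>>` stores one heap pointer per element, while an enum keeps every element inline in one contiguous buffer and dispatches with a `match`.

```rust
// Hypothetical shapes to contrast trait-object layout with enum layout.

trait Area {
    fn area(&self) -> f32;
}

struct Square { side: f32 }
struct Circle { radius: f32 }

impl Area for Square { fn area(&self) -> f32 { self.side * self.side } }
impl Area for Circle { fn area(&self) -> f32 { 3.14159 * self.radius * self.radius } }

// Enum alternative: all variants live inline in the Vec's buffer.
enum Shape {
    Square { side: f32 },
    Circle { radius: f32 },
}

impl Shape {
    fn area(&self) -> f32 {
        match self {
            Shape::Square { side } => side * side,
            Shape::Circle { radius } => 3.14159 * radius * radius,
        }
    }
}

fn main() {
    // Pointer-chasing layout: each element is a separate heap allocation.
    let boxed: Vec<Box<dyn Area>> = vec![
        Box::new(Square { side: 2.0 }),
        Box::new(Circle { radius: 1.0 }),
    ];
    // Cache-friendly layout: elements sit next to each other.
    let inline = vec![
        Shape::Square { side: 2.0 },
        Shape::Circle { radius: 1.0 },
    ];
    let a: f32 = boxed.iter().map(|s| s.area()).sum();
    let b: f32 = inline.iter().map(|s| s.area()).sum();
    assert_eq!(a, b); // same result, very different memory traffic
}
```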

  • Turun@feddit.de
    7 months ago

    It would be interesting to see if an iterator instead of a manual for loop would increase the performance of the base case.

    My guess is not, because the compiler should know they are equivalent, but it would be interesting to check anyway.

    • onlinepersona@programming.dev
      7 months ago

      Do you mean this for loop?

      for shape in &shapes {
        accum += shape.area();
      }
      

      That does use an iterator.

      for-in-loops, or to be more precise, iterator loops, are a simple syntactic sugar over a common practice within Rust, which is to loop over anything that implements IntoIterator until the iterator returned by .into_iter() returns None (or the loop body uses break).
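
      For comparison, here is the same sum written with explicit iterator adapters; the `Rect` type is made up for the sketch, standing in for whatever `shape.area()` is called on above. The compiler should treat the two forms as equivalent.

```rust
// Hypothetical Rect type standing in for the shapes in the loop above.
struct Rect { w: f32, h: f32 }

impl Rect {
    fn area(&self) -> f32 { self.w * self.h }
}

fn main() {
    let shapes = [Rect { w: 1.0, h: 2.0 }, Rect { w: 3.0, h: 4.0 }];

    // The for-loop form from the comment above:
    let mut accum = 0.0;
    for shape in &shapes {
        accum += shape.area();
    }

    // The adapter form; both desugar to the same IntoIterator machinery.
    let total: f32 = shapes.iter().map(|s| s.area()).sum();

    assert_eq!(accum, total);
}
```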


  • zweieuro@lemmy.world
    7 months ago

    Correct me if I am wrong, but isn’t “loop unrolling/unwinding” something that the C++ and Rust compilers do? Why doesn’t the loop here get unrolled?

    • Giooschi@lemmy.world
      7 months ago

      Loop unrolling is not really the speedup here; autovectorization is. Loop unrolling does often help with autovectorization, but it is not enough on its own, especially with floating point numbers. The accumulation operation you’re doing needs to be associative, and floating point addition is not associative (i.e. (x + y) + z is not always equal to x + (y + z)). Hence autovectorizing the code would change its semantics, and the compiler is not allowed to do that.
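
      A minimal demonstration of that non-associativity, with values chosen so the rounding is visible; the specific numbers are just one convenient example:

```rust
// f32 addition is not associative: grouping changes the rounded result,
// which is why the compiler cannot freely reorder the accumulation.
fn main() {
    let (x, y, z) = (1.0e8_f32, -1.0e8_f32, 1.0_f32);

    let left = (x + y) + z;  // (1e8 - 1e8) + 1 = 0 + 1 = 1
    let right = x + (y + z); // -1e8 + 1 rounds back to -1e8, so the 1 is lost

    assert_eq!(left, 1.0);
    assert_eq!(right, 0.0);
    assert_ne!(left, right);
}
```

      Vectorizing the sum would regroup the additions like the second expression does, so the compiler only does it when told the reordering is acceptable (e.g. with fast-math-style options).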