name                                            diff %  speedup 
 slice::sort_large_random                       -65.49%   x 2.90 
 slice::sort_large_strings                      -37.75%   x 1.61 
 slice::sort_medium_random                      -47.89%   x 1.92 
 slice::sort_small_random                        11.11%   x 0.90 
 slice::sort_unstable_large_random              -47.57%   x 1.91 
 slice::sort_unstable_large_strings             -25.19%   x 1.34 
 slice::sort_unstable_medium_random             -22.15%   x 1.28 
 slice::sort_unstable_small_random              -15.79%   x 1.19
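
(In the table above, a negative diff % means the new implementation is faster. `slice::sort` is the stable sort and `slice::sort_unstable` the in-place unstable one; as a rough illustration of the two APIs being measured, not the PR’s actual benchmark harness, here is a minimal sketch over pseudo-random u64 keys:)

```rust
// Minimal sketch of the two std APIs the benchmarks above exercise;
// the data generation is illustrative, not the PR's actual harness.
fn main() {
    // Pseudo-random, distinct u64 keys: multiplying by an odd constant
    // permutes the u64 space, so every key is unique.
    let mut unstable: Vec<u64> = (0..1_000_000u64)
        .map(|i| i.wrapping_mul(0x9E37_79B9_7F4A_7C15))
        .collect();
    let mut stable = unstable.clone();

    // Stable sort: keeps equal elements in their original order; may allocate.
    stable.sort();

    // Unstable sort: may reorder equal elements; in-place and typically faster.
    unstable.sort_unstable();

    // With distinct keys the two orderings coincide.
    assert_eq!(stable, unstable);
}
```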
  • arendjr@programming.dev · 4 months ago

    Yeah, it was the first line of the linked PR:

    > This PR replaces the sort implementations with tailor-made ones that strike a balance of run-time, compile-time and binary-size, yielding run-time and compile-time improvements.

    It was also repeated a few paragraphs later that the motivation for the changes was both runtime and compile-time improvements. So I’m a little bummed to hear the compile-time impact apparently wasn’t as good as the authors hoped. I’m not even sure I fully endorse the tradeoff, because it seems the gains, while major, only affect fairly select use cases, while the regressions affect everyone and hurt in an area that is already perceived as a pain point. But oh well, the total regression is still minor, so I guess we’ll live with it.

    • KillTheMule@programming.dev · 4 months ago

      > only affect very select use cases

      I did not read the whole conversation, but sorting seems like a very common use case (not mine, but it seems to me a lot of people sort data), so this looks like quite a broad improvement to me.

      > that is already perceived as a pain point

      Note, though, as mentioned in the issue, that the survey showed people still prioritize runtime performance over compilation performance in general, so this tradeoff seems warranted.

      > the total regression is still minor

      It’s not unheard of for regressions to be undone later on, so here’s hoping :)

      • arendjr@programming.dev · 4 months ago

        Yeah, sorting is definitely a common use case, but note it didn’t improve every sorting use case either (see sort_small_random in the table above). Anyway, even if I’m a bit skeptical, I trust that the Rust team doesn’t take these decisions lightly.

        But the thing that led to my original question was: if the compiler itself uses the std sort internally, there’s additional reason to hope for transitive performance benefits. So even if compiling the Rust compiler with this PR was slower, compiling again with the resulting compiler could be faster, since that compiler itself benefits from the faster sorting. So yeah, fingers crossed 🤞

        • KillTheMule@programming.dev · 4 months ago

          > transitive performance benefits

          I would have assumed the benchmark suite accounts for that, otherwise the results aren’t quite as meaningful, really. Which ties back to your second sentence: I certainly trust the Rust team more than myself on these things :)