Implement a perf testing benchmarking system #1163
Merged
mweststrate merged 1 commit into immerjs:main on Sep 12, 2025
Conversation
Coveralls: Pull Request Test Coverage Report for Build 17538736762
mweststrate approved these changes on Sep 12, 2025 and left a comment:

Ok, this is super cool. Thanks for bringing the state of the art!
🎉 This PR is included in version 10.2.0 🎉 The release is available on:
Your semantic-release bot 📦🚀
Per #1152, I've been investigating Immer performance across versions.
I originally wrote a perf benchmarking setup in https://github.com/markerikson/immer-perf-tests . Since then, I've iterated on it significantly, adding support for CPU profiles and sourcemaps, and logging more details.
This PR ports the benchmarking setup into a subfolder directly integrated into the Immer repo, which should make running benchmarks straightforward: run `yarn build-and-profile` in the `./perf-testing` folder to rebuild both the current Immer source and the benchmarking script, then run it with per-version, per-use-case relative comparisons and a generated CPU profile.

As with #1162, the work for this PR includes a lot of AI-generated code from Claude. This is a relatively new experiment for me, and I've done a lot of careful babysitting to keep it going in the right direction, and I've tried to review and check the results.
Changes for the benchmarking setup in this PR:
process.env.NODE_ENVchecks while running the benchmarks (which is what caused earlier versions of my benchmark to seem slower, as they were directly running theimmer.mjsartifacts in Node)immerProducersobject controls which versions actually get run in the benchmarksetUseStrictIterationoption in my upcoming perf optimizations branch. The script currently has logic set up to be able to callsetUseStrictIteration(false), but has the import of that function disabled until it's used with the perf optimizations branch to avoid "this import doesn't exist" errors--cpu-profgenerates a.cpuprofilefile that can be analyzed withread-cpuprofile.jsread-cpuprofile.jsuses sourcemaps for each library to properly identify functions and associated perf samples, and then logs information on the most expensive functions for each version