Comparative benchmarks #130
Are you thinking about comparing specific functions, e.g. the `clone` function documentation having a section that compares it with `clone` functions from other libraries, or something more general? Also, should this live in a standalone `.md` file, inside the README, or on a documentation page?
@MarlonPassos-git The idea you're describing might deserve its own discussion, but I am referring to performance benchmarks. That means adding …
@aleclarson Is it something like this you have in mind? (Apologies for the unflattering example 😅) https://gist.github.com/crishoj/a6396844f88e212e911893b49b5c54de
Hey @crishoj, thinking about it now, I would prefer all of the comparative benchmarks to be kept in one module, rather than in the function-specific benchmark files (which are intended to detect perf regressions, so they will run on any PR that modifies that particular function). Also, I wonder if we shouldn't leave Radash out of the comparisons, since it's unmaintained. 🤔
For me, it really doesn't make sense to keep benchmarks for libraries that are no longer maintained (Radash or Underscore). The ones that come to mind are:
Perhaps the bench helper could have an option to choose whether to run comparative implementations or only Radashi's. Tracking perf in CI would be great, something along the lines of https://github.com/benchmark-action/github-action-benchmark
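A minimal sketch of what such an opt-in could look like, assuming a vitest setup; the `comparativeBench` helper and the `BENCH_COMPARE` environment variable are hypothetical illustrations, not an existing Radashi API:

```ts
import { bench } from 'vitest'
import * as lodash from 'lodash'
import * as radashi from 'radashi'

// Hypothetical opt-in flag: comparative implementations only run
// when BENCH_COMPARE=1 is set, e.g. locally or in a dedicated CI job.
const COMPARE = process.env.BENCH_COMPARE === '1'

// Hypothetical helper: always benches the Radashi implementation,
// and benches the competing ones only when comparison is enabled.
function comparativeBench(
  name: string,
  impls: { radashi: () => void; lodash?: () => void },
) {
  bench(`radashi: ${name}`, impls.radashi)
  if (COMPARE && impls.lodash) {
    bench(`lodash: ${name}`, impls.lodash)
  }
}

comparativeBench('clamp', {
  radashi: () => {
    radashi.clamp(5, 0, 10)
  },
  lodash: () => {
    lodash.clamp(5, 0, 10)
  },
})
```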
The design philosophies of Ramda and Immutable are too different to warrant a comparison, I think. Might be worth comparing to es-toolkit though.
That might require more effort than it's worth. 🤔 Also, if we did put comparative benchmarks in the same file as normal benchmarks, we'd have to avoid assuming that lodash et al. are installed, because the template repository I'm working on doesn't have them as dependencies, which means copying comparative benchmarks into "your own Radashi" would be troublesome. (Note: I'll be writing a post about the template repository soon)
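One hedged way to handle that, sketched here under the assumption that the benchmarks run through vitest (this is not anything the thread settled on), is to load the comparison library dynamically and skip its benchmarks when it isn't installed:

```ts
import { bench, describe } from 'vitest'
import * as radashi from 'radashi'

// Try to load lodash; in a copy of the template repo that doesn't
// depend on it, the import fails and the comparison is skipped.
const lodash = await import('lodash').catch(() => undefined)

describe('clamp', () => {
  bench('radashi', () => {
    radashi.clamp(5, 0, 10)
  })
  if (lodash) {
    bench('lodash', () => {
      lodash.clamp(5, 0, 10)
    })
  }
})
```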
Quick update: here's my proposal for comparative benchmarks.
If anyone wants to tackle this, let me know or just assign yourself to this issue. Preferably, leave some time for feedback from others in the community, in case anyone has objections or ideas for improvement.
Ok, I can work on that. Before I start, I'd like to clarify what the standard would be. Should every benchmark we have today get a comparative version for Radashi? For example:

```ts
import { bench, describe } from 'vitest'
import * as lodash from 'lodash'
import * as radashi from 'radashi'

const comparativeLibs = [
  { name: 'radashi', lib: radashi },
  { name: 'lodash', lib: lodash },
]

describe.each(comparativeLibs)('function clamp in the library: $name', ({ lib }) => {
  bench('with number arguments', () => {
    lib.clamp(100, 0, 10)
    lib.clamp(0, 10, 100)
    lib.clamp(5, 0, 10)
  })
})
```

But for the function `max`:

```ts
import { bench, describe } from 'vitest'
import * as lodash from 'lodash'
import * as radashi from 'radashi'

const comparativeLibs = [
  { name: 'radashi', lib: radashi },
  { name: 'lodash', lib: lodash },
]

describe.each(comparativeLibs)('function max in the library: $name', ({ lib }) => {
  bench('with list of numbers', () => {
    const list = [5, 5, 10, 2]
    lib.max(list)
  })
  bench('with list of objects', () => {
    const list = [
      { game: 'a', score: 100 },
      { game: 'b', score: 200 },
      { game: 'c', score: 300 },
      { game: 'd', score: 400 },
      { game: 'e', score: 500 },
    ]
    // Note: lodash's max doesn't accept an iteratee (that's maxBy),
    // so a real comparison would need a per-library branch here.
    lib.max(list, x => x.score)
  })
})
```

Or, instead of repeating the `describe`, we can create multiple benches for each library. This way the output shows which library is faster:

```ts
import { bench, describe } from 'vitest'
import * as lodash from 'lodash'
import * as radashi from 'radashi'

const comparativeLibs = [
  { name: 'radashi', lib: radashi },
  { name: 'lodash', lib: lodash },
]

describe('clamp', () => {
  for (const { name, lib } of comparativeLibs) {
    bench(`${name}: with number arguments`, () => {
      lib.clamp(100, 0, 10)
      lib.clamp(0, 10, 100)
      lib.clamp(5, 0, 10)
    })
  }
})
```

```ts
import { bench, describe } from 'vitest'
import * as lodash from 'lodash'
import * as radashi from 'radashi'

const comparativeLibs = [
  { name: 'radashi', lib: radashi },
  { name: 'lodash', lib: lodash },
]

describe('max', () => {
  for (const { name, lib } of comparativeLibs) {
    bench(`${name}: with list of numbers`, () => {
      const list = [5, 5, 10, 2]
      lib.max(list)
    })
    bench(`${name}: with list of objects`, () => {
      const list = [
        { game: 'a', score: 100 },
        { game: 'b', score: 200 },
        { game: 'c', score: 300 },
        { game: 'd', score: 400 },
        { game: 'e', score: 500 },
      ]
      lib.max(list, x => x.score)
    })
  }
})
```
@MarlonPassos-git I think we'll want one …

^ Never mind on all that. I think a basic approach will do. So, to be clear, I'm thinking something like this:

```ts
import { bench, describe } from 'vitest'
import * as lodash from 'lodash'
import * as radashi from 'radashi'
import { isObject } from 'radashi'

const libs = { radashi, lodash } as const
type Library = (typeof libs)[keyof typeof libs]
type Benchmark = (_: Library) => void

const benchmarks: Partial<
  Record<keyof typeof radashi, Benchmark | Record<string, Benchmark>>
> = {
  dash: _ => {
    const input =
      'TestString123 with_MIXED_CASES, special!@#$%^&*()Characters, and numbers456'
    // Branch per library when the APIs differ.
    if (_ === lodash) {
      _.kebabCase(input)
    } else {
      _.dash(input)
    }
  },
  max: {
    'with numbers': _ => {
      const list = [5, 5, 10, 2]
      _.max(list)
    },
    'with objects': _ => {
      const list = [
        { game: 'a', score: 100 },
        { game: 'b', score: 200 },
        { game: 'c', score: 300 },
        { game: 'd', score: 400 },
        { game: 'e', score: 500 },
      ]
      // lodash would need maxBy for the iteratee form.
      _.max(list, x => x.score)
    },
  },
}

for (const [funcName, run] of Object.entries(benchmarks)) {
  describe(funcName, () => {
    if (isObject(run)) {
      const tests = Object.entries(run)
      for (const [testName, test] of tests) {
        for (const [libName, lib] of Object.entries(libs)) {
          bench(`${libName}: ${testName}`, () => test(lib))
        }
      }
    } else {
      for (const [libName, lib] of Object.entries(libs)) {
        bench(libName, () => run(lib))
      }
    }
  })
}
```

Also, I think we could hoist the test values with basic labels like …
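For illustration only, since the comment above cuts off before naming any labels: hoisting the test values might look like the sketch below, where the `fixtures` name and its keys are hypothetical.

```ts
// Hypothetical shared fixtures; the actual label names were
// never specified in the thread.
const fixtures = {
  numberList: [5, 5, 10, 2],
  objectList: [
    { game: 'a', score: 100 },
    { game: 'b', score: 200 },
  ],
  mixedCaseString:
    'TestString123 with_MIXED_CASES, special!@#$%^&*()Characters, and numbers456',
}

// Benchmarks would then reference the fixtures instead of
// inlining values, e.g.:
//   max: { 'with numbers': _ => _.max(fixtures.numberList) }
```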
@MarlonPassos-git You'll probably find this useful: https://gist.github.com/aleclarson/a7198339c0a68991cb6c94cf9d60fa29. It's the Lodash comparison data I've collected so far.
Although performance isn't the only way we're competing with Lodash, it'd be great to have perf comparisons with Lodash and other similar libraries wherever we cover the same use cases.
We don't need to compare ourselves with FP libraries, since we're not actually competing with them. We don't need to compare with Underscore, since it's legacy at this point. Any compared libraries should have 1K+ stars on GitHub (maybe more).