Concurrency support #9
base: master
Conversation
With this change it no longer starts a goroutine for each file; it starts fewer goroutines, and never more than the number of CPU cores on the system. This improves performance on single-core systems because it avoids creating unnecessary goroutines, while multi-core systems still benefit from the reduced number of goroutines.
Refactor tests to be easier to read.
Now, if you give it 10 files and tell it to split them into 3 parts, it no longer splits them into 2 parts. It splits them into 3 parts and adds the remaining files to the first part, so in this case the first part has one more file than the rest. This and some other cases are now covered by tests.
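For reference, here is a minimal sketch of a splitting helper with the behaviour described above (the remainder of an uneven split goes to the first part). The function name `splitArrayIntoParts` appears in the diff below, but this body is an assumption, not the PR's exact code:

```go
package main

import "fmt"

// splitArrayIntoParts divides files into at most numParts slices.
// The remainder after an even split is appended to the first part,
// so the first part may hold more files than the others.
func splitArrayIntoParts(files []string, numParts int) [][]string {
	if numParts <= 0 || len(files) == 0 {
		return nil
	}
	if numParts > len(files) {
		numParts = len(files)
	}
	size := len(files) / numParts
	remainder := len(files) % numParts

	parts := make([][]string, 0, numParts)
	// The first part takes the remainder on top of its even share.
	first := size + remainder
	parts = append(parts, files[:first])
	for i := first; i < len(files); i += size {
		parts = append(parts, files[i:i+size])
	}
	return parts
}

func main() {
	files := []string{"a", "b", "c", "d", "e", "f", "g", "h", "i", "j"}
	for i, part := range splitArrayIntoParts(files, 3) {
		fmt.Printf("part %d: %v\n", i, part)
	}
}
```

With 10 files and 3 parts this prints parts of size 4, 3, and 3, matching the case covered by the new tests.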
The name is now more descriptive and easier to understand.
cpuCount := runtime.NumCPU()
filesPerCore := splitArrayIntoParts(files, cpuCount)
go func() {
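The diff above only shows the setup, so here is a hedged sketch of how such a per-core fan-out typically looks: one goroutine per part, a `sync.WaitGroup` to wait for them, and results collected per part. It reuses `splitArrayIntoParts` from the sketch above; `processFile` is a placeholder for the tool's actual per-file work, not a function from the project:

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// processFile stands in for the real per-file analysis (placeholder).
func processFile(path string) string {
	return "report for " + path
}

func main() {
	files := []string{"a.go", "b.go", "c.go", "d.go", "e.go"}

	cpuCount := runtime.NumCPU()
	filesPerCore := splitArrayIntoParts(files, cpuCount)

	var wg sync.WaitGroup
	results := make([][]string, len(filesPerCore))

	// One goroutine per part, so the number of goroutines never
	// exceeds the number of CPU cores.
	for i, part := range filesPerCore {
		wg.Add(1)
		go func(i int, part []string) {
			defer wg.Done()
			for _, f := range part {
				results[i] = append(results[i], processFile(f))
			}
		}(i, part)
	}
	wg.Wait()

	for _, reports := range results {
		for _, r := range reports {
			fmt.Println(r)
		}
	}
}
```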
I think a simpler approach would be to just spawn a new goroutine for each file and add each slice of *reports to another slice guarded by a mutex. Then, when the WaitGroup is done(), iterate through the slice of all reports and dump that to stdout.
I don't know how much of an advantage the splitting gives us either as I usually let the Go runtime schedule goroutines for me. :)
Other than that I like the approach. I think if we pare it down to what I tried to describe, it'd be perfect.
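A minimal sketch of the approach the reviewer describes, assuming a hypothetical `processFile` helper that returns a slice of report strings (the project's real function and types may differ):

```go
package main

import (
	"fmt"
	"sync"
)

// processFile is a placeholder for the per-file analysis.
func processFile(path string) []string {
	return []string{"report for " + path}
}

func main() {
	files := []string{"a.go", "b.go", "c.go"}

	var (
		wg         sync.WaitGroup
		mu         sync.Mutex
		allReports [][]string
	)

	// One goroutine per file; each appends its reports to a shared
	// slice guarded by a mutex.
	for _, f := range files {
		wg.Add(1)
		go func(f string) {
			defer wg.Done()
			reports := processFile(f)
			mu.Lock()
			allReports = append(allReports, reports)
			mu.Unlock()
		}(f)
	}

	// Once every goroutine has called Done, dump everything to stdout.
	wg.Wait()
	for _, reports := range allReports {
		for _, r := range reports {
			fmt.Println(r)
		}
	}
}
```

This leaves goroutine scheduling entirely to the Go runtime, as the reviewer suggests, at the cost of one goroutine per file.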
This is my implementation of the concurrent version that I proposed in #8.
Feel free to criticize my work and to propose changes 😃