Undo background thread execution #1

Open. Wants to merge 2 commits into base: wait-free-MemoryMappedFileManager.
Conversation

SandishKumarHN (Author):

Undo background thread execution

leventov (Owner) left a comment:

@SandishKumarHN thank you for preparing a commit against my branch.

@@ -918,41 +909,10 @@ private void scheduleWriteTouch(final long bufferWatermarkOffsetInFile, final lo
// If we really have to cancel write touch tasks, it means that the logging rate is so high that
leventov (Owner):

The containing method, scheduleWriteTouch(), should be removed completely. Generally, the new version of the class shouldn't even mention "touch" or "pretouch"; all related methods should be removed.

@@ -496,13 +496,6 @@ private void doCloseAndSwitchRegionExclusively() {
// its return it is guaranteed that all concurrent writers have completed.
if (manager.closing) {
leventov (Owner):

unmapMappedBuffer() shouldn't be wrapped in if (manager.closing). Try to understand why the code with background execution needed this condition; once you understand that, you'll see why if (manager.closing) is not needed without background threading.
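As a generic illustration of the pattern only (not the actual MemoryMappedFileManager code, and not necessarily the reason in question): when the unmap can happen either on a background task or on the calling thread, a flag like closing decides which side owns the cleanup; once everything runs on the calling thread there is a single owner and the condition has nothing left to guard.

```java
import java.nio.MappedByteBuffer;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicBoolean;

// Generic sketch; class and method names are invented for illustration.
public class UnmapOwnershipSketch {
    private final AtomicBoolean closing = new AtomicBoolean(false);
    private final ExecutorService background = Executors.newSingleThreadExecutor();

    // With background execution, ownership of the unmap is split between the caller
    // and a pending task, so a closing check decides who performs it.
    void switchRegionWithBackgroundThread(MappedByteBuffer oldBuffer) {
        if (closing.get()) {
            unmap(oldBuffer);                            // caller has to do it itself
        } else {
            background.execute(() -> unmap(oldBuffer));  // a task will do it later
        }
    }

    // With everything on the calling thread there is a single owner,
    // and the call can simply be unconditional.
    void switchRegionSynchronously(MappedByteBuffer oldBuffer) {
        unmap(oldBuffer);
    }

    private void unmap(MappedByteBuffer buffer) {
        // Placeholder: actually unmapping a MappedByteBuffer requires
        // platform-specific/internal APIs and is out of scope for this sketch.
    }
}
```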

@@ -972,15 +932,10 @@ private void drainWriteTouchTasks() {
}

private void asyncCreateNextRegion(final Region currentRegion, final long newMappingOffsetInFile) {
leventov (Owner):

This method is no longer "async". You cannot just rename it to "createNextRegion" because there is already a method with that name, so I think this method should now be inlined at its sole usage site.
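A hypothetical before/after sketch of what inlining at the sole usage site means; the names and bodies below are invented and do not match the real MemoryMappedFileManager:

```java
// Hypothetical sketch; Region, createNextRegion and switchRegion are stand-ins.
class RegionSwitchSketch {

    static final class Region {
        final long offsetInFile;
        Region(final long offsetInFile) { this.offsetInFile = offsetInFile; }
    }

    private Region nextRegion;

    // Before: a thin wrapper whose only purpose was to schedule the work asynchronously.
    private void asyncCreateNextRegion(final Region current, final long newMappingOffsetInFile) {
        nextRegion = createNextRegion(current, newMappingOffsetInFile);
    }

    // After: the single caller performs the call directly and the wrapper is deleted.
    private void switchRegion(final Region current, final long newMappingOffsetInFile) {
        nextRegion = createNextRegion(current, newMappingOffsetInFile);
    }

    private Region createNextRegion(final Region current, final long newMappingOffsetInFile) {
        return new Region(newMappingOffsetInFile);
    }
}
```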

SandishKumarHN (Author):

I will name it syncCreateNextRegion for now.

leventov (Owner) left a comment:

Looks OK for testing.

@SandishKumarHN (Author):

@leventov
I was able to cherry-pick Log4j2AppenderLatencyBenchmark from your pull request and generate metrics for the old log4j2 version.

Below are the charts, hosted on GitHub Pages.
For 2K the new version looks good:
https://sandishkumarhn.github.io/log4j2-charts/2K/2KChart.html
For 20K the new version also looks good:
https://sandishkumarhn.github.io/log4j2-charts/20K/20KChart.html
For 100K the new version's performance is worse:
https://sandishkumarhn.github.io/log4j2-charts/100K/100KChart.html
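For context, the general shape of a JMH appender latency benchmark is sketched below. This is only an assumption about the structure; the real Log4j2AppenderLatencyBenchmark from the referenced pull request may be organized differently, and the 2K/20K/100K variants presumably correspond to different load levels.

```java
import java.util.concurrent.TimeUnit;

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

// Sketch of a generic appender latency benchmark; not the actual
// Log4j2AppenderLatencyBenchmark. Latency is sampled per logging call,
// which is the kind of distribution the charts above compare.
@State(Scope.Benchmark)
@BenchmarkMode(Mode.SampleTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class AppenderLatencySketch {

    private static final Logger LOGGER = LogManager.getLogger(AppenderLatencySketch.class);

    @Benchmark
    public void logOneEvent() {
        // One appended event per invocation; the configured appender
        // (e.g. the memory-mapped file appender) determines the cost.
        LOGGER.info("Benchmark message");
    }
}
```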

@leventov (Owner):

@SandishKumarHN

  1. Please post what machine, OS, and filesystem you are running this on.
  2. Comparing your rollback with the two commits you are mainly rolling back here, apache@d28bad8 and apache@756374e, there is an extra mappedBuffer.force() that you didn't remove: apache@756374e#diff-ed0f4dcd71b56e532376d1f3c2376d95R637. I don't know whether I added that force() for semantic or "performance" reasons. Could you please research this question? (See the sketch below for what force() does.)
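For reference, here is a minimal standalone illustration of what MappedByteBuffer.force() does. It does not answer why that particular call was added, only what the call means; the file name is made up for the example, and this is not Log4j2 code.

```java
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

// Illustration only. MappedByteBuffer.force() asks the OS to write the mapped
// region's dirty pages to the storage device (similar to msync), so it affects
// crash durability rather than what other threads or processes observe through
// the mapping.
public class ForceDemo {
    public static void main(String[] args) throws Exception {
        try (RandomAccessFile raf = new RandomAccessFile("demo.bin", "rw");
             FileChannel channel = raf.getChannel()) {
            MappedByteBuffer buffer = channel.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
            buffer.put(0, (byte) 42); // the write lands in the page cache immediately
            buffer.force();           // block until dirty pages have been written out
        }
    }
}
```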

@leventov (Owner):

Also, did you benchmark the log4j master from back then, or the current master?
