In my tests, I noticed a dependency between the size parameter and the type of target (directory vs. filename) that resulted in different test outcomes. The specific situation is as follows. When I used the following commands, the test results were consistent and normal:

However, when I used the following commands, the test results were abnormal:

From my observations, when using a filename the size parameter must be set to 1g (the target capacity) rather than 1024/16 = 64m, whereas when using a directory the size parameter must be set to 1024/16 = 64m rather than 1g (the target capacity). I would like to understand the reason for this dependency and whether there are any solutions or recommendations. I look forward to your response. Thank you for your assistance!
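(The commands themselves are not preserved in this copy. As a rough sketch only, with placeholder device/directory paths and I/O options, the two variants being described would look something like this:)

```sh
# Variant 1: filename= points at a single target, so all 16 jobs share it
# and size= covers the whole 1g region.
fio --name=writetest --filename=/dev/nvme0n1 --rw=write --bs=4k \
    --numjobs=16 --size=1g --direct=1 --group_reporting

# Variant 2: directory= makes each of the 16 jobs create its own file,
# so size= is per job: 1024m / 16 = 64m.
fio --name=writetest --directory=/mnt/test --rw=write --bs=4k \
    --numjobs=16 --size=64m --direct=1 --group_reporting
```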
Hi @sk93mo, I think the help for the size option explains this:

As you're using `numjobs=16`, normally each of the 16 jobs would create a separate file of size `size`, but when you set `filename` all 16 jobs refer to the same single file, so only a single file of size `size` is made. However, in your case `filename` refers to a block device, so no new file needs to be made and I/O will simply be restricted to the first `size` region of the existing "file"...

...but be careful! That "single file, multiple jobs" setup implies that you will be running 16 jobs over the same 1g region, which means they can interfere with each other, and because you're doing writes, certain writes may end up being thrown away. For example, job1 writes the first 4k, but then job2 immediately writes the same 4k before job1's 4k makes it all the way to non-volatile storage. Because something in your storage stack is doing an optimisation, job1's 4k may be thrown away and success returned, since it has already been replaced! To sidestep this issue you may want to arrange for different jobs to write to different regions of the same file by giving each job its own offset, as in the sketch below.
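(The specific option named at this point in the original reply is not preserved here; one way to do this is with fio's `offset` and `offset_increment` options. A rough sketch with placeholder values:)

```sh
# Each clone's real starting offset becomes offset + offset_increment * job_number,
# so 16 jobs with size=64m write 16 disjoint 64m regions (1g in total)
# instead of all rewriting the same blocks.
fio --name=nonoverlap --filename=/dev/nvme0n1 --rw=write --bs=4k \
    --numjobs=16 --size=64m --offset=0 --offset_increment=64m \
    --direct=1 --group_reporting
```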
Hi @sk93mo, TL;DR: I'm guessing what you are seeing is some sort of overhead from going through a filesystem, but it's hard to know for sure. At first glance, both of your jobs look fine. However, it's hard to diagnose why you're getting such varied results because you haven't shared information about the environment you're running these jobs in (see https://github.com/axboe/fio/blob/master/REPORTING-BUGS ). From your previous question we can guess that the first job is going to an NVMe block device, and we can see that the second job is going through a filesystem, but it would help to know the following:
Also, you left out most of the summary that fio produces, which contains useful information! If you can attach it as markdown-formatted text, that will help the reader; you can use Markdown code formatting for this. Any time you get strange results with fio, I strongly recommend cutting the fio job down to the smallest number of options that still reproduces the problem, to help reduce the scope of things to look at. I will offer some suggestions this time, so can you check whether removing all of the following still allows the issue to occur:
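(The list of options suggested at this point is not preserved in this copy. Separately, as an illustration only of how small a cut-down job can be:)

```sh
# A deliberately tiny job: one target, one I/O pattern, one block size, one size.
# Add options back one at a time to find which one changes the result.
fio --name=minimal --filename=/dev/nvme0n1 --rw=write --bs=4k --size=1g
```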
My wild guess is that both results that you see are "right" (but as previously stated, I can't know for sure). Using the following jobs:
my laptop here gives a speed of 53MiB/s for the "filesystem" job and 142MiB/s for the "blockdev" job. My filesystem is ext4 on top of a LUKS-encrypted block device, and doing the same job against …
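(The job files used for the comparison above were not preserved in this copy. Purely as an illustration of the shape of such a comparison, and not the actual jobs behind the numbers quoted, one might run something like:)

```sh
# "filesystem" style job: writes to a file created inside a mounted filesystem.
fio --name=fs-test --directory=/mnt/test --rw=write --bs=4k --size=1g --direct=1

# "blockdev" style job: writes straight to the block device, bypassing the filesystem.
fio --name=blk-test --filename=/dev/nvme0n1 --rw=write --bs=4k --size=1g --direct=1
```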