I wanted to check the read/write performance of my disk. I am running the command below to write to a file:
time dd if=/dev/zero of=/home/test.txt bs=2k count=32k;
which gives about 400MB/s
For checking the read performance, I executed the commands below, with and without the 'of' parameter. There is a huge difference between the results.
time dd if=/home/test.txt of=/dev/zero bs=2k (gives about 2.8GB/s)
time dd if=/home/test.txt bs=2k (9MB/s)
I read that "of=/dev/zero" is used to read data from some temp file while creating the file.
But why is it required when checking read performance, and why is there such a huge difference in speed with and without "of=/dev/zero"?
/dev/zero is a special file. Its contents come from a device driver: reads return an endless stream of NUL bytes, and all write operations on /dev/zero are guaranteed to succeed because the data is simply discarded. A bit more about that here and here.
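As a small illustration (not part of your benchmark), you can see both behaviours from the shell:

head -c 16 /dev/zero | od -c    # prints sixteen \0 bytes
echo hello > /dev/zero          # "succeeds", the data just disappears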
Without specifying of, dd prints to stdout. The data the terminal receives then has to be formatted and rendered, so the terminal you're using is very likely the bottleneck, not your drive. Also, if stands for input file and, likewise, of means output file.
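If you want to take the terminal out of the picture without involving /dev/zero, you can discard the output into /dev/null instead (a sketch of the same test, using the path from your question):

time dd if=/home/test.txt of=/dev/null bs=2k
time dd if=/home/test.txt bs=2k > /dev/null   # equivalent: redirect stdout

Both variants throw the data away without rendering it, so only the read side is being timed.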
Edit:
Writing to /dev/zero can have unexpected results. I wouldn't say this is an accurate way of measuring read performance.
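If you want a rough but more honest read number, one common approach (a sketch, assuming GNU dd on Linux and root access for the cache drop) is to flush the page cache first so the data actually comes from the disk, then discard the output:

sync
echo 3 > /proc/sys/vm/drop_caches    # drop cached file data (Linux-specific, needs root)
time dd if=/home/test.txt of=/dev/null bs=2k

Without the cache drop, a file you have just written is usually served from RAM, which is why figures like 2.8GB/s come out far above what the disk itself can sustain.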