Fast reading from an HDD on Linux - strange phenomenon

[Sorry for the confusion: the original post had "SSD" instead of "HDD" in the title, but I figured out that I had accidentally performed the tests on an HDD, because I was accessing the wrong mount point. On an SSD this phenomenon did not occur. It is still interesting that it happens on an HDD, though.]

I am using the code at the end of this post to read a given number of files of constant size in a loop. All files to be read exist, and reading is successful.

It is clear that varying the file size affects fMBPerSecond, because when reading files smaller than the page size, a whole page is still read. However, nNumberOfFiles affects fMBPerSecond as well, which is what I do not understand. Apparently it is not nNumberOfFiles itself that has an effect, but the product nNumberOfFiles * nFileSize.
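
For reference, the page size can be queried at run time; it is typically 4096 bytes on x86 Linux:

#include <cstdio>
#include <unistd.h>

int main()
{
    // sysconf(_SC_PAGESIZE) returns the page size in bytes, typically 4096
    printf("page size: %ld bytes\n", sysconf(_SC_PAGESIZE));
    return 0;
}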

But why should that product have an effect? The files are opened, read, and closed sequentially in a loop.

I tested with nFileSize = 65536. When choosing nNumberOfFiles = 10000 (or smaller) I get something around fMBPerSecond = 500 MB/s. With nNumberOfFiles = 20000 I get something around fMBPerSecond = 100 MB/s, which is a dramatic loss in performance.
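
To put numbers on that product: with nFileSize = 65536, reading 10000 files touches 10000 × 65536 bytes ≈ 655 MB in total and still runs at ~500 MB/s, while 20000 files touch ≈ 1.31 GB and drop to ~100 MB/s. So the slowdown sets in somewhere between roughly 0.65 GB and 1.3 GB of total data.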

Oh, and I should mention that I clear the disk cache before reading by executing:

sudo sync
sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'
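
For completeness, the same cache drop can also be done from inside a C++ test program; a minimal sketch, assuming it runs as root, just like the shell commands above:

#include <cstdio>
#include <fcntl.h>
#include <unistd.h>

// flush dirty pages to disk, then drop the page cache, dentries and inodes
// (the programmatic equivalent of the two shell commands above)
void DropCaches()
{
    sync();

    int f = open("/proc/sys/vm/drop_caches", O_WRONLY);

    if (f != -1)
    {
        if (write(f, "3", 1) != 1)
            printf("error: could not drop caches\n");

        close(f);
    }
}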

Any ideas about what is happening behind the scenes would be welcome.

Pedram

#include <cstdio>
#include <fcntl.h>
#include <unistd.h>

void Read(int nFileSize, int nNumberOfFiles)
{
    char szFilePath[4096];

    unsigned char *pBuffer = new unsigned char[nFileSize];

    Helpers::get_timer_value(true);

    for (int i = 0; i < nNumberOfFiles; i++)
    {
        snprintf(szFilePath, sizeof(szFilePath), "files/test_file_%.4i", i);

        int f = open(szFilePath, O_RDONLY);

        if (f != -1) // open() returns -1 on failure (0 is a valid descriptor)
        {
            // a short read (fewer bytes than requested) is treated as an error here
            if (read(f, pBuffer, (size_t) nFileSize) != (ssize_t) nFileSize)
                printf("error: could not read file '%s'\n", szFilePath);

            close(f);
        }
        else
        {
            printf("error: could not open file for reading '%s'\n", szFilePath);
        }
    }

    // Helpers::get_timer_value() returns elapsed microseconds (see below)
    const unsigned int t = Helpers::get_timer_value();

    const float fMilliseconds = float(t) / 1000.0f;
    const float fMillisecondsPerFile = fMilliseconds / float(nNumberOfFiles);
    const float fBytesPerSecond = 1000.0f / fMillisecondsPerFile * float(nFileSize);
    const float fMBPerSecond = fBytesPerSecond / 1024.0f / 1024.0f;

    printf("t = %.8f ms / %.8i bytes - %.8f MB/s\n", fMillisecondsPerFile,
        nFileSize, fMBPerSecond);

    delete [] pBuffer;
}
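
Note: Helpers::get_timer_value is not shown here. Judging from how it is used above (a call with true resets the timer, and a later call returns the elapsed time in microseconds), a stand-in could look like this; a reconstruction for completeness, not the original helper:

#include <time.h>

namespace Helpers
{
    // returns microseconds elapsed since the last call with bReset = true
    unsigned int get_timer_value(bool bReset = false)
    {
        static struct timespec s_tStart = { 0, 0 };

        struct timespec tNow;
        clock_gettime(CLOCK_MONOTONIC, &tNow);

        if (bReset)
        {
            s_tStart = tNow;
            return 0;
        }

        return (unsigned int) ((tNow.tv_sec - s_tStart.tv_sec) * 1000000L
            + (tNow.tv_nsec - s_tStart.tv_nsec) / 1000L);
    }
}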

1 Answer

Guntram Blohm:

There are several SSD models, especially the more expensive datacenter models, that combine an internal DRAM cache with the (slower) persistent NAND cells. As long as the data you read fits in the DRAM cache, you'll get faster responses.