#include <fstream>
#include <vector>
#include <algorithm>
#include <iterator>

using namespace std;

// Reads the whole file through a pair of stream-buffer iterators.
vector<char> f1()
{
    ifstream fin{ "input.txt", ios::binary };
    return
    {
        istreambuf_iterator<char>(fin),
        istreambuf_iterator<char>()
    };
}

// Reads the file in 1 KiB chunks and appends each chunk to the vector.
vector<char> f2()
{
    vector<char> coll;
    ifstream fin{ "input.txt", ios::binary };
    char buf[1024];
    while (fin.read(buf, sizeof(buf)))
    {
        copy(begin(buf), end(buf),
             back_inserter(coll));
    }
    // The final read fails once the file is exhausted; gcount() tells
    // how many characters it still delivered.
    copy(begin(buf), begin(buf) + fin.gcount(),
         back_inserter(coll));
    return coll;
}

int main()
{
    f1();
    f2();
}
Obviously, f1() is more concise than f2(), so I prefer f1() to f2(). However, I worry that f1() is less efficient than f2().

So, my question is: will the mainstream C++ compilers optimize f1() to make it as fast as f2()?
Update: I have used a 130 MB file to test in release mode (Visual Studio 2015 with Clang 3.8): f1() takes 1614 ms, while f2() takes 616 ms. f2() is faster than f1(). What a sad result!
I've checked your code on my side using MinGW 4.8.2. Out of curiosity I've added an additional function f3 that preallocates the whole buffer and fills it with a single read, roughly like this (the length query shown is just one way to obtain the file size; the essential line is the vector<char> coll(len) preallocation):
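// Drop-in addition to the program above (relies on its includes
// and using namespace std).
vector<char> f3()
{
    ifstream fin{ "input.txt", ios::binary };
    fin.seekg(0, ios::end);                            // jump to the end...
    const auto len = static_cast<size_t>(fin.tellg()); // ...to learn the size
    fin.seekg(0, ios::beg);
    vector<char> coll(len);                            // one allocation, no regrowth
    fin.read(coll.data(), static_cast<streamsize>(len));
    return coll;
}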
I've tested using a file ~90 MB long. For my platform the results were a bit different from yours; they were calculated as the mean of 10 consecutive file reads.
The f3 function takes the least time: at vector<char> coll(len); it has all the required memory allocated up front, so no further reallocations need to be done. As for back_inserter, it requires the type to have a push_back member function, which for vector performs a reallocation whenever the capacity is exceeded, as described in the docs.
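You can make that reallocation cost visible with a small sketch that counts how often push_back has to grow the buffer (the exact count depends on the implementation's growth factor):

#include <iostream>
#include <vector>

int main()
{
    std::vector<char> v;
    int growths = 0;
    for (int i = 0; i < 90 * 1024 * 1024; ++i)   // ~90 MB of single chars
    {
        if (v.size() == v.capacity())
            ++growths;                           // the next push_back reallocates
        v.push_back('x');
    }
    // Each reallocation copies every element accumulated so far.
    std::cout << growths << " reallocations\n";
}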
Among the f1 and f2 implementations the latter is slightly faster, although both grow the vector element by element: f2 through the back_inserter, and f1 through the range constructor, which receives input iterators and therefore cannot compute the size in advance. f2 is probably faster since it reads the file in chunks, which allows some buffering to take place.
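If you want to keep f1's conciseness while avoiding the regrowth, a hypothetical variant (call it f4, untested here) could reserve the file size first and then assign through the same iterators; it still pays the per-character iteration cost, so it is worth measuring against f3:

// Drop-in addition to the program above, like f3.
vector<char> f4()
{
    ifstream fin{ "input.txt", ios::binary };
    fin.seekg(0, ios::end);
    const auto len = static_cast<size_t>(fin.tellg());
    fin.seekg(0, ios::beg);
    vector<char> coll;
    coll.reserve(len);                       // single allocation up front
    coll.assign(istreambuf_iterator<char>(fin),
                istreambuf_iterator<char>());
    return coll;
}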