I made a module using Inline::C and noticed some unexpected performance discrepancies between running it on the macOS host and on a guest Linux VM. Looking into it, the cause was a difference in the default C compiler flags. On macOS they appear to be:
-fno-common -DPERL_DARWIN -fno-strict-aliasing -pipe -fstack-protector -I/usr/local/include -O3 -DVERSION=\"0.00\" -DXS_VERSION=\"0.00\"
versus on CentOS 7:
-fPIC -fwrapv -fno-strict-aliasing -pipe -fstack-protector-strong -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -D_FORTIFY_SOURCE=2 -O2 -DVERSION=\"0.00\" -DXS_VERSION=\"0.00\"
The main difference for my code is -O3 vs -O2, so I looked into the Inline docs and used:
use Inline (C => Config => ccflags => '-O3');
to explicitly specify -O3. Well, the result is that -O3 -O2 ends up on the compile line, in that order: specifying ccflags does not override the default, it just adds my flag before the defaults, and since the last -O option on the command line wins, it ends up having no effect.
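For reference, a stripped-down version of what I am doing (the square() function is just a placeholder for my real code):

use strict;
use warnings;

# attempt to force -O3; this is where "-O3 ... -O2" shows up on the compile line
use Inline (C => Config => ccflags => '-O3');

use Inline C => <<'END_C';
/* trivial stand-in for the real, numerically heavy function */
int square(int x) { return x * x; }
END_C

print square(7), "\n";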
Any idea where the default comes from and/or how to override it so I can specify the optimization level that I want?
It appears as though adding the optimize configuration option may do what you want. Here's a very short example with the output before and after adding optimize => '-O3'.
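A minimal sketch of the kind of script behind that output (the C function is just filler, and BUILD_NOISY is one way to get Inline to echo the compiler commands):

use strict;
use warnings;

use Inline (
    C => Config =>
    BUILD_NOISY => 1,        # dump the build/compile commands to the terminal
    optimize    => '-O3',    # comment this out to get the "Before" output
);

use Inline C => <<'END_C';
int square(int x) { return x * x; }
END_C

print square(6), "\n";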
Here's the output (snipped for brevity):
Before:
cc -c -I"/home/steve/scratch/inline" -fwrapv -fno-strict-aliasing -pipe -fstack-protector-strong -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -O2
After:
cc -c -I"/home/steve/scratch/inline" -fwrapv -fno-strict-aliasing -pipe -fstack-protector-strong -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -O3
...on Linux Mint 18.3.
The default comes from $Config{optimize}, which is stored as a read-only default at the time perl was compiled/built on the system.
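If you want to confirm it on a given machine, the value is easy to inspect from the shell, e.g.:

perl -V:optimize
perl -MConfig -e 'print $Config{optimize}, "\n"'

On the CentOS 7 perl above, that should show the -O2 that ends up appended to the compile command.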