I read that certain C++ DLLs can auto-offload to the Xeon Phi even if they weren't written specifically for manycore.
My question: can I interop a C++ DLL (built with, say, Intel Parallel Studio) from C#, hoping it will auto-offload?
I am also confused by your statement about DLLs being able to auto-offload to the Intel Xeon Phi coprocessor "even if they weren't made specifically to use manycore". To auto-offload, the library must contain code which uses offload directives and is compiled with a compiler that recognizes the offload directives. By default, offload directives result in the compiler generating both processor and coprocessor executable code.
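To make that concrete, here is a minimal sketch (the function and variable names are made up) of what a routine containing a Language Extensions for Offload directive looks like; the compiler emits both a host version and a coprocessor version of the marked block:

    // scale.cpp -- illustrative sketch only
    // The #pragma offload directive marks the block for execution on the
    // coprocessor; inout(...) describes the data copied in and back out.
    void scale(double *data, int n, double factor)
    {
        #pragma offload target(mic) inout(data : length(n))
        {
            for (int i = 0; i < n; ++i)
                data[i] *= factor;
        }
    }

As far as I know, if no coprocessor is available at run time the host version of the block runs instead.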
As to how to build a dynamic library containing offload code, let me quote from a reply Kevin Davis posted on software.intel.com:
For the Dynamic library, all source files containing Language Extensions for Offload must be compiled with the -fPIC compiler option. In the IDE, you add this option under the property setting: Configuration Properties > C/C++ > Code Generation [Intel C++] > Additional Options for MIC Offload Compiler .... When using the icl command line, each source file from the Dynamic library that contains offload code must be compiled using the /Qoffload-option to pass the -fPIC option.
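As a rough sketch of what that looks like on the icl command line (the exact option spelling here is from memory, so double-check it against your compiler version's documentation):

    rem Pass -fPIC to the coprocessor-side compilation of each offload source
    icl /c /Qoffload-option,mic,compiler,"-fPIC" offload_code.cpp
    rem Then build the DLL from the resulting object files
    icl /LD offload_code.obj /Feoffload_lib.dll

The important part, per the quote above, is that every source file containing offload code gets the option, not just the file that exports the DLL entry points.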
There is a limitation in that you cannot use _Cilk_offload directives in a DLL, but I think that is the only limitation.
As to calling a library containing offload directives from C#, there is a problem in Intel® Parallel Studio XE 2015 which is fixed in Update 1. But other than that one version of the compiler, you should be able to call dynamic libraries containing offload directives.
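For what it is worth, the native side of such an interop scenario would normally just be a plain exported function; something along these lines (names hypothetical), which a C# caller could then bind to with DllImport:

    // Sketch of an exported DLL entry point containing offload code.
    // The offload directive is invisible to the managed caller; from the
    // C# side this is an ordinary P/Invoke call into the DLL.
    extern "C" __declspec(dllexport)
    void square_all(double *data, int n)
    {
        #pragma offload target(mic) inout(data : length(n))
        {
            for (int i = 0; i < n; ++i)
                data[i] = data[i] * data[i];
        }
    }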
There are DLLs which can perform automatic offload to the Intel Xeon Phi, e.g. Intel MKL. These libraries can be called from either C++ or C# (see froth's answer).
Automatic offload in this context means that the library contains code which can offload its computations transparently to the user. A library may also contain code that decides whether to run the computation on the CPU or on the coprocessor(s).
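As an illustration, with Intel MKL the call site stays the same whether or not offload happens. Here is a sketch assuming the documented Automatic Offload controls (MKL_MIC_ENABLE=1 in the environment, or mkl_mic_enable() at run time); the matrix size is just an example:

    // Sketch: the dgemm call below is the ordinary host call.  With
    // Automatic Offload enabled, MKL itself decides at run time whether
    // the problem is large enough to be worth sending to the coprocessor.
    #include <vector>
    #include "mkl.h"

    int main()
    {
        mkl_mic_enable();                 // request Automatic Offload
        const int n = 4096;
        std::vector<double> a(n * n, 1.0), b(n * n, 2.0), c(n * n, 0.0);
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    n, n, n, 1.0, a.data(), n, b.data(), n,
                    0.0, c.data(), n);
        return 0;
    }

Small problems simply run on the host, so enabling Automatic Offload does not change the results, only where the work is done.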