I've recently been getting into SGX development. Right now I'm comparing the performance of the same program running inside and outside of SGX, and I've run into some puzzling results.
I have two programs whose execution speed, and whose speedup from compiler optimization, differ between running inside and outside of SGX.
Program 1: uses the Darknet library, a machine learning framework, to run a simple machine learning task.
With the compiler optimization option set to -O0, the computation takes 150 seconds inside SGX versus 100 seconds outside. With -O2, the runtimes are roughly equal at about 20 seconds:
| | Inside SGX | Outside SGX |
|---|---|---|
| -O0 | 150 s | 100 s |
| -O2 | 20 s | 20 s |
| Speedup (-O0 to -O2) | 7.5× | 5× |
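To make the measurement setup concrete, here is a minimal sketch of a timing harness for this kind of comparison (illustrative only; `ecall_run_darknet` and `run_darknet` are placeholder names, not my actual entry points):

```c
#include <stdio.h>
#include <time.h>
#include "sgx_urts.h"
#include "Enclave_u.h"      /* untrusted proxies generated by sgx_edger8r */

void run_darknet(void);      /* same workload built as a normal function */

static double elapsed_s(struct timespec a, struct timespec b) {
    return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
}

int main(void) {
    sgx_enclave_id_t eid;
    sgx_launch_token_t token = {0};
    int updated = 0;
    struct timespec t0, t1;

    if (sgx_create_enclave("enclave.signed.so", 1 /* debug */, &token,
                           &updated, &eid, NULL) != SGX_SUCCESS)
        return 1;

    /* A single ECALL wraps the whole computation, so the enclave
     * transition cost is negligible relative to runtimes of 20-150 s. */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    ecall_run_darknet(eid);  /* placeholder name for the real ECALL */
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("inside SGX:  %.1f s\n", elapsed_s(t0, t1));

    clock_gettime(CLOCK_MONOTONIC, &t0);
    run_darknet();           /* identical code, plain process */
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("outside SGX: %.1f s\n", elapsed_s(t0, t1));

    sgx_destroy_enclave(eid);
    return 0;
}
```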
Program 2: a logistic regression program that I wrote myself.
| | Inside SGX | Outside SGX |
|---|---|---|
| -O0 | 14173 ms | 11988 ms |
| -O2 | 3691 ms | 1166 ms |
| Speedup (-O0 to -O2) | 3.8× | 10.3× |
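For a sense of the workload, here is a simplified sketch of a typical logistic regression training loop of this kind (illustrative; my actual code differs in detail, and the names and shapes here are placeholders):

```c
#include <math.h>
#include <stddef.h>

/* Illustrative logistic-regression kernel (placeholder shapes/names).
 * X: n x d row-major samples, y: 0/1 labels, w: d weights. */
void lr_train(const float *X, const int *y, float *w,
              size_t n, size_t d, int epochs, float lr) {
    for (int e = 0; e < epochs; e++) {
        for (size_t i = 0; i < n; i++) {
            const float *xi = X + i * d;
            float z = 0.0f;
            for (size_t j = 0; j < d; j++)      /* dot product: hot loop */
                z += w[j] * xi[j];
            float p = 1.0f / (1.0f + expf(-z)); /* sigmoid */
            float g = p - (float)y[i];          /* per-sample gradient scale */
            for (size_t j = 0; j < d; j++)      /* weight update */
                w[j] -= lr * g * xi[j];
        }
    }
}
```

I mention this because the hot loops are simple dot products and updates that -O2 can auto-vectorize, which seems relevant to the third question below.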
I am looking for answers to the following questions:
- Why is there such a large speed difference between running the program inside and outside of SGX?
- Is compiler optimization applied the same way inside and outside of SGX?
- Why does changing the optimization level from -O0 to -O2 give the first program a larger speedup inside SGX than outside, while for the second program the situation is exactly reversed?