I learned that XNNPACK can significantly accelerate inference in general, so I tried enabling it in an Xcode/Objective-C project. Here's the implementation according to the official docs:
// enable XNNPACK
TFLInterpreterOptions *options = [[TFLInterpreterOptions alloc] init];
options.useXNNPACK = YES;

// init interpreter
NSError *error = nil;
NSString *modelPath = [[NSBundle mainBundle] pathForResource:@"mymodel" ofType:@"tflite"];
_modelInterpreter = [[TFLInterpreter alloc] initWithModelPath:modelPath
                                                      options:options
                                                        error:&error];
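For anyone debugging the same failure, it may help to inspect the returned `NSError` after initialization; the message sometimes points at the underlying cause. A minimal sketch, assuming the `error` variable declared above:

```objectivec
// Sketch: surface the failure reason when the interpreter fails to initialize.
if (_modelInterpreter == nil) {
    NSLog(@"TFLInterpreter init failed: %@ (domain: %@, code: %ld)",
          error.localizedDescription, error.domain, (long)error.code);
}
```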
With CocoaPods, I tried TensorFlowLite 2.3.0, 2.4.0, and the latest x.x.x-nightly version. In all cases, whenever XNNPACK is enabled, initialization fails. Internally it fails at this line in TFLInterpreter.mm:
_interpreter = TfLiteInterpreterCreate(model, cOptions);
Am I missing something, or is XNNPACK just not properly supported in the library yet?
Thanks!