I am trying to pass two input images to an MPSNNGraph.
However, even when I pass an array like [input1, input2] to "withSourceImages", only "input1" is used as an input image. Ideally, when creating a graph as below, I want "inputImage1" to receive "input1" and "inputImage2" to receive "input2".
When I actually ran it like this and inspected the result of "concat", I saw that "input1" had been concatenated, not "input2".
The graph looks like:
let inputImage1 = MPSNNImageNode(handle: nil)
let inputImage2 = MPSNNImageNode(handle: nil)
let scale = MPSNNBilinearScaleNode(source: inputImage1,
                                   outputSize: MTLSize(width: 256,
                                                       height: 256,
                                                       depth: 3))
let scale2 = MPSNNBilinearScaleNode(source: inputImage2,
                                    outputSize: MTLSize(width: 64,
                                                        height: 64,
                                                        depth: 3))
...
let concat = MPSNNConcatenationNode(sources: [conv3.resultImage, scale2.resultImage])
...
if let graph = MPSNNGraph(device: commandQueue.device,
resultImage: tanh.resultImage,
resultImageIsNeeded: true){
self.graph = graph
}
and part of the graph encoding looks like:
let input1 = MPSImage(texture: texture, ...)
let input2 = MPSImage(texture: texture2, ...)
graph.executeAsync(withSourceImages: [input1, input2]) { outputImage, error in
...
}
How do I pass the second input so that the graph receives it?
Could you give me some advice?
The code you provide actually looks correct (this is the usage described in the MPSNNGraph.h header).
I want to point out, though, that MPSNNConcatenationNode behaves in a pretty unique way: it always concatenates along the depth (channel) dimension. When concatenating images with different spatial dimensions, it respects the smaller one (i.e. 2x2x10 concat 4x4x15 -> 2x2x25). Maybe that is where your issue came from.
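To make the size rule concrete, here is a small sketch of the arithmetic (no Metal device needed). The ImageSize struct and concatOutputSize helper are hypothetical names I made up for illustration; they just model the rule stated above: the output takes the smaller spatial dimensions and sums the channel counts.

```swift
// Hypothetical model of MPSNNConcatenationNode's output-size rule:
// smaller spatial dims win, channels are summed.
struct ImageSize {
    var width: Int
    var height: Int
    var channels: Int
}

func concatOutputSize(_ a: ImageSize, _ b: ImageSize) -> ImageSize {
    ImageSize(width: min(a.width, b.width),
              height: min(a.height, b.height),
              channels: a.channels + b.channels)
}

// The 2x2x10 concat 4x4x15 example from above:
let out = concatOutputSize(ImageSize(width: 2, height: 2, channels: 10),
                           ImageSize(width: 4, height: 4, channels: 15))
print(out.width, out.height, out.channels) // 2 2 25
```

So if your conv3 output and scale2 output have different spatial sizes, the concatenated result will be cropped to the smaller of the two, which can make one branch's contribution hard to see when you inspect the output.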