I'm trying to fill an MPSImage or a 2D Metal texture with values manually and pass it to a convolutional network operation. The input to a CNN (Metal Performance Shaders) is usually an image (like this: https://developer.apple.com/library/content/samplecode/MPSCNNHelloWorld/Introduction/Intro.html#//apple_ref/doc/uid/TP40017482-Intro-DontLinkElementID_2), so in that case I could pass an UnsafePointer obtained from a CGContext, but this time I'd like to use a Float array as the input.
The following is what I tried. I converted the input array to NSData, but it didn't work.
// The type of inputFloatArrayOfArray is [[Float]]
var inputData = NSData(bytes: inputFloatArrayOfArray,
                       length: inputFloatArrayOfArray.count * inputFloatArrayOfArray[0].count * MemoryLayout<Float>.size)
network.srcImage.texture.replace(region: MTLRegion(origin: MTLOrigin(x: 0, y: 0, z: 0),
                                                   size: MTLSize(width: inputWidth, height: inputHeight, depth: 1)),
                                 mipmapLevel: 0,
                                 slice: 0,
                                 withBytes: &inputData,
                                 bytesPerRow: inputWidth * MemoryLayout<Float>.size,
                                 bytesPerImage: 0)
"Manually set a 1D Texture in Metal" may be related to my question (FYI: it says it deals "with 2D textures that load the texture by converting a loaded UIImage to raw bytes data, but creating a dummy UIImage felt like a hack for me."), but it doesn't seem to have a sufficient answer. I'm out of ideas on how to tackle this, so please let me know if you have any suggestions.
Thank you very much in advance.
If your tensor has <= 4 feature channels, then you just copy them in with feature channels 0-3 sitting where RGBA would be in the texture. If your tensor has more than that, then you use a texture of type MTLTextureType.type2DArray instead. Additional feature channels beyond the first four go consecutively into the same coordinate in later slices of the array.
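A sketch of how this copy might look for the <= 4 channel case (the helper names `flatten` and `fill` are my own, not MPS API; it assumes the texture uses a float pixel format such as `.r32Float` for one channel or `.rgba32Float` for four, matching the MPSImageDescriptor the image was created with). Note that a Swift `[[Float]]` is an array of array headers, not a contiguous block of floats, which is one reason the `NSData(bytes:)` approach above fails; the data must be flattened first:

```swift
import Foundation
#if canImport(MetalPerformanceShaders)
import MetalPerformanceShaders
#endif

// Flatten a row-major [[Float]] into one contiguous [Float].
// A [[Float]] is NOT contiguous in memory, so passing it directly
// to NSData(bytes:) copies array metadata rather than the values.
func flatten(_ rows: [[Float]]) -> [Float] {
    return rows.flatMap { $0 }
}

#if canImport(MetalPerformanceShaders)
// Hypothetical helper: copy a row-major Float buffer with `channels` <= 4
// feature channels into slice 0 of an MPSImage whose texture has a
// matching float pixel format (e.g. .r32Float for channels == 1).
func fill(image: MPSImage, with values: [Float],
          width: Int, height: Int, channels: Int) {
    values.withUnsafeBytes { buffer in
        image.texture.replace(
            region: MTLRegionMake2D(0, 0, width, height),
            mipmapLevel: 0,
            slice: 0,
            withBytes: buffer.baseAddress!,
            // Each texel holds `channels` floats, so the row stride is
            // width * channels * sizeof(Float), not width * sizeof(Float).
            bytesPerRow: width * channels * MemoryLayout<Float>.stride,
            bytesPerImage: 0)
    }
}
#endif
```

For more than four channels, the same `replace` call would be repeated per slice, with `bytesPerImage` and the `slice` index advancing through groups of four channels; the exact layout depends on how the source tensor interleaves its channels.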