This is really a conceptual question -- I've been working on this for some time now but haven't found a great way to solve my problem. I have a hexagonal image with hexagonal binning/pixels, with a grayscale intensity value for each pixel, and I'm trying to feed this into a deep autoencoder, but these seem to expect square or rectangular images (with square pixels). Note that the image is given as a 1-D array of intensities, with corresponding x, y coordinates for each pixel.
I've thought about and looked into a number of ideas for handling this situation, and I'm looking for feedback or information that can point me in the right direction.
- Converting the hexagonal image to a cube. This would work if we dealt only with full hexagon pixels, but the half-cells (i.e. half hexagons) make this impossible.
- Slicing the hexagon pixels into equal-sized pieces (half hexagons) so we can feed them in as "square" pixels. However, the orientation of the half-hexagons proves to be a challenge. I also thought about slicing the pixels into smaller triangular pixels, but then I wouldn't know how to deal with those.
- Adding white pixels (i.e. all 0s) and forcing the image to look like a rectangle or square. However, I wouldn't know the relationship between the square and hexagonal pixels.
- Transforming the hexagon pixels to square pixels, then adding white space so that the hexagonal image becomes a rectangular image. This seems the most promising, and I'm currently reading articles on how to do it, but I'm not sure how to properly handle the half-hexagon pixels. (A rough sketch of this padding idea is below the list.)
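For what it's worth, here is a minimal sketch of the zero-padding idea from the last two options: snap each hexagon centre to a row/column index and drop its intensity into a zero-filled rectangular array. The `hex_to_rect` name, the `hex_size` parameter, and the pointy-top row/column layout are my assumptions, not something given in the question (a flat-top grid would just swap the roles of x and y):

```python
import numpy as np

def hex_to_rect(x, y, values, hex_size):
    """Place hexagonal pixels into a zero-padded rectangular array.

    x, y     : 1-D arrays of hexagon centre coordinates
    values   : 1-D array of intensities (same length as x and y)
    hex_size : centre-to-vertex distance of the hexagons (assumed known)

    Assumes a pointy-top hexagonal layout: rows of centres spaced
    1.5 * hex_size apart, odd rows shifted by half a column width.
    """
    col_w = np.sqrt(3) * hex_size   # horizontal spacing between centres
    row_h = 1.5 * hex_size          # vertical spacing between rows

    rows = np.rint((y - y.min()) / row_h).astype(int)
    # undo the half-column shift on odd rows before snapping to a column index
    cols = np.rint((x - x.min() - (rows % 2) * col_w / 2) / col_w).astype(int)

    img = np.zeros((rows.max() + 1, cols.max() + 1), dtype=float)
    img[rows, cols] = values        # cells not covered by any hexagon stay 0 ("white")
    return img
```

Note that half-hexagons at the boundary would simply land in their own cell here; whether that distortion matters depends on how much of the signal sits at the edges.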
I guess the generalized question is:
how do I feed an image into a neural network when the image is both non-rectangular in shape and made of non-rectangular pixels?
Any thoughts would be appreciated. Thanks!
I don't see any problem with resampling it onto a regular square grid so that it becomes a proper 2D image. You would likely need to do that in any case to keep the network size reasonably small.
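Since you already have the data as a 1-D array with x, y coordinates, something along these lines should work. This is only a sketch: the function name, the 64x64 output size, and the choice of `scipy.interpolate.griddata` with linear interpolation are my own choices, not part of your setup.

```python
import numpy as np
from scipy.interpolate import griddata

def resample_to_square(x, y, values, out_shape=(64, 64)):
    """Resample scattered hexagonal-pixel centres onto a regular square grid.

    x, y, values : 1-D arrays describing the hex-binned image
    out_shape    : (rows, cols) of the target raster; pick whatever keeps
                   the autoencoder input size manageable
    """
    gx, gy = np.meshgrid(
        np.linspace(x.min(), x.max(), out_shape[1]),
        np.linspace(y.min(), y.max(), out_shape[0]),
    )
    # linear interpolation between hexagon centres; points outside the
    # hexagonal footprint get fill_value=0, which gives you the rectangular
    # zero padding for free
    return griddata((x, y), values, (gx, gy), method="linear", fill_value=0.0)
```

The half-hexagons at the edges are handled implicitly: their centres just act as ordinary sample points for the interpolation.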