I've been struggling with this problem for a while, so I'd appreciate it if somebody could help me out.

I'm trying to create a physical robot that solves a puzzle. An image of the completed puzzle is provided, along with a picture of the scattered pieces.

I've gotten OpenCV to find contours, single out each piece, and rotate each one so its edges are parallel to the horizontal axis (all "diamond" or "diagonal" pieces are rotated so they look like squares).

I've been using SIFT to match each small square piece against the complete picture.

Comparing an un-rotated square piece to the full picture

The problem is that the matched piece is not in the correct orientation. How would I go about finding out whether I need to rotate it by 90, 180, or 270 degrees?

Another problem is determining which of the nine regions (quadrant isn't the right word, "nonant"?) the piece belongs to. For example, this piece belongs in the bottom-right corner. Is there a function that takes the majority of the similar keypoints and classifies the piece into one of the nine regions?

Since SIFT is designed to be rotation-invariant, it's a good thing that the features still match even though the piece is rotated.

To determine how much rotation you need, you generally need your camera calibration parameters so you can unproject the picture into a top-down view. For your robot, it looks like the pictures are already top-down.

If this assumption holds, you can run a regression on the matched keypoint coordinates to figure out what angle you need to rotate your piece. If you also know that your pieces are always square, you only have 4 choices. In that case, you can try all 4 rotations and see which one is "closest" to the extracted patch (the region of the big picture that SIFT matched your piece to).
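A minimal sketch of the try-all-four idea, assuming you've already cut a same-sized grayscale `patch` out of the full picture at the SIFT match location (the function name and the plain sum-of-squared-differences score are my choices, not a fixed API):

```python
import numpy as np

def best_rotation(piece, patch):
    """Try all four 90-degree rotations of `piece` and return the
    (angle, rotated piece) pair closest to `patch` by SSD.

    Assumes `piece` and `patch` are same-shape grayscale arrays.
    """
    best_k, best_err = 0, np.inf
    for k in range(4):
        rotated = np.rot90(piece, k)  # counter-clockwise by k * 90 degrees
        err = np.sum((rotated.astype(float) - patch.astype(float)) ** 2)
        if err < best_err:
            best_k, best_err = k, err
    return best_k * 90, np.rot90(piece, best_k)
```

In practice you'd want a score that's robust to lighting differences, e.g. normalized cross-correlation (`cv2.matchTemplate` with `TM_CCOEFF_NORMED`) instead of raw SSD.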

Determining which region the matched piece belongs to can be done by looking at the coordinates of the matched points in the full picture. Which third of the image they fall into horizontally and vertically tells you which of the nine regions the piece belongs to.
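A sketch of that classification, assuming `matched_pts` is an (N, 2) array of (x, y) match coordinates on the big-picture side (e.g. the `.pt` values of the matched keypoints); the median keeps a few bad matches from skewing the result (the function and region names are illustrative):

```python
import numpy as np

def locate_region(matched_pts, img_w, img_h):
    """Classify SIFT match locations into one of nine regions
    of the full picture, using the median match coordinate."""
    x, y = np.median(np.asarray(matched_pts, dtype=float), axis=0)
    col = min(int(3 * x / img_w), 2)   # 0 = left, 1 = centre, 2 = right
    row = min(int(3 * y / img_h), 2)   # 0 = top, 1 = middle, 2 = bottom
    names = [["top-left", "top-centre", "top-right"],
             ["middle-left", "centre", "middle-right"],
             ["bottom-left", "bottom-centre", "bottom-right"]]
    return names[row][col]
```

The `min(..., 2)` clamp keeps points exactly on the right or bottom edge inside the grid.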