This is for a robot simulation that I'm currently working on. In the diagram provided, you'll see two co-ordinate frames, frame A and frame B, as well as a point p.
Co-ordinate frame A is the world frame, and frame B is the local frame for my robot (where the x-axis is the robot's heading direction, as per convention). The robot can rotate and drive around in the world.
My goal here is to take the point p, which is expressed in terms of frame A, and re-express it in terms of frame B.
The standard equations I would use to implement this are as follows:
```
point_in_frameB_x = point_in_frameA_x * cos(theta) - point_in_frameA_y * sin(theta) + t_x
point_in_frameB_y = point_in_frameA_x * sin(theta) + point_in_frameA_y * cos(theta) + t_y
```
where t_x and t_y make up the translation part of the transformation from frame B to frame A.
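For concreteness, here is a minimal, runnable Python version of those two lines exactly as written above (the function name transform_point is just my placeholder):

```python
import math

def transform_point(p_ax, p_ay, theta, t_x, t_y):
    # The two equations above, verbatim: rotate the frame-A coordinates
    # of the point by theta, then add the translation (t_x, t_y).
    p_bx = p_ax * math.cos(theta) - p_ay * math.sin(theta) + t_x
    p_by = p_ax * math.sin(theta) + p_ay * math.cos(theta) + t_y
    return p_bx, p_by
```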
However, there is a complication here that prevents me from getting my desired results:
Since the robot can rotate (its default pose has the heading pointing north, which corresponds to a rotation of 0 radians), I don't know how to define t_x and t_y in the above code. If the robot's heading direction (i.e. its x-axis) is parallel to the y-axis of frame A, the translation vector is different from the situation where the robot's heading direction is parallel to the x-axis of frame A.
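To make this concrete, here is a small numeric sketch; the robot position (2, 3), the point (5, 3), and the two headings are made-up values, and I'm assuming right-handed frames for the ground-truth arithmetic. Solving my equations above for the (t_x, t_y) they would need gives a different translation for each heading:

```python
import math

# Made-up numbers for illustration: the robot's origin at (2, 3) in
# frame A, and the point p at (5, 3) in frame A.
robot_x, robot_y = 2.0, 3.0
p_ax, p_ay = 5.0, 3.0

for heading in (0.0, math.pi / 2):  # facing along world x, then world y
    c, s = math.cos(heading), math.sin(heading)
    # Ground truth by geometry: shift p into the robot's origin, then
    # rotate by -heading to align with the robot's axes.
    dx, dy = p_ax - robot_x, p_ay - robot_y
    p_bx, p_by = c * dx + s * dy, -s * dx + c * dy
    # The (t_x, t_y) my equations above would need in order to
    # reproduce that result:
    t_x = p_bx - (p_ax * c - p_ay * s)
    t_y = p_by - (p_ax * s + p_ay * c)
    print(f"heading={heading:.2f} rad -> required t = ({t_x:.2f}, {t_y:.2f})")
```

This prints a required translation of (-2.00, -3.00) for a heading of 0 and (3.00, -8.00) for a heading of pi/2, which is exactly the problem.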
You'll notice that the transformation from frame A to frame B isn't straightforward. I'm using this convention simply because the simulator I'm implementing uses it for its image frame.
Any help would be greatly appreciated. Thanks in advance.
Follow the right-hand rule when assigning coordinate frames. In your picture, the axes of either frame A or frame B must be changed.
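For what it's worth, once both frames follow the same (right-handed) convention, the usual world-to-body transform is p_B = R(theta)^T (p_A - t), where t is the origin of frame B expressed in frame A and theta is the robot's heading. A minimal Python sketch under that assumption (the name world_to_body and its parameters are illustrative, not from any particular library):

```python
import math

def world_to_body(p_ax, p_ay, robot_x, robot_y, heading):
    """Express a frame-A (world) point in frame B (the robot's frame).

    (robot_x, robot_y) is the origin of frame B expressed in frame A,
    and heading is the rotation of frame B relative to frame A.
    For rotation matrices the transpose is the inverse, so applying
    R(heading)^T to the shifted point undoes the robot's rotation.
    """
    dx, dy = p_ax - robot_x, p_ay - robot_y
    c, s = math.cos(heading), math.sin(heading)
    return c * dx + s * dy, -s * dx + c * dy
```

For example, with the robot at (2, 3) facing along the world y-axis (heading = pi/2), world_to_body(5.0, 3.0, 2.0, 3.0, math.pi / 2) gives (0.0, -3.0): the point sits directly to the robot's right.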