I have a custom UIView with a UIPanGestureRecognizer for capturing points and velocities. I store these points and velocities in NSMutableArrays, then use the data to create UIBezierPaths, which are then added to an undo/redo history.
As the user moves their finger across the screen, the undo history continually gets new paths added to it, and each new path is drawn into an offscreen graphics context, clipped (to the size of the path) and then drawn on the UIView.
My problem: now that I am adding a pinch-to-zoom function and applying a scale and translation transform to the drawing view, any drawing done while zoomed in ends up in the wrong place (up and to the left of where your finger is on screen) and at the wrong size (smaller). You can't actually see what you are drawing until you undo and redo it. I suspect either the wrong rectangle of the offscreen context or screen is being updated, or the points of the paths stored in the undo history have a different origin/reference point because of the transforms (I don't know what to call it!).
This is my first iOS app (so there may be some daft errors) and I have used many different tutorials to get this far, but I am stuck on how the transforms affect the paths, and on how to make sure the paths get drawn in the right place on the offscreen context at the right scale. I have tried transforming the paths, transforming the points, and inverting the transform, but I just don't get it.
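For example, this is roughly the kind of thing I mean by inverting the transform (just a sketch of the idea, not code that works for me):

// apply the inverse of the drawing view's current transform to the point
// reported by the gesture recognizer
CGPoint point = [panGestureRecognizer locationInView:panGestureRecognizer.view];
CGAffineTransform inverse = CGAffineTransformInvert(self.transform);
CGPoint untransformed = CGPointApplyAffineTransform(point, inverse);
// I tried the same idea on whole paths with -[UIBezierPath applyTransform:]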
So here is the actual code (apologies for the quantity). I include what deals with capturing the points, making the paths and drawing to screen. By the time this runs, a pinch recognizer will have scaled up the drawing view and translated it to zoom in on the centre of the pinch.
In the ViewController I create the drawing view (VelocityDrawer) at the size of the whole screen and add gesture recognisers:
VelocityDrawer *slv = [[VelocityDrawer alloc] initWithFrame:CGRectMake(0,0,768,1024)];
slv.tag = 100;
drawingView = slv;
drawingView.delegate = self;
drawingView.currentPen = finePen;
Then in initWithFrame:(CGRect)frame in VelocityDrawer:
self.undoHistory = [[NSMutableArray alloc] init];
self.redoHistory = [[NSMutableArray alloc] init];
// create offscreen context
drawingContext = [self createOffscreenContext:frame.size];
UIPanGestureRecognizer *panGestureRecognizer = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(handlePanGesture:)];
panGestureRecognizer.maximumNumberOfTouches = 1;
[self addGestureRecognizer:panGestureRecognizer];
UITapGestureRecognizer *tapGestureRecognizer = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(handleTap:)];
tapGestureRecognizer.numberOfTapsRequired = 1;
tapGestureRecognizer.numberOfTouchesRequired = 1;
[self addGestureRecognizer:tapGestureRecognizer];
[self clearHistoryBitmaps];
The offscreen context is created like this:
- (CGContextRef)createOffscreenContext:(CGSize)size {
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGFloat scaleFactor = [[UIScreen mainScreen] scale];
    // CGBitmapContextCreate needs whole-number pixel dimensions,
    // otherwise you get an invalid context
    NSInteger sw = (NSInteger)(size.width * scaleFactor);
    NSInteger sh = (NSInteger)(size.height * scaleFactor);
    CGContextRef context = CGBitmapContextCreate(NULL,
                                                 sw,
                                                 sh,
                                                 8,
                                                 sw * 4,
                                                 colorSpace,
                                                 (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    // scale the CTM so drawing into the bitmap can be done in points
    CGContextScaleCTM(context, scaleFactor, scaleFactor);
    return context;
}
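For example, on a Retina iPad this gives:

// worked example, assuming [[UIScreen mainScreen] scale] == 2.0
// size  = 768 x 1024 points (the view's frame)
// sw/sh = 1536 x 2048 pixels (the bitmap)
// after CGContextScaleCTM(context, 2.0, 2.0), a path point drawn at
// (100, 100) in view points lands at pixel (200, 200) in the bitmap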
In handlePanGesture: I capture the points, calculate the size (width) of the line in extractSize (based on finger velocity), then add the info to an array:
if (panGestureRecognizer.state == UIGestureRecognizerStateBegan)
{
    CGPoint point = [panGestureRecognizer locationInView:panGestureRecognizer.view];
    CGPoint prev = [[points firstObject] pos];
    float size = currentPen.minWidth / 2;
    // Add point to array
    [self addPoint:point withSize:size];
    // Add empty group to history
    [undoHistory addObject:[[NSMutableArray alloc] init]];
}
if (panGestureRecognizer.state == UIGestureRecognizerStateChanged) {
    CGPoint point = [panGestureRecognizer locationInView:panGestureRecognizer.view];
    currentPoint = [(LinePoint *)[points lastObject] pos];
    float size = clampf([self extractSize:panGestureRecognizer], currentPen.minWidth, currentPen.maxWidth);
    [self addPoint:point withSize:size];
    NSMutableArray *pArr = [[NSMutableArray alloc] init];
    UIBezierPath *sizer = [[UIBezierPath alloc] init];
    // interpolate points to make a smooth variable-width line
    NSMutableArray *interPoints = [self calculateSmoothLinePoints];
    // code continues
I loop over interPoints (an array of interpolated points between the newest and previous points from the recognizer), creating the paths that will be drawn on screen:
// other code here
// loop starts
    // each iteration builds one closed segment of the variable-width line:
    // a quad curve along one edge, across to the other edge, a quad curve
    // back, and a line to close the shape
    CGMutablePathRef path = CGPathCreateMutable();
    CGPathMoveToPoint(path, NULL, mid1.x, mid1.y);
    CGPathAddQuadCurveToPoint(path, NULL, prevT1.x, prevT1.y, mid2.x, mid2.y);
    CGPathAddLineToPoint(path, NULL, mid2b.x, mid2b.y);
    CGPathAddQuadCurveToPoint(path, NULL, prevB1.x, prevB1.y, mid1b.x, mid1b.y);
    CGPathAddLineToPoint(path, NULL, mid1.x, mid1.y);
    UIBezierPath *aPath = [UIBezierPath bezierPath];
    aPath.CGPath = path;
    [sizer appendPath:aPath];
    [pArr addObject:aPath];
    // aPath.CGPath keeps its own copy, so release the CGPath each iteration
    CGPathRelease(path);
    // more code here
// loop ends
Once all the paths are added to pArr, I create a HistoryItem and populate it with the paths, line colour, line width etc.
HistoryItem *action = [[HistoryItem alloc] initWithPaths:pArr
andLineColour:self.lineColor
andLineWidth:self.lineWidth
andDrawMode:self.currentDrawMode
andScale:self.scale];
[self addAction:action];
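For reference, HistoryItem is just a simple container, something like this (trimmed down):

// trimmed-down version of HistoryItem (types simplified)
@interface HistoryItem : NSObject
@property (nonatomic, strong) NSArray *pathsToDraw;   // UIBezierPath objects
@property (nonatomic, strong) UIColor *lineColor;
@property (nonatomic, assign) CGFloat lineWidth;
@property (nonatomic, assign) NSInteger drawMode;
@property (nonatomic, assign) CGFloat scale;
- (instancetype)initWithPaths:(NSArray *)paths
                andLineColour:(UIColor *)lineColour
                 andLineWidth:(CGFloat)lineWidth
                  andDrawMode:(NSInteger)drawMode
                     andScale:(CGFloat)scale;
@end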
addAction adds a HistoryItem to the undo stack. Please note that I record self.scale but don't do anything with it. Then I get the bounding rectangle (drawBox) and call setNeedsDisplayInRect: with it:
CGRect drawBox = CGPathGetBoundingBox(sizer.CGPath);
//Pad bounding box to respect line width
drawBox.origin.x -= self.lineWidth * 1;
drawBox.origin.y -= self.lineWidth * 1;
drawBox.size.width += self.lineWidth * 2;
drawBox.size.height += self.lineWidth * 2;
[self setNeedsDisplayInRect:drawBox];
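(The padding is just a negative inset, so it could equally be written with CGRectInset:)

// equivalent padding: negative insets grow the rect by lineWidth on each side
CGRect drawBox = CGRectInset(CGPathGetBoundingBox(sizer.CGPath),
                             -self.lineWidth, -self.lineWidth);
[self setNeedsDisplayInRect:drawBox];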
When the gesture finishes, I add a round end to the line (code omitted).
Finally, the drawRect: method:
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    // make the offscreen context the current UIKit context
    UIGraphicsPushContext(drawingContext);
    HistoryItem *action = [[undoHistory lastObject] lastObject];
    if (currentDrawMode == DRAW || currentDrawMode == ERASE) {
        // draw only the most recent action into the offscreen context
        for (UIBezierPath *p in action.pathsToDraw) {
            p.lineWidth = 1;
            p.lineCapStyle = kCGLineCapRound;
            [action.lineColor setFill];
            [action.lineColor setStroke];
            [p fill];
            [p stroke];
        }
    }
    if (currentDrawMode == UNDO) {
        // clear the offscreen context and replay the whole remaining history
        CGContextClearRect(drawingContext, self.bounds);
        for (NSArray *actionGroup in undoHistory) {
            for (HistoryItem *undoAction in actionGroup) {
                for (UIBezierPath *p in undoAction.pathsToDraw) {
                    p.lineWidth = 1;
                    p.lineCapStyle = kCGLineCapRound;
                    [undoAction.lineColor setFill];
                    [undoAction.lineColor setStroke];
                    [p fill];
                    [p stroke];
                }
            }
        }
    }
    // similar code for redo omitted
Maybe a problem here with the frame/sizes?
    // Continuation of drawRect:
    UIGraphicsPopContext(); // balance the UIGraphicsPushContext above
    // copy the offscreen bitmap onto the view
    CGImageRef cgImage = CGBitmapContextCreateImage(drawingContext);
    CGContextClipToRect(context, rect);
    CGContextDrawImage(context, CGRectMake(0, 0, self.frame.size.width, self.frame.size.height), cgImage);
    CGImageRelease(cgImage);
    [super drawRect:rect];
}
You will need to apply the same transforms to your points when you redraw the line. The points you have stored in your array are relative to the canvas they were rendered on. I suggest you store the original frame and then scale the coordinates between the original frame and the new size on the next line render.
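Something along these lines, for example (a rough sketch; originalSize and currentSize are placeholders for whatever you store, not names from your code):

// rescale each stored path from the original canvas size to the size
// it is being rendered at now (originalSize / currentSize are placeholders)
CGFloat xScale = currentSize.width  / originalSize.width;
CGFloat yScale = currentSize.height / originalSize.height;
for (UIBezierPath *p in action.pathsToDraw) {
    UIBezierPath *scaled = [p copy];
    [scaled applyTransform:CGAffineTransformMakeScale(xScale, yScale)];
    [scaled fill];
    [scaled stroke];
}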