I have written an application using Core Data that collects data from a sensor over Wi-Fi. My data thread runs in the background, reading data off a socket and creating new Core Data entities. The problem is that I'm getting about 27 updates a second, and when the thread saves each object as soon as it is received, my UI starts to lag and the program becomes unusable. I'm not sure whether this is due to a design flaw in my code or to the nature of how Core Data works.
I'd like to know some options for doing saves in the background without impacting my UI or any other application code. I had thought there might be a way to fire off a batch save of, say, 500 records every few seconds on another background thread, but I wasn't sure a) whether that is possible and b) how to implement it.
I'm actively creating objects with calls to:
[NSEntityDescription insertNewObjectForEntityForName:@"RPYL" inManagedObjectContext:managedObjectContext];
And once I'm done with the data collection I make a call to:
[managedObjectContext save:&error];
You cannot save in the background with no effect on your main thread: the SQLite developers are of the antiquated opinion that "Threads are evil. Avoid them." There is therefore a lot of mutual exclusion involved in using SQLite. While a save is in progress, the persistent store is locked, regardless of where the save originates. Anything else that needs to access the store during that time must wait. If the save touches indexed columns, then it involves taking every item in the affected tables and sorting them according to the indexed column, as SQLite implements indexing by binary search. So the cost is a function of how much you're inserting plus how much you already have in the store.
First of all, try intelligent faulting when doing the thread-confinement hop over onto the main queue: use an NSFetchRequest and explicitly say you want things pre-faulted. Then you'll be able to access them without any further trips to the store, saving you from hitting the mutex. Try running with -com.apple.CoreData.SQLDebug 1 set for your target to see quite how often you are going to the store: if it's not something you've optimised for, then I guarantee it'll be a huge amount more than you thought.

General tips also apply: schedule against the runloop, not directly onto the main queue, if you have work you want to do but not while the user is interacting with the app. The runloop switches into and out of tracking mode, so you can schedule your work for the default mode and it'll automatically avoid running while the UI is busy under user control.
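A minimal sketch of the pre-faulting idea, assuming a main-queue context named mainContext (the entity name RPYL comes from the question; the commented-out relationship key path is purely hypothetical):

```objc
// Fetch fully-materialised objects so that later property access
// doesn't fault back to the (possibly locked) persistent store.
NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:@"RPYL"];
request.returnsObjectsAsFaults = NO;  // pre-fault the objects themselves
// request.relationshipKeyPathsForPrefetching = @[@"readings"]; // hypothetical relationship

NSError *error = nil;
NSArray *results = [mainContext executeFetchRequest:request error:&error];
```

To see the resulting SQL traffic, add -com.apple.CoreData.SQLDebug 1 to the scheme's "Arguments Passed On Launch".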
If that doesn't resolve the problem and your records are intended to be immutable upon receipt with no complicated queries then consider using Core Data as the permanent store but an immutable non-managed object of your own design for runtime use. Build those in the background and pass them forwards.
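As an illustration of such a non-managed object, a minimal immutable class built on the background thread from incoming sensor data and handed forwards to the UI (all names here are illustrative, not from the question):

```objc
// Plain immutable object: safe to share between threads once constructed,
// and reading its properties never touches the persistent store.
@interface SensorReading : NSObject
@property (nonatomic, readonly) double value;
@property (nonatomic, readonly) NSDate *timestamp;
- (instancetype)initWithValue:(double)value timestamp:(NSDate *)timestamp;
@end

@implementation SensorReading
- (instancetype)initWithValue:(double)value timestamp:(NSDate *)timestamp {
    if ((self = [super init])) {
        _value = value;
        _timestamp = timestamp;
    }
    return self;
}
@end
```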
EDIT: batching your saves is easy enough: just don't call save too frequently. One solution is to keep a BOOL indicating whether a save is scheduled. When you want to save, if one isn't already scheduled, schedule one for a second from now with dispatch_after; if one is scheduled, do nothing. If you've data constantly coming in, then more complicated schemes (e.g., when did I last save, so how long until I can save again?) aren't going to gain you much.
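A minimal sketch of that scheme, assuming a saveScheduled property and a managedObjectContext on self (both names are illustrative):

```objc
// Called every time new data arrives; coalesces saves to at most one per second.
- (void)scheduleSaveIfNeeded {
    if (self.saveScheduled) return;  // a save is already pending; nothing to do
    self.saveScheduled = YES;

    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(1.0 * NSEC_PER_SEC)),
                   dispatch_get_main_queue(), ^{
        self.saveScheduled = NO;
        NSError *error = nil;
        if (![self.managedObjectContext save:&error]) {
            NSLog(@"Save failed: %@", error);
        }
    });
}
```

Because both the flag check and the deferred block run on the main queue, no extra locking is needed around saveScheduled.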