Reputation: 3
I have an iPad application which continuously records the finger position (up to 30 times per second) while it is on screen and saves a 'Sample' object to Core Data for each dataset recorded. After a while, maybe two or three minutes, this process, or at least the user feedback, gets really slow. If I shut down the app and relaunch it, it is slow right from the start, but deleting the persistent store restores the original performance. Therefore I think the problem has to do with the data storage.
I have tried to find the problem with the Time Profiler, but most of the time is consumed by system libraries and non-Objective-C code. If I filter those out of the profile, the most time-consuming remaining method is the initialization of the sample object:
@implementation Sample (Create)

+ (Sample *)sampleWithTime:(double)time
                    xValue:(double)x
                    yValue:(double)y
                 eventType:(EventType)event
               chillStatus:(BOOL)chill
               participant:(Participant *)whoListened
                     track:(Track *)whichTrack
    inManagedObjectContext:(NSManagedObjectContext *)context
{
    Sample *sample;
    if (!sample) {
        sample = [NSEntityDescription insertNewObjectForEntityForName:@"Sample"
                                               inManagedObjectContext:context];
        sample.time = time;
        sample.x = x;
        sample.y = y;
        sample.event = event;
        sample.chillStatus = chill;
        sample.whichTrack = whichTrack;
        sample.whoListened = whoListened;
    }
    return sample;
}
sample.whichTrack = whichTrack; sets up the relationship to the Track object (a managed object subclass referring to the music that is playing) passed to the method. It is a one-to-many relationship in the sense that there is only one track per sample, but lots of samples per track. This line consumes 87% of the time of the whole method, even though the next line does exactly the same thing and needs almost nothing in comparison.
Does it make sense to search for the problem here? Can the database get that much slower because the set of objects in the relationship becomes bigger and has to be copied to add an object, or something like that? Is there anything I can do to improve the performance? The database file does not get big at all; it is still less than 1 MB.
Upvotes: 0
Views: 347
Reputation: 6011
It makes a lot of sense to look for the problem there.
I assume you set up your whichTrack (to-one) relationship with an inverse samples (to-many) relationship on the Track object.
You take 30 Samples/sec, so 3 minutes of sampling on the same Track leaves 5400 objects in the to-many relationship. This relationship is maintained on both ends, which means that whenever you insert a new sample, the entire set of objects (at least their faults) must be fetched from the store (if not already faulted).
If you save after each new Sample and release your context (or reset/refresh the Track object), the next time you access the Track all existing items in the to-many relationship will have to be re-fetched.
I would first try removing the inverse relationship (the to-many side on the Track entity) and see how it behaves.
Watch out: this eliminates any cascading deletion of Sample objects when a Track is deleted.
If you still need cascading upon Track deletion, you can add it yourself by giving your Track entity a prepareForDeletion implementation.
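A minimal sketch of such a prepareForDeletion (assuming the inverse was removed and Sample keeps its to-one whichTrack relationship; entity and relationship names are taken from your question) could look like this:

@implementation Track (Deletion)

- (void)prepareForDeletion
{
    [super prepareForDeletion];

    // Without an inverse relationship there is no automatic cascade,
    // so fetch the related Sample objects and delete them by hand.
    NSFetchRequest *request = [[NSFetchRequest alloc] init];
    request.entity = [NSEntityDescription entityForName:@"Sample"
                                 inManagedObjectContext:self.managedObjectContext];
    request.predicate = [NSPredicate predicateWithFormat:@"whichTrack == %@", self];
    request.includesPropertyValues = NO; // we only delete, no need for attribute data

    NSError *error = nil;
    NSArray *samples = [self.managedObjectContext executeFetchRequest:request error:&error];
    for (Sample *sample in samples) {
        [self.managedObjectContext deleteObject:sample];
    }
}

@end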
Another, more elegant, solution would be to shard your samples into SampleContainer objects.
A Track object would have a to-many relationship to SampleContainer, and each container would be limited to 'N' samples.
When a container is full, add a new container to the Track.
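A rough sketch of that idea, assuming a hypothetical SampleContainer entity with a samples to-many relationship, a track relationship back to the Track, and a currentContainer convenience relationship on Track (all of these names and the limit are made up for illustration):

static const NSUInteger kSamplesPerContainer = 500; // pick 'N' to taste

@implementation SampleContainer (Create)

+ (SampleContainer *)containerForTrack:(Track *)track
                inManagedObjectContext:(NSManagedObjectContext *)context
{
    // Reuse the track's current container while it still has room;
    // its sample set stays small, so faulting it in is cheap.
    SampleContainer *container = track.currentContainer;
    if (container == nil || container.samples.count >= kSamplesPerContainer) {
        // Start a fresh container so no single to-many set grows unbounded.
        container = [NSEntityDescription insertNewObjectForEntityForName:@"SampleContainer"
                                                  inManagedObjectContext:context];
        container.track = track;
        track.currentContainer = container;
    }
    return container;
}

@end

New samples would then be attached to the container (sample.container = container) instead of directly to the Track, so the set that has to be maintained on each insert never grows beyond 'N'.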
In addition to all that:
You could save only every 'T' time interval. This would further reduce store access (saves), but it might not suit you, as you could lose Samples if the user terminates the application between two saves (you can always save before going to background).
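A minimal sketch of such a timed save, for example in the object that owns the recording context (the 5-second interval, the saveTimer property and the method names are just placeholders):

- (void)startPeriodicSaves
{
    // Save every few seconds instead of after every single sample.
    self.saveTimer = [NSTimer scheduledTimerWithTimeInterval:5.0
                                                      target:self
                                                    selector:@selector(saveTimerFired:)
                                                    userInfo:nil
                                                     repeats:YES];

    // Also save when the app goes to background so pending samples are not lost.
    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(saveTimerFired:)
                                                 name:UIApplicationDidEnterBackgroundNotification
                                               object:nil];
}

- (void)saveTimerFired:(id)sender
{
    NSError *error = nil;
    if ([self.managedObjectContext hasChanges] && ![self.managedObjectContext save:&error]) {
        NSLog(@"Core Data save failed: %@", error);
    }
}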
Upvotes: 1