Core Data Size Limitations/Performance Issues

I’ve been playing with Core Data for a few weeks now, and I know it can handle a lot of data. I’m working on an app that has about 13,000 items in one of its Core Data entities, and the app is really slow when you scroll the table. I checked to make sure I was reusing the cells, and I am. I know one option is to not pull back all 13,000 rows in one array, but I want to let users look through all of them and filter them with the search function. Are there other ways to resolve this?

Are you blocking your UI thread for too long?
Can you get the filtering done at the source?

So I’m pulling all 13k rows into an array, which is the data source for my table. The cells are reused, so only about five exist at a time, dequeued with dequeueReusableCellWithIdentifier. When the user scrolls through the rows it’s very slow and choppy. The initial load of the 13k rows doesn’t take too long, but the scrolling performance is not acceptable.
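For reference, the setup described above boils down to something like this (a minimal Swift sketch; the “Item” entity, its “name” attribute, and the class name are placeholders, not from the original post):

```swift
import UIKit
import CoreData

// Sketch of the pattern described above: one big fetch into an array,
// with the table reusing a handful of cells while scrolling.
class ItemListViewController: UITableViewController {
    var context: NSManagedObjectContext!   // injected from whoever owns the Core Data stack
    var items: [NSManagedObject] = []

    override func viewDidLoad() {
        super.viewDidLoad()
        tableView.register(UITableViewCell.self, forCellReuseIdentifier: "Cell")

        let request = NSFetchRequest<NSManagedObject>(entityName: "Item")
        request.sortDescriptors = [NSSortDescriptor(key: "name", ascending: true)]
        // Pulls every matching row into memory at once (the pattern under discussion).
        items = (try? context.fetch(request)) ?? []
    }

    override func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
        return items.count
    }

    override func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
        // Cells are reused, so only a handful ever exist at once.
        let cell = tableView.dequeueReusableCell(withIdentifier: "Cell", for: indexPath)
        cell.textLabel?.text = items[indexPath.row].value(forKey: "name") as? String
        return cell
    }
}
```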

What platform are you on?
13K rows. How big is a row of data?

Possible things to try:

If your data is static, try bypassing Core Data. Keep your data in a file, build a list from it, and use that list to feed your table’s data source.
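Roughly, that could look like this (a sketch, assuming the data ships as a JSON file in the app bundle; `items.json` and the `Item` struct are placeholder names):

```swift
import Foundation

// Sketch of the "skip Core Data for static data" idea: decode a bundled file
// once and use the resulting array as the table's data source.
struct Item: Codable {
    let name: String
}

func loadStaticItems() -> [Item] {
    guard let url = Bundle.main.url(forResource: "items", withExtension: "json"),
          let data = try? Data(contentsOf: url) else {
        return []
    }
    return (try? JSONDecoder().decode([Item].self, from: data)) ?? []
}
```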

If you know how to use GCD (Grand Central Dispatch), try separating your GUI thread from your data source and keep the number of rows in your table small. This is where the programming starts to get interesting.
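The GCD version of that, very roughly, is “load on a background queue, update on the main queue” (a sketch; note that Core Data managed objects are not safe to pass between queues, so this pattern suits plain value types like the hypothetical `Item` struct above, or a separate background context):

```swift
import UIKit

// Sketch of the GCD suggestion: keep the heavy loading/filtering off the main
// thread, then hand the finished array back to the table on the main thread.
func reloadOffMainThread<T>(loadItems: @escaping () -> [T],
                            into tableView: UITableView,
                            assign: @escaping ([T]) -> Void) {
    DispatchQueue.global(qos: .userInitiated).async {
        let items = loadItems()        // slow work stays off the main thread
        DispatchQueue.main.async {
            assign(items)              // swap in the new data source on the main thread
            tableView.reloadData()
        }
    }
}
```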

I don’t think the delay is in loading the data into an array; I think the delay is in getting the data out of the array for the table. I’m a newbie, so GCD might be a little over my head at this point, but thanks for the suggestion! :slight_smile: I think my only choice here is to split the rows into smaller chunks (A-L and M-Z) so the scrolling is smoother and not so many rows are returned in the initial array. As for the size of the rows, they have 19 fields, of which 10 are integers, 2 are booleans, and the rest are strings.
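For what it’s worth, that A-L / M-Z chunking can be done with a predicate on the fetch rather than by splitting the data itself (a sketch; “Item” and “name” are placeholder names):

```swift
import CoreData

// Sketch of the chunking idea: fetch only one alphabetical range at a time
// instead of all 13k rows.
func fetchChunk(from start: String, to end: String,
                in context: NSManagedObjectContext) throws -> [NSManagedObject] {
    let request = NSFetchRequest<NSManagedObject>(entityName: "Item")
    // e.g. from "A" to "M" returns the A-L chunk.
    request.predicate = NSPredicate(format: "name >= %@ AND name < %@", start, end)
    request.sortDescriptors = [NSSortDescriptor(key: "name", ascending: true)]
    return try context.fetch(request)
}
```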

Good thinking. Please let us know how it turns out.

No need to pull in all 13k rows! Just set the fetch request’s batch size (fetchBatchSize) to something like 100 rows. Core Data will fetch the items in 100-row blocks and hand back faults (lightweight placeholder objects) for the rest of the rows matched by your predicated fetch. That way it has already done the heavy lifting of finding where everything is using your predicate, but hasn’t done all the disk I/O required to pull it in. When you ask for the next batch, or for some random entity in the faulted array, Core Data already knows exactly where that entity lives on disk and can just go and pull it in.
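A minimal sketch of that (the entity and attribute names are placeholders):

```swift
import CoreData

// Sketch of the batching suggestion: the fetch still matches all 13k rows,
// but Core Data resolves only the object IDs up front and pulls the actual
// row data from disk 100 rows at a time, as the table touches them.
func makeBatchedFetchRequest() -> NSFetchRequest<NSManagedObject> {
    let request = NSFetchRequest<NSManagedObject>(entityName: "Item")
    request.sortDescriptors = [NSSortDescriptor(key: "name", ascending: true)]
    request.fetchBatchSize = 100
    return request
}
```

The array that comes back from executing that request still looks like one big result set to the table, but the heavy disk I/O is deferred until each 100-row batch is actually touched.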

I was at a table at WWDC when someone else complained about how slowly his app was performing. Even the Apple engineer didn’t know about faults and setting the batch size in Core Data. He said he was accessing things randomly based on generic predicates, and I said: so what? I spent about five minutes, with the Apple folks looking on, setting the batch size, and made no changes at all to his base code or to how he accessed the data. His app quintupled in speed!

It’s amazing how segregated information is inside Apple. This was an Instruments lab, and the people there knew Instruments, but they didn’t have a clue how Core Data actually worked; they just kept trying to show him how to use Instruments. He didn’t want to know that; he wanted to know why his app was so painfully slow!

George

Have you tried using the profiling tools (Instruments) to find out where the holdup is?