I went in a different direction, limiting myself to what the book has shown us so far. Needing save/load and undo suggests going with Core Data, which gives those for free. That made the next task figuring out how to model a bunch of ovals.
I skipped that and went for the view next. It holds an array of CGRect values, which it draws as ovals on its canvas (after checking that each oval would be visible). I later added a CGRect? property for the current trial oval, which is drawn slightly differently; the trial oval is usually nil. (Changing either sets the needs-display flag, as do the properties for background and foreground colors. Those are @IBInspectable through Interface Builder, and the whole class is @IBDesignable.)
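The view's storage can be sketched without any AppKit at all. This is a hypothetical, Foundation-only reduction (the type and member names are mine, not from the project): an array of finished ovals plus an optional trial oval, with the visibility check done before drawing.

```swift
import Foundation

// Illustrative sketch of the oval view's backing data: finished ovals
// plus an optional in-progress trial oval, usually nil.
struct OvalCanvasModel {
    var ovals: [CGRect] = []
    var trialOval: CGRect? = nil

    // Only ovals intersecting the visible canvas are worth drawing.
    func visibleOvals(in canvas: CGRect) -> [CGRect] {
        var result = ovals.filter { $0.intersects(canvas) }
        if let trial = trialOval, trial.intersects(canvas) {
            result.append(trial)
        }
        return result
    }
}

let canvas = CGRect(x: 0, y: 0, width: 100, height: 100)
var model = OvalCanvasModel()
model.ovals = [CGRect(x: 10, y: 10, width: 20, height: 20),
               CGRect(x: 200, y: 200, width: 5, height: 5)]  // off-canvas
model.trialOval = CGRect(x: 50, y: 50, width: 30, height: 30)
print(model.visibleOvals(in: canvas).count)  // 2: one stored oval plus the trial
```

In the real view, `draw(_:)` would walk this filtered list and fill each rect with an oval path; the trial oval gets its slightly different treatment at that point.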
Then I went back for the model. The model is a single oval entity, with its bounds as a Transformable attribute. In code, I manipulate CGRect values wrapped in NSValue objects. The ovals are saved and loaded as entities dumped directly into the managed object context. On file open, I run a fetch and iterate over the results, copying each object's bounds to a CGRect in the oval view's array. When I create a new oval, I insert it into the context, which triggers an NSNotification whose handler copies it into the oval view's array. The notification method also handles removals. (Because of value semantics, the program gives up if it gets an update-style notification.)
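The mirroring logic in that notification handler can be sketched in isolation. This is an assumption-laden stand-in, not the project's code: the real handler would observe `.NSManagedObjectContextObjectsDidChange` and pull the inserted and deleted sets out of `userInfo`; here those sets arrive as plain arrays of bounds, already unwrapped from their entities.

```swift
import Foundation

// Hypothetical mirror of the managed context's contents into the
// view's plain [CGRect] array, driven by insert/delete change sets.
final class OvalMirror {
    private(set) var ovals: [CGRect] = []

    func handleChanges(inserted: [CGRect], deleted: [CGRect]) {
        ovals.append(contentsOf: inserted)
        // Value semantics: CGRects have no identity, so removal
        // has to match by equal bounds.
        ovals.removeAll { deleted.contains($0) }
    }
}

let mirror = OvalMirror()
mirror.handleChanges(inserted: [CGRect(x: 0, y: 0, width: 10, height: 10),
                                CGRect(x: 5, y: 5, width: 10, height: 10)],
                     deleted: [])
mirror.handleChanges(inserted: [],
                     deleted: [CGRect(x: 0, y: 0, width: 10, height: 10)])
print(mirror.ovals.count)  // 1
```

Matching by equal bounds is exactly why an update-style notification is trouble: once an entity's bounds change, there is no longer a matching CGRect in the array to find, which is the "gives up" case mentioned above.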
The chapter doesn’t do anything with its discussion of gesture recognizers. I took that as a sign to use a pan gesture as a cheap click-and-drag motion for defining a new oval. The document class receives the gesture’s updates and calculates the trial oval, which is drawn only during the drag. (The trial oval is set back to nil after the drag ends. If the drag ended successfully, the trial oval is copied into the managed object context, which copies it back to the oval view as a full entry.)
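The geometry inside that gesture handler is simple enough to show directly. This helper is a sketch under my own naming, not the project's: it builds the trial oval's bounds from the drag's start point and the current pointer location, normalizing negative spans so a drag up and to the left still yields a valid rect.

```swift
import Foundation

// Hypothetical helper for the pan-gesture handler: compute the trial
// oval's bounding rect from the drag's anchor and current location.
func trialRect(from start: CGPoint, to current: CGPoint) -> CGRect {
    CGRect(x: min(start.x, current.x),
           y: min(start.y, current.y),
           width: abs(current.x - start.x),
           height: abs(current.y - start.y))
}

// Dragging up-left from (80, 60) to (20, 30) still produces a proper rect.
let rect = trialRect(from: CGPoint(x: 80, y: 60), to: CGPoint(x: 20, y: 30))
print(rect.origin.x, rect.origin.y, rect.width, rect.height)  // 20.0 30.0 60.0 30.0
```

The handler would call something like this on every `.changed` state of the recognizer, assign the result to the view's trial-oval property, and on `.ended` hand the final rect to the managed context.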
It doesn’t support edits, strictly adds. There are no removals except indirectly through undo.