Small Comment


Hello there…

I was very excited for the Big Nerd Ranch Obj-C book to finally arrive. I may comment here and there as I run across things, since I am a new programmer and just getting my feet wet! First, an observation. I noticed when running the Challenge program that if I kept the float values to a single decimal place (42.0 and 14.5, say), the addition always worked out perfectly. But changing those to 42.0 and 3.14 created results that were off by tiny amounts…

“The result of all of this rigamarole is 45.139999.”

I assume that’s normal, but as the calculations get more and more complex, I can’t help but think that even these tiny errors might add up?



Hey, Dave,

Good intuition: the small errors do add up, and that is a real problem if you are doing very precise calculations. For problems like this, there are special libraries that do arbitrary-precision arithmetic.

  • Aaron


I just ran into the same issue.

To me 3.14 + 42.0 = 45.14

How does that error get introduced? I’m curious to know what’s going on behind it.




The error is introduced by the nature of binary processors. In other words, it is not inherent to the C language or Xcode; it is the way the CPU calculates.

Binary numbers of limited length (32 or 64 bits) can only be so precise when representing decimal fractions. Many exact decimal values, such as 3.14, have no finite binary representation at all (0.14 is a repeating fraction in binary), so the nearest representable value is stored instead. In other words, by architecting a CPU with a fixed number of bits to represent numbers (typically 32 or 64), you are already limiting how many decimal digits you can represent. A similar situation happens when you round your decimals to a fixed number of digits (e.g., rounding currency amounts to two decimal places). There is a necessary level of imprecision implied by that requirement.

There are ways to get more precision in computer calculations. That methodology is called arbitrary-precision arithmetic. More on that subject on Wikipedia: … arithmetic



Thanks for the information. That is very helpful.