Here is my answer to the challenge:
#include <stdio.h>

int main(int argc, const char * argv[]) {
    float firstFloat = 3.14;
    float secondFloat = 42.0;
    double theSum = (double)firstFloat + (double)secondFloat;
    printf("The sum of %f and %f is %f.\n", firstFloat, secondFloat, theSum);
    return 0;
}
I know that typecasting hasn't been covered yet, but I learned it in some other courses I've taken.
Here is my output:
The sum of 3.140000 and 42.000000 is 45.140000.
The typecast converts each float to a double before the addition, so the sum is computed in double precision. Note that it can't restore precision that was already lost when 3.14 was stored in a float, though; the cast only widens the type from that point on.