Question

I know this is a simple question, but it came up while I was coding and I am wondering how it works. My first question is: when printf is given an integer, as below, but the format string expects a %f float value, why does it always output 0.000000? I am running this with GCC in a Linux terminal.

#include <stdio.h>

int main() {
    int a = 2, b = 5, result = 0;
    result = b / a * a;

    printf("%f\n", result);
}
// The printf statement above outputs 0.000000 every time.

Then, when I use the code below and pass printf a double where it expects a %i integer value, the output is always random garbage.

#include <stdio.h>

int main() {
    double a = 2, b = 5, result = 0;
    result = b / a * a;

    printf("%i\n", result);
}
// The printf statement above outputs random numbers every time.

I completely understand that the code above is incorrect, since the format specifier does not match the type of the argument I am passing, but I expected both forms of the error to behave the same way instead of differing like this. It just caught my curiosity, so I thought I would ask.

Was it helpful?

Solution

Basically, it is because if you interpret the bits that make up a small integer as if they were a double, the result looks like the double value 0, whereas if you interpret the bits that represent a small double value as an integer, it looks like something more interesting. Here is a link to a page that describes how the bits are used to represent a double: http://en.m.wikipedia.org/wiki/IEEE_floating_point . With this link and a little patience, you can actually work out the integer value that a given double would be interpreted as.
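
To make that concrete, here is a minimal sketch (not part of the original answer) that reinterprets the bits directly with memcpy. Passing the wrong type to printf is itself undefined behavior, so this only models the "same bits, read as a different type" idea; using a 64-bit integer is an assumption made so the bit widths match a typical double.

#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void) {
    /* Bits of a small integer, read as a double: a tiny subnormal that prints as 0.000000. */
    int64_t small_int = 2;              /* 64-bit on purpose, to match sizeof(double) */
    double as_double;
    memcpy(&as_double, &small_int, sizeof as_double);
    printf("int bits of 2 seen as a double: %f\n", as_double);

    /* Bits of a small double, read as an integer: a large, garbage-looking value. */
    double small_double = 5.0;
    int64_t as_int;
    memcpy(&as_int, &small_double, sizeof as_int);
    printf("double bits of 5.0 seen as an integer: %lld\n", (long long)as_int);

    return 0;
}

On a typical machine with IEEE-754 doubles, the second line prints 4617315517961601024, which is the bit pattern 0x4014000000000000 of 5.0 read as an integer.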

Other tips

You used the wrong format specifiers. It should be:

int a = 2, b = 5, result = 0;
result = b / a * a;

printf("%d\n", result);   // prints 4: integer division makes b / a == 2, and 2 * a == 4

...

double a = 2, b = 5, result = 0;
result = b / a * a;

printf("%f\n", result);   // prints 5.000000, since the division is now done in floating point
Licensed under: CC-BY-SA with attribution
Not affiliated with Stack Overflow