I'm working on a homework assignment to print the big-endian and little-endian representations of an int and a float. I'm having trouble with the little-endian conversion.

Here's my code:

#include <cstdio>
#include <sstream>
#include <string>
using namespace std;

void printLittle(char *p, int nBytes);  // defined below

void convertLitteE(string input)
{
    int theInt;
    stringstream stream(input);
    while (stream >> theInt)
    {
        float f = (float)theInt;

        printf("\n%d\n", theInt);
        printf("int:   0x");
        printLittle((char *) &theInt, sizeof(theInt));

        printf("\nfloat: 0x");
        printLittle((char *) &f, sizeof(f));
        printf("\n\n");
    }
}

// Print the bytes of *p in memory order, one hex pair per byte.
void printLittle(char *p, int nBytes)
{
    for (int i = 0; i < nBytes; i++, p++)
    {
        printf("%02X", *p);
    }
}

When the input is 12, I get what I would expect:

output:
int:   0x0C000000
float: 0x00004041

but when the input is 1234:

output:
int:   0xFFFFFFD2040000
float: 0x0040FFFFFF9A44

but I would expect:

int:   0xD2040000
float: 0x00409A44

When I step through the for loop, I can see what appears to be a garbage value, and then it prints all the F's, but I don't know why. I've tried this so many different ways, but I can't get it to work.

Any help would be greatly appreciated.

Solution

Apparently, on your system char is a signed 8-bit type. Using unsigned 8-bit bytes, the 4-byte little-endian representation of 1234 is 0xD2, 0x04, 0x00, 0x00. But when 0xD2 is interpreted as a signed char, as it is on most systems, it becomes -0x2E.

The call to printf then promotes that char to an int with value -0x2E, and printf (which is not type-safe) reads an unsigned int where you passed an int. This is undefined behavior, but on most systems it behaves like a static_cast, so you get the value 0xFFFFFFD2 when trying to print the first byte.
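
A minimal demonstration of the effect, assuming a platform where plain char is signed and int is 32 bits:

#include <cstdio>

int main()
{
    char c = (char)0xD2;                 // stores -0x2E when char is signed
    printf("%02X\n", c);                 // c promotes to int -0x2E; prints FFFFFFD2
    printf("%02X\n", (unsigned char)c);  // convert before promotion; prints D2
    return 0;
}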

If you stick to using unsigned char instead of char in these functions, you can avoid this particular problem.
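
A sketch of that fix, keeping the question's interface but switching the byte type:

// unsigned char prevents sign extension when each byte is
// promoted to int for the printf call.
void printLittle(unsigned char *p, int nBytes)
{
    for (int i = 0; i < nBytes; i++, p++)
    {
        printf("%02X", *p);  // *p promotes to a non-negative int, so exactly two hex digits print
    }
}

The call sites change to match, e.g. printLittle((unsigned char *) &theInt, sizeof(theInt));.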

(But as @jogojapan pointed out, this entire approach is not portable at all.)
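
For the int/float case in the question, one way to sidestep the host's byte order is to extract bytes by value rather than by memory layout. This is only a sketch under the usual assumptions (8-bit bytes, 32-bit int, 32-bit IEEE-754 float); the helper name printLittlePortable is mine, not from the thread:

#include <cstdio>
#include <cstdint>
#include <cstring>

// Print the little-endian byte sequence of any 4-byte object,
// independent of the host's native byte order.
void printLittlePortable(const void *value)
{
    uint32_t bits;
    memcpy(&bits, value, sizeof(bits));  // well-defined way to view the object's bits
    for (int i = 0; i < 4; i++)
    {
        // Byte i of the little-endian form is the i-th least significant byte.
        printf("%02X", (unsigned)((bits >> (8 * i)) & 0xFFu));
    }
}

For example, printLittlePortable(&f) with f = 1234.0f prints 00409A44 on any host, because shifts operate on the value rather than on its in-memory representation.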
