Reputation: 161
I have done type casting with int and char, but not with pointers, so I am posting this question.
#include <stdio.h>
int main() {
    int a[4] = { 1, 2, 6, 4 };
    char *p;
    p = (char *)a; // what does this statement mean?
    printf("%d\n", *p);
    p = p + 1;
    printf("%d", *p); // after incrementing it gives 0, why?
}
The first call to printf gives the first element of the array, but after p = p + 1 it gives 0. Why?
Upvotes: 2
Views: 241
Reputation: 15387
a is an array of four ints; in an expression its name decays to a pointer to the first element. On a typical machine this memory is a contiguous block of 16 bytes, arranged in little-endian, two's-complement format.
When you cast it, you basically said: "okay, I know this is a pointer to an array of ints, but now we're going to reinterpret the data as an array of chars." A char is one byte in C. So, when you added one to the pointer, it advanced one byte down the array, into the middle of the first int, where the byte contains 0.
To clarify, based on what I assume about your computer's architecture:
Little-endian numbers (the least significant byte comes first):
00000001 00000000 00000000 00000000 in binary = 1 in decimal
00000010 00000000 00000000 00000000 in binary = 2 in decimal
00000110 00000000 00000000 00000000 in binary = 6 in decimal
00000100 00000000 00000000 00000000 in binary = 4 in decimal
Your array of ints looks like this:
00000001 00000000 00000000 00000000  00000010 00000000 00000000 00000000  00000110 00000000 00000000 00000000  00000100 00000000 00000000 00000000
The variable "a" points to the first integer, which is 32 bits (i.e., "*a" is 00000001000000000000000000000000
). If you add one to the variable "a", you increment the pointer, so *(a+1) points to the second int 00000010000000000000000000000000
).
Now, you cast the variable "a" to a "char*" pointer. So, now, the variable "p" points to the same place, but because it is a char pointer, pointers to the first char, which is 8 bits (i.e., "*p" is 00000001
, the first byte of your array).
Finally, when you increment "p", you increment the pointer. Because "p" is a pointer to chars, it advances one byte, so "*(p+1)" is 00000000
.
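Here is a minimal sketch of that reinterpretation (the output assumes the 4-byte little-endian int described above):
#include <stdio.h>

int main(void) {
    int a[4] = { 1, 2, 6, 4 };
    char *p = (char *)a; /* reinterpret the same memory as bytes */

    /* print every byte; on a 4-byte little-endian machine this
       starts 1 0 0 0 2 0 0 0 ... */
    for (size_t i = 0; i < sizeof a; i++)
        printf("%d ", p[i]);
    printf("\n");
    return 0;
}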
Upvotes: 0
Reputation: 14363
int a[4] = { 1, 2, 6, 4 };
declares a as an array of four ints. Used in an expression, a converts to the address of the first element of the array.
char *p;
declares p as a pointer to char.
p = (char *)a;
Now p and a both yield addresses, so the address a evaluates to (the address of the first element of the array) is assigned to p. The typecast is needed because p was declared as a char *.
What this does: say the address stored in a is 100, and assume that an int takes 2 bytes and a char takes 1 byte in memory. Then
a + 1 would return a + sizeof(int) = 102
and
p + 1 would return p + sizeof(char) = 101
which explains the output:
A. The two bytes stored at a contain the first element of the array.
B. After p = p + 1, the byte stored at p is the second byte of the integer 1, which (on a little-endian machine) is 0.
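A short sketch that prints the actual addresses, showing the different step sizes (the concrete values, and the size of int, depend on your platform; most modern machines use 4-byte ints rather than the 2-byte ones assumed above):
#include <stdio.h>

int main(void) {
    int a[4] = { 1, 2, 6, 4 };
    char *p = (char *)a;

    /* a + 1 advances by sizeof(int) bytes, p + 1 by exactly one byte */
    printf("a: %p   a + 1: %p\n", (void *)a, (void *)(a + 1));
    printf("p: %p   p + 1: %p\n", (void *)p, (void *)(p + 1));
    return 0;
}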
Hope this helps.
Upvotes: 1
Reputation: 124712
Let's imagine a fairly typical platform on which a byte is eight bits, memory is arranged using little-endian byte ordering, and an int occupies four bytes of memory. On this platform, a value of 1 would be laid out like so:
00000001 00000000 00000000 00000000
^
the first element of 'a'
p is declared as a pointer to char (not int) and is initialized to point to the first element of the array a. A char on this platform is one byte. The int value above, interpreted as a char, would look like so:
00000001 -------- -------- --------
|______|
char is only 8 bits wide
So, whether we read one byte or four, i.e., whether we read *p or a[0], the value is 1. However, when you increment p, a pointer to char, it now points to the next char in memory, which is the next byte:
00000001 00000000 00000000 00000000
00000001 00000000 -------- --------
^        ^        ^        ^
p        p+1      p+2      p+3 ...
a[1] is the next int (2), but p[1] is the next char, which is 0.
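A small sketch contrasting the two kinds of indexing (output assumes the 4-byte little-endian layout above):
#include <stdio.h>

int main(void) {
    int a[4] = { 1, 2, 6, 4 };
    char *p = (char *)a;

    printf("a[0] = %d, *p   = %d\n", a[0], *p);   /* both read 1 */
    printf("a[1] = %d, p[1] = %d\n", a[1], p[1]); /* 2 vs. 0: int step vs. byte step */
    return 0;
}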
On a side note, you've actually stumbled upon a method of determining whether a given processor uses little- or big-endian byte order. If the system were big-endian (most significant byte first), then your first printf would have printed 0, because the memory layout would have been:
00000000 00000000 00000000 00000001
^
the first element of 'a'

00000000 -------- -------- --------
^
p
If you have a multi-byte value equal to 1 and you read only its first byte, you can use that byte's value (1 or 0) to test the endianness of the machine:
#include <stdio.h>

int main(void) {
    int n = 1;
    if (*(char *)&n == 1)  /* reads the lowest-addressed byte of n */
        printf("little endian\n");
    else
        printf("big endian\n");
    return 0;
}
Upvotes: 5
Reputation: 2005
If you use an Intel-like processor, bytes in memory are stored using the little-endian format. Moreover, on your computer the type int probably has a size of 4 bytes while a char is 1 byte long (you can verify this using the C operator sizeof), so the first element of your integer array is:
0x01 0x00 0x00 0x00
For this reason, when you use a char pointer to walk through the values of your integer array, you move forward one byte at a time (not four), with the result you observed.
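You can verify the sizes on your own machine with sizeof, e.g.:
#include <stdio.h>

int main(void) {
    /* sizeof(char) is always 1; sizeof(int) is platform-dependent, commonly 4 */
    printf("sizeof(int)  = %zu\n", sizeof(int));
    printf("sizeof(char) = %zu\n", sizeof(char));
    return 0;
}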
I hope this helps!
Upvotes: 0
Reputation: 409216
You have an array of four integers. You try to access them as an "array" of characters. That's what the casting means.
As for the second printf printing zero, remember that integers here are four bytes while characters are only one. Incrementing the pointer p makes p point to the second byte of the first integer, which is zero. If you had a larger number (i.e., over 255), then the second byte would hold a value as well. Note that this only works this way on Intel-type machines, which are little-endian; on big-endian machines both printf calls would print zero.
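For instance, a quick sketch (assuming a little-endian machine with 4-byte int) of how a value over 255 spills into the second byte:
#include <stdio.h>

int main(void) {
    int n = 258;                   /* 258 = 0x102: bytes 02 01 00 00 little-endian */
    char *p = (char *)&n;
    printf("%d %d\n", p[0], p[1]); /* prints "2 1" on a little-endian machine */
    return 0;
}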
Upvotes: 2
Reputation: 22114
To be exact, the first printf doesn't give the first element of the array; it gives the first 8 bits of the first element, which just happen to be equal to the first element's numeric value. The second printf gives the next 8 bits of the first element, which are 0 in this case.
1 = 00000000 00000000 00000000 00000001 (32 bits)
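A quick sketch (assuming little-endian) showing that *p reads only the low 8 bits; with a value like 257 they no longer match the full element:
#include <stdio.h>

int main(void) {
    int a[1] = { 257 };  /* 257 = 0x101: low byte is 1, not 257 */
    char *p = (char *)a;
    /* on a little-endian machine *p equals a[0] & 0xFF, not a[0] itself */
    printf("*p = %d, a[0] & 0xFF = %d, a[0] = %d\n", *p, a[0] & 0xFF, a[0]);
    return 0;
}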
Upvotes: 2