Reputation: 3090
I'm doing a small client/server project as a final project in a C++ course. We are handed some classes that take care of communication (using sys/socket.h), and we can basically do connection->send(byte) to send one byte of data.
Say I have a string that I want to send. How do I make sure an 'a' is interpreted as an 'a' when sent from client to server, or vice versa? Since the standard doesn't say whether char defaults to signed or unsigned, I don't know how to handle it.
I had some idea that I could subtract std::numeric_limits<char>::min() on each end, but I'm not sure it is a good one.
Upvotes: 1
Views: 12536
Reputation: 126
Another thing to be aware of is the endianness of your CPU: types bigger than one byte can be a problem. Use big endian as your standard, because it is the byte order used by most common networking protocols.
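For example, here is a minimal sketch (assuming a POSIX system, using htonl/ntohl from <arpa/inet.h>) of putting a 32-bit value into big-endian order before sending it and converting it back after receiving:

#include <arpa/inet.h>   // htonl / ntohl (POSIX)
#include <cstdint>
#include <cstring>

// Put a 32-bit value into network byte order (big endian) before sending.
void packUint32(uint32_t value, unsigned char out[4]) {
    uint32_t be = htonl(value);          // host -> network order
    std::memcpy(out, &be, sizeof be);    // the 4 bytes to send, most significant first
}

// Convert the received bytes back to host order.
uint32_t unpackUint32(const unsigned char in[4]) {
    uint32_t be;
    std::memcpy(&be, in, sizeof be);     // reassemble the received bytes
    return ntohl(be);                    // network -> host order
}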
Best regards.
Upvotes: 0
Reputation: 932
To verify your communications you should use a network sniffer. Wireshark is a good one.
If the sniffer captures your Ethernet packet and says it contains an 'a', you can be very sure that you have sent an 'a'.
In my experience, when you send a string with something similar to this:
string myString = "Hello World";
mySocket.send(myString.c_str(), myString.length());
the result is the same whether I cast myString.c_str() to (char*) or (unsigned char*).
A byte is always 8 bits :)
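For example, a minimal sketch (the names are just for illustration, not part of any socket API) showing that the buffer holds the same bytes whichever pointer type you view it through:

#include <cassert>
#include <cstring>
#include <string>

int main() {
    std::string myString = "Hello World";
    // The same memory viewed through two different pointer types.
    const char* asSigned = myString.c_str();
    const unsigned char* asUnsigned =
        reinterpret_cast<const unsigned char*>(myString.c_str());
    // Byte for byte, the buffers are identical; only the interpretation
    // of the values (signed vs. unsigned) differs.
    assert(std::memcmp(asSigned, asUnsigned, myString.length()) == 0);
}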
Regards
Upvotes: 1
Reputation: 348
Neither TCP nor UDP cares about the encoding of your string; it is always treated as just an array of bytes. So to make sure your string is interpreted correctly, both server and client have to agree on a common encoding.
In your case I would simply use the c_str() method of std::string to send the string, and build the received string by interpreting the incoming data as const char* (for ASCII strings). This should work as long as the client app and the server app are using the same string libraries.
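For example, a minimal sketch of sending and receiving a string one byte at a time. The Connection struct with send()/receive() and the 4-byte length prefix are assumptions for illustration, standing in for the communication classes you were handed:

#include <cstdint>
#include <string>

// Placeholder for the connection class described in the question.
struct Connection {
    void send(unsigned char byte);
    unsigned char receive();
};

// Send the length first (so the receiver knows when to stop), then the raw bytes.
void sendString(Connection* connection, const std::string& s) {
    uint32_t len = static_cast<uint32_t>(s.length());
    for (int shift = 24; shift >= 0; shift -= 8)    // big-endian length prefix
        connection->send(static_cast<unsigned char>(len >> shift));
    for (char c : s)                                // each char goes out as one raw byte
        connection->send(static_cast<unsigned char>(c));
}

std::string receiveString(Connection* connection) {
    uint32_t len = 0;
    for (int i = 0; i < 4; ++i)                     // rebuild the length prefix
        len = (len << 8) | connection->receive();
    std::string s;
    s.reserve(len);
    for (uint32_t i = 0; i < len; ++i)              // bytes back into chars
        s.push_back(static_cast<char>(connection->receive()));
    return s;
}

Because every byte is sent and received as unsigned char, the signedness of char on either machine never affects the values that go over the wire.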
Upvotes: 5