Reputation: 109
I am working on an application with a single server that has 4 NIC ports, all of which are to be configured with the same static IP address, 192.168.0.1, and that talks to 4 separate black boxes, each of which also has the same static IP address, 192.168.0.2. The only difference in the communication to and from the black boxes is the port numbers: box 1 listens to my data on port 2010, box 2 on 2020, box 3 on 2030 and so on, with a similar pattern for the boxes transmitting back to the server on ports 2110, 2120, 2130 and so on. The wiring between the server and the black boxes is one-to-one, without any switches or hubs in between: Ethernet port eth1 goes straight to box 1, eth2 goes to box 2, and so on.
In my application design I will have different threads with separate socket instances for each port. The one thing I am unsure about is how to specify which Ethernet interface a socket should use. I have read about bind() in other threads: one can specify the source IP address and the port and bind those to a socket, letting the underlying layers decide on the actual Ethernet adapter to use. Since I will be using UDP datagrams, which are simply sent out on the network regardless of whether the client is listening, I assume resolving the IP/port would not work here, and I also do not want to spam the network with packets destined for the wrong box, as there will be lots of data flowing across it already. Also, this will be written in C++11 in a Windows environment using winsock2.
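For reference, this is the kind of bind() usage I have read about, as a minimal sketch with example values from my port scheme. Note that the local address is the one all four NICs would share, which is exactly what I don't see how to disambiguate:

// Minimal Winsock2 UDP sketch of the bind() usage described above.
// The address and port values are just examples from my setup.
#include <winsock2.h>
#include <ws2tcpip.h>
#pragma comment(lib, "Ws2_32.lib")

int main()
{
    WSADATA wsaData;
    if (WSAStartup(MAKEWORD(2, 2), &wsaData) != 0)
        return 1;

    SOCKET sock = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
    if (sock == INVALID_SOCKET) { WSACleanup(); return 1; }

    // Bind to the local source address and the receive port for box 1.
    sockaddr_in local = {};
    local.sin_family = AF_INET;
    local.sin_port = htons(2110);                        // box 1 transmits back on 2110
    inet_pton(AF_INET, "192.168.0.1", &local.sin_addr);  // same address on all four NICs!
    if (bind(sock, reinterpret_cast<sockaddr*>(&local), sizeof(local)) == SOCKET_ERROR)
    { closesocket(sock); WSACleanup(); return 1; }

    // Send to box 1 on its listen port.
    sockaddr_in peer = {};
    peer.sin_family = AF_INET;
    peer.sin_port = htons(2010);
    inet_pton(AF_INET, "192.168.0.2", &peer.sin_addr);   // same address on all four boxes!
    const char payload[] = "hello";
    sendto(sock, payload, sizeof(payload), 0,
           reinterpret_cast<sockaddr*>(&peer), sizeof(peer));

    closesocket(sock);
    WSACleanup();
    return 0;
}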
How would I go about specifying which Ethernet interface/adapter a particular socket should use in such an instance?
And for those who will ask why I am doing it this way: I have no choice, as the black boxes are an outside vendor's hardware and I have no control over their IP addresses.
Upvotes: 1
Views: 612
Reputation: 283684
You can do this, but not with sockets, nor even through the networking protocol stack of your host.
But you can send and receive complete packets on a particular interface using a mechanism such as winpcap, tun/tap, or slirp. Actually, a proper network test needs to do this anyway, because you will need to test the peer's ability to handle malformed packets, which the host networking stack will never generate.
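With winpcap, for example, the rough shape is: open the one adapter you care about by name, and hand it complete frames. A sketch under those assumptions (the device name is a placeholder and error handling is trimmed); note that you build the whole Ethernet/IP/UDP frame yourself, checksums and all:

// Rough sketch: sending a hand-built frame on one specific adapter via
// the WinPcap/Npcap API. The device name below is a placeholder --
// enumerate adapters with pcap_findalldevs() to find the real one.
#include <pcap.h>

int main()
{
    char errbuf[PCAP_ERRBUF_SIZE];

    // Open exactly the adapter wired to box 1 (placeholder name).
    pcap_t* handle = pcap_open_live("\\Device\\NPF_{...}", 65536, 0, 1000, errbuf);
    if (!handle)
        return 1;

    // You supply the complete frame: Ethernet header, IPv4 header,
    // UDP header (ports 2010/2110 etc.), payload, and valid checksums.
    unsigned char frame[64] = { /* dst MAC, src MAC, 0x0800, IP, UDP, data */ };

    if (pcap_sendpacket(handle, frame, sizeof(frame)) != 0)
    { pcap_close(handle); return 1; }

    // Receiving works the same way in reverse: pcap_next_ex() hands you
    // every raw frame seen on that one interface, and you parse it yourself.
    pcap_close(handle);
    return 0;
}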
A basic observation: your task is essentially equivalent to implementing bridging in user mode. Although you aren't selecting the interface from a bridge learning table, the rest is the same. So take a look at some software that does user-mode bridging on Win32, for example coLinux.
If your requirements document actually says that it will be done using Winsock2, you're going to need to fight to get that changed before you have any hope of progress. (This is why requirements should specify goals, not means.)
Upvotes: 2
Reputation: 339836
You're exposing yourself to seven levels of hell by trying to do this in software.
IMHO, the simplest solution would be to put a trivial dual-NIC gateway box on each of the four network segments, translating from a separately configured subnet on each physical port to the (hidden) duplicate address on each black box:
+----------+                  +-------+                  +-------+
|          | 172.16.n.1       |       | 192.168.0.1      |       |
|  NIC n   +------------------+  NAT  +------------------+  BB   |
|          |       172.16.n.2 |       |      192.168.0.2 |       |
+----------+                  +-------+                  +-------+
The NAT box would have to proxy packets sent to 192.168.0.1 as if they came from 172.16.n.2 (or be otherwise configured to treat 172.16.n.1 as the target destination address), and you would need port forwarding configured so that inbound packets to 172.16.n.2 are forwarded to the hidden 192.168.0.2.
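With each segment on its own subnet, the server side then needs nothing exotic: each socket binds to the unique address of its own NIC and talks to that segment's 172.16.n.2 front address. A minimal sketch, assuming the addressing in the diagram above and the port scheme from the question (shown for box 1):

// Sketch: with per-segment subnets, ordinary Winsock2 calls suffice.
// Box n is reached through its NAT front address 172.16.n.2.
#include <winsock2.h>
#include <ws2tcpip.h>
#pragma comment(lib, "Ws2_32.lib")

SOCKET open_box_socket(const char* nic_addr,    // e.g. "172.16.1.1"
                       const char* nat_addr,    // e.g. "172.16.1.2"
                       unsigned short rx_port,  // e.g. 2110
                       unsigned short tx_port)  // e.g. 2010
{
    SOCKET sock = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
    if (sock == INVALID_SOCKET)
        return INVALID_SOCKET;

    // Binding to the NIC's now-unique address pins traffic to that segment.
    sockaddr_in local = {};
    local.sin_family = AF_INET;
    local.sin_port = htons(rx_port);
    inet_pton(AF_INET, nic_addr, &local.sin_addr);
    bind(sock, reinterpret_cast<sockaddr*>(&local), sizeof(local));

    // connect() on a UDP socket just fixes the default peer (the NAT box).
    sockaddr_in peer = {};
    peer.sin_family = AF_INET;
    peer.sin_port = htons(tx_port);
    inet_pton(AF_INET, nat_addr, &peer.sin_addr);
    connect(sock, reinterpret_cast<sockaddr*>(&peer), sizeof(peer));

    return sock;
}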
Upvotes: 0