Reputation: 1961
I'm trying to build a persistent TCP connection between two servers using Go: a client that runs locally and a server on the remote machine. The client also serves HTTP requests locally to a PHP script. Each request is relayed through the TCP connection to the remote server, and the response is then sent back to the PHP script.
HTTP request from .php script
    \
     \    ----->  HTTP req is relayed
      \
client <--------------TCP connection------------> server
      /
     /    <------ Response is relayed
    /
HTTP request from .php script
With concurrent connections, responses coming back from the remote server aren't matched to the requests that produced them, i.e. the response for request #1 could end up being delivered to request #2 instead.
My current solution
I've created a map of channels, keyed by a unique ID per HTTP request. This unique ID is passed in the TCP request to the server and is sent back along with the response. The client parses the ID from the response and sends the response on the corresponding channel in the map. Oversimplified code, where conns is the map of available channels and, to keep it short, the unique ID is the whole response string str:
connbuf := bufio.NewReader(remoteConn)
str, err := connbuf.ReadString('\n')
if err != nil {
    return // connection closed or broken; handle/reconnect in real code
}
str = strings.TrimSpace(str)
if len(str) > 0 {
    // In this oversimplification the whole line is the ID; route the
    // response to the channel the originating HTTP handler is waiting on.
    id, _ := strconv.Atoi(str)
    conns[id] <- str
}
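For completeness, here is a rough sketch of what the HTTP-handler (sending) side of such a scheme could look like. The names here (nextID, conns, connsMu, writeMu, remoteConn, handleHTTP) are hypothetical, and in real code both the shared map and writes to the single TCP connection need to be serialized, as shown:

var (
    remoteConn net.Conn                // the single persistent TCP connection
    nextID     int64                   // atomically incremented request counter
    conns      = map[int]chan string{} // response channels keyed by request ID
    connsMu    sync.Mutex              // guards conns (reader goroutine included)
    writeMu    sync.Mutex              // serializes writes to remoteConn
)

func handleHTTP(w http.ResponseWriter, r *http.Request) {
    // Allocate a unique ID and register a channel for this request's response.
    id := int(atomic.AddInt64(&nextID, 1))
    ch := make(chan string, 1)
    connsMu.Lock()
    conns[id] = ch
    connsMu.Unlock()
    defer func() {
        connsMu.Lock()
        delete(conns, id)
        connsMu.Unlock()
    }()

    // Relay the request over the shared connection, tagged with the ID.
    writeMu.Lock()
    _, err := fmt.Fprintf(remoteConn, "%d %s\n", id, r.URL.RawQuery)
    writeMu.Unlock()
    if err != nil {
        http.Error(w, "relay failed", http.StatusBadGateway)
        return
    }

    // Wait for the reader goroutine (snippet above) to route the response here.
    select {
    case resp := <-ch:
        fmt.Fprintln(w, resp)
    case <-time.After(10 * time.Second):
        http.Error(w, "timeout waiting for remote response", http.StatusGatewayTimeout)
    }
}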
Question
Is there a more elegant, built-in way to achieve this - making sure each response reaches the correct request from the local PHP script even with concurrent requests?
Upvotes: 1
Views: 154
Reputation: 30057
This is as much a self-answer by kouton as anything. Using a single "back end" connection (between your local server and the remote server) to service multiple "front end" connections (between your local PHP and the local server) is a form of multiplexing, and the muxado package (video of a talk) is meant to help you implement exactly that.
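To make the idea concrete, here is a rough sketch of how the two ends might look with a stream multiplexer: each HTTP request gets its own logical stream over the one TCP connection, so the library does the request/response matching and no hand-rolled ID bookkeeping is needed. The muxado calls below (Client, Server, Open, Accept) and the Session/Stream types are based on my reading of its README; exact signatures vary between versions, so check the package docs before copying. The address, payload, and handleStream are hypothetical.

// Local client side: wrap the single persistent TCP connection in a session.
func dialRemote() (muxado.Session, error) {
    conn, err := net.Dial("tcp", "remote.example.com:1234") // hypothetical address
    if err != nil {
        return nil, err
    }
    return muxado.Client(conn), nil
}

// Inside each local HTTP handler: one logical stream per request, so the
// library keeps concurrent request/response pairs separate.
func relay(sess muxado.Session, payload string) (string, error) {
    stream, err := sess.Open()
    if err != nil {
        return "", err
    }
    defer stream.Close()
    if _, err := fmt.Fprintln(stream, payload); err != nil {
        return "", err
    }
    return bufio.NewReader(stream).ReadString('\n')
}

// Remote server side: accept one stream per relayed request.
func serveRemote(l net.Listener) error {
    conn, err := l.Accept()
    if err != nil {
        return err
    }
    sess := muxado.Server(conn)
    for {
        stream, err := sess.Accept()
        if err != nil {
            return err
        }
        go handleStream(stream) // hypothetical per-request handler
    }
}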
Upvotes: 1