Zhang Mingxiang

Reputation: 96

The FIX client can receive incoming messages but cannot send outgoing heartbeat messages

We have built a FIX client. The client can receive incoming messages, but cannot send outgoing heartbeat messages or reply to the TestRequest message after the last heartbeat was sent; something is triggered that stops the client side from sending heartbeats at all.

FIX version: FIX 5.0

The same incident happened before; we have a tcpdump of one session from that time: https://i.sstatic.net/oKx49.png

We deploy every FIX session to a separate k8s pod.

  1. We suspected a CPU resource issue because the load average was high around the time of the incident, but adding more CPU cores did not solve it. We think the load average is high because of the FIX reconnections.
  2. We suspected an IO issue because we use AWS EFS, shared by the 3 sessions, for logging and the message store, but using pod affinity to assign the 3 sessions to different nodes did not solve it either.
  3. It is not a network issue either, since we can receive FIX messages and the other sessions worked well at that time. We have also disabled SNAT in the k8s cluster.

We are using quickfixj 2.2.0 to create the FIX client. We have 3 sessions, which are deployed to k8s pods on separate nodes:

  1. a rate session to get FX prices from the server
  2. an order session to get transaction (execution report) messages from the server; we only send logon/heartbeat/logout messages to the server
  3. a backoffice session to get market status

We use the Apache Camel quickfixj component to make our programming easier. It works well most of the time, but the 3 sessions keep reconnecting to the FIX servers; the frequency is about once a month, and usually only 2 of the sessions have issues.
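For context, the rate session is wired through the Camel quickfixj component roughly as in the sketch below. The cfg file name ("quote.cfg") and the downstream bean name are placeholders, not our real code; the session ID matches the CompIDs visible in the message logs further down.

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.quickfixj.QuickfixjEndpoint;
import org.apache.camel.component.quickfixj.QuickfixjEventCategory;

// Rough sketch of the rate-session route: consume from the FIX session and keep only
// application messages; admin traffic (logon, heartbeat, test request) is handled by the engine.
public class RateSessionRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("quickfix:quote.cfg?sessionID=FIXT.1.1:TA_Quote1->Quote1")
            .filter(header(QuickfixjEndpoint.EVENT_CATEGORY_KEY)
                    .isEqualTo(QuickfixjEventCategory.AppMessageReceived))
            .to("bean:priceHandler");
    }
}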

HeartBtInt = 30s
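For reference, the session behind the logs below is configured roughly as in this sketch: HeartBtInt=30 (tag 108=30 in the logon), DefaultApplVerID=9 i.e. FIX 5.0 (tag 1137=9), and ResetOnLogon=Y (tag 141=Y). The CompIDs and host/port are taken from the captures; everything else is illustrative only, since our real configuration lives in a cfg file.

import quickfix.SessionID;
import quickfix.SessionSettings;

// Illustrative-only settings mirroring what the logs show; not our production configuration.
public final class QuoteSessionConfig {
    public static SessionSettings build() {
        SessionSettings settings = new SessionSettings();
        settings.setString("ConnectionType", "initiator");

        SessionID sessionID = new SessionID("FIXT.1.1", "TA_Quote1", "Quote1");
        settings.setString(sessionID, "DefaultApplVerID", "9");   // 1137=9 -> FIX 5.0
        settings.setString(sessionID, "HeartBtInt", "30");        // 108=30
        settings.setString(sessionID, "ResetOnLogon", "Y");       // 141=Y
        settings.setString(sessionID, "SocketConnectHost", "10.60.45.132");
        settings.setString(sessionID, "SocketConnectPort", "11050");
        return settings;
    }
}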

The FIX event messages at the client side:

20201004-21:10:53.203 Already disconnected: Verifying message failed: quickfix.SessionException: Logon state is not valid for message (MsgType=1)
20201004-21:10:53.271 MINA session created: local=/172.28.65.164:44974, class org.apache.mina.transport.socket.nio.NioSocketSession, remote=/10.60.45.132:11050
20201004-21:10:53.537 Initiated logon request
20201004-21:10:53.643 Setting DefaultApplVerID (1137=9) from Logon
20201004-21:10:53.643 Logon contains ResetSeqNumFlag=Y, resetting sequence numbers to 1
20201004-21:10:53.643 Received logon

The FIX incoming messages at the client side:

8=FIXT.1.1☺9=65☺35=0☺34=2513☺49=Quote1☺52=20201004-21:09:02.887☺56=TA_Quote1☺10=186☺
8=FIXT.1.1☺9=65☺35=0☺34=2514☺49=Quote1☺52=20201004-21:09:33.089☺56=TA_Quote1☺10=185☺
8=FIXT.1.1☺9=74☺35=1☺34=2515☺49=Quote1☺52=20201004-21:09:48.090☺56=TA_Quote1☺112=TEST☺10=203☺ 
----- 21:10:53.203 Already disconnected ----
8=FIXT.1.1☺9=87☺35=A☺34=1☺49=Quote1☺52=20201004-21:10:53.639☺56=TA_Quote1☺98=0☺108=30☺141=Y☺1137=9☺10=183☺
8=FIXT.1.1☺9=62☺35=0☺34=2☺49=Quote1☺52=20201004-21:11:23.887☺56=TA_Quote1☺10=026☺

The FIX outgoing messages at the client side:

8=FIXT.1.1☺9=65☺35=0☺34=2513☺49=TA_Quote1☺52=20201004-21:09:02.884☺56=Quote1☺10=183☺
---- no heartbeat message around 21:09:32 ----
---- 21:10:53.203 Already disconnected ---
8=FIXT.1.1☺9=134☺35=A☺34=1☺49=TA_Quote1☺52=20201004-21:10:53.433☺56=Quote1☺98=0☺108=30☺141=Y☺553=xxxx☺554=xxxxx☺1137=9☺10=098☺
8=FIXT.1.1☺9=62☺35=0☺34=2☺49=TA_Quote1☺52=20201004-21:11:23.884☺56=Quote1☺10=023☺
8=FIXT.1.1☺9=62☺35=0☺34=3☺49=TA_Quote1☺52=20201004-21:11:53.884☺56=Quote1☺10=027☺

Thread dump taken when the TestRequest (112=TEST) message from the server was received. BTW, the gist is from our development environment, which has the same deployment: https://gist.github.com/hitxiang/345c8f699b4ad1271749e00b7517bef6

We had enabled the debug log in quickfixj, but it did not give much information, only logs for the messages received.

Upvotes: 0

Views: 1230

Answers (1)

Zhang Mingxiang

Reputation: 96

The sequence of events in time order:

  1. 20201101-23:56:02.742 The outgoing heartbeat should have been sent at this time. It looks like it was being sent but hung on the IO write, with the thread in the Running state.
  2. 20201101-23:56:18.651 Test message from the server side to trigger a thread dump.
  3. 20201101-22:57:45.654 The server side began to close the connection.
  4. 20201101-22:57:46.727 Thread dump - right.
  5. 20201101-23:57:48.363 Logon message.
  6. 20201101-22:58:56.515 Thread dump - left.

The right one (2020-11-01T22:57:46.727Z) was taken while it was hung; the left one (2020-11-01T22:58:56.515Z) was taken after reconnection.

It looks like the storage we are using - AWS EFS - caused the issue. But the feedback from AWS support is that nothing is wrong on the AWS EFS side. Maybe it is a network issue between the k8s EC2 instances and AWS EFS.

  1. First, we made the logging asynchronous for all sessions, which made the disconnections happen less often (see the sketch after this list).
  2. Second, for the market session, we wrote the sequence files to local disk; the disconnections on the market session were gone.
  3. Third, at last we replaced AWS EFS with AWS EBS (a persistent volume in k8s) for all sessions. It works great now (see the wiring sketch further below).
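As a rough illustration of step 1, "async logging" means something like the wrapper below: every QuickFIX/J Log call is handed off to a background thread, so a slow EFS write can no longer block the session thread that also has to send heartbeats. Class and variable names are illustrative, not our production code.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import quickfix.Log;
import quickfix.LogFactory;
import quickfix.SessionID;

// Wraps any LogFactory (e.g. FileLogFactory on the EFS mount) and makes its writes asynchronous.
public class AsyncLogFactory implements LogFactory {
    private final LogFactory delegate;
    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    public AsyncLogFactory(LogFactory delegate) {
        this.delegate = delegate;
    }

    @Override
    public Log create(SessionID sessionID) {
        Log target = delegate.create(sessionID);
        return new Log() {
            @Override public void onIncoming(String message) { executor.execute(() -> target.onIncoming(message)); }
            @Override public void onOutgoing(String message) { executor.execute(() -> target.onOutgoing(message)); }
            @Override public void onEvent(String text)       { executor.execute(() -> target.onEvent(text)); }
            @Override public void onErrorEvent(String text)  { executor.execute(() -> target.onErrorEvent(text)); }
            @Override public void clear()                    { executor.execute(target::clear); }
        };
    }
}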

BTW, AWS EBS is not highly available across availability zones, but that is better than FIX disconnections.
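For steps 2 and 3 the essential change is where the file store points: a local-disk / EBS-backed path instead of the EFS mount (the FileStorePath setting in the cfg). The Camel component builds the engine from the cfg file, so in practice it is just a config change; the hand-wired sketch below only shows where the store and log factories plug in, and the cfg name and path are placeholders.

import quickfix.Application;
import quickfix.DefaultMessageFactory;
import quickfix.FileStoreFactory;
import quickfix.LogFactory;
import quickfix.MessageStoreFactory;
import quickfix.SLF4JLogFactory;
import quickfix.SessionSettings;
import quickfix.SocketInitiator;

// Sketch only: sequence numbers and the message store live on an EBS-backed path via
// FileStorePath=/data/fix-store (placeholder) in quote.cfg, and logging goes through the
// async wrapper shown above instead of writing synchronously to EFS.
public final class QuoteInitiatorFactory {
    public static SocketInitiator create(Application application) throws Exception {
        SessionSettings settings = new SessionSettings("quote.cfg");
        MessageStoreFactory storeFactory = new FileStoreFactory(settings);
        LogFactory logFactory = new AsyncLogFactory(new SLF4JLogFactory(settings));
        return new SocketInitiator(application, storeFactory, settings, logFactory,
                new DefaultMessageFactory());
    }
}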

Upvotes: 3
