Reputation: 11
I’m trying to create an ingestion pipeline using OpenSearch Data Prepper to batch upload CDR records, together with their corresponding QoE data, to an OpenSearch index. After each call ends, the SBC sends three POST requests (at most 10 seconds apart). The first request contains the CDR metadata with unique call IDs (one for each leg of the call):
{
...
"recordType": "STOP",
"productName": "TEST-SBC01",
"setupTime": "11:59:18",
"globalSessionId": "07f499384e9afc34",
"sessionId": "a1c45b:44:47",
"isSuccess": "yes",
"timeToConnect": 237,
"callDuration": 1582,
"timeZone": "UTC",
"ingressCallOrigin": "in",
"egressCallOrigin": "out",
"ingressTrmReason": "GWAPP_NORMAL_CALL_CLEAR",
"ingressCallId": "[email protected]",
"egressCallId": "[email protected]",
}
The other two requests contain voice quality statistics, one for each call leg:
{
"VQSessionReport": "CallTerm",
"CallID": "[email protected]",
...
}
{
"VQSessionReport": "CallTerm",
"CallID": "[email protected]",
...
}
My objective is to aggregate (merge) all three requests into a single JSON object and send it to the OpenSearch index, roughly in the shape sketched below. The problem is that I can’t match on a single identification key, because a QoE request can correspond to either ingressCallId or egressCallId. Is it possible to achieve this with Data Prepper, or do I need to go the Kafka route?
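For illustration, the merged document I want to index would look something like this (the nested ingressQoe/egressQoe keys are only my own naming for this sketch):
{
"recordType": "STOP",
"globalSessionId": "07f499384e9afc34",
"ingressCallId": "[email protected]",
"egressCallId": "[email protected]",
...
"ingressQoe": {
  "VQSessionReport": "CallTerm",
  "CallID": "[email protected]",
  ...
},
"egressQoe": {
  "VQSessionReport": "CallTerm",
  "CallID": "[email protected]",
  ...
}
}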
Thanks
I tried to use Data Prepper with the aggregate processor, but I was only able to aggregate on either ingressCallId or egressCallId, not both.
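A stripped-down version of the kind of pipeline I tried (port, hosts, index name and group_duration are placeholders, and the rename step is just one way of mapping the QoE CallID onto a shared key; auth settings omitted):
cdr-qoe-pipeline:
  source:
    http:
      port: 2021                         # placeholder port the SBC posts to
  processor:
    - rename_keys:
        entries:
          # Map the QoE "CallID" onto the CDR key so the aggregate processor
          # can group on it. This is exactly where it breaks down: a given
          # CallID may belong to either the ingress or the egress leg.
          - from_key: "CallID"
            to_key: "ingressCallId"
            overwrite_if_to_key_exists: false
    - aggregate:
        # Only one set of identification keys per aggregate processor,
        # so grouping works for one leg but not the other.
        identification_keys: ["ingressCallId"]
        action:
          put_all:
        group_duration: "15s"            # the three requests arrive within ~10 s
  sink:
    - opensearch:
        hosts: ["https://opensearch:9200"]   # placeholder host
        index: "cdr-qoe"                     # placeholder index name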
Upvotes: 1
Views: 246