Reputation: 183
I am working on a Kinesis real-time video streaming POC.
I was able to stream video from an Android app to a Kinesis Video Stream, and I call the Python boto3 API (`get_records`) on the stream processor's output stream for face detection.
Faces are being detected, and the API returns records like this:
"InputInformation": {
"KinesisVideo": {
"StreamArn": "arn:aws:kinesisvideo:<video-stream>",
"FragmentNumber": "913..",
"ServerTimestamp": 1.5234201234E9,
"ProducerTimestamp": 1.523420130123E9,
"FrameOffsetInSeconds": 0.6769999861718424
}
},
"StreamProcessorInformation": {
"Status": "RUNNING"
},
"FaceSearchResponse": [{
"DetectedFace": {
"BoundingBox": {
"Height": 0.41025642,
"Width": 0.30769232,
"Left": 0.45673078,
"Top": 0.23397435
},
"Confidence": 99.99998, ........
Question: How do I generate a frame with the detected face highlighted from this data stream output (by referring back to the video-stream data)?
I cannot find any example or documentation in the AWS reference pages on creating a frame and storing it as a JPEG image with the face highlighted.
Any help/pointer to an example in the Java or Python API for generating a frame from a video stream?
Upvotes: 0
Views: 1144
Reputation: 344
For the AWS Rekognition integration with Kinesis Video Streams, please check out the KinesisVideoRekognitionIntegrationExample published in the Consumer Parser library. This example shows how to ingest a video file (which you can replace with a real-time producer such as the GStreamer sample application), retrieve the data, parse the MKV, decode H264 frames using JCodec, combine them with the Rekognition JSON output, and draw bounding boxes around the detected faces using JFrame.
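The drawing step itself is straightforward once you have a decoded frame: Rekognition's `BoundingBox` values are fractions of the frame's width and height, so you scale them to pixels and draw a rectangle. A minimal Python sketch using Pillow (assuming you already have a decoded frame; the 640x480 dummy frame below is just for illustration):

```python
from PIL import Image, ImageDraw

def box_to_pixels(bbox, width, height):
    """Convert Rekognition's normalized BoundingBox to pixel corners."""
    left = int(bbox["Left"] * width)
    top = int(bbox["Top"] * height)
    right = int((bbox["Left"] + bbox["Width"]) * width)
    bottom = int((bbox["Top"] + bbox["Height"]) * height)
    return left, top, right, bottom

def highlight_face(frame, bbox, out_path):
    """Draw the bounding box on a PIL image and save it as a JPEG."""
    draw = ImageDraw.Draw(frame)
    draw.rectangle(box_to_pixels(bbox, *frame.size), outline="red", width=3)
    frame.save(out_path, "JPEG")

# BoundingBox values taken from the FaceSearchResponse in the question,
# applied to a dummy 640x480 frame stand-in.
bbox = {"Height": 0.41025642, "Width": 0.30769232,
        "Left": 0.45673078, "Top": 0.23397435}
frame = Image.new("RGB", (640, 480))
highlight_face(frame, bbox, "face.jpg")
```

The same scaling logic applies in Java with `java.awt.Graphics2D.drawRect` on a `BufferedImage`, which is essentially what the example above does via JFrame.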
Upvotes: 1
Reputation: 269550
There is no automated facility to modify the video data based on detected faces. You would need to write an application that retrieves the video from the Kinesis Video Stream, extracts the frame matching the timestamp in the Rekognition output, draws the bounding box on it, and saves the result (e.g. as a JPEG).
Upvotes: 0