Reputation: 1357
I know this might be a relatively generic question, but I'm hoping to get pointed in the right direction...
I'm trying to build a live face recognition app using AWS Rekognition. I'm pretty comfortable with the API and with using static images uploaded to S3 to perform facial recognition. However, I'm trying to find a way to stream live data into Rekognition. After reading the various articles and documentation that Amazon makes available, I found the general process, but I can't seem to get over one hurdle.
According to the docs, I can use Kinesis Video Streams to accomplish this. Seems pretty simple: create a Kinesis video stream and have Rekognition process the stream. The producer pushes the stream data into the Kinesis video stream and I'm golden.
The problem I have is the producer. AWS has a Java producer library available (https://docs.aws.amazon.com/kinesisvideostreams/latest/dg/producer-sdk-javaapi.html). Great... seems simple enough, but how do I use that producer to capture the stream from my webcam and send the bytes off to Kinesis? The sample code that AWS provides actually uses static images from a directory; there's no code showing how to integrate it with an actual live source like a webcam.
Ideally, I could load my camera as an input source and start streaming, but I can't seem to find any documentation on how to do this.
Any help or direction would be greatly appreciated.
Upvotes: 3
Views: 8640
Reputation: 168
As other answers have noted, you can use the GStreamer sample C++ app directly, but it's probably easier to use the GStreamer plugin (kvssink) they have made, which also allows streaming from an RTMP stream. If you don't want the hassle of compiling and setting up producer code, there are also out-of-the-box solutions where the producer that sends video to Kinesis Video Streams runs on the device itself, such as KVStreamer. Although that does require an AXIS device, it comes with a graphical user interface, which may help you get started quickly.
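For example, once the kvssink plugin from the producer SDK is built and on your GST_PLUGIN_PATH, a Linux webcam can be pushed to a stream with a pipeline along these lines (the stream name, region, and credentials are placeholders you'd replace with your own):

    gst-launch-1.0 v4l2src device=/dev/video0 \
        ! videoconvert \
        ! video/x-raw,format=I420,width=640,height=480,framerate=30/1 \
        ! x264enc bframes=0 key-int-max=45 tune=zerolatency \
        ! video/x-h264,stream-format=avc,alignment=au \
        ! kvssink stream-name="my-kvs-stream" storage-size=512 \
            access-key="YOUR_ACCESS_KEY" secret-key="YOUR_SECRET_KEY" aws-region="us-west-2"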
Upvotes: -1
Reputation: 31
At the moment, to use AWS Rekognition with a live-stream camera, you must set up an Amazon Kinesis video stream and an Amazon Kinesis data stream as described here: https://docs.aws.amazon.com/rekognition/latest/dg/recognize-faces-in-a-video-stream.html
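Once both streams exist, you wire them together with a Rekognition stream processor. A minimal sketch with the AWS SDK for Java v1 looks like this; every name, ARN, and the collection ID below is a placeholder for your own resources:

    import com.amazonaws.services.rekognition.AmazonRekognition;
    import com.amazonaws.services.rekognition.AmazonRekognitionClientBuilder;
    import com.amazonaws.services.rekognition.model.*;

    public class StreamProcessorSetup {
        public static void main(String[] args) {
            AmazonRekognition rekognition = AmazonRekognitionClientBuilder.defaultClient();

            // Wire the Kinesis video stream (input) to the Kinesis data stream (output).
            rekognition.createStreamProcessor(new CreateStreamProcessorRequest()
                    .withName("my-face-processor")
                    .withInput(new StreamProcessorInput().withKinesisVideoStream(
                            new KinesisVideoStream().withArn("arn:aws:kinesisvideo:...")))   // placeholder
                    .withOutput(new StreamProcessorOutput().withKinesisDataStream(
                            new KinesisDataStream().withArn("arn:aws:kinesis:...")))          // placeholder
                    .withRoleArn("arn:aws:iam::123456789012:role/RekognitionKinesisRole")     // placeholder
                    .withSettings(new StreamProcessorSettings().withFaceSearch(
                            new FaceSearchSettings()
                                    .withCollectionId("my-face-collection")                   // placeholder
                                    .withFaceMatchThreshold(85f))));

            // Start consuming fragments from the video stream.
            rekognition.startStreamProcessor(
                    new StartStreamProcessorRequest().withName("my-face-processor"));
        }
    }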
After that, you have to use the PutMedia API to send live-stream frames to the Kinesis video stream. Rekognition will then use this as its input and, after processing, send the output to the Kinesis data stream, so you read the results back from the Kinesis data stream.
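A rough sketch of a PutMedia call, adapted from the PutMedia example in the Kinesis Video Streams docs, looks like the following. The stream name and region are placeholders, and the payload here is a pre-recorded MKV file; a live producer would feed a continuous H.264-in-MKV stream instead:

    import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
    import com.amazonaws.services.kinesisvideo.*;
    import com.amazonaws.services.kinesisvideo.model.*;

    import java.io.FileInputStream;
    import java.net.URI;
    import java.util.Date;
    import java.util.concurrent.CountDownLatch;

    public class PutMediaSketch {
        public static void main(String[] args) throws Exception {
            // 1. Ask the control-plane client for the PutMedia endpoint of the stream.
            AmazonKinesisVideo control = AmazonKinesisVideoClientBuilder.standard()
                    .withRegion("us-west-2")              // placeholder region
                    .build();
            String endpoint = control.getDataEndpoint(new GetDataEndpointRequest()
                    .withStreamName("my-kvs-stream")      // placeholder stream name
                    .withAPIName(APIName.PUT_MEDIA)).getDataEndpoint();

            // 2. Build the dedicated PutMedia data-plane client.
            AmazonKinesisVideoPutMedia putMedia = AmazonKinesisVideoPutMediaClientBuilder.builder()
                    .withEndpoint(URI.create(endpoint))
                    .withRegion("us-west-2")
                    .withCredentials(new DefaultAWSCredentialsProviderChain())
                    .build();

            // 3. Stream an MKV payload and wait for the acks.
            final CountDownLatch done = new CountDownLatch(1);
            putMedia.putMedia(new PutMediaRequest()
                            .withStreamName("my-kvs-stream")
                            .withFragmentTimecodeType(FragmentTimecodeType.RELATIVE)
                            .withProducerStartTimestamp(new Date())
                            .withPayload(new FileInputStream("clip.mkv")),   // H.264 in MKV
                    new PutMediaAckResponseHandler() {
                        @Override public void onAckEvent(AckEvent event) {
                            System.out.println("ack: " + event);
                        }
                        @Override public void onFailure(Throwable t) {
                            t.printStackTrace();
                            done.countDown();
                        }
                        @Override public void onComplete() {
                            done.countDown();
                        }
                    });
            done.await();
            putMedia.close();
        }
    }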
All the steps are fairly easy, but you might run into trouble with the PutMedia API. I can't find any documentation on how to achieve it, but you can use this source code as a reference. It creates a live video stream from your webcam/USB cam using a MediaSource, not PutMedia. You can start with it and make some changes to use PutMedia instead of MediaSource: https://github.com/backdoorcodr/amazon-kinesis-video-streams-producer-sdk-java
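For orientation, this is roughly how the producer SDK's demo (DemoAppMain in that repo) registers a MediaSource; exact class names and signatures may differ between SDK versions, and the directory and filename format are placeholders:

    import com.amazonaws.kinesisvideo.client.KinesisVideoClient;
    import com.amazonaws.kinesisvideo.demoapp.auth.AuthHelper;
    import com.amazonaws.kinesisvideo.java.client.KinesisVideoJavaClientFactory;
    import com.amazonaws.kinesisvideo.java.mediasource.file.ImageFileMediaSource;
    import com.amazonaws.kinesisvideo.java.mediasource.file.ImageFileMediaSourceConfiguration;
    import com.amazonaws.regions.Regions;

    public class ProducerSketch {
        public static void main(String[] args) throws Exception {
            // High-level producer client for the target region.
            KinesisVideoClient client = KinesisVideoJavaClientFactory
                    .createKinesisVideoClient(Regions.US_WEST_2,
                            AuthHelper.getSystemPropertiesCredentialsProvider());

            // The demo's MediaSource reads frames from a directory; a webcam
            // producer would swap in a MediaSource that captures frames live.
            ImageFileMediaSourceConfiguration configuration =
                    new ImageFileMediaSourceConfiguration.Builder()
                            .fps(25)
                            .dir("src/main/resources/data/h264/")   // placeholder frame directory
                            .filenameFormat("frame-%03d.h264")      // placeholder filename pattern
                            .build();
            ImageFileMediaSource mediaSource = new ImageFileMediaSource("my-kvs-stream");
            mediaSource.configure(configuration);

            client.registerMediaSource(mediaSource);
            mediaSource.start();
        }
    }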
I'm doing the same thing, but it's taking time because I'm just a Java newbie. Hope it can help.
Upvotes: 3
Reputation: 344
You can use the GStreamer sample app, which uses a webcam or any camera attached to your machine as input to ingest video into Kinesis Video Streams. Currently the sample application can be run on macOS, Ubuntu, or a Raspberry Pi. You can also use the Android sample app to ingest video from an Android device.
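If you've built the C++ producer SDK with the sample apps enabled, running it is roughly this (the stream name, region, and credentials are your own):

    # Assumes the C++ producer SDK and its samples have been built.
    export AWS_ACCESS_KEY_ID=YOUR_ACCESS_KEY
    export AWS_SECRET_ACCESS_KEY=YOUR_SECRET_KEY
    export AWS_DEFAULT_REGION=us-west-2

    # Captures from the default attached camera and pushes to the named stream.
    ./kinesis_video_gstreamer_sample_app my-kvs-stream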
Also, for the AWS Rekognition integration with Kinesis Video Streams, please check out the sample published in the Consumer Parser library. This example shows how to ingest a video file (which you can replace with a real-time producer like the GStreamer sample app above), retrieve the data, parse the MKV, decode the H264 frames, integrate with the Rekognition JSON output, and draw bounding boxes around the faces detected in the video frames.
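On the output side, the Rekognition results land in an ordinary Kinesis data stream as JSON records, so a minimal polling consumer looks something like the sketch below. The stream name, region, and shard ID are placeholders; the parser library's KinesisVideoRekognitionIntegrationExample shows the full version with frame decoding and bounding boxes:

    import com.amazonaws.services.kinesis.AmazonKinesis;
    import com.amazonaws.services.kinesis.AmazonKinesisClientBuilder;
    import com.amazonaws.services.kinesis.model.GetRecordsRequest;
    import com.amazonaws.services.kinesis.model.GetRecordsResult;
    import com.amazonaws.services.kinesis.model.Record;

    import java.nio.charset.StandardCharsets;

    public class RekognitionOutputReader {
        public static void main(String[] args) throws Exception {
            AmazonKinesis kinesis = AmazonKinesisClientBuilder.standard()
                    .withRegion("us-west-2")   // placeholder region
                    .build();

            // Placeholder stream/shard; production code would enumerate shards
            // (or use the Kinesis Client Library) instead of hard-coding one.
            String shardIterator = kinesis.getShardIterator(
                    "my-rekognition-output", "shardId-000000000000", "LATEST")
                    .getShardIterator();

            while (shardIterator != null) {
                GetRecordsResult result = kinesis.getRecords(
                        new GetRecordsRequest().withShardIterator(shardIterator).withLimit(100));
                for (Record record : result.getRecords()) {
                    // Each record is a face search JSON document with matched faces,
                    // bounding boxes, and fragment timestamps.
                    String json = new String(record.getData().array(), StandardCharsets.UTF_8);
                    System.out.println(json);
                }
                shardIterator = result.getNextShardIterator();
                Thread.sleep(1000);
            }
        }
    }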
Upvotes: 4