Bartłomiej Semańczyk

Reputation: 61834

How do I use a trained .tflite model to create a mask on the live-camera output?

I've started learning ML on iOS with Swift, and I now know a little about neural networks. I have a .tflite model that is well trained to recognize nails; the effect looks like this:

[screenshot: expected nail mask]

Now I need to create a mask on the live-camera output when

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {}

is called.
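For context, a minimal sketch of what per-frame inference could look like inside that delegate method, assuming a DeepLab-style model with a float input and a two-label output, and assuming the TensorFlowLiteSwift pod's `Interpreter` API. The names `interpreter`, `preprocess`, `renderMask`, and `overlay` are hypothetical placeholders, not part of the original code:

```swift
import AVFoundation
import TensorFlowLite // assumption: the TensorFlowLiteSwift pod

extension ScannerViewController: AVCaptureVideoDataOutputSampleBufferDelegate {

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

        // 1. Resize and normalize the frame to the model's input size
        //    (hypothetical helper; DeepLab commonly expects 257x257 RGB).
        let inputData: Data = preprocess(pixelBuffer)

        do {
            // 2. Copy the tensor in and run the model.
            try interpreter.copy(inputData, toInputAt: 0)
            try interpreter.invoke()

            // 3. Read per-pixel class scores and convert them into a mask image
            //    (hypothetical helper), then draw it on the main thread.
            let outputTensor = try interpreter.output(at: 0)
            let mask = renderMask(from: outputTensor.data)
            DispatchQueue.main.async { self.overlay(mask) }
        } catch {
            print("Inference failed: \(error)")
        }
    }
}
```

The key design point is that preprocessing must match exactly what the model was trained with (input size, channel order, normalization range); a mismatch there produces garbage masks even with a correct model.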

Currently, when I put the mask on the live-camera feed, the output looks like this:

[screenshot: incorrect mask overlay]

What may be wrong with my model, or with the way I interpret its output?

Here you can see my ScannerViewController, used to preview the mask, and my DeepLabModel.

EDIT 1:

If you have any other model that can replace my DeepLabModel, I would also be happy with that. Something here is wrong, and I don't know what.

EDIT 2:

I also suspect that the pod used in DeepLabModel may be the wrong one:

pod 'TensorFlowLiteGpuExperimental'
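If that pod turns out to be the problem, one option (an assumption on my part, not a confirmed fix for this case) is to switch to the maintained TensorFlowLiteSwift pod, which exposes the GPU (Metal) delegate as a subspec. A Podfile sketch; the target name is illustrative:

```ruby
# Podfile sketch — maintained TensorFlow Lite pods instead of the
# experimental GPU pod; 'YourApp' is a placeholder target name.
target 'YourApp' do
  use_frameworks!
  pod 'TensorFlowLiteSwift'        # core Swift API (Interpreter, Tensor)
  pod 'TensorFlowLiteSwift/Metal'  # GPU delegate backed by Metal
end
```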

Upvotes: 0

Views: 380

Answers (1)

Farmaker

Reputation: 2790

After analyzing your .tflite file hosted at the link above, I can say that it is well structured and it outputs the 2 labels you want, BUT it is not fully trained. Here are 3 pictures of the results after inference on an Android phone.

Picture 1 Picture 2 Picture 3

So there is nothing wrong with your code... the .tflite file is simply not producing good results!

My advice is to continue training it with more pictures of hands and nails. I would recommend over 300 pictures with masks of different hands and nails, and about 30,000 epochs using DeepLab.

If you need a tool to help you with creating masks, use this.

You can always search Google or Kaggle for datasets to increase the number of images you are using.

If you need more info or anything else you can tag me!

Happy coding!

Upvotes: 1
