Reputation: 1
I am new to React and building a page/component for uploading an image and running it through face-api.js to show face landmark points. I am trying to get these points into an array to be used in some other process later on.
The webcam version works; however, when I tried to build a photo-based version, I got stuck on the error below.
The current error I get:
toNetInput.ts:39
Uncaught (in promise) Error: toNetInput - expected media to be of type HTMLImageElement | HTMLVideoElement | HTMLCanvasElement | tf.Tensor3D, or to be an element id
I also tried creating the input from an image element in the DOM, but that didn't work either. I feel like I am missing something very simple but am too blind to see it...
import React, { useRef, useEffect, useState } from "react";
import * as faceapi from 'face-api.js';
// this one was for previous tests with an img html element
import * as inputImage from '../assets/images/sah.png';

function CompName() {
  loadModels();
  const [file, setFile] = useState([]);
  const [canvas, setcanvas] = useState([]);

  function fileChangeHandler(event) {
    setFile(event);
  }

  useEffect(() => {
    setcanvas(faceapi.createCanvas(file));
    console.log('file 1 ', file.size);
    document.body.append(canvas);
    if (file.size > 0) {
      detectF();
      console.log('file ', file);
    }
  }, [file]);

  async function loadModels() {
    Promise.all([
      await faceapi.nets.tinyFaceDetector.loadFromUri('/models'),
      await faceapi.nets.ssdMobilenetv1.loadFromUri('/models'),
      await faceapi.nets.faceLandmark68Net.loadFromUri('/models'),
      await faceapi.nets.faceRecognitionNet.loadFromUri('/models'),
      await faceapi.nets.faceExpressionNet.loadFromUri('/models')
    ]).then((values) => {
      console.log('models loaded');
    });
  }

  const displaySize = {
    width: file.width,
    height: file.height
  };
  faceapi.matchDimensions(canvas, displaySize);

  async function detectF() {
    const detections = await faceapi.detectAllFaces(file, new faceapi.TinyFaceDetectorOptions()).withFaceLandmarks();
    const resizedDetections = faceapi.resizeResults(detections, displaySize);
    // faceapi.draw.drawDetections(canvas, resizedDetections)
    faceapi.draw.drawFaceLandmarks(canvas, resizedDetections);
    // faceapi.draw.drawFaceExpressions(canvas, resizedDetections)
    const landmarkPositions = resizedDetections[0].landmarks.positions;
    console.log({ detections });
  }

  return (
    <>
      <div>
        <input type="file" onChange={(e) => fileChangeHandler(e.target.files[0])} />
        {/* <img src={inputImage} onLoad={handleImageLoad} /> */}
      </div>
    </>
  );
}

export default CompName;
Any help is valuable, thanks in advance.
Here are the sources I've gone through that could be helpful:
Trying to detect faces on image using face-api.js. Getting error: Unhandled Rejection (Error): createCanvasFromMedia - media has not finished loading yet
face-api.js load image file from disk
https://github.com/justadudewhohacks/face-api.js
Upvotes: 0
Views: 825
Reputation: 341
First of all, you have to use the face-api.js buffer function to pass the uploaded file as an image:
const img = await faceapi.bufferToImage(file);
Then remove the Promise.all/await mix and set a state flag that lets us know when the models are loaded. Once the models are loaded, we can give the user the option to upload an image.
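A minimal sketch of that flow could look like the following (the component name, the /models path, and the loaded/handler names are placeholders, not from the original code):

```javascript
// Sketch only: assumes the face-api.js model files are served from /models.
import React, { useEffect, useState } from "react";
import * as faceapi from "face-api.js";

function FaceLandmarks() {
  // State flag flipped once the models have finished loading.
  const [modelsLoaded, setModelsLoaded] = useState(false);

  // Load the models once on mount; no await inside Promise.all.
  useEffect(() => {
    Promise.all([
      faceapi.nets.tinyFaceDetector.loadFromUri("/models"),
      faceapi.nets.faceLandmark68Net.loadFromUri("/models"),
    ]).then(() => setModelsLoaded(true));
  }, []);

  async function handleFile(file) {
    // bufferToImage turns the uploaded File/Blob into an HTMLImageElement,
    // which is one of the input types detectAllFaces accepts.
    const img = await faceapi.bufferToImage(file);
    const detections = await faceapi
      .detectAllFaces(img, new faceapi.TinyFaceDetectorOptions())
      .withFaceLandmarks();
    // The 68 landmark points of the first detected face.
    console.log(detections[0]?.landmarks.positions);
  }

  // Only show the file input after the models are ready.
  return modelsLoaded
    ? <input type="file" onChange={(e) => handleFile(e.target.files[0])} />
    : <p>Loading models...</p>;
}

export default FaceLandmarks;
```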
Here is the sandbox link: https://codesandbox.io/s/inspiring-germain-vlyhc0?file=/src/App.js
Let me know if you need further help.
Upvotes: 0