suvodipMondal

Reputation: 878

how to detect a face with react-native-camera facedetector?

I am trying to detect faces with react-native-camera. I want to know how we can detect an individual's face; there is no proper documentation about the ML Kit integration.

await FaceDetector.detectFacesAsync(data.uri) just returns face objects like this:

face[0] = {
  bounds: { origin: { x: 739, y: 987 }, size: { x: 806, y: 789 } },
  faceID: 0,
  rollAngle: 10.533509254455566,
  yawAngle: 0.7682874798774719
}

This is just the face's position. I cannot figure out how to detect individual facial features like eyes and nose with the FaceDetector. And suppose I save person A's face data: how can I later match that data against A's face with react-native-camera?
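(For reference: react-native-camera can report eye/nose landmarks and classifications when the corresponding face-detection props are enabled on the `RNCamera` component. A sketch based on the library's documented props; verify the constant names against your installed version:)

```jsx
// With these props set, each face passed to onFacesDetected gains fields
// such as leftEyePosition, rightEyePosition, noseBasePosition and
// smilingProbability (availability varies by platform and ML Kit version).
<RNCamera
  type={RNCamera.Constants.Type.front}
  faceDetectionMode={RNCamera.Constants.FaceDetection.Mode.accurate}
  faceDetectionLandmarks={RNCamera.Constants.FaceDetection.Landmarks.all}
  faceDetectionClassifications={RNCamera.Constants.FaceDetection.Classifications.all}
  onFacesDetected={({ faces }) => {
    faces.forEach((face) =>
      console.log(face.leftEyePosition, face.noseBasePosition),
    );
  }}
/>
```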

Upvotes: 1

Views: 5139

Answers (2)

patel jigar

Reputation: 52

import React, { useEffect, createRef, useState } from 'react';
import { SafeAreaView, View, Image, StyleSheet, Text, Modal, TouchableOpacity, Alert } from 'react-native';
import { RNCamera } from 'react-native-camera';
import SplashScreen from 'react-native-splash-screen';


const Test = (props) => {

  useEffect(() => {
    SplashScreen.hide();
  }, []); // run once on mount, not on every render


  const [faces, setFace] = useState([]);
  const [faceavl, setFaceavl] = useState(false);
  const [takeTimeFaceAvl, setTakeTimeFaceAvl] = useState(false);
  const [searchWaiting, setsearchWaiting] = useState(null)
  const [modalVisible, setModalVisible] = useState(false);
  const [image, setImage] = useState(null);


  const mycamera = createRef()


  const PendingView = () => (
    <View
      style={{
        flex: 1,
        backgroundColor: 'lightgreen',
        justifyContent: 'center',
        alignItems: 'center',
      }}
    >
      <Text>Waiting</Text>
    </View>
  );


  const renderFaces = () => (
    <View style={{
      position: 'absolute',
      bottom: 0,
      right: 0,
      left: 0,
      top: 0,
    }} pointerEvents="none">
      {faces.map(renderFace)}
    </View>
  );

  const renderFace = ({ bounds, faceID, rollAngle, yawAngle }) => (
    <View
      key={faceID}
      style={[
        {
          padding: 10,
          borderWidth: 1,
          borderRadius: 2,
          position: 'absolute',
          borderColor: '#000',
          justifyContent: 'center',
        },
        {
          ...bounds.size,
          left: bounds.origin.x,
          top: bounds.origin.y,
          // transform must live in style; as a plain View prop it is ignored
          transform: [
            { perspective: 600 },
            { rotateZ: `${rollAngle.toFixed(0)}deg` },
            { rotateY: `${yawAngle.toFixed(0)}deg` },
          ],
        },
      ]}
    />
  );


  return (
    <>
      <SafeAreaView style={styles.container}>

        <RNCamera
          ref={mycamera}

          style={styles.preview}
          type={RNCamera.Constants.Type.front}
          flashMode={RNCamera.Constants.FlashMode.on}
          androidCameraPermissionOptions={{
            title: 'Permission to use camera',
            message: 'We need your permission to use your camera',
            buttonPositive: 'Ok',
            buttonNegative: 'Cancel',
          }}
          androidRecordAudioPermissionOptions={{
            title: 'Permission to use audio recording',
            message: 'We need your permission to use your audio',
            buttonPositive: 'Ok',
            buttonNegative: 'Cancel',
          }}

          onFacesDetected={(data) => {
            setFace(data.faces)
            setFaceavl(true);
            clearTimeout(searchWaiting)
            // clear the face overlay if no new detection arrives within 500 ms
            const avc = setTimeout(() => {
              setFaceavl(false);
              setFace([]);
            }, 500);
            setsearchWaiting(avc);
          }}
          onFaceDetectionError={(error) => {
            console.log('face--detact-->', error)
          }}


        >
          {({ camera, status, recordAudioPermissionStatus }) => {
            if (status !== 'READY') return <PendingView />;
            return (
              <View style={{ flex: 0, flexDirection: 'row', justifyContent: 'center' }}>
                <TouchableOpacity onPress={async () => {
                  const options = { quality: 0.5, base64: true };
                  const data = await camera.takePictureAsync(options)
                  if (faceavl) {
                    setTakeTimeFaceAvl(true)
                  } else {
                    setTakeTimeFaceAvl(false)
                  }
                  console.log(data.uri)
                  setImage(data)
                  setModalVisible(!modalVisible)
                }} style={styles.capture}>
                  <Text style={{ fontSize: 14 }}> SNAP </Text>
                </TouchableOpacity>
              </View>
            );
          }}

        </RNCamera>
        {faces.length > 0 ? renderFaces() : null}
      </SafeAreaView>


      <Modal
        animationType="slide"
        transparent={true}
        visible={modalVisible}
        onRequestClose={() => {
          Alert.alert("Modal has been closed.");
          setModalVisible(!modalVisible);
        }}
      >
        <View style={styles.centeredView}>
          <View style={styles.modalView}>
            {takeTimeFaceAvl ? image ? <Image
              style={{
                width: 200,
                height: 100,
              }}
              source={{
                uri: image.uri,
              }}
            /> : null : <Text>Face not found</Text>}
            <TouchableOpacity
              style={[styles.button, styles.buttonClose]}
              onPress={() => setModalVisible(!modalVisible)}
            >
              <Text style={styles.textStyle}>Hide Modal</Text>
            </TouchableOpacity>
          </View>
        </View>
      </Modal>

    </>
  );
}
const styles = StyleSheet.create({
  container: {
    flex: 1,
    flexDirection: 'column',
    backgroundColor: 'black',
  },
  item: {
    backgroundColor: '#FFF',
  },
  viewOne: {
    flexDirection: 'row'
  },
  viewTwo: {
    alignItems: 'flex-end', marginEnd: 9
  },
  title: {
    fontSize: 16, // Semibold #000000
    color: '#000000',
  },
  noOff: {
    color: '#D65D35',
    fontSize: 20,  // Semibold
  }, product: {
    color: '#A6A6A6',
    fontSize: 16,  // Regular
  }, titleView: {
    flex: 1,
    alignSelf: 'center',
    marginStart: 14,
    marginEnd: 14,
  },
  centeredView: {
    flex: 1,
    justifyContent: "center",
    alignItems: "center",
    marginTop: 22
  },
  modalView: {
    margin: 20,
    backgroundColor: "white",
    borderRadius: 20,
    padding: 10,
    alignItems: "center",
    shadowColor: "#000",
    shadowOffset: {
      width: 0,
      height: 2
    },
    shadowOpacity: 0.25,
    shadowRadius: 4,
    elevation: 5
  },
  button: {
    borderRadius: 20,
    padding: 10,
    elevation: 2
  },
  buttonOpen: {
    backgroundColor: "#F194FF",
  },
  buttonClose: {
    backgroundColor: "#2196F3",
  },
  textStyle: {
    color: "white",
    fontWeight: "bold",
    textAlign: "center"
  },
  modalText: {
    marginBottom: 15,
    textAlign: "center"
  },

  preview: {
    flex: 1,
    justifyContent: 'flex-end',
    alignItems: 'center',
  },
  capture: {
    flex: 0,
    backgroundColor: '#fff',
    borderRadius: 5,
    padding: 15,
    paddingHorizontal: 20,
    alignSelf: 'center',
    margin: 20,
  },
});

Upvotes: 0

Chrisito

Reputation: 524

ML Kit does not support Face Recognition. Also, React Native is not officially supported (yet), but you could check out https://rnfirebase.io/ml-vision/face-detection#process which outlines how you can get a 133-point contour of the face. However, this is not meant for facial recognition, but rather for overlays (e.g. masks, filters).
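(Editor's illustration of why landmark coordinates alone are not a biometric: the naive comparison below just averages the pixel distance between corresponding landmarks of two detected faces. The field names match react-native-camera's landmark output, but the function itself is a hypothetical sketch; real recognition needs a face-embedding model, which ML Kit does not provide.)

```javascript
// NAIVE illustration only: landmark positions shift with pose, camera
// distance and expression, so this cannot identify a person reliably.
function landmarkDistance(
  faceA,
  faceB,
  keys = ['leftEyePosition', 'rightEyePosition', 'noseBasePosition'],
) {
  let sum = 0;
  for (const key of keys) {
    const a = faceA[key];
    const b = faceB[key];
    if (!a || !b) return Infinity; // landmark missing on either face
    sum += Math.hypot(a.x - b.x, a.y - b.y);
  }
  return sum / keys.length; // mean pixel distance across landmarks
}

const faceA = {
  leftEyePosition: { x: 100, y: 120 },
  rightEyePosition: { x: 160, y: 120 },
  noseBasePosition: { x: 130, y: 160 },
};
const faceB = {
  leftEyePosition: { x: 103, y: 124 },
  rightEyePosition: { x: 160, y: 120 },
  noseBasePosition: { x: 130, y: 160 },
};
console.log(landmarkDistance(faceA, faceB)); // ≈1.67 (mean pixel distance)
```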

Upvotes: 1
