JavaScript tutorial: Add face detection to your web app

Let’s add face detection to our React map explorer app using the pico.js JavaScript library

Last week we enhanced a map interface with voice commands using annyang. This week we’ll extend our multi-modal interface even further by adding simple head-tracking using pico.js. Pico.js is a minimal JavaScript library that is closer to a proof-of-concept than a production library, but it seems to work the best among the face detection libraries I’ve investigated.

The goal of this post is to start displaying the user’s head position overlaid on the map with a simple red dot:

[Screenshot: the map explorer with a red dot overlaid at the detected head position]

First we’ll create a simple React class wrapping the pico.js functionality that we can use to get position updates of a user's face:

<ReactPico onFaceFound={(face) => {this.setState({face})}} />

Then we can use the details of that face location to render a component, if there is a face detected:

{face && <FaceIndicator x={face.totalX} y={face.totalY} />}
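
The FaceIndicator component isn't anything pico-specific, so it isn't covered in detail here; a minimal sketch (the styling and sizing below are my assumptions, not code from the app) is just an absolutely positioned red dot centered on the coordinates it receives:

const FaceIndicator = ({ x, y }) => (
  // center a 20px red dot on the screen coordinates passed in
  <div
    style={{
      position: 'absolute',
      left: x - 10,
      top: y - 10,
      width: 20,
      height: 20,
      borderRadius: '50%',
      backgroundColor: 'red',
      pointerEvents: 'none',
    }}
  />
);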

Our first challenge with pico.js is that it is an implementation of a research project in JavaScript, not necessarily a production-ready library following modern JavaScript standards. Among other things, this means that you can’t yarn add picojs. And while the introduction to pico.js is an excellent primer on object detection, it reads more like a research paper and less like API docs. The examples provided are more than enough to put the code to use, though. I spent a few hours shoehorning the provided samples into a relatively simple React class we can reuse.
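
Since there’s no package to install, the practical approach is to copy pico.js (and its companion camvas.js, which shows up in a moment) into the project and import them directly. Something along these lines works, with the file locations being an assumption about your project layout:

// assuming pico.js and camvas.js were copied into src/lib and tweaked
// to export what they define (the originals attach globals instead)
import pico from './lib/pico';
import camvas from './lib/camvas';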

The first thing that pico.js needs to do is load the cascade model, which involves making an AJAX call that pulls in a binary representation of a model pre-trained on faces. (You could use the same library to track other sorts of objects, but you would need to use the official pico implementation to train your custom model.) We can put that model loading code in our componentDidMount lifecycle method. I’ve further abstracted the sample code to another method called loadFaceFinder for clarity:

  componentDidMount() {
    this.loadFaceFinder();
  }

  loadFaceFinder() {
    const cascadeurl = 'https://raw.githubusercontent.com/nenadmarkus/pico/c2e81f9d23cc11d1a612fd21e4f9de0921a5d0d9/rnt/cascades/facefinder';
    fetch(cascadeurl).then((response) => {
      response.arrayBuffer().then((buffer) => {
        // unpack the pre-trained face cascade and keep it on state
        const bytes = new Int8Array(buffer);
        this.setState({
          faceFinder: pico.unpack_cascade(bytes)
        });
        // start streaming webcam frames to our canvas, calling
        // processVideo on every frame
        new camvas(this.canvasRef.current.getContext('2d'), this.processVideo);
      });
    });
  }

Beyond fetching and parsing the binary representation of the face detection model and setting it on state, we’re creating a new camvas, which takes a reference to a <canvas> context and a callback handler. The camvas library will load video from the user’s webcam onto the canvas and call the handler for each frame rendered. The contents of loadFaceFinder are almost an identical copy of the reference project pico.js provides. We change where we store the model so it’s accessible on state, and we reference our canvas context via a React ref instead of using the browser-provided DOM APIs.
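
For context, here is roughly the class these methods live in. This is a sketch rather than a verbatim copy of my component: the canvas is what the webcam frames get drawn onto, and since we only care about the pixel data it can stay hidden.

class ReactPico extends React.Component {
  state = { faceFinder: null };
  canvasRef = React.createRef();

  componentDidMount() {
    this.loadFaceFinder();
  }

  // loadFaceFinder and processVideo as shown in this post...

  render() {
    // the canvas doesn't need to be visible for drawImage/getImageData
    // to work, so we keep it out of the way
    return (
      <canvas
        ref={this.canvasRef}
        width={640}
        height={480}
        style={{ display: 'none' }}
      />
    );
  }
}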

Our processVideo method is also nearly identical to the code provided in the reference project. It needs just a few changes. We only want to execute the detection code once the model is loaded, so we add a check around the full body of the method. I’ve also designed this React class around a callback handler we expect the user to pass in, so we’ll only run the processing code if that handler is defined:

  processVideo = (video, dt) => {
    // only run detection once the model has loaded and the consumer
    // has passed in a callback to report faces to
    if(this.state.faceFinder && this.props.onFaceFound) {
      /* all the code */
    }
  }
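
If you’re curious what /* all the code */ stands in for, it tracks the pico.js sample very closely: draw the current frame onto the canvas, convert it to grayscale, run the cascade, and cluster the raw detections. Here’s a rough sketch of the full method along those lines, using the same 640x480 canvas and detection parameters as the reference demo; the grayscale helper is lifted from the demo (I’ve renamed it rgbaToGrayscale), and I’ve left out the demo’s frame-to-frame smoothing for brevity:

  // grayscale conversion from the pico.js demo: pico expects a
  // single-channel Uint8Array of pixel intensities
  rgbaToGrayscale = (rgba, nrows, ncols) => {
    const gray = new Uint8Array(nrows * ncols);
    for (let r = 0; r < nrows; ++r) {
      for (let c = 0; c < ncols; ++c) {
        // integer-weighted average of the R, G, and B channels
        gray[r * ncols + c] =
          (2 * rgba[r * 4 * ncols + 4 * c] +
            7 * rgba[r * 4 * ncols + 4 * c + 1] +
            rgba[r * 4 * ncols + 4 * c + 2]) / 10;
      }
    }
    return gray;
  };

  processVideo = (video, dt) => {
    if(this.state.faceFinder && this.props.onFaceFound) {
      const ctx = this.canvasRef.current.getContext('2d');
      // draw the current webcam frame and grab its pixels
      ctx.drawImage(video, 0, 0);
      const rgba = ctx.getImageData(0, 0, 640, 480).data;
      const image = {
        pixels: this.rgbaToGrayscale(rgba, 480, 640),
        nrows: 480,
        ncols: 640,
        ldim: 640,
      };
      // detection parameters straight from the reference demo
      const params = {
        shiftfactor: 0.1,
        minsize: 100,
        maxsize: 1000,
        scalefactor: 1.1,
      };
      // run the unpacked cascade over the frame, then merge
      // overlapping detections
      let dets = pico.run_cascade(image, this.state.faceFinder, params);
      dets = pico.cluster_detections(dets, 0.2);
      for (let i = 0; i < dets.length; ++i) {
        // dets[i] is [row, column, scale, score]; only report
        // confident detections
        if (dets[i][3] > 50.0) {
          // ...call this.props.onFaceFound with the values below
        }
      }
    }
  }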

The only other change I’ve made is what we do when a face is found. The pico.js example draws some circles on a canvas, but we'll want to instead pass data back to our callback handler. Let’s modify the code a bit so it’s easier for our callback handler to deal with the values:

          // 640x480 is the size of the canvas the webcam frames are
          // drawn to; the x value is flipped so the overlay tracks
          // the user like a mirror
          this.props.onFaceFound({
            x: 640 - dets[i][1],
            y: dets[i][0],
            radius: dets[i][2],
            xRatio: (640 - dets[i][1]) / 640,
            yRatio: dets[i][0] / 480,
            totalX: (640 - dets[i][1]) / 640 * window.innerWidth,
            totalY: dets[i][0] / 480 * window.innerHeight,
          });

This format lets us pass back the absolute position and radius of the face within the captured canvas element, the relative position of the face within that element, and the position of the face mapped to the total screen size. With that, our custom class is basically complete. I also needed to make a few small changes to pico.js and pico’s version of camvas.js to get them working with modern syntax, but those changes were more about keywords than logic.

After playing around with some other face detection libraries, I was pleasantly surprised at how accurate and usable pico.js was, even though it is not a full-fledged library. Now we can import our custom ReactPico class into our App, render it, and conditionally render our FaceIndicator component if we’ve detected a face.
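
Wired together, the relevant part of App ends up looking something like this. MapContainer is just a stand-in for whatever renders the map in your app, and the exact structure is a sketch rather than the literal component:

class App extends React.Component {
  state = { face: null };

  render() {
    const { face } = this.state;
    return (
      <div>
        {/* the map interface from the earlier posts in this series */}
        <MapContainer />
        {/* ReactPico renders its own hidden canvas and reports faces */}
        <ReactPico onFaceFound={(face) => this.setState({ face })} />
        {/* only render the indicator once a face has been detected */}
        {face && <FaceIndicator x={face.totalX} y={face.totalY} />}
      </div>
    );
  }
}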

Next week, we’ll take our shoehorned ReactPico class, refactor it to be more idiomatic, and publish it as an npm package so everyone can easily add face detection to their React apps. As always, the code for this week is available on GitHub and you can continue the conversation on Twitter: @freethejazz
