<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/"
    xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/" version="2.0">
    <channel>
        
        <title>
            <![CDATA[ facial recognition - freeCodeCamp.org ]]>
        </title>
        <description>
            <![CDATA[ Browse thousands of programming tutorials written by experts. Learn Web Development, Data Science, DevOps, Security, and get developer career advice. ]]>
        </description>
        <link>https://www.freecodecamp.org/news/</link>
        <image>
            <url>https://cdn.freecodecamp.org/universal/favicons/favicon.png</url>
            <title>
                <![CDATA[ facial recognition - freeCodeCamp.org ]]>
            </title>
            <link>https://www.freecodecamp.org/news/</link>
        </image>
        <generator>Eleventy</generator>
        <lastBuildDate>Sun, 10 May 2026 16:28:29 +0000</lastBuildDate>
        <atom:link href="https://www.freecodecamp.org/news/tag/facial-recognition/rss.xml" rel="self" type="application/rss+xml" />
        <ttl>60</ttl>
        
            <item>
                <title>
                    <![CDATA[ How to Integrate Facial Recognition Authentication in a Social App with Face API ]]>
                </title>
                <description>
                    <![CDATA[ Social applications have evolved over the years, and there is a major need for secure methods to authenticate users' identities. Integrating multifactor authentication capabilities into applications is crucial for strengthening their integrity. In so... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/integrate-facial-recognition-authentication-in-a-social-application/</link>
                <guid isPermaLink="false">68d20d7a6bd072175081e6b2</guid>
                
                    <category>
                        <![CDATA[ React ]]>
                    </category>
                
                    <category>
                        <![CDATA[ authentication ]]>
                    </category>
                
                    <category>
                        <![CDATA[ facial recognition ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ Oluwatobi ]]>
                </dc:creator>
                <pubDate>Tue, 23 Sep 2025 03:01:14 +0000</pubDate>
                <media:content url="https://cdn.hashnode.com/res/hashnode/image/upload/v1758208687476/3ca6b95d-55c8-4bb6-a4aa-580409e1608f.png" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>Social applications have evolved over the years, and there is a major need for secure methods to authenticate users' identities.</p>
<p>Integrating multifactor authentication capabilities into applications is crucial for strengthening their integrity. In social apps, authentication mechanisms prevent unwanted access to personal information exchanged between two parties. Facial authentication is not entirely new, as most devices have it built in as a security measure. It offers stronger protection than many traditional methods, especially against risks like phishing, brute-force attacks, and account takeover.</p>
<h2 id="heading-outline">Outline</h2>
<ul>
<li><p><a class="post-section-overview" href="#heading-what-to-expect">What to expect</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-prerequisites">Prerequisites</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-a-brief-intro-to-the-face-api-tool">A Brief Intro to the Face API tool</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-project-setup">Project Setup</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-demo-project-integrating-facial-recognition-and-authentication">Demo Project: Integrating Facial Recognition and Authentication</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-additional-information-and-tips">Additional Information and Tips</a></p>
</li>
<li><p><a class="post-section-overview" href="#heading-conclusion">Conclusion</a></p>
</li>
</ul>
<h2 id="heading-what-to-expect">What to Expect</h2>
<p>In this article, I’ll walk you through creating a multi-factor authentication system for a chat application powered by <a target="_blank" href="https://getstream.io">Stream.io</a>, and ensuring efficient user face ID authentication to allow only authorized access to your app. I will illustrate all these with relevant code examples.</p>
<h2 id="heading-prerequisites">Prerequisites</h2>
<p>Here are the necessary prerequisites to follow along with this tutorial:</p>
<ul>
<li><p>Intermediate knowledge of Node.js/Express for the backend aspect</p>
</li>
<li><p>Knowledge of React for the frontend aspect</p>
</li>
<li><p><a target="_blank" href="https://getstream.io">Stream.io</a> API key</p>
</li>
</ul>
<p>Before we get started, we’ll briefly highlight the facial authentication tool of choice: <a target="_blank" href="https://justadudewhohacks.github.io/face-api.js/docs/index.html">Face-Api.js</a>.</p>
<h2 id="heading-a-brief-intro-to-the-face-api-tool">A Brief Intro to the Face API tool</h2>
<p>Face-Api.js is a facial recognition package designed for integration with JavaScript-powered applications. It is built on top of TensorFlow.js and provides extensive facial detection based on pre-trained machine learning models.</p>
<p>In addition to all these features, it's friendly to use and can also be used locally with its predefined models. Here is a link to its <a target="_blank" href="https://justadudewhohacks.github.io/face-api.js/docs/index.html">documentation page</a>, which provides relevant code examples.</p>
<p>It provides features such as face detection, face capture, and face matching, which uses the <a target="_blank" href="https://en.wikipedia.org/wiki/Euclidean_distance">Euclidean distance</a> between face descriptors to make precise distinctions. We'll now set it up alongside our chat application project in the next section.</p>
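<p>To make the matching step concrete: a face descriptor is a 128-number vector, and two faces "match" when the Euclidean distance between their vectors falls below a chosen threshold (0.6 is a commonly used default with face-api.js). Here is a minimal sketch of that comparison, with <code>euclideanDistance</code> and <code>isMatchingFace</code> as illustrative helper names:</p>

```javascript
// Euclidean distance between two face descriptors (plain arrays here;
// face-api.js returns Float32Arrays, which behave the same way).
function euclideanDistance(a, b) {
  let sum = 0;
  for (let i = 0; i < a.length; i++) {
    const diff = a[i] - b[i];
    sum += diff * diff;
  }
  return Math.sqrt(sum);
}

// Two descriptors are treated as the same face when their distance
// is below the threshold. 0.6 is a common default for face-api.js.
function isMatchingFace(stored, captured, threshold = 0.6) {
  return euclideanDistance(stored, captured) < threshold;
}
```

<p>face-api.js also exposes a <code>faceapi.euclideanDistance</code> helper that performs the same computation, so in practice you rarely need to hand-roll it.</p>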
<h2 id="heading-project-setup">Project Setup</h2>
<p>As mentioned earlier, this is a full-stack project containing both the frontend and the backend aspects. In this section, we’ll set up both code bases before proceeding to the demo project section.</p>
<h3 id="heading-frontend">Frontend</h3>
<p>We will scaffold the frontend application using the Vite build tool.</p>
<pre><code class="lang-bash">npm create vite@latest
</code></pre>
<p>After creating the React application, install face-api.js with this command:</p>
<pre><code class="lang-bash">npm i face-api.js
</code></pre>
<p>This will install the <code>face-api.js</code> package and its required dependencies. You can then install Stream’s chat SDKs, which form the core of the project.</p>
<pre><code class="lang-bash">npm i stream-chat stream-chat-react
</code></pre>
<p>With that, the project scaffold is complete. To make local testing easier, we need to host the face models required by face-api.js locally. Here is a <a target="_blank" href="https://github.com/justadudewhohacks/face-api.js-models">link</a> to the models. Copy the models folder and paste it into the <code>public</code> folder of the project. Next, we’ll set up our backend project.</p>
<h3 id="heading-backend">Backend</h3>
<p>The backend stores user details and handles user authentication before granting access to the chat application. MongoDB will be the database of choice, and we will use the Express.js library to build the backend API. For ease of setup, clone this <a target="_blank" href="https://github.com/oluwatobi2001/stream-backend.git">code-base</a> and install its dependencies locally. It comes preloaded with the necessary installation files. For a smoother backend experience, you can use MongoDB <a target="_blank" href="https://www.mongodb.com/products/platform/atlas-database">Atlas</a> as the hosted database for storing user details. With that in place, we can begin the code project in the next section.</p>
<h2 id="heading-demo-project-integrating-facial-recognition-and-authentication">Demo Project: Integrating Facial Recognition and Authentication</h2>
<p>In this section, we will walk through setting up an authentication page on the frontend where a user can register their details: username, email, and password. The user is also required to take a snapshot of their face, and Face API is called to detect a face in the image. They cannot proceed until detection succeeds.</p>
<p>Thereafter, a face descriptor is generated: a unique vector representation of the user’s face computed by the machine learning models. This value is securely stored in the MongoDB database via the Express.js backend after successful registration. The application uses a multifactor authentication system that combines password-based authentication with facial authentication.</p>
<p>When the first hurdle (password authentication) is cleared, the user is then required to perform a face match against the face descriptor stored at registration. The comparison uses the Euclidean distance between the two descriptors, measured against a threshold we provide. If the distance is within the threshold, the faces are considered a match and the user gets access to the chat application; otherwise, access to the Stream.io-powered chat application is denied. Relevant source code snippets highlighting these steps will be provided along with images.</p>
<p>We’ll begin by building the registration page for our chat application using React. First, we import and initialize the necessary packages.</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">import</span> React, {useState, useRef, useEffect} <span class="hljs-keyword">from</span> <span class="hljs-string">'react'</span>;
<span class="hljs-keyword">import</span> * <span class="hljs-keyword">as</span> faceapi <span class="hljs-keyword">from</span> <span class="hljs-string">'face-api.js'</span>
<span class="hljs-keyword">import</span> {useNavigate} <span class="hljs-keyword">from</span> <span class="hljs-string">'react-router-dom'</span>
<span class="hljs-keyword">import</span> axios <span class="hljs-keyword">from</span> <span class="hljs-string">'axios'</span>;

<span class="hljs-keyword">const</span> Register = <span class="hljs-function">() =&gt;</span> {

    <span class="hljs-keyword">const</span> navigate = useNavigate();
    <span class="hljs-keyword">const</span> userRef = useRef();
    <span class="hljs-keyword">const</span> passwordRef = useRef();
    <span class="hljs-keyword">const</span> emailRef = useRef();
    <span class="hljs-keyword">const</span> FullRef = useRef();
</code></pre>
<p>In the code snippet above, we imported useful React hooks and initialized the installed <code>face-api.js</code> package. <a target="_blank" href="https://www.npmjs.com/package/axios">Axios</a> will serve as our API request tool of choice for this project. The <code>useRef</code> hook will be used to track user inputs. We then defined the <code>Register</code> component and initialized a <code>useRef</code> hook for each input field.</p>
<pre><code class="lang-javascript">useEffect(<span class="hljs-function">() =&gt;</span> {
    <span class="hljs-keyword">const</span> loadModels = <span class="hljs-keyword">async</span> () =&gt; {
        <span class="hljs-keyword">await</span> faceapi.nets.tinyFaceDetector.loadFromUri(<span class="hljs-string">'/models'</span>);
        <span class="hljs-keyword">await</span> faceapi.nets.faceLandmark68Net.loadFromUri(<span class="hljs-string">'/models'</span>);
        <span class="hljs-keyword">await</span> faceapi.nets.faceRecognitionNet.loadFromUri(<span class="hljs-string">'/models'</span>);
        <span class="hljs-keyword">await</span> faceapi.nets.faceExpressionNet.loadFromUri(<span class="hljs-string">'/models'</span>);
        setModelIsLoaded(<span class="hljs-literal">true</span>);
        startVideo();
    };
    loadModels();
}, []);
</code></pre>
<p>In the code above, the <code>useEffect</code> hook ensures that the locally stored <code>face-api</code> models are loaded and active in our application. The models live in the <code>models</code> sub-folder within the <code>public</code> folder. With the models initialized, we can now set up the webcam feed on our page.</p>
<pre><code class="lang-javascript">  <span class="hljs-keyword">const</span> [faceDetected, setFaceDetected] = useState(<span class="hljs-literal">false</span>);


        <span class="hljs-comment">// Start video feed</span>
        <span class="hljs-keyword">const</span> startVideo = <span class="hljs-function">() =&gt;</span> {
            navigator.mediaDevices
                .getUserMedia({ <span class="hljs-attr">video</span>: <span class="hljs-literal">true</span> })
                .then(<span class="hljs-function">(<span class="hljs-params">stream</span>) =&gt;</span> {
                    videoRef.current.srcObject = stream;
                })
                .catch(<span class="hljs-function">(<span class="hljs-params">err</span>) =&gt;</span> <span class="hljs-built_in">console</span>.error(<span class="hljs-string">"Error accessing webcam: "</span>, err));
        };
        <span class="hljs-keyword">const</span> captureSnapshot = <span class="hljs-keyword">async</span> () =&gt; {
            <span class="hljs-keyword">const</span> canvas = snapshotRef.current;
            <span class="hljs-keyword">const</span> context = canvas.getContext(<span class="hljs-string">'2d'</span>);
            context.drawImage(videoRef.current, <span class="hljs-number">0</span>, <span class="hljs-number">0</span>, canvas.width, canvas.height);
            <span class="hljs-keyword">const</span> dataUrl = canvas.toDataURL(<span class="hljs-string">'image/jpeg'</span>);
            setSnapshot(dataUrl);

            <span class="hljs-comment">// Generate the face descriptor (128-dimensional vector)</span>
            <span class="hljs-keyword">const</span> detection = <span class="hljs-keyword">await</span> faceapi
                .detectSingleFace(canvas, <span class="hljs-keyword">new</span> faceapi.TinyFaceDetectorOptions())
                .withFaceLandmarks()
                .withFaceDescriptor();

            <span class="hljs-keyword">if</span> (detection) {
                <span class="hljs-keyword">const</span> newDescriptor = detection.descriptor;
                setDescriptionValue(newDescriptor);
                <span class="hljs-built_in">console</span>.log(newDescriptor);
                setSubmitDisabled(<span class="hljs-literal">false</span>);
                stopVid();
            } <span class="hljs-keyword">else</span> {
                <span class="hljs-built_in">console</span>.error(<span class="hljs-string">"No face detected in snapshot"</span>);
            }
        };
    <span class="hljs-keyword">const</span> stopVid = <span class="hljs-function">() =&gt;</span> {
        <span class="hljs-keyword">const</span> stream = videoRef?.current?.srcObject;
        <span class="hljs-keyword">if</span> (stream) {
            stream.getTracks().forEach(<span class="hljs-function"><span class="hljs-params">track</span> =&gt;</span> track.stop());
            videoRef.current.srcObject = <span class="hljs-literal">null</span>;
            setCameraActive(<span class="hljs-literal">false</span>);
        }
    };
        <span class="hljs-comment">// Detect face in the video stream</span>
        <span class="hljs-keyword">const</span> handleVideoPlay = <span class="hljs-keyword">async</span> () =&gt; {
            <span class="hljs-keyword">const</span> video = videoRef.current;
            <span class="hljs-keyword">const</span> canvas = canvasRef.current;

            <span class="hljs-keyword">const</span> displaySize = { <span class="hljs-attr">width</span>: video.width, <span class="hljs-attr">height</span>: video.height };
            faceapi.matchDimensions(canvas, displaySize);

            <span class="hljs-built_in">setInterval</span>(<span class="hljs-keyword">async</span> () =&gt; {
                <span class="hljs-keyword">if</span> (!cameraActive) <span class="hljs-keyword">return</span> ;
                <span class="hljs-keyword">const</span> detections = <span class="hljs-keyword">await</span> faceapi.detectAllFaces(
                    video,
                    <span class="hljs-keyword">new</span> faceapi.TinyFaceDetectorOptions()
                );

                <span class="hljs-keyword">const</span> resizedDetections = faceapi.resizeResults(detections, displaySize);

                canvas.getContext(<span class="hljs-string">'2d'</span>).clearRect(<span class="hljs-number">0</span>, <span class="hljs-number">0</span>, canvas.width, canvas.height);
                faceapi.draw.drawDetections(canvas, resizedDetections);
                <span class="hljs-keyword">const</span> detected = detections.length &gt; <span class="hljs-number">0</span>;
                <span class="hljs-keyword">if</span> (detected &amp;&amp; !faceDetected) {
                    captureSnapshot();  <span class="hljs-comment">// Capture the snapshot as soon as a face is detected</span>
                }

                setFaceDetected(detected);
            }, <span class="hljs-number">100</span>);
        };
</code></pre>
<p>In the code above, we begin by defining a <code>useState</code> flag that tracks when the user’s face is detected during sign-up. The <code>startVideo</code> function then activates the browser webcam. With the feed running, the <code>handleVideoPlay</code> function monitors the stream for faces using the models we already initialized. The <code>stopVid</code> function is triggered once the user’s face has been successfully captured.</p>
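<p>One caveat with the polling approach above: <code>setInterval</code> keeps firing until it is explicitly cleared, so the detection loop should be stopped (for example, in a <code>useEffect</code> cleanup) once the face has been captured. A small sketch of that pattern, with <code>createDetectionLoop</code> as a hypothetical helper name:</p>

```javascript
// Wraps setInterval so the detection loop can be started and stopped
// cleanly, instead of leaving an orphaned timer running.
function createDetectionLoop(onTick, intervalMs) {
  let id = null;
  return {
    start() {
      if (id === null) id = setInterval(onTick, intervalMs);
    },
    stop() {
      if (id !== null) {
        clearInterval(id);
        id = null;
      }
    },
    get running() {
      return id !== null;
    },
  };
}
```

<p>In the component, <code>stop()</code> would be called alongside <code>stopVid()</code> and from the <code>useEffect</code> cleanup function.</p>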
<p>In this section, we also activated the browser webcam in our application to provide real-time video. The <code>captureSnapshot</code> function grabs a still frame from the live video and generates the face descriptor from it.</p>
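<p>One detail worth flagging before the descriptor is posted to the backend: face-api.js returns it as a <code>Float32Array</code>, which JSON-serializes as an object of index keys rather than an array. Converting it to a plain array first keeps the stored value easy to work with. A quick illustration (integer values used only for readability):</p>

```javascript
// A Float32Array does not stringify like a plain array:
const descriptor = new Float32Array([1, 2, 3]);
console.log(JSON.stringify(descriptor));   // {"0":1,"1":2,"2":3}

// Convert to a plain array before sending it in the request body:
const plain = Array.from(descriptor);
console.log(JSON.stringify(plain));        // [1,2,3]
```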
<pre><code class="lang-javascript"><span class="hljs-keyword">const</span> RegSubmit = <span class="hljs-keyword">async</span> (e) =&gt; {
  e.preventDefault();
  <span class="hljs-built_in">console</span>.log(<span class="hljs-string">"hello"</span>);

  <span class="hljs-keyword">try</span> {
    <span class="hljs-keyword">const</span> res = <span class="hljs-keyword">await</span> axios.post(BACKEND_URL, {
      <span class="hljs-attr">username</span>: userRef.current.value,
      <span class="hljs-attr">email</span>: emailRef.current.value,
      <span class="hljs-attr">FullName</span>: FullRef.current.value,
      <span class="hljs-attr">password</span>: passwordRef.current.value,
      <span class="hljs-attr">faceDescriptor</span>: descriptionValue,
    });

    <span class="hljs-built_in">console</span>.log(res.data);
    setError(<span class="hljs-literal">false</span>);
    navigate(<span class="hljs-string">"/login"</span>);
    <span class="hljs-built_in">console</span>.log(<span class="hljs-string">"help"</span>);
  } <span class="hljs-keyword">catch</span> (err) {
    <span class="hljs-built_in">console</span>.error(err);
    setError(<span class="hljs-literal">true</span>);
  }
};
</code></pre>
<p>With all the values obtained, the <code>RegSubmit</code> function is then defined. When executed, it stores the provided user details, along with the face descriptor, on our backend server, which we will query in the next section for authentication.</p>
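<p>On the backend side (the cloned Express code-base), it’s worth validating the descriptor before persisting it, since a missing or malformed vector would make every later face match fail. The actual route lives in the cloned repository; purely as an illustration, a validation helper might look like this (<code>isValidDescriptor</code> is a hypothetical name, not part of the repo):</p>

```javascript
// Hypothetical helper: checks that a descriptor posted from the frontend
// is a 128-element array of finite numbers before it is stored.
function isValidDescriptor(descriptor, expectedLength = 128) {
  return (
    Array.isArray(descriptor) &&
    descriptor.length === expectedLength &&
    descriptor.every((v) => typeof v === "number" && Number.isFinite(v))
  );
}

// Sketch of how it could be used inside an Express handler:
// if (!isValidDescriptor(req.body.faceDescriptor)) {
//   return res.status(400).json({ error: "Invalid face descriptor" });
// }
```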
<p>Below is the full registration code.</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">import</span> React, { useState, useRef, useEffect } <span class="hljs-keyword">from</span> <span class="hljs-string">'react'</span>;
<span class="hljs-keyword">import</span> * <span class="hljs-keyword">as</span> faceapi <span class="hljs-keyword">from</span> <span class="hljs-string">'face-api.js'</span>;
<span class="hljs-keyword">import</span> { useNavigate } <span class="hljs-keyword">from</span> <span class="hljs-string">'react-router-dom'</span>;
<span class="hljs-keyword">import</span> axios <span class="hljs-keyword">from</span> <span class="hljs-string">'axios'</span>;

<span class="hljs-keyword">const</span> Register = <span class="hljs-function">() =&gt;</span> {
  <span class="hljs-keyword">const</span> navigate = useNavigate();

  <span class="hljs-keyword">const</span> userRef = useRef();
  <span class="hljs-keyword">const</span> passwordRef = useRef();
  <span class="hljs-keyword">const</span> emailRef = useRef();
  <span class="hljs-keyword">const</span> FullRef = useRef();
  <span class="hljs-keyword">const</span> snapshotRef = useRef(<span class="hljs-literal">null</span>);
  <span class="hljs-keyword">const</span> videoRef = useRef(<span class="hljs-literal">null</span>);
  <span class="hljs-keyword">const</span> canvasRef = useRef(<span class="hljs-literal">null</span>);

  <span class="hljs-keyword">const</span> [modelIsLoaded, setModelIsLoaded] = useState(<span class="hljs-literal">false</span>);
  <span class="hljs-keyword">const</span> [detections, setDetections] = useState([]);
  <span class="hljs-keyword">const</span> [error, setError] = useState(<span class="hljs-literal">false</span>);
  <span class="hljs-keyword">const</span> [snapshot, setSnapshot] = useState(<span class="hljs-literal">null</span>);
  <span class="hljs-keyword">const</span> [cameraActive, setCameraActive] = useState(<span class="hljs-literal">true</span>);
  <span class="hljs-keyword">const</span> [submitDisabled, setSubmitDisabled] = useState(<span class="hljs-literal">true</span>);
  <span class="hljs-keyword">const</span> [descriptionValue, setDescriptionValue] = useState(<span class="hljs-literal">null</span>);
  <span class="hljs-keyword">const</span> [faceDetected, setFaceDetected] = useState(<span class="hljs-literal">false</span>);

  useEffect(<span class="hljs-function">() =&gt;</span> {
    <span class="hljs-keyword">const</span> loadModels = <span class="hljs-keyword">async</span> () =&gt; {
      <span class="hljs-keyword">await</span> faceapi.nets.tinyFaceDetector.loadFromUri(<span class="hljs-string">'/models'</span>);
      <span class="hljs-keyword">await</span> faceapi.nets.faceLandmark68Net.loadFromUri(<span class="hljs-string">'/models'</span>);
      <span class="hljs-keyword">await</span> faceapi.nets.faceRecognitionNet.loadFromUri(<span class="hljs-string">'/models'</span>);
      <span class="hljs-keyword">await</span> faceapi.nets.faceExpressionNet.loadFromUri(<span class="hljs-string">'/models'</span>);
      setModelIsLoaded(<span class="hljs-literal">true</span>);
      startVideo();
    };

    loadModels();
  }, []);

  <span class="hljs-keyword">const</span> RegSubmit = <span class="hljs-keyword">async</span> (e) =&gt; {
    e.preventDefault();
    <span class="hljs-built_in">console</span>.log(<span class="hljs-string">"hello"</span>);

    <span class="hljs-keyword">try</span> {
      <span class="hljs-keyword">const</span> res = <span class="hljs-keyword">await</span> axios.post(<span class="hljs-string">'http://localhost:5000/v1/users'</span>, {
        <span class="hljs-attr">username</span>: userRef.current.value,
        <span class="hljs-attr">email</span>: emailRef.current.value,
        <span class="hljs-attr">FullName</span>: FullRef.current.value,
        <span class="hljs-attr">password</span>: passwordRef.current.value,
        <span class="hljs-attr">faceDescriptor</span>: descriptionValue
      });

      <span class="hljs-built_in">console</span>.log(res.data);
      setError(<span class="hljs-literal">false</span>);
      navigate(<span class="hljs-string">"/login"</span>);
      <span class="hljs-built_in">console</span>.log(<span class="hljs-string">"help"</span>);
    } <span class="hljs-keyword">catch</span> (err) {
      <span class="hljs-built_in">console</span>.error(err);
      setError(<span class="hljs-literal">true</span>);
    }
  };

  <span class="hljs-keyword">const</span> startVideo = <span class="hljs-function">() =&gt;</span> {
    navigator.mediaDevices
      .getUserMedia({ <span class="hljs-attr">video</span>: <span class="hljs-literal">true</span> })
      .then(<span class="hljs-function">(<span class="hljs-params">stream</span>) =&gt;</span> {
        videoRef.current.srcObject = stream;
      })
      .catch(<span class="hljs-function">(<span class="hljs-params">err</span>) =&gt;</span> <span class="hljs-built_in">console</span>.error(<span class="hljs-string">"Error accessing webcam: "</span>, err));
  };

  <span class="hljs-keyword">const</span> stopVid = <span class="hljs-function">() =&gt;</span> {
    <span class="hljs-keyword">const</span> stream = videoRef?.current?.srcObject;
    <span class="hljs-keyword">if</span> (stream) {
      stream.getTracks().forEach(<span class="hljs-function">(<span class="hljs-params">track</span>) =&gt;</span> track.stop());
      videoRef.current.srcObject = <span class="hljs-literal">null</span>;
      setCameraActive(<span class="hljs-literal">false</span>);
    }
  };

  <span class="hljs-keyword">const</span> captureSnapshot = <span class="hljs-keyword">async</span> () =&gt; {
    <span class="hljs-keyword">const</span> canvas = snapshotRef.current;
    <span class="hljs-keyword">const</span> context = canvas.getContext(<span class="hljs-string">'2d'</span>);
    context.drawImage(videoRef.current, <span class="hljs-number">0</span>, <span class="hljs-number">0</span>, canvas.width, canvas.height);
    <span class="hljs-keyword">const</span> dataUrl = canvas.toDataURL(<span class="hljs-string">'image/jpeg'</span>);
    setSnapshot(dataUrl);

    <span class="hljs-keyword">const</span> detection = <span class="hljs-keyword">await</span> faceapi
      .detectSingleFace(canvas, <span class="hljs-keyword">new</span> faceapi.TinyFaceDetectorOptions())
      .withFaceLandmarks()
      .withFaceDescriptor();

    <span class="hljs-keyword">if</span> (detection) {
      <span class="hljs-keyword">const</span> newDescriptor = detection.descriptor;
      setDescriptionValue(newDescriptor);
      <span class="hljs-built_in">console</span>.log(newDescriptor);
      setSubmitDisabled(<span class="hljs-literal">false</span>);
      stopVid();
    } <span class="hljs-keyword">else</span> {
      <span class="hljs-built_in">console</span>.error(<span class="hljs-string">"No face detected in snapshot"</span>);
    }
  };

  <span class="hljs-keyword">const</span> handleVideoPlay = <span class="hljs-keyword">async</span> () =&gt; {
    <span class="hljs-keyword">const</span> video = videoRef.current;
    <span class="hljs-keyword">const</span> canvas = canvasRef.current;
    <span class="hljs-keyword">const</span> displaySize = { <span class="hljs-attr">width</span>: video.width, <span class="hljs-attr">height</span>: video.height };
    faceapi.matchDimensions(canvas, displaySize);

    <span class="hljs-built_in">setInterval</span>(<span class="hljs-keyword">async</span> () =&gt; {
      <span class="hljs-keyword">if</span> (!cameraActive) <span class="hljs-keyword">return</span>;

      <span class="hljs-keyword">const</span> detections = <span class="hljs-keyword">await</span> faceapi.detectAllFaces(
        video,
        <span class="hljs-keyword">new</span> faceapi.TinyFaceDetectorOptions()
      );

      <span class="hljs-keyword">const</span> resizedDetections = faceapi.resizeResults(detections, displaySize);
      canvas.getContext(<span class="hljs-string">'2d'</span>).clearRect(<span class="hljs-number">0</span>, <span class="hljs-number">0</span>, canvas.width, canvas.height);
      faceapi.draw.drawDetections(canvas, resizedDetections);

      <span class="hljs-keyword">const</span> detected = detections.length &gt; <span class="hljs-number">0</span>;
      <span class="hljs-keyword">if</span> (detected &amp;&amp; !faceDetected) {
        captureSnapshot();
      }

      setFaceDetected(detected);
    }, <span class="hljs-number">100</span>);
  };

  <span class="hljs-keyword">return</span> (
    <span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">div</span> <span class="hljs-attr">className</span>=<span class="hljs-string">"flex flex-col w-full h-screen justify-center"</span>&gt;</span>
      <span class="hljs-tag">&lt;<span class="hljs-name">div</span> <span class="hljs-attr">className</span>=<span class="hljs-string">"flex flex-col"</span>&gt;</span>
        <span class="hljs-tag">&lt;<span class="hljs-name">form</span> <span class="hljs-attr">className</span>=<span class="hljs-string">"flex flex-col mb-2 w-full"</span> <span class="hljs-attr">onSubmit</span>=<span class="hljs-string">{RegSubmit}</span>&gt;</span>
          <span class="hljs-tag">&lt;<span class="hljs-name">h3</span> <span class="hljs-attr">className</span>=<span class="hljs-string">"flex flex-col mx-auto mb-5"</span>&gt;</span>Registration Page<span class="hljs-tag">&lt;/<span class="hljs-name">h3</span>&gt;</span>

          <span class="hljs-tag">&lt;<span class="hljs-name">div</span> <span class="hljs-attr">className</span>=<span class="hljs-string">"flex flex-col mb-2 w-[50%] mx-auto items-center"</span>&gt;</span>
            <span class="hljs-tag">&lt;<span class="hljs-name">input</span>
              <span class="hljs-attr">type</span>=<span class="hljs-string">"text"</span>
              <span class="hljs-attr">placeholder</span>=<span class="hljs-string">"Email"</span>
              <span class="hljs-attr">className</span>=<span class="hljs-string">"w-full rounded-2xl h-[50px] border-2 p-2 mb-2 border-gray-900"</span>
              <span class="hljs-attr">required</span>
              <span class="hljs-attr">ref</span>=<span class="hljs-string">{emailRef}</span>
            /&gt;</span>
            <span class="hljs-tag">&lt;<span class="hljs-name">input</span>
              <span class="hljs-attr">type</span>=<span class="hljs-string">"text"</span>
              <span class="hljs-attr">placeholder</span>=<span class="hljs-string">"Username"</span>
              <span class="hljs-attr">className</span>=<span class="hljs-string">"w-full rounded-2xl h-[50px] border-2 p-2 mb-2 border-gray-900"</span>
              <span class="hljs-attr">required</span>
              <span class="hljs-attr">ref</span>=<span class="hljs-string">{userRef}</span>
            /&gt;</span>
            <span class="hljs-tag">&lt;<span class="hljs-name">input</span>
              <span class="hljs-attr">type</span>=<span class="hljs-string">"text"</span>
              <span class="hljs-attr">placeholder</span>=<span class="hljs-string">"Full Name"</span>
              <span class="hljs-attr">className</span>=<span class="hljs-string">"w-full rounded-2xl h-[50px] border-2 p-2 mb-2 border-gray-900"</span>
              <span class="hljs-attr">required</span>
              <span class="hljs-attr">ref</span>=<span class="hljs-string">{FullRef}</span>
            /&gt;</span>
            <span class="hljs-tag">&lt;<span class="hljs-name">input</span>
              <span class="hljs-attr">type</span>=<span class="hljs-string">"password"</span>
              <span class="hljs-attr">placeholder</span>=<span class="hljs-string">"Password"</span>
              <span class="hljs-attr">className</span>=<span class="hljs-string">"w-full rounded-2xl h-[50px] border-2 p-2 mb-2 border-gray-900"</span>
              <span class="hljs-attr">required</span>
              <span class="hljs-attr">ref</span>=<span class="hljs-string">{passwordRef}</span>
            /&gt;</span>

            <span class="hljs-tag">&lt;<span class="hljs-name">div</span>&gt;</span>
              {!modelIsLoaded &amp;&amp; cameraActive &amp;&amp; !descriptionValue ? (
                <span class="hljs-tag">&lt;<span class="hljs-name">p</span>&gt;</span>Loading<span class="hljs-tag">&lt;/<span class="hljs-name">p</span>&gt;</span>
              ) : (
                <span class="hljs-tag">&lt;&gt;</span>
                  {!descriptionValue &amp;&amp; (
                    <span class="hljs-tag">&lt;&gt;</span>
                      <span class="hljs-tag">&lt;<span class="hljs-name">video</span>
                        <span class="hljs-attr">ref</span>=<span class="hljs-string">{videoRef}</span>
                        <span class="hljs-attr">width</span>=<span class="hljs-string">"200"</span>
                        <span class="hljs-attr">height</span>=<span class="hljs-string">"160"</span>
                        <span class="hljs-attr">onPlay</span>=<span class="hljs-string">{handleVideoPlay}</span>
                        <span class="hljs-attr">autoPlay</span>
                        <span class="hljs-attr">muted</span>
                      /&gt;</span>
                      <span class="hljs-tag">&lt;<span class="hljs-name">canvas</span>
                        <span class="hljs-attr">ref</span>=<span class="hljs-string">{canvasRef}</span>
                        <span class="hljs-attr">width</span>=<span class="hljs-string">"200"</span>
                        <span class="hljs-attr">height</span>=<span class="hljs-string">"160"</span>
                        <span class="hljs-attr">style</span>=<span class="hljs-string">{{</span> <span class="hljs-attr">position:</span> '<span class="hljs-attr">absolute</span>', <span class="hljs-attr">top:</span> <span class="hljs-attr">0</span>, <span class="hljs-attr">left:</span> <span class="hljs-attr">0</span> }}
                      /&gt;</span>
                      <span class="hljs-tag">&lt;<span class="hljs-name">p</span>&gt;</span>
                        {faceDetected ? (
                          <span class="hljs-tag">&lt;<span class="hljs-name">span</span> <span class="hljs-attr">style</span>=<span class="hljs-string">{{</span> <span class="hljs-attr">color:</span> '<span class="hljs-attr">green</span>' }}&gt;</span>Face Detected<span class="hljs-tag">&lt;/<span class="hljs-name">span</span>&gt;</span>
                        ) : (
                          <span class="hljs-tag">&lt;<span class="hljs-name">span</span> <span class="hljs-attr">style</span>=<span class="hljs-string">{{</span> <span class="hljs-attr">color:</span> '<span class="hljs-attr">red</span>' }}&gt;</span>No Face Detected<span class="hljs-tag">&lt;/<span class="hljs-name">span</span>&gt;</span>
                        )}
                      <span class="hljs-tag">&lt;/<span class="hljs-name">p</span>&gt;</span>
                      <span class="hljs-tag">&lt;<span class="hljs-name">canvas</span>
                        <span class="hljs-attr">ref</span>=<span class="hljs-string">{snapshotRef}</span>
                        <span class="hljs-attr">width</span>=<span class="hljs-string">"480"</span>
                        <span class="hljs-attr">height</span>=<span class="hljs-string">"360"</span>
                        <span class="hljs-attr">style</span>=<span class="hljs-string">{{</span> <span class="hljs-attr">display:</span> '<span class="hljs-attr">none</span>' }}
                      /&gt;</span>
                    <span class="hljs-tag">&lt;/&gt;</span>
                  )}
                <span class="hljs-tag">&lt;/&gt;</span>
              )}

              {snapshot &amp;&amp; (
                <span class="hljs-tag">&lt;<span class="hljs-name">div</span> <span class="hljs-attr">style</span>=<span class="hljs-string">{{</span> <span class="hljs-attr">marginTop:</span> '<span class="hljs-attr">20px</span>' }}&gt;</span>
                  <span class="hljs-tag">&lt;<span class="hljs-name">h4</span>&gt;</span>Face Snapshot:<span class="hljs-tag">&lt;/<span class="hljs-name">h4</span>&gt;</span>
                  <span class="hljs-tag">&lt;<span class="hljs-name">img</span>
                    <span class="hljs-attr">src</span>=<span class="hljs-string">{snapshot}</span>
                    <span class="hljs-attr">alt</span>=<span class="hljs-string">"Face Snapshot"</span>
                    <span class="hljs-attr">width</span>=<span class="hljs-string">"200"</span>
                    <span class="hljs-attr">height</span>=<span class="hljs-string">"160"</span>
                  /&gt;</span>
                <span class="hljs-tag">&lt;/<span class="hljs-name">div</span>&gt;</span>
              )}
            <span class="hljs-tag">&lt;/<span class="hljs-name">div</span>&gt;</span>

            <span class="hljs-tag">&lt;<span class="hljs-name">div</span> <span class="hljs-attr">className</span>=<span class="hljs-string">"mt-2"</span>&gt;</span>
              <span class="hljs-tag">&lt;<span class="hljs-name">button</span> <span class="hljs-attr">type</span>=<span class="hljs-string">"button"</span> <span class="hljs-attr">onClick</span>=<span class="hljs-string">{stopVid}</span>&gt;</span>
                Stop Video
              <span class="hljs-tag">&lt;/<span class="hljs-name">button</span>&gt;</span>
            <span class="hljs-tag">&lt;/<span class="hljs-name">div</span>&gt;</span>

            <span class="hljs-tag">&lt;<span class="hljs-name">button</span>
              <span class="hljs-attr">disabled</span>=<span class="hljs-string">{submitDisabled}</span>
              <span class="hljs-attr">className</span>=<span class="hljs-string">"mx-auto mt-4 rounded-2xl cursor-pointer text-white bg-primary w-[80%] lg:w-[50%] h-[40px] text-center items-center justify-center"</span>
              <span class="hljs-attr">type</span>=<span class="hljs-string">"submit"</span>
            &gt;</span>
              Register
            <span class="hljs-tag">&lt;/<span class="hljs-name">button</span>&gt;</span>
          <span class="hljs-tag">&lt;/<span class="hljs-name">div</span>&gt;</span>

          <span class="hljs-tag">&lt;<span class="hljs-name">div</span> <span class="hljs-attr">className</span>=<span class="hljs-string">"flex flex-col mt-1 w-full"</span>&gt;</span>
            <span class="hljs-tag">&lt;<span class="hljs-name">p</span> <span class="hljs-attr">className</span>=<span class="hljs-string">"flex justify-center"</span>&gt;</span>
              Registered previously?<span class="hljs-symbol">&amp;nbsp;</span>
              <span class="hljs-tag">&lt;<span class="hljs-name">a</span> <span class="hljs-attr">href</span>=<span class="hljs-string">"/login"</span> <span class="hljs-attr">className</span>=<span class="hljs-string">"text-blue-600 underline"</span>&gt;</span>
                Login
              <span class="hljs-tag">&lt;/<span class="hljs-name">a</span>&gt;</span>
            <span class="hljs-tag">&lt;/<span class="hljs-name">p</span>&gt;</span>
          <span class="hljs-tag">&lt;/<span class="hljs-name">div</span>&gt;</span>

          {error &amp;&amp; (
            <span class="hljs-tag">&lt;<span class="hljs-name">p</span> <span class="hljs-attr">className</span>=<span class="hljs-string">"text-red-600 text-center mt-2"</span>&gt;</span>
              Error while registering, try again
            <span class="hljs-tag">&lt;/<span class="hljs-name">p</span>&gt;</span>
          )}
        <span class="hljs-tag">&lt;/<span class="hljs-name">form</span>&gt;</span></span>
      &lt;/div&gt;
    &lt;/div&gt;
  );
};

<span class="hljs-keyword">export</span> <span class="hljs-keyword">default</span> Register;
</code></pre>
<p>Going forward, we will work on our multifactor authentication system. The code below highlights the <code>LoginSubmit</code> function, which is triggered when the user submits their email and password credentials to log in to our chat application. The <code>useRef</code> hook captures the values typed into the input boxes so they can be passed to the backend via an <code>Axios</code> request.</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">import</span> React, { useState, useRef, useEffect } <span class="hljs-keyword">from</span> <span class="hljs-string">'react'</span>;
<span class="hljs-keyword">import</span> { Link, useNavigate } <span class="hljs-keyword">from</span> <span class="hljs-string">'react-router-dom'</span>;
<span class="hljs-keyword">import</span> axios <span class="hljs-keyword">from</span> <span class="hljs-string">'axios'</span>;

<span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">Login</span>(<span class="hljs-params"></span>) </span>{
  <span class="hljs-keyword">const</span> navigate = useNavigate();
  <span class="hljs-keyword">const</span> userRef = useRef();
  <span class="hljs-keyword">const</span> passwordRef = useRef();

  <span class="hljs-keyword">const</span> [error, setError] = useState(<span class="hljs-literal">false</span>);

  <span class="hljs-keyword">const</span> LoginSubmit = <span class="hljs-keyword">async</span> (e) =&gt; {
    e.preventDefault();
    <span class="hljs-keyword">try</span> {
      <span class="hljs-keyword">const</span> res = <span class="hljs-keyword">await</span> axios.post(
        <span class="hljs-string">'http://localhost:5000/v1/auth/login'</span>,
        {
          <span class="hljs-attr">email</span>: userRef.current.value,
          <span class="hljs-attr">password</span>: passwordRef.current.value,
        },
        { <span class="hljs-attr">withCredentials</span>: <span class="hljs-literal">true</span> }
      );

      <span class="hljs-built_in">console</span>.log(res?.data);
      setError(<span class="hljs-literal">false</span>);
      navigate(<span class="hljs-string">'/confirm-auth'</span>);
      <span class="hljs-built_in">console</span>.log(res);
    } <span class="hljs-keyword">catch</span> (err) {
      setError(<span class="hljs-literal">true</span>);
      <span class="hljs-built_in">console</span>.log(err);
    }
  };
}
</code></pre>
<p>The full login page code is available <a target="_blank" href="http://github.com/oluwatobi2001/Stream-frontend.git">here</a>. Once the user has confirmed their identity with their password, we go on to confirm it a second time with the face recognition system.</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">import</span> axios <span class="hljs-keyword">from</span> <span class="hljs-string">'axios'</span>;
<span class="hljs-keyword">import</span> React, { useRef, useEffect, useState } <span class="hljs-keyword">from</span> <span class="hljs-string">'react'</span>;
<span class="hljs-keyword">import</span> * <span class="hljs-keyword">as</span> faceapi <span class="hljs-keyword">from</span> <span class="hljs-string">'face-api.js'</span>;
<span class="hljs-keyword">import</span> { useNavigate } <span class="hljs-keyword">from</span> <span class="hljs-string">'react-router-dom'</span>;
</code></pre>
<p>First, we set up the page by importing the necessary dependencies, as shown in the snippet above.</p>
<pre><code class="lang-javascript">
  useEffect(<span class="hljs-function">() =&gt;</span> {
    <span class="hljs-keyword">const</span> loadModels = <span class="hljs-keyword">async</span> () =&gt; {
      <span class="hljs-keyword">await</span> faceapi.nets.tinyFaceDetector.loadFromUri(<span class="hljs-string">'/models'</span>);
      <span class="hljs-keyword">await</span> faceapi.nets.faceLandmark68Net.loadFromUri(<span class="hljs-string">'/models'</span>);
      <span class="hljs-keyword">await</span> faceapi.nets.faceRecognitionNet.loadFromUri(<span class="hljs-string">'/models'</span>);
      <span class="hljs-keyword">await</span> faceapi.nets.faceExpressionNet.loadFromUri(<span class="hljs-string">'/models'</span>);
    };

    loadModels();
  }, []);

  <span class="hljs-keyword">const</span> handleVideoPlay = <span class="hljs-keyword">async</span> () =&gt; {
    <span class="hljs-keyword">const</span> video = videoRef.current;
    <span class="hljs-keyword">const</span> canvas = canvasRef.current;

    <span class="hljs-keyword">const</span> displaySize = { <span class="hljs-attr">width</span>: video.width, <span class="hljs-attr">height</span>: video.height };
    faceapi.matchDimensions(canvas, displaySize);

    <span class="hljs-built_in">setInterval</span>(<span class="hljs-keyword">async</span> () =&gt; {
      <span class="hljs-keyword">if</span> (!cameraActive) <span class="hljs-keyword">return</span>;

      <span class="hljs-keyword">const</span> detections = <span class="hljs-keyword">await</span> faceapi.detectAllFaces(
        video,
        <span class="hljs-keyword">new</span> faceapi.TinyFaceDetectorOptions()
      );

      <span class="hljs-keyword">const</span> resizedDetections = faceapi.resizeResults(detections, displaySize);
      canvas.getContext(<span class="hljs-string">'2d'</span>).clearRect(<span class="hljs-number">0</span>, <span class="hljs-number">0</span>, canvas.width, canvas.height);
      faceapi.draw.drawDetections(canvas, resizedDetections);

      <span class="hljs-keyword">const</span> detected = detections.length &gt; <span class="hljs-number">0</span>;
      <span class="hljs-keyword">if</span> (detected &amp;&amp; !faceDetected) {
        captureSnapshot();
      }

      setFaceDetected(detected);
    }, <span class="hljs-number">100</span>);
  };

  <span class="hljs-keyword">const</span> startVideo = <span class="hljs-function">() =&gt;</span> {
    navigator.mediaDevices
      .getUserMedia({ <span class="hljs-attr">video</span>: <span class="hljs-literal">true</span> })
      .then(<span class="hljs-function">(<span class="hljs-params">stream</span>) =&gt;</span> {
        videoRef.current.srcObject = stream;
      })
      .catch(<span class="hljs-function">(<span class="hljs-params">err</span>) =&gt;</span> <span class="hljs-built_in">console</span>.error(<span class="hljs-string">"Error accessing webcam: "</span>, err));
  };

  <span class="hljs-keyword">const</span> stopVid = <span class="hljs-function">() =&gt;</span> {
    <span class="hljs-keyword">const</span> stream = videoRef.current.srcObject;
    <span class="hljs-keyword">if</span> (stream) {
      stream.getTracks().forEach(<span class="hljs-function">(<span class="hljs-params">track</span>) =&gt;</span> track.stop());
      videoRef.current.srcObject = <span class="hljs-literal">null</span>;
      setCameraActive(<span class="hljs-literal">false</span>);
    }
  };

  <span class="hljs-keyword">const</span> deleteImage = <span class="hljs-function">() =&gt;</span> {
    setSnapshot(<span class="hljs-literal">null</span>);
    setDescriptionValue(<span class="hljs-literal">null</span>);
    setFaceDetected(<span class="hljs-literal">false</span>);
    setCameraActive(<span class="hljs-literal">true</span>);
    startVideo();
  };

  <span class="hljs-keyword">const</span> captureSnapshot = <span class="hljs-keyword">async</span> () =&gt; {
    <span class="hljs-keyword">const</span> canvas = snapshotRef.current;
    <span class="hljs-keyword">const</span> context = canvas.getContext(<span class="hljs-string">'2d'</span>);
    context.drawImage(videoRef.current, <span class="hljs-number">0</span>, <span class="hljs-number">0</span>, canvas.width, canvas.height);

    <span class="hljs-keyword">const</span> dataUrl = canvas.toDataURL(<span class="hljs-string">'image/jpeg'</span>);
    setSnapshot(dataUrl);
    stopVid();

    <span class="hljs-keyword">const</span> detection = <span class="hljs-keyword">await</span> faceapi
      .detectSingleFace(canvas, <span class="hljs-keyword">new</span> faceapi.TinyFaceDetectorOptions())
      .withFaceLandmarks()
      .withFaceDescriptor();

    <span class="hljs-keyword">if</span> (detection) {
      <span class="hljs-keyword">const</span> newDescriptor = detection.descriptor;
      setDescriptionValue(newDescriptor);
      <span class="hljs-built_in">console</span>.log(newDescriptor);
    }
  };
</code></pre>
<p>After initializing all the necessary dependencies, we import our models, just as we did on the registration page, to detect the user’s face and generate a face descriptor. We also allow the user to delete the snapshot and retake the image as many times as needed.</p>
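<p>One practical detail worth flagging before the descriptor is posted to the backend: face-api.js returns it as a <code>Float32Array</code>, and <code>JSON.stringify</code> serializes a typed array as an index-keyed object rather than an array. The sketch below shows the difference (the values here are stand-ins, not a real 128-dimensional descriptor):</p>

```javascript
// The descriptor returned by face-api.js is a Float32Array. JSON.stringify
// serializes typed arrays as index-keyed objects rather than arrays, which
// can surprise the backend. Converting with Array.from avoids this.
const descriptor = new Float32Array([0.5, -0.25, 0.125]); // stand-in values

// Naive serialization: yields {"0":0.5,"1":-0.25,"2":0.125}, not an array.
const naive = JSON.stringify(descriptor);

// Convert to a plain array first so the backend receives a real JSON array.
const safe = JSON.stringify(Array.from(descriptor));
```

<p>If the backend needs the typed array back, it can reconstruct one with <code>new Float32Array(receivedArray)</code>.</p>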
<pre><code class="lang-javascript">  <span class="hljs-keyword">const</span> FaceAuthenticate = <span class="hljs-keyword">async</span> (e) =&gt; {
    e.preventDefault();

    <span class="hljs-keyword">try</span> {
      <span class="hljs-keyword">const</span> res = <span class="hljs-keyword">await</span> axios.post(
        <span class="hljs-string">'http://localhost:5000/v1/auth/face-auth'</span>,
        { <span class="hljs-attr">faceDescriptor</span>: descriptionValue },
        { <span class="hljs-attr">withCredentials</span>: <span class="hljs-literal">true</span> }
      );

      <span class="hljs-built_in">console</span>.log(res?.data);
      navigate(<span class="hljs-string">'/chat'</span>);
    } <span class="hljs-keyword">catch</span> (err) {
      <span class="hljs-built_in">console</span>.log(err);
    }
  };
</code></pre>
<p>After the face descriptor is generated, we send it to the backend, which compares it with the descriptor stored at registration. If they match, the user is redirected to the chat application. Otherwise, an error message is displayed and access is denied.</p>
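<p>The matching step itself lives in the backend, but conceptually it reduces to a Euclidean distance check between the two 128-dimensional descriptors. face-api.js exposes <code>faceapi.euclideanDistance</code> for exactly this, and a distance below roughly 0.6 is the commonly used threshold. Here is a minimal sketch, with <code>isSamePerson</code> as a hypothetical helper name rather than part of the project code:</p>

```javascript
// Hypothetical sketch of descriptor matching. Two descriptors of the same
// face typically sit at a Euclidean distance below ~0.6; descriptors of
// different faces usually land well above it.
function euclideanDistance(a, b) {
  // Sum of squared per-dimension differences, then square root.
  const sum = a.reduce((acc, v, i) => acc + (v - b[i]) ** 2, 0);
  return Math.sqrt(sum);
}

function isSamePerson(storedDescriptor, incomingDescriptor, threshold = 0.6) {
  return euclideanDistance(storedDescriptor, incomingDescriptor) < threshold;
}
```

<p>The threshold is a trade-off: lowering it reduces false accepts at the cost of more false rejects, so it is worth tuning against your own test images.</p>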
<p>Here is the code for the <code>FaceAuth</code> page:</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">import</span> axios <span class="hljs-keyword">from</span> <span class="hljs-string">'axios'</span>;
<span class="hljs-keyword">import</span> React, { useRef, useEffect, useState } <span class="hljs-keyword">from</span> <span class="hljs-string">'react'</span>;
<span class="hljs-keyword">import</span> * <span class="hljs-keyword">as</span> faceapi <span class="hljs-keyword">from</span> <span class="hljs-string">'face-api.js'</span>;
<span class="hljs-keyword">import</span> { useNavigate } <span class="hljs-keyword">from</span> <span class="hljs-string">'react-router-dom'</span>;

<span class="hljs-keyword">const</span> FaceAuth = <span class="hljs-function">() =&gt;</span> {
  <span class="hljs-keyword">const</span> navigate = useNavigate();

  <span class="hljs-keyword">const</span> videoRef = useRef(<span class="hljs-literal">null</span>);
  <span class="hljs-keyword">const</span> canvasRef = useRef(<span class="hljs-literal">null</span>);
  <span class="hljs-keyword">const</span> snapshotRef = useRef(<span class="hljs-literal">null</span>);

  <span class="hljs-keyword">const</span> [cameraActive, setCameraActive] = useState(<span class="hljs-literal">true</span>);
  <span class="hljs-keyword">const</span> [snapshot, setSnapshot] = useState(<span class="hljs-literal">null</span>);
  <span class="hljs-keyword">const</span> [descriptionValue, setDescriptionValue] = useState(<span class="hljs-literal">null</span>);
  <span class="hljs-keyword">const</span> [faceDetected, setFaceDetected] = useState(<span class="hljs-literal">false</span>);

  useEffect(<span class="hljs-function">() =&gt;</span> {
    <span class="hljs-keyword">const</span> loadModels = <span class="hljs-keyword">async</span> () =&gt; {
      <span class="hljs-keyword">await</span> faceapi.nets.tinyFaceDetector.loadFromUri(<span class="hljs-string">'/models'</span>);
      <span class="hljs-keyword">await</span> faceapi.nets.faceLandmark68Net.loadFromUri(<span class="hljs-string">'/models'</span>);
      <span class="hljs-keyword">await</span> faceapi.nets.faceRecognitionNet.loadFromUri(<span class="hljs-string">'/models'</span>);
      <span class="hljs-keyword">await</span> faceapi.nets.faceExpressionNet.loadFromUri(<span class="hljs-string">'/models'</span>);
    };

    loadModels();
  }, []);

  <span class="hljs-keyword">const</span> handleVideoPlay = <span class="hljs-keyword">async</span> () =&gt; {
    <span class="hljs-keyword">const</span> video = videoRef.current;
    <span class="hljs-keyword">const</span> canvas = canvasRef.current;

    <span class="hljs-keyword">const</span> displaySize = { <span class="hljs-attr">width</span>: video.width, <span class="hljs-attr">height</span>: video.height };
    faceapi.matchDimensions(canvas, displaySize);

    <span class="hljs-built_in">setInterval</span>(<span class="hljs-keyword">async</span> () =&gt; {
      <span class="hljs-keyword">if</span> (!cameraActive) <span class="hljs-keyword">return</span>;

      <span class="hljs-keyword">const</span> detections = <span class="hljs-keyword">await</span> faceapi.detectAllFaces(
        video,
        <span class="hljs-keyword">new</span> faceapi.TinyFaceDetectorOptions()
      );

      <span class="hljs-keyword">const</span> resizedDetections = faceapi.resizeResults(detections, displaySize);
      canvas.getContext(<span class="hljs-string">'2d'</span>).clearRect(<span class="hljs-number">0</span>, <span class="hljs-number">0</span>, canvas.width, canvas.height);
      faceapi.draw.drawDetections(canvas, resizedDetections);

      <span class="hljs-keyword">const</span> detected = detections.length &gt; <span class="hljs-number">0</span>;
      <span class="hljs-keyword">if</span> (detected &amp;&amp; !faceDetected) {
        captureSnapshot();
      }

      setFaceDetected(detected);
    }, <span class="hljs-number">100</span>);
  };

  <span class="hljs-keyword">const</span> startVideo = <span class="hljs-function">() =&gt;</span> {
    navigator.mediaDevices
      .getUserMedia({ <span class="hljs-attr">video</span>: <span class="hljs-literal">true</span> })
      .then(<span class="hljs-function">(<span class="hljs-params">stream</span>) =&gt;</span> {
        videoRef.current.srcObject = stream;
      })
      .catch(<span class="hljs-function">(<span class="hljs-params">err</span>) =&gt;</span> <span class="hljs-built_in">console</span>.error(<span class="hljs-string">"Error accessing webcam: "</span>, err));
  };

  <span class="hljs-keyword">const</span> stopVid = <span class="hljs-function">() =&gt;</span> {
    <span class="hljs-keyword">const</span> stream = videoRef.current.srcObject;
    <span class="hljs-keyword">if</span> (stream) {
      stream.getTracks().forEach(<span class="hljs-function">(<span class="hljs-params">track</span>) =&gt;</span> track.stop());
      videoRef.current.srcObject = <span class="hljs-literal">null</span>;
      setCameraActive(<span class="hljs-literal">false</span>);
    }
  };

  <span class="hljs-keyword">const</span> deleteImage = <span class="hljs-function">() =&gt;</span> {
    setSnapshot(<span class="hljs-literal">null</span>);
    setDescriptionValue(<span class="hljs-literal">null</span>);
    setFaceDetected(<span class="hljs-literal">false</span>);
    setCameraActive(<span class="hljs-literal">true</span>);
    startVideo();
  };

  <span class="hljs-keyword">const</span> captureSnapshot = <span class="hljs-keyword">async</span> () =&gt; {
    <span class="hljs-keyword">const</span> canvas = snapshotRef.current;
    <span class="hljs-keyword">const</span> context = canvas.getContext(<span class="hljs-string">'2d'</span>);
    context.drawImage(videoRef.current, <span class="hljs-number">0</span>, <span class="hljs-number">0</span>, canvas.width, canvas.height);

    <span class="hljs-keyword">const</span> dataUrl = canvas.toDataURL(<span class="hljs-string">'image/jpeg'</span>);
    setSnapshot(dataUrl);
    stopVid();

    <span class="hljs-keyword">const</span> detection = <span class="hljs-keyword">await</span> faceapi
      .detectSingleFace(canvas, <span class="hljs-keyword">new</span> faceapi.TinyFaceDetectorOptions())
      .withFaceLandmarks()
      .withFaceDescriptor();

    <span class="hljs-keyword">if</span> (detection) {
      <span class="hljs-keyword">const</span> newDescriptor = detection.descriptor;
      setDescriptionValue(newDescriptor);
      <span class="hljs-built_in">console</span>.log(newDescriptor);
    }
  };

  <span class="hljs-keyword">const</span> FaceAuthenticate = <span class="hljs-keyword">async</span> (e) =&gt; {
    e.preventDefault();

    <span class="hljs-keyword">try</span> {
      <span class="hljs-keyword">const</span> res = <span class="hljs-keyword">await</span> axios.post(
        <span class="hljs-string">'http://localhost:5000/v1/auth/face-auth'</span>,
        { <span class="hljs-attr">faceDescriptor</span>: descriptionValue },
        { <span class="hljs-attr">withCredentials</span>: <span class="hljs-literal">true</span> }
      );

      <span class="hljs-built_in">console</span>.log(res?.data);
      navigate(<span class="hljs-string">'/chat'</span>);
    } <span class="hljs-keyword">catch</span> (err) {
      <span class="hljs-built_in">console</span>.log(err);
    }
  };

  <span class="hljs-keyword">return</span> (
    <span class="xml"><span class="hljs-tag">&lt;&gt;</span>
      <span class="hljs-tag">&lt;<span class="hljs-name">div</span> <span class="hljs-attr">className</span>=<span class="hljs-string">"flex w-full h-screen flex-col justify-center"</span>&gt;</span>
        <span class="hljs-tag">&lt;<span class="hljs-name">p</span> <span class="hljs-attr">className</span>=<span class="hljs-string">"flex flex-col mx-auto items-center text-lg font-semibold mb-3"</span>&gt;</span>
          Take a snapshot to confirm your identity
        <span class="hljs-tag">&lt;/<span class="hljs-name">p</span>&gt;</span>
        <span class="hljs-tag">&lt;<span class="hljs-name">p</span> <span class="hljs-attr">className</span>=<span class="hljs-string">"text-center mb-4"</span>&gt;</span>Ensure that the picture is taken in a bright area<span class="hljs-tag">&lt;/<span class="hljs-name">p</span>&gt;</span>

        <span class="hljs-tag">&lt;<span class="hljs-name">button</span>
          <span class="hljs-attr">onClick</span>=<span class="hljs-string">{startVideo}</span>
          <span class="hljs-attr">className</span>=<span class="hljs-string">"flex w-[30%] mx-auto text-center items-center justify-center mb-5 h-[40px] bg-blue-600 rounded-md text-white"</span>
        &gt;</span>
          Turn on Webcam
        <span class="hljs-tag">&lt;/<span class="hljs-name">button</span>&gt;</span>

        {!snapshot ? (
          <span class="hljs-tag">&lt;&gt;</span>
            <span class="hljs-tag">&lt;<span class="hljs-name">video</span>
              <span class="hljs-attr">className</span>=<span class="hljs-string">"flex mx-auto items-center rounded-md"</span>
              <span class="hljs-attr">ref</span>=<span class="hljs-string">{videoRef}</span>
              <span class="hljs-attr">width</span>=<span class="hljs-string">"240"</span>
              <span class="hljs-attr">height</span>=<span class="hljs-string">"180"</span>
              <span class="hljs-attr">onPlay</span>=<span class="hljs-string">{handleVideoPlay}</span>
              <span class="hljs-attr">autoPlay</span>
              <span class="hljs-attr">muted</span>
            /&gt;</span>
            <span class="hljs-tag">&lt;<span class="hljs-name">canvas</span>
              <span class="hljs-attr">ref</span>=<span class="hljs-string">{snapshotRef}</span>
              <span class="hljs-attr">width</span>=<span class="hljs-string">"240"</span>
              <span class="hljs-attr">height</span>=<span class="hljs-string">"180"</span>
              <span class="hljs-attr">style</span>=<span class="hljs-string">{{</span> <span class="hljs-attr">position:</span> '<span class="hljs-attr">absolute</span>', <span class="hljs-attr">top:</span> <span class="hljs-attr">0</span>, <span class="hljs-attr">left:</span> <span class="hljs-attr">0</span> }}
            /&gt;</span>
            <span class="hljs-tag">&lt;<span class="hljs-name">button</span> <span class="hljs-attr">onClick</span>=<span class="hljs-string">{captureSnapshot}</span> <span class="hljs-attr">className</span>=<span class="hljs-string">"mt-4 mx-auto block text-sm text-blue-600 underline"</span>&gt;</span>
              Take a snapshot
            <span class="hljs-tag">&lt;/<span class="hljs-name">button</span>&gt;</span>
          <span class="hljs-tag">&lt;/&gt;</span>
        ) : (
          <span class="hljs-tag">&lt;<span class="hljs-name">div</span> <span class="hljs-attr">className</span>=<span class="hljs-string">"flex w-full justify-center"</span>&gt;</span>
            <span class="hljs-tag">&lt;<span class="hljs-name">img</span>
              <span class="hljs-attr">src</span>=<span class="hljs-string">{snapshot}</span>
              <span class="hljs-attr">className</span>=<span class="hljs-string">"rounded-lg"</span>
              <span class="hljs-attr">width</span>=<span class="hljs-string">"240"</span>
              <span class="hljs-attr">height</span>=<span class="hljs-string">"180"</span>
              <span class="hljs-attr">alt</span>=<span class="hljs-string">"Face Snapshot"</span>
            /&gt;</span>
          <span class="hljs-tag">&lt;/<span class="hljs-name">div</span>&gt;</span>
        )}

        <span class="hljs-tag">&lt;<span class="hljs-name">div</span> <span class="hljs-attr">className</span>=<span class="hljs-string">"flex flex-row w-full justify-evenly mt-5"</span>&gt;</span>
          <span class="hljs-tag">&lt;<span class="hljs-name">button</span>
            <span class="hljs-attr">onClick</span>=<span class="hljs-string">{deleteImage}</span>
            <span class="hljs-attr">className</span>=<span class="hljs-string">"bg-purple-500 text-white p-2 h-[35px] rounded-lg"</span>
          &gt;</span>
            Delete Image
          <span class="hljs-tag">&lt;/<span class="hljs-name">button</span>&gt;</span>
          <span class="hljs-tag">&lt;<span class="hljs-name">button</span>
            <span class="hljs-attr">onClick</span>=<span class="hljs-string">{FaceAuthenticate}</span>
            <span class="hljs-attr">className</span>=<span class="hljs-string">"bg-purple-500 text-white p-2 h-[35px] rounded-lg"</span>
          &gt;</span>
            Upload Image
          <span class="hljs-tag">&lt;/<span class="hljs-name">button</span>&gt;</span>
        <span class="hljs-tag">&lt;/<span class="hljs-name">div</span>&gt;</span>
      <span class="hljs-tag">&lt;/<span class="hljs-name">div</span>&gt;</span>
    <span class="hljs-tag">&lt;/&gt;</span></span>
  );
};

<span class="hljs-keyword">export</span> <span class="hljs-keyword">default</span> FaceAuth;
</code></pre>
<p>Here is how the face authentication page should look:</p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXcPFsPVo9dymrTmMCyskCszbMf_SdG2n_j5gd7ayT1nQ6jOlhX8a_KFRG51cnqMCxUqFaVgTR2hrdGipmudd9B2TQpNfm4FrFMlYRo7bbu1gtRq1bKB5FmPi4QcbEPTLyDtAPbNEA?key=bLpVfispbJQQ4phtxWLC7w" alt="facial authentication page " width="600" height="400" loading="lazy"></p>
<p>Having set up the frontend, let's head to the backend and configure the registration and login endpoints for our project. The entire code for the backend project is available <a target="_blank" href="http://github.com/oluwatobi2001/stream-backend.git">here</a>. In this article, we will only highlight the <code>faceAuth</code> backend function.</p>
<p>To verify authentication, we will use sessions rather than JWTs. Important user information is stored in, and read from, the session cookie attached to the requests and responses exchanged with the frontend. Here is the <code>faceAuth</code> function:</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">const</span> faceAuth = <span class="hljs-keyword">async</span> (req, res) =&gt; {
  <span class="hljs-keyword">try</span> {
    <span class="hljs-built_in">console</span>.log(req.session);

    <span class="hljs-keyword">const</span> id = req.session.passport?.user;
    <span class="hljs-built_in">console</span>.log(id);


    <span class="hljs-keyword">const</span> user = <span class="hljs-keyword">await</span> User.findById(id);
    <span class="hljs-built_in">console</span>.log(user);

    <span class="hljs-keyword">if</span> (user == <span class="hljs-literal">null</span>) {
      <span class="hljs-keyword">return</span> res.status(<span class="hljs-number">400</span>).json({ <span class="hljs-attr">err</span>: <span class="hljs-string">"User not found"</span> });
    }


  } <span class="hljs-keyword">catch</span> (err) {
    <span class="hljs-built_in">console</span>.error(err);
    res.status(<span class="hljs-number">500</span>).json({ <span class="hljs-attr">err</span>: <span class="hljs-string">"Internal Server Error"</span> });
  }
};
</code></pre>
<p>First, we defined an asynchronous function named <code>faceAuth</code>. We then read, from the request session, the unique ID of the user who had already passed the initial login step.</p>
<p>To verify that the user's stored face descriptor matches the picture sent from the frontend, we used a face-matching function based on Euclidean distance, as shown below.</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">const</span> isMatchingFace = <span class="hljs-function">(<span class="hljs-params">descriptor1, descriptor2, threshold = <span class="hljs-number">0.6</span></span>) =&gt;</span> {
  <span class="hljs-comment">// Convert the stored descriptors to Float32Array if they aren't already</span>
  <span class="hljs-keyword">if</span> (!(descriptor1 <span class="hljs-keyword">instanceof</span> <span class="hljs-built_in">Float32Array</span>)) {
    descriptor1 = <span class="hljs-keyword">new</span> <span class="hljs-built_in">Float32Array</span>(<span class="hljs-built_in">Object</span>.values(descriptor1));
  }

  <span class="hljs-keyword">if</span> (!(descriptor2 <span class="hljs-keyword">instanceof</span> <span class="hljs-built_in">Float32Array</span>)) {
    descriptor2 = <span class="hljs-keyword">new</span> <span class="hljs-built_in">Float32Array</span>(<span class="hljs-built_in">Object</span>.values(descriptor2));
  }

  <span class="hljs-keyword">const</span> distance = faceapi.euclideanDistance(descriptor1, descriptor2);
  <span class="hljs-built_in">console</span>.log(<span class="hljs-string">"Euclidean Distance:"</span>, distance);

  <span class="hljs-keyword">return</span> distance &lt; threshold;
};
</code></pre>
<p>As shown in the code above, we used a similarity threshold of 0.6. This value is flexible and can be tuned to your needs: a lower threshold enforces a stricter match (fewer false positives), while a higher one is more lenient. If the function returns true, the user has been successfully authenticated and can access our chat application. Here is the full code snippet:</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">const</span> faceAuth = <span class="hljs-keyword">async</span> (req, res) =&gt; {
  <span class="hljs-keyword">try</span> {
    <span class="hljs-built_in">console</span>.log(req.session);

    <span class="hljs-keyword">const</span> id = req.session.passport?.user;
    <span class="hljs-built_in">console</span>.log(id);

    <span class="hljs-keyword">const</span> { faceDescriptor } = req.body;
    <span class="hljs-keyword">const</span> user = <span class="hljs-keyword">await</span> User.findById(id);
    <span class="hljs-built_in">console</span>.log(user);

    <span class="hljs-keyword">if</span> (user == <span class="hljs-literal">null</span>) {
      <span class="hljs-keyword">return</span> res.status(<span class="hljs-number">400</span>).json({ <span class="hljs-attr">err</span>: <span class="hljs-string">"User not found"</span> });
    }

    <span class="hljs-keyword">const</span> isMatchingFace = <span class="hljs-function">(<span class="hljs-params">descriptor1, descriptor2, threshold = <span class="hljs-number">0.6</span></span>) =&gt;</span> {
      <span class="hljs-comment">// Convert the stored descriptor (object) to a Float32Array</span>
      <span class="hljs-keyword">if</span> (!(descriptor1 <span class="hljs-keyword">instanceof</span> <span class="hljs-built_in">Float32Array</span>)) {
        descriptor1 = <span class="hljs-keyword">new</span> <span class="hljs-built_in">Float32Array</span>(<span class="hljs-built_in">Object</span>.values(descriptor1));
      }

      <span class="hljs-keyword">if</span> (!(descriptor2 <span class="hljs-keyword">instanceof</span> <span class="hljs-built_in">Float32Array</span>)) {
        descriptor2 = <span class="hljs-keyword">new</span> <span class="hljs-built_in">Float32Array</span>(<span class="hljs-built_in">Object</span>.values(descriptor2));
      }

      <span class="hljs-keyword">const</span> distance = faceapi.euclideanDistance(descriptor1, descriptor2);
      <span class="hljs-built_in">console</span>.log(<span class="hljs-string">"Euclidean Distance:"</span>, distance);

      <span class="hljs-keyword">return</span> distance &lt; threshold;
    };

    <span class="hljs-keyword">if</span> (isMatchingFace(faceDescriptor, user.faceDescriptor)) {
      <span class="hljs-built_in">console</span>.log(<span class="hljs-string">"Face match successful"</span>);
      req.session.mfa = <span class="hljs-literal">true</span>;

      <span class="hljs-keyword">return</span> res.status(<span class="hljs-number">200</span>).json({
        <span class="hljs-attr">msg</span>: <span class="hljs-string">"User authentication was successful. Proceed to the chat app."</span>,
      });
    } <span class="hljs-keyword">else</span> {
      <span class="hljs-keyword">return</span> res.status(<span class="hljs-number">401</span>).json({ <span class="hljs-attr">msg</span>: <span class="hljs-string">"Face does not match. Access denied."</span> });
    }
  } <span class="hljs-keyword">catch</span> (err) {
    <span class="hljs-built_in">console</span>.log(err);
    res.status(<span class="hljs-number">500</span>).json({
      <span class="hljs-attr">err</span>: <span class="hljs-string">"User face couldn't be authenticated. Please try again later"</span>,
    });
  }
};
</code></pre>
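<p>To build intuition for the 0.6 threshold, the Euclidean comparison can be reproduced in plain JavaScript. The descriptor values below are made-up toy numbers (real face-api.js descriptors are 128-dimensional):</p>

```javascript
// Plain-JS equivalent of faceapi.euclideanDistance for illustration
const euclideanDistance = (a, b) =>
  Math.sqrt(a.reduce((sum, v, i) => sum + (v - b[i]) ** 2, 0));

// Toy 4-dimensional "descriptors" (real ones have 128 dimensions)
const storedDescriptor = [0.1, 0.2, 0.3, 0.4];
const sameFace = [0.12, 0.18, 0.31, 0.39]; // small distance, same person
const otherFace = [0.9, 0.1, 0.8, 0.2];    // large distance, different person

const threshold = 0.6;
console.log(euclideanDistance(storedDescriptor, sameFace) < threshold);  // true
console.log(euclideanDistance(storedDescriptor, otherFace) < threshold); // false
```

<p>Lowering <code>threshold</code> makes the second comparison fail sooner; raising it lets more distant descriptors pass.</p>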
<p>With the main hurdle cleared, we can navigate to our application and enjoy a seamless chat experience.</p>
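<p>Routes that serve the chat application can then check the <code>req.session.mfa</code> flag that <code>faceAuth</code> sets on a successful match. Here is a minimal sketch of such an Express-style guard middleware (the error message and usage route below are illustrative):</p>

```javascript
// Hypothetical Express-style middleware that only lets fully
// authenticated (password + face) sessions through.
const requireMfa = (req, res, next) => {
  // req.session.mfa is set to true by faceAuth on a successful match
  if (req.session && req.session.mfa === true) {
    return next();
  }
  // Anything else is bounced back to the face-authentication step
  res.status(401).json({ err: "Face authentication required" });
};

// Usage (assuming an Express app):
// app.get("/chat", requireMfa, chatHandler);
```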
<p>Additionally, as a safety measure, a rate limiter is in place to keep malicious individuals from brute-forcing their way into the chat application.</p>
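<p>As a simplified stand-in for middleware such as <code>express-rate-limit</code>, the idea behind the limiter can be sketched as an in-memory fixed-window counter (the window size and attempt cap below are illustrative):</p>

```javascript
// Minimal in-memory fixed-window rate limiter: allow at most
// `limit` attempts per `windowMs` milliseconds for each key (e.g. an IP).
const createRateLimiter = (limit = 5, windowMs = 60_000) => {
  const hits = new Map(); // key -> { count, windowStart }

  return (key, now = Date.now()) => {
    const entry = hits.get(key);
    if (!entry || now - entry.windowStart >= windowMs) {
      // Start a fresh window for this key
      hits.set(key, { count: 1, windowStart: now });
      return true; // allowed
    }
    entry.count += 1;
    return entry.count <= limit; // false once over the cap
  };
};

const allowAttempt = createRateLimiter(5, 60_000);
// In a route handler you might call: if (!allowAttempt(req.ip)) reject.
```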
<h2 id="heading-additional-information-and-tips">Additional Information and Tips</h2>
<p>The overall aim of these efforts is a more scalable and secure method of user validation. The matching threshold can easily be tweaked to improve accuracy. Alternatively, the <a target="_blank" href="https://aws.amazon.com/rekognition/">AWS Rekognition</a> service can replace the Face API tool with efficient cloud-powered models. The limitations of facial recognition can also be mitigated by adding other biometric factors, such as fingerprints: each individual's fingerprint is unique, which greatly reduces the risk of user compromise.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>So far, we have walked through the process of building an efficient multi-factor, face-based authentication flow that prevents intruder access to our chat application while prioritizing user privacy. Need an SDK that assures you of a seamless and secure chat experience? Try Stream.io today.</p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ How to Authenticate a User with Face Recognition in React.js ]]>
                </title>
                <description>
                    <![CDATA[ By Hrishikesh Pathak With the advent of Web 2.0, authenticating users became a crucial task for developers.  Before Web 2.0, website visitors could only view the content of a web page – there was no interaction. This era of the internet was called We... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/authenticate-with-face-recognition-reactjs/</link>
                <guid isPermaLink="false">66d45f31182810487e0ce1a6</guid>
                
                    <category>
                        <![CDATA[ Artificial Intelligence ]]>
                    </category>
                
                    <category>
                        <![CDATA[ facial recognition ]]>
                    </category>
                
                    <category>
                        <![CDATA[ privacy ]]>
                    </category>
                
                    <category>
                        <![CDATA[ React ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Security ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ freeCodeCamp ]]>
                </dc:creator>
                <pubDate>Fri, 29 Jul 2022 13:55:01 +0000</pubDate>
                <media:content url="https://www.freecodecamp.org/news/content/images/2022/07/FaceIO-react--1-.jpg" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>By Hrishikesh Pathak</p>
<p>With the advent of Web 2.0, authenticating users became a crucial task for developers. </p>
<p>Before Web 2.0, website visitors could only view the content of a web page – there was no interaction. This era of the internet was called Web 1.0.</p>
<p>But after Web 2.0, people gained the ability to post their own content on a website. And then content moderation became a never-ending task for website owners. </p>
<p>To reduce spam on these websites, developers introduced user authentication systems. Now website moderators can easily know the source of spam and can prevent those spammers from accessing the website further.</p>
<p>If you want to know how to implement content moderation on your website, you can read my article on <a target="_blank" href="https://betterprogramming.pub/detect-and-blur-human-faces-on-your-website-8c4a2d69a538">How to detect and blur faces in your web applications</a>.</p>
<p>Now let's see what we'll be getting into in this tutorial.</p>
<h2 id="heading-what-youll-learn-in-this-tutorial">What You'll Learn in This Tutorial</h2>
<p>In this tutorial, we will discuss different techniques you can use to authenticate users. These include email-password authentication, phone auth, OAuth, passwordless magic links, and finally facial authentication.</p>
<p>Our primary focus will be on authentication via face recognition techniques in this article.</p>
<p>We'll also build a project that teaches you how to integrate facial recognition-based authentication in your React web application. </p>
<p>In this project, we'll use the FaceIO SaaS (software as a service) platform to integrate facial recognition-based authentication. So, make sure you set up a free <a target="_blank" href="https://faceio.net/getting-started">FaceIO account</a> to follow along.</p>
<p>And finally, we'll take a look at the user privacy aspect and discuss how face recognition doesn't harm your privacy. We'll also talk about whether it's a reliable choice for developers in the future.</p>
<p>This article is packed with information, hands-on projects, and discussions. Grab a cup of coffee and a slice of pizza 🍕 and let's get started.</p>
<p>The final version of this project looks like this. Looks interesting? Let's do it then.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2022/07/faceIO-final.gif" alt="Image" width="600" height="400" loading="lazy"></p>
<h2 id="heading-different-types-of-user-authentication-systems">Different Types of User Authentication Systems</h2>
<p>There are many user authentication systems out there that you can choose to implement in your websites. No auth technique is inherently superior or inferior; choosing one comes down to using the right tool for the job.</p>
<p>For example, if you are making a simple landing page to collect emails from users, there is no need to use OAuth. But if you are building a social platform, then using OAuth makes more sense than traditional authentication. You can pull the user's details and profile images directly from the OAuth provider.</p>
<p>If your web application is built around investment-related content or legally binding services, then using phone auth makes more sense. A user can create unlimited email accounts, but they only have so many phone numbers.</p>
<p>Let's take a look at some popular authentication systems so we can see their pros and cons.</p>
<h3 id="heading-email-password-based-authentication">Email-password based authentication</h3>
<p>Email-password-based authentication is the oldest technique for verifying a user. The implementation is also very simple and easy to use. </p>
<p>The pro of this system is you don't need to have a third-party account to log in. If you have an email, whether it is self-hosted or from a service (like Gmail, Outlook, and so on), you are good to go. </p>
<p>The primary con of this system is you need to remember all of your passwords. As the number of websites is constantly growing and we need to log in to most sites to access our profiles, remembering passwords for every site becomes a daunting task for us humans. </p>
<p>Coming up with a unique and strong password is also a huge task. Our brains aren't typically capable of memorizing many random strings of letters and numbers. This is the biggest drawback of email-password-based authentication systems.</p>
<h3 id="heading-phone-authentication">Phone authentication</h3>
<p>Phone authentication is generally a very reliable auth technique to verify a user's identity. As a user typically doesn't have more than one phone number, this can be best suited for assets-related websites where user identity is very important. </p>
<p>But the drawback of this system is people don't want to reveal their phone numbers if they don't trust you. A phone number is much more personal than an email. </p>
<p>One more important factor of phone authentication is its cost. The cost of sending a text message to a user with an OTP is high compared to email. So website owners and developers often prefer to stick with email auth.</p>
<h3 id="heading-oauth-based-authentication">OAuth-based authentication</h3>
<p>OAuth is a relatively new technique compared to the previous two. In this technique, an OAuth provider handles user authentication and supplies useful information on behalf of the user.</p>
<p>For example, if the user has a Google account, they can log in to other sites directly with it. The website gets the user's details from Google itself. This means there's no need to create multiple accounts and remember a password for each of them.</p>
<p>The major drawback of this system is that you as a developer have to trust the OAuth providers, and many people don't want to link all their accounts for privacy reasons. So you'll often see an email-password field in addition to OAuth on most websites.</p>
<h3 id="heading-magic-link-authentication">Magic link authentication</h3>
<p>Magic links solve most of the problems with email-password-based authentication. Here you only have to provide your email address, and you will receive a message with an auth link. Then you open this link in your browser and you are done. No need to remember any passwords.</p>
<p>This type of authentication has gained popularity recently. It saves the user a lot of time, it's cheap, and you don't have to trust a third party as in the case of OAuth.</p>
<h3 id="heading-facial-recognition-authentication">Facial recognition authentication</h3>
<p>Facial recognition is one of the latest authentication techniques, and many developers are adopting it these days. Facial recognition reduces the hassle of entering your email-password or any other user credentials to log in to a web application. </p>
<p>The most important thing is that this authentication system is fast and doesn't need any special hardware. You just need a webcam, which almost all devices have nowadays. </p>
<p>Facial recognition technology uses artificial intelligence to map out the unique facial details of a user and store them as a hash (some random numbers and text with no meaning) to reduce privacy-related issues. </p>
<p>Building and deploying an artificial intelligence-based face recognition model from scratch is not easy and can be very costly for indie developers and small startups. So you can use SaaS platforms to do all this heavy lifting for you. FaceIO and AWS Rekognition are examples of this type of service you can use in your projects.</p>
<p>In this hands-on project, we are going to use FaceIO APIs to authenticate a user via facial recognition in a React web application. FaceIO gives you an easy way to integrate the authentication system with their <code>fio.js</code> JavaScript library.</p>
<h2 id="heading-project-setup">Project Setup</h2>
<p>Before starting, make sure to create a FaceIO account and create a new project. Save the public ID of your FaceIO project. We need this ID later in our project.</p>
<p>To make a React.js project, we will use Vite. To start a Vite project, navigate to your desired folder and execute the following command:</p>
<pre><code class="lang-bash">npm create vite@latest
</code></pre>
<p>Then follow the instructions and create a React app using Vite. Navigate inside the folder and run <code>npm install</code> to install all the dependencies for your project.</p>
<p><img src="https://www.freecodecamp.org/news/content/images/2022/07/Screenshot-from-2022-07-27-10-46-05.png" alt="Image" width="600" height="400" loading="lazy"></p>
<p>After following all these steps, your project structure should look like this:</p>
<pre><code class="lang-bash">.
├── index.html
├── package.json
├── package-lock.json
├── public
│   └── vite.svg
├── src
│   ├── App.css
│   ├── App.jsx
│   ├── assets
│   │   └── react.svg
│   └── main.jsx
└── vite.config.js
</code></pre>
<h2 id="heading-how-to-integrate-faceio-into-our-react-rroject">How to Integrate FaceIO into Our React Project</h2>
<p>To integrate FaceIO into our project, we need to add their CDN script to the <code>index.html</code> file. Open <code>index.html</code> and add the FaceIO CDN script before the <code>root</code> div. To learn more, check out <a target="_blank" href="https://faceio.net/integration-guide">FaceIO's integration guide</a>.</p>
<pre><code class="lang-html"><span class="hljs-tag">&lt;<span class="hljs-name">body</span>&gt;</span>    
    <span class="hljs-tag">&lt;<span class="hljs-name">script</span> <span class="hljs-attr">src</span>=<span class="hljs-string">"https://cdn.faceio.net/fio.js"</span>&gt;</span><span class="hljs-tag">&lt;/<span class="hljs-name">script</span>&gt;</span>
    <span class="hljs-tag">&lt;<span class="hljs-name">div</span> <span class="hljs-attr">id</span>=<span class="hljs-string">"root"</span>&gt;</span><span class="hljs-tag">&lt;/<span class="hljs-name">div</span>&gt;</span>
    <span class="hljs-tag">&lt;<span class="hljs-name">script</span> <span class="hljs-attr">type</span>=<span class="hljs-string">"module"</span> <span class="hljs-attr">src</span>=<span class="hljs-string">"/src/main.jsx"</span>&gt;</span><span class="hljs-tag">&lt;/<span class="hljs-name">script</span>&gt;</span>
<span class="hljs-tag">&lt;/<span class="hljs-name">body</span>&gt;</span>
</code></pre>
<p>Now remove all the code from the <code>App.jsx</code> file to start from scratch. I've kept this tutorial as minimal as possible. So I've only added a heading and two buttons in the UI to demonstrate how the FaceIO facial authentication process works. </p>
<p>Here, one button works as a sign-in button, and the other one works as a log-in button.</p>
<p>The code inside the <code>App.jsx</code> file looks like this:</p>
<pre><code class="lang-jsx"><span class="hljs-keyword">import</span> <span class="hljs-string">"./App.css"</span>;
<span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">App</span>(<span class="hljs-params"></span>) </span>{
  <span class="hljs-keyword">return</span> (
    <span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">section</span>&gt;</span>
      <span class="hljs-tag">&lt;<span class="hljs-name">h1</span>&gt;</span>Face Authentication by FaceIO<span class="hljs-tag">&lt;/<span class="hljs-name">h1</span>&gt;</span>
      <span class="hljs-tag">&lt;<span class="hljs-name">button</span>&gt;</span>Sign-in<span class="hljs-tag">&lt;/<span class="hljs-name">button</span>&gt;</span>
      <span class="hljs-tag">&lt;<span class="hljs-name">button</span>&gt;</span>Log-in<span class="hljs-tag">&lt;/<span class="hljs-name">button</span>&gt;</span>
    <span class="hljs-tag">&lt;/<span class="hljs-name">section</span>&gt;</span></span>
  );
}

<span class="hljs-keyword">export</span> <span class="hljs-keyword">default</span> App;
</code></pre>
<h3 id="heading-how-to-register-a-users-face-using-faceio">How to Register a User's Face using FaceIO</h3>
<p>Working with FaceIO is very fast and easy. As we are using the <code>fio.js</code> library, we have to execute only one helper function to authenticate a user. This <code>fio.js</code> library will do most of the work for us.</p>
<p>To register a user, we initialize our FaceIO object inside a <code>useEffect</code> hook. Otherwise, every state change would re-render the component and reinitialize the <code>faceIO</code> object.</p>
<pre><code class="lang-js"><span class="hljs-keyword">let</span> faceio;
useEffect(<span class="hljs-function">() =&gt;</span> {
    faceio = <span class="hljs-keyword">new</span> faceIO(<span class="hljs-string">"Your Public ID goes here"</span>);
}, []);
</code></pre>
<p>Your FaceIO public ID is located in your FaceIO console. Copy the public ID and paste it here to initialize your FaceIO object.</p>
<p>Now, define a function named <code>handleSignIn()</code>. This function contains our user registration logic. </p>
<p>Inside the function, call the <code>enroll</code> method of the <code>faceIO</code> object. This method is equivalent to the sign-up function in a standard password-based registration system and accepts a <code>payload</code> argument. You can add any user-specific information (for example, their name or email address) to this payload.</p>
<p>This payload information will be stored along with the facial authentication data for future reference. To learn about other optional arguments, check out their <a target="_blank" href="https://faceio.net/integration-guide#enroll">API docs</a>.</p>
<p>When the user clicks our sign-in <code>button</code>, we invoke this <code>handleSignIn()</code> function. The code for user sign-in looks like this:</p>
<pre><code class="lang-js"><span class="hljs-keyword">const</span> handleSignIn = <span class="hljs-keyword">async</span> () =&gt; {
    <span class="hljs-keyword">try</span> {
      <span class="hljs-keyword">let</span> response = <span class="hljs-keyword">await</span> faceio.enroll({
        <span class="hljs-attr">locale</span>: <span class="hljs-string">"auto"</span>,
        <span class="hljs-attr">payload</span>: {
          <span class="hljs-attr">email</span>: <span class="hljs-string">"example@gmail.com"</span>,
          <span class="hljs-attr">pin</span>: <span class="hljs-string">"12345"</span>,
        },
      });

      <span class="hljs-built_in">console</span>.log(<span class="hljs-string">` Unique Facial ID: <span class="hljs-subst">${response.facialId}</span>
      Enrollment Date: <span class="hljs-subst">${response.timestamp}</span>
      Gender: <span class="hljs-subst">${response.details.gender}</span>
      Age Approximation: <span class="hljs-subst">${response.details.age}</span>`</span>);
    } <span class="hljs-keyword">catch</span> (error) {
      <span class="hljs-built_in">console</span>.log(error);
    }
  };

<span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">button</span> <span class="hljs-attr">onClick</span>=<span class="hljs-string">{handleSignIn}</span>&gt;</span>Sign-in<span class="hljs-tag">&lt;/<span class="hljs-name">button</span>&gt;</span></span>
</code></pre>
<p><img src="https://www.freecodecamp.org/news/content/images/2022/07/faceIO-1.png" alt="Image" width="600" height="400" loading="lazy">
<em>FaceIO screen</em></p>
<h3 id="heading-how-to-sign-in-using-face-recognition">How to Sign In using Face Recognition</h3>
<p>After registering the user, you'll need to get them through the authentication (log-in) flow. The <code>fio.js</code> library also makes it very easy to set up a log-in flow using face authentication.</p>
<p>We have to invoke the <code>authenticate</code> method of the <code>faceIO</code> object, which is equivalent to the sign-in function in a standard password-based registration system. All the critical work will be done by the <code>fio.js</code> package.</p>
<p>First, define a new function named <code>handleLogIn()</code> to handle all the log-in logic in our React app. Inside this function, we invoke the <code>authenticate</code> method of the <code>faceIO</code> object, as mentioned earlier.</p>
<p>This method accepts a <code>locale</code> argument, which sets the default language the FaceIO widget uses to interact with users. If you are not sure, you can assign <code>auto</code> to this field.</p>
<p>The <code>authenticate</code> method also takes more optional arguments, such as <code>permissionTimeout</code>, <code>idleTimeout</code>, <code>replyTimeout</code>, and so on. You can check out the API documentation to learn more about them.</p>
<p>We invoke this <code>handleLogIn()</code> function when someone clicks on the <code>Log-in</code> button:</p>
<pre><code class="lang-js"><span class="hljs-keyword">const</span> handleLogIn = <span class="hljs-keyword">async</span> () =&gt; {
    <span class="hljs-keyword">try</span> {
      <span class="hljs-keyword">let</span> response = <span class="hljs-keyword">await</span> faceio.authenticate({
        <span class="hljs-attr">locale</span>: <span class="hljs-string">"auto"</span>,
      });

      <span class="hljs-built_in">console</span>.log(<span class="hljs-string">` Unique Facial ID: <span class="hljs-subst">${response.facialId}</span>
          PayLoad: <span class="hljs-subst">${response.payload}</span>
          `</span>);
    } <span class="hljs-keyword">catch</span> (error) {
      <span class="hljs-built_in">console</span>.log(error);
    }
  };

<span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">button</span> <span class="hljs-attr">onClick</span>=<span class="hljs-string">{handleLogIn}</span>&gt;</span>Log-in<span class="hljs-tag">&lt;/<span class="hljs-name">button</span>&gt;</span></span>
</code></pre>
<p>Our user authentication project using FaceIO and React is now complete! You learned how to register and log in a user. As you can see, the process is fairly simple compared to implementing email-password-based auth or the other authentication methods we discussed earlier in this article.</p>
<p>Now you can style all the <code>jsx</code> elements using CSS. I didn't add CSS here to reduce complexity in this project. If you are curious, you can take a look at my <a target="_blank" href="https://gist.github.com/hrishiksh/bf76c98e05f6e85eb46d7e736bae351d">GitHub gist</a>.</p>
<p>If you want to host this React FaceIO project for free, you can check out this article on <a target="_blank" href="https://hrishikeshpathak.com/blog/deploy-nextjs-cloudflare-pages">how to deploy your React and Next.js project on Cloudflare Pages</a>.</p>
<h2 id="heading-how-to-use-the-faceio-rest-api">How to Use the FaceIO REST API</h2>
<p>Besides providing widgets via the <code>fio.js</code> library, FaceIO also provides <a target="_blank" href="https://faceio.net/rest-api">REST APIs</a> to streamline the authentication process. </p>
<p>Every application in the FaceIO console has an API key. You can use this API key to access the FaceIO REST API endpoints. The base URL for the REST API is <code>https://api.faceio.net/</code>.</p>
<p>The URL schema accepts URL parameters like this: <code>https://api.faceio.net/cmd?param=val&amp;param2=val2</code>. Here <code>cmd</code> is an API endpoint and <code>param</code> is an endpoint parameter, if the endpoint accepts any.</p>
<p>Using the REST API endpoints, you can automate various tasks in your backend.</p>
<ol>
<li>You can delete a face ID on a user's request.</li>
<li>You can attach a payload with a face ID.</li>
<li>You can change the PIN associated with a face ID.</li>
</ol>
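<p>Request URLs in that schema can be composed with the standard WHATWG URL API. The endpoint and parameter names below are illustrative placeholders, not confirmed FaceIO endpoints, so check the REST API docs for the exact commands:</p>

```javascript
// Build a FaceIO-style REST call URL of the form
// https://api.faceio.net/cmd?param=val&param2=val2
const buildApiUrl = (cmd, params) => {
  const url = new URL(cmd, "https://api.faceio.net/");
  for (const [name, value] of Object.entries(params)) {
    url.searchParams.set(name, value);
  }
  return url.toString();
};

// Example: a hypothetical facial-ID deletion call
// (endpoint and parameter names are illustrative)
const deleteUrl = buildApiUrl("deletefacialid", {
  fid: "some-facial-id",
  key: "YOUR_API_KEY",
});
// -> "https://api.faceio.net/deletefacialid?fid=some-facial-id&key=YOUR_API_KEY"
```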
<p>This REST API is intended to be used purely on the server side. Make sure you don't expose it to clients. It's important that you read the following Privacy and Security sections to learn more about this.</p>
<h2 id="heading-how-to-use-faceio-webhooks">How to Use FaceIO WebHooks</h2>
<p>Webhooks are event-driven communication systems among servers. You can use this <a target="_blank" href="https://faceio.net/webhooks">webhook feature of FaceIO</a> to update and sync your backend with new events happening in your front-end web application. </p>
<p>Webhook events fire on new user enrollment, successful facial authentication, facial ID deletion, and so on.</p>
<p>You can set up FaceIO webhooks in your project console. A typical webhook call from FaceIO lasts for up to 6 seconds and contains all the information about a specific event in JSON format. It looks like this:</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"eventName"</span>:<span class="hljs-string">"String - Event Name"</span>,
  <span class="hljs-attr">"facialId"</span>: <span class="hljs-string">"String - Unique Facial ID of the Target User"</span>,
  <span class="hljs-attr">"appId"</span>:    <span class="hljs-string">"String - Application Public ID"</span>,
  <span class="hljs-attr">"clientIp"</span>: <span class="hljs-string">"String - Public IP Address"</span>,
  <span class="hljs-attr">"details"</span>: {
     <span class="hljs-attr">"timestamp"</span>: <span class="hljs-string">"Optional String - Event Timestamp"</span>,
     <span class="hljs-attr">"gender"</span>:    <span class="hljs-string">"Optional String - Gender of the Enrolled User"</span>,
     <span class="hljs-attr">"age"</span>:       <span class="hljs-string">"Optional String - Age of the Enrolled User"</span>
   }
}
</code></pre>
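<p>On the receiving end, your backend can dispatch on the <code>eventName</code> field of this JSON body. Here is a hedged sketch of such a dispatcher; the event name strings below are hypothetical placeholders, so check your FaceIO console for the actual values it sends:</p>
<pre><code class="lang-javascript">// Sketch of a server-side dispatcher for FaceIO webhook payloads.
// The event names below are hypothetical placeholders.
function handleFaceioEvent(event) {
  switch (event.eventName) {
    case "enrollment":
      return "sync new user " + event.facialId;
    case "authentication":
      return "record login for " + event.facialId;
    case "deletion":
      return "purge local data for " + event.facialId;
    default:
      return "ignore " + event.eventName;
  }
}

handleFaceioEvent({ eventName: "deletion", facialId: "abc123" });
</code></pre>
<p>In a real app, this function would sit inside your webhook route handler, after you verify that the call really came from FaceIO.</p>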
<h2 id="heading-privacy-and-faceio">Privacy and FaceIO</h2>
<p>Privacy matters more than ever to users. As big corporations use your data for their own benefit, questions arise about whether these face recognition techniques handle privacy in a valid and legitimate way.</p>
<p>FaceIO as a service follows all the privacy guidelines and gets user consent before requesting camera access. Even if the developer wanted it to, FaceIO doesn't scan faces without getting consent. Users can easily opt out of the system and delete their facial data from the server.</p>
<p>FaceIO is CCPA and GDPR compliant. As a developer, you can release this facial authentication system anywhere in the world without facing privacy issues. You can read this article to learn more <a target="_blank" href="https://faceio.net/apps-best-practice">about FaceIO privacy best practices</a>.</p>
<h2 id="heading-faceio-security">FaceIO Security</h2>
<p>The security of a web application is an important topic to consider. As a developer, you are responsible for the security of a site or application's users.</p>
<p>FaceIO follows some important and serious security guidelines for user data protection. FaceIO hashes all the unique facial data of a user along with the payload we specified earlier. So the stored information is nothing but random strings that can't be reverse engineered.</p>
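<p>To see why hashed records can't be reversed, here is a generic one-way hashing illustration using Node's built-in <code>crypto</code> module. This is only a conceptual sketch of the idea, not FaceIO's actual scheme:</p>
<pre><code class="lang-javascript">// Conceptual illustration of one-way hashing, not FaceIO's scheme.
const crypto = require("crypto");

function hashRecord(facialData, payload) {
  return crypto
    .createHash("sha256")
    .update(facialData + JSON.stringify(payload))
    .digest("hex");
}

// The digest is a fixed-length hex string. It is deterministic,
// but it cannot be turned back into the original facial data.
hashRecord("face-template-bytes", { userId: 42 });
</code></pre>
<p>Because the facial data is hashed together with the payload, even identical payloads stored for different users produce unrelated strings.</p>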
<p>FaceIO outlines some very important <a target="_blank" href="https://faceio.net/security-best-practice">security guidelines</a> for developers. Their security guide focuses on adding a strong PIN code to protect user data. FaceIO also rejects covered faces so that no one can impersonate another user.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>If you've read this far, thank you for your time and effort. Make sure to follow along with the hands-on tutorial so you can fully grasp the topic. </p>
<p>The project should be approachable if you follow all the steps. If you make something out of it, show me on <a target="_blank" href="https://twitter.com/hrishikshpathak">Twitter</a>. If you have any questions, please ask. I will be happy to help you. Till then, have a good day.</p>
 ]]>
                </content:encoded>
            </item>
        
            <item>
                <title>
                    <![CDATA[ From Augmented Reality to emotion detection: how cameras became the best tool to decipher the world ]]>
                </title>
                <description>
                    <![CDATA[ By Avi Ashkenazi The camera is finally on stage to help solve user experience (UX) design, technology, and communication issues. Years after the Kinect was trashed and Google Glass failed, there is now new hope. The impressive technological array tha... ]]>
                </description>
                <link>https://www.freecodecamp.org/news/facial-recognition-as-aux-driver-8a49dfd477ca/</link>
                <guid isPermaLink="false">66c34a3ca124e2df05195f34</guid>
                
                    <category>
                        <![CDATA[ Apple ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Augmented Reality ]]>
                    </category>
                
                    <category>
                        <![CDATA[ Design ]]>
                    </category>
                
                    <category>
                        <![CDATA[ facial recognition ]]>
                    </category>
                
                    <category>
                        <![CDATA[ UX ]]>
                    </category>
                
                <dc:creator>
                    <![CDATA[ freeCodeCamp ]]>
                </dc:creator>
                <pubDate>Thu, 14 Sep 2017 11:59:53 +0000</pubDate>
                <media:content url="https://cdn-media-1.freecodecamp.org/images/1*SaZceHMxVeG7iDRWLeDn6A.jpeg" medium="image" />
                <content:encoded>
                    <![CDATA[ <p>By Avi Ashkenazi</p>
<p>The camera is finally on stage to help solve user experience (UX) design, technology, and communication issues.</p>
<p>Years after the <a target="_blank" href="https://en.wikipedia.org/wiki/Kinect">Kinect</a> was trashed and <a target="_blank" href="https://en.wikipedia.org/wiki/Google_Glass">Google Glass</a> failed, there is now new hope. The impressive technological array that <a target="_blank" href="https://en.wikipedia.org/wiki/Apple_Inc.">Apple</a> minimized from a <a target="_blank" href="https://en.wikipedia.org/wiki/PrimeSense">PrimeSense</a> to the <a target="_blank" href="https://www.apple.com/ca/iphone-x/?afid=p238%7CskwMTeFkg-dc_mtid_20925xpb40345_pcrid_220916805480_&amp;cid=wwa-ca-kwgo-iphone-slid-">iPhone X</a> is the beginning of emotion-dependent interactions.</p>
<p>The technology isn’t new. But now it’s commercialized, and it gives developers access to indispensable information.</p>
<p><img src="https://cdn-media-1.freecodecamp.org/images/FUI8GMHXrVH8BJMvlFPAGRxPfeU17rUJUyiE" alt="Image" width="800" height="450" loading="lazy"></p>
<p>Recently, Mark Zuckerberg mentioned that much of Facebook’s focus will be on the camera and its surrounding environment. <a target="_blank" href="https://www.snapchat.com/l/en-gb/">Snapchat</a> has defined itself as a camera company. Apple and Google are also heavily investing in cameras. <strong>The camera has tremendous power that we have not yet tapped into. It has the power to detect emotions.</strong></p>
<h3 id="heading-inputs-need-to-be-easy-natural-and-effortless">Inputs need to be easy, natural, and effortless</h3>
<p><img src="https://cdn-media-1.freecodecamp.org/images/fJKL3RDfpjFK19X2W8Lthc1tFEadL9iqN9d0" alt="Image" width="743" height="216" loading="lazy"></p>
<p>When Facebook first introduced emojis as an enhanced reaction to <em>Like,</em> I realized that they were onto something. Facebook recently added five emotion reactions, which helped Facebook better understand its users’ emotional responses to its content. I argue that the emojis are a glorified form of the same thing, but one that works better than anything else.</p>
<p>In the past, Facebook only had the <em>Like</em> button, while <a target="_blank" href="https://www.youtube.com/">YouTube</a> had the <em>Like</em> and <em>Dislike</em> buttons. But these are not enough to track emotions, and they do not provide much value to researchers and advertisers. Most people express their emotions in comments, and yet there are more <em>Like</em>s than comments.</p>
<p>The comments are text based or even presented with an image, which is harder to analyze. That is because there are many contextual connections the algorithm needs to guess. For example, how familiar is the person who reacts to a post with the person who posted it, and vice versa? How is the person connected to the specified subject?</p>
<p>Is there subtext, slang, or anything related to the person’s experience? Is it a continued conversation from the past? Facebook did a wonderful job of keeping the conversation positive. Facebook prevented the <em>Dislike</em> button from pulling focus, which could have discouraged people from creating and sharing content. Facebook kept it positively pleasant.</p>
<p>Now I would compare Facebook to a glorified forum. Users can reply to comments or to the emojis. Users can <em>Like</em> a <em>Like</em>. Yet it is still very hard to know what people are feeling. Most people who read don’t leave a comment. What do readers feel when they read a post, then?</p>
<h3 id="heading-the-old-user-experience-for-cameras">The old user experience for cameras</h3>
<p>What do you do with a camera? Take pictures and videos, and that’s about it. There has been huge development in camera apps. There are many features that are related to the surroundings of the main use case, for example high dynamic range (HDR), slow motion, portrait mode, etc.</p>
<p><img src="https://cdn-media-1.freecodecamp.org/images/V6KJupNV99PBW08gLaLSxYH6xtXFCnpWWgce" alt="Image" width="600" height="234" loading="lazy">
<em><a target="_blank" href="https://twitter.com/lukew/status/522056776477200384">image source</a></em></p>
<p>Based on the enormous number of pictures users generate, a new wave of smart galleries, photo processing, and metadata apps has been created.</p>
<p><img src="https://cdn-media-1.freecodecamp.org/images/sSbmUt0PEag0Eo7G97k7vHdZK4TNAHhbMluu" alt="Image" width="800" height="511" loading="lazy">
<em>Photography from the Mac App Store</em></p>
<p>However, recently the focus has changed. It is now on the life-integrated camera, which combines the strongest traits and best use cases of mobile phones. The next generation of cameras will be fully integrated into our lives and could replace all the input icons in a messaging app (microphone, camera, and location).</p>
<p>The camera is one of the three components that have been consistently developed at a dizzying pace; the screen and the processor are the other two. Every phone that has come out has pushed the limits, year after year. The improvements made to cameras include megapixels, image stabilization, aperture, speed and, as mentioned above, the apps.</p>
<p>Let’s look at how a few of these products have evolved.</p>
<p><img src="https://cdn-media-1.freecodecamp.org/images/fXI99nmPeAu7Jt9PyGqcTBV81iSH8QuzzUPr" alt="Image" width="702" height="143" loading="lazy">
<em>This is just a glimpse of the upgrade to megapixels (MP), not including double cameras, flash, etc. There have been many software changes.</em></p>
<p>Much of the development was focused on the camera at the back of the phone because, at least at first, the front camera was thought to be useful for video calls only. However, selfie culture and Snapchat changed that. Snapchat’s masks, which were later copied by everyone else, are a huge success. Face masks are not new. Google introduced them a while ago, but Snapchat was effective at growing the use of masks.</p>
<h3 id="heading-highlights-from-memory-lane">Highlights from memory lane</h3>
<p>In December 2009, Google introduced <a target="_blank" href="https://en.wikipedia.org/wiki/Google_Goggles">Google Goggles</a>. It was the first time that users could use their phone to get information about the environment around them. The information was mainly about landmarks initially.</p>
<p><img src="https://cdn-media-1.freecodecamp.org/images/oHjMlhSBeYeeT0W2rh-NBhguf30uQ19daYyK" alt="Image" width="600" height="425" loading="lazy"></p>
<p>In November 2011, Samsung’s <a target="_blank" href="https://en.wikipedia.org/wiki/Galaxy_Nexus">Galaxy Nexus</a> introduced facial recognition as a way to unlock phones. Like many things done for the first time, it wasn’t very good and was later scrapped.</p>
<p><img src="https://cdn-media-1.freecodecamp.org/images/lA5hJhW2P9uz0i9H4qU0W6mCGjdpgETQk8XL" alt="Image" width="590" height="460" loading="lazy">
<em>Samsung (Google) Nexus</em></p>
<p>In February 2013, Google released <a target="_blank" href="https://en.wikipedia.org/wiki/Google_Glass">Google Glass</a>. It had more use cases because it was able to receive input not just from the camera but also from other sources, like voice. Google Glass was always there, but it failed to gain traction because it was too expensive, looked unfashionable, and triggered a backlash from the public. It was just not ready for prime time.</p>
<p><img src="https://cdn-media-1.freecodecamp.org/images/Mi6TZll-7VCgLIMC1i0fmEVytAwQkAio4gh2" alt="Image" width="800" height="435" loading="lazy">
<em>Google Glass 1</em></p>
<p>So far, devices have had only limited information at their disposal: audio-visual input, GPS, and historical data. Google Glass was further limited by displaying its information on a small screen positioned near the user’s eye. The screen blocked users from looking at anything else. Putting this technology on a phone for external use is not just a technological limitation but also a physical one.</p>
<p>When you focus on your phone, you cannot see anything else. Your field of view is limited. This is similar to the field of view in user experience principles for virtual reality (VR). That’s why there are cities that have created routes for people who use their phones while walking and have set up traffic lights that help people walk and text. A premise like <a target="_blank" href="https://en.wikipedia.org/wiki/Microsoft_HoloLens">Microsoft’s HoloLens</a> is more aligned with the spatial environment and can actually help users while they move and use their phones, rather than absorb their attention and put them in danger.</p>
<p><img src="https://cdn-media-1.freecodecamp.org/images/lJJIwGw9jsElIfX8CLQldwFEGo7kmaxzIS92" alt="Image" width="570" height="285" loading="lazy">
<em><a target="_blank" href="https://en.wikipedia.org/wiki/Kinect">Kinect</a>, powered by PrimeSense technology later bought by Apple Inc.</em></p>
<p><img src="https://cdn-media-1.freecodecamp.org/images/UYGuaa5wqLz7fTpakaH8PkurU8lC-WLy9KOB" alt="Image" width="800" height="450" loading="lazy">
<em>Microsoft HoloLens</em></p>
<p>In July 2014, Amazon introduced the <a target="_blank" href="https://en.wikipedia.org/wiki/Fire_Phone">Fire Phone</a>. It featured four cameras at the front of the phone. This was a breakthrough, even though it didn’t succeed. The four frontal cameras were used for scrolling and created 3D effects based on the accelerometer and users’ gaze. It was the first time that a phone used its front cameras as an input method from users.</p>
<p><img src="https://cdn-media-1.freecodecamp.org/images/mviXdikNehcnYX6pczcR5YZh5Sq42EXowt8w" alt="Image" width="800" height="450" loading="lazy">
<em>The Fire Phone</em></p>
<p>In August 2016, <a target="_blank" href="https://en.wikipedia.org/wiki/Samsung_Galaxy_Note_7">Samsung’s Note 7</a> was launched. It allowed users to unlock their phones with iris scanning. Samsung resurrected a facial-recognition technology that had rested on the shelf for six years. Unfortunately, just looking at the tutorial can be vexing. Samsung didn’t do much user experience testing for that feature.</p>
<p>It is disturbing to hold this huge phone at a 90° angle to your face. It is not something anyone should be doing while walking on the street. It could work nicely for Saudi women who cover their faces. But because of manufacturing defects, many Note 7 phones overheated, combusted, or exploded, and the concept of iris scanning was put on hold for another full year until the <a target="_blank" href="http://www.samsung.com/ca/smartphones/galaxy-note8/">Note 8</a> came out.</p>
<p><img src="https://cdn-media-1.freecodecamp.org/images/ARJMQFIYO3SSPkWqvyTdLAhYYCzAb7zMJclk" alt="Image" width="615" height="410" loading="lazy">
<em>From Samsung’s keynote</em></p>
<p>But by that time, no one mentioned iris scanning. With the Note 8, Samsung presented iris scanning as just another way of unlocking the phone, in conjunction with the fingerprint sensor. That’s probably because the feature was not good enough, or Samsung wasn’t able to make a decision (similar to the release of the <a target="_blank" href="https://en.wikipedia.org/wiki/Samsung_Galaxy_S6">Galaxy 6</a> and <a target="_blank" href="http://www.samsung.com/global/galaxy/galaxys6/galaxy-s6-edge/">6 Edge</a>). For a product to succeed, it needs to have multiple functions, otherwise it risks being forgotten.</p>
<p>Google took a break, and then in July 2017 released the second version of Google Glass as a business-to-business product. The use cases became more specific for some industries.</p>
<p><img src="https://cdn-media-1.freecodecamp.org/images/as8KODosoGmf0DLqu3BxyagxqR0Z4TrsKO9r" alt="Image" width="800" height="448" loading="lazy">
<em>Google Glass 2</em></p>
<p>Now Google is about to release the <a target="_blank" href="https://en.wikipedia.org/wiki/Google_Lens">Google Lens</a> to bring the initial Goggles use case to the present. It’s Google’s effort to learn how to use visual information with additional context, and to figure out what product to develop next. It appears that Google is leaning towards a camera that users can wear.</p>
<p><img src="https://cdn-media-1.freecodecamp.org/images/dLR9bU8pa3T27s1RZIW-WBMFs3ckufaR-lD5" alt="Image" width="800" height="445" loading="lazy">
<em>Google Lens App</em></p>
<p>There are other companies that are exploring visual input as well. For example, <a target="_blank" href="https://www.pinterest.ca/">Pinterest</a> is seeing a huge demand for its visual search lens, which its users are using to search for items to buy and to help people curate products and services online.</p>
<p><img src="https://cdn-media-1.freecodecamp.org/images/xbxqsL71dpEQ6ntKGhbFCnc26uaRkqU5oqsb" alt="Image" width="800" height="385" loading="lazy">
<em>Pinterest Visual Search</em></p>
<p><a target="_blank" href="https://www.spectacles.com/">Snapchat’s spectacles</a> allow users to record short videos easily (even though the upload process is cumbersome).</p>
<p><img src="https://cdn-media-1.freecodecamp.org/images/XOSBAIIZMPeZueZqGGjjFr-TnVbrvYqZF9tr" alt="Image" width="800" height="371" loading="lazy">
<em>Snap’s Specs</em></p>
<p>Now facial recognition is also on the Note 8 and <a target="_blank" href="http://www.samsung.com/ca/smartphones/galaxy-note8/">Galaxy 8</a>, but this feature is not panning out as well as hoped.</p>
<p><img src="https://cdn-media-1.freecodecamp.org/images/gF2-zI1OCAi6Tk19SkfDzu3ToUSOGWKX4ko2" alt="Image" width="770" height="370" loading="lazy">
<em>Galaxy S8 facial recognition</em></p>
<p>To check out a facial recognition demonstration, click <a target="_blank" href="https://twitter.com/MelTajon/status/904058526061830144/video/1">here</a>.</p>
<p>Apple is slow to adopt new technologies relative to its competitors. But on the other hand, Apple commercializes them, as it did with the Apple <a target="_blank" href="https://www.apple.com/ca/apple-watch-series-1/">Watch</a>. The iPhone X is all about facial recognition and an edge-to-edge screen. There is no better way to make people use this feature than by removing all other options (like <a target="_blank" href="https://en.wikipedia.org/wiki/Touch_ID">Touch ID</a>). It’s not surprising: Apple did the same last year with wireless audio (by removing the headphone jack) and <a target="_blank" href="https://en.wikipedia.org/wiki/USB-C">USB C</a> on the <a target="_blank" href="https://en.wikipedia.org/wiki/MacBook_Pro">MacBook Pro</a> (by removing everything else).</p>
<p><img src="https://cdn-media-1.freecodecamp.org/images/R6-NvHnItPrCBGOhUvgAFI0UooTXbri8ptfL" alt="Image" width="800" height="533" loading="lazy"></p>
<p>There is a much bigger reason why Apple chose this technology at this time. It has to do with its augmented reality efforts.</p>
<p><img src="https://cdn-media-1.freecodecamp.org/images/M2rXsHl5mhJykSZHXfG0gAftwt7zbxUmTlDT" alt="Image" width="800" height="443" loading="lazy"></p>
<p>Face ID has challenges that include recognizing users who wear the <a target="_blank" href="https://en.wikipedia.org/wiki/Niq%C4%81b">Niqāb</a> (face covers), users who have had plastic surgery, and users who are changing physically because they are growing. But the bigger picture here is much more interesting. This is the first time that users can do something they naturally do, with no effort, while producing data that is meaningful for the future of technology. I believe that a screen that can read fingerprints is better, and it appears that Samsung is heading in that direction (although rumour has it that Apple tried and failed).</p>
<p><img src="https://cdn-media-1.freecodecamp.org/images/95Zs7P65WrAdId1giWUCRpwfRWntXHeJComE" alt="Image" width="450" height="500" loading="lazy"></p>
<h3 id="heading-so-where-is-this-going-whats-the-target">So where is this going? What’s the target?</h3>
<p>In the past, companies used special glasses and devices to perform user testing. The only output they could give was <a target="_blank" href="https://en.wikipedia.org/wiki/Heat_map">heat maps</a>. They weren’t able to document what users were focusing on, or their emotions and reactions.</p>
<p><img src="https://cdn-media-1.freecodecamp.org/images/Fhb83q3pVYAtpQm2Raq73j3YP6tzlmDmtqIJ" alt="Image" width="480" height="240" loading="lazy">
<em>Tobii Pro glasses are one example</em></p>
<p>Based on tech trends, it appears the future involves augmented reality and virtual reality. But in my opinion, it also includes audio, 3D sound, and visual inputs combined. This would be a wonderful experience, allowing users to look at anything, anywhere, and get information at the same time.</p>
<p>What if we were able to know where users are looking and what they are focusing on? For years, this is something that marketing and design professionals have tried to capture and analyze. What could do that better than the sensor array a device like the <a target="_blank" href="https://www.apple.com/ca/iphone-x/">iPhone X</a> has as a starting point? Later on, this could evolve into glasses that can see what the user is focused on.</p>
<h3 id="heading-reactions-are-powerful-and-addictive"><strong>Reactions are powerful and addictive</strong></h3>
<p>Reactions help people converse and increase retention and engagement. Some apps offer reactions to posts as messages that can be sent to friends. There are funny videos on YouTube that show the reactions of people who watch videos. There is even a TV show, called <a target="_blank" href="https://en.wikipedia.org/wiki/Gogglebox">Gogglebox</a>, that is dedicated to showing people watching TV.</p>
<p>At <a target="_blank" href="https://events.google.com/io/">Google I/O</a>, the annual developer festival, Google opened up the option to pay creators on its platform. It’s like what the brilliant <a target="_blank" href="https://en.wikipedia.org/wiki/Patreon">Patreon</a> site is doing, but in a much more dominant way. <a target="_blank" href="https://www.youtube.com/watch?v=b9szyPvMDTk">SuperChat</a> is a way to stand out from the crowd and grab the creator’s attention.</p>
<p><a target="_blank">In Chris Harrison’s student project from 2009</a>, Harrison created a keyboard with pressure-sensing keys. Depending on the force with which users type, the keyboard reads the users’ emotions and determines whether they are angry or excited; the letters get bigger as a result. Now imagine combining it with a camera that sees users’ facial expressions while they are typing, since people tend to express their emotions while typing a message.</p>
<h3 id="heading-how-would-such-a-ux-look-like"><strong>What would such a UX look like?</strong></h3>
<p>Consider the pairing of a remote and center point in virtual reality. The center is our focus, but we also have a secondary focus point, which is where the remote points are. However, this type of user experience cannot work in augmented reality. To take advantage of augmented reality, which is a new focus for Apple, the user’s focus must be known.</p>
<p><img src="https://cdn-media-1.freecodecamp.org/images/zXxAcez4iyKwXaOLzwLMZwiHoDqe9zJDlRzW" alt="Image" width="800" height="600" loading="lazy">
<em><a target="_blank" href="https://blog.kickpush.co/beyond-reality-first-steps-into-the-unknown-cbb19f039e51">illustration source</a></em></p>
<p>What started as Apple’s <a target="_blank" href="https://developer.apple.com/arkit/">ARKit</a> and Google’s <a target="_blank" href="https://venturebeat.com/2017/08/29/google-launches-arcore-sdk-in-preview-ar-on-android-phones-no-extra-hardware-required/">ARCore SDK</a> will shape the future of development, because of the amazing output and input both can get from the front and back cameras <strong>combined</strong>. This will allow for a greater focus on the input.</p>
<h3 id="heading-a-more-critical-view-on-future-developments"><strong>A more critical view on future developments</strong></h3>
<p><img src="https://cdn-media-1.freecodecamp.org/images/k8lObTtpncpJAnbB97YSSeYVYkKiV2TYZ8yw" alt="Image" width="603" height="222" loading="lazy"></p>
<p>While Apple opened the way for facial recognition and triggered reactions like <a target="_blank" href="https://www.theverge.com/2017/9/12/16290210/new-iphone-emoji-animated-animoji-apple-ios-11-update">Animoji</a>, it is going to get interesting when other organizations start implementing <a target="_blank" href="https://en.wikipedia.org/wiki/Face_ID">Face ID</a>. Currently, it is manifested in a basic and harmless way, but the goal remains to get more information: information that will be used to track us and sell products to us. It will also allow us to learn more about ourselves and gather our emotional data.</p>
<p><img src="https://cdn-media-1.freecodecamp.org/images/2Nn-xjcAIB6HZ8zRuxDvV4TpCHKKhfwnSk6X" alt="Image" width="800" height="533" loading="lazy">
<em>Animoji</em></p>
<p>It is important to say that the front camera doesn’t come alone. It’s the expected result of Apple buying <a target="_blank" href="https://en.wikipedia.org/wiki/PrimeSense">PrimeSense</a>. The array of front-facing technology includes an infrared camera, a depth sensor, and so on. (I think they could do well with a heat sensor too.) It’s not that someone will keep videos of our faces from the phone, but rather that a scraper will document all the information about our emotions.</p>
<p><img src="https://cdn-media-1.freecodecamp.org/images/rky60ojrRD-nPMHs9J9EmbHmIx3aBmNK2DLp" alt="Image" width="800" height="454" loading="lazy">
<em>Can’t be fooled by a mask — from Apple’s keynote</em></p>
<p><img src="https://cdn-media-1.freecodecamp.org/images/D4T1EvmyV9cvV08Ca3MJW2FUsxc3C7rYAaSZ" alt="Image" width="698" height="400" loading="lazy">
<em>Or funny enough</em></p>
<h3 id="heading-summary"><strong>Summary</strong></h3>
<p>It’s exciting that augmented reality now has algorithms that can read faces. There are many books that talk about how to identify facial reactions, but now it’s time for technology to do this. It will be wonderful for many reasons. For example, robots can now see how we feel and react, or, with glasses, we can give more context about what we need them to do. When relating to computers, it’s better to look at a combination of elements, because that helps the machine understand you better.</p>
<p>The things that you can do if you have the information about what the user is focusing on are endless. It’s the dream of every person who works with technology.</p>
<p>This blog post was originally published <a target="_blank" href="https://superavi.com/facial-recognition-as-ux-driver-from-ar-to-emotion-detection-how-the-camera-turned-to-be-the-best-tool-to-decipher-the-world/">here</a>.</p>
 ]]>
                </content:encoded>
            </item>
        
    </channel>
</rss>
