Liveness application for mobile and web user registration and authentication. Over 200,000 checks per day worldwide.
As easy as taking a selfie.
Liveness Mobile/Web SDK: for action control and video recording.
Liveness Server API: for capturing the best shot and protecting against biometric attacks.
We have developed client-side mobile and web SDKs for embedding into your web site and mobile application. We recommend using our SDKs for the following purposes:
In some cases, external conditions do not allow capturing a clear photo or video of the face for further biometric identification and verification.
Since users may run the Liveness check in a variety of situations (outdoors, in the dark, on the move, in bright light), our SDK automatically detects shooting conditions (darkness, blur, flashing), illuminates the face when there is not enough light, and otherwise recommends choosing more favorable shooting conditions.
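The condition checks described above can be illustrated with a minimal sketch. This is not the Oz SDK API: the function names, thresholds, and the brightness/sharpness heuristics below are assumptions chosen only to show the idea of rejecting dark or blurred frames before a Liveness check.

```python
# Illustrative sketch only: a simplified pre-check of shooting conditions.
# Thresholds and function names are assumptions, not the actual Oz SDK API.

def mean_brightness(gray):
    """Average pixel intensity of a grayscale image (0-255 values)."""
    return sum(sum(row) for row in gray) / (len(gray) * len(gray[0]))

def laplacian_variance(gray):
    """Variance of a 4-neighbour Laplacian; low values suggest a blurred frame."""
    h, w = len(gray), len(gray[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (gray[y - 1][x] + gray[y + 1][x] + gray[y][x - 1]
                   + gray[y][x + 1] - 4 * gray[y][x])
            vals.append(lap)
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

def check_conditions(gray, dark_thr=40, blur_thr=50):
    """Return a hint about shooting conditions for a single grayscale frame."""
    if mean_brightness(gray) < dark_thr:
        return "too_dark"   # e.g. turn on screen illumination
    if laplacian_variance(gray) < blur_thr:
        return "blurred"    # e.g. ask the user to hold the phone still
    return "ok"
```

A real SDK would run such checks on live camera frames and trigger UI hints; here a frame is just a nested list of pixel intensities.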
The purpose of Liveness verification is to protect against spoofing attacks and select the best shot for further biometric identification. Face recognition algorithms show the best results on images that correspond to the VISA or VISABORDER format.
The less constrained the pose and angle at which a face is photographed, the more likely face recognition algorithms are to fail. If the face is cut off, rotated, tilted, or too far from the camera, it may not be found at all, or the recognition error will increase significantly.
Using our SDK, you can always be sure that the entire biometric identification or authentication process will proceed as quickly as possible.
The security of the biometric identification/authentication process depends not only on the threats of spoofing attacks but also on specific attacks by software tools that create a virtual camera and “replace” the original video.
Such attacks exploit vulnerabilities in the process of filming and sending media content, attacking the recording device itself.
We have implemented footage originality verification tools in our SDKs to confirm that the footage is genuine and was captured by the original camera.
Our algorithms are based on deep machine learning: they check shots from the video and track dozens of parameters (presence of glare and reflections, micromotions, pulse, etc.).
We have trained our system on tens of thousands of attacks. We also work with manufacturers of 3D masks and are constantly searching for new attack samples:
This approach allowed our Oz Liveness solution to pass ISO 30107 certification at the NIST-accredited iBeta laboratory with a result of 100%. View detailed report.
iBeta is nationally accredited as a test lab by the National Voluntary Lab Accreditation Program (NVLAP Testing Lab Code 200962) to the requirements of ISO/IEC 17025:2017 (competence of testing and calibration laboratories).
In 2011, iBeta was accredited by NIST under the National Voluntary Laboratory Accreditation Program (NVLAP) for Biometric Testing under NIST Handbook 150-25 and became an expert in the field of biometrics.
In addition, iBeta's procedures against the ISO 30107-3 Presentation Attack Detection (PAD) standard were audited by its accrediting body, and iBeta's Scope of Accreditation was increased to include conformance testing to the ISO 30107-3 standard in April 2018.
As the subjects were cooperative, each species appeared as a natural face duplication (meeting the requirements of Properties 1 and 2). All of the face features captured in the artefacts contained extractable features, as they were acquired from the genuine subject (meeting the requirement of Property 3). In some cases, hats and glasses were added to the artefact during the presentation attack.
How is testing done?
The iBeta laboratory prepares artefacts for the attacks. Artefacts for the testing consisted of six species:
2D Photo on matte paper with edges cut
2D Photo on matte paper presented on a curved surface
2D Mask with eyes cut out
Photo displayed on laptop or iPad
3D Handmade paper mask
Video displayed on laptop or iPad
Using the attack artefacts, a sequence of Liveness checks is performed: one genuine check and three attacks. This sequence is repeated 50 times for each attack.
On iPhone 6s
Of the 300 genuine Liveness checks performed with a real face, 299 were successful.
The false rejection rate of Oz Liveness was therefore below 1%. At the same time, not a single attack was missed, i.e., the accuracy of detecting attacks is 100%. VIEW DETAILED REPORT
300 out of 300 checks of a live person were completed successfully, and all attacks were detected with 100% accuracy.
Thus, the false rejection rate of Oz Liveness is 0%, and the attack detection accuracy is 100%.
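The reported numbers map directly onto the ISO 30107-3 error-rate definitions. A minimal sketch, using the figures from the report above (1 rejected genuine check out of 300 on iPhone 6s, and zero missed attacks); the attack-total denominator is illustrative:

```python
# Sketch: ISO 30107-3 style error rates from the reported iBeta figures.

def bpcer(rejected_genuine, total_genuine):
    """Bona-fide presentation classification error rate (genuine users rejected)."""
    return rejected_genuine / total_genuine

def apcer(missed_attacks, total_attacks):
    """Attack presentation classification error rate (attacks accepted)."""
    return missed_attacks / total_attacks

# iPhone 6s: 299 of 300 genuine checks passed, no attacks missed.
print(f"BPCER: {bpcer(1, 300):.2%}")  # 0.33%, i.e. below 1%
print(f"APCER: {apcer(0, 300):.2%}")  # 0.00% regardless of the attack total
```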
Our architecture supports various implementations of the Liveness algorithm: you can choose the size of the video that will be processed and stored in your infrastructure:
Video from 1 to 5 seconds: file size from 1 to 5 MB, processing time 5 seconds.
This option is recommended for remote identification in the banking sector. In some cases, regulatory requirements prescribe recording and storing the video file for an extended period, ranging from 1 to 10 years. It can be used to establish business relationships in sectors with remote identification requirements.
Video from 1 shot (duration up to 1 second): file size 300 KB to 1 MB.
This option is recommended for image enhancement and verification in the fintech and sharing economy sectors with simplified remote identification requirements; for biometric authentication (confirmation of high-risk transactions, passwordless login, access recovery); and for cases where the speed of processing and transmission, and accordingly the length of the customer journey, is of paramount importance and there are no regulatory requirements for file storage.
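The choice between the two capture options above can be sketched as a small configuration helper. The profile names and fields here are invented for illustration; they are not the actual Oz SDK configuration:

```python
# Sketch: choosing a capture profile per the two options described above.
# Profile names and fields are illustrative assumptions, not the real SDK config.
from dataclasses import dataclass

@dataclass
class CaptureProfile:
    duration_s: tuple   # (min, max) video length in seconds
    size_hint: str      # expected file size
    use_case: str

PROFILES = {
    "full_video": CaptureProfile((1, 5), "1-5 MB",
                                 "regulated remote identification (banking)"),
    "one_shot":   CaptureProfile((0, 1), "300 KB - 1 MB",
                                 "fast authentication, simplified KYC"),
}

def pick_profile(must_store_video: bool) -> str:
    # Regulators may require storing the full video for 1-10 years;
    # otherwise the lightweight one-shot check keeps the customer journey short.
    return "full_video" if must_store_video else "one_shot"
```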
Active Liveness (cooperative) asks the user to perform an action (move closer, wink, smile, turn the head).
The benefits of active liveness:
In some cases, such as obtaining a loan remotely, concluding an agreement, or otherwise establishing business relations, the customer is required to record the fact of an active action (user consent expressed in the form of an active action).
An active action means that the user has read the user agreement and has done what was asked: for example, smiled or turned the head, thereby confirming that they are conscious and acting of their own free will.
Disadvantages of active liveness:
Active Liveness requires a longer video (3 to 5 seconds) so the user can perform the action. This lengthens the customer journey because of request transmission and processing time.
Active Liveness reduces conversion by 5-10%. According to our statistics, 5 to 10% of users do not want to, or cannot, understand what is required of them during active Liveness (at what moment they need to smile or blink).
Detecting the best shot according to the VISA or VISABORDER standard for subsequent face comparison is difficult, since emotions appear during the check (a smile, a wink) and the head is tilted or turned.
Active Liveness does not improve security or protection against spoofing attacks.
The widespread availability of deepfake tools makes it possible to bypass any active Liveness check by animating a picture to perform the required action.
Therefore, modern approaches to countering spoofing attacks should not rely on active actions. Active actions should only be used to resolve disputes on legal grounds.
Passive Liveness does not require any active action from the user, except for the need to look at the camera.
The benefits of passive liveness:
Passive Liveness does not require a lengthy video and can fit in a single shot. This simplifies transmission and speeds up processing to about one check per second.
Passive Liveness does not decrease conversions because it does not require any additional action.
Passive Liveness is as secure as active Liveness. Moreover, best-frame detection according to the VISA or VISABORDER standard works better on video that contains no emotions or head turns.
The lack of an active action can lead to disputed situations. However, according to our statistics, such cases are unlikely.
Our algorithm is resistant to attacks:
Developed by Oz Forensics specialists, the biometric module incorporates the latest practices in artificial intelligence and is consistently improved through continuous data enrichment. In 2020, the Oz Forensics face recognition algorithm achieved one of the best accuracy results on the LFW dataset in University of Massachusetts tests. The Oz Biometry module identifies people in under 1 second with 99.87% accuracy.
Since the beginning of the 2010s, every smartphone has had a front camera, which makes facial biometrics a natural method of authentication. Face biometrics can be used both in the registration process and in searching and authenticating against large biometric databases.
Facial biometrics are used in the KYC process to compare selfie photos with photographs from documents, as well as to confirm the presence of a user in biometric “black” and “white” lists.
Oz Forensics Face Recognition Algorithms allow:
What different datasets are tested on?
Pairs of ISO type images - ISO/IEC 19794-5 Full Frontal. Pose is generally excellent. The image type is applicable to portrait for passport, driver license, and “mugshot” images. This dataset excludes "mugshot" images.
Only adults’ images.
Pairs of ISO type images - ISO/IEC 19794-5 Full Frontal. Pose is generally excellent. The image type is applicable to portrait for passport, driver license, and “mugshot” images. This dataset includes only "mugshot" images.
Only images of adult USA citizens.
Pairs of the following images:
the first image is taken from a web camera. This image is "partially" frontal and is captured under the following conditions: different lighting, different head rotation angles, or partial occlusion of the face;
the second image is an ISO type image - ISO/IEC 19794-5 Full Frontal. Pose is generally excellent. The image type is applicable to portrait for passport, driver license, and "mugshot" images. This dataset excludes "mugshot" images.
Only adults’ images.
Pairs of images that are very unconstrained, with wide yaw and pitch pose variation. Faces can be occluded, including by hair and hands.
Only adults’ images.
A photo or video taken with a web camera is compared with the photo in the passport scan OR with a photo and video database using key features.
Verdict: match or no match, with 99.87% accuracy.
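The match / no-match verdict can be illustrated with a generic embedding comparison. This is a common technique in face recognition generally, not a description of the Oz Biometry model: the embeddings, the cosine-similarity measure, and the threshold value below are all illustrative assumptions.

```python
# Sketch: generic face-verification verdict via embedding similarity.
# Embeddings, similarity measure, and threshold are illustrative assumptions.
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def verdict(selfie_emb, document_emb, threshold=0.6):
    """Return 'match' if the two face embeddings are close enough."""
    sim = cosine_similarity(selfie_emb, document_emb)
    return "match" if sim >= threshold else "no match"
```

In practice the embeddings would come from a neural network applied to the selfie and the document photo; the threshold is tuned on a labeled dataset to balance false matches against false non-matches.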
All products include:
Flexible SDKs for iOS, Android, and web
Get a demo >