How AI Is Used in Static Biometric Verification
This article outlines the technical principles behind static biometric verification and its use cases, as well as how to implement a liveness detection service in an app.
Static biometric verification is a commonly used AI feature that captures a face in real time and determines whether it belongs to a real person, without prompting the user to move their head or face. In this way, the service delivers a convenient user experience that wins positive feedback.
Technical Principles
Static biometric verification requires an RGB camera and differentiates a real person's face from a spoof attack (such as a printed photo, a screenshot of a face, or a face mask) by examining details in the captured image, such as the moiré pattern or reflection on a paper photo. The service handles data from a wide array of scenarios, covering different lighting conditions, face accessories, genders, hairstyles, and mask materials, and it also analyzes a face's surroundings to detect suspicious environments.
The static biometric verification model is built on lightweight convolutional modules. In the inference phase, reparameterization converts the linear computation into a single convolutional module or fully connected layer. The MindSpore Lite inference framework can then be used for model deployment; it crops out unused operators, shrinking the model package and making integration more convenient.
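To make the reparameterization idea more concrete, here is a minimal, hypothetical Java sketch of the classic building block behind it: folding a BatchNorm layer into the preceding convolution's weights and bias, so that inference runs a single linear operator. None of these names come from ML Kit or MindSpore; the method and arrays are illustrative only.

/**
 * Illustrative sketch of BatchNorm folding (a reparameterization building
 * block). All names are hypothetical, not ML Kit or MindSpore APIs.
 * For each output channel c of a convolution with weights w and bias b,
 * a following BatchNorm (gamma, beta, mean, var) can be absorbed:
 *   wFused[c][k] = w[c][k] * gamma[c] / sqrt(var[c] + eps)
 *   bFused[c]    = beta[c] + (b[c] - mean[c]) * gamma[c] / sqrt(var[c] + eps)
 */
static void foldBatchNorm(float[][] w, float[] b,
                          float[] gamma, float[] beta,
                          float[] mean, float[] var, float eps) {
    for (int c = 0; c < w.length; c++) {
        float scale = gamma[c] / (float) Math.sqrt(var[c] + eps);
        for (int k = 0; k < w[c].length; k++) {
            w[c][k] *= scale;                      // Scale the kernel weights.
        }
        b[c] = beta[c] + (b[c] - mean[c]) * scale; // Fold the shift into the bias.
    }
}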
Application Scenarios
Liveness detection is usually performed before face verification. For example, when a user unlocks their phone with facial recognition, liveness detection first determines whether the captured face is real. If it is, face verification then checks whether the face matches the one recorded in the system. These two technologies complement each other to protect a user's device from unauthorized access.
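As a rough sketch of that flow in code: the liveness result arrives in the capture callback, and face verification only proceeds when the face is judged live. Note that the isLive() accessor and the verifyFace() and rejectAccess() helpers below are assumptions for illustration, not confirmed ML Kit signatures; consult the official API reference.

// Sketch of gating face verification on the liveness result.
// isLive(), verifyFace(), and rejectAccess() are hypothetical here.
MLLivenessCapture.Callback gatedCallback = new MLLivenessCapture.Callback() {
    @Override
    public void onSuccess(MLLivenessCaptureResult result) {
        if (result.isLive()) {
            verifyFace();   // Hypothetical: run face verification next.
        } else {
            rejectAccess(); // Hypothetical: treat as a spoof attempt.
        }
    }

    @Override
    public void onFailure(int errorCode) {
        rejectAccess(); // Hypothetical: fail closed on detection errors.
    }
};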
So it's safe to say that static biometric verification provides robust protection for apps. Below, I'll illustrate how the service can be integrated.
Integration Procedure
Preparations
The detailed preparations are all provided in the documentation for the service.
Two modes are available to call the service:
| Call Mode | Liveness Detection Process | Liveness Detection UI | Function |
| --- | --- | --- | --- |
| Default View Mode | Processed by ML Kit | Provided | Determines whether a face is real or not. |
| Customized View Mode | Processed by ML Kit | Custom | Determines whether a face is real or not. |
Default View Mode
1. Create a callback to obtain the static biometric verification result.
private MLLivenessCapture.Callback callback = new MLLivenessCapture.Callback() {
    @Override
    public void onSuccess(MLLivenessCaptureResult result) {
        // Callback when verification is successful. The result indicates whether the face is of a real person.
    }

    @Override
    public void onFailure(int errorCode) {
        // Callback when verification fails. For example, the camera is abnormal (CAMERA_ERROR). Add the processing logic to deal with the failure.
    }
};
2. Create a static biometric verification instance and start verification.
MLLivenessCapture capture = MLLivenessCapture.getInstance();
capture.startDetect(activity, callback);
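Note that the service drives the camera itself, so startDetect() should only run once the app holds the camera permission at runtime. A minimal guard using standard Android APIs (the request code is arbitrary):

// Start detection only once the camera permission is granted (Android 6.0+).
if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
        == PackageManager.PERMISSION_GRANTED) {
    capture.startDetect(activity, callback);
} else {
    ActivityCompat.requestPermissions(
            this, new String[]{Manifest.permission.CAMERA}, /* requestCode */ 1);
}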
Customized View Mode
1. Create an MLLivenessDetectView instance and load it to the activity layout.
/**
* i. Bind the camera preview screen to the remote view and set the liveness detection area.
* In the camera preview stream, static biometric verification determines whether a face is in the middle of the image. To improve the pass rate, you are advised to place the face frame in the middle of the screen and set the liveness detection area to be slightly larger than the face frame.
* ii. Set whether to detect the mask.
* iii. Set the result callback.
* iv. Load MLLivenessDetectView to the activity.
*/
private ViewGroup mPreviewContainer;
private MLLivenessDetectView mlLivenessDetectView;

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_liveness_custom_detection);
    mPreviewContainer = findViewById(R.id.surface_layout);
    // Obtain an MLLivenessDetectView instance.
    mlLivenessDetectView = new MLLivenessDetectView.Builder()
            .setContext(this)
            // Set whether to detect the mask.
            .setOptions(MLLivenessDetectView.DETECT_MASK)
            // Set the rectangle of the face frame relative to MLLivenessDetectView.
            .setFaceRect(new Rect(0, 0, 0, 200))
            // Set the result callback.
            .setDetectCallback(new OnMLLivenessDetectCallback() {
                @Override
                public void onCompleted(MLLivenessCaptureResult result) {
                    // Callback when verification is complete.
                }

                @Override
                public void onError(int error) {
                    // Callback when an error occurs during verification.
                }

                @Override
                public void onInfo(int infoCode, Bundle bundle) {
                    // Callback when a verification prompt message is received. The message can be displayed on the UI.
                    // if (infoCode == MLLivenessDetectInfo.NO_FACE_WAS_DETECTED) {
                    //     // No face is detected.
                    // }
                    // ...
                }

                @Override
                public void onStateChange(int state, Bundle bundle) {
                    // Callback when the verification status changes.
                    // if (state == MLLivenessDetectStates.START_DETECT_FACE) {
                    //     // Start face detection.
                    // }
                    // ...
                }
            }).build();
    mPreviewContainer.addView(mlLivenessDetectView);
    mlLivenessDetectView.onCreate(savedInstanceState);
}
2. Set a lifecycle listener for MLLivenessDetectView.
@Override
protected void onDestroy() {
    super.onDestroy();
    mlLivenessDetectView.onDestroy();
}

@Override
protected void onPause() {
    super.onPause();
    mlLivenessDetectView.onPause();
}

@Override
protected void onResume() {
    super.onResume();
    mlLivenessDetectView.onResume();
}

@Override
protected void onStart() {
    super.onStart();
    mlLivenessDetectView.onStart();
}

@Override
protected void onStop() {
    super.onStop();
    mlLivenessDetectView.onStop();
}