ImageWare identifies, verifies and authenticates who people are, not just what keys and codes they have.
As we’ve seen lately, most apps that require authentication are moving toward letting built-in mobile biometrics gate logins once users have confirmed their credentials one time. This trend will only continue as on-device biometric scanners become more common. With Apple’s FaceID, anticipated TouchID improvements, and Android advances like Samsung’s built-in iris scanners, the future of biometrics in the mobile paradigm is bright.
But what does this mean for users? And how can we make sure that our mobile apps stay clean and easy to use? How do we leverage this biometrics trend to make our apps more secure?
With the ImageWare Authenticator app, we think we’ve struck a solid balance. Our app lets you leverage biometrics for authentication while keeping the app simple and user-friendly. In this article, you’ll read about what we’ve discovered and how you can use biometric authentication -- either in your own apps (ideally using our Authenticator SDK) or by integrating your services directly with our ImageWare Authenticator app.
Humans have traits that are unique to each individual, like fingerprints and iris patterns. With some effort, these traits can be used to uniquely identify individuals to within a margin of error. This kind of measurement is called biometry, and the use of these traits is called biometrics. We can leverage these biometrics to verify that the person using the device is actually who they claim to be.
As Apple mentions in its FaceID security documentation (https://support.apple.com/en-us/HT208108):
The probability that a random person in the population could look at your iPhone or iPad Pro and unlock it using Face ID is approximately 1 in 1,000,000 with a single enrolled appearance. As an additional protection, Face ID allows only five unsuccessful match attempts before a passcode is required. The statistical probability is different for twins and siblings that look like you and among children under the age of 13, because their distinct facial features may not have fully developed. If you're concerned about this, we recommend using a passcode to authenticate.
All algorithms used to compare biometrics have characteristic false-positive and false-negative rates: statistical bounds on how certain the algorithm can be. Because most biometric algorithms pass data points through statistical models or neural networks of varying complexity, you almost never receive a perfectly certain result. As an implementation example, an algorithm could return a verification score between 0 and 1, with 1 being a “perfect match”. In practice, unless you send the exact same sample twice (which many algorithms catch as an error in their “liveness” detection step, discussed below), you’ll never receive a 1. But a score of 0.95 may be considered a strong success. Because of this, most third-party algorithm vendors define thresholds for what they consider strong successes and strong failures.
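The threshold idea above can be sketched in a few lines. This is a minimal illustration, not any specific vendor’s logic: the score values, threshold bounds, and function names are all assumptions for the example.

```python
# Minimal sketch of threshold-based verification. The thresholds and
# decision labels are illustrative, not any particular vendor's values.

MATCH_THRESHOLD = 0.90   # at or above: strong success
REJECT_THRESHOLD = 0.40  # at or below: strong failure

def classify_score(score: float) -> str:
    """Map a raw verification score in [0, 1] to a decision."""
    if score >= MATCH_THRESHOLD:
        return "match"
    if score <= REJECT_THRESHOLD:
        return "no_match"
    # Ambiguous middle zone: many deployments retry the capture here,
    # or fall back to another authentication factor.
    return "retry"

print(classify_score(0.95))  # "match"
print(classify_score(0.10))  # "no_match"
print(classify_score(0.70))  # "retry"
```

Vendors tune these bounds per modality and per deployment; the key point is that the raw score is never consumed directly as a yes/no answer.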
As Apple mentions in its FaceID doc, the best practice for using biometrics is typically to back the biometric check itself with a passcode, password, or another way to validate a user. We can see this approach in many apps that use biometrics today, including the lock screens of phones that allow biometrics. If the user fails verification a pre-defined number of times, they’re forced to fall back to a PIN or enter some other credentials.
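The “fail N times, then fall back” pattern looks roughly like this. The attempt limit and the callable names are placeholders chosen for the sketch:

```python
# Sketch of the "N failures, then fall back" pattern. The limit of 5
# mirrors Apple's FaceID example above but is otherwise arbitrary.

MAX_ATTEMPTS = 5

def authenticate(verify_biometric, prompt_for_pin) -> bool:
    """Try biometrics up to MAX_ATTEMPTS times, then fall back to a PIN."""
    for _ in range(MAX_ATTEMPTS):
        if verify_biometric():
            return True
    return prompt_for_pin()

# Simulate a user whose biometric check keeps failing but who knows the PIN.
result = authenticate(lambda: False, lambda: True)
print(result)  # True, via the PIN fallback
```

In a real mobile app the platform handles most of this for you; the sketch only shows the control flow that sits behind those APIs.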
One way we get around this issue in the ImageWare Authenticator app is by providing what we call multimodal biometrics, or checking several biometric types at once. This acts as a mathematical bulwark against this issue, and lets us be more confident in our verification attempts than using a single modality alone.
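As an illustration of how combining modalities increases confidence, here is a simple score-level fusion using a weighted average. This is a generic sketch, not ImageWare’s actual fusion method; the modality names and weights are invented for the example.

```python
# Illustrative score-level fusion for multimodal biometrics.
# Production systems use carefully tuned fusion; a weighted average
# is the simplest way to show the idea.

def fuse_scores(scores: dict, weights: dict) -> float:
    """Combine per-modality scores in [0, 1] into one weighted score."""
    total_weight = sum(weights[m] for m in scores)
    return sum(scores[m] * weights[m] for m in scores) / total_weight

weights = {"face": 0.5, "voice": 0.3, "palm": 0.2}
scores = {"face": 0.97, "voice": 0.88, "palm": 0.91}

fused = fuse_scores(scores, weights)  # roughly 0.931
```

A spoof that fools one modality (say, a photo fooling a naive face check) drags the fused score down unless it fools the others too, which is the “mathematical bulwark” described above.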
Apple and Samsung aren’t the only players in the biometrics game. As biometrics become a more popular and convenient form of authentication, more algorithm providers are arriving in the market with new and exciting technologies. ImageWare’s Authenticator app lets you leverage the most popular vendors today, with new ones coming online over time.
There are several considerations when adopting a third-party biometrics library into your security solution. I’ve listed a few of them below.
In biometrics there’s a concept called liveness, which is one of the most important aspects of biometric authentication. The idea is that a single image of a face or palm is easy to fake, so most good algorithms have ways to check that the subject is live — that is, the capture you’re trying to authenticate is of a live subject and not a picture of a computer screen or a synthesized voice sample.
Local authentication on mobile devices uses built-in liveness, which is a huge accuracy boon. For example, Apple’s FaceID uses an IR camera to project a mesh of IR dots across your face, which are then treated internally as a 3D mesh that’s part of the template definition they use to authenticate. This means that FaceID can’t be fooled by a picture of a face, since there’s no depth information. It can also help mitigate attacks by poorly-made masks.
Client-side authentication risks leaking templates to attackers with root access unless it’s backed by dedicated hardware, like Apple’s Secure Enclave.
Server-side authentication is typically safer, as long as you secure the connection with TLS and use certificate pinning to mitigate man-in-the-middle (MITM) attacks.
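One common way to implement certificate pinning is to compare the SHA-256 fingerprint of the server’s DER-encoded certificate against a value shipped with the app. The sketch below shows only that comparison step; the certificate bytes and pin are placeholders, and a real app would hook this into the TLS handshake via the platform’s networking APIs.

```python
import hashlib

# Sketch of certificate pinning: reject the connection unless the
# server certificate's SHA-256 fingerprint matches a pinned value
# bundled with the app. The bytes below are placeholders.

def fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate, as hex."""
    return hashlib.sha256(der_cert).hexdigest()

def is_pinned(der_cert: bytes, pinned_fingerprint: str) -> bool:
    """True only if the presented certificate matches the pin."""
    return fingerprint(der_cert) == pinned_fingerprint

expected_cert = b"placeholder DER bytes"
pin = fingerprint(expected_cert)

print(is_pinned(expected_cert, pin))   # True
print(is_pinned(b"some other cert", pin))  # False
```

Pinning narrows trust from “any CA-signed certificate” to “this specific certificate (or key)”, which is what blunts a MITM attacker who controls a rogue CA.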
Capturing biometrics seems like a simple process, but that simplicity is deceptive. The captures adopted by on-device solutions are the most convenient: iris capture detects the user’s eye on the assumption that the user is looking at their phone if they’re attempting to unlock it. FaceID detects the user’s face and authenticates on the same premise. TouchID, and the various fingerprint sensors on Android phones, are built into the phones in places where implementers assume your fingers will already be resting as you hold the device.
For other captures, though, this isn’t as easy. Face captures using the front-facing camera are simple, but what about a palm capture? What about capturing a voice in a busy grocery store? What if the user is a construction worker laying concrete, or a surfer, two groups whose fingerprints are often severely degraded?
These solutions require accessibility and user studies, and as a company that focuses on accessibility and usability, these questions are at the forefront of our minds at ImageWare.
We’ve found a couple of solutions to the problem of user friction, and we’ll highlight those solutions here. The key elements of user friction are responsiveness, complexity, and risk.
Responsiveness is important in any mobile app, but with biometrics the requirement is even higher. You can see this in the seamless biometrics APIs that Android and iOS expose and in how those APIs present their interfaces to users, with Apple going so far as to lock access down to a single “perform a biometric check” call that returns success or failure, with a few simple configuration options such as whether to fall back to a PIN. The speedy turnaround of on-device biometrics isn’t possible with server-side authentication, but as with any networking call, clear communication and quick action on success or failure are important. Users will only use biometrics if they’re convenient.
Complexity in a biometric capture is exactly what it sounds like: captures should be fast, simple to perform, and require as little thought and movement as possible. Face captures are simple, since the user is typically looking at their phone as they manipulate it. Palm captures are more complex, since in the simplest case the user has to point a camera at their palm while following directions on the screen. The most complex, we’ve found, is voice captures. These captures in particular are not only complex in the user sense (repeating digits or text on screen, while presenting feedback info to the user that they must respond to), but voice captures can also have social complexity -- for example, capturing a set voice passphrase in a busy grocery store could be challenging for users who are socially anxious. This also opens users up to the complexity of dealing with poor sampling of their voice, causing false failures, and even risks their set passphrase being copied by attackers.
Finally, risk is the element of consequence for failure. If a user fails to verify, what happens? If they succeed, what’s the result? Risk is high when a failed verification stops a car payment from going through, but much lower when it merely delays your order at Starbucks.
For an app designed to cover custom use cases we can’t really control the risk element of the user interaction. But we’ve found that by reducing complexity, increasing responsiveness, and making users aware of the task they’re trying to achieve using biometrics, we’re able to mitigate the risk element while providing a powerful and versatile biometrics authentication platform.
The future of biometrics on mobile devices is bright. Rumors abound that TouchID will return in the next iPhone and will be part of the glass itself, with FaceID’s special cameras being built under the display instead of in a bezel. Samsung’s iris authentication will only improve with time. Soon other methods of biometric authentication, like palm-vein and heart-rate recognition, will be integrated into devices or available as third-party libraries as well. We’ve seen biometrics included in stand-alone devices in airports, and biometrics are used in identity provider services around the world. All of this translates to a push to constantly improve and iterate on biometrics as a service.