I really want an iPhone X. It has lots of cool features: a headphone jack (oops), an edge-to-edge display (well, almost edge to edge), and the ability to create anthropomorphic poop. What’s not to like? However, as with other technologies, advances are a double-edged sword; in this case, Face ID.

The newest iPhone abandons the fingerprint sensor in favor of an infrared scanner that maps the user’s face and then uses that facial map for authentication. I have previously written that, while biometrics provide greater privacy and security from a technical point of view, they provide significantly less privacy and protection from a legal point of view. That is because courts are more inclined to force someone to scan a finger or a face to unlock a phone than to compel someone to provide or enter a password or passphrase. There’s something about having to “say” your password, or to provide something from the contents of your mind or memory, that triggers in people in black robes the concept that this is “testimonial” and therefore might implicate the Fifth Amendment.

The Apple facial sensors continue this trend. A court is likely to find that holding a phone up to a suspect’s face is a minimal intrusion on privacy, and therefore a limited Fifth Amendment violation (even though the two concepts are not at all related to each other). Just as courts allow cops to take blood, DNA, fingerprints, or other forensic evidence on a very low showing of probable cause, they look at the degree of intrusiveness to determine the reasonableness of what the cops do. A cheek swab or a breathalyzer puff is different from a body-cavity search. For Fifth Amendment purposes, however, the law provides that no person shall be compelled to “be a witness” against themselves – to “testify” against themselves – and we all know what “testifying” looks like: some dude in an ill-fitting suit standing in court saying something. So courts draw an illogical distinction between spoken passwords and faces.

The real problem with the facial recognition and 3D interactive emojis of the iPhone X has to do with privacy. A fingerprint authenticates you at the point of accessing the phone, and potentially at other points as well (say, Apple Pay or iTunes purchases). Once you have authenticated yourself to the phone, you are done. The only information you provide is that you are you, plus the time, location, and purpose of the authentication.

When you move to facial recognition, your face tells Apple – and whomever Apple decides to share the information with – a lot more about you. For decades we have had software that analyzes phone calls to detect whether a customer, for example, is angry or frustrated with a customer service call, and automatically escalates the call to a supervisor. Great if you are a genuinely angry customer (and it leads to a good resolution), but less so if you are the customer service agent fired because the algorithm thinks you have too many angry customers.

Apple’s facial recognition provides additional data points with the potential to be used to invade people’s privacy. If it is used only to authenticate and then stops, it is not much more intrusive than current fingerprint readers – although for people like me (and my identical twin brother), not as effective.

But Apple plans to release APIs that allow new applications to use the facial scanners and mappers for all kinds of purposes. Amazon will know if you scrunch your nose at a particular ad; Google will find out if your eyes open wider at a particular image; Facebook will know if you are excited or bored; as you watch YouTube, YouTube is watching you. And if you want your phone to work without the home button and don’t want to enter a password each time, it’s hard to turn the scanning off.

This points to another problem with consumer choice, particularly for privacy. The settings you are offered are at once too numerous and not granular enough. Imagine having to opt in or out for every app, website, and data point; you can’t possibly do it. On the other hand, when the Daily News website asks, “This site would like to access your location – YES or NO?”, I need more information to answer. WHY does it want my location? Will it collect it all the time? How will it use it? Will it share it? What will happen if my information is subpoenaed? How long will it be stored? Now imagine having to ask these questions every time you scan your face.

Any time you add a new sensor or a new data point, you have to ask not only how it can be used, but how it can be abused – or what really cool idea someone will have for that data later on. And then face the problem head on. AMIRITE?