AT&T recently unveiled a Gigabit Internet service in Texas, with a catch.  You pay one price if you agree to share your surfing habits, and a higher one if you decide you want your personal data to be, well, personal.

So in effect, you are paying for privacy – or as AT&T would describe it, getting a discount for sharing.

From a pure market standpoint, it makes sense.  First, AT&T makes money by selling information about you.  Just like Google, Yahoo, and everyone else does.  So AT&T is sharing part of that profit as a discount if you agree to allow it to share.

You add to their revenue; they reduce your price.  We do this kind of thing all the time.  You can download “free” software or free apps, but they come with in-app ads that are targeted to you.  If you pay for the software or apps, then the ads (and the tracking) disappear.  Pandora has a regular and a premium site.  Your choice.

So the question is, do people really care about data privacy?  Or data security for that matter?

Yes.  And no.  And that’s the problem.  The issues for privacy and security are subtly different, so let’s address each one separately.

“Free Market Privacy”

So here’s the idea.  You don’t like Facebook because of its complex privacy policies, the difficulty of preventing people from sharing data about you, and the lack of transparency about what data is collected and how it is used.  So theoretically, there should be a rival “privacy enhancing” social networking service that provides its users with such protections.  Competition.  If enough people want a privacy compliant site, one will spring up.  Investors will invest in the site, developers will develop it, and money will be made.

But there’s a problem with this business model.  When was the last time you wrote a check (remember checks?) to the order of “Facebook, Inc.” for one month’s use of social networking — $19.95?

You don’t.  And you probably wouldn’t.  While many of us value our social networking, it’s not like we would be willing to PAY for it.  Well, not with real money.  We have been habituated into thinking of this, and all other privacy-degrading services, as “free.”

Because they are free to us.  And once we get used to them being free, it’s really hard to get us to pay for a “premium” service that protects privacy.

Part of that is because we can’t value privacy.  At least not in the abstract.  We can’t see an invasion of privacy.  We don’t see what Facebook and its customers do with (and don’t do with) the mountains of data generated by the site.

We can’t know the thousands of ways we are discriminated against (and in favor of) because of the massive data analytics performed on us. You’re a 21-year-old girl weighing 111 lbs.?  You’re not getting ads for “Rochester Big and Tall” Men’s Store.  And you are happy about it.

In the ordinary course of a day, you see little “harm” with the kind of information collection, analysis and sharing that goes on every day.  I don’t mean to imply that there isn’t harm.  I mean you don’t see it.  You are not aware of it.  You can’t know the ads you don’t see; the discounts you don’t get, the subtle ways you are steered into certain products or services.

John Wanamaker, the Philadelphia retail mogul, once said, “Half the money I spend on advertising is wasted; the trouble is, I don’t know which half.”  With targeted ads, behavioral profiling, and deep-dive analytics, the John Wanamakers of the world do know the value of privacy.

Or, more accurately, invasions thereof. As Eric Schmidt noted, “Google was founded to get information to everybody. A by-product of that strategy is that we invented an advertising business which has provided great economics that allows us to build the servers, hire the employees, create value.”

The founders of Google, Facebook, Yahoo!, and Twitter didn’t start out to sell personal data to advertisers.  Nor did AT&T, Verizon, or other ISPs.  But by collecting massive amounts of data that was useful to advertisers, they found a new source of revenue.  And people knew very little about it.

Privacy for “Sale?”

I used to participate in some focus groups in the Southern Maryland area.  A bunch of people in a room with a two-way mirror and some clipboards, discussing what we wanted in some new product or service.  And we got paid $50 to $100 for sitting in that room for an hour and sharing our ideas and thoughts.  Our ideas and privacy were truly for “sale.”

But that’s not how the sale of private or targeted information works online.  The consumer isn’t directly paid for their data.  In fact, it’s not considered their data at all.  In general, the data “belongs” to the search engine you use, the provider you choose, the ISP, website, retailer, service provider, etc.  It’s not YOUR record, it’s their record of what you did.

The same is true, by the way, for your medical records, your bank records, your phone records, your tax records.  They aren’t yours.  “Your lab results are in…” the nurse says.  Actually, they’re the lab’s results about your blood sample.  Their records.  Not yours.  In the absence of regulation, they can do with them what they want.  Use them.  Sell them.  Post them.  Give them away.  In the words of the recently departed Lesley Gore, “It’s my party, and I’ll cry if I want to.”

So if we start with the assumption that the data “belongs” to whoever collects it, then we have almost no bargaining power.  If I put up cameras in your neighborhood (at my own expense) and collect data about your travel habits, I can use, sell, or distribute that data freely.  Free market.  Oh, and I can also charge you money NOT to collect that data or use it in particular ways.  That’s called extortion.  I’m sorry, I meant “opt out.”

If I can make 8 bucks selling your personal data, then I can charge you 9 bucks not to sell it.  A bargain for you.
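The arithmetic behind that “bargain” can be sketched in a few lines of Python.  The function name and the margin parameter are my own illustrative inventions; the dollar figures are the hypothetical ones from the example above:

```python
def opt_out_price(resale_value: float, margin: float = 1.0) -> float:
    """Hypothetical price a provider charges you NOT to sell your data.

    The provider has no incentive to accept less than the data's resale
    value, so the opt-out price is that value plus whatever margin the
    market (i.e., the provider) decides to tack on.
    """
    return resale_value + margin

# If your data fetches $8 on the market, privacy costs you $9.
print(opt_out_price(8.0))  # 9.0
```

The point of the sketch is simply that when the collector owns the data, the floor on the price of privacy is set by the buyer’s market, not by you.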

So people don’t pay for privacy because they don’t know privacy is being invaded (they don’t know what’s being collected or sold, or to whom), they don’t see any harm (they don’t know how the data is being used), and they don’t feel a genuine sense of “ownership” of this data anyway.  I don’t “own” the fact that I drove to the strip mall, or the strip joint.

Plus, there may be a benefit to the consumer in giving up privacy.  They get targeted ads instead of random blasted ones.  They get discounts on things they might actually want.  Their search results show things they are actually looking for.  You just had a baby?  Here’s a coupon for Similac.  Diagnosed with diabetes?  Here’s a discount on test strips.  Just got a promotion?  BMW is having a sale.  Everyone is happy.

Until they aren’t.

It’s all fun and games until someone screengrabs your Snapchat.   In a true free market for privacy, your data can be sold to anyone.  This means that there’s nothing wrong with selling your Social Security Number or credit card number to Russian hackers or foreign intelligence agencies.

I mean, it’s not YOUR social security number, it belongs to the Federal Government.  It’s not your credit card.  It belongs to the bank.  And there’s a market for it.  Oh, and for your medical information.  And your phone calls.  They all have value.  Let the free market decide on privacy.  If you are willing to pay me NOT to disclose your SSN more than the market is willing to pay for it, we have a free market.

Also, we “sell” our personal information (give it away) with certain express or implied limitations.  Sure, you can track my browsing habits to improve performance, or to cache certain webpages.  But don’t tell my boss that I have been surfing job listings while at home, or tell my wife I have been surfing porn sites.  It’s OK if Jeep knows I like trucks, but I don’t want people to know that I am a brony (a My Little Pony enthusiast).

Privacy isn’t binary.  It isn’t black and white.  It’s grey.  It’s more than 50 shades of grey.  Every bit of data has different expectations embedded in it, and it depends on who is collecting the data, what they are doing with it, with whom they are sharing it, and what the impact of that sharing and use might be on me.  It’s not simply that people who are not doing anything wrong have nothing to hide, but the flip side of that: what I am doing, right or wrong, is nobody’s business.

So we don’t have, and probably can’t have, a “free market” for personal information.  That’s mostly because most of this personal information is stolen from the data subject, collected without their knowledge, and sold and used without their effective consent.  In that marketplace, it’s really hard to sell privacy.

Would you be willing to sell your medical records for a discount on treatment?  Would you be willing to sell your surfing habits for a free browser?

And once the data is “sold” it is resold.  And it is hacked, and stolen, and subpoenaed, and distributed, and analyzed, and processed, and aggregated.  There’s a price to all of that as well.

So while a free market for personal data sounds good, it may not work in real life.

Paying for Security

A similar problem exists for the concept of paying for security.  Would you pay extra to use a browser that really protected the security of your data?  What about a cloud provider?  An email provider?  A data storage facility?  Would you pay more for a laptop that had more security on it?  Embedded in it? With biometrics and real-time encryption?

Mostly, um… no.

That’s because we generally ASSUME — despite the thousands of times we’ve learned it’s not true — that our data is safe and secure.  At least to some level.

Sure, corporations will pay to secure their own data, and the data they hold for others.  Contracts frequently call for a particular (well, a vague) level of security.  In fact, the entire data security market (hardware, software, etc.) is based on the idea that there is a “market” for data security.

But the truth is, consumers and others rarely select providers based on security.  I don’t shop (or not shop) at Target based on their data security programs.  I don’t select my bank that way, or my doctor.  I select my doctor like everyone else does.  I find one in my insurance plan, convenient to work or home, with decent office hours, recommended by friends, specializing in something I have or am likely to have.

I put files on Dropbox without a clue whether they are secure.  Not a clue.  I use Gmail without knowing whether the email is truly secure.  Nor do I have a meaningful opportunity to test it.

I bought an iPhone because it’s cool.  Not because it’s secure.  Well, a little bit because it’s secure (the fingerprint thingy is also cool, though).  But mostly because it’s cool.  People may buy Macs rather than PCs because of a perception of security from malware, but it’s more a “touchy-feely” thing — you know, like Volvos are safer than other cars.

I have no idea if my ISP, Verizon FiOS, is “secure,” or what it even means to be secure.  And I will never know.  Companies are not required to disclose their security posture, and in fact, disclosing it (um, we have an open port vulnerable to exploit) would make the company more vulnerable and really not help the decision process.

True choice demands information.  But when it comes to security, more information may mean more insecurity.  The SEC guidance on cybersecurity suggests some disclosure, but only when the security issue becomes “material.”  Not material to YOU as a consumer, but material to the company and its investors.  The Sony hack was not material to its investors, so why disclose?

Without information, we can’t choose.  And nobody has access to the kind of information necessary to make an informed choice about security.  I don’t just mean that companies don’t disclose it.  I mean that even the company doesn’t know.  Who is going to be the next target of a data breach?  How extensive is that breach likely to be?  How much damage is it likely to cause?  And, most importantly, how will it impact ME and my data?

Sure, if I am a corporation looking at cloud hosting solutions, I can demand a certain level of security.  I can audit based on current ISO standards, do a pen test, and maybe even have grey-hat hackers try to get in.  But overall, I can’t decide who my lawyers, vendors, or suppliers are based on security.  At least not without adequate information.

Finally, people aren’t demanding privacy and security because they are assuming — often erroneously — that they already have it.  People think that there are laws about what data can be collected, and what can be done with it.  Mostly, that’s not the case.  People think that there are laws requiring security.  Mostly, that is not the case.  We rely mostly on the concept of “reasonableness.”  If everyone were reasonable, we wouldn’t need laws, would we?  And we wouldn’t need lawyers.  And where would we be if that happened?
