Can someone explain why people don't trust Google?
Yesterday, Android chief Sundar Pichai laid out his reasoning as to why people should trust Google with their data, and that spurred an extremely entertaining discussion in the comment thread about Google and the privacy concerns surrounding the company. The trouble is that throughout the debate, we never got a compelling argument as to why people don't trust Google. This is key: we had a few people talking about various privacy issues Google has had (we'll touch on those in depth in a minute), and we had people saying that they value privacy, but valuing privacy doesn't explain why Google is untrustworthy.
Google's business and you
Let's start right at the core of things: Google's primary business is advertising, and it makes more money through advertising the more successful those ads are. Ads succeed by targeting products to the people most likely to be interested in them. The best way to do that is for Google to learn about you and serve you ads that are more likely to catch your attention because they are relevant to you.
For just a second, let's take a look at how advertising has traditionally been done. It is a wide-spray system: you sign up for a customer loyalty card at the supermarket, or companies purchase your information from a public database, and then you get endless amounts of junk mail in your mailbox that is incredibly difficult to stop. Or, you pick up a newspaper or magazine, or watch a TV show, and based on the content provided (and, in the case of TV, the time slot), advertisers try to guess what you might like and serve up ads that you can't really avoid, but can ignore if you want. And, for the most part, you probably will ignore them, because really, who needs to see an ad for toilet paper? Is there anyone out there who is only buying toilet paper because it's being advertised? No.
Now, back to Google. Google has taken the wide-spray technique of old advertising and focused the aim to serve you ads that you are more likely to care about, if you let Google learn about you. And, if you allow Google to learn about you, you don't just get better, more relevant ads, you get a host of other services that are subsidized by those ads. Let's be clear: Google doesn't offer free services; Google offers ad-subsidized services. We get Gmail, Search, Maps, YouTube, Drive, Calendar, Blogger, Translate, and everything else for no out-of-pocket cost. All Google asks is that you trade some information for the access.
This is not a novel approach. Facebook and Twitter do the same thing, and most news websites also do the same thing. News websites may not always gather a lot of information, for example, we here at PhoneArena assume that our audience likes technology (since we cover mobile tech), so our site features ads from relevant companies like Best Buy and Intel (in addition to some wide-spray ads like Kia). Ultimately, the more information you give to systems like Facebook and Google, the more they will be able to profit from you.
Of course, we want to be careful about putting Facebook and Google in the same category, because while each profits in the same way, the process and user control options for each are vastly different. Facebook has had a history of changing policies without really explaining things, and of obfuscating controls. Google, on the other hand, may not always succeed, but does at least attempt to be more transparent about what it is doing, and how you can control your data.
Google's transparency and control
As far as control goes, Google probably could be better, but at the same time the company is still better than a lot of others. Most of Google's data collection is enabled by default, but you can turn off quite a bit of it. And, at the end of the day, you can always stay logged out of your Google account. Google will still be able to gather some general information about you, like your operating system and location, but that is information any website can gather unless you spoof the system.
And, you can delete most of the information that Google has gathered on you, and that data will be removed from Google's servers after a certain period of time. On this front, Google could be more transparent, because it is unclear exactly how long some data is kept, or how it works with images linked to a document. But ultimately, your data will be removed if you want it to be.
There is no unified way to delete all of your data from Google. Some people mistakenly think that this is what Google Takeout is for, but that's not true. Google Takeout is only there so that you can download your data from Google. If you want to delete your data, you would have to go to your Google Dashboard and go through each service individually. It is time consuming, but it is possible.
Google does use your data in order to better target ads, but it should be made clear that Google does not share any personally identifiable information, and it even protects what it calls "sensitive personal information". This means that while Google may know who you are and where you live, and can attach that information to your interest graph, advertisers know nothing more than that there is a person in a certain general area who likes a certain thing. Google doesn't share your name, and "sensitive personal information" covers data like "confidential medical facts, racial or ethnic origins, political or religious beliefs or sexuality." For information like that, Google requires an explicit opt-in before it will share anything.
And, we should mention the ultimate form of control that you have: don't use Google. If you don't use Google, Google doesn't get information about you. It's quite simple, but it's also something that sometimes gets lost in the privacy debate. Often, those complaining about Google's privacy policies will make it sound like they actually want to use Google services, but feel that those services should be completely free, with no ads and no data collection. Frankly, this sounds like a silly attitude of entitlement. Google doesn't owe anyone free services, that's not how companies work.
The value you get in return and the enigma of privacy
Of course, as we mentioned, a big part of the value in this trade is that it allows Google to offer most of its services for free, but the more information Google has on you, the better the services get. Because Google knows what you search for, it can better recommend a wide range of things like music, YouTube videos, news stories, and restaurants, and of course general search results get better as well. Voice recognition and spell checking get better because Google knows how you speak or how you type. And, we won't go into it again, because we've talked about it at length before, but Google Now wouldn't be one of the best new pieces of software available without Google knowing about you.
That leads us to this idea of privacy. We understand that there are some limited reasons why someone would want to protect certain information. But, in general, what exactly is the value of privacy? We can understand it if you would be put in some sort of danger by sharing certain information, but that's not really what we're talking about here. Google offers what we think is a fair amount of value in exchange for your data, and aside from a few slip-ups (which we're about to examine), Google has had a pretty strong record of protecting user data.
Google's privacy problems examined
Obviously, Google isn't perfect. Google has had privacy issues in the past, and has even faced various fines around the world because of the troubles. But, we feel like the various issues have been a bit miscast for the most part.
For example, many people point to the Street View case as a smoking gun proving that Google doesn't care about your privacy. The trouble we have with that characterization is that it is mostly based on hyperbolic media reports that don't quite understand what happened. Media reports have often used phrasing like "Google stole data", which is not accurate. The only data that Google Street View cars gathered came from open WiFi access points, meaning networks with no password protection at all; anyone walking by those same places could have gathered far more data than Google's cars did while passing. If you can't be bothered to secure your data in even the most basic way, what right do you have to complain if some of that data is gathered by someone outside?
The second case that tends to get a lot of attention is when Google bypassed Safari's cookie settings in order to get +1 buttons to work on websites. This is a bit closer to being an actual problem, although once again, despite link-bait headlines, Google didn't "track" anyone using this method. The story was that Safari, by default, doesn't allow any third-party cookies at all, so Google used a loophole to get around that limitation and get +1 buttons working in the browser. The part of the story that doesn't get much press is that Google itself later patched the loophole and submitted the fix back to WebKit, but Apple never adopted the code into Safari, effectively leaving the backdoor open. We'll admit that Google acted shady in this case, even though the company didn't actually do anything harmful.
The last case that gets press is by far the biggest problem Google has had, because although the problems were born out of carelessness, the damage was serious. Of course, we're talking about the Buzz fiasco. The story there was that when Google first launched Buzz, it tried to set up users with social connections that made sense. Unfortunately, this was done automatically, so contact details were shared without consent. The most famous story from this fiasco was of a woman whose details were automatically shared with an ex who was under a restraining order. This was legitimately a problem, although it should be mentioned that Google raced to fix it, pushed changes the day after launch, and ultimately had the problems sorted out in less than a week.
Google's incentive is your protection
That quick response to the Buzz fiasco leads us to our main issue with those who fall back on wanting to protect their privacy, and who try to stir up fears that Google is going to misuse your data for its own gain. First of all, as we mentioned before, Google does not share personally identifiable information at all, and it doesn't share "sensitive personal information" unless you specifically allow it.
Of course, those concerned with privacy question whether Google can be trusted to keep this promise. The theory seems to be that Google can make more money by breaking that oath and selling off any data that it can. Here's the trouble with that theory: Google's business model means that it is incentivized to protect your data, not exploit it.
Sure, it is possible that Google could break its promises and misuse your data for its own gain. But, is it likely? We certainly don't think so. Think about it this way: Google's business is built on advertising, and those ads are successful because of the targeting. Google could misuse your data, but if it does, it loses your trust. If Google loses your trust, it loses your data. If it loses your data, it can't target ads as well, so ad revenue drops. When ad revenue drops, the company can't support all of the services that people love, and the whole house of cards collapses.
Google's best path to revenue is to protect your data, and use that data to provide better services and better ads. Given that, and the value that Google offers with its services, can someone please explain: what incentive does Google have to misuse your data? And, why exactly do some people not trust Google? And, please, be specific. If you want to protect your privacy, please explain why you want to, and what you feel like you gain from being more private rather than sharing.
A special thanks goes out to ardent1, Hemlocke, and the others who debated this with me yesterday and inspired this post.