The Case Against Google

March 22nd, 2012


By Mat Honan

For the last two months, you've seen some version of the same story all over the Internet: Delete your search history before Google's new privacy settings take effect. A straightforward piece outlining a rudimentary technique, but also evidence that the search titan has a serious trust problem on its hands.

Our story on nuking your history was read nearly 200,000 times on this site alone—and it was a reprint of a piece originally put out by the EFF. Many other outlets republished the same piece. The Reddit page linking to the original had more than 1,000 comments. And the topic itself was debated on decidedly non-techie forums like NPR.

It's not surprising that the tracking debate had people up in arms. A Pew Internet study, conducted just before Google combined its privacy policies (and after it rolled out personalized search results in Search Plus Your World) found that three quarters of people don't want their search results tracked, and two thirds don't even want them personalized based on prior history.

The bottom line: People don't trust Google with their data. And that's new.

Google is a fundamentally different company than it has been in the past. Its culture and direction have changed radically in the past 18 months. It is trying to maneuver into position to operate in a post-PC, post-Web world, reacting to what it perceives as threats, and moving to where it thinks the puck will be.

At some point in the recent past, the Mountain View brass realized that owning the Web is not enough to survive. It makes sense—people are increasingly using non-Web-based avenues to access the Internet, and Google would be remiss not to make a play for that business. The problem is that in branching out, Google has also abandoned its core principles and values.

Many of us have entered into a contract with the ur-search company because its claims to be a good actor inspired our trust. Google has always claimed to put the interests of the user first. It's worth questioning whether or not that's still the case. Has Google reached a point where it must be evil?

Search is Dying

Imagine you woke up tomorrow and Google was gone. You would still be able to search the Web. You could still send email. You could still use maps, make phone calls, watch videos, network with friends, write blog posts. There would be a period of adjustment, and it would be incredibly inconvenient, but you would get by. There are other options.

Some would feel it more than others; Google is a tool of the masses. Despite more than 20 years of the World Wide Web and more than 35 of personal computers, the Internet is still a very troubling place for many people. Google is the cipher they use to make sense of the chaos.

Case in point: A prolific science writer I know tells a story about how his mother calls him every time her Google is broken. What she means is that her Internet is down. But for her, Google is the Internet. And that's true for many, who use its search box as a gateway to the networked world. They get to Facebook by typing "Facebook" and hitting Search. Without Google, they'd be lost.

Google may not be a utility, but search is a very utility-like service. Search is what Google was built on, and it's why people go to Google in the first place. And when Google rolled out its newest iteration of search—Search Plus Your World (SPYW)—people reacted to it as if they were looking into an open grave.

There's a good reason for that revulsion: SPYW is a mess. In trying to deliver personalized results, Google polluted the page with its own inferior products (like Google+ instead of Twitter, Google Places instead of Yelp) while banishing competitors to lower listings in the results. Ads are everywhere. The People and Pages sidebar that now appears in search results is particularly galling. It is the ultimate subversion of Google to a commercial end. Basically, it's an enormous ad for Google's other products, hogging your screen.

It's hard to understand how Google could screw up its core product like that. But there's a remarkably simple explanation: Search is no longer Google's core product.

One Googler authorized to speak for the company on background (meaning I could use the information he gave me, but not directly quote or attribute it) told me something that I found shocking. Google isn't primarily about search anymore. Sure, search is still a core product, but it's no longer the core product. The core product, he said, is simply Google.

Ultimately, it's not about Gmail or Search or Android or Chrome or Maps or Plus. All of those are in service to one great master: they are pieces of the larger Google. He said that if I paid attention to what Larry Page has been saying recently, this would be apparent. And yup, PandoDaily recently quoted Page saying, "This is the path we're headed down – a single unified, 'beautiful' product across everything. If you don't get that, then you should probably work somewhere else."

It's stunning when you stop and think about it. Search isn't just what Google does best, it's what it is in most people's minds. The company's name is often used as a verb meaning "to search." It's in the Oxford English Dictionary! So what happened?

The Move from Search to Answers

Google owns the Web, but it didn't build it. And as it turns out, the open Web is kind of shitty real estate. Yes, the mansion itself is huge, but it's not built to code and is in constant need of renovation to keep it from falling apart.

Meanwhile, there are all these new homes going up in the same neighborhood. Nice places. Built from the ground up to perfectly fit their owners' needs. Places that people can get to from the Web, but that aren't really made of Web. Those are the kind of joints users want to go hang out in. As Chris Anderson argued in WIRED:

Over the past few years, one of the most important shifts in the digital world has been the move from the wide-open Web to semiclosed platforms that use the Internet for transport but not the browser for display. It's driven primarily by the rise of the iPhone model of mobile computing, and it's a world Google can't crawl, one where HTML doesn't rule.

Google needs to get inside those houses. Or failing that, build one of its own.

The Internet is the world's greatest collection of knowledge, but increasingly, that wisdom lives in walled off apps. It lives in services and platforms. Places where we build up relationships, express preferences, and reveal so much about ourselves. We're on Foursquare and Netflix and Facebook and Twitter and Skype. We're interacting in real time, and in ways that don't lend themselves well to indexing. Google can't know exactly what's going on in all those places. How the links between entities work. What and who we like and dislike. There is information there that it can't index. And if it can't index it, or understand it, it damn sure can't serve an ad.

Trouble is, that hard-to-index information is key to Google's future. Mountain View may not be all about search anymore, but it desperately wants to be able to answer real-world questions for you; there's a huge difference. Search is just about retrieving information. Actually answering subjective questions requires a deep knowledge of the person doing the asking: where you are, who your friends are, what your interests are, what you like and don't like.

Picture this scenario. You are about to leave San Francisco to drive to Lake Tahoe for a weekend of skiing, so you fire up your Android handset and ask it, "What's the best restaurant between here and Lake Tahoe?"

It's an incredibly complex and subjective query. But Google wants to be able to answer it anyway. (This was an actual example given to me by Google.) To provide one, it needs to know things about you. A lot of things. A staggering number of things.

To start with, it needs to know where you are. Then there is the question of your route—are you taking 80 up to the north side of the lake, or will you take 50 and the southern route? It needs to know what you like. So it will look to the restaurants you've frequented in the past and what you've thought of them. It may want to know who is in the car with you—your vegan roommates?—and see their dining and review history as well. It would be helpful to see what kind of restaurants you've sought out before. It may look at your Web browsing habits to see what kind of sites you frequent. It wants to know which places your wider circle of friends have recommended. But of course, similar tastes may not mean similar budgets, so it could need to take a look at your spending history. It may look to the types of instructional cooking videos you've viewed or the recipes found in your browsing history.

It wants to look at every possible signal it can find, and deliver a highly relevant answer: You want to eat at Ikeda's in Auburn, California. Hey, I love that place too! Try the apple pie.

There is only one path to that answer, and it goes straight through your privacy. Google can't deliver this kind of tailored result if you're using all kinds of other services that it doesn't control. Nor can it do it if you keep your Google services separated. You have to do all the things you used to do elsewhere within the confines of one big information-sharing service called Google.

The Way Out

Google mastered search by looking at our values. In addition to just looking at the content on a page itself, Google looked at other pages that linked to it—backlinks. They were, essentially, objective verification of the page's importance. It used those backlinks and its proprietary algorithm to compute PageRank, placing the most relevant results highest. You no longer had to sift through pages and pages of results to find what you were looking for. It was wonderful. It was relevant.
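To make the backlink idea concrete, here is a minimal, illustrative sketch of the kind of iterative link-based ranking that PageRank popularized. The toy link graph, damping factor, and iteration count are assumptions for the example, not Google's actual algorithm or data.

# A minimal sketch of link-based ranking in the spirit of PageRank:
# pages that collect backlinks from other well-ranked pages rank higher.
# The graph, damping factor, and iteration count are illustrative only.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links out to."""
    pages = list(links)
    n = len(pages)
    rank = {page: 1.0 / n for page in pages}

    for _ in range(iterations):
        new_rank = {page: (1.0 - damping) / n for page in pages}
        for page, outlinks in links.items():
            if not outlinks:  # a page with no outlinks spreads its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share  # each backlink passes rank along
        rank = new_rank
    return rank

# Toy graph: "home" attracts the most backlinks, so it ends up ranked highest.
graph = {
    "home":    ["about"],
    "about":   ["home"],
    "blog":    ["home", "about"],
    "contact": ["home"],
}
print(sorted(pagerank(graph).items(), key=lambda kv: -kv[1]))

The point of the sketch is simply that relevance falls out of the structure of the links themselves, with no knowledge of who is doing the searching.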

But while Google was busy holding up the sky, the ground beneath its feet shifted in ways it didn't anticipate. Our searches have evolved from the merely factual to the deeply personal. We want to find a nice hotel or a good restaurant or a particular person. We want to know what's happening right now, right here. And increasingly, we turned to smaller, fragmented platforms to get that stuff.

Facebook did for people searching what Google did for Web searching, in a very similar way. While Google used existing links between Web pages to determine relevance, Facebook used the existing links between people—the connections that we ourselves defined—to determine social relevance.

It explains why Facebook works so well right out of the gate. Log onto Facebook for the first time, give it some social data—like your contacts database, your workplace, your high school, your university—and it begins finding people you know as if by magic.

Google was never very good at that. It doesn't know who we know. Let's say I'm looking to connect with Joe Brown: If I enter his name into Google, I'll get thousands of results for various Joes all over the world—judges, punk rockers, comedians. But on Facebook, when I enter his name, I find exactly who I want because he's connected to so many of my other friends already.

The backlinks my friends have already established give Facebook a social relevance Google doesn't have. And because Facebook hides the connections in its social graph, Google can't index that data. It can't understand it. In other words, Google can't even use what Facebook knows about me to know which Joe Brown I am looking for. (You know, this guy.)
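As a rough illustration of that social relevance, here is a toy sketch that ranks several hypothetical "Joe Brown" profiles by mutual-friend count. The names and friend lists are invented for the example and have nothing to do with Facebook's actual graph or API.

# Illustrative sketch: a social graph disambiguates a name search because the
# candidate who shares the most mutual friends with the searcher ranks first.
# All names and friend sets below are made up.

my_friends = {"alice", "bob", "carol", "dave"}

joe_browns = {
    "Joe Brown (judge)":       {"erin", "frank"},
    "Joe Brown (punk rocker)": {"bob", "gina"},
    "Joe Brown (my friend)":   {"alice", "bob", "carol"},
}

def rank_by_mutual_friends(candidates, friends):
    # Score each candidate by how many friends we have in common.
    return sorted(
        candidates,
        key=lambda name: len(candidates[name] & friends),
        reverse=True,
    )

print(rank_by_mutual_friends(joe_browns, my_friends))
# The Joe Brown connected to alice, bob, and carol comes out on top.

The same signal that makes the result obvious to Facebook is exactly the signal Google cannot see.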

And as we have begun to carry the Internet with us everywhere we go—posting photos and status updates and blog posts and videos along the way—we've increasingly wanted a different type of relevance, one that speaks more to what's relevant right now than overall and forever. We want up-to-the-millisecond data. The kind of thing found on, say, Twitter.

Twitter is often mislabeled as a social network when it's actually more of a real-time information network. Yes, people make connections, but they tend to connect based on shared interests and location rather than existing friendships. You don't follow your friends from high school, or others with whom you have nothing in common; you follow people who have something to say. And, more importantly for Google, Twitter is the most expansive, real-time, searchable window to the world today.

Twitter and Facebook both have things Google needs if it wants to move into the post-web world. Facebook has social relevance. Twitter has real-time information. But Facebook and Google view themselves as competitors. And while Google and Twitter once had an arrangement, that deal fell through, for reasons neither party will fully disclose.

People often say Google's previous CEO, Eric Schmidt, missed the boat on social. But in reality, where he missed the boat was by not inking a deal that could get Google what it needed to deliver answers in a post-PC, post-Web world. Which brings us to Google+.

Google+ solves Google's big problems, at least in theory. It delivers a social network—arguably better constructed than Facebook—that lets it understand the connections between people. It also lets Google tap into a stream of real-time data, and build a search system around that without having to worry that it will ever be left at the altar. And it does so much more, too! It has real-time photos, like Instagram. It has a video chat service, like Skype. It lets you see which businesses your friends recommend, like Yelp. It's a one-size-fits-all solution, and what's more, it's on the open Web. Perfect!

One problem: People don't really want to use it. They're already entrenched in other stuff. Many of Google's recent actions can be explained by understanding that dilemma. Google wants to know things about you that you aren't already telling it, so that you will keep asking it questions and it can keep serving ads against those questions. So it feels like it has to herd people into using Google+ whether they want to go there or not.

This explains why Google has been driving privacy advocates crazy and polluting its search results. It explains why now, on the Google homepage, there's a big ugly black bar across the top that reminds you of all its properties. It explains the glaring red box with the meaningless numbers that so desperately begs you to come see what's happening in its anti-social network. It explains why Google is being a bully. It explains why Google broke search: Because to remain relevant it has to give real-world answers.

Google has to get you under its tent, and break down all the silos between its individual products once you're there. It needs you to reveal your location, your friends, your history, your desires, your finances; nothing short of your essence. And it needs to combine all that knowledge together. That's Search Plus Your World. "Your World" is not just your friends, or your location. It's your everything. The breadth of information Google wants to collect and collate is the stuff of goosebumps.

And the thing is, Google's going to get it. All of it.

The question is not if Google will be able to do this. Of course it will. It doesn't have to build better products, it just has to force enough people into them. It will leverage everything it has—and it already is—to squeeze more information from us. The question is: should we be okay with that?

Perversely, some of the things Google has been doing to get us in that tent, and get that information from us, are the very things that suggest we may want to stay outside and keep our mouths shut.

What is Evil?

The only reason anyone uses the word evil about Google is that Google asked us to. When it said that it wasn't evil, it immediately invited an argument.

It is actually quite difficult for a corporation to be evil in the traditional sense of the word. There are outliers most of us might call evil—Enron or arms merchants—that may operate outside the law. There are a small number of companies, like Monsanto, Dow Chemical, or Goldman Sachs, that have done serious damage to our planet or society. But it starts to get subjective pretty quickly. You might think a company with abhorrent labor practices is evil. Or one that is a large polluter. Or logs old-growth forests. Or provides abortions. Or doesn't provide abortions.

Evil is different things to different people.

Which is why it is ultimately not a very useful way of thinking about things. Evil is subjective. So perhaps, instead of asking what we mean by evil, we should focus on what Google meant by "evil."

Fortunately, this is on the record; they said it, and we wrote it down.

Josh McHugh's January 2003 story about Google could have been written today. It identifies all the major problems Google faced then, which are still, largely, the problems it faces today. But it does something else, too. It pins the company down on what, exactly, evil is.

Google's code of conduct can be boiled down to a mere three words: Don't be evil.
Very Star Wars. But what does it mean?
"Evil," says Google CEO Eric Schmidt, "is what Sergey says is evil."
As a private company, Google has one master: users. As a public company, there are shareholders to worry about. And more than happy users, shareholders want ever-greater profits.

If Brin's code of good and evil permits the company to negotiate with sovereign governments and allows for some legal meddling from unpopular religions, there is no wiggle room—no gray area whatsoever—when it comes to those who attempt to subvert the power of Google to their own commercial ends. One thing Brin is sure of: On the side of evil lies trickery.

I ask Brin to imagine, for a moment, running his company's evil twin, a sort of anti-Google. "We would be doing things like having advertising that wasn't marked as being paid for. Stuff that violates the trust of the users," he says, describing a site that sounds not unlike the pay-for-placement search site Overture. "Say someone came looking for breast cancer information and didn't know that some listings were paid for with money from drug companies. We'd be endangering people's health."

I've taken those passages from several different places in the story, and the emphasis is mine. But the points are quite clear. In the past year—and especially the past six months—Google has unquestionably, and to an unprecedented extent, violated its users' trust. And of course the great irony is that the subversion of Google's power, the ultimate trickery, came not from an external force, but from Google itself. Google has spent much of 2011 and 2012 getting called out for all kinds of nasty, brutish behavior. Here are a few small but telling examples of that trickery:

  • Google subverted mobile Safari's default protections to track users in ways they did not agree to be tracked. And lied about it, as the Wall Street Journal reported: "The findings appeared to contradict some of Google's own instructions to Safari users on how to avoid tracking."
  • Google began promoting its own products in search over more obviously relevant ones. It placed Google+ profiles above those that are obviously more relevant on other social networks. Its Places frequently appear above the actual location listings.
  • Google has increasingly given prominence to ads over results. If you use an 11" MacBook Air, for example, and search for a generalized term like "music," your small screen will be full of ads—you will have to scroll to find search results.
  • Google falsely claimed it couldn't effectively index and rank Twitter.
  • Google illegally accepted ads for Canadian pharmacies with the purpose of delivering them to American users.
  • Google seems to have committed overt fraud in Kenya.
So yes, evil is different things to different people. But if we use Google's definition of evil, if we believe evil to be subverting the power of Google's information delivery system to a commercial end, tricking users and violating their trust, well...

While this is a problem today, it will be an even bigger one tomorrow.

What Now? Your Data. Your Privacy. Your Choices. Your Future.

Google has, for many years, essentially said "trust us." It's in its Founders' Letters, its mission statements, even on its About page. Google has pledged, both overtly and in suggestive ways, to not be evil by making a great product and putting users first.

You could argue that it still does both of those things. That its attempts to move beyond Search are just attempts to provide a better product. You could see these actions as taking a long-term view. You could point to Google's Data Liberation policy—which is a fine policy—that lets you take all of your data with you if you decide to leave Google. And that is, in its own weird engineer-centric way, a user-friendly thing to do. This is certainly the case right now, but if Google is OK with changing its course on one of its core values, how long will these policies and directions last?

What happens if, ten years from now, Google drastically changes again? Will you still be able to wipe yourself from Google's drives? Will there be a massive or an incremental policy shift? Will it secretly keep bits of you, just as it has secretly tracked bits of you, against your wishes? If Google is already going back on some of its initial promises, what comes next? If it can break one, can't it break them all?

What Google seems to have forgotten is that we were only willing to give it all that data in the first place because it gave us great products and seemed trustworthy.

Google has forgotten why we loved it. It has degraded its premier product in service of promoting others. It has done devious things to ferret out information from its users that they do not willingly provide. It is too focused on the future, and conversely too scared of current competition.

Many years ago, when Google was embroiled in its first major privacy scandal—over the outcry that its robots would read the content of user emails in its then nascent and publicly unavailable Gmail service—I argued that this was no big deal. That scanning for content did not equal reading. That we shouldn't be scared of Google. I trusted Google. A few years later, I was told by an executive at the company that this story was seen as a turning point in the debate.

Google is far bigger now, and far less susceptible to the whims of the public. But I hope that, to some extent, it is still listening. Because the case against Google is for the first time starting to outweigh the case for it.

Google may have to get us to use Google+ if it wants to remain relevant. But it should be able to go about that in a fundamentally honest fashion.

If it can't keep its promises, if it can't avoid resorting to trickery, if it can't keep itself from subverting the power of its search engine for commercial ends, and on top of all that if it can't even deliver the highest quality search results at a default setting—the most basic thing people have come to expect from Google, the very thing its name has become synonymous with—why should you trust it with your personal data?

That's a question that we'll all have to answer for ourselves.
