2019-02-07 Prologue

Author: 界境 | Published 2019-02-07 12:21

    November 9, 2016

    “The Russians used Facebook to tip the election!”

    "俄罗斯人利用 Facebook 来操纵选举!"

    So began my side of a conversation the day after the presidential election. I was speaking with Dan Rose, the head of media partnerships at Facebook. If Rose was taken aback by how furious I was, he hid it well.

    Let me back up. I am a longtime tech investor and evangelist. Tech had been my career and my passion, but by 2016, I was backing away from full-time professional investing and contemplating retirement. I had been an early advisor to Facebook founder Mark Zuckerberg—Zuck, to many colleagues and friends—and an early investor in Facebook. I had been a true believer for a decade. Even at this writing, I still own shares in Facebook. In terms of my own narrow self-interest, I had no reason to bite Facebook’s hand. It would never have occurred to me to be an anti-Facebook activist. I was more like Jimmy Stewart in Hitchcock’s Rear Window. He is minding his own business, checking out the view from his living room, when he sees what looks like a crime in progress, and then he has to ask himself what he should do. In my case, I had spent a career trying to draw smart conclusions from incomplete information, and one day early in 2016 I started to see things happening on Facebook that did not look right. I started pulling on that thread and uncovered a catastrophe. In the beginning, I assumed that Facebook was a victim and I just wanted to warn my friends. What I learned in the months that followed shocked and disappointed me. I learned that my trust in Facebook had been misplaced.

    This book is the story of why I became convinced, in spite of myself, that even though Facebook provided a compelling experience for most of its users, it was terrible for America and needed to change or be changed, and what I have tried to do about it. My hope is that the narrative of my own conversion experience will help others understand the threat. Along the way, I will share what I know about the technology that enables internet platforms like Facebook to manipulate attention. I will explain how bad actors exploit the design of Facebook and other platforms to harm and even kill innocent people. How democracy has been undermined because of design choices and business decisions by internet platforms that deny responsibility for the consequences of their actions. How the culture of these companies causes employees to be indifferent to the negative side effects of their success. At this writing, there is nothing to prevent more of the same.

    This is a story about trust. Technology platforms, including Facebook and Google, are the beneficiaries of trust and goodwill accumulated over fifty years by earlier generations of technology companies. They have taken advantage of our trust, using sophisticated techniques to prey on the weakest aspects of human psychology, to gather and exploit private data, and to craft business models that do not protect users from harm. Users must now learn to be skeptical about products they love, to change their online behavior, to insist that platforms accept responsibility for the impact of their choices, and to push policy makers to regulate the platforms to protect the public interest.

    This is a story about privilege. It reveals how hypersuccessful people can be so focused on their own goals that they forget that others also have rights and privileges. How it is possible for otherwise brilliant people to lose sight of the fact that their users are entitled to self-determination. How success can breed overconfidence to the point of resistance to constructive feedback from friends, much less criticism. How some of the hardest working, most productive people on earth can be so blind to the consequences of their actions that they are willing to put democracy at risk to protect their privilege.

    This is also a story about power. It describes how even the best of ideas, in the hands of people with good intentions, can still go terribly wrong. Imagine a stew of unregulated capitalism, addictive technology, and authoritarian values, combined with Silicon Valley’s relentlessness and hubris, unleashed on billions of unsuspecting users. I think the day will come, sooner than I could have imagined just two years ago, when the world will recognize that the value users receive from the Facebook-dominated social media/attention economy revolution masked an unmitigated disaster for our democracy, for public health, for personal privacy, and for the economy. It did not have to be that way. It will take a concerted effort to fix it.

    When historians finish with this corner of history, I suspect that they will cut Facebook some slack about the poor choices that Zuck, Sheryl Sandberg, and their team made as the company grew. I do. Making mistakes is part of life, and growing a startup to global scale is immensely challenging. Where I fault Facebook—and where I believe history will, as well—is for the company’s response to criticism and evidence. They had an opportunity to be the hero in their own story by taking responsibility for their choices and the catastrophic outcomes those choices produced. Instead, Zuck and Sheryl chose another path.

    This story is still unfolding. I have written this book now to serve as a warning. My goals are to make readers aware of a crisis, help them understand how and why it happened, and suggest a path forward. If I achieve only one thing, I hope it will be to make the reader appreciate that he or she has a role to play in the solution. I hope every reader will embrace the opportunity.

    It is possible that the worst damage from Facebook and the other internet platforms is behind us, but that is not where the smart money will place its bet. The most likely case is that the technology and business model of Facebook and others will continue to undermine democracy, public health, privacy, and innovation until a countervailing power, in the form of government intervention or user protest, forces change.

    TEN DAYS BEFORE the November 2016 election, I had reached out formally to Mark Zuckerberg and Facebook chief operating officer Sheryl Sandberg, two people I considered friends, to share my fear that bad actors were exploiting Facebook’s architecture and business model to inflict harm on innocent people, and that the company was not living up to its potential as a force for good in society. In a two-page memo, I had cited a number of instances of harm, none actually committed by Facebook employees but all enabled by the company’s algorithms, advertising model, automation, culture, and value system. I also cited examples of harm to employees and users that resulted from the company’s culture and priorities. I have included the memo in the appendix.

    Zuck created Facebook to bring the world together. What I did not know when I met him but would eventually discover was that his idealism was unbuffered by realism or empathy. He seems to have assumed that everyone would view and use Facebook the way he did, not imagining how easily the platform could be exploited to cause harm. He did not believe in data privacy and did everything he could to maximize disclosure and sharing. He operated the company as if every problem could be solved with more or better code. He embraced invasive surveillance, careless sharing of private data, and behavior modification in pursuit of unprecedented scale and influence. Surveillance, the sharing of user data, and behavioral modification are the foundation of Facebook’s success. Users are fuel for Facebook’s growth and, in some cases, the victims of it.

    When I reached out to Zuck and Sheryl, all I had was a hypothesis that bad actors were using Facebook to cause harm. I suspected that the examples I saw reflected systemic flaws in the platform’s design and the company’s culture. I did not emphasize the threat to the presidential election, because at that time I could not imagine that the exploitation of Facebook would affect the outcome, and I did not want the company to dismiss my concerns if Hillary Clinton won, as was widely anticipated. I warned that Facebook needed to fix the flaws or risk its brand and the trust of users. While it had not inflicted harm directly, Facebook was being used as a weapon, and users had a right to expect the company to protect them.

    The memo was a draft of an op-ed that I had written at the invitation of the technology blog Recode. My concerns had been building throughout 2016 and reached a peak with the news that the Russians were attempting to interfere in the presidential election. I was increasingly freaked out by what I had seen, and the tone of the op-ed reflected that. My wife, Ann, wisely encouraged me to send the op-ed to Zuck and Sheryl first, before publication. I had been one of Zuck’s many advisors in Facebook’s early days, and I played a role in Sheryl’s joining the company as chief operating officer. I had not been involved with the company since 2009, but I remained a huge fan. My small contribution to the success of one of the greatest companies ever to come out of Silicon Valley was one of the true highlights of my thirty-four-year career. Ann pointed out that communicating through an op-ed might cause the wrong kind of press reaction, making it harder for Facebook to accept my concerns. My goal was to fix the problems at Facebook, not embarrass anyone. I did not imagine that Zuck and Sheryl had done anything wrong intentionally. It seemed more like a case of unintended consequences of well-intended strategies. Other than a handful of email exchanges, I had not spoken to Zuck in seven years, but I had interacted with Sheryl from time to time. At one point, I had provided them with significant value, so it was not crazy to imagine that they would take my concerns seriously. My goal was to persuade Zuck and Sheryl to investigate and take appropriate action. The publication of the op-ed could wait a few days.

    Zuck and Sheryl each responded to my email within a matter of hours. Their replies were polite but not encouraging. They suggested that the problems I cited were anomalies that the company had already addressed, but they offered to connect me with a senior executive to hear me out. The man they chose was Dan Rose, a member of their inner circle with whom I was friendly. I spoke with Dan at least twice before the election. Each time, he listened patiently and repeated what Zuck and Sheryl had said, with one important addition: he asserted that Facebook was technically a platform, not a media company, which meant it was not responsible for the actions of third parties. He said it like that should have been enough to settle the matter.

    Dan Rose is a very smart man, but he does not make policy at Facebook. That is Zuck’s role. Dan’s role is to carry out Zuck’s orders. It would have been better to speak with Zuck, but that was not an option, so I took what I could get. Quite understandably, Facebook did not want me to go public with my concerns, and I thought that by keeping the conversation private, I was far more likely to persuade them to investigate the issues that concerned me. When I spoke to Dan the day after the election, it was obvious to me that he was not truly open to my perspective; he seemed to be treating the issue as a public relations problem. His job was to calm me down and make my concerns go away. He did not succeed at that, but he could claim one victory: I never published the op-ed. Ever the optimist, I hoped that if I persisted with private conversations, Facebook would eventually take the issue seriously.

    I continued to call and email Dan, hoping to persuade Facebook to launch an internal investigation. At the time, Facebook had 1.7 billion active users. Facebook’s success depended on user trust. If users decided that the company was responsible for the damage caused by third parties, no legal safe harbor would protect it from brand damage. The company was risking everything. I suggested that Facebook had a window of opportunity. It could follow the example of Johnson & Johnson when someone put poison in a few bottles of Tylenol on retail shelves in Chicago in 1982. J&J immediately withdrew every bottle of Tylenol from every retail location and did not reintroduce the product until it had perfected tamperproof packaging. The company absorbed a short-term hit to earnings but was rewarded with a huge increase in consumer trust. J&J had not put the poison in those bottles. It might have chosen to dismiss the problem as the work of a madman. Instead, it accepted responsibility for protecting its customers and took the safest possible course of action. I thought Facebook could convert a potential disaster into a victory by doing the same thing.

    One problem I faced was that at this point I did not have data for making my case. What I had was a spidey sense, honed during a long career as a professional investor in technology.

    I had first become seriously concerned about Facebook in February 2016, in the run-up to the first US presidential primary. As a political junkie, I was spending a few hours a day reading the news and also spending a fair amount of time on Facebook. I noticed a surge on Facebook of disturbing images, shared by friends, that originated on Facebook Groups ostensibly associated with the Bernie Sanders campaign. The images were deeply misogynistic depictions of Hillary Clinton. It was impossible for me to imagine that Bernie’s campaign would allow them. More disturbing, the images were spreading virally. Lots of my friends were sharing them. And there were new images every day.

    I knew a great deal about how messages spread on Facebook. For one thing, I have a second career as a musician in a band called Moonalice, and I had long been managing the band’s Facebook page, which enjoyed high engagement with fans. The rapid spread of images from these Sanders-associated pages did not appear to be organic. How did the pages find my friends? How did my friends find the pages? Groups on Facebook do not emerge full grown overnight. I hypothesized that somebody had to be spending money on advertising to get the people I knew to join the Facebook Groups that were spreading the images. Who would do that? I had no answer. The flood of inappropriate images continued, and it gnawed at me.

    More troubling phenomena caught my attention. In March 2016, for example, I saw a news report about a group that exploited a programming tool on Facebook to gather data on users expressing an interest in Black Lives Matter, data that they then sold to police departments, which struck me as evil. Facebook banned the group, but not until after irreparable harm had been done. Here again, a bad actor had used Facebook tools to harm innocent victims.

    In June 2016, the United Kingdom voted to exit the European Union. The outcome of the Brexit vote came as a total shock. Polling had suggested that “Remain” would triumph over “Leave” by about four points, but precisely the opposite happened. No one could explain the huge swing. A possible explanation occurred to me. What if Leave had benefited from Facebook’s architecture? The Remain campaign was expected to win because the UK had a sweet deal with the European Union: it enjoyed all the benefits of membership, while retaining its own currency. London was Europe’s undisputed financial hub, and UK citizens could trade and travel freely across the open borders of the continent. Remain’s “stay the course” message was based on smart economics but lacked emotion. Leave based its campaign on two intensely emotional appeals. It appealed to ethnic nationalism by blaming immigrants for the country’s problems, both real and imaginary. It also promised that Brexit would generate huge savings that would be used to improve the National Health Service, an idea that allowed voters to put an altruistic shine on an otherwise xenophobic proposal.

    The stunning outcome of Brexit triggered a hypothesis: in an election context, Facebook may confer advantages on campaign messages based on fear or anger over those based on neutral or positive emotions. It does this because Facebook’s advertising business model depends on engagement, which can best be triggered through appeals to our most basic emotions. What I did not know at the time is that while joy also works, which is why puppy and cat videos and photos of babies are so popular, not everyone reacts the same way to happy content. Some people get jealous, for example. “Lizard brain” emotions such as fear and anger produce a more uniform reaction and are more viral in a mass audience. When users are riled up, they consume and share more content. Dispassionate users have relatively little value to Facebook, which does everything in its power to activate the lizard brain. Facebook has used surveillance to build giant profiles on every user and provides each user with a customized Truman Show, similar to the Jim Carrey film about a person who lives his entire life as the star of his own television show. It starts out giving users “what they want,” but the algorithms are trained to nudge user attention in directions that Facebook wants. The algorithms choose posts calculated to press emotional buttons because scaring users or pissing them off increases time on site. When users pay attention, Facebook calls it engagement, but the goal is behavior modification that makes advertising more valuable. I wish I had understood this in 2016. At this writing, Facebook is the fourth most valuable company in America, despite being only fifteen years old, and its value stems from its mastery of surveillance and behavioral modification.
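
    The engagement dynamic described above can be made concrete with a small sketch. The following Python is purely illustrative, not Facebook’s actual ranking code: the posts, probabilities, and weights are invented for the example. It shows how a feed ranked purely by predicted engagement ends up promoting fear- and anger-inducing content over calmer posts.

```python
# Minimal illustrative sketch (hypothetical data and weights throughout).
# The point: if outrage posts reliably draw more comments and shares,
# a ranker that maximizes predicted engagement surfaces them first,
# regardless of accuracy or civic value.

from dataclasses import dataclass

@dataclass
class Post:
    title: str
    p_click: float    # hypothetical predicted click probability
    p_comment: float  # hypothetical predicted comment probability
    p_share: float    # hypothetical predicted share probability

def engagement_score(post: Post) -> float:
    # Hypothetical weights: comments and shares keep users on the
    # site longest, so they dominate the score.
    return 1.0 * post.p_click + 3.0 * post.p_comment + 5.0 * post.p_share

feed = [
    Post("Local charity meets fundraising goal", 0.10, 0.02, 0.01),
    Post("THEY are coming for your neighborhood", 0.25, 0.12, 0.15),
    Post("City council publishes meeting minutes", 0.05, 0.01, 0.00),
]

# Sort by predicted engagement, highest first; the outrage post wins.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):.2f}  {post.title}")
```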

    When new technology first comes into our lives, it surprises and astonishes us, like a magic trick. We give it a special place, treating it like the product equivalent of a new baby. The most successful tech products gradually integrate themselves into our lives. Before long, we forget what life was like before them. Most of us have that relationship today with smartphones and internet platforms like Facebook and Google. Their benefits are so obvious we can’t imagine forgoing them. Not so obvious are the ways that technology products change us. The process has repeated itself in every generation since the telephone, including radio, television, and personal computers. On the plus side, technology has opened up the world, providing access to knowledge that was inaccessible in prior generations. It has enabled us to create and do remarkable things. But all that value has a cost. Beginning with television, technology has changed the way we engage with society, substituting passive consumption of content and ideas for civic engagement, digital communication for conversation. Subtly and persistently, it has contributed to our conversion from citizens to consumers. Being a citizen is an active state; being a consumer is passive. A transformation that crept along for fifty years accelerated dramatically with the introduction of internet platforms. We were prepared to enjoy the benefits but unprepared for the dark side. Unfortunately, the same can be said for the Silicon Valley leaders whose innovations made the transformation possible.

    If you are a fan of democracy, as I am, this should scare you. Facebook has become a powerful source of news in most democratic countries. To a remarkable degree, it has made itself the public square in which countries share ideas, form opinions, and debate issues outside the voting booth. But Facebook is more than just a forum. It is a profit-maximizing business controlled by one person. It is a massive artificial intelligence that influences every aspect of user activity, whether political or otherwise. Even the smallest decisions at Facebook reverberate through the public square the company has created, with implications for every person it touches. The fact that users are not conscious of Facebook’s influence magnifies the effect. If Facebook favors inflammatory campaigns, democracy suffers.

    August 2016 brought a new wave of stunning revelations. Press reports confirmed that Russians had been behind the hacks of servers at the Democratic National Committee (DNC) and Democratic Congressional Campaign Committee (DCCC). Emails stolen in the DNC hack were distributed by WikiLeaks, causing significant damage to the Clinton campaign. The chairman of the DCCC pleaded with Republicans not to use the stolen data in congressional campaigns. I wondered if it were possible that Russians had played a role in the Facebook issues that had been troubling me earlier.

    Just before I wrote the op-ed, ProPublica revealed that Facebook’s advertising tools enabled property owners to discriminate based on race, in violation of the Fair Housing Act. The Department of Housing and Urban Development opened an investigation that was later closed, but reopened in April 2018. Here again, Facebook’s architecture and business model enabled bad actors to harm innocent people.

    Like Jimmy Stewart in the movie, I did not have enough data or insight to understand everything I had seen, so I sought to learn more. As I did so, in the days and weeks after the election, Dan Rose exhibited incredible patience with me. He encouraged me to send more examples of harm, which I did. Nothing changed. Dan never budged. In February 2017, more than three months after the election, I finally concluded that I would not succeed in convincing Dan and his colleagues; I needed a different strategy. Facebook remained a clear and present danger to democracy. The very same tools that made Facebook a compelling platform for advertisers could also be exploited to inflict harm. Facebook was getting more powerful by the day. Its artificial intelligence engine learned more about every user. Its algorithms got better at pressing users’ emotional buttons. Its tools for advertisers improved constantly. In the wrong hands, Facebook was an ever-more-powerful weapon. And the next US election—the 2018 midterms—was fast approaching.

    Yet no one in power seemed to recognize the threat. The early months of 2017 revealed extensive relationships between officials of the Trump campaign and people associated with the Russian government. Details emerged about a June 2016 meeting in Trump Tower between inner-circle members of the campaign and Russians suspected of intelligence affiliations. Congress spun up Intelligence Committee investigations that focused on that meeting.

    But still there was no official concern about the role that social media platforms, especially Facebook, had played in the 2016 election. Every day that passed without an investigation increased the likelihood that the interference would continue. If someone did not act quickly, our democratic processes could be overwhelmed by outside forces; the 2018 midterm election would likely be subject to interference, possibly greater than we had seen in 2016. Our Constitution anticipated many problems, but not the possibility that a foreign country could interfere in our elections without consequences. I could not sit back and watch. I needed some help, and I needed a plan, not necessarily in that order.
