The Strange Story of Google's Selfish Ledger

Google | Jan 8, 2022

If you need Google to run your life, this is definitely for you.

At one time, not too many years ago, Google’s top brass developed an idea for pushing the rest of us to change society, as follows:

The video was obtained and published on Thursday by The Verge. It describes a so-called “Selfish Ledger” that would collect all of your data, including actions you make on your phone, preference settings, and decisions you make, and not just keep it there for future evaluation. Instead, the ledger, which would be designed and managed by Google, would interpret that information and guide you down a path towards reaching a goal, or on a broader scale, doing your part to help solve poverty or other societal problems.

In one example, the video describes how the ledger would ask you to create a life goal. It would then tell you what kind of activities to engage in to achieve that goal. So, for instance, if you want to lose weight, the ledger would see that you’re shopping for food on your phone and direct you to buy a healthier option. The video even suggests that some of the recommendations would “reflect Google’s values as an organization” to get you to reduce your carbon footprint. - DON REISINGER

Yum. And we thought the internet would just allow us to stay in touch with Grandma and cousin George…

Of course, it could also be used to reshape our thoughts:

According to a short film uncovered by The Verge called “The Selfish Ledger,” Google had been thinking about using “total data collection” and social engineering to modify the behavior of entire populations. The nine-minute video examines the possibilities of using Big Data to guide users into conforming to a predetermined agenda. While the video does take a “for the common good” slant by using thought control techniques to solve problems like poverty and global warming, the mere fact that the video is seriously discussing behavior modification on a massive global scale is scary. - CAL JEFFREY

So when governments want Google to account for itself, keep this in mind:

Google has built a multibillion-dollar business out of knowing everything about its users. Now, a video produced within Google and obtained by The Verge offers a stunningly ambitious and unsettling look at how some at the company envision using that information in the future.

The video was made in late 2016 by Nick Foster, the head of design at X (formerly Google X) and a co-founder of the Near Future Laboratory. The video, shared internally within Google, imagines a future of total data collection, where Google helps nudge users into alignment with their goals, custom-prints personalized devices to collect more data, and even guides the behavior of entire populations to solve global problems like poverty and disease. - VLAD SAVOV

That said, the future could be dumber than the past:

And while in theory there are more “choices” and “flexibility” available than ever, in practice these are winner-take-all platforms, with the default choices and settings dominating user behavior. Google can return tens of millions of results for a search, but most users won’t leave the first page. Essentially random suggestions to users can become self-fulfilling prophecies, as Wired reported of the obscure 1988 climbing memoir Touching the Void, which by 2004 had become a hit due to Amazon’s recommendation algorithm. - JON ASKONAS

By Mind Matters News

CONTINUED:

Google’s Hypothetical ‘Selfish Ledger’ Imagines Collecting All Your Data to Push You to Change Society

A couple of years ago, Alphabet’s X “moonshot factory” conjured up a concept that describes how total and absolute data collection could be used to shape the decisions you make. And now a video about that concept has leaked online.

The video was obtained and published on Thursday by The Verge. It describes a so-called “Selfish Ledger” that would collect all of your data, including actions you make on your phone, preference settings, and decisions you make, and not just keep it there for future evaluation. Instead, the ledger, which would be designed and managed by Google, would interpret that information and guide you down a path towards reaching a goal, or on a broader scale, doing your part to help solve poverty or other societal problems.

In one example, the video describes how the ledger would ask you to create a life goal. It would then tell you what kind of activities to engage in to achieve that goal. So, for instance, if you want to lose weight, the ledger would see that you’re shopping for food on your phone and direct you to buy a healthier option. The video even suggests that some of the recommendations would “reflect Google’s values as an organization” to get you to reduce your carbon footprint.

While it’s unclear how Google would go about creating the technology, the implications could be major. The ledger would essentially collect everything there is to know about you, your friends, your family, and everything else. It would then try to move you in one direction or another for your or society’s apparent benefit. Privacy concerns and whether people would feel comfortable with a single company swaying public opinion would obviously come about if the ledger were ever pitched as an actual product.

By Don Reisinger

CONTINUED:

The Selfish Ledger: Google's dystopian vision of populace control through 'total data collection'

Machines that know our wants and needs even before we do may be able to twist us to their hidden agendas

It is interesting to see how concerned tech companies like Facebook, Twitter, and Google have become about user privacy. Of course, we know that the impetus for this drive for confidentiality comes primarily from the fallout surrounding the Cambridge Analytica scandal and the EU’s General Data Protection Regulation (GDPR), which goes into effect May 25.

Still, the general feel from notices companies have been sending out is that they have always been concerned with our data privacy, but they just want to be more transparent about it now. The big talking point on data collection is that they (Big Tech) only want to use our data to “improve the user experience.” This altruistic point of view sounds good, but we all know that the data is being used to make money.

But what if there was an ulterior motive to Big Tech’s data collection — something that did not involve improving our experience or making money? What if it was secretly looking to control the world? Well, at least one company was, or at least had put a lot of thought into it.

According to a short film uncovered by The Verge called “The Selfish Ledger,” Google had been thinking about using “total data collection” and social engineering to modify the behavior of entire populations. The nine-minute video examines the possibilities of using Big Data to guide users into conforming to a predetermined agenda. While the video does take a “for the common good” slant by using thought control techniques to solve problems like poverty and global warming, the mere fact that the video is seriously discussing behavior modification on a massive global scale is scary.

The film was made in 2016 by Google X (now just X) head of design Nick Foster and fellow researcher David Murphy for internal use at Google. In it, Foster envisions a future where massive amounts of data are collected on users and stored in what he refers to as a “ledger.” The ledger contains a user's “actions, decisions, preferences, movements, and relationships.”

"We understand if this is disturbing — it is designed to be."

Artificial intelligence will analyze this data. If the ledger finds a gap in the information that it needs to understand the user better, it will search for a device that the target might have that could contain the missing piece. If one is not found, the AI could use historical information on the user to design, propose, and deliver a custom product they might want via 3D printing. While the product will be for the use of the consumer, it will also be capable of supplying the ledger with the information it requires.

It is an alarming prospect that one would expect from a dystopian novel, but not from real life.

When asked about the film, an X spokesperson told The Verge, “We understand if this is disturbing — it is designed to be. This is a thought-experiment by the Design team from years ago that uses a technique known as ‘speculative design’ to explore uncomfortable ideas and concepts in order to provoke discussion and debate. It’s not related to any current or future products.”

I find this explanation somewhat hard to swallow. I know of no company that spends time, resources, and money on research that it has no intention of acting upon. Firms big and small are always looking for a return on investment. If the company knows there is little or no ROI, it abandons the idea. Does X expect us to believe that the ROI for The Selfish Ledger was only an internal philosophical discussion amongst employees over coffee?

Google has made considerable strides in the field of artificial intelligence. Although I remain skeptical that it was real, its Duplex AI, demoed at I/O, seemed to pass the Turing test by fooling people into thinking they were talking with a human. Having a so-called “ledger” of user data capable of self-analysis with an agenda of behavior manipulation is not a big stretch.

Concerns over privacy are at the forefront right now, but once the fervor dies down, I can see Google revisiting the possibilities of The Selfish Ledger in the near future trying to find that ROI.

By Cal Jeffrey

CONTINUED:

Google's Selfish Ledger is an Unsettling Vision of Silicon Valley Social Engineering

This internal video from 2016 shows a Google concept for how total data collection could reshape society

Google has built a multibillion-dollar business out of knowing everything about its users. Now, a video produced within Google and obtained by The Verge offers a stunningly ambitious and unsettling look at how some at the company envision using that information in the future.

The video was made in late 2016 by Nick Foster, the head of design at X (formerly Google X) and a co-founder of the Near Future Laboratory. The video, shared internally within Google, imagines a future of total data collection, where Google helps nudge users into alignment with their goals, custom-prints personalized devices to collect more data, and even guides the behavior of entire populations to solve global problems like poverty and disease.

When reached for comment on the video, an X spokesperson provided the following statement to The Verge:

“We understand if this is disturbing -- it is designed to be. This is a thought-experiment by the Design team from years ago that uses a technique known as ‘speculative design’ to explore uncomfortable ideas and concepts in order to provoke discussion and debate. It’s not related to any current or future products.”

All the data collected by your devices, the so-called ledger, is presented as a bundle of information that can be passed on to other users for the betterment of society.

Titled The Selfish Ledger, the 9-minute film starts off with a history of Lamarckian epigenetics, which is broadly concerned with the passing on of traits acquired during an organism’s lifetime. Narrating the video, Foster acknowledges that the theory may have been discredited when it comes to genetics but says it provides a useful metaphor for user data. (The title is an homage to Richard Dawkins’ 1976 book The Selfish Gene.) The way we use our phones creates “a constantly evolving representation of who we are,” which Foster terms a “ledger,” positing that these data profiles could be built up, used to modify behaviors, and transferred from one user to another:

“User-centered design principles have dominated the world of computing for many decades, but what if we looked at things a little differently? What if the ledger could be given a volition or purpose rather than simply acting as a historical reference? What if we focused on creating a richer ledger by introducing more sources of information? What if we thought of ourselves not as the owners of this information, but as custodians, transient carriers, or caretakers?”

The so-called ledger of our device use — the data on our “actions, decisions, preferences, movement, and relationships” — is something that could conceivably be passed on to other users much as genetic information is passed on through the generations, Foster says.

Resolutions by Google, the concept for a system-wide setting that lets users pick a broad goal and then directs their everyday actions toward it.

Building on the ledger idea, the middle section of the video presents a conceptual Resolutions by Google system, in which Google prompts users to select a life goal and then guides them toward it in every interaction they have with their phone. The examples, which would “reflect Google’s values as an organization,” include urging you to try a more environmentally friendly option when hailing an Uber or directing you to buy locally grown produce from Safeway.

An example of a Google Resolution superimposing itself atop a grocery store’s shopping app, suggesting a choice that aligns with the user’s expressed goal.
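
The mechanics being imagined are simple enough to sketch. Below is a minimal, purely illustrative reranking of shopping suggestions toward a user-declared goal, in the spirit of the Resolutions concept; the product names, scores, and goal labels are all invented for the example and bear no relation to any actual Google system.

    # A toy "goal-aligned" reranker, loosely mirroring the Resolutions concept.
    # Every name and number here is hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Product:
        name: str
        carbon_score: float   # lower = smaller footprint (made-up metric)
        health_score: float   # higher = healthier (made-up metric)

    def rerank(products, goal):
        """Order suggestions so options matching the user's stated goal come first."""
        if goal == "reduce_carbon_footprint":
            return sorted(products, key=lambda p: p.carbon_score)    # greenest first
        if goal == "eat_healthier":
            return sorted(products, key=lambda p: -p.health_score)   # healthiest first
        return products                                              # no goal: untouched

    catalog = [
        Product("imported beef", carbon_score=9.0, health_score=4.0),
        Product("frozen pizza", carbon_score=5.0, health_score=3.0),
        Product("local lentils", carbon_score=1.5, health_score=8.0),
    ]

    for p in rerank(catalog, "reduce_carbon_footprint"):
        print(p.name)   # local lentils, frozen pizza, imported beef

Trivial as the sorting is, everything of consequence lives in who defines the goal and the scoring, which is precisely the concern the video raises.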

Of course, the concept is premised on Google having access to a huge amount of user data and decisions. Privacy concerns or potential negative externalities are never mentioned in the video. The ledger’s demand for ever more data might be the most unnerving aspect of the presentation.

Foster envisions a future where “the notion of a goal-driven ledger becomes more palatable” and “suggestions may be converted not by the user but by the ledger itself.” This is where the Black Mirror undertones come to the fore, with the ledger actively seeking to fill gaps in its knowledge and even selecting data-harvesting products to buy that it thinks may appeal to the user. The example given in the video is a bathroom scale because the ledger doesn’t yet know how much its user weighs. The video then takes a further turn toward anxiety-inducing sci-fi, imagining that the ledger may become so astute as to propose and 3D-print its own designs. Welcome home, Dave, I built you a scale.

A conceptual cloud processing node that is analyzing user information and determining the absence of a relevant data point; in this case, user weight.
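
For readers who want the gap-seeking idea in concrete terms, here is a minimal sketch of what detecting a missing data point and mapping it to a data-gathering product could look like. The field names and the product mapping are invented for illustration; the video describes this behavior only at the concept level.

    # Hypothetical sketch: a "ledger" flags fields it has no value for and maps
    # each gap to a device that could supply the missing data.
    user_ledger = {
        "age": 34,
        "daily_steps": 6200,
        "weight_kg": None,     # unknown: the gap in the video's example
        "sleep_hours": None,
    }

    # Invented mapping from a missing data point to a product that could fill it
    gap_to_product = {
        "weight_kg": "connected bathroom scale",
        "sleep_hours": "sleep-tracking wearable",
    }

    missing = [field for field, value in user_ledger.items() if value is None]
    proposals = [gap_to_product[f] for f in missing if f in gap_to_product]

    print("missing fields:", missing)         # ['weight_kg', 'sleep_hours']
    print("proposed purchases:", proposals)   # ['connected bathroom scale', ...]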

Foster’s vision of the ledger goes beyond a tool for self-improvement. The system would be able to “plug gaps in its knowledge and refine its model of human behavior” — not just your particular behavior or mine, but that of the entire human species. “By thinking of user data as multigenerational,” explains Foster, “it becomes possible for emerging users to benefit from the preceding generation’s behaviors and decisions.” Foster imagines mining the database of human behavior for patterns, “sequencing” it like the human genome, and making “increasingly accurate predictions about decisions and future behaviours.”

“As cycles of collection and comparison extend,” concludes Foster, “it may be possible to develop a species-level understanding of complex issues such as depression, health, and poverty.”

A central tenet of the ledger is the accumulation of as much data as possible, with the hope that at some point, it will yield insights about major global problems.

Granted, Foster’s job is to lead design at X, Google’s “moonshot factory” with inherently futuristic goals, and the ledger concept borders on science fiction — but it aligns almost perfectly with attitudes expressed in Google’s existing products. Google Photos already presumes to know what you’ll consider life highlights, proposing entire albums on the basis of its AI interpretations. Google Maps and the Google Assistant both make suggestions based on information they have about your usual location and habits. The trend with all of these services has been toward greater inquisitiveness and assertiveness on Google’s part. Even email compositions are being automated in Gmail.

At a time when the ethics of new technology and AI are entering the broader public discourse, Google continues to be caught unawares by the potential ethical implications and downsides of its products, as seen most recently with its demonstration of the Duplex voice-calling AI at I/O. The outcry over Duplex’s potential to deceive prompted Google to add the promise that its AI will always self-identify as such when calling unsuspecting service workers.

The Selfish Ledger positions Google as the solver of the world’s most intractable problems, fueled by a distressingly intimate degree of personal information from every user and an ease with guiding the behavior of entire populations. There’s nothing to suggest that this is anything more than a thought exercise inside Google, initiated by an influential executive. But it does provide an illuminating insight into the types of conversations going on within the company that is already the world’s most prolific personal data collector.

By Vlad Savov

CONTINUED:

How Tech Utopia Fostered Tyranny

Authoritarians’ love for digital technology is no fluke — it’s a product of Silicon Valley’s “smart” paternalism.

The rumors spread like wildfire: Muslims were secretly lacing a Sri Lankan village’s food with sterilization drugs. Soon, a video circulated that appeared to show a Muslim shopkeeper admitting to drugging his customers — he had misunderstood the question that was angrily put to him. Then all hell broke loose. Over a several-day span, dozens of mosques and Muslim-owned shops and homes were burned down across multiple towns. In one home, a young journalist was trapped and perished.

Mob violence is an old phenomenon, but the tools encouraging it, in this case, were not. As the New York Times reported in April, the rumors were spread via Facebook, whose newsfeed algorithm prioritized high-engagement content, especially videos. “Designed to maximize user time on site,” as the Times article describes, the newsfeed algorithm “promotes whatever wins the most attention. Posts that tap into negative, primal emotions like anger or fear, studies have found, produce the highest engagement, and so proliferate.” On Facebook in Sri Lanka, posts with incendiary rumors had among the highest engagement rates, and so were among the most highly promoted content on the platform. Similar cases of mob violence have taken place in India, Myanmar, Mexico, and elsewhere, with misinformation spread mainly through Facebook and the messaging tool WhatsApp.

This is in spite of Facebook’s decision in January 2018 to tweak its algorithm, apparently to prevent the kind of manipulation we saw in the 2016 U.S. election, when posts and election ads originating from Russia reportedly showed up in newsfeeds of up to 126 million American Facebook users. The company explained that the changes to its algorithm will mean that newsfeeds will be “showing more posts from friends and family and updates that spark conversation,” and “less public content, including videos and other posts from publishers or businesses.” But these changes, which Facebook had tested out in countries like Sri Lanka in the previous year, may actually have exacerbated the problem — which is that incendiary content, when posted by friends and family, is guaranteed to “spark conversation” and therefore to be prioritized in newsfeeds. This is because “misinformation is almost always more interesting than the truth,” as Mathew Ingram provocatively put it in the Columbia Journalism Review.
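
To make the dynamic concrete: an engagement-ranked feed can be caricatured in a few lines. The posts, counts, and weighting below are invented, and real newsfeed ranking uses vastly more signals, but the sketch shows why restricting the pool to friends and family does not, by itself, demote the most incendiary item.

    # Caricature of engagement ranking; all posts and weights are made up.
    posts = [
        {"author": "friend",    "text": "incendiary rumor",   "reactions": 900, "shares": 400},
        {"author": "friend",    "text": "family photo",       "reactions": 120, "shares": 5},
        {"author": "publisher", "text": "fact-check article", "reactions": 60,  "shares": 10},
    ]

    def engagement(post):
        # Assume shares count for more than reactions (an arbitrary weighting)
        return post["reactions"] + 3 * post["shares"]

    def rank_feed(posts, friends_only=False):
        pool = [p for p in posts if p["author"] == "friend"] if friends_only else posts
        return sorted(pool, key=engagement, reverse=True)

    # Even with publisher content demoted in favor of friends and family,
    # the highest-engagement post (here, the rumor) still tops the feed.
    for post in rank_feed(posts, friends_only=True):
        print(engagement(post), post["text"])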

How did we get here, from Facebook’s mission to “give people the power to build community and bring the world closer together”? Riot-inducing “fake news” and election meddling are obviously far from what its founders intended for the platform. Likewise, Google’s founders surely did not build their search engine with the intention of its being censored in China to suppress free speech, and yet, after years of refusing this demand from Chinese leadership, Google has recently relented rather than pull its search engine from China entirely. And YouTube’s creators surely did not intend their feature that promotes “trending” content to help clickbait conspiracy-theory videos go viral.

These outcomes — not merely unanticipated by the companies’ founders but outright opposed to their intentions — are not limited to social media. So far, Big Tech companies have presented issues of incitement, algorithmic radicalization, and “fake news” as merely bumps on the road of progress, glitches and bugs to be patched over. In fact, the problem goes deeper, to fundamental questions of human nature. Tools based on the premise that access to information will only enlighten us and social connectivity will only make us more humane have instead fanned conspiracy theories, information bubbles, and social fracture. A tech movement spurred by visions of libertarian empowerment and progressive uplift has instead fanned a global resurgence of populism and authoritarianism.

Despite the storm of criticism, Silicon Valley has still failed to recognize in these abuses a sharp rebuke of its sunny view of human nature. It remains naïvely blind to how its own aspirations for social engineering are on a spectrum with the tools’ “unintended” uses by authoritarian regimes and nefarious actors.

AI Persuasion

The digital utopian dream of our age looks something like the 2016 concept video created by a Google R&D lab for a never-released product called the Selfish Ledger. The video was obtained in May by The Verge, which described it as “an unsettling vision of Silicon Valley social engineering.” Borrowing from Richard Dawkins’s notion of the “selfish gene,” the Selfish Ledger would be a self-help product on steroids, combining Google’s cornucopia of personal data with artificial-intelligence tools whose sole aim was to help you meet your goals.

Want to lose weight? Google Maps might prioritize smoothie shops or salad places when you search for “fast food.” Want to reduce your carbon footprint? Google might help you find vacation options closer to home or prioritize locally grown foods in the groceries that Google Express delivers to your doorstep. When the program needs more information than Google’s data banks can provide, it might suggest you buy a sensor, such as an Internet-connected scale or Google’s new AI-powered wearable camera. Or, if the needed product is not on the market, it might even suggest a design and 3D-print it.

The program is “selfish” in that it stubbornly pursues the self-identified goal the user gives it. But, the video explains, further down the road “suggestions may be converted not by the user but by the ledger itself.” And beyond individual self-help, by surveilling users over space and time Google would develop a “species-level understanding of complex issues such as depression, health, and poverty.”

The idea, according to a lab spokesperson, was meant only as a “thought-experiment … to explore uncomfortable ideas and concepts in order to provoke discussion and debate.” But the slope from Google’s original product — the seemingly value-neutral search engine — to the social engine of the Selfish Ledger is slipperier than one might think. The video’s vision of a smart Big Brother follows quite naturally from the company’s founding mission “to organize the world’s information and make it universally accessible and useful.” As Adam White recently wrote in these pages (“Google.gov,” Spring 2018), “Google has always understood its ultimate project not as one of rote descriptive recall but of informativeness in the fullest sense.”

After plucking the low-hanging fruit of web search, Google’s engineers began creating predictive search technologies like “autocomplete” and search results tailored to individual users based on their search histories. But what we are searching for — what we desire — is often shaped by what we are exposed to and what we believe others desire. And so predicting what is useful, however value-neutral this may sound, can shade into deciding what is useful, both to individual users and to groups, and thereby shaping what kinds of people we become, for both better and worse.

The moral nature of usefulness becomes even clearer when we consider that our own desires are often in conflict. Someone may say he wants to have a decent sleep schedule, and yet his desire to watch another YouTube video about “deep state” conspiracy theories may get the better of him. Which of these two conflicting desires is the truer one? What is useful in this case, and what is good for him? Is he searching for conspiracy theories to find the facts of the matter, or to get the informational equivalent of a hit of cocaine? Which is more useful? What we wish for ourselves is often not what we do; the problem, it seemed to Walker Percy, is that modern man above all wants to know who he is and should be.

YouTube’s recommendation feature has helped to radicalize users through feedback loops — not only, again, by helping clickbait conspiracy videos go viral, but also by enticing users to view more videos like the ones they’ve already looked at, thus encouraging the user merely intrigued by extremist ideas to become a true diehard. Yet this result is not a curious fluke of the preference-maximizing vision, but its inevitable fruition. As long as our desires are unsettled and malleable — as long as we are human — the engineering choices of Google and the rest must be as much acts of persuasion as of prediction.
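
The feedback loop is easy to simulate. The toy recommender below simply favors topics already present in the watch history; the video titles and single-topic tags are invented, and actual recommendation systems are far more elaborate, but the narrowing behavior is the same in kind.

    # Toy similarity recommender: each watch makes more of the same topic likelier.
    videos = {
        "cute cats": "pets",
        "news roundup": "news",
        "mild conspiracy": "conspiracy",
        "extreme conspiracy": "conspiracy",
        "deep-dive conspiracy": "conspiracy",
    }

    def recommend(history, k=1):
        """Score unseen videos by how often their topic already appears in history."""
        topic_counts = {}
        for title in history:
            topic_counts[videos[title]] = topic_counts.get(videos[title], 0) + 1
        unseen = [t for t in videos if t not in history]
        return sorted(unseen, key=lambda t: topic_counts.get(videos[t], 0), reverse=True)[:k]

    history = ["mild conspiracy"]              # one curious click...
    for _ in range(2):
        history.append(recommend(history)[0])  # ...and the loop feeds itself
    print(history)   # ['mild conspiracy', 'extreme conspiracy', 'deep-dive conspiracy']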

California Streamin’

The digital mindset of precisely measuring, analyzing, and ever more efficiently fulfilling our individual desires is of course not unique to Google. It pervades all of the Big Tech companies whose products give them access to massive amounts of user data, including also Facebook, Microsoft, Amazon, and to some extent Apple. Each company was founded on a variation of the premise that providing more people with more information and better tools, and helping them connect with each other, would help them lead better, freer, richer lives.

This vision is best understood as a descendant of the California counterculture, another way of extending decentralized, bottom-up power to the people. The story is told in Fred Turner’s 2006 book From Counterculture to Cyberculture: Stewart Brand, the Whole Earth Network, and the Rise of Digital Utopianism. Turner writes that Stewart Brand, erstwhile editor of the counterculture magazine Whole Earth Catalog, “suggested that computers might become a new LSD, a new small technology that could be used to open minds and reform society.” Indeed, Steve Jobs took the name “Apple Computer” from his time in an acid-infused commune at an Oregon apple orchard.

Not coincidentally, the tech giants are now investing heavily in using artificial intelligence to provide customized user experiences — not the information that is most useful to people in general, but to individual users.* The AI assistant is the culmination of utopian aspiration and shareholder value, a kind of techno-savvy guardian angel that perfectly and mysteriously knows how to meet your requests and sort your infinitely scrollable feed of search results, products, and friend updates, just for you. In the process, these companies run headfirst into the impossibility of separating the supposedly value-neutral criterion of usefulness from the moral aims of personal and social transformation.

For at the foundation of the digital revolution there was a hidden tension. First through personal computing and then through the Internet, the revolutionaries offered, as Brand’s Whole Earth Catalog put it, “access to tools.” A precious few users today grasp and take advantage of the full promise of networked computers to build ever more useful applications and tools. Instead, the vast majority spend their time and resources on only a few functions on a few platforms, consuming entertainment, searching for information, connecting with friends, and buying products or services.

And while in theory there are more “choices” and “flexibility” available than ever, in practice these are winner-take-all platforms, with the default choices and settings dominating user behavior. Google can return tens of millions of results for a search, but most users won’t leave the first page. Essentially random suggestions to users can become self-fulfilling prophecies, as Wired reported of the obscure 1988 climbing memoir Touching the Void, which by 2004 had become a hit due to Amazon’s recommendation algorithm.

Moreover, because algorithms are subject to strategic manipulation and because they are attempting to provide results unique to you, the choices shaping these powerful defaults are necessarily hidden away by platforms demanding you simply trust them. Ever since its founding, Google has had to keep its search algorithm’s specific preferences secret and constantly re-adjust them to foil enterprising marketers trying to boost their profits at the expense of what users actually want. Every other Big Tech company has followed suit. As results have become more personalized, it becomes increasingly difficult to specify why, exactly, your newsfeed might differ from a friend’s; the complex math behind it creates a black box that is “optimized” for some indiscernible set of metrics. Tech companies demand you simply trust the choices they make about how they manipulate results.

Much of the politics of Silicon Valley is explained by this Promethean exchange: gifts of enlightenment and ease in exchange for some measure of awe, gratitude, and deference to the technocratic elite that manufactures them. Algorithmic utopianism is at once optimistic about human motives and desires and paternalistic about humans’ cognitive ability to achieve their stated preferences in a maximally rational way. Humans, in other words, are mostly good and well-intentioned but dumb and ignorant. We rely on poor intuitions and bad heuristics, but we can overcome them through tech-supplied information and cognitive adjustment. Silicon Valley wants to debug humanity, one default choice at a time.

We can see the shift from “access to tools” to algorithmic utopianism in the unheralded, inexorable replacement of the “page” by the “feed.” The web in its earliest days was “surfed.” Users actively explored what was interesting to them, shifting from page to page via links and URLs. While certain homepages — such as AOL or Yahoo! — were important, they were curated by actual people and communities. Most devoted “webizens” spent comparatively little time on them, instead exploring the web based on memory, bookmarks, and interests. Each blog, news source, store, and forum had its own site. Where life on the Internet didn’t follow traditional editorial curation, it was mostly a do-it-yourself affair: Creating tools that might show you what your friends were up to, gathering all the information you cared about in one place, or finding new sites were rudimentary and tedious activities.

The feed was the solution to the tedium of surfing the web, of always having to decide for yourself what to do next. Information would now come to you. Gradually, the number of sites involved in one’s life online dwindled, and the “platform” emerged, characterized by an infinite display of relevant information — the feed. The first feeds used fairly simple algorithms, but the algorithms have grown vastly more complex and personalized over time. These satisfaction-fulfillment machines are designed to bring you the most “relevant” content, where relevancy is ultimately based on an elaborate and opaque model of who you are and what you want. But the opacity of these models, indeed the very personalization of them, means that a strong element of faith is required. By consuming what the algorithm says I want, I trust the algorithm to make me ever more who it thinks I already am.
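
Here, too, a sketch may help fix ideas: personalization can be caricatured as scoring each item against an opaque per-user profile and sorting the feed by that score. The profile weights and item features below are invented; the point is that the ranking is learned about you rather than chosen by you, and a different profile vector yields a different feed.

    # Caricature of personalized relevance: a dot product against a learned profile.
    # All weights and features are invented for illustration.
    user_profile = {"outrage": 0.8, "local_news": 0.3, "cooking": 0.1}   # learned, not chosen

    items = {
        "angry political post": {"outrage": 0.9, "local_news": 0.2, "cooking": 0.0},
        "zoning board update":  {"outrage": 0.1, "local_news": 0.9, "cooking": 0.0},
        "soup recipe":          {"outrage": 0.0, "local_news": 0.0, "cooking": 1.0},
    }

    def relevance(profile, features):
        return sum(profile.get(k, 0.0) * v for k, v in features.items())

    feed = sorted(items, key=lambda name: relevance(user_profile, items[name]), reverse=True)
    print(feed)   # ['angry political post', 'zoning board update', 'soup recipe']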

In this process, users have gone from active surfers to sheep feeding at the algorithmic trough. Over time, platforms have come up with ever more sophisticated means of inducing behavior, both online and in real life, using AI-fueled notifications, messages, and default choices to nudge you in the right direction, ostensibly toward your own maximum satisfaction. Yet now, in order to rein in the bad behaviors the feeds themselves have encouraged — fake news, trolling, and so on — these algorithms have increasingly become the sites of stealthy intervention, using tweaks like “shadowbanning,” “down-ranking,” and simple erasure or blocking of users to help determine what information people do and don’t access, and thereby to subtly shape their minds.

Facebook in the Wild

Big Tech companies have thus married a fundamentally expansionary approach to information-gathering to a woeful naïveté about the likely uses of that technology. Motivated by left-liberal utopian beliefs about human progress, they are building technologies that are easily, naturally put to authoritarian and dystopian ends. While the Mark Zuckerbergs and Sergey Brins of the world claim to be shocked by the “abuse” of their platforms, the softly progressive ambitions of Silicon Valley and the more expansive visions of would-be dictators exist on the same spectrum of invasiveness and manipulation. There’s a sense in which the authoritarians have a better idea of what this technology is for.

Wasn’t it rosy to assume that the main uses of the most comprehensive, pervasive, automated surveillance and behavioral-modification technology in human history would be reducing people’s carbon footprints and helping them make better-informed choices in city council races? It ought to have been obvious that the new panopticon would be as liable to cut with the grain as against it, to become in the wrong hands a tool not for ameliorating but exploiting man’s natural capacity for error. Of the two sides, cheer for Dr. Jekyll, but bet on Mr. Hyde.

In recent years, two related problems have been shattering Silicon Valley’s dreams of progress. The first problem is that people have stubbornly refused to be debugged and empowered. Google hoped to provide users with more “useful” information, but if you already know what you want to believe, Google exaggerates confirmation bias by feeding you more of what you want to hear. Facebook wanted to help people connect with their friends, share experiences, and learn from each other, but it turns out that people often pick the friends they want to engage with based on whether they care about the same things, leading the newsfeed algorithm to produce a custom-built echo chamber. Amazon stocks a wider selection of books than any store in history, but suggests them to you based on your search history and previous purchases, eliminating the cultivated, mind-broadening randomness of the bookstore browse.

In a sense, people often use these technologies backwards from how they were intended. In each case, what at first blush seems like a great tool for building what sociologists call “bridging capital” — connections to our neighbors or people in different interest groups — has in fact done far more to build “bonding capital” — tighter interconnections with people who are already like us in important ways.

This gap between what these systems are for and how they are actually used is amplified by globalization. Big Tech, to use a term from psychological research, is “WEIRD” — Western, educated, industrialized, rich, and democratic. These products were initially built by and for college-educated, Western, urban users. Facebook, for example, helped earn its early cachet by being exclusively for Harvard students (before it was expanded to Stanford, Columbia, and Yale). This means that the design choices product engineers make, and the behaviors those choices are designed to elicit, are often intended for a much more limited set of users than the technology will encounter “in the wild.”

A London economist, an underemployed Brazilian, and a Pakistani shepherd might each respond to the same algorithmic design choices with vastly different behaviors — in both the digital and the real world. Each of these big systems is designed, in its own way, to maximize user engagement, but what content users engage with, and how, depends in large part on culture, class, and psychology.

For a WEIRD user working in journalism or politics, “user engagement” might mean an addiction to Twitter. For a teenage girl on Instagram, it might lead to anorexia and depression. Among Sri Lankan villagers, it was a recipe for “fake news,” overheated rhetoric, and riotous violence. As the New York Times article on the story explained, “Online outrage mobs will be familiar to any social media user. But in places with histories of vigilantism, they can work themselves up to real-world attacks.”

These technologies were based on a model in which users’ desires were crafted outside the system, and the purpose of the algorithms was to measure and meet those desires with ever greater efficiency. The designers did not imagine the algorithms themselves shaping users by feeding their basest impulses, turning the high of a notification ping into whatever behaviors result in more pings — snarkier tweets, sexier pictures, or more feverish posts. The engineering choices that have made these technologies so compelling and addictive have also made it completely implausible that they would fulfill their founders’ noble ambitions. Like Dr. Frankenstein, Big Tech’s creators in no way control their creations.

Surveillance State, Made in U.S.A.

Thus we arrive at the second problem besetting Big Tech: Malicious actors, authoritarian regimes chief among them, are sophisticated adopters and promoters of the information revolution. How distant now seem the halcyon days of the Arab Spring, when commentators could argue that Facebook and Twitter presented an existential threat to dictatorships everywhere. In reality, authoritarian regimes the world over quickly learned to love technologies that enticed their subjects into carrying around listening devices and putting their innermost thoughts online.

Big Brother can read tweets too, which is why China’s massive surveillance system includes monitoring social media. Slowing down Internet traffic, as Iran has apparently done, turns out to be an even more effective source of censorship than outright blocking of websites — accessing information becomes a matter of great frustration instead of forbidden allure. Before Russian troll farms were aimed at American Facebook users, they were found to be useful at home for stirring up anti-American sentiments and defending Russia’s aggressions in Ukraine.

By pulling so much of social life into cyberspace, the information revolution has made dissent more visible, manageable, and manipulable than ever before. Hidden public anger, the ultimate bête noire of many a dictator, becomes more legible to the regime. Activating one’s own supporters, and manipulating the national conversation, become easier as well. Indeed, the information revolution has been a boon to the police state. It used to be incredibly manpower-intensive to monitor videos, accurately take and categorize images, analyze opposition magazines, track the locations of dissidents, and appropriately penalize enemies of the regime. But now, tools that were perfected for tagging your friends in beach photos, categorizing news stories, and ranking products by user reviews are the technological building blocks of efficient surveillance systems. Moreover, with big data and AI, regimes can now engage in what is sometimes called “smart repression” — exerting just the right amount of force and nudging, at the lowest possible cost, to pull subjects into line. The computational counterculture’s promise of “access to tools” and “people power” has, paradoxically, contributed to mass surveillance and oppression.

What’s shocking isn’t that technological development is a two-edged sword. It’s that the power of these technologies is paired with a stunning apathy among their creators about who might use them and how. Google employees have recently declared that helping the Pentagon with a military AI program is a bridge too far, convincing the company to let its Project Maven contract lapse and to withdraw from bidding on the $10 billion JEDI cloud contract. But at the same time, Google, Apple, and Microsoft, committed to the ideals of open-source software and collaboration toward technological progress, have published machine-learning tools for anyone to use, including agents provocateurs and revenge pornographers.

In 2017 researchers from the tech company Nvidia published an algorithm for realistically modifying video, for example to turn a winter scene into a summer scene. Within months, as Motherboard reported, an anonymous Internet hobbyist had developed a similar technology to create and release software for swapping faces in videos with high fidelity. While the intent was (inevitably) pornographic, the political implications of the technology were immediately recognized, as in a BuzzFeed video of a fake announcement by former President Obama. Recently, IBM announced the creation of a free database of over one million racially diverse facial images to help train facial recognition algorithms and reduce bias. One wonders whether the Uighur people arrested by the Chinese government with the help of facial recognition technology are grateful that they weren’t discriminated against.

Silicon Valley’s tech founders envisioned a world where information technology directly contributed to an increasingly democratic society, characterized by decentralization, a do-it-yourself attitude, and an independence of thought associated with both their brand of Sixties counterculture and a deeper American tradition. They and their successors, based on optimistic assumptions about human nature, built machines to maximize those naturally good human desires. But, to use a line from Bruno Latour, “technology is society made durable.” That is, to extend Latour’s point, technology stabilizes in concrete form what societies already find desirable.

The counterculture’s humanism has long been overthrown by dreams of maximizing satisfaction, metrics, profits, “knowledge,” and connection, a task now to be given over to the machines. The emerging soft authoritarianism in Silicon Valley’s designs to stoke our desires will go hand in hand with a hard authoritarianism that pushes these technologies toward their true ends.


* One must qualify that much of what is today called “artificial intelligence” is little more than traditional regression analysis, the basic technique taught in introductory statistics courses, but on an unprecedented scale and presence in daily life. None of this technology approaches the conscious, adaptive, reflective capacities often associated with the term, the kind we would find in 2001: A Space Odyssey’s HAL 9000 or Star Trek’s Mr. Data. The labeling of these techniques as “artificial intelligence” arises in part from the ideological aspirations of Silicon Valley and in part from its overhyped marketing, and so ought to be resisted. But for the sake of critique we will adopt it here.

By Jon Askonas
