I don’t think it’s an overstatement to say that social media has a deep-rooted grip on society. Many people from all walks of life use social media to connect, share, endorse, and grow their own online communities. I’ll start by saying that isn’t inherently evil; however, apps like Google, Facebook, Instagram, Snapchat, Twitter, and YouTube are exploiting human attention for their own profit. What does that mean? A 2020 film called The Social Dilemma seeks to answer this question.
I watched this movie for the second time over the weekend, and it’s even scarier just months after my first viewing. The movie looks into mega technology companies such as Facebook and Google to see how their apps and software affect real life. It calls upon former executives, designers, and employees from these companies to explain how social media in particular has mutated from what these apps were intended for into what they’re actually being used for. One of the interviewees, Tristan Harris, is a former Google design ethicist and the co-founder of a group called the Center for Humane Technology.
A handful of minutes into the movie, Tristan gives credence to something many people feel today, this idea of abnormality. He asks, “Is this normal? Or have we all fallen under some type of spell?” Indeed, when it comes to technology, mobile devices and social media apps do feel normal; they have been part of our lives for years. While Tristan was working at Google, he began to raise concerns among his co-workers about the addictiveness of Google’s own applications, like Gmail. He prepared a presentation, sent it out to trusted friends and fellow employees, and it even reached his higher-ups. He recalls thinking this was the start of a revolution. In the end, however, the hype was short-lived, and everyone went back to business as usual with zero changes to existing or future technology.
Another interviewee, Tim Kendall, worked at Facebook in its early stages, where his job was to figure out how Facebook would monetize. An advertising-based business model seemed the most viable choice for such a business. Unfortunately, what happened, as he describes it, is that companies started “selling their users” to the highest bidder for views, clicks, and shares of advertisements. This mutated into an entire multi-trillion-dollar market trading in human futures: predictions of what a user will do next on social media apps.
What this selling of human futures looks like, the process by which these companies make their money, is simple. A user’s data is stored in huge, deeply interconnected computer systems. Those computers run algorithms so complex that their output can feel like the work of a “sentient” machine. The algorithms build a model of each and every person connected to the web, or, in the context of the movie, to social media, and use that model to predict and manipulate humans into never being away from their devices.
If that sounds crazy, that’s because it is. When you look at your phone, on the other end there is essentially an artificial intelligence feeding you content based on your internet footprint to keep you scrolling. Search “global warming is…” on Google, and depending on what you’ve expressed interest in through sharing, liking, and commenting, the suggested completion could be anything from “global warming is real” to “global warming is fake.” That might not sound like a big deal, but consider billions of people using these platforms and the polarization such subtle nudges can create.
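To make that idea a bit more concrete, here is a toy sketch, purely hypothetical and written in Python, of what “rank the feed by predicted engagement” can look like. None of the names or numbers come from a real platform; the point is only that a crude model of past behavior is enough to decide what you see next.

```python
# Toy illustration of engagement-based ranking (hypothetical, not any real platform's code).
# Each post gets a score from a simple "model" of the user's past behavior; the feed
# then shows whatever that model predicts will keep the user scrolling longest.

from dataclasses import dataclass

@dataclass
class Post:
    topic: str
    outrage_factor: float  # 0.0-1.0, how provocative the post is

# A crude "user model": how often this user engaged with each topic in the past.
user_interest = {"politics": 0.9, "sports": 0.2, "cooking": 0.4}

def predicted_engagement(post: Post) -> float:
    # Predicted engagement = past interest in the topic, boosted by provocativeness.
    interest = user_interest.get(post.topic, 0.1)
    return interest * (1.0 + post.outrage_factor)

feed = [
    Post("cooking", 0.1),
    Post("politics", 0.8),
    Post("sports", 0.3),
]

# Rank the feed so the most "engaging" (often the most polarizing) content comes first.
for post in sorted(feed, key=predicted_engagement, reverse=True):
    print(post.topic, round(predicted_engagement(post), 2))
```

Even in this cartoon version, the feed naturally drifts toward whatever is most provocative, because provocative content is what the “model” predicts will hold attention.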
As long as this technology continues on its current path, there will be more polarization and less of a shared grasp on reality, because online, reality is whatever you want it to be: whatever the computers think will keep you plugged in, whatever nudge it takes to keep you using the technology. This has already led to real-world disasters, and the former employees who raised these concerns see it as an existential, humanitarian crisis.
For context, an app called “Stay Free” that monitors phone usage tells me I spend, on average, 7.5 hours a day on my phone. That’s 7.5 hours of my attention being mined for the gain of already multi-billion-dollar companies. So what’s the solution? How can this stop, or at least slow down, so we don’t end up in a civil war because some people believe the earth is flat, that Coronavirus can be cured by drinking bleach, or that global warming is a myth? Luckily, the film’s interviewees offer individual fixes that can have massive implications.
The first, obviously, is to delete the social media apps that eat up your time or are extraneous. Just as obviously, most of us are unlikely to ever do that. The next best thing is to limit phone usage. Perhaps start using tools like Qwant, a search engine that doesn’t store your data. On a more sensible level, do some research before sharing information to make sure a post is accurate and factual. You can surround yourself with perspectives you don’t agree with to better understand the “other side” and, hopefully, help reach mutual understanding. Turning off notifications and not clicking “recommended” videos are other ways to fight the mechanical algorithms.
The Social Dilemma tries to make sense of a nonsensical virtual reality in which profit-programmed machines trick users into scrolling on and on and submitting more of themselves. Regulation is archaic when it comes to privacy and internet ethics, so technology companies are under no obligation to stop this habitual manipulation. At the end of it all, the next ten, twenty, thirty years are going to be pivotal if we have any hope of changing these technologies before they swallow us whole.