The Chaos Machine - How Social Media Is Shaping Our World

February 22, 2023 - 9 minute read
Article Books Reading

Recently, I read a book called The Chaos Machine: The Inside Story of How Social Media Rewired Our Minds and Our World. The book discusses how social media platforms such as Facebook, TikTok, and Twitter have been shaping the way people live their lives. More importantly, they are influencing how people think.

There has always been a saying in the tech world: if you can use a product for free, you are the product. This applies to social media perfectly. Facebook, Twitter, YouTube, and TikTok are all free to use for anyone with a smartphone and Internet access. People can scroll through them whenever they wish, for however long they want. Meanwhile, these companies appear to make huge profits from nothing. How? By selling ads. To whom? Their users. As long as the ads keep selling, the imagination on how to deliver them is limitless. The main technique for pushing ads to targeted users is machine learning. A machine that can learn sounds very fancy, but the reality is the opposite. Up to the date this book was published, engineers and scientists still had not figured out how machine learning, or deep learning to be exact, actually works. The way machine learning is done is purely feeding raw data to machines and letting them figure it out by themselves; there is hardly any mathematical rigor in the process. Among computer scientists, there has always been a debate over whether this is legitimate, whether it should be used in practice, and how much it should be applied before we fully understand how it works. From a corporation's perspective, however, if one company does not use it, others will, and if others do, they will take the profits. Since using machine learning blindly can generate large amounts of profit, corporations care little about anything beyond their own earnings at the end of each year.
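To make the "feeding raw data to machines" point concrete, here is a minimal sketch of how such a model is typically trained. This is my own illustration, not anything from the book and not any platform's real system; the feature names and data are made up. Notice that the code never states a rule about who should see an ad; it only nudges weights until its predictions match past clicks.

```python
import numpy as np

# Toy click predictor trained by gradient descent. No rule about what makes an
# ad clickable is written anywhere; the weights become whatever the data yields.
# Features and labels below are synthetic placeholders, purely for illustration.

rng = np.random.default_rng(0)

# Each row is one (user, ad) pair, e.g. [minutes_on_app, past_clicks, topic_match]
X = rng.random((1000, 3))
# Label: 1 if the user clicked the ad, 0 otherwise (synthetic)
y = (X @ np.array([0.5, 2.0, 1.5]) + rng.normal(0, 0.3, 1000) > 2.0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(3)
b = 0.0
lr = 0.1
for _ in range(2000):                # repeat: predict, measure error, nudge weights
    p = sigmoid(X @ w + b)           # predicted click probability
    grad_w = X.T @ (p - y) / len(y)  # gradient of the log-loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

# The model now "knows" whom to target, but the weights explain nothing about why.
# At real scale (deep networks, billions of rows) even this much is opaque.
print("learned weights:", w)
```

Even in this tiny example, the "knowledge" lives entirely in three learned numbers; scale that up to a deep network with billions of parameters and it becomes the black box the book describes.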

It does not look that bad at the beginning. Those are just some ads. People are not going to die because they watched too many ads, and they can turn off their phones at any point. Since we are focusing on social media here, let us set aside the cases where people actually can die because of machine learning, as with Tesla. However, things do not look good as time goes on. People can indeed threaten others' lives on social media under the sophisticated manipulation of its algorithm. In computer science, an algorithm is the procedure a machine follows to carry out a task. In this case, the algorithm is the procedure an app follows to push a news feed to its users. This targeting can be dangerous if no one knows how to contain it. As we see every day, people live their lives on their phones. There is no point in judging that, but we have to ask ourselves: what exactly makes these apps so addictive? People who judge others for being unable to put their phones down should instead be asking whether these apps are purposely made to be extremely addictive. Users can easily spend hours on them without actually doing anything. Why are they made this way? Because the longer users stay on their screens, the more ads they see, and the more profit these apps make. So the algorithm's ultimate goal is not helping users find what they were looking for in the first place, but making them stay as long as they can.
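As a toy illustration of that objective, and only that (this is my own sketch, not the book's and not any real feed), a ranking step that optimizes for time on screen can be as simple as sorting candidate posts by predicted watch time instead of by how well they answer what the user came for:

```python
from dataclasses import dataclass

# A toy feed ranker: the score is predicted minutes of watch time, not how well
# a post matches what the user was actually looking for. All values are invented.

@dataclass
class Post:
    title: str
    relevance: float          # how well it matches what the user searched for
    predicted_minutes: float  # how long the model expects the user to stay on it

def rank_feed(candidates: list[Post]) -> list[Post]:
    # Relevance is never consulted; only expected time on screen matters.
    return sorted(candidates, key=lambda p: p.predicted_minutes, reverse=True)

candidates = [
    Post("How to set up a fish tank, step by step", relevance=0.9, predicted_minutes=4.0),
    Post("I set up a fish tank and you won't believe what happened", relevance=0.3, predicted_minutes=11.0),
    Post("Outrageous takes on fish keeping (part 1 of 9)", relevance=0.2, predicted_minutes=18.0),
]

for post in rank_feed(candidates):
    print(post.predicted_minutes, post.title)
```

The useful tutorial comes last, not because anyone decided it should, but because nothing in the objective rewards it.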

YouTube is an excellent example. The old YouTube, if you still remember those days, was created for people to share their knowledge. Whenever you wanted to learn something, such as how to set up a fish tank, you could go to YouTube and watch tutorials made by people who had been keeping fish tanks for years. These tutorials were sometimes filmed at poor resolution, but you learned what you wanted to learn and could build a fish tank after watching. Now, setting aside the unbearable ads at the start if you don't pay for the subscription, tutorials have turned into how to look cool while setting up a fish tank. Content creators are more concerned with whether they look good and are dressed well than with whether they are actually teaching viewers the knowledge they need. The camera is good, the resolution is perfect, and the host is very cool while setting up the fish tank. But you still don't know how to set up a fish tank after watching, even though you enjoyed it. After all, who doesn't like pretty creatures being cute in front of us? And why did YouTube end up like this? Because this way, people keep watching. That is all the algorithm cares about: how to keep users watching as long as possible. Whether or not users found what they wanted does not matter.

The algorithm understands this too well. It doesn't care whether the user actually found what they wanted at all; what it cares about is how to surface more videos like this so users keep watching. At the end of the day, only creators whose videos keep users watching are picked up by the algorithm, no matter how worthless those videos are, and creators who make actual content are pushed out of business. Some creators don't understand why their videos are not trending even though they are high quality, and this is the reason: your videos are not addictive. People's worst behaviors train the algorithm, and the algorithm trains the content creators. Now you might be wondering what exactly is running these social media platforms.

The algorithm may be merely ignorant, but it can still cause actual harm to real people. The book mentions that if you searched for who Michelle Obama is, a link would show up in your search results leading to a page claiming she is actually a man. Conspiracy, the most dangerous and addictive topic in this world, is training the algorithm. That page drove a lot of traffic, which came from, I hope, people who were curious about what kind of lie this was, the way you might go to the cinema just to see how bad a movie really is. However, the algorithm does not understand any of this; it only knows that saying Michelle Obama is a man generates lots of traffic, a.k.a. profits. So, let us push whatever relates to this idea to users. Did the algorithm do anything wrong from its own perspective? No. The companies that chose to build the algorithm this way are wrong. The companies that chose to use a technology without understanding it are wrong. Because they don't understand how it works or why it works, they cannot fix it and they cannot build it better.

Anger is probably the most powerful emotion social media uses to manipulate its users, as this book points out. If happiness is individual, being angry is a group activity. People wish to be understood, and they also want to understand each other; if a single emotion can glue people together, even better. Let us say you find out that someone killed a cat, and you post it online with an angry comment: "What kind of horrible person would do this?" Your friends see your post and resonate with your anger immediately. They share it with more and more people. Soon you have an army driven by the same emotion, anger. However, is this really necessary? Does it actually solve any real problem? No. You just have a group of people sharing the same emotion with you. And it feels good.

Being able to feel angry is a good thing, but we should be careful about where anger can lead us on social media. In recent years, more and more content on social media has been built on anger: racism, gender issues, sexual orientation, and much more. It gives us the impression that horrible people are everywhere and that it is hard to find anything nice and peaceful, either online or offline. But if you talk to your neighbors, things are not that bad. They may have flaws, but they are just normal human beings. It feels like there are two worlds: one online, full of hatred, and the other the real world, which is not that bad. What makes us feel this way? This is how social media generates traffic. Nice and good things don't drive traffic; borderline offensive content does. When people see horrible things, they resonate, they complain, they comment, and they share. All of these actions generate traffic and profit. Even worse, the more distant social media makes users feel from the real world, the longer they stay online and labor for it for free. Of course, the algorithm figured this out right away and immediately started promoting this type of content everywhere.

Things got out of control around the 2016 election. At that point, it felt like this country had never been so divided. People were fed up with all the content on social media, and there were Internet wars going on every single day. You saw something that offended you, you clicked on it, watched the ads no matter how long they took, commented "this is not okay", and shared it with others. I wonder, if we let the algorithm develop without control for longer, what kind of thing it will become and what type of content it will feed its users.

Besides manipulation from the corporate side, it is also very easy for an ordinary user to manipulate others on the platform. To be honest, the price of spreading lies on the Internet is approximately zero. I could lie that I work at Google, work only one hour per day while earning six figures, have no coding background or bachelor's degree, and learned everything through online coding classes. There are no ads in my post, and I am not selling classes. Nevertheless, more people would start searching for how to learn to code online, and then I could feed them my ads through search engines. After all, who doesn't want a six-figure salary for five hours of work per week? No one cares whether this is true, since searching for something online costs nothing. No one would expose my lie either, since real Googlers are busy working and boot camps are busy collecting money thanks to my lie. That example looks relatively harmless, since the worst case is that someone loses some money. But what if a country's policy is influenced by liars online? What about an election? What if a country puts fake users online to spread misleading information about others it doesn't like? The algorithm will not stop them, and it will probably help promote them.

After this long discussion of how evil social media has become these days, you might be wondering: do we still stand a chance? What can we do? At the end of the day, we as individuals cannot do much. There is no point in fighting big corporations to convince them to change the way they operate social media as long as they profit from it. However, there are some actions we can choose to take. Reduce, or even eliminate, the time spent on social media, and start talking to others in real life. Change the way we consume information so that we are not targeted by social media algorithms. Other ways of receiving information include radio, books, offline group discussions, and even just our friends. Although these channels are also driven by some sort of "algorithm" and can be biased, the good thing is that we know how those algorithms work. Newspapers like to publish things that drive sales, and publishers select books they think will trend. But there is a major difference between these algorithms and social media algorithms: these are operated by real human selection. Instead of machines, there is always human judgment behind the selection of news or books. We understand how it works; we are not blind. Our friends select things close to our tastes to share with us; a business newspaper selects stock articles; Playboy selects beautiful nude pictures; radical radio stations select gender-issue topics; right-wing groups select topics aimed against other countries. No matter how much we disagree with some of these sources, we can appreciate this kind of human involvement in today's age. They are trustworthy because they show us how they operate. Machine learning, on the contrary, is a black box, and we have all seen the result of recklessly using something we don't understand.