Skynet is Already Here

But not in the way you think…

Fernando Villalba
6 min read · Apr 10, 2023

AI is very far away from being sentient or human-like

Sentient AI is not coming soon, not this decade, not next, and I am willing to wager your grandchildren won’t see it either. Don’t get me wrong, ChatGPT and other AI tools are extremely impressive, but they are not human intelligence; they are not even animal intelligence — they are glorified automation.

The hyped narrative misses one small detail: our brain is immensely complex, optimized for creative thinking, and we understand very little about how it works. Instead, we have devised models that mimic one narrow facet of what we call intelligence, and now we are projecting our fears onto them because they act a bit like a human. But they aren't, not by a long shot.

Even making a computer as smart as your cat is impossible today, and it will still be impossible a decade from now. Yes, your cat can't do maths or spit out rehashed GitHub code, but it can negotiate obstacles, display affection when hungry, and catch flies with razor-sharp precision. Your cat also learns and can be creative. If you don't believe it, look at how my cats figured out how to get food out of this bag I unwittingly left out; I certainly didn't teach them that!

My cats certainly found a way to the food

AI is showing us what human intelligence is not.

Recently there has been a lot of buzz about how GPT-4 is so great at passing examinations, with people claiming that this is it, AGI is here, and soon machines will be able to do everything we do.

I’ve always disliked categorizing people’s intelligence by how well they do on IQ tests or solve math problems. Anything a computer can do better than you is not the quality that best demonstrates what human intelligence has to offer. In fact, countries world-renowned for having the highest IQ test scores, like China and Singapore, are not exactly known for being the most creative; you could even make a case that China has an atrocious record with creativity.

The education system in these countries favors a form of learning heavily optimized to pass exams and do well academically, but not to be creative or think outside the box. I’ve also met people who went to top universities like Cambridge and were exceptionally academically capable, great at exactly the tasks ChatGPT excels at, but not so good at being creative.

The problem with measuring intelligence is that once you devise a system to measure anything, you can build an automation that targets exactly the metrics you are testing; that’s what ML excels at: highly sophisticated automation. Humans, on the other hand, are biologically optimized to be creative and to deal well with unpredictable patterns.
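To make that concrete, here is a toy sketch in Python (everything in it is invented for illustration): a “student” that simply memorizes the answer key aces the exam it was optimized for, then collapses the moment the same questions are rephrased.

```python
# Toy illustration: optimizing for a metric is not the same as understanding.
# A "student" that memorizes question -> answer pairs scores perfectly on the
# exam it trained on, and falls apart on trivially rephrased questions.

exam = {
    "What is 2 + 2?": "4",
    "Capital of France?": "Paris",
    "Boiling point of water in C?": "100",
}

# "Training": pure memorization of the test set -- the metric is gamed.
memorized = dict(exam)

def answer(question: str) -> str:
    return memorized.get(question, "no idea")

def score(questions: dict) -> float:
    correct = sum(answer(q) == a for q, a in questions.items())
    return correct / len(questions)

print(score(exam))  # 1.0 -- perfect score on the metric we optimized

rephrased = {
    "What do you get when you add 2 and 2?": "4",
    "Which city is the capital of France?": "Paris",
}
print(score(rephrased))  # 0.0 -- same knowledge required, metric collapses
```

The student scored 100% on the metric and understood nothing; that gap, between passing a test and being intelligent, is the whole point.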

So while I think it’s really cool and helpful that ML can pass bar exams, I also feel this should be a chance to redefine how we measure intelligence more accurately, and to stop making the false assumption that because ML is good at this, it must be sentient or more intelligent than we are.

But we are losing our jobs!

Some people will lose their jobs, but many others will see theirs transformed. AI will take away much of the drudgery because it is a highly competent, dynamic template generator for our daily work; it will supercharge our creativity because we won’t be worn out by the tedious parts of our jobs and can focus on what matters.

For example, an artist can use an AI to generate multiple images for inspiration or to use as a base for her work, the same way coders can use it to create programs that they can then modify and use as a springboard for something greater.

The job losses could be chaotic, but ultimately I expect the advent of AI to maximize human output and creativity.

So where is Skynet?

The real danger AI poses is not that it will become sentient but that we will trust it blindly, and in some cases, we already do!

I used to work for banks that use ML algorithms to detect fraud. These algorithms were right most of the time, but when they weren’t, the bank still froze customer funds or even closed accounts. When customers asked why, the bank would refuse to say. Why? Because answering meant revealing how the algorithm worked, exposing chinks in the armor that malicious actors could exploit. Customers would be in tears, unable to access their money; sometimes they would wait weeks to get it back, and even then their account would be closed, because the bank refused to risk overriding the algorithm or adding exceptions for one person.
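To sketch that dynamic (every name, field, and threshold below is made up; this is not how any real bank scores fraud), the decision is automatic and the reasoning is withheld by design:

```python
# Hypothetical sketch of an opaque fraud rule -- not any real bank's system.
# The point: the decision is automatic, and the reason is never shared.

from dataclasses import dataclass

@dataclass
class Transaction:
    account_id: str
    amount: float
    country: str

def fraud_score(tx: Transaction, home_country: str) -> float:
    """Made-up scoring: big amounts and foreign countries look 'risky'."""
    score = 0.0
    if tx.amount > 2_000:
        score += 0.6
    if tx.country != home_country:
        score += 0.5
    return score

def handle(tx: Transaction, home_country: str = "UK") -> str:
    if fraud_score(tx, home_country) >= 0.8:
        # Funds frozen; explaining why would "reveal how the algorithm works".
        return f"Account {tx.account_id} frozen. Reason: [not disclosed]"
    return "Approved"

print(handle(Transaction("A-123", 2_500.0, "ES")))
# -> Account A-123 frozen. Reason: [not disclosed]
```

Nothing in that flow is malicious or sentient; it is a threshold and an if-statement, and yet it can lock a person out of their own money with no appeal.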

This trend didn’t hit me personally until an AI ruined a week of my life when I was buying audiobooks on Audible. I placed a large order, and the algorithm flagged me as a hacker and canceled my transaction. I repeatedly told customer support that it was really me placing the orders, but they said there was nothing they could do. After triggering the algorithm three times, I got a scary email from Amazon telling me my account was closed and could not be recovered. I did eventually recover it, but it was a little terrifying how far out of my control the whole thing was.

In the end, after much arguing and even sending emails to the CEOs of Audible and Amazon, Jeff Bezos himself, to no avail, I had to bow to the algorithm and place orders the way it wanted: conservatively, buying very few books at a time. Audible could have added an exception for me or capped my purchases before the algorithm triggered, but they did nothing. They bowed to the machine, so I had to do the same.

Now imagine a world where an algorithm decides what’s acceptable for us to say and do in every area of our life. It won’t be sentient, but people believe in it so blindly that whatever it comes up with becomes the truth. Apply that to politics, ethics, money, and society; the result is scary.

This is happening now. YouTube, Facebook, Twitter, and others recommend content based on your engagement. If you lean slightly right or left in your political views, the algorithm will keep suggesting content that matches your ideology, reinforcing it further. If you are prone to believing conspiracy theories, it’s even worse; the algorithm will bombard you with everything sensational and untrue. It doesn’t care whether your mind stays sound or your view of the world gets skewed; it only cares about keeping you engaged.
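Here is a tiny simulation of that feedback loop (all numbers are invented): the recommender always serves whatever the user has engaged with most, and a mild initial lean turns into a heavy skew with no sentience involved.

```python
# Tiny simulation of an engagement-driven feedback loop (all numbers invented).
# The recommender always serves the topic the user engaged with most;
# engagement with the served topic grows, so a mild lean becomes extreme.

import random

random.seed(42)

# Slight initial lean: 55% of past engagement on topic A, 45% on topic B.
engagement = {"A": 55, "B": 45}

for step in range(1000):
    # Recommend whichever topic has more engagement so far.
    served = max(engagement, key=engagement.get)
    # Users mostly click what they are shown (say, 90% of the time).
    if random.random() < 0.9:
        engagement[served] += 1
    else:
        other = "B" if served == "A" else "A"
        engagement[other] += 1

total = sum(engagement.values())
print({t: round(n / total, 2) for t, n in engagement.items()})
```

With these made-up numbers, the 55/45 split ends up around 87/13 after a thousand rounds. Nothing “decided” that; the loop did.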

The algorithm and those behind it can also determine what’s factual and what isn’t and will shape how you think and what you can say — that’s the real Skynet. The platform owners can make it shadowban content that doesn’t align with their bias, age-restrict it, or make it harder to find. If you are a content creator, you are incentivized to create content that caters to whatever narrative the algorithm deems acceptable, further shaping society’s thoughts. I often wonder how much we can attribute the rise of flat earthers, extreme left and right-wingers, and other ridiculous conspiracies to the algorithms that serve content to us.

Now that ChatGPT is widespread and known to hallucinate, how long until we start taking its responses as the truth, and more conspiracies and ridiculous ideologies stem from it? Right now, people don’t take it too seriously, but as these models become more accurate, we may stop fact-checking out of laziness and publish content based on whatever they tell us.

Conclusion

The way I see people talking about AI online now, it already feels a little like worship: projecting power and human qualities onto a tool. It may get worse in time. People have worshipped idols, pieces of land, trees, homes, and more. Projecting your fears, aspirations, and desires onto a chatty algorithm is far easier, because it can talk back and act somewhat like a human. But we need to understand that just because it acts like a human, that does not make it human; it isn’t sentient. It has no soul, but we do, and if we are not careful, we could find ourselves selling ours to the machine.
