The Real Danger of AI that No One’s Talking About


AI. With the massive leaps and bounds made in artificial intelligence over the past few months, it seems to be the topic on everybody’s mind. Whether that topic is viewed in a positive or negative light, though, is in the eye of the beholder. Some claim that AI will improve our lives so radically that its only rival, in terms of global impact, would be the Internet itself. With so many of society’s jobs under computer control, humans would be free to explore new pursuits in science and technology, living their lives more fully than has ever been possible before.

Others, however, are less optimistic about the imminent future. It’s no surprise why: movies like The Terminator have been around for nearly four decades now. In countless works of media, people have expressed the fear that AI, given enough time, will become sentient and decide that humanity is obsolete. While there’s no guarantee this won’t happen sometime in the future, there’s a more pressing threat to human safety from the realm of AI that’s manifesting right now: large corporations.

While big corporations have been squeezing people out of their money for some time, the introduction of AI adds an entirely new element to this conversation. Now it’s not just money but privacy that the average person stands to lose. Think about it: every major generative AI is owned by a tech giant, whether it be Google, Microsoft, or Snapchat. These big tech companies already have a habit of analyzing people’s personal information and using it to personalize website experiences and ad placements, often without the consent of their targets.

AI significantly exacerbates this problem, removing nearly every practical limit on privacy infringement. With AI collating copious amounts of personal data and analyzing it faster than ever before, companies can take their agendas to the extreme. One obvious example is advertising, which will quickly become personalized to the point where ads seem like advice from a friend. There is, however, a less-discussed danger that may prove more than a nuisance: misinformation.

Humans have long prided themselves on being able to tell fact from fiction, but that confidence is quickly eroding as AI produces realistic images and information at a rate and quality once thought impossible. Once AI-generated content fully crosses the bounds of human perception, whoever controls it will be able to freely manipulate massive numbers of people through the internet’s powers of dissemination. If tech corporations are still at the helm, then their rampant capitalism and aggressive marketing will truly know no bounds. AI will be able to analyze people’s reactions to various stimuli and aim whatever content it generates at each recipient’s most gullible instincts.

If this is to be our future, then one thing is abundantly clear: fact-checking, now more than ever, will be vital to individual consumers’ ability to make accurate and informed decisions about their lives. If we don’t verify what we read, the consequences will be dire, and this is already playing out in some fields.

A few weeks ago, a lawyer was drafting a filing for a court case that relied on a number of straightforward legal principles. When one of his aides used ChatGPT to help write the document, it seemed innocuous, until the opposing legal team reviewed it and realized that all of the cases it cited were fake, fabricated by ChatGPT’s incomplete knowledge of legal history (and its odd defensiveness when questioned about the cases it had invented). Consider the implications: if one accidental mistake can determine the outcome of a trial, imagine what a barrage of intentional, targeted misinformation can do to a society growing ever more dependent on AI-driven content. Though AI sentience is far away, the weaponization of AI by big tech corporations is a very real threat, with immediate and profound consequences for people around the world if we do not adequately prepare for it.