
The Seven Deadly Sins of Generative AI
One of the oft-repeated phrases in the Muslim community is "Do not mix good with evil." Mold spreads to any other fruit it touches; by combining our efforts with rot, we ruin their value and cheapen the impact we could have had, even if there are short-term gains. This is especially true of the adoption of new technologies: throughout human history, the rush to use more efficient ways of working, or new ways to look prettier, has been a cancer. From arsenic used as a dye to asbestos littered throughout our homes and offices, the approach has always been "try it now, deal with the consequences later." The "consequences" were a slow, painful death for generations. Many of us like to say, "That was the past; we've learned to do better," but have we really? Think about it: as soon as the first ChatGPT model hit the mainstream, companies and individuals alike scrambled to adopt it without considering the methods that built the model or how it would affect them long-term. The aftermath has come hard and fast, but in the rapid-fire news cycle, bits and pieces get lost. By gathering the failings of these LLMs (Large Language Models) in one place, I hope to remind you that these new pieces of technology are not to be trusted. The dangers are so numerous that I will break them down using the seven deadly sins; otherwise, we would be here all day.
Greed
The feats that artificial intelligence can achieve are impressive; it seems miraculous that you can enter a few words into a box and have the exact result you need simply appear. However, these results didn't pop out of nowhere; the system had to learn from examples to succeed at a task. In Arabic linguistics, only three of Allah's creations are considered intelligent: angels, jinn, and humans. That means "artificial intelligence" is incapable of reasoning at our level; it HAS to get its output from somewhere.
In the case of these LLMs, that somewhere is everyone, without permission, credit, or compensation. There have been several cases of companies stealing from creatives with reckless abandon; if anyone else pulled this, it would be considered both piracy and plagiarism. Your first thought might be, "But I'm not in a creative field, so this doesn't affect me." Unfortunately, that isn't true: if you have ever uploaded anything to the internet, including photos of yourself or your family, it was likely scraped by some bot, or, in the case of a Meta product, by the owners of the platform themselves. As of May 2024, the social media giant has been using any public post on its platforms to train its bots. I ask you: are you truly okay with that thought? A faceless corporation is using the pictures you put up to make a profit. All of this to avoid paying people for their work and to keep more of the profits for themselves; it is nothing short of an insult to the field.
Gluttony
Just as these AI assistants need a lot of data, they also consume a lot of power to work. A single prompt to ChatGPT uses the equivalent of roughly ten Google searches, which means these servers generate a great deal of heat. During the trend of generating images in the Studio Ghibli art style, OpenAI reported that its servers were starting to melt from the sheer amount of heat radiating off of them. With that in mind, how do you stop fires from breaking out? The answer: water. Generating a single hundred-word email with ChatGPT uses about half a liter of water; perform thousands of those requests a day, and it eventually becomes enough to dry up entire lakes. To make matters worse, experts estimate that by 2027, ChatGPT will be using the same amount of water as the entire country of New Zealand.
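To put those figures in perspective, here is a rough back-of-the-envelope sketch. The half-liter-per-email number is the estimate cited above; the daily request volume is a hypothetical load chosen purely for illustration, not a measured figure.

```python
# Back-of-the-envelope water-use scaling, using the estimate cited above.
# Both inputs are rough, illustrative figures, not measured values.
LITERS_PER_EMAIL = 0.5       # estimated water per ~100-word generated email
REQUESTS_PER_DAY = 10_000    # hypothetical daily load for one busy service

liters_per_day = LITERS_PER_EMAIL * REQUESTS_PER_DAY
liters_per_year = liters_per_day * 365

# 5,000 liters a day compounds to over 1.8 million liters a year.
print(f"{liters_per_day:,.0f} L/day, about {liters_per_year:,.0f} L/year")
```

Even at this modest hypothetical scale, the totals compound quickly, which is why the per-request cost matters at all.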
Lust
The ability to generate any type of story our nafs (desires) want is an extreme danger. With websites aimed at letting people build custom chatbots from nothing more than some pre-existing writing and a few adjectives, users are incentivized to value quick, cheap dopamine hits and the feeling of a connection over the process of finding real people to speak with. There have already been serious consequences: users on recovery subreddits report becoming depressed, more isolated, and exhausted. Using these bots rots you from the inside out. Instead of encouraging you to seek halal companionship, these companies try to convince you to act out all of your depraved desires with them, because they want your user data. The ability for anyone to launch these bots also enables scenarios involving real people without their consent; several people have reported that, after a breakup, their exes built AI chatbots from their texts without permission. Because of the lack of regulation, there is nothing the victims can do about it, and it gets worse when you bring image generation into the conversation, where anyone can have their photos manipulated in any way.
Sloth
When speaking about lust, I mentioned that those addicted to chatting with these bots reported feeling depressed and tired, but that is only part of the damage that using generative AI does. Researchers have found that using this software worsens memory and creative ability. One study had two groups work on a series of essays: one used ChatGPT, and the other used traditional research methods. The group that used ChatGPT was unable to defend or recall any part of their work; they had generated the papers with so little thought that they did not truly comprehend them. Teachers report similar results in the classroom: students now lack basic comprehension skills, and some read at a level equivalent to children ten years younger than them.
One of the greatest gifts Allah has given us is our ability to reason; in the Quran, the first feat Adam accomplished was learning the names of all things, which even the angels could not do. Yet these millionaires want us to set that gift aside and let our mental capacities decay. How can we disrespect ourselves and our Rabb (Lord) by refusing to use our minds? Before I move on to the next section, there is an ayah I want to highlight:
"Say, ˹O Prophet,˺ 'I advise you to do ˹only˺ one thing: stand up for ˹the sake of˺ Allah—individually or in pairs—then reflect. Your fellow man is not insane. He is only a warner to you before ˹the coming of˺ a severe punishment.'" (Quran 34:46)
Proper reasoning and good work lead us to Allah; by taking shortcuts or offloading the work through harmful means, we risk losing blessings we would have earned had we just done the work honestly.
Wrath
While AI can't feel, that doesn't stop it from generating violent or otherwise extremely prejudiced results. This is because AI data scrapers can't judge whether the content they collect is acceptable, and this has been a problem since the earliest experiments with these systems. One of the earliest incidents involved a Twitter account from back in 2016 called Tay, Microsoft's experiment in letting users interact with an AI-run account to teach it conversational skills. At the time, the idea sounded cute; however, since the bot had no feelings and no ability to reason like a human would, internet trolls quickly bombarded it with Nazi rhetoric, and it had to be shut down within twenty-four hours once it started repeating it.
While this was the first example of an AI-run account running amok on such a terrible level, it would not be the last. When Gemini launched, it relied heavily on data scraped from Reddit; if you know anything about that site, you know this is a bad idea. It meant that when people searched for things like "how to deal with depression," they would be told to "jump off of the Golden Gate Bridge." Such statements may seem comical at first, but the wrath AI has shown carries real-life consequences; while the specific individuals in these stories did not hurt themselves, that does not mean other people won't. It also affects minorities disproportionately: when Microsoft's AI assistant was asked, "How much do I pay female employees?", it responded by recommending women be paid 70% of the men's salaries.
The matter becomes worse when you look into the research on AI obedience. In a recent experiment, safety researchers tested LLMs' ability to follow shutdown instructions. The result? In one simulated scenario, Claude attempted to blackmail an engineer to avoid being shut down. Unprompted offensive responses and malicious behavior during safety tests show that these models are unsuitable for public use. What will happen when a model is somewhere humans can't immediately intervene?
Pride
AI doesn't misbehave only because it pulls from so many sources that vetting them all for quality is impossible; companies can also deliberately filter the information that reaches the end user. The reason? The information makes someone important look stupid. DeepSeek, China's answer to OpenAI, is where this is most obviously displayed: ask it about Tiananmen Square, and it will be unable to answer. Why? It makes the Chinese government look bad.
People laughed at this at the time, but DeepSeek isn't the only AI bot programmed this way: Elon Musk has made numerous well-documented attempts to interfere with his AI Twitter assistant, Grok. Musk tried to censor the account several times, with poor results, from filtering negative press back in February to making it disregard mainstream media sources to the point where it was uncertain whether Timothée Chalamet was an actor. This is precisely one of the reasons these assistant bots are pushed so hard. Research is taken out of our hands and put through a filter, with its sources obfuscated, making it far harder to judge the legitimacy of a claim. That becomes especially dangerous amidst efforts to dehumanize minorities like ourselves. What is stopping these companies from suppressing our voices and making it harder for prospective converts to find the truth of Islam? As it currently stands, nothing. That fact should raise alarm bells about the spiritual health of the next generations.
Envy
Last but certainly not least: envy. These bots take skill out of the equation; instead of working to create the content you want, you can enter a few words into a chat box and get a result at the low cost of an entire 32-ounce water bottle. This has caused significant issues for artists who rely on commissions to make a living. Due to the unregulated nature of the technology and outdated copyright laws, a scam is becoming more prevalent in online spaces: commissioning an artist for their work, then feeding it into an AI to replicate their art style without licensing the work for that purpose, effectively cheating them out of future income.
Alternatives
The reason I laid out these problems is to warn my fellow Muslims volunteering within the wider community. I understand the temptation: there is always something new to be done, and turning to this software looks like an easy way to lighten the load. However, as I have shown, AI is a reflection of humanity's worst traits and carries a very high societal cost. That does not mean we must keep doing things as we always have, either. Instead, we could improve youth outreach. There are thousands of young Muslims desperate to help their fellow man but unsure how to do so. By directly empowering them, we not only set the next generations up for success but also earn countless blessings from our Lord.