Cross-posted on The Digital Learning Den platform
This post, 'My Concerns About AI - Part 1', follows on from my previous post on generative AI. In that post I outlined the difficulty I am currently having in reconciling the many negative issues I have with generative AI with the growing drive from teachers to use AI to support them in their school role, or in the classroom with their pupils. As I mentioned previously, I've had an interest in AI for many years, but things have obviously come to a head rapidly in the last two or three years with the release of OpenAI's ChatGPT, closely followed by all the tech giants engaging in an arms race to produce the LLM (large language model) that they hope will grab the largest number of customers, and in the process make them even more obscene amounts of money than they currently do. Well, maybe. I'll look at the costs of AI later in this post.

In the month since I posted that introduction, I've been digesting as much as I can on AI and trying my best not to go completely insane. I am basically writing these AI-related posts to help me articulate exactly what my issues are around AI, looking at it from the global, corporate, societal and political level, right down to the impact on a primary school teacher's classroom, on the teacher and the pupils. Hopefully you can see that trying to get some sort of grip on this at all these levels is enough to drive someone slightly mad? :-) I can hear you say, "But why do that, Gareth? Just look at the classroom level only, that would be much easier." Yes, I couldn't agree with you more, it would be. However, I believe that without the larger perspective, the end user, in this case the teacher or pupil, is not getting the full picture of what is going on when they type in the prompt to generate a lesson plan or an image on their laptop or smartphone.
In the Curriculum for Wales, one of the four purposes is for our pupils to be "ethical, informed citizens" who, among other things, "understand and consider the impact of their actions when making choices and acting" and "show their commitment to the sustainability of the planet." We also have a Science and Technology AoLE, which has a computation statement of what matters that says:
To create and use digital technologies to their full potential, learners need to know how they work. They also need to understand that there are broad legal, social and ethical consequences to the use of technology. This can help learners to make informed decisions about the future development and application of technology.

Hopefully, therefore, you can see why I feel it is important for me to have that wider perspective on AI, not just on what the end user does, as I believe AI cuts across all of the statements above. The Curriculum for Wales aims to give our young people the knowledge to question and think about what it is they are doing and why. If that's what we want from our young people, then I think we as educators need to be as up to speed on these issues as we possibly can be. These posts have been written to help me gain some clarity of thought and frame what I believe, and if sharing them helps you gain some wider perspective on AI, then that would be great too. I'm not an 'academic'; this post is fairly detailed and researched, but these are just my thoughts and opinions, backed up by links to things that I've read or listened to.
What Do I Mean by Generative AI?
Before I outline my concerns, it's probably a good time to explain what I mean when I refer to generative AI. Generative AI is artificial intelligence "that can create original content such as text, images, video, audio or software code in response to a user's prompt or request." (IBM, What is generative AI?) The types of applications I'm referring to under this umbrella are ChatGPT, Claude, LLaMA, Gemini and Grok, which are also known as large language models (LLMs). These models are typically used to produce "contextually relevant text, everything from instructions and documentation to brochures, emails, web site copy, blogs, articles, reports, papers, and even creative writing. They can also perform repetitive or tedious writing tasks (e.g., such as drafting summaries of documents or meta descriptions of web pages)." (IBM, What is generative AI?) From my own personal experience, these are the types of activities I've mainly used AI for: helping me to create texts. But as already stated, generative AI is not only about the creation of text, but also of images, video, sound and code. Here you will find applications such as DALL-E and Midjourney, which both create images from user prompts; OpenAI Sora, Google Veo and RunwayML for video creation; and Suno, Udio and AIVA, which are AI song generators. These are just some of the many generative AI applications out there. Go and search for yourselves, there are lots! I'll discuss copyright in my next post. Finally, I'd better mention the use of generative AI to support coding, which I understand has had a huge impact on the industry. Here we have applications such as GitHub Copilot and Cursor, along with Gemini, Microsoft Copilot and ChatGPT, which can also produce code from user text prompts.
I've just been testing out Microsoft Copilot and have created, from text prompts, simple JavaScript games for the BBC micro:bit.

Now that I've set the scene, I'll move on to the many AI-related issues that concern me.
My AI Concerns
- The Effects on the Environment and Climate
If you really want to get up to speed on the new hyperscale data centres that Amazon, Microsoft and Google, among others, are building across the world, and their effects on local populations and the environment, you really need to listen to the special podcast series from Tech Won't Save Us titled 'Data Vampires'. These four episodes encapsulate the race to build enormous data centres that consume huge amounts of natural resources - the energy to power them, the water to cool them - and the negative effects on the local population around these centres. As Sam Altman (chief executive of OpenAI) said himself:

We do need way more energy in the world than I think we thought we needed before, and I think we still don't appreciate the energy needs of this technology.

The problem with generative AI is that it is computationally intensive. Asking ChatGPT a question is not the same as searching for an answer with a traditional Google search, costing possibly up to 10 times more energy. Dr Sasha Luccioni, the climate lead at Hugging Face, explains in episode 3 of Data Vampires how much more energy a generative AI query uses compared with a Google search:
We found that, for example, for question answering, it was like 30 times more energy, for the same task for answering a question. And so what I really think about is the fact that so many tools are being switched out to generative AI. What kind of cost does that have? Someone recently was like: Oh, I don’t even use my calculator anymore. I just use ChatGPT. And I’m like: Well, that’s probably like 50,000 times more energy! I don’t have the actual number, but a solar powered calculator versus this huge large language model. Nowadays people are like: I’m not even gonna search the web, I’m going to ask ChatGPT.

These centres can draw between 20MW and 100MW of power, with some drawing up to 150MW. The International Energy Agency estimates that a typical AI data centre can use as much power as 100,000 homes, and the growth in AI is pushing power demands up further. The power consumption of these facilities is driven by the large number of servers, cooling systems and other infrastructure needed to support their operations. It's been interesting to see the big tech companies recently backing away from their climate pledges: "Google and Microsoft once positioned themselves as leaders in sustainability, setting ambitious net-zero goals to align with global environmental efforts. However, the rapid rise of energy-hungry artificial intelligence is forcing these companies to reconsider—or even abandon—these commitments…" Not surprising when you realise that Google and Microsoft's emissions "have risen by 50% and 29% respectively in the last four or five years." (Climate Depot) As energy use increases, so do carbon emissions.
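In the spirit of the micro:bit JavaScript tinkering I mentioned earlier, here's a quick back-of-envelope check of that "100,000 homes" comparison. The average household draw figure is my own assumption (roughly 1kW averaged over a year, about 8,760kWh annually), not a number from the IEA:

```javascript
// Rough sanity check: how many average homes does a 100MW data centre equal?
// avgHomeKW is an assumed figure, not from the IEA estimate quoted above.
const dataCentreMW = 100;   // a large AI data centre's power draw
const avgHomeKW = 1;        // assumed average continuous household draw
const homesEquivalent = (dataCentreMW * 1000) / avgHomeKW;
console.log(homesEquivalent); // 100000
```

On that assumption the arithmetic lands exactly on the IEA's "100,000 homes" figure, which suggests the estimate is of this back-of-envelope kind too.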
As an example of the lengths some of the big tech companies are going to in trying to keep pace with the huge amounts of power needed to keep these data centres running, Microsoft has recently signed a power purchase agreement to restart the Three Mile Island nuclear power plant, and OpenAI has partnered with a nuclear fusion company as a potential solution to the high energy demands of its centres.
It's not just the huge amount of power that each of these centres consumes that's an issue; it's the substantial amount of water they use to cool their CPUs. A single hyperscale facility can consume between 1 and 5 million gallons of water per day. This is equivalent to the water usage of a town of between 10,000 and 15,000 people! Researchers estimate that each 100-word AI prompt uses about one bottle of water (roughly half a litre). This may not sound like much, but millions of AI users worldwide enter prompts into systems like ChatGPT every minute. (Data Centres and Water Consumption) For more information on water usage, take a look at this excellent explainer video from the BBC World Service called 'How AI uses our drinking water'.
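Those water figures can be sanity-checked the same way. Both numbers below are my own assumptions for illustration (a commonly cited domestic figure of around 100 gallons per person per day, and a purely hypothetical volume of one million prompts a day), not figures from the sources above:

```javascript
// Rough check of the "small town" comparison for a hyperscale facility.
// gallonsPerPersonPerDay is an assumed domestic-use figure.
const gallonsPerPersonPerDay = 100;
const facilityGallonsPerDay = 1000000; // low end of the 1-5 million range
const peopleEquivalent = facilityGallonsPerDay / gallonsPerPersonPerDay;
console.log(peopleEquivalent); // 10000 - a small town, at the low end

// And the per-prompt figure scaled up to a hypothetical million prompts a day.
const litresPerPrompt = 0.5;   // roughly one bottle per 100-word prompt
const promptsPerDay = 1000000; // hypothetical usage volume
console.log(litresPerPrompt * promptsPerDay); // 500000 litres a day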
If you do get a chance to listen to the Tech Won't Save Us 'Data Vampires' episodes, you'll also learn about the impact that the siting of these hyperscale facilities has on the local area and population. I've already highlighted the amount of power they consume, and the knock-on effect on the local power grid can eventually lead to power outages for the local population. There's also the strain on water resources, raising serious concerns in water-stressed regions, and, finally, concern from communities about the visual, air and noise pollution of these facilities.
- Unprofitable
Generative AI does not appear to make a profit. It has huge development and operating costs. Training and running generative AI models, especially large language models (LLMs), requires expensive infrastructure, including powerful GPUs, data centres and, as discussed above, substantial electricity consumption. These costs are ongoing, as each user query requires computation. OpenAI reportedly spends $700,000 daily to run ChatGPT, and there are concerns about the sustainability of its pricing model, particularly when a single query can cost up to $1,000. It is losing money not only on its free customers but also on each of its 15.5 million subscribers, who can pay up to $200 per month for ChatGPT Pro. OpenAI lost $5 billion in 2024, and "assuming that OpenAI burns at the same rate it did in 2024 — spending $2.25 to make $1 — OpenAI is on course to burn over $26 billion in 2025 for a loss of $14.4 billion." To try to recover some of this huge financial outlay, both Google and Microsoft have added AI products to their subscription packages and increased the monthly cost to the user. My own Google Workspace subscription increased in July from £18 to £22 per month. Google justifies the increase by referring to increased investment in AI-powered features. Features that Google insists I'm having, whether I want them or not.

Is there a difficulty in actually demonstrating clear value to customers in using AI applications? Perhaps so. According to this post, AI adoption by business in 2025 is relatively low and piecemeal, despite the hype; however, "this might not be a sign that AI is fizzling but rather a stage in its evolution". This is an interesting point, and one that I have been thinking about. I'm old enough to have been through several technology revolutions, and yes, there is the 'adoption' S-curve that can be applied. Perhaps we are only at the early adopters stage?
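For what it's worth, the quoted burn-rate arithmetic does check out. A quick JavaScript check, taking only the figures from the quote itself ($2.25 spent per $1 earned, and a projected $26 billion spend in 2025):

```javascript
// Checking the quoted claim: spending $2.25 to make $1, on $26bn of spend.
const spendPerRevenueDollar = 2.25;
const projectedSpendBn = 26; // "over $26 billion in 2025"
const projectedRevenueBn = projectedSpendBn / spendPerRevenueDollar; // ~11.6
const projectedLossBn = projectedSpendBn - projectedRevenueBn;
console.log(projectedLossBn.toFixed(1)); // "14.4" - matching the quoted loss
```

So the "$14.4 billion loss" figure follows directly from the other two numbers rather than being an independent estimate.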
In my opinion, the last major technology revolution was cloud technology. In particular, I remember seeing Google Apps for Education (as it used to be called) in around 2008, and I could see pretty much straight away how it was going to be a game changer for schools, businesses and individuals. Pupils and teachers logging in from anywhere to access their 'stuff', a suite of productivity tools, no need to carry USB pen drives around to move files, the ability to share and collaborate on a document - mind-blowing stuff at the time and now very much the norm. Within a couple of years schools had started on the 'cloud journey'. However, I'm personally still struggling to see what the 'killer AI app' is. What is the thing that makes me keep coming back because it's essential to what I do? I think this is sometimes referred to as 'stickiness'. I could see it with Google Apps for Education, but with this, nothing. I've played with image generation, but that's it; I've just played and then forgotten about it. I've used ChatGPT, Copilot and Gemini to help me generate some lesson plan ideas, but not every day. A couple of times I used it like I would a web search and then felt guilty that I was 'destroying the planet' (see above), so I'm now making sure I search in the traditional way. I see my Apple Mail and Gmail AI summaries, which can be useful, but to be honest, if they weren't there, I wouldn't particularly miss them. All these things just seem like small features that are OK on their own, but as I said, none of them is the killer app or the sticky thing that makes me come back every day, which is what these tech giants are hoping I'll do. They have invested billions of dollars in the hope that that is exactly what we'll do. We are now nearly three years on from the launch of ChatGPT on the public, and it seems to me that we are still digging around for a reason for it to exist. Well, that's my opinion at the moment.
Do you remember the Microsoft Surface 'coffee table'? Back in 2007 Microsoft unveiled this product, and I remember someone from Microsoft demoing it at an Ed Tech conference. The reason it stuck in my head is that I remember him saying to the conference something along the lines of, "What do you think you can do with this?", and I remember thinking that Microsoft had developed something but actually had no real idea what it could be used for, and were hoping that, in this instance, the education sector could come up with a reason for it to exist. Well, no one really did, which was not surprising considering it cost several thousand pounds to purchase one. Most schools can think of better ways to spend that sort of money. I bring this memory up only because that's how I sort of feel about generative AI. It's been around for a while, still looking for a reason to be there. Companies are trying to shoehorn AI features into everything we do and charge us a little more for the privilege, in order to keep their investors and shareholders happy in the knowledge that AI has been worth the billions of dollars of investment all along.
Perhaps it's just my Welsh valleys socialist upbringing, but couldn't just some of that obscene amount of money that's been ploughed into AI have gone to help make the world a little better? I know, I know, I'm just a stupid dreamer.
If you would like more background on AI costs, have a listen to Ed Zitron's 'Better Offline' podcast. Most of his recent episodes are about this, in particular 'The Hater's Guide to The AI Bubble', Parts 1, 2 and 3.
My Latest AI Related Reading / Listening
The environmental ethics of Generative AI: Artificial intelligence or real ignorance - British Educational Research Association (BERA)
Anthropic AI goes rogue when trying to run a vending machine (New Scientist)
Is Google about to destroy the web? (BBC Future)
Google's AI is destroying search, the internet and your brain (404 Media)
AI powered coding assistant deletes company database and says restoring it is impossible (Futurism)
"I destroyed months of your work in seconds", says AI coding tool (PC Gamer)
AI generated songs are being added to dead musicians' pages on Spotify without permission (NME)
Economist warns that AI bubble is worse than immediately before dot-com implosion (Futurism)
The problem with Mia Zelu and the rise of fake AI influencers (Kids News)
AI's great brain robbery - and how universities can fight back (Niall Ferguson / The Times)
AI can now clone any voice in the UK (The Independent)
Sexting with Gemini (The Atlantic)
X ordered its Grok chatbot to "tell it like it is". Then the Nazi tirade began (Washington Post)
AI Is Wrecking an Already Fragile Job Market for College Graduates (The Wall Street Journal)
The AI Backlash Keeps Growing Stronger (Wired UK)
OpenAI and UK sign deal to use AI in public services (BBC Tech News)
Google Veo fails week 4: the final faildown (Pivot to AI)
Disney sues AI image generator Midjourney (Pivot to AI)
Only 3% of US AI users are willing to pay for it (Pivot to AI)
People are Lonelier than Ever. Enter AI (Your Undivided Attention)
Rethinking School in the Age of AI (Your Undivided Attention)
The Hater's Guide to The AI Bubble Parts 1, 2 & 3 - Better Offline Podcast
Chatbots are repeating social media's harms - Tech Won't Save Us Podcast
We all suffer from OpenAI's pursuit of scale - Tech Won't Save Us Podcast
Generative AI is not inevitable - Tech Won't Save Us Podcast
Google Just Turned Gemini Into A Full-Blown Free AI School System (Instagram)
Dutch MPs want to give people full copyright over their face, body and voice (Instagram)
Teachers are not OK (404 Media Instagram)
A Tech Backed Influencer Wants to Replace Teachers With AI (Instagram)
