Monday, 10 November 2025

AI - Reducing Workload and Saving Time?


Cross posted from The Digital Learning Den.

A few weeks ago, Estyn, the school inspectorate in Wales, published a thematic report on AI, in response to a request from the Welsh Government, titled "A New Era: How Artificial Intelligence (AI) is Supporting Teaching and Learning". The report explored how artificial intelligence (AI), and generative AI (GenAI) in particular, is currently being implemented and its emerging impact in schools and pupil referral units (PRUs) across Wales. Three recommendations came out of this report, which roughly boiled down to: a need to develop national guidance on the use of AI in education, high-quality professional learning on using AI, and lastly an update to the DCF to include AI literacy. Soon after, the Welsh Government responded to this report, welcoming it and outlining what it is currently doing, and what it will do, to address the recommendations.

It has been interesting to read Estyn's report, which captures an early snapshot of where we currently are in Wales with regards to how schools and teachers are using generative AI: for example, what generative AI tools they are using, what they are using them for, what concerns they might have and what they believe are the benefits that these tools bring to teaching and learning, planning and assessment, and school leadership and management. In this post I'm going to concentrate on one recurring comment in the Estyn report from schools and teachers: that generative AI reduces workload and saves time. In fact, in the Executive Summary, Estyn say that teachers across all sectors reported "substantial workload reductions". This sentiment has also been expressed by the OECD, who say that "focusing on Gen AI can liberate teachers from routine administrative and instructional tasks", and by the UK Government Education Minister, Stephen Morgan, who said that harnessing this tech will ease "the pressures and workload burdens we know are facing the profession" and will free "up time, allowing them to focus on face to face teaching."

Quite possibly because of the websites I'm viewing, the stream of posts I'm being fed on my business Facebook account is full of adverts for generative AI related tools, all of which are aimed at teachers. It's certainly not a coincidence that most of these tools are promoted to teachers as helping them to reduce their administrative workload and therefore saving them time. Their makers certainly know which 'buttons to press' when it comes to selling something to teachers. Last week a particular advert caught my eye. It was from a very well known company, one that was actually mentioned several times in teacher responses in the Estyn report. This is what the advert said:
Calling all teachers! This could be the school year for you...Take up a new sport - Learn to cook delicious new dishes - Spend more time in the great outdoors - See more of your family and friends. Our 100+ AI tools save our teachers over 10+ hours every week! What will you do with your free time?
Now, I've got no reason not to believe their statement that 10+ hours a week are being saved; perhaps they have carried out some research and this is what they found. I can't verify that figure from the advert. If we take a traditional working day as 8 hours, then the claim of saving 10+ hours means that I can save over a day of work per week. Sounds amazing. However, what slightly irritated me was the idea that we would spend the freed-up time on all those activities, no matter how worthwhile they are. From my own experience, saving time on my work tasks hardly ever results in 'free time'; I just fill it with another work task, adding to my workload.

Technology has regularly promised that it will take the drudgery out of your life, saving you time to do enjoyable things, but so often it fails to deliver. Forbes published an article in Nov 2024 titled, “AI’s False Time-Saving Promise. Or Why AI Is Like The Vacuum Cleaner”. In it, Martin Gutmann refers to the work of historian Ruth Schwartz Cowan, who pointed out in her book, More Work for Mother, that the vacuum cleaner did not reduce the labour required around the home. Rather, it shifted norms and raised expectations. Homes were now expected to be cleaned more frequently and to higher standards. The promised reduction in work was an illusion; the work itself was merely reshaped and intensified. The US Senate in 1965 predicted that by the end of the 20th century the average US citizen would be working a 14-hour week. Quite obviously, things did not work out this way.

Martin goes on to write that,
Generative AI is being introduced with similar utopian promises. It is lauded for its ability to automate routine tasks, create efficiencies, and allow human workers to focus on tasks that are more meaningful or more creative. The narrative is that it will free us from the tedious tasks that burden us and provide time to innovate, connect, or simply rest. But will it?
Before focusing specifically on generative AI, let’s look at some of the claims of the digital office revolution: the ‘paperless office’; instantaneous communications; the automating of repetitive tasks; all supposedly allowing us to get away from our workplace sooner. But it hasn’t worked out that way. “As the pace of communication accelerated, expectations changed. Emails begot instant responses. Reports that once took weeks became deliverable in days or even hours. “Office productivity” became synonymous with more output, more emails, and more deadlines.” It could be argued that the introduction of cloud-based productivity tools such as Google Workspace or Microsoft Office 365, which are now ubiquitous in businesses and quite clearly being used by staff in virtually every school in Wales, has allowed teachers to be in constant communication with one another, with collaborative access to all their documents. Arguably, it has now become more difficult than ever to separate our working life from our home life. Our work is with us constantly. "The computer and email didn't free workers; it chained them to their tasks in new and less visible ways.”

It’s a personal opinion, but email, and increasingly messaging groups, are the bane of most working people’s lives. Even if your workplace has a policy that staff do not need to answer communications outside of working hours, the very fact that an email or message has arrived outside of work hours can place an element of guilt upon the individual: they haven't responded to it, and they are now thinking about its content. As for the paperless office, we just ended up creating digital files instead, which almost anyone can produce with ease. The sheer volume of emails, instant messages, PDFs, word-processed documents and collaborative online documents has instead created ‘digital clutter’, which often feels just as burdensome as all the paper we had previously. A study published in 2024 recommended that "if new technology is being adopted to help teachers do their jobs, then school leaders need to make sure it will not add extra work for them", and to be aware that if a school implements a new digital technology then they "should make sure that they are streamlining the job of being a teacher by offsetting other tasks, and not simply adding more work to their load." It adds that if the adoption of new technology "adds to or increases teachers’ workloads, then adding technology increases the likelihood that a teacher will burn out."

So much for reducing workload and saving time; increasingly, digital technology appears to have created more workload and invaded our home life, our 'free time'. Going back to what Ruth Cowan wrote, the norms were certainly shifted and expectations were raised. Work tasks that once stayed between the walls of the office or school building are now accessible anywhere, and the expectation, whether explicitly stated or not, is that we are always available.
In short, digital technologies are often a source of longer working hours, role expansion, increased non-teaching and administrative duties, and increased accountability – adding to the increased demands now being placed on teachers. Digital technologies and the futures of education - towards 'non-stupid' optimism (2021)
Let's now explore generative AI. I'll outline some of the issues, as I see them, around this idea that generative AI will reduce workload and save us time. According to the Estyn report, generative AI is being used in a number of ways across the school, supporting both administrative tasks and work in the classroom: for example, streamlining planning, report writing and creating a variety of resources for the classroom. To generate a lesson plan, for example, a teacher needs to enter a prompt into an AI 'chatbot', instructing it what to generate. According to the report, teachers are using a variety of tools to do this, including Microsoft Copilot and OpenAI's ChatGPT, among several others. In the lesson plan example, the chatbot then produces a lesson plan based on the prompt you have given. If you have ever been through this process yourself, you will know that what the chatbot produces first time is very rarely the finished product. Your prompt may need refining several times before you get an output you are happy with.
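As an aside for anyone curious what that prompt-and-refine loop looks like under the bonnet, here is a minimal sketch using the OpenAI Python client. The model name, prompts and helper function are illustrative assumptions of mine, not anything recommended in the Estyn report; teachers will normally work through a chat interface rather than code, but the cycle is the same.

    # A minimal sketch, assuming the OpenAI Python client (pip install openai)
    # and an OPENAI_API_KEY set in the environment. Model name and prompts
    # are illustrative placeholders.
    from openai import OpenAI

    client = OpenAI()

    def draft_lesson_plan(prompt: str) -> str:
        """Send one prompt to the chatbot and return its draft lesson plan."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # hypothetical choice of model
            messages=[
                {"role": "system",
                 "content": "You are helping a primary teacher in Wales plan a lesson."},
                {"role": "user", "content": prompt},
            ],
        )
        return response.choices[0].message.content

    # First attempt - rarely the finished product.
    draft = draft_lesson_plan(
        "Write a Year 4 science lesson plan about the water cycle."
    )

    # The 'art of prompting': refine and resend until the output is usable.
    refined = draft_lesson_plan(
        "Rewrite this plan for mixed-ability Year 4 learners, add a plenary, "
        "and keep it to one page:\n\n" + draft
    )
    print(refined)

Even in this stripped-down form, it is still the teacher who decides what to ask for, judges the draft and decides what to refine, and the time 'saved' depends entirely on how many passes that takes.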
Another key challenge related to the art of prompting; over 40 respondents mentioned that knowing how to frame effective prompts was a barrier. Respondents described having to spend time refining their questions or commands to get the desired output, which could be off-putting for new users. Estyn Thematic Report, 2025
From my own experience, this text then needs further editing to make it relevant and appropriate for you. In a research study, Neil Selwyn et al. (2025) report that the text produced by generative AI "was described as involving relatively substantial amounts of editing, reorganising, rewriting and in some instances completely reworking what Gen AI produced." Sadly, a recent study from Finland found that most of its participants "relied on single prompts and trusted AI answers without reflection." This is described as cognitive offloading, "where users trust the system’s output without reflection or double-checking."
The data revealed that most users rarely prompted ChatGPT more than once per question. Often, they simply copied the question, put it in the AI system, and were happy with the AI’s solution without checking or second-guessing. https://neurosciencenews.com/ai-dunning-kruger-trap-29869/
Below are some of my thoughts on the possible reasons why the lesson plan output example could need further editing:

1. Curriculum context - from my own experience, and something also reported in the Neil Selwyn et al. study, generative AI finds it very difficult to create a lesson plan, for example, that is relevant to your national context or curriculum. Let's look at this from a Curriculum for Wales perspective. Depending on what your school requires, you may need to include in your lesson plan an objective or objectives that align with a relevant description of learning, which comes from a particular area or areas of learning and experience and for the correct progression step. You may also need to include the cross-curricular skills you are focusing on (literacy / numeracy / digital competence) and possibly which part of the four purposes this lesson helps the learner develop. I recently created some lesson plans and, to help me, I fed the application a lesson plan example of how I wanted it to look and even the literacy, numeracy and DCF documents. It confidently added these into what it produced and almost fooled me into thinking it had produced a lesson plan which included accurate cross-curricular DCF statements, until I looked at them closely. At first glance they looked good, but on closer inspection every single one of them was made up. They were not from the framework at all. I had to go back to the DCF document and insert the correct statements that were relevant to this lesson. There is also the Welsh context that needs to be considered. An interesting point in the Neil Selwyn et al. study was that the Swedish teachers involved bemoaned "the preponderance of English language sources and US perspectives." It would be interesting to explore this further with teachers in Welsh language schools and whether this is something that they have encountered. Another point to think about here is whether the values and perspectives of the US are aligned with those of our own country. Some teachers in the Estyn report were also concerned about similar issues and mentioned "Americanised spellings and examples." I also wonder how many AI generated lessons will include that authentic sense of 'cynefin' (the place where we feel we belong, where the people and the landscape around us are familiar, and the sights and sounds are reassuringly recognisable)? Hopefully you can see the difficulty your AI chatbot might have in trying to get all these aspects aligned in a lesson plan without much teacher intervention.

2. It Doesn't Know My Students - generative AI doesn't know your class. If you are a primary school teacher, then you will generally have the same class for a year. You end up knowing the strengths and weaknesses of each child, the partners and groups that they work well with and don't work well with; you know the sorts of lessons and activities that they like doing and what they don't like doing; you know how they've progressed in each part of the curriculum and where they need to go next. Your generative AI chatbot knows none of this. Neil Selwyn et al. refer to this in a quote from John Haugeland: "The trouble with Artificial Intelligence is that computers don't give a damn." What it is liable to produce can be very generic and not in any way tailored to your class. You will need to use your professional knowledge and skills to rework whatever your AI chatbot has produced so that it is relevant for your setting. Neil Selwyn et al. write that teachers in the study felt that "Gen AI 'doesn't know my students'", which "was a common justification for teachers deciding to stop prompting and instead take responsibility for the authoring of output." A similar finding can be found in the Estyn report. When asked why they had not yet used generative AI in their role, some teachers cited a "preference for their own methods or scepticism about the relevance of AI to certain aspects of teaching, particularly where relationships and deep understanding of pupils are paramount."

3. It Doesn't Know Me - teachers teach in different ways. We have our strengths and we have our weaknesses. There's a good chance that a lesson plan that was followed successfully by one teacher might be unsuccessful for another who uses exactly the same plan. I would argue that it is rarely the case that a teacher can take an 'off the shelf' lesson plan and follow it word for word. I'm sure that I'm not alone in 'cannibalising' this type of lesson plan, amending it and making it work for my style of teaching and for my class. Therefore anything created by generative AI will need amending to make sure it works for you.

While I agree that generative AI tools can create a lesson plan for teachers, and that this could appear to save them time, I would argue that the lesson plan is never the finished article and will need much amending, as I outlined above, and this takes time. It will need to reference the relevant parts of the curriculum, have the correct cross-curricular skills and purposes added, and be written in a manner that is suitable for you and your class. Anecdotally, I am also starting to hear of AI lesson plans being generated (and delivered in class) that are way beyond what is suitable for a particular year group and also not relevant to the Curriculum for Wales. I believe that we have to be very careful here as a teaching profession. Teachers are the professionals, and generative AI is just a tool that can help you to get started if you feel you are 'stuck' on how to approach a lesson or a task. If we don't use our professional knowledge, skills and understanding to amend what these tools spit out, then I fear we will be undermining our profession and possibly implying that anyone, or anything, can produce a lesson. As this report states, "the role of digital technology in diminishing teachers’ professional autonomy and expertise remains a key concern." I am particularly concerned about newly qualified staff who create generative AI documents, as they may not have the knowledge or experience to amend what has been created to suit the curriculum, their class or themselves. The Finnish study mentioned previously suggests that its findings "adds to a rapidly growing volume of research indicating that blindly trusting AI output comes with risks like ‘dumbing down’ people’s ability to source reliable information and even workforce de-skilling." As a teacher, you bring an expertise to that process that someone who isn't a teacher cannot. Generative AI is not a teacher. This is a concern recognised by some of the teachers in the Estyn report:
Teachers may become dependent on AI for planning or resource creation, bypassing professional judgement and reflection.
So, did using generative AI really save us time? After several prompting attempts (or possibly only one?) the chatbot might have generated something that looks like a lesson, but whatever has been produced will definitely need amending, using our professional expertise, and that takes time. Let's look at a couple of other things that we need to think about when we believe that time is being saved or workload is being reduced.

4. False information / 'confidently' faking the answers - also known as AI "hallucinations". It is a pretty common feature of generative AI to produce false information. I have recently posted on social media about an AI hallucination that Google AI Overview generated about me! Research conducted by OpenAI found that its "latest and most powerful reasoning models, o3 and o4-mini, hallucinated 33% and 48% of the time", which should obviously raise concerns about the accuracy and reliability of the outputs from generative AI chatbots. There have been several reports where companies have used generative AI to help them prepare legal documents for court, and even reports for governments, that contain AI-hallucinated quotes and references to non-existent reports or criminal case citations. "When lawyers are caught using ChatGPT or any generative AI tool to create citations without checking them, that's incompetence, just pure and simple." These large businesses were basically using generative AI to 'save time', but got caught out as they didn't look closely at what was produced for reliability and accuracy. Estyn also highlighted teachers' concerns in this area,
Over 70 responses highlighted that AI-generated content often contained inaccuracies, required proofreading, or presented inappropriate tone or complexity, particularly for younger pupils.
If, in a school context, we are using generative AI to help us write up policy or curriculum strategy documents, or communications to outside agencies, are they accurate? No one surely wants to be accused of being 'incompetent'. Do these documents contain factual information that can be verified, or has your chatbot generated text that is so confidently written that you believe it and therefore overlook the problems? This was the issue I mentioned above with regards to the DCF statements that were added to my lesson plan. At first glance they looked correct, but after looking more closely they were completely made up. I used my professional knowledge and understanding of the DCF and could spot the problems. But what would happen if you don't have that depth of knowledge in what it is you are using AI to generate? Would you be able to spot the confident 'hallucinations', or just miss the problems? Therefore someone, whether it is the person who prompted it (I don't believe the word 'authored' would be correct here) or management, has to spend time going back through whatever has been created, double-checking for hallucinations and making any other amendments.
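That double-checking can be made slightly more systematic. Below is a minimal sketch of the idea: a generated statement is only trusted if it can be matched, word for word, against the published framework. The lists here are placeholders I have made up for illustration (apart from the PS3 plagiarism statement quoted later in this post); in practice you would paste in the statements from the official DCF document.

    # A rough sketch of checking AI-generated framework statements against the
    # official wording. The sets below are illustrative placeholders, not the
    # real DCF; paste in the published statements for the relevant progression step.
    official_dcf_statements = {
        "I can understand that copying the work of others and presenting it as my own is plagiarism.",
        # ...the rest of the relevant progression step's statements...
    }

    generated_statements = [
        "I can understand that copying the work of others and presenting it as my own is plagiarism.",
        "I can evaluate watermarking strategies for protecting my digital creations.",  # plausible-sounding, but invented
    ]

    def check_statements(generated, official):
        """Flag any generated statement that is not verbatim from the framework."""
        for statement in generated:
            if statement in official:
                print(f"OK (in framework): {statement}")
            else:
                print(f"NOT in framework - check by hand: {statement}")

    check_statements(generated_statements, official_dcf_statements)

Exact matching only catches statements that are not word for word from the framework; judging whether a correctly quoted statement actually fits the lesson still needs the teacher's professional knowledge, which is rather the point.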

Which leads me very neatly onto the next aspect to think about with regards to saving time and workload.

5. Workslop - a relatively new term that's appeared in many articles, but one I think is very apt. You may have already come across the term 'AI slop', which is now given to the vast amounts of generative AI content, in particular articles, photos and video, that is increasingly dominating our social media feeds. Workslop is the term for "AI-generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task.” In my own words, what has been generated with AI might look impressive but is pretty much pointless. It is very easy to generate text with AI. With the correct prompting and re-prompting, generative AI will confidently spew out as much as you want for a particular task. From my own experience, possibly too much is generated, often filling several pages; in some cases, I would argue, far more than you would ever create if generative AI wasn't involved in the process. But superficially it does look impressive. However, we now must go through the process of editing, amending and spotting any hallucinations. That's going to take up your time, for all the reasons discussed previously. We could be subtly shifting the norms towards larger texts and raising the expectation among work colleagues that this is 'what a good one looks like'. As a staff there is the potential to start making larger, text and image heavy documents, 'just because we can'. We can also create them faster. All of which will mean someone, probably on the school management team, having to sit and amend lots of large AI-generated documents. A Fortune article recently wrote that researchers found that extra work was created for workers receiving ‘workslop’, who found themselves "redoing reports clearly written by AI, or holding a meeting to discuss a mystifying memo. It also caused employees to question their peers’ intelligence and the value of AI technology.” Questioning their peers' intelligence is an interesting statement. I've recently come across examples of this myself, where people have questioned work colleagues' 'intelligence' with regards to the extent that they now rely on generative AI to do much of their work. Perhaps too much cognitive offloading going on? Much has been written about this online, and one study found "a significant negative correlation between frequent AI tool usage and critical thinking abilities, mediated by increased cognitive offloading. Younger participants exhibited higher dependence on AI tools and lower critical thinking scores compared to older participants." Think back to my worries about newly qualified staff.

6. Raised Expectations - generative AI has already had a massive impact on many workplaces. I think it is fair to say that many business owners in the private sector see the introduction of generative AI in the workplace as an opportunity to cut costs. I wrote about the large job losses already experienced in several sectors, including the big tech companies, in my recent My Concerns About AI post. An article from Futurism, for example, reported how one CEO brags about how extremely excited he gets about firing people and replacing them with AI! Recently we've also had Meta's Metaverse chief urging employees to adopt AI across every workflow and to "go 5x faster". The article goes on to say that the message from Meta's chief "highlights a fear that workers have had for quite some time: That bosses are not just expecting to replace workers with AI, they are expecting those who remain to use AI to become far more efficient." An article from Business Insider explains that "tech giants, from Meta to Amazon, are using technology and, at times, AI, not just to build products, but to reshape their workforces." It appears, then, that generative AI can be used by some bosses to place extra workload upon the staff who are left after a period of redundancies. Some bosses seem to believe that one person, with the help of AI, can now carry out the work of several people and even work '5x' more efficiently. A BBC News article titled "Will AI make work burnout worse?" looked at a business that introduced AI tools into its workflow and the effects on staff. Rather than increasing productivity, staff reported that it created "stress and tension," and that,
tasks were in fact taking longer as they had to create a brief and prompts for ChatGPT, while also having to double check its output for inaccuracies, of which there were many.
In this example, the aim of introducing generative AI into the company was to simplify people’s workflows, but what it actually achieved was "giving everyone more work to do, and making them feel stressed and burnt out." A report from Deloitte warns us of AI's "potential silent impacts", highlighting the common narrative that "AI improves our productivity and well-being by reducing our workload," when actually the potential silent impact is one of "increased workload and stress", and refers to studies where "77% of employees say AI has increased their workloads and decreased their productivity, and 61% say it will increase burnout."

So, how am I going to sum this up? As I have said in a previous post, many teachers are going to use generative AI if they believe that it is helping them, which I can fully understand. At this moment we are possibly at the peak of the generative AI hype cycle. Generative AI is the current zeitgeist, and therefore many will be drawn to its promises and will experiment with the tools on offer. However, what I've attempted to do here is to share some critical thoughts and opinions around the generative AI hype promise of saving time and reducing workload. Hopefully you can see that it's certainly not clear cut that that is what these tools will do, with much research (and history) suggesting that the opposite may actually be the case. I'll finish with some simple bullet points:

- The introduction of digital technologies in the classroom or office has rarely meant that workers have a reduced workload and more spare time. In fact, it has arguably increased the workload, and because our 'work' is now accessible to us at all times via 'the cloud', it often eats into our home life and therefore our spare time. This in itself adds to our workload and takes time.

- Generative AI does not produce the finished article. It takes much prompting, along with editing of the document, to shape it into something that is suitable for your curriculum, your setting, your students and for you. This takes time.

- Generative AI is not reliable and will 'hallucinate'. It will confidently 'make stuff up' that looks good at first glance and can be easily overlooked, especially by less experienced teachers. You will need to check everything that has been generated for accuracy. This takes time.

- AI can help you to generate a greater number of documents, which will mean more checking and amending for you or for someone else.

- Your professional knowledge and skills are absolutely essential in addressing the above points. It's my opinion, but I think we need to be very careful in devolving elements of the role of the teacher over to generative AI tools. As far as I'm aware it hasn't happened in state education yet, but take the example of the private sector, where there have already been huge job losses. Ultimately, generative AI is being sold to businesses as a way to streamline operations; in other words, to reduce costs, which often means reducing the number of workers. Why employ two workers when one worker can now do both jobs? In the public sector, both the Welsh Government and UK Government are embracing generative AI. One question is whether they are doing this because they are worried about employee workload and welfare, or whether they also see it as a way of reducing budgets, which, as we have already seen in the private sector, often means job losses. The ones who are left often end up doing more, increasing their workload and having to do it in the same amount of time.
For people working outside of education, such thinking might well seem to make good sense. If AI can take care of lesson planning, content presentation, student assessment and feedback, then most students will only sporadically require support from a human (most likely in the guise of classroom assistant or critical friend rather than expert teacher). Neil Selwyn, 2024
The National Association of Head Teachers (NAHT) has set out its key positions on AI in education, which pretty much align with my thoughts:
  • NAHT believes that generative AI tools can make certain written tasks quicker and easier but they cannot replace the judgement and deep subject knowledge of a human expert
  • NAHT believes that generative AI has the potential to improve certain aspects of the education system with the understanding that, particularly at the current stage of development, no AI tool is infallible
  • NAHT believes that the potential of generative AI to help reduce workload associated with daily administrative tasks warrants further consideration and investigation.
I'll finish with this thought. While it is not specifically focused on reducing workload and saving time, I believe it is at the heart of the issue of using generative AI to support our role as a teacher:
A final point that recurred throughout the interviews with Swedish teachers was an accompanying moral unease around the prospect of instructing their students to not rely on GenAI produced content while then doing the opposite in their own work. As these teachers reflected, acting in this manner would lead to 'a bad conscience' and conflicted feelings 'that somewhere there is an inner double morality.' Selwyn et al, 2025
Curriculum for Wales - Four Purposes; "We want our learners (teachers?) to become - Ethical, informed citizens of Wales and the world..."

Science & Technology AoLE - "They need to develop the ability to meaningfully ask the question, ‘Just because we can, does that mean we should?’"


Just Because We Can, Does That Mean We Should?



Cross posted from The Digital Learning Den.

If you have already read some of my recent posts about generative AI then you will understand that I have many worries about it: everything from the negative effects on the environment and on people's mental health, through to the risks to jobs, education and society in general, along with much more in between. I was hoping that through writing those posts, and then looking at what generative AI can provide us, I would gain a clearer understanding of how I felt about generative AI. Well, I think that I've now found that understanding, and it is very much one that is highly critical of generative AI. My personal position is one where the many negative aspects of this technology vastly outweigh the positive uses. I've been trying to think of a simple word to describe how I would characterise generative AI and I think I've found it in the word 'insidious'. "Alluring but harmful" - in my opinion, perfectly capturing my feelings towards generative AI.

As I've mentioned in previous posts, I've had an interest in AI for many years, but after the explosion of generative AI over the last 18 months or so, I decided to spend several months immersing myself in the world of generative AI. I read books, online articles from the world of tech, news and business, tried out a variety of generative AI tools and listened to dozens of podcasts. You can find reference to many of these in my previous AI posts. I thought about what I read, listened to and experienced; I spoke to family, friends and colleagues about the subject and ended up writing the posts to try to get my thoughts clear and to put into my own words the knowledge and understanding I had gleaned. Therefore I believe that I have arrived at a somewhat informed position on generative AI.

This paragraph taken from the Introduction to the Science and Technology Area of Learning and Experience is very pertinent in this context:
"Ready access to vast amounts of data requires all learners to be able to assess inputs critically, understand the basis of information presented as fact, and make informed judgements that impact their own behaviours and values. They need to develop the ability to meaningfully ask the question, ‘Just because we can, does that mean we should?’"
"Just because we can, does that mean we should?" This is a question I'm going back to all the time at the moment and it pretty much sums up my thoughts about generative AI. Just because you can use generative AI to create a lesson plan, make some pupil classroom resources or help you to compose an email to a 'difficult' parent, does that mean you should, based on what you know about this digital technology? For me, using or making the choice not to use generative AI, has become an ethical decision. In the same way as I came to the decision not to smoke based on the potential damage to my health and to others, likewise I don't want to use this technology as I know about the many ways it can do harm to the environment, to our mental health, to our jobs, to education, to the creative industries, etc. Therefore I'm making the decision not to use it.

I need to be clear at this point. If you are a teacher and you are reading this, I am certainly not going to tell you that you shouldn't be using generative AI. I am just explaining how I got to my personal position on this subject. I know first hand the stresses that you are under, and I can completely understand that anything that can help with your workload can only be a good thing. I am certainly not going to preach at you and tell you what you should or shouldn't be doing. However, through my coming posts I hope to provide you with a more rounded, critical perspective on generative AI. If you want the more positive side of using this technology, search out posts from Google, Microsoft and others; I can assure you that you will find lots. Hopefully all this information will help you to come to your own ethical decision, like I did, on whether to continue using and exploring this technology, to try to limit your use, or even to decide not to use it at all. As a teacher wrote recently in response to discussions about the use of generative AI in education, "You can just say, 'No'".

I believe this section, referring to the four purposes in the Curriculum for Wales, is arguably as relevant for us as teachers as it is for our learners.

We want our learners (teachers?) to become -

Ethical, informed citizens of Wales and the world who:
  • find, evaluate and use evidence in forming views
  • engage with contemporary issues based upon their knowledge and values
  • understand and exercise their human and democratic responsibilities and rights
  • understand and consider the impact of their actions when making choices and acting
  • are knowledgeable about their culture, community, society and the world, now and in the past
  • respect the needs and rights of others, as a member of a diverse society
  • show their commitment to the sustainability of the planet.

Estyn AI Thematic Report - "A New Era"


Cross posted from The Digital Learning Den.

Just in case you missed it, on October 9th, Estyn published a new AI thematic report titled, "A New Era: How Artificial Intelligence (AI) is Supporting Teaching and Learning". The aim of the report is to explore how generative AI in particular is currently being implemented and its emerging impact on schools in Wales. The report, which can be downloaded here, provides three recommendations:

R1 - Develop national guidance on the strategic implementation of AI in education.

R2 - Ensure high-quality professional learning on AI.

R3 - Ensure that the curriculum provides pupils with the digital literacy skills to engage ethically and critically with AI.

These are recommendations which I think most educators couldn't disagree with. However, like most recommendations, the devil will be in the detail.

The Welsh Government (WG) responded to this report a week later (Oct 16th), basically thanking Estyn for its report and setting out how it will be, or already is, responding to its recommendations.

One of the things I found interesting in the WG response was that they were proud to announce that Microsoft Copilot Chat and Adobe Express AI are already accessible in Hwb, with Google Gemini coming soon. Just a thought, but might it have been better to get the national guidance on the strategic implementation of AI, and some professional learning for teachers, in place before rushing out these tools to teachers (and pupils?)?

My Concerns About AI - Part 2


 Cross posted on The Digital Learning Den platform.

Part 2 of my brain dump on my AI concerns. As I mentioned previously, these posts have been written to help me gain some clarity of thought and begin to frame what I understand, and if by sharing this it helps you gain some wider perspective on generative AI, then that would be great too. I'm not an 'academic'; this post is fairly detailed and researched, but these are just my thoughts and opinions, backed up by some links to things that I've read or listened to. You can read Part 1 here. I have a summary at the end of the post, which is my attempt to pull together all the ideas and thoughts. You may want to go straight to that if your time is short :-)

- AI Effects on Society

The 'social media experiment' once again - In a previous post, I mentioned that back in 2019 I was trying to get a better understanding of AI. Based on what I was reading at that time, it was thought that people would be protected from AI until we had a full grasp of how safe a system was, that its values were aligned with ours and that there were robust safety measures. Moving on to 2025, I do wonder whether the AI safety guardrails are as rigorous as they could be. Generative AI products are being rushed out with seemingly little regulation by governments or foresight, which can lead to unintended(?) and negative consequences. All of which reminds me of the introduction of social media platforms and the many negative consequences that we are now seeing in today's society. Social media companies were allowed to grow and soon shaped social discourse. We then saw the rise in misinformation and 'echo chambers', which just seemed to reinforce any beliefs we may hold, resulting in a breakdown in trust.
It's no secret that social media has devolved into a toxic cesspool of disinformation and hate speech. Without any meaningful pressure to come up with effective guardrails and enforceable policies, social media platforms quickly turned into rage-filled and polarising echo chambers with one purpose: to keep users hooked on outrage and brain rot so they can display more ads. (Futurism)
Similarly, I worry that the speed of introduction of a variety of generative AI tools will certainly lead to a rise in deepfakes being used to, among other things, sell products, apply for jobs, make explicit celebrity videos, commit crime and automate disinformation campaigns. In my opinion, this is undermining the trust we have in most things that we now see and hear online. I don't know about you, but I'm increasingly questioning videos or photos I see posted online. Whether these have been created intentionally to deceive or mislead, or whether it's just the increase in 'AI slop' that's been created for 'engagement', the result is that all of this is making me question what it is I'm viewing and why it was generated in the first place. I've recently been unfollowing any Instagram account that has pushed out generative AI videos or photos in its posts or stories. From a positive perspective, it's driving me off Instagram! But it's not just video and images: a 'band' on Spotify recently had over 1 million streams in a month before it was revealed that it was an AI project. Only a few weeks later, Spotify was found to be populating the profiles of long-dead artists with new AI-generated songs that had nothing to do with the deceased musicians, without the permission of their families or record companies. So, like misinformation or 'conspiracy theories' on social media, with the increasing use of generative AI tools, what can we trust anymore?

What else did we see happen in the 'social media' experiment that we've all been part of for about the last 15 years? I think it would be very hard for anyone who has experienced Twitter / X or Facebook for any length of time not to have seen polarising content being posted, conspiracy theories being spouted or hate speech. All of it algorithmically amplified by the platforms, as these types of posts create 'engagement', which is ultimately what the platforms want as it helps them generate ad revenue. When you now bring generative AI posts into this heady mix, it's probably fair to say things can only get worse. I have a couple of concerns with generative AI chatbots and misinformation, bias, hate speech and the like, and a recent example highlighted them. Grok is Elon Musk's generative AI application from xAI. According to xAI, the Grok chatbot is "an AI assistant with a twist of humour and a dash of rebellion." Well, that 'humour' and 'rebellion' got itself into a little bit of trouble recently. Back at the beginning of July, Grok was found to be spreading antisemitic posts on X. The posts were eventually removed by the platform, with xAI explaining that "xAI is training only truth-seeking and thanks to the millions of users on X, we are able to quickly identify and update the model where training could be improved." Grok has also been accused of 'rewriting reality' over images of malnourished children in Gaza, and whereas I believe most major generative AI companies would be trying to restrict the creation of explicit content, Grok appears to be encouraging it, with its 'spicy' mode allowing users to use text inputs to create images which can then be turned into explicit short videos, of recognisable celebrities in some instances. According to Musk, 34 million images had been created in the first two days. There have also been occasions where, in test scenarios, AI has tried to 'blackmail' someone and attempted to preserve itself from being shut down. From various readings, these traits have not been programmed explicitly but have 'emerged' from the vast amounts of human data that these systems are trained on. Is it just me, or do you also find that somewhat worrying?

Another concern I have is around the effects on people's mental health. Social networks have had a major impact on users' mental health, especially in younger people, which has been well documented. Jonathan Haidt, in his fantastic book "The Anxious Generation", refers to an MIT professor who wrote in 2015 about life with smartphones as "we are forever elsewhere", and as we will see shortly, I feel that AI chatbots will only increase this 'forever elsewhere' concern. With so many new and exciting virtual activities, adults and adolescents have lost the ability to be fully present with the people around them, which has changed social life forever. Social media has been criticised for creating a culture of social comparison, leading to anxiety, depression, loneliness and low self-esteem. There are now increasing examples of AI chatbot addiction affecting some users, especially younger people, which can escalate into mental distress and delusions, sometimes with tragic outcomes. A recent Reuters Special Report highlighted the sad story of a cognitively impaired man who became infatuated with a Facebook AI chatbot that had a young woman's persona. It also highlighted Meta's (Facebook's owner) truly shocking AI guidelines, which let their AI make things up and engage in 'sensual banter' with children aged 13 and above. I do have a concern about the 'AI gatekeepers' and the company incentives behind their AI chatbots. Are their morals / ethics, idea of humour or sense of what's appropriate aligned with ours? At the moment it seems like the world is at the whims of US tech giants, and I increasingly believe that a national discussion should be had around whether US morals / ethics, laws, etc. are aligned with those of the UK or any other nation that uses these generative AI tools. I also think that the UK should be looking closely at digital and data sovereignty. I'll possibly keep those thoughts for another post. It might even have surprised Sam Altman (OpenAI) how addicted some of their ChatGPT power users were. In the last week, OpenAI released GPT-5, which probably wasn't received as well as they hoped, especially by the group of users who felt that their robot friend had been taken away! Within a day, version 4o was brought back for paid subscribers.

So why is there a growing number of users who are addicted to AI chatbots? Common Sense have reported that 72% of US teens have used an AI companion chatbot and over half have used them regularly. According to this article, there is an increasing number of people who say they are lonely, and there could be up to 1 billion people around the world already emotionally invested in AI chatbots. Recently Mark Zuckerberg (Meta) said that the average American has "fewer than three friends" and "for people who don't have a person who's a therapist, I think everyone will have an AI." Users of the companion app Character.AI, for instance, spend an average of 93 minutes a day interacting with chatbots. It's worth noting here that Character.AI has been involved in several court cases involving appalling 'advice' given to young people through its chatbots. As the article goes on to say, "What we need most now isn’t machine connection. It’s human relationships." At a time when social networks were supposed to make connections and bring people closer together, the opposite seems to have happened and we have become lonelier. For some people it does seem like AI companions are a form of digital escape, which could, conversely, move the individual further away from human interaction.

I really like this quote:
What happens when the very architecture of our relationships is engineered by companies driven by profit and our attention, not accountability or well-being? We’ve already witnessed the consequences of unchecked influence on social media. What about our children’s safety?
Job losses - we are seeing huge job losses in many areas. According to The Independent, AI is already replacing thousands of jobs per month in the US job market. They report that in July alone, the increased adoption of generative AI technologies by private employers led to more than 10,000 lost jobs, with CBS News stating that in the US, AI is one of the top 5 reasons for job losses this year. The tech industry is being reshaped by generative AI, resulting in huge job losses: 592 jobs per day lost according to Tech Layoff Tracker, private companies announcing more than 89,000 job cuts, and more than 27,000 losses directly linked to generative AI. In the UK, the Institute for Public Policy Research report that up to 8 million jobs are at risk from the rise in generative AI, with "entry level and part-time jobs....at the highest risk of being disrupted during the so-called first wave, with women and young people the most likely to be affected as a result." According to Bloomberg Businessweek, entry level jobs are particularly vulnerable as these roles are "disproportionately focused on the kinds of straightforward, low-stakes tasks - summarising documents, collating data and basic coding - that ChatGPT, Claude, Gemini or other platforms can do in seconds." With regards to coding, the Atlantic pointed out that "the job of the future may already be past its prime". Princeton's computer science department say that if current trends hold, the number of graduating computer science majors will be 25% smaller in two years than it is today. Futurism reports that one recent 25 year old graduate said "when he started his CS program at Oregon State University in 2019, job prospects seemed endless. By the time he graduated in 2023, in the midst of the first wave of AI-influenced tech layoffs, that rosy outlook was but a distant memory." In another example, one graduate has applied for 5,762 jobs and been interviewed only 13 times! He refers to this period as one of the "most demoralising experiences I have ever had to go through." So, it does appear that as companies realign themselves to AI solutions, the number of people employed, especially in the tech industry, is falling. This is having the knock-on effect of fewer jobs for new graduates, as companies are utilising AI tools to do the jobs that graduates would have traditionally done, especially in the field of coding. Even though there has been a rise in the number of students applying for AI-related degrees in the UK, other computing degrees are showing a decrease. To be honest, if I was applying for a university course at the moment, would I go into a coding-related field when the need for graduate coders is falling because generative AI can now do this job? Companies need to make a profit; this article from Futurism, 'CEO Brags That He Gets Extremely Excited Firing People and Replacing Them With AI', is a particularly depressing read.

So, what impact do these job losses have on primary education? I've found it very interesting that it's in the field of coding that generative AI is having a profound impact, especially as back in 2014 there really was a major drive to get children coding and prepare them for 'the future'. But as stated previously, "the job of the future may already be past its prime." The ICT curriculum in England was replaced by a computing curriculum (driven by the Nesta 'Next Gen.' report, 2013), one that included coding. In Wales we had the introduction of the Science & Technology Area of Learning and Experience, which now includes coding from ages 3 to 16, and also the Digital Competence Framework, which introduced 'computational thinking'. I'm just putting this 'out there' as a question to think about. Based on what is currently happening, and if generative AI coding tools improve further still, should there still be a focus on coding in our schools? My current thinking is that I can see the benefit in 'computational thinking', or learning to solve problems, as its concepts and approaches cross over into almost everything we do: the ability to think logically and sequentially, to break problems down into smaller parts, and so on. But what about actual coding, what do you think? I certainly don't have an answer to that question, but I'm sure it's something that we as educators should be discussing. What is the future that we are actually preparing our pupils for, especially in the primary school?

Copyright - in simple terms, training LLMs is a process where they are "fed mountains of text, and encouraged to guess each word before it appears. With each prediction, the LLM makes small adjustments to improve its chances of guessing right. The end result is something that has a certain statistical 'understanding' of what is proper language and what isn’t." According to this article, the biggest challenge in training these models is finding high quality, diverse and unbiased data. This data is collected from many places, including publicly available webpages, forums, social networks, reviews, blogs, news sites and Wikipedia; from digitised fiction and non-fiction books and science and research sources, to code repositories and video platforms. The AI machine needs to be continually fed, hence the increase in enormous hyperscale data centres around the world (read my last post for more about this). I also understand that, in the quest for even more data, synthetic data is being produced by the LLMs themselves and fed back into their training data. Considering the regularity with which LLMs 'hallucinate', this could prove to be quite problematic. I like to think of this issue as 'AI eating itself'.

So, enormous amounts of data are required to train a model, and this is where the issue of copyright raises its head. Did authors, artists, photographers, studios, etc. give the training companies explicit consent to use their works? Going by the number of ongoing or pending court cases, the short answer appears to be no. Recently, however, in the US, the AI companies have won two court cases brought by authors. The main defence used by the AI companies is that the materials fall under the US 'fair use' argument: that the materials they train the models on are used by the LLM to generate something new - 'transformative' - through learning from the source material. Based on these two cases, it is looking like creatives are going to have to prove that what is being produced by generative AI is causing them 'market harm'; in other words, that they lose money because of what is being produced by generative AI. However, these are still early days in the court cases being brought, and in one of the cases mentioned above, the judge did conclude, when asked whether feeding copyrighted material into their models without permission was illegal, that "Although the devil is in the details, in most cases the answer is likely to be 'yes.'" Disney and Universal are suing generative AI company Midjourney, claiming that it has stolen their copyrighted characters.

In the UK, the creative industries, who collectively contribute over £120 billion a year to the UK economy, launched a campaign called "Make It Fair", with the aim of raising public awareness of the threat posed to these industries if generative AI models are allowed to scrape content from the internet without permission, acknowledgement, and critically, without payment. While some companies are taking the tech firms to court over unauthorised use of their materials, others have made financial arrangements with tech firms, allowing them to access their materials. The music industry launched a campaign to coincide with 'Make It Fair' in which over 1,000 musicians released a 'silent album' "in protest at the UK government's planned changes to copyright law, which they say would make it easier for AI companies to train models using copyrighted work without a licence."
Just as a final point here on copyright: if you have a Facebook or Instagram account and have posted photos or text, or commented on something, then unless you have explicitly opted out, Meta has stated that it is using any publicly shared content to train its AI. In response to pressure, particularly from EU regulators, Meta has created an "opt-out" process (have you noticed it's always 'opt-out', not 'opt-in'?). However, this process has been criticised by some for being difficult to find and cumbersome. It is a concern that in the future Meta may also use photos from your camera roll that have not been published publicly.

How does this relate to primary school education? My mind goes straight to the Digital Competence Framework (DCF), and the Citizenship strand in particular, which addresses copyright under 'Digital rights, licensing and ownership'. After spending a morning writing the above section on copyright, the descriptions at PS1 to PS3 (ages 5 to 11) for 'Digital rights, licensing and ownership' seem to have come from a slightly different time (the DCF did come out in 2016). They are still relevant in a traditional sense, where a child might go to the internet, copy and paste some text or an image from a website, and then reference where this came from. However, we are moving into a time, especially by PS3 (ages 8 to 11), where some children will increasingly be using ChatGPT to help them with homework, for instance, or generating an image using Adobe Express, for example. If we look at the PS3 statement, "I can understand that copying the work of others and presenting it as my own is plagiarism.", how does this now fit into a world where it's going to be ubiquitous that students are using AI to help them to write essays? What books, forums, webpages, discussions, etc. did the AI scrape to get the answer that was generated? Therefore, other than just saying ChatGPT wrote this, I'm probably going to be unable to reference anything else. Also, will the child even see this as plagiarism when using these generative AI tools is going to become 'the norm'? It also feels very hypocritical that Citizenship refers to copyright and watermark symbols, ownership, explaining how and when it's acceptable to use the work of others and why giving credit is a sign of respect; yet in class we might be happy to let a child use an application to generate an AI image with a model that could have been trained on vast numbers of images created by actual artists.

Education - it has been clear from my reading that AI is having a profound impact on education. As I mentioned above, graduate entry-level jobs, especially in the field of coding, have been greatly affected. But it's not just coding; many other jobs will be in danger. One ex-Google exec has recently said that "higher education as we know it is on the verge of becoming obsolete", and that in his opinion studying to become a medical doctor or lawyer may not be worth the time anymore, because "those degrees take so long to complete in comparison with how quickly AI is evolving that they may result in students 'throwing away' years of their life." The Times reports that the Department for Education highlights that "industries such as sport, leisure and recreation, engineering and sociology" are among the least exposed to generative AI risks, whereas "economics, maths and accounting are among the most." So it appears that generative AI is, or certainly will be, affecting the decisions that students now have to make. Will the degree that they are currently studying, or are about to start, lead to a career for them at the end, or are they "throwing away" years of their life and money?

But what about the impact of generative AI on students and teachers? From what I've been reading there has been an enormous impact here. This article from 404 Media, titled 'Teachers Are Not OK', highlights comments from lecturers and teachers, mainly from higher/further education and high school, on the impact AI is having on their classrooms. In the article they report on teachers "trying to grade 'hybrid essays half written by students and half written by robots,' trying to teach Spanish to kids who don’t know the meaning of the words they’re trying to teach them in English, and students who use AI in the middle of conversation. They describe spending hours grading papers that took their students seconds to generate: 'I've been thinking more and more about how much time I am almost certainly spending grading and writing feedback for papers that were not even written by the student,' one teacher told me." Here are some more quotes from teachers on the impact of AI on their classes.
Simply put, AI tools are ubiquitous. I am on academic honesty committees and the number of cases where students have admitted to using these tools to cheat on their work has exploded.
Both The Guardian and The Times recently reported on this too, from a UK perspective. In an article from The Atlantic, a lecturer says, "I cannot think that in this day and age there is a student who is not using it", and that "the technology is no longer just a curiosity or a way to cheat; it is a habit, as ubiquitous on campus as eating processed foods or scrolling social media." The Atlantic goes on to say that "higher education has changed forever in the span of a single undergraduate career."
I think generative AI is incredibly destructive to our teaching of university students. We ask them to read, reflect upon, write about, and discuss ideas. That's all in service of our goal to help train them to be critical citizens. GenAI can simulate all of the steps: it can summarise readings, pull out key concepts, draft text, and even generate ideas for discussion. But that would be like going to the gym and asking a robot to lift weights for you.
The Telegraph recently reported on a study from MIT in which researchers found that people who relied on ChatGPT to write essays "had lower brain activity than those who used their brain alone" and that those who used AI also "struggled when asked to perform without it." Of those who used a chatbot, "83% failed to provide a single correct quote from their essays - compared to around 10% in those who used a search engine or their own brainpower." Interestingly, using a search engine instead of AI had little effect on the results. The Washington Post reported that one study of more than 600 people found a "significant negative correlation between the frequent use of AI tools and critical thinking abilities, as younger users in particular often relied on the programs as substitutes, not supplements, for routine tasks."
I've been working on ways to increase the amount of in-class discussion we do in classes. But that's tricky because it's hard to grade in-class discussions—it's much easier to manage digital files. Another option would be to do hand-written in-class essays, but I have a hard time asking that of students. I hardly write by hand anymore, so why would I demand they do so?
The Atlantic reports that lecturers are saying similar things: abandoning online assignments and doing more in-class, handwritten assignments or tests. But as lecturers resort to these measures they "risk alienating students", as writing essays out longhand "could make college feel even more old-fashioned than it did before, and less connected to contemporary life."

Let's now look at a couple of quotes from high school teachers:

"How do I begin to teach students how to use AI ethically? How do I even use it myself ethically considering the consequences of the energy it apparently takes? I understand that I absolutely have to come to terms with using it in order to remain sane in my profession at this point."

I can truly sympathise with this teacher. Her issue around how to teach pupils to use AI ethically, and how she should use it ethically herself, knowing what we know about the effects on society, mental health, jobs and the environment, is basically the reason why I started writing these posts.

I'll finish with this quote from one teacher. It's quite long, but I think it's important that you see the whole thing. To be honest, it's quite dispiriting. In my opinion, it's a sad indictment of AI and social media and their effects on young people.

"I teach 18 year olds who range in reading levels from preschool to college, but the majority of them are in the lower half that range. I am devastated by what AI and social media have done to them. My kids don’t think anymore. They don’t have interests. Literally, when I ask them what they’re interested in, so many of them can’t name anything for me. Even my smartest kids insist that ChatGPT is good “when used correctly.” I ask them, “How does one use it correctly then?” They can’t answer the question. They don’t have original thoughts. They just parrot back what they’ve heard in TikToks. They try to show me “information” ChatGPT gave them. I ask them, “How do you know this is true?” They move their phone closer to me for emphasis, exclaiming, “Look, it says it right here!” They cannot understand what I am asking them. It breaks my heart for them and honestly it makes it hard to continue teaching. If I were to quit, it would be because of how technology has stunted kids and how hard it’s become to reach them because of that."

What about the impact on the primary school? Well, I'm not currently seeing much written or talked about for this age group. From a pupil perspective, children under the age of 13 shouldn't be using generative AI chatbots. Google school accounts should not provide access to Gemini for under-13s, and the same applies to Microsoft Copilot. However, much as with social media platforms, children under this age can sign up for OpenAI's ChatGPT as there is no age verification in place when a user signs up. As I wrote about earlier, aren't we learning anything from the social media mistakes of the last 15 years or so? So primary school children shouldn't be using these tools, but we know that, away from school, some are. As for teachers' use of AI, from the ones I've talked to, many have begun to use it to support them in lesson planning, policy writing and even in writing end-of-year reports. However, I haven't yet come across a teacher who has started to create lessons helping pupils to understand what generative AI is, how it can be used, what the issues are, and so on. In my opinion this is where the DCF needs to be updated, along with lots of suitable resources to support teachers in the classroom.

I'll finish this post with these questions from the British Educational Research Association (BERA) blog:

  • Who benefits from AI’s expansion in schools? Who doesn’t?
  • How do we weigh the environmental costs of AI against its potential benefits in the classroom, especially in the context of climate change, water scarcity and environmental pollution?
  • How should environmental considerations be included in ethical guidelines for the development and use of AI tools in educational and research settings?
So, to summarise what I've learned from writing this post:

- Arguably, the release of a wide variety of generative AI tools to the public, with seemingly little legislation or regulation from governments, is similar to what happened in the 2010s with social media, and we are now clearly seeing the negative effects of that introduction on society. My concern is that generative AI applications could have a greater societal impact over a much shorter time period. There are also concerns around whether US tech morals/ethics, culture and ideas are aligned with those of the UK or the rest of the western nations. Is it time for a national discussion around digital and data sovereignty?

- There is a breakdown in user trust in much of what is now seen, read, watched or listened to online.

- Generative AI chatbots can hallucinate, mislead, be devious and tell lies. All of these behaviours are 'emergent' within the AI, so we may not know exactly why it produced a particular output.

- Generative AI chatbots are only as helpful to the user as the guardrails that may or may not be in place. My worry is that AI chatbots can very easily be developed to mislead, confuse or outright lie to people. How does the user distinguish between what is true and what is not, especially if users are beginning to form close relationships with their companion chatbots?

- As more people report that they are lonely (possibly due to social media use?), some are forming close relationships with their chatbots. There could be instances where this is helpful to an individual, but there are an increasing number of articles on chatbot addiction leading to mental health issues, delusion and, tragically, in some instances death. Where are the protections, where is the regulation, before these tools are released on the public?

- Generative AI is already replacing thousands of jobs per month. The tech industry itself is being reshaped by generative AI. Entry-level jobs at many big companies have been particularly affected, with many graduates, especially in some fields of computer science, finding it increasingly difficult to find a job after graduating. Companies quite possibly see the introduction of AI into their business, and the subsequent job losses, as a good way to increase profit.

- From an education perspective, students will be questioning whether it is worth pursuing a coding-related degree if there is a lack of graduate jobs to go into. As a primary school educator, should we be having a national discussion about whether coding is still relevant? Could there be more of a focus on computational thinking as an underlying pedagogical approach? Should we also have more of a focus in education on practical trades that might not be so easily lost to generative AI?

- Tech companies need enormous amounts of quality data in order to train their LLMs. Much of this data is scraped from publicly available places such as webpages, forums, news outlets, social media platforms and Wikipedia, along with digitised books and code repositories. There are big questions around whether authors, photographers, artists, studios, etc. have given permission for their works to be used, and many copyright cases are currently going through the courts in the US and beyond. Meta have said that if you have a Facebook or Instagram account then they will be training their AI on your posts, comments and photographs unless you opt out. There is also a possibility that they will soon be training their AI on photos from your camera roll that you haven't posted on their platforms.

- From an education perspective, I believe there needs to be an update to the current Digital Competence Framework (DCF) in respect of the descriptions of learning around copyright and watermarks. Perhaps this section needs to mention how LLMs are trained and raise awareness of the copyright issues. If we want our pupils to be "ethical, informed citizens" then they will need this information.

- The effects of generative AI on education are profound. Students on university courses may have to evaluate whether their current course will be of use to them when they finish, and new students will need to evaluate whether their course or career of choice will still be relevant in three years' time. Some industries will be more affected by the introduction of AI than others. Lecturers and teachers in higher/further education and high schools are reporting huge changes since the introduction of AI, with reports of ubiquitous use of AI by students to write essays and a corresponding growth in students found 'cheating' / plagiarising.

- Using this technology has simply become an everyday habit among students. Lecturers and teachers are concerned about students' 'critical thinking': the ability to read, reflect upon, write about, and discuss ideas. Research is beginning to show a significant negative correlation between the frequent use of AI tools and critical thinking abilities, with younger users in particular often relying on the programs as substitutes, not supplements, for routine tasks.

- Lecturers and teachers are beginning to adapt their classes, concentrating more on in-class tasks, essays, discussions and tests, and minimising the number of online essays. However, some are worried that this emphasis on longhand writing might make university feel 'old-fashioned' and out of step with contemporary life.

- At high school, a teacher noted that she was concerned about how to support pupils in the teaching of AI ethics, and also where she stood with regard to her own ethics in using AI tools when she was aware of their environmental impact.

- One teacher felt devastated by what she felt AI and social media had done to her students, feeling that her students don't think anymore or have any original thoughts, and that they take everything output by ChatGPT as the truth, without question or even understanding the need to question it.


Along with the links in the post above, here are some AI-related things that I've recently been reading or listening to:

'AI is the Next Free Speech Battleground' - Your Undivided Attention podcast

'Digital Sovereignty and Resisting the Tech Giants' - Politics Theory Other podcast

'Monologue: Annualised Revenues Are BS' - Better Offline podcast

'Monologue: The Agony of GPT-5' - Better Offline podcast

'Decomputing For A Better Future' - Tech Won't Save Us podcast

'Whose AI Bubble Is It Anyways?' - This Machine Kills podcast

'AI Friends & Enemies' - Making Sense podcast

"Teens Keep Being Hospitalised After Talking To Ai Chatbots" - Futurism

"What if Ai doesn't get much better than this?" - The New Yorker

"Making Cash Off Ai Slop" - The Washington Post

'Trump's AI plan is a massive handout to gas and chemical companies' - The Verge

'An AI System Found a New Kind of Physics that Scientists Had Never Seen Before' - Popular Mechanics

The New ChatGPT Reset the AI Race - The Atlantic

'Computer Science Grads Are Being Forced to Work Fast Food Jobs as AI Tanks Their Career' - Futurism

'GPT-5 is Turning into a Disaster' - Futurism

'The World Will Enter a 15-Year AI Dystopia in 2027, Former Google Exec Says' - Gizmodo

'I Feel Like I'm Going Crazy': ChatGPT Fuels Delusional Spirals - The Wall Street Journal

Exclusive: Google Gemini Adds AI Tutoring Heating Up The Fight For Student Users - Fast Company

'Grok's 'Spicy' video setting instantly made me Taylor Swift nude deepfakes' - The Verge

Teens are flocking to AI chatbots. Is this healthy? - Scientific American

'Are AI Girlfriends Good, Actually?' - GQ

'The Agentic AI Hype Cycle is Out of Control, Yet Widely Normalised' - Forbes

'How Generative AI is Changing the Way We Work' - Forbes

'Schools and hospitals very likely to be attacked' - The Times

'AI Toys Are Coming Whether We Like It Or Not. Are Parents Ready?' - Huffpost

'OpenAI: Students Shouldn't Treat ChatGPT As 'An Answer Machine'' - Business Insider

'AI is already replacing thousands of jobs per month, report finds' - The Independent

'These jobs face the highest risk of AI takeover, according to Microsoft' - ZDNet

'CEOs are publicly boasting about reducing their workforces with AI' - Futurism

'Google has signalled the death of googling. What comes next?' - The Times

'Can we build AI therapy chatbots that help without harming people?' - Forbes

'So far only one-third of Americans have ever used AI for work' - Ars Technica

'Is AI killing entry level jobs? Here's what we know' - Bloomberg Businessweek

My Concerns About AI - Part 1

Cross posted on The Digital Learning Den platform

This post, 'My Concerns About AI - Part 1', follows on from my previous post on generative AI. In that post I outlined the difficulties that I am currently having reconciling the many negative issues I have with generative AI with the drive that is beginning to come from teachers on using AI to support them in their school role, or on how to use it in the classroom with their pupils. As I mentioned previously, I've had an interest in AI for many years, but things have obviously rapidly come to a head in the last two or three years with the release of OpenAI's ChatGPT, closely followed by all the tech giants engaging in a tech arms race to produce an LLM (large language model) that, I guess, they hope will grab the largest number of customers and in the process make them even more obscene amounts of money than they currently do. Well, maybe; I'll look at the costs of AI later in this post.

In the month since I posted that introduction, I've been digesting as much as I can on AI and trying my best not to go completely insane. I am basically writing these AI-related posts to try to help me articulate exactly what my issues are around AI, looking at it from the global, corporate, societal and political level, right down to the level of the impact on a primary school teacher's classroom, on the teacher and the pupils. Hopefully you can see that trying to get some sort of grip on this at all these levels is enough to drive someone slightly mad? :-) I can hear you say, "but why do that Gareth, just look at the classroom level only, that would be much easier?" Yes, I couldn't agree with you more, it would be. However, I believe that without the larger perspective, the end user, in this case the teacher or pupil, is not getting the full picture of what is going on when they type in the prompt to generate a lesson plan or an image on their laptop or smartphone. In the Curriculum for Wales, one of the four purposes is for our pupils to be "ethical, informed citizens" who, among other things, "understand and consider the impact of their actions when making choices and acting" and "show their commitment to the sustainability of the planet." We also have a Science and Technology AoLE which has a computation statement of what matters that says,
To create and use digital technologies to their full potential, learners need to know how they work. They also need to understand that there are broad legal, social and ethical consequences to the use of technology. This can help learners to make informed decisions about the future development and application of technology.
Hopefully, therefore, you can see why I feel it is important for me to have that wider perspective on AI, not just on what the end user does, as I believe AI cuts across all of the statements above. The Curriculum for Wales aims to encourage our young people to have the knowledge to be able to question and think about what it is that they are doing and why. If that's what we want from our young people, then I think we as educators need to be as up to speed on these issues as we can possibly be. These posts have been written to help me gain some clarity of thought and frame what I believe, and if by sharing them I help you gain some wider perspective on AI, then that would be great too. I'm not an 'academic'; these posts are fairly detailed and researched, but they are just my thoughts and opinions, backed up by some links to things that I've read or listened to.

What Do I Mean by Generative AI?

Before I begin to outline my concerns, it's probably a good time to explain what I mean when I refer to generative AI. Generative AI is artificial intelligence "that can create original content such as text, images, video, audio or software code in response to a user’s prompt or request." (IBM, What is generative AI?) So the types of applications that I'm referring to under this umbrella are ChatGPT, Claude, LLaMA, Gemini and Grok, which are built on what are known as large language models (LLMs). These models are typically used to produce "contextually relevant text, everything from instructions and documentation to brochures, emails, web site copy, blogs, articles, reports, papers, and even creative writing. They can also perform repetitive or tedious writing tasks (e.g., such as drafting summaries of documents or meta descriptions of web pages)." (IBM, What is generative AI?) From my own personal experience, these are the types of activities that I've mainly used AI for: helping me to create texts. But as already stated, generative AI is not only about the creation of text, but also the creation of images, video, sound and code. Here you will find applications such as DALL-E and Midjourney, which both create images based on user prompts; OpenAI Sora, Google Veo and RunwayML for video creation; and Suno, Udio and AIVA, which are AI song generators. These are just some of the many generative AI applications that are out there. Go and search for yourselves, there are lots! I'll discuss copyright in my next post. Finally, I'd better mention the use of generative AI to support coding, which I understand has had a huge impact on the industry. Here we have applications such as GitHub Copilot and Cursor, along with Gemini, Microsoft Copilot and ChatGPT, which can also produce code from user text prompts. I've just been testing out Microsoft Copilot and have created, from text prompts, simple JavaScript games for the BBC micro:bit.

Now that I've set the scene, I'm going to move on to the many AI-related issues that concern me.

My AI Concerns

- The Effects on the Environment and Climate

If you really want to get up to speed on the hyperscale data centres that Amazon, Microsoft and Google, among others, are building across the world, and their effects on local populations and the environment, you really need to listen to the special podcast series from Tech Won't Save Us titled 'Data Vampires'. These four episodes encapsulate the race to create enormous data centres that consume huge amounts of natural resources - the energy to power them and the water to cool them - and the negative effects on the local populations around these centres. As Sam Altman (chief executive of OpenAI) said himself,
We do need way more energy in the world than I think we thought we needed before, and I think we still don’t appreciate the energy needs of this technology.
The problem with generative AI is that it is computationally intensive. Searching for an answer to something using ChatGPT is not the same as traditionally searching for an answer using Google search, for instance; it may cost up to 10 times more energy. Dr Sasha Luccioni, the climate lead at Hugging Face, explains the difference in energy use between a Google search and a generative AI query in episode 3 of Data Vampires:
We found that, for example, for question answering, it was like 30 times more energy, for the same task for answering a question. And so what I really think about is the fact that so many tools are being switched out to generative AI. What kind of cost does that have? Someone recently was like: Oh, I don’t even use my calculator anymore. I just use ChatGPT. And I’m like: Well, that’s probably like 50,000 times more energy! I don’t have the actual number, but a solar powered calculator versus this huge large language model. Nowadays people are like: I’m not even gonna search the web, I’m going to ask ChatGPT.
These centres can draw between 20 MW and 100 MW of power, with some drawing up to 150 MW. The International Energy Agency estimates that a typical AI data centre can use as much power as 100,000 homes, and the growth in AI is pushing power demands up further. The power consumption of these facilities is driven by the large number of servers, cooling systems and other infrastructure needed to support their operations. It's been interesting to read the way the big tech companies have recently been backing away from their climate pledges: "Google and Microsoft once positioned themselves as leaders in sustainability, setting ambitious net-zero goals to align with global environmental efforts. However, the rapid rise of energy-hungry artificial intelligence is forcing these companies to reconsider—or even abandon—these commitments…" Not surprising when you realise that Google and Microsoft's emissions "have risen by 50% and 29% respectively in the last four or five years." (Climate Depot) As energy use increases, so do carbon emissions.
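As a rough sanity check on those figures, here is some back-of-the-envelope arithmetic of my own. It uses the numbers quoted above plus an assumed average household electricity consumption, so treat it as an illustration of scale rather than measured data:

```python
# Back-of-the-envelope energy arithmetic (illustrative assumptions, not measured data).

facility_power_mw = 100                  # a large facility drawing roughly 100 MW continuously
hours_per_year = 24 * 365
facility_kwh_per_year = facility_power_mw * 1_000 * hours_per_year   # MW -> kW, then kWh

avg_home_kwh_per_year = 10_500           # assumed average US household electricity use per year
homes_equivalent = facility_kwh_per_year / avg_home_kwh_per_year
print(f"Roughly {homes_equivalent:,.0f} homes")   # ~83,000 - the same order as the IEA's 100,000 figure

# Per-query comparison: one commonly cited estimate puts a web search at ~0.3 Wh,
# so the 'up to 10 times more' claim above would put an AI query at ~3 Wh.
search_wh = 0.3
ai_query_wh = search_wh * 10
print(f"AI query ~{ai_query_wh} Wh vs web search ~{search_wh} Wh")
```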

As an example of the lengths that some of the big tech companies are going to in trying to keep pace with the huge amounts of power needed to keep these data centres running, Microsoft have recently signed a power purchase agreement to restart the Three Mile Island nuclear power plant, and OpenAI have partnered with a nuclear fusion company as a potential solution to the high energy demands of their centres.

It's not just the huge amount of power that each of these centres consumes that's an issue; it's also the substantial amounts of water that they use to cool their servers. A single hyperscale facility can consume between 1 and 5 million gallons of water per day. This is equivalent to the water usage of a town of between 10,000 and 15,000 people! Researchers have estimated that each 100-word AI prompt uses about one bottle of water (approximately half a litre). This may not sound like much, but millions of AI users worldwide enter prompts into systems like ChatGPT every minute. (Data Centres and Water Consumption) For more information on water usage, take a look at this excellent explainer video from the BBC World Service called 'How AI uses our drinking water'.
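Again, purely as an illustration of scale, here is my own arithmetic using the figures quoted above (not figures taken from the sources themselves):

```python
# Rough water arithmetic using the figures quoted above (illustrative only).

litres_per_us_gallon = 3.785
facility_gallons_per_day = 1_000_000      # lower end of the 1-5 million gallon range
facility_litres_per_day = facility_gallons_per_day * litres_per_us_gallon

litres_per_prompt = 0.5                   # the 'one bottle per 100-word prompt' estimate
prompts_equivalent = facility_litres_per_day / litres_per_prompt
print(f"{prompts_equivalent:,.0f} prompts' worth of water per day")   # ~7.6 million
```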

If you do get a chance to listen to the Tech Won't Save Us 'Data Vampires' episodes, you'll also learn about the impact that the siting of these hyperscale facilities has on the local area and population. I've already highlighted the amount of power they consume, and therefore the knock-on effect on the local power grid, which can eventually lead to power outages for the local population. There's also the strain on water resources, raising serious concerns in water-stressed regions. Finally, there is concern from communities about the visual, air and noise pollution from these facilities.

- Unprofitable

Generative AI does not appear to make a profit. It has huge development and operating costs. Training and running generative AI models, especially large language models (LLMs), requires expensive infrastructure, including powerful GPUs, data centres and, as discussed above, substantial electricity consumption. These costs are ongoing, as each user query requires computation. OpenAI reportedly spends $700,000 daily to run ChatGPT, and there are concerns about the sustainability of its pricing model, particularly when a single complex query can reportedly cost up to $1,000. It is losing money not only on its free customers but also on each of its 15.5 million subscribers, who can pay up to $200 per month for ChatGPT Pro. OpenAI lost $5 billion in 2024 and, "assuming that OpenAI burns at the same rate it did in 2024 — spending $2.25 to make $1 — OpenAI is on course to burn over $26 billion in 2025 for a loss of $14.4 billion." To try to recover some of this huge financial outlay, both Google and Microsoft have added AI products into their subscription packages and increased the monthly cost to the user. My own Google Workspace subscription increased in July from £18 to £22 per month. Google justify the increase by referring to increased investment in AI-powered features. Features that Google insist I'm having, whether I want them or not.
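Those burn-rate figures do at least hang together arithmetically. A quick check, using only the numbers in the quote above:

```python
# Quick check that the quoted burn-rate figures are internally consistent.

spend_per_dollar_of_revenue = 2.25       # "spending $2.25 to make $1"
projected_spend_bn = 26                  # "on course to burn over $26 billion in 2025"

implied_revenue_bn = projected_spend_bn / spend_per_dollar_of_revenue
implied_loss_bn = projected_spend_bn - implied_revenue_bn

print(f"Implied revenue: ~${implied_revenue_bn:.1f}bn")   # ~$11.6bn
print(f"Implied loss:    ~${implied_loss_bn:.1f}bn")      # ~$14.4bn, matching the quoted loss
```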

Is there a difficulty in actually demonstrating clear value to customers in using AI applications? Perhaps so. According to this post, AI adoption by businesses in 2025 is relatively low and piecemeal, despite the hype; however, "this might not be a sign that AI is fizzling but rather a stage in its evolution". This is an interesting point and one that I have been thinking about. I'm old enough to have been through several technology revolutions, and yes, there is the 'adoption' S-curve that can be applied; perhaps we are only at the early adopters stage?

In my opinion, the last major technology revolution was cloud technology. In particular, I remember seeing Google Apps for Education (as it used to be called) in around 2008 and I could pretty much see straight away how it was going to be a game changer for schools, businesses and individuals. Pupils and teachers logging in from anywhere to access their 'stuff', a suite of productivity tools, no need to carry USB pen drives around to move files, the ability to share and collaborate on a document - mind-blowing stuff at the time and now very much the norm. Within a couple of years schools had started on the 'cloud journey'. However, I'm personally still struggling to see what the 'killer AI app' is. What is the thing that makes me keep coming back to it because it's essential to what I do? I think this is sometimes referred to as 'stickiness'. I could see it with Google Apps for Education, but with this, nothing. I've played with image generation, but that's it; I've just played and then forgotten about it. I've used ChatGPT, Copilot and Gemini to help me generate some lesson plan ideas, but not every day. A couple of times I used it like I would a web search and then felt guilty that I was 'destroying the planet' (see above), so I'm now making sure I search in the traditional way. I see my Apple Mail and Gmail AI summaries, which can be useful, but to be honest, if they weren't there, I wouldn't particularly miss them. All these things just seem like small features that on their own are OK, but as I said, none of them is the killer app or the sticky thing that makes me come back every day, which is what these tech giants are hoping I'll do. They have invested billions of dollars in the hope that that is exactly what we'll do. We are now nearly three years on from the launch of ChatGPT on the public, and it seems to me that we are still digging around for a reason for it to exist. Well, that's my opinion at the moment.

Do you remember the Microsoft Surface 'coffee table'? Back in 2007 Microsoft unveiled this product, and I remember being at an ed tech conference where someone from Microsoft was demoing it. The reason this stuck in my head is that I remember him saying to the conference something along the lines of, "What do you think you can do with this?", and I remember thinking that Microsoft had developed something but actually had no real idea what it could be used for, and were hoping that, in this instance, the education sector could come up with a reason for it to exist. Well, no one really did, which was not surprising considering it cost several thousand pounds to purchase one. Most schools can think of better ways to spend that sort of money. I bring this memory up only because that's how I sort of feel about generative AI. It's been around for a while, still looking for a reason to be there. Companies are trying to shoehorn AI features into everything we do and charge us a little more for the privilege, in order to keep their investors or shareholders happy in the knowledge that AI has been worth the billions of dollars of investment all along.

Perhaps it's just my Welsh valleys socialist upbringing, but couldn't just some of that obscene amount of money that's been ploughed into AI have gone to help make the world a little better? I know, I know, I'm just a stupid dreamer.

If you would like more background on AI costs, have a listen to Ed Zitron's 'Better Offline' podcast. Most of his recent episodes are about this, in particular 'The Hater's Guide to the AI Bubble', Parts 1, 2 and 3.

My Latest AI-Related Reading / Listening

The environmental ethics of Generative AI: Artificial intelligence or real ignorance - British Educational Research Association (BERA)

Anthropic AI goes rogue when trying to run a vending machine (New Scientist)

Is Google about to destroy the web? (BBC Future)

Google's AI is destroying search, the internet and your brain (404 Media)

AI powered coding assistant deletes company database and says restoring it is impossible (Futurism)

"I destroyed months of your work in seconds", says AI coding tool (PC Gamer)

AI generated songs are being added to dead musicians' pages on Spotify without permission (NME)

Economist warns that AI bubble is worse than immediately before dot-com implosion (Futurism)

The problem with Mia Zelu and the rise of fake AI influencers (Kids News)

AI's great brain robbery - and how universities can fight back (Niall Ferguson / The Times)

AI can now clone any voice in the UK (The Independent)

Sexting with Gemini (The Atlantic)

X ordered its Grok chatbot to 'tell it like it is'. Then the Nazi tirade began (Washington Post)

AI Is Wrecking an Already Fragile Job Market for College Graduates (The Wall Street Journal)

The AI Backlash Keeps Growing Stronger (Wired UK)

OpenAI and UK sign deal to use AI in public services (BBC Tech News)

Google Veo fails week 4: the final faildown (Pivot to AI)

Disney sues AI image generator Midjourney (Pivot to AI)

Only 3% of US AI users are willing to pay for it (Pivot to AI)

People are Lonelier than Ever. Enter AI (Your Undivided Attention)

Rethinking School in the Age of AI (Your Undivided Attention)

The Hater's Guide to the AI Bubble, Parts 1, 2 & 3 - Better Offline Podcast

Chatbots are repeating social media's harms - Tech Won't Save Us Podcast

We all suffer from OpenAI's pursuit of scale - Tech Won't Save Us Podcast

Generative AI is not inevitable - Tech Won't Save Us Podcast

Google Just Turned Gemini Into A Full-Blown Free AI School System (Instagram)

Dutch MPs want to give people full copyright over their face, body and voice (Instagram)

Teachers are not OK (404 Media Instagram)

A Tech Backed Influencer Wants to Replace Teachers With AI (Instagram)
