Cross posted from The Digital Learning Den.
A few weeks ago Estyn, the school inspectorate in Wales, published a thematic report on AI, at the request of the Welsh Government, titled "A New Era: How Artificial Intelligence (AI) is Supporting Teaching and Learning". The report explored how artificial intelligence (AI), and generative AI (GenAI) in particular, is currently being implemented, and its emerging impact, in schools and pupil referral units (PRUs) across Wales. Three recommendations came out of the report, which roughly boiled down to: a need to develop national guidance on the use of AI in education, high quality professional learning on using AI, and lastly an update to the DCF to include AI literacy. Soon after, the Welsh Government responded to the report, welcoming it and outlining what it is currently doing, and what it will do, to address the recommendations. It has been interesting to read Estyn's report, which captures an early snapshot of where we currently are in Wales with regard to how schools and teachers are using generative AI: for example, which generative AI tools they are using, what they are using them for, what concerns they have, and what they believe are the benefits these tools bring to teaching and learning, planning and assessment, and school leadership and management. In this post I'm going to concentrate on one recurring comment in the Estyn report from schools and teachers: that generative AI reduces workload and saves time. In fact, in the Executive Summary, Estyn say that teachers across all sectors reported "substantial workload reductions". This sentiment has also been expressed by the OECD, who say that "focusing on Gen AI can liberate teachers from routine administrative and instructional tasks", and by the UK Government Education Minister, Stephen Morgan, who said that harnessing this tech will ease "the pressures and workload burdens we know are facing the profession" and will free "up time, allowing them to focus on face to face teaching."
Quite possibly because of the websites I'm viewing, the stream of posts I'm fed on my business Facebook account is full of adverts for generative AI related tools, all of them aimed at teachers. It's certainly not a coincidence that most of these tools are promoted to teachers as helping them reduce their administrative workload and therefore saving them time. The companies certainly know which 'buttons to press' when it comes to selling something to teachers. Last week a particular advert caught my eye. It was from a very well known company, one that was actually mentioned several times in teacher responses in the Estyn report. This is what the advert said:
Calling all teachers! This could be the school year for you... Take up a new sport - Learn to cook delicious new dishes - Spend more time in the great outdoors - See more of your family and friends. Our 100+ AI tools save our teachers over 10+ hours every week! What will you do with your free time?

Now, I've got no reason not to believe their statement that 10+ hours a week are being saved; perhaps they have carried out some research and this is what they found, although I can't verify the figure from the advert. If we take a traditional working day as 8 hours, then the claim of saving 10+ hours means that I can save over a day of work per week. Sounds amazing. However, what slightly irritated me was the idea that we would spend that free time on all those activities, however worthwhile they are. From my own experience, saving time on my work tasks hardly ever results in 'free time'; I just fill it with another work task, adding to my workload.
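To spell out the advert's arithmetic (assuming a conventional 8-hour day and 5-day week, figures the advert itself doesn't state):

```latex
\frac{10\ \text{hours saved}}{8\ \text{hours per day}} = 1.25\ \text{days},
\qquad
\frac{1.25\ \text{days}}{5\ \text{days}} = 25\%\ \text{of the working week}
```

A quarter of the working week handed back, if the claim holds, which is exactly why it sells so well.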
Technology has regularly promised that it will take the drudgery out of your life, saving you time to do enjoyable things, but it so often fails to deliver. Forbes published an article in November 2024 titled "AI's False Time-Saving Promise. Or Why AI Is Like The Vacuum Cleaner". In it, Martin Gutmann refers to the work of historian Ruth Schwartz Cowan, who pointed out in her book More Work for Mother that the vacuum cleaner did not reduce the labour required around the home. Rather, it shifted norms and raised expectations: homes were now expected to be cleaned more frequently and to higher standards. The promised reduction in work was an illusion; the work itself was merely reshaped and intensified. The US Senate in 1965 predicted that by the end of the 20th century the average US citizen would be working a 14-hour week. Quite obviously, things did not work out this way.
Martin goes on to write that,
Generative AI is being introduced with similar utopian promises. It is lauded for its ability to automate routine tasks, create efficiencies, and allow human workers to focus on tasks that are more meaningful or more creative. The narrative is that it will free us from the tedious tasks that burden us and provide time to innovate, connect, or simply rest. But will it?

Before focusing specifically on generative AI, let's look at some of the claims of the digital office revolution. The 'paperless office'; instantaneous communications; the automating of repetitive tasks; all supposedly allowing us to get away from our workplace sooner. But it hasn't worked out that way. "As the pace of communication accelerated, expectations changed. Emails begot instant responses. Reports that once took weeks became deliverable in days or even hours. 'Office productivity' became synonymous with more output, more emails, and more deadlines." It could be argued that the introduction of cloud-based productivity tools such as Google Workspace or Microsoft Office 365, now ubiquitous in businesses and quite clearly used by staff in virtually every school in Wales, has allowed teachers to be in constant communication with one another, with collaborative access to all their documents. Arguably, it has become more difficult than ever to separate our working life from our home life. Our work is with us constantly. "The computer and email didn't free workers; it chained them to their tasks in new and less visible ways." It's a personal opinion, but email, and increasingly messaging groups, are the bane of most working people's lives. Even if your workplace has a policy that staff do not need to answer communications outside of working hours, the very fact that an email or message has arrived out of hours can place an element of guilt upon the individual that they haven't responded to it, and they are now thinking about its content anyway. As for the paperless office, we just ended up creating digital files, which almost anyone can produce. The sheer volume of emails, instant messages, PDFs, word-processed documents and collaborative online documents has instead created 'digital clutter', which often feels just as burdensome as all the paper we had previously. A study published in 2024 recommended that "if new technology is being adopted to help teachers do their jobs, then school leaders need to make sure it will not add extra work for them", and that if a school implements a new digital technology it "should make sure that they are streamlining the job of being a teacher by offsetting other tasks, and not simply adding more work to their load", adding that if the adoption of new technology "adds to or increases teachers' workloads, then adding technology increases the likelihood that a teacher will burn out."
So much for reducing workload and saving time; increasingly, digital technology appears to have created more workload and invaded our home life, our 'free time'. Going back to what Ruth Cowan wrote, the norms were certainly shifted and expectations were raised. Work tasks that once stayed between the walls of the office or school building are now accessible anywhere, and the expectation, whether explicitly stated or not, is that we are always available.
In short, digital technologies are often a source of longer working hours, role expansion, increased non-teaching and administrative duties, and increased accountability – adding to the increased demands now being placed on teachers. Digital technologies and the futures of education - towards 'non-stupid' optimism (2021)

Let's now explore generative AI. I'll outline some of the issues, as I see them, around this idea that generative AI will reduce workload and save us time. According to the Estyn report, generative AI is being used in a number of ways across schools, supporting both administrative tasks and work in the classroom: for example, streamlining planning, report writing and creating a variety of classroom resources. To generate a lesson plan, for example, a teacher needs to enter a prompt into an AI 'chatbot', instructing it on what they want it to generate. According to the report, teachers are using a variety of tools to do this, including Microsoft Copilot and OpenAI's ChatGPT, among several others. In the lesson plan example, the chatbot then produces a lesson plan based on the prompt you have given. If you have ever been through this process yourself, you will know that what the chatbot produces first time is very rarely the finished product. Your prompt may need refining several times before you get an output you are happy with.
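As a rough sketch of what that prompt-and-refine loop looks like behind the scenes (this uses OpenAI's Python client purely for illustration; the model name, prompts and three-pass limit are my own assumptions, not anything described in the Estyn report):

```python
# A minimal sketch of the prompt -> review -> re-prompt loop described above.
# Assumes the OpenAI Python client (pip install openai) with an API key set in
# the environment; the model name and prompts are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

messages = [
    {"role": "system", "content": "You help a teacher draft lesson plans."},
    {"role": "user", "content": "Draft a Year 5 science lesson plan on the water cycle."},
]

for attempt in range(3):  # every refinement pass is another prompt, and more time
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    draft = response.choices[0].message.content
    print(f"--- Draft {attempt + 1} ---\n{draft}\n")

    # The teacher reviews the draft and either accepts it or types a refinement.
    feedback = input("Refinement (leave blank to accept): ").strip()
    if not feedback:
        break
    messages.append({"role": "assistant", "content": draft})
    messages.append({"role": "user", "content": feedback})
```

The point of the sketch is the review step in the middle: the loop cannot close without a human reading the draft, and that is exactly where the advertised time saving starts to leak away.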
Another key challenge related to the art of prompting; over 40 respondents mentioned that knowing how to frame effective prompts was a barrier. Respondents described having to spend time refining their questions or commands to get the desired output, which could be off-putting for new users. Estyn Thematic Report, 2025

From my own experience, this text then needs further editing to make it relevant and appropriate for you. In a research study by Neil Selwyn et al (2025), the text produced by generative AI "was described as involving relatively substantial amounts of editing, reorganising, rewriting and in some instances completely reworking what Gen AI produced." Sadly, a recent study from Finland found that most participants "relied on single prompts and trusted AI answers without reflection." This is described as cognitive offloading, "where users trust the system’s output without reflection or double-checking."
The data revealed that most users rarely prompted ChatGPT more than once per question. Often, they simply copied the question, put it in the AI system, and were happy with the AI’s solution without checking or second-guessing. https://neurosciencenews.com/ai-dunning-kruger-trap-29869/

Below are some of my thoughts on the possible reasons why the lesson plan output example could need further editing:
1. Curriculum context - from my own experience, and something also reported in the Neil Selwyn et al study, generative AI finds it very difficult to create a lesson plan that is relevant to your national context or curriculum. Let's look at this from a Curriculum for Wales perspective. Depending on what your school requires, you may need to include in your lesson plan one or more objectives that align with a relevant description of learning, which comes from a particular area (or areas) of learning and experience and from the correct progression step. You may also need to include the cross-curricular skills you are focusing on (literacy / numeracy / digital competence) and possibly which of the four purposes this lesson helps the learner develop. I recently created some lesson plans and, to help, I fed the application an example of how I wanted the plan to look, and even the literacy, numeracy and DCF documents. It confidently worked these into what it produced, and almost fooled me into thinking it had produced a lesson plan with accurate cross-curricular DCF statements, until I looked at them closely. At first glance they looked good, but on closer inspection every single one of them was made up. They were not from the framework at all. I had to go back to the DCF document and insert the correct statements that were relevant to the lesson. There is also the Welsh context to consider. An interesting point in the Neil Selwyn et al study was that the Swedish teachers involved bemoaned "the preponderance of English language sources and US perspectives." It would be interesting to explore this further with teachers in Welsh language schools and find out whether it is something they have encountered. Another point to think about is whether the values and perspectives of the US are aligned with those of our own country. Some teachers in the Estyn report were concerned about similar issues and mentioned "Americanised spellings and examples." I also wonder how many AI generated lessons will include that authentic sense of 'cynefin' (the place where we feel we belong, where the people and the landscape around us are familiar, and the sights and sounds are reassuringly recognisable). Hopefully you can see the difficulty your AI chatbot might have in trying to get all these aspects aligned in a lesson plan without much teacher intervention.
2. It Doesn't Know My Students - generative AI doesn't know your class. If you are a primary school teacher, you will generally have the same class for a year. You end up knowing the strengths and weaknesses of each child; the partners and groups they work well with, and those they don't; the sorts of lessons and activities they like doing, and those they don't; how they've progressed in each part of the curriculum and where they need to go next. Your generative AI chatbot knows none of this. Neil Selwyn et al refer to this with a quote from John Haugeland: "The trouble with Artificial Intelligence is that computers don't give a damn." What it is liable to produce can be very generic and not in any way tailored to your class. You will need to use your professional knowledge and skills to rework whatever your AI chatbot has produced so that it is relevant to your setting. Selwyn et al write that teachers in the study felt that "Gen AI 'doesn't know my students'", which "was a common justification for teachers deciding to stop prompting and instead take responsibility for the authoring of output." A similar finding appears in the Estyn report: asked why they had not yet used generative AI in their role, some teachers cited a "preference for their own methods or scepticism about the relevance of AI to certain aspects of teaching, particularly where relationships and deep understanding of pupils are paramount."
3. It Doesn't Know Me - teachers teach in different ways. We have our strengths and we have our weaknesses. There's a good chance that a lesson plan that was followed successfully by one teacher might be unsuccessful for another using exactly the same plan. I would argue that it is rarely the case that a teacher can take an 'off the shelf' lesson plan and follow it word for word. I'm sure I'm not alone in 'cannibalising' this type of lesson plan, amending it and making it work for my style of teaching and for my class. Anything created by generative AI will therefore need amending to make sure it works for you.
While I agree that generative AI tools can create a lesson plan for teachers, and that this could appear to save them time, I would argue that the lesson plan is never the finished article and will need much amending, as I outlined above, and this takes time. It will need references to the relevant parts of the curriculum, the correct cross-curricular skills and purposes added, and rewriting in a manner that is suitable for you and your class. Anecdotally, I am also starting to hear of AI lesson plans being generated (and delivered in class) that are way beyond what is suitable for a particular year group, and not relevant to the Curriculum for Wales. I believe we have to be very careful here as a teaching profession. Teachers are the professionals, and generative AI is just a tool that can help you get started if you feel 'stuck' on how to approach a lesson or a task. If we don't use our professional knowledge, skills and understanding to amend what these tools spit out, then I fear we will be undermining our profession and possibly implying that anyone, or anything, can produce a lesson. As this report states, "the role of digital technology in diminishing teachers’ professional autonomy and expertise remains a key concern." I am particularly concerned about newly qualified staff who create documents with generative AI, as they may not have the knowledge or experience to amend what has been created to suit the curriculum, their class or themselves. The authors of the Finnish study mentioned previously suggest that their work "adds to a rapidly growing volume of research indicating that blindly trusting AI output comes with risks like ‘dumbing down’ people’s ability to source reliable information and even workforce de-skilling." As a teacher, you bring an expertise to the process that someone who isn't a teacher cannot. Generative AI is not a teacher. This is a concern recognised by some of the teachers in the Estyn report.
Teachers may become dependent on AI for planning or resource creation, bypassing professional judgement and reflection.

So, did using generative AI really save us time? After several prompting attempts (or possibly only one?), the chatbot might have generated something that looks like a lesson, but whatever has been produced will definitely need amending, using our professional expertise, and that takes time. Let's look at a couple of other things we need to think about when we believe that time is being saved and our workload is being reduced.
4. False information / 'confidently' faking the answers - also known as AI "hallucinations". It is a pretty common feature of generative AI to produce false information. I have recently posted on social media about an AI hallucination that Google AI Overview generated about me! Research conducted "by OpenAI found that its latest and most powerful reasoning models, o3 and o4-mini, hallucinated 33% and 48% of the time", which should obviously raise concerns about the accuracy and reliability of the outputs from generative AI chatbots. There have been several reports of companies using generative AI to help them prepare legal documents for court, and even reports for governments, that contain AI-hallucinated quotes and references to non-existent reports or criminal case citations. "When lawyers are caught using ChatGPT or any generative AI tool to create citations without checking them, that's incompetence, just pure and simple." These large businesses were basically using generative AI to 'save time', but got caught out because they didn't look closely at what was produced for reliability and accuracy. Estyn also highlighted teachers' concerns in this area:
Over 70 responses highlighted that AI-generated content often contained inaccuracies, required proofreading, or presented inappropriate tone or complexity, particularly for younger pupils.

If, in a school context, we are using generative AI to help us write policy or curriculum strategy documents, or communications to outside agencies, are they accurate? Surely no one wants to be accused of being 'incompetent'. Do these documents contain factual information that can be verified, or has your chatbot generated text so confidently written that you believe it and overlook the errors? This was the issue I mentioned above with the DCF statements that were added to my lesson plan. At first glance they looked correct, but on closer inspection they were completely made up. I used my professional knowledge and understanding of the DCF and could spot the problems. But what happens if you don't have that depth of knowledge in whatever it is you are using AI to generate? Would you be able to spot the confident 'hallucinations', or just miss the problems? So someone, whether it is the person who prompted it (I don't believe the word 'authored' would be correct here) or management, has to spend time going back through whatever has been created, double-checking for hallucinations and making any other amendments.
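To illustrate what that double-checking amounts to, here is a hedged sketch: a small script that flags any 'DCF statement' in a generated plan that doesn't appear word for word in the official framework text. The file names and the exact-match approach are hypothetical, my own assumptions for illustration; in practice this check is usually a teacher reading the two documents side by side.

```python
# Hedged sketch: flag generated "framework statements" that do not appear
# verbatim in the official framework document. Both file names are hypothetical.
from pathlib import Path


def load_statements(path: str) -> list[str]:
    """Read one statement per line, skipping blank lines."""
    text = Path(path).read_text(encoding="utf-8")
    return [line.strip() for line in text.splitlines() if line.strip()]


official = set(load_statements("dcf_framework_statements.txt"))
generated = load_statements("ai_lesson_plan_statements.txt")

for statement in generated:
    if statement not in official:
        # The failure mode described above: confident, plausible-sounding,
        # but not actually from the framework.
        print(f"NOT IN FRAMEWORK: {statement}")
```

Even a crude check like this only catches word-for-word fabrications; judging whether a genuine statement actually fits the lesson still requires the teacher's professional knowledge, and that, once again, takes time.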
Which leads me very neatly on to the next aspect to think about with regard to saving time and workload.
5. Workslop - a relatively new term that's appeared in many articles, but one I think is very apt. You may already have come across the term 'AI slop', now given to the vast amount of generative AI content, in particular articles, photos and video, that increasingly dominates our social media feeds. Workslop is the term for "AI-generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task." In my own words: what has been generated with AI might look impressive but is pretty much pointless. It is very easy to generate text with AI. With the correct prompting and re-prompting, generative AI will confidently spew out as much as you want for a particular task. From my own experience, possibly too much is generated, often filling several pages; in some cases, I would argue, far more than you would ever create if generative AI wasn't involved in the process. Superficially, it does look impressive. However, we must now go through the process of editing, amending and spotting any hallucinations, and that is going to take up your time for all the reasons discussed previously. We could be subtly shifting the norms towards larger texts, and raising the expectation among colleagues that this is 'what a good one looks like'. As a staff, there is the potential to start making larger, text- and image-heavy documents 'just because we can'. We can also create them faster. All of which will mean someone, probably on the school management team, having to sit and amend lots of large AI-generated documents. A Fortune article recently reported that researchers found extra work was created for workers receiving 'workslop', who found themselves "redoing reports clearly written by AI, or holding a meeting to discuss a mystifying memo. It also caused employees to question their peers’ intelligence and the value of AI technology." Questioning their peers' intelligence is an interesting statement. I've recently come across examples of this myself, where people have questioned colleagues' 'intelligence' with regard to the extent that they now rely on generative AI to do much of their work. Perhaps too much cognitive offloading is going on? There has been much written about this online, and one study found "a significant negative correlation between frequent AI tool usage and critical thinking abilities, mediated by increased cognitive offloading. Younger participants exhibited higher dependence on AI tools and lower critical thinking scores compared to older participants." Think back to my worries about newly qualified staff.
6. Raised Expectations - generative AI has already had a massive impact on many workplaces. I think it is fair comment that many business owners in the private sector see the introduction of generative AI in the workplace as an opportunity to cut costs. I wrote about the large job losses already experienced in several sectors, including the big tech companies, in my recent My Concerns About AI post. An article from Futurism, for example, reported how one CEO brags about how extremely excited he gets about firing people and replacing them with AI! Recently we've also had Meta's Metaverse chief urging employees to adopt AI across every workflow and to "go 5x faster". The article goes on to say that the message from Meta's chief "highlights a fear that workers have had for quite some time: That bosses are not just expecting to replace workers with AI, they are expecting those who remain to use AI to become far more efficient." An article from Business Insider explains that "tech giants, from Meta to Amazon, are using technology and, at times, AI, not just to build products, but to reshape their workforces." It appears, then, that generative AI can be used by some bosses to place extra workload upon the staff who are left after a period of redundancies. Some bosses seem to believe that one person, with the help of AI, can now carry out the work of several people and even work '5x' more efficiently. A BBC News article titled "Will AI make work burnout worse?" looked at a business that introduced AI tools into its workflow and the effects on staff. Rather than increasing productivity, staff reported that it created "stress and tension," and that,
tasks were in fact taking longer as they had to create a brief and prompts for ChatGPT, while also having to double check its output for inaccuracies, of which there were many.

In this example, the aim of introducing generative AI into the company was to simplify people’s workflows, but what it actually achieved was "giving everyone more work to do, and making them feel stressed and burnt out." A report from Deloitte warns of AI's "potential silent impacts", highlighting the common narrative that "AI improves our productivity and well-being by reducing our workload", when the potential silent impact is actually one of "increased workload and stress", and refers to studies where "77% of employees say AI has increased their workloads and decreased their productivity, and 61% say it will increase burnout."
So, how am I going to sum this up? As I have said in a previous post, many teachers are going to use generative AI if they believe it is helping them, which I can fully understand. At this moment we are possibly at the peak of the generative AI hype cycle. Generative AI is the current zeitgeist, and therefore many will be drawn to its promises and will experiment with the tools on offer. However, what I've attempted to do here is share some critical thoughts and opinions on the generative AI hype promises of saving time and reducing workload. Hopefully you can see that it's far from clear cut that that's what these tools will do, with much research (and history) highlighting that the opposite may actually be the case. I'll finish with some simple bullet points:
- The introduction of digital technologies in the classroom / office has rarely meant reduced workload and more spare time for workers. In fact, it has arguably increased workload, and because our 'work' is now accessible to us at all times via 'the cloud', it often eats into our home life and therefore our spare time.
- Generative AI does not produce the finished article. It takes much prompting, along with editing, to shape the document into something suitable for your curriculum, your setting, your students and for you. This takes time.
- Generative AI is not reliable and will 'hallucinate'. It will confidently 'make stuff up' that looks good at first glance and can easily be overlooked, especially by less experienced teachers. You will need to check everything that has been generated for accuracy. This takes time.
- AI can help you to generate a greater number of documents, which will mean more checking and amending for you or for someone else.
- Your professional knowledge and skills are absolutely essential in addressing the above points. It's my opinion, but I think we need to be very careful about devolving elements of the role of the teacher to generative AI tools. As far as I'm aware it hasn't happened in state education yet, but take the example of the private sector, where there have already been huge job losses. Ultimately, generative AI is being sold to businesses as a way to streamline them; in other words, to reduce costs, which often means reducing the number of workers. Why employ two workers when one worker can now do both jobs? In the public sector, both the Welsh Government and UK Government are embracing generative AI. One question is whether they are doing this because they are worried about employee workload and welfare, or whether they also see it as a way of reducing budgets, which, as we have already seen in the private sector, often means job losses. The ones who are left often end up doing more, increasing their workload, and having to do it in the same amount of time.
For people working outside of education, such thinking might well seem to make good sense. If AI can take care of lesson planning, content presentation, student assessment and feedback, then most students will only sporadically require support from a human (most likely in the guise of classroom assistant or critical friend rather than expert teacher). Neil Selwyn, 2024

The National Association of Head Teachers (NAHT) has set out its key positions on AI in education, which pretty much align with my thoughts:
- NAHT believes that generative AI tools can make certain written tasks quicker and easier but they cannot replace the judgement and deep subject knowledge of a human expert
- NAHT believes that generative AI has the potential to improve certain aspects of the education system with the understanding that, particularly at the current stage of development, no AI tool is infallible
- NAHT believes that the potential of generative AI to help reduce workload associated with daily administrative tasks warrants further consideration and investigation.
A final point that recurred throughout the interviews with Swedish teachers was an accompanying moral unease around the prospect of instructing their students to not rely on GenAI produced content while then doing the opposite in their own work. As these teachers reflected, acting in this manner would lead to 'a bad conscience' and conflicted feelings 'that somewhere there is an inner double morality.' Selwyn et al, 2025

Curriculum for Wales - Four Purposes: "We want our learners (teachers?) to become - Ethical, informed citizens of Wales and the world..."
Science & Technology AoLE - "They need to develop the ability to meaningfully ask the question, ‘Just because we can, does that mean we should?’"




