Cross posted on The Digital Learning Den platform. Part 2 of my brain dump on my AI concerns. As I mentioned previously, these posts have been written to help me gain some clarity of thought and begin to frame what I understand, and if sharing them helps you gain some wider perspective on generative AI, then that would be great too. I'm not an 'academic'; this post is fairly detailed and researched, but it is just my thoughts and opinions, backed up by some links to things that I've read or listened to. You can read Part 1 here. I've put a summary at the end of the post which is my attempt to pull together all the ideas and thoughts. You may want to go straight to that if your time is short :-)
- AI Effects on Society
The 'social media experiment' once again - In a previous post, I mentioned that back in 2019 I was trying to get a better understanding of AI. Based on what I was reading at that time, the thinking was that people would be protected from AI until we had a full grasp of how safe a system was, that its values were aligned with ours, and that there were robust safety measures in place. Moving on to 2025, I do wonder whether the AI safety guardrails are as rigorous as they could be. Generative AI products are being rushed out with seemingly little government regulation or foresight, which can lead to unintended(?) and negative consequences. All of which reminds me of the introduction of social media platforms and the many negative consequences that we are now seeing in today's society. Social media companies were allowed to grow and soon shaped social discourse. We then saw the rise of misinformation and 'echo chambers', which seemed to reinforce whatever beliefs we already held and resulted in a breakdown of trust.
It's no secret that social media has devolved into a toxic cesspool of disinformation and hate speech. Without any meaningful pressure to come up with effective guardrails and enforceable policies, social media platforms quickly turned into rage-filled and polarising echo chambers with one purpose: to keep users hooked on outrage and brain rot so they can display more ads. (Futurism)

Similarly, I worry that the speed at which a variety of generative AI tools are being introduced will lead to a rise in deepfakes being used to, among other things, sell products, apply for jobs, make explicit celebrity videos, commit crime and automate disinformation campaigns. In my opinion, this is undermining the trust we have in most things that we now see and hear online. I don't know about you, but I'm increasingly questioning videos or photos I see posted online. Whether these have been created intentionally to deceive or mislead, or whether it's just the increase in 'AI slop' created for 'engagement', the result is that all of this is making me question what it is I'm viewing and why it was generated in the first place. I've recently been unfollowing any Instagram account that has pushed out generative AI videos or photos in its posts or stories. From a positive perspective, it's driving me off Instagram! But it's not just video and images: a 'band' on Spotify recently had over 1 million streams in a month before it was revealed to be an AI project. Only a few weeks later, Spotify was found to be populating the profiles of long-dead artists with new AI-generated songs that had nothing to do with the deceased musicians, without the permission of their families or record companies. So, like misinformation or 'conspiracy theories' on social media, with the increasing use of generative AI tools, what can we trust anymore?
What else did we see happen in the 'social media' experiment that we've all been part of for roughly the last 15 years? I think it would be very hard for anyone who has experienced Twitter / X or Facebook for any length of time not to have seen polarising content being posted, conspiracy theories being spouted or hate speech, all algorithmically amplified by the platforms, because these types of posts create 'engagement', which is ultimately what the platforms want as it helps them generate ad revenue. When you now bring generative AI posts into this heady mix, it's probably fair to say things can only get worse. I have a couple of concerns with generative AI chatbots and misinformation, bias, hate speech and the like, and a recent example highlighted them. Grok is Elon Musk's generative AI application from xAI. According to xAI, the Grok chatbot is "an AI assistant with a twist of humour and a dash of rebellion." Well, that 'humour' and 'rebellion' has got itself into a little bit of trouble recently. Back at the beginning of July, Grok was found to be spreading antisemitic posts on X. The posts were eventually removed by the platform, with xAI explaining that "xAI is training only truth-seeking and thanks to the millions of users on X, we are able to quickly identify and update the model where training could be improved." Grok has also been accused of 'rewriting reality' over images of malnourished children in Gaza, and whereas I believe most major generative AI companies are trying to restrict the creation of explicit content, Grok appears to be encouraging it: its 'spicy' mode allows users to use text inputs to create images which can then be turned into explicit short videos, of recognisable celebrities in some instances. According to Musk, 34 million images had been created in the first two days. There have also been test scenarios in which an AI has tried to 'blackmail' someone and attempted to preserve itself from being shut down. From various readings, these traits have not been programmed explicitly but have 'emerged' from the vast amount of human data that these systems are trained on. Is it just me, or do you also find that somewhat worrying?
Another concern I have is around the effects on people's mental health. Social networks have had a major impact on users' mental health, especially in younger people, which has been well documented. Jonathan Haidt, in his fantastic book "The Anxious Generation", refers to an MIT professor who wrote in 2015 that life with smartphones means "we are forever elsewhere", and as we will see shortly, I feel that AI chatbots will only deepen this 'forever elsewhere' concern. With so many new and exciting virtual activities, adults and adolescents have lost the ability to be fully present with the people around them, which has changed social life for ever. Social media has been criticised for creating a culture of social comparison, leading to anxiety, depression, loneliness and low self-esteem. There are now increasing examples of AI chatbot addiction affecting some users, especially younger people, which can escalate into mental distress and delusions, sometimes with tragic outcomes. A recent Reuters Special Report highlighted the sad story of a cognitively impaired man who became infatuated with a Facebook AI chatbot that had a young woman's persona. It also highlighted Meta's (Facebook's owner) truly shocking AI guidelines, which let their AI make things up and engage in 'sensual banter' with children aged 13 and above. I do have a concern about the 'AI gatekeepers' and the company incentives behind their AI chatbots. Are their morals / ethics, idea of humour or sense of what's appropriate aligned with ours? At the moment it seems like the world is at the whims of US tech giants, and I increasingly believe that a national discussion should be had around whether US morals / ethics, laws, etc. are aligned with those of the UK or any other nation that uses these generative AI tools. I also think that the UK should be looking closely at digital and data sovereignty. I'll possibly keep those thoughts for another post. It might even have surprised Sam Altman (OpenAI) how addicted some of their ChatGPT power users were. In the last week, OpenAI released GPT-5, which probably wasn't received as well as they hoped, especially by the group of users who felt that their robot friend had been taken away! Within a day, version 4o was brought back for paid subscribers.
So why is there a growing number of users who are addicted to AI chatbots? Common Sense have reported that 72% of US teens have used an AI companion chatbot and over half have used them regularly. According to this article, there is an increasing number of people who say they are lonely, and there could be up to 1 billion people around the world already emotionally invested in AI chatbots. Recently Mark Zuckerberg (Meta) said that the average American has "fewer than three friends" and "for people who don't have a person who's a therapist, I think everyone will have an AI." Users of the companion app Character.AI, for instance, spend an average of 93 minutes a day interacting with chatbots. It's worth noting here that Character.AI has been involved in several court cases involving appalling 'advice' given to young people through its chatbots. As the article goes on to say, "What we need most now isn’t machine connection. It’s human relationships." At a time when social networks were supposed to make connections and bring people closer together, the opposite seems to have happened and we have become lonelier. For some people, AI companions do seem like a form of digital escape, which could conversely move the individual further away from human interactions.
I really like this quote:
What happens when the very architecture of our relationships is engineered by companies driven by profit and our attention, not accountability or well-being? We’ve already witnessed the consequences of unchecked influence on social media. What about our children’s safety?

Job losses - we are seeing huge job losses in many areas. According to The Independent, AI is already replacing thousands of jobs per month in the US job market. They report that in July alone, the increased adoption of generative AI technologies by private employers led to more than 10,000 lost jobs, with CBS News stating that in the US, AI is one of the top 5 reasons for job losses this year. The tech industry is being reshaped by generative AI, resulting in huge job losses - 592 jobs per day lost according to Tech Layoff Tracker, with private companies announcing more than 89,000 job cuts and more than 27,000 losses directly linked to generative AI. In the UK, the Institute for Public Policy Research reports that up to 8 million jobs are at risk from the rise in generative AI, with "entry level and part-time jobs....at the highest risk of being disrupted during the so-called first wave, with women and young people the most likely to be affected as a result." According to Bloomberg Businessweek, entry level jobs are particularly vulnerable as these roles are "disproportionately focused on the kinds of straightforward, low-stakes tasks - summarising documents, collating data and basic coding - that ChatGPT, Claude, Gemini or other platforms can do in seconds." With regards to coding, The Atlantic pointed out that "the job of the future may already be past its prime". Princeton's computer science department says that if current trends hold, the number of graduating computer science majors will be 25% smaller in two years than today. Futurism reports that for one recent 25-year-old graduate, "when he started his CS program at Oregon State University in 2019, job prospects seemed endless. By the time he graduated in 2023, in the midst of the first wave of AI-influenced tech layoffs, that rosy outlook was but a distant memory." In another example, one graduate applied for 5,762 jobs and was interviewed only 13 times! He refers to this period as among the "most demoralising experiences I have ever had to go through." So, it does appear that as companies realign themselves to AI solutions, the number of people employed, especially in the tech industry, is falling. This is having the knock-on effect of fewer jobs for new graduates, as companies are using AI tools to do the jobs that graduates would traditionally have done, especially in the field of coding. Even though there has been a rise in the number of students applying for AI-related degrees in the UK, other computing degrees are showing a decrease. To be honest, if I were applying for a university course at the moment, would I go into a coding-related field when the need for graduate coders is falling because generative AI can now do this job? Companies need to make a profit; this article from Futurism, 'CEO Brags That He Gets Extremely Excited Firing People and Replacing Them With AI', is a particularly depressing read.
So, what impact do these job losses have on primary education? I've found it very interesting that it's in the field of coding that generative AI is having such a profound impact, especially as back in 2014 there was a major drive to get children coding and prepare them for 'the future'. But as stated previously, "the job of the future may already be past its prime." The ICT curriculum in England was changed to a computing curriculum that included coding (driven by the Nesta 'Next Gen.' report, 2013). In Wales we had the introduction of the Science & Technology Area of Learning and Experience, which now includes coding from ages 3 to 16, and also the Digital Competence Framework, which introduced 'computational thinking'. I'm just putting this 'out there' as a question to think about. Based on what is currently happening, and if generative AI coding tools improve further still, should there still be a focus on coding in our schools? My current thinking is that I can see the benefit in 'computational thinking', or learning to solve problems, as its concepts and approaches cross over into almost everything we do - the ability to think logically and sequentially, breaking problems down into smaller parts, etc. But what about actual coding, what do you think? I certainly don't have an answer to that question, but I'm sure it's something that we as educators should be discussing. What is the future that we are actually preparing our pupils for, especially in the primary school?
Copyright - in simple terms, training LLMs is a process where they are "fed mountains of text, and encouraged to guess each word before it appears. With each prediction, the LLM makes small adjustments to improve its chances of guessing right. The end result is something that has a certain statistical “understanding” of what is proper language and what isn’t." According to this article, the biggest challenge in training these models is finding high quality, diverse and unbiased data. This data is collected from many places: publicly available webpages, forums, social networks, reviews, blogs, news sites and Wikipedia; digitised fiction and non-fiction books; science and research sources; code repositories and video platforms. The AI machine needs to be continually fed. Hence the increase in enormous hyperscale data centres around the world (read my last post for more about this). I also understand that, in the quest for even more data, synthetic data is being produced by the LLMs themselves and fed back into their training data. Considering the regularity with which LLMs 'hallucinate', this could prove to be quite problematic. I like to think of this issue as 'AI eating itself'. So, enormous amounts of data are required to train a model. This is where the issue of copyright raises its head. Did authors, artists, photographers, studios, etc. give the training companies explicit consent to use their works? Going by the number of ongoing or pending court cases, the short answer appears to be no. However, in the US, the AI companies have recently won two court cases brought by authors. The main defence used by the AI companies is that the materials fall under the 'fair use' (US) argument: that the materials they train the models on are used by the LLM to generate something new - 'transformative' - through learning from the source material. Based on these two cases, it is looking like creatives are going to have to prove that what is being produced by generative AI is causing them 'market harm', in other words that they lose money because of what generative AI produces. However, these are still early days in the court cases being brought, and in one of the cases mentioned above, the judge did conclude, when asked whether feeding copyrighted material into their models without permission was illegal, that "Although the devil is in the details, in most cases the answer is likely to be 'yes.'" Disney and Universal are suing generative AI company Midjourney, claiming that it has stolen their copyrighted characters. In the UK, the creative industries, which collectively contribute over £120 billion a year to the UK economy, launched a campaign called "Make It Fair", with the aim of raising public awareness of the threat posed to these industries if generative AI models are allowed to scrape content from the internet without permission, acknowledgement and, critically, without payment. While some companies are taking the tech firms to court over unauthorised use of their materials, others have made financial arrangements with tech firms, allowing them to access their materials. The music industry launched a campaign to coincide with 'Make It Fair', where over 1,000 musicians released a 'silent album' "in protest at the UK government's planned changes to copyright law, which they say would make it easier for AI companies to train models using copyrighted work without a licence."
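As an aside, to make the 'guess the next word' idea quoted at the start of this section a little more concrete, here is a deliberately tiny, purely illustrative sketch. It uses a hypothetical word-pair (bigram) counter over a made-up corpus; a real LLM instead uses a neural network making those "small adjustments" via gradient updates over billions of documents, but the underlying goal - predicting the next word from statistics of previously seen text - is the same.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the "mountains of text" an LLM is fed.
corpus = [
    "the cat sat on the mat",
    "the cat sat on the rug",
    "the dog chased the cat",
]

# Count how often each word follows another - a crude statistical
# 'understanding' of which word tends to come next.
following = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1

def predict_next(word):
    """Guess the most likely next word, based purely on what has been 'read'."""
    candidates = following.get(word)
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

print(predict_next("the"))   # -> 'cat' (the most frequent follower of 'the' here)
print(predict_next("sat"))   # -> 'on'
```

Even this toy version makes the copyright tension obvious: the 'model' is nothing without the text it was fed.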
Just as a final point here on copyright: if you have a Facebook or Instagram account and have posted photos or text, or commented on something, then unless you have explicitly opted out, Meta has stated that it is using any publicly shared content to train its AI. In response to pressure, particularly from EU regulators, Meta has created an "opt-out" process (have you noticed it's always 'opt-out', not 'opt-in'?). However, this process has been criticised by some for being difficult to find and cumbersome. It is a concern that in the future Meta may also use photos from your camera roll that have not been published publicly.
How does this relate to primary school education? My mind goes straight to the Digital Competence Framework (DCF) and the Citizenship strand in particular, which addresses copyright under 'Digital rights, licensing and ownership'. After spending a morning writing the above section on copyright, the descriptions at PS1 to PS3 (ages 5 to 11) for 'Digital rights, licensing and ownership' seem to have come from a slightly different time (the DCF did come out in 2016). They are still relevant in a traditional sense, where a child might go to the internet, copy and paste some text or an image from a website, and then reference where it came from. However, we are moving into a time, especially by PS3 (ages 8 to 11), when some children will increasingly be using ChatGPT to help them with homework, for instance, or generating an image using Adobe Express. If we look at the PS3 statement, "I can understand that copying the work of others and presenting it as my own is plagiarism.", how does this fit into a world where it's going to be ubiquitous for students to use AI to help them write essays? What books, forums, webpages, discussions, etc. did the AI scrape to generate that answer? Other than just saying ChatGPT wrote this, I'm probably going to be unable to reference anything else. Also, will the child even see this as plagiarism when using these generative AI tools has become 'the norm'? It also feels very hypocritical that Citizenship refers to copyright and watermark symbols, ownership, explaining how and when it's acceptable to use the work of others and why giving credit is a sign of respect, yet in class we might be happy to let a child use an application to generate an AI image with a model that could have been trained on vast numbers of images created by actual artists.
Education - it has been clear from my reading that AI is having a profound impact on education. As I mentioned above, graduate entry level jobs, especially in the field of coding, have been greatly affected. But it's not just coding; many other jobs will be in danger. One ex-Google exec has recently said that "higher education as we know it is on the verge of becoming obsolete", and that in his opinion studying to become a medical doctor or lawyer may not be worth the time anymore, as "those degrees take so long to complete in comparison with how quickly AI is evolving that they may result in students 'throwing away' years of their life." The Times reports that the Department for Education highlights that "industries such as sport, leisure and recreation, engineering and sociology" are among the least exposed to generative AI risks, whereas "economics, maths and accounting are among the most." So, it appears that generative AI is certainly affecting, or will affect, the decisions that students now have to make. Will the degree that they are currently studying, or are about to start, lead to a career for them at the end, or are they "throwing away" years of their life and money?
But what about the impact of generative AI on students and teachers? From what I've been reading, there has been an enormous impact here. This article from 404 Media, titled 'Teachers Are Not OK', highlights comments from lecturers and teachers, mainly from higher / further education and high school, on the impact AI is having on their classrooms. The article reports on teachers "trying to grade 'hybrid essays half written by students and half written by robots,' trying to teach Spanish to kids who don’t know the meaning of the words they’re trying to teach them in English, and students who use AI in the middle of conversation." They describe spending hours grading papers that took their students seconds to generate: “I've been thinking more and more about how much time I am almost certainly spending grading and writing feedback for papers that were not even written by the student,” one teacher told the site. Here are some more quotes from teachers on the impact of AI on their classes.
Simply put, AI tools are ubiquitous. I am on academic honesty committees and the number of cases where students have admitted to using these tools to cheat on their work has exploded.

Both The Guardian and The Times recently reported on this too from a UK perspective. In an article from The Atlantic, a lecturer says, "I cannot think that in this day and age that there is a student who is not using it", and that "the technology is no longer just a curiosity or a way to cheat; it is a habit, as ubiquitous on campus as eating processed foods or scrolling social media." The Atlantic goes on to say that "higher education has changed forever in the span of a single undergraduate career."
I think generative AI is incredibly destructive to our teaching of university students. We ask them to read, reflect upon, write about, and discuss ideas. That's all in service of our goal to help train them to be critical citizens. GenAI can simulate all of the steps: it can summarise readings, pull out key concepts, draft text, and even generate ideas for discussion. But that would be like going to the gym and asking a robot to lift weights for you.

The Telegraph recently reported that in a study from MIT, researchers found that people who relied on ChatGPT to write essays "had lower brain activity than those who used their brain alone" and that those who used AI also "struggled when asked to perform without it." Of those who used a chatbot, "83% failed to provide a single correct quote from their essays - compared to around 10% in those who used a search engine or their own brainpower." Interestingly, using a search engine instead of AI appeared to have little negative effect on the results. The Washington Post reported that one study of more than 600 people found a "significant negative correlation between the frequent use of AI tools and critical thinking abilities, as younger users in particular often relied on the programs as substitutes, not supplements, for routine tasks."
I've been working on ways to increase the amount of in-class discussion we do in classes. But that's tricky because it's hard to grade in-class discussions—it's much easier to manage digital files. Another option would be to do hand-written in-class essays, but I have a hard time asking that of students. I hardly write by hand anymore, so why would I demand they do so?

The Atlantic reports that lecturers are saying similar things: abandoning online assignments and doing more in-class, handwritten assignments or tests. But as lecturers resort to these measures they "risk alienating students", as writing essays out longhand "could make college feel even more old-fashioned than it did before, and less connected to contemporary life."
Let's now look at a couple of quotes from high school teachers:
"How do I begin to teach students how to use AI ethically? How do I even use it myself ethically considering the consequences of the energy it apparently takes? I understand that I absolutely have to come to terms with using it in order to remain sane in my profession at this point."
I can truly sympathise with this teacher. Her issue of how to teach pupils to use AI ethically, and how she should use it ethically herself, knowing what we know about the effects on society, mental health, jobs and the environment, is basically the reason why I started writing these posts.
I'll finish with this quote from one teacher. It's quite long, but I think it's important that you see the whole thing. To be honest, it's quite dispiriting. In my opinion, it's a sad indictment of AI and social media and their effects on young people.
"I teach 18 year olds who range in reading levels from preschool to college, but the majority of them are in the lower half that range. I am devastated by what AI and social media have done to them. My kids don’t think anymore. They don’t have interests. Literally, when I ask them what they’re interested in, so many of them can’t name anything for me. Even my smartest kids insist that ChatGPT is good “when used correctly.” I ask them, “How does one use it correctly then?” They can’t answer the question. They don’t have original thoughts. They just parrot back what they’ve heard in TikToks. They try to show me “information” ChatGPT gave them. I ask them, “How do you know this is true?” They move their phone closer to me for emphasis, exclaiming, “Look, it says it right here!” They cannot understand what I am asking them. It breaks my heart for them and honestly it makes it hard to continue teaching. If I were to quit, it would be because of how technology has stunted kids and how hard it’s become to reach them because of that."
What about the impact on the primary school? Well, I'm not currently seeing much written or talked about for this age group. From a pupil perspective, children under the age of 13 shouldn't be using generative AI chatbots. Google school accounts should not provide access to Gemini for under-13s, and the same goes for Microsoft Copilot. However, in much the same way as with social media platforms, children under this age could sign up for OpenAI's ChatGPT, as there is no age verification in place when a user signs up. As I wrote about earlier, aren't we learning anything from the social media mistakes of the last 15 years or so? So primary school children shouldn't be using these tools, but we know that away from school, some are. As for teachers' use of AI, of the ones I've talked to, many have begun to use it to support them in lesson planning, policy writing and even helping them to write end of year reports. However, I haven't yet come across a teacher who has started to create lessons helping pupils to understand what generative AI is, how it can be used, what the issues are, etc. In my opinion this is where the DCF needs to be updated, along with lots of suitable resources to support teachers in the classroom.
I'll finish this post with these questions from the British Educational Research Association (BERA) blog:
- Who benefits from AI’s expansion in schools? Who doesn’t?
- How do we weigh the environmental costs of AI against its potential benefits in the classroom, especially in the context of climate change, water scarcity and environmental pollution?
- How should environmental considerations be included in ethical guidelines for the development and use of AI tools in educational and research settings?
To pull together all the ideas and thoughts from this post, here is my summary:

- Arguably, the release of a wide variety of generative AI tools to the public, with seemingly little legislation or regulation from governments, is similar to what happened in the 2010s with social media, and we are now clearly seeing the negative effects of that introduction on society. My concern is that generative AI applications could have a greater societal impact over a much shorter time period. There are concerns about whether US tech morals / ethics, culture and ideas are aligned with those of the UK or the rest of the western nations. Is it time for a national discussion around digital and data sovereignty?
- There is a breakdown in user trust in much of what is now seen, read, watched or listened to online.
- Generative AI chatbots can hallucinate, mislead, be devious and tell lies. All of these behaviours are 'emergent' within the AI, so we may not know exactly why it produced a particular output.
- Generative AI chatbots are only as helpful to the user as the guardrails that may or may not be in place. My worry is that AI chatbots can very easily be developed to mislead, confuse or outright lie to people. How does the user distinguish between what is true and what is not, especially if they are beginning to form a close relationship with their companion chatbot?
- As more people say that they are lonely (possibly due to social media use?), some are forming close relationships with their chatbots. There could be instances where this is helpful to an individual, but there are an increasing number of articles on chatbot addiction leading to mental health issues, delusion and, tragically in some instances, death. Where are the protections, and where is the regulation, before these tools are released to the public?
- Generative AI is already replacing thousands of jobs per month. The tech industry itself is being reshaped by generative AI. Entry level jobs at many big companies have been particularly affected, with many graduates, especially in some fields of computer science, finding it increasingly difficult to find a job after graduating. Companies may well see the introduction of AI into their business, and the subsequent job losses, as a good way to increase profit.
- From an education perspective, students will be questioning whether it is worth pursuing a coding-related degree when there is a lack of graduate jobs to go into. As primary school educators, should we be having a national discussion about whether coding is still relevant? Could there be more of a focus on computational thinking as an underlying pedagogical approach? Should we also have more of a focus in education on practical trades that might not be so easily lost to generative AI?
- Tech companies need enormous amounts of quality data in order to train their LLMs. Much of this data is scraped from publicly available places such as webpages, forums, news outlets, social media platforms and Wikipedia, along with digitised books and code repositories. There are big questions around whether authors, photographers, artists, studios, etc. have given permission for their works to be used, with many copyright cases currently going through the courts in the US and beyond. Meta has said that if you have a Facebook or Instagram account, it will be training its AI on your posts, comments and photographs unless you have opted out. There is also a possibility that it will soon be training its AI on photos from your camera roll that you haven't posted on their platforms.
- From an education perspective, I believe there needs to be an update to the current Digital Competence Framework (DCF) with respect to the descriptions of learning around copyright and watermarks. Perhaps this section needs a mention of how LLMs are trained and should raise awareness of the copyright issues. If we want our pupils to be "ethical and responsible citizens" then they will need this information.
- The effects of generative AI on education are profound. Students on university courses may have to evaluate whether their current course will be of use to them when they finish, and new students will need to evaluate whether their course or career of choice will still be relevant in three years' time. Some industries will be more affected by the introduction of AI than others. Lecturers and teachers in higher / further education and high schools are reporting huge changes since the introduction of AI, with reports of ubiquitous use of AI by students to write essays and a corresponding growth in students found 'cheating' / plagiarising.
- Using this technology has simply become an everyday habit among students. Lecturers and teachers are concerned about students' 'critical thinking': their ability to read, reflect upon, write about, and discuss ideas. Research is beginning to show a significant negative correlation between the frequent use of AI tools and critical thinking abilities, with younger users in particular often relying on the programs as substitutes, not supplements, for routine tasks.
- Lecturers and teachers are beginning to adapt their classes, concentrating more on in-class tasks, essays, discussions and tests, and minimising the number of online essays. However, some are worried that this emphasis on longhand writing might make university feel 'old fashioned' and out of step with contemporary life.
- At high school, one teacher was concerned about how to support pupils in the teaching of AI ethics, and about where she stood with regard to her own ethics in using AI tools when she was aware of the environmental impact of using them.
- One teacher felt devastated by what she felt AI and social media had done to her students, feeling that her students no longer think for themselves or have any original thoughts, and that they take everything output by ChatGPT as the truth, without question or even understanding the need to question the output.
Along with the links in the post above, here are some AI-related things that I've recently been reading or listening to:
'AI is the Next Free Speech Battleground' - Your Undivided Attention podcast
'Digital Sovereignty and Resisting the Tech Giants' - Politics Theory Other podcast
'Monologue: Annualised Revenues Are BS' - Better Offline podcast
'Monologue: The Agony of GPT-5' - Better Offline podcast
'Decomputing For A Better Future' - Tech Won't Save Us podcast
'Whose AI Bubble Is It Anyways?' - This Machine Kills podcast
'AI Friends & Enemies' - Making Sense podcast
'Teens Keep Being Hospitalised After Talking To AI Chatbots' - Futurism
'What if AI doesn't get much better than this?' - The New Yorker
'Making Cash Off AI Slop' - The Washington Post
'Trump's AI plan is a massive handout to gas and chemical companies' - The Verge
'An AI System Found a New Kind of Physics that Scientists Had Never Seen Before' - Popular Mechanics
'The New ChatGPT Reset the AI Race' - The Atlantic
'Computer Science Grads Are Being Forced to Work Fast Food Jobs as AI Tanks Their Career' - Futurism
'GPT-5 is Turning into a Disaster' - Futurism
'The World Will Enter a 15-Year AI Dystopia in 2027, Former Google Exec Says' - Gizmodo
''I Feel Like I'm Going Crazy': ChatGPT Fuels Delusional Spirals' - The Wall Street Journal
'Exclusive: Google Gemini Adds AI Tutoring, Heating Up The Fight For Student Users' - Fast Company
'Grok's 'Spicy' video setting instantly made me Taylor Swift nude deepfakes' - The Verge
'Teens are flocking to AI chatbots. Is this healthy?' - Scientific American
'Are AI Girlfriends Good, Actually?' - GQ
'The Agentic AI Hype Cycle is Out of Control, Yet Widely Normalised' - Forbes
'How Generative AI is Changing the Way We Work' - Forbes
'Schools and hospitals very likely to be attacked' - The Times
'AI Toys Are Coming Whether We Like It Or Not. Are Parents Ready?' - Huffpost
'OpenAI: Students Shouldn't Treat ChatGPT As 'An Answer Machine'' - Business Insider
'AI is already replacing thousands of jobs per month, report finds' - The Independent
'These jobs face the highest risk of AI takeover, according to Microsoft' - ZDNet
'CEOs are publicly boasting about reducing their workforces with AI' - Futurism
'Google has signalled the death of googling. What comes next?' - The Times
'Can we build AI therapy chatbots that help without harming people?' - Forbes
'So far only one-third of Americans have ever used AI for work' - Ars Technica
'Is AI killing entry level jobs? Here's what we know' - Bloomberg Businessweek
