As AI technology advances, its ability to generate highly convincing synthetic media is transforming industries from social media to cybersecurity. From reshaping entertainment to undermining truth, this article explores the groundbreaking – and sometimes alarming – capabilities of AI going into 2025.
You’re scrolling on X, aimlessly sifting through updates, when a post makes your heart stop. An image stares back at you – a graphic, AI-generated pornographic image of yourself. It’s disturbing, humiliating, and feels unreal, yet it still has you questioning whether what you see is actually you.
This nightmare became reality for broadcaster and former Big Brother contestant Narinder Kaur. As the image spread across the platform, she was left horrified, grappling with the reality that such AI-generated content can cause irreparable harm. You might try to ignore it, but the emotional toll and potential consequences of this technology are anything but small.
In the past year, AI has shaped life online more than ever before. From generating content to the chatbots lurking in the corner of every website, this technology is reshaping how we engage with the digital world – blurring the line between what is real and what is fake.
The rise of AI in our everyday lives
Social media platforms saw an explosion of content created by or featuring AI in 2024, with trends like AI-generated music mixes and ageing filters taking TikTok by storm. Tools like ChatGPT offer young people ways to brainstorm ideas and boost productivity. However, this rise also brings concerns about AI’s ability to manipulate and harm.
Aiden Cramer, CEO and Co-founder of AiApply, explains the basics of AI as ‘a way to make computers think and process information like humans’.
“It involves creating algorithms and models that enable machines to process information, recognise patterns, and make decisions. While AI is still in its early stages, it has the potential to revolutionise many aspects of our lives, from healthcare and transportation to entertainment and education.”
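In practice, that pattern recognition can be shown in just a few lines. The sketch below is a deliberately tiny illustration in Python, assuming the open-source scikit-learn library (not a tool Cramer or AiApply is known to use): a model studies labelled examples, then makes decisions about cases it has never seen.

```python
# A toy example of "recognising patterns and making decisions":
# a classifier learns from labelled flower measurements, then
# predicts the species of flowers it has never seen before.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)  # measurements plus species labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print(f"Correct on unseen flowers: {model.score(X_test, y_test):.0%}")
```

The same principle, scaled up to billions of examples and parameters, underpins the chatbots and image generators discussed throughout this article.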
The UK AI market, valued at £16.8 billion, is projected to grow to £801.6 billion by 2035, according to the US International Trade Administration. While AI enhances fields like healthcare and entertainment, it also raises critical concerns about data privacy, cybersecurity, and unclear legal regulation.
The dangerous world of deepfakes
One of the most pressing threats is the creation of deepfake content. As Cramer highlights, “You only have to look online over the last few months to see the amount of news stories covering deepfake content during the 2024 US presidential election and the misinformation that has been circulating online.”
“We also had a high-profile case involving Taylor Swift and a deepfake image of her that was shared online, which prompted a bill in the US Senate that would criminalise the spread of non-consensual, sexualised images created using AI tools.”
This is something Kaur knows all too well. She has been no stranger to online abuse since her appearance on the reality show more than two decades ago, but was left ‘horrified’ after two AI-generated deepfake pornographic images of her emerged on X.
“You don’t realise until it happens to you,” she said.
“I think that’s why people don’t pay enough attention because it has not happened to them. They didn’t see how bad it is. So, until it happened to me, and I’m ashamed to say this, I didn’t realise how bad it is. I really didn’t.”
In June 2024, an Internet Matters survey in Britain revealed that 13% of children had encountered a nude deepfake in some capacity, including sending or receiving one. Yet reporting rates remain low, as many victims are frightened of the consequences of speaking out.
Kaur said: “Look, I’m a 52-year-old woman. I can almost take it. But can you imagine being a 16-year-old girl or boy, and that’s your image being sent around? That would destroy you.”
Earlier in the year, Kaur also found herself seeking legal advice after actor Laurence Fox shared an upskirt picture of her on X, which took two weeks to be removed from the site, prompting the charity Revenge Porn Helpline to step in and help take similar images down.
“If you’re just a school child or a teenager or a young adult, it’s so much harder to get these images off. There’s Revenge Porn Helpline. You know, they work tirelessly to try and get some of these images off, but social media platforms, no. X does not do enough at all.”
Artificial intelligence is increasingly becoming essential in daily life, especially among young people. Ofcom research reveals that 59% of 7-17-year-olds and 79% of 13-17-year-olds in the UK have interacted with generative AI tools within the past year, with Snapchat’s My AI chatbot being the most popular choice, used by 51% of respondents.
ChatGPT and similar tools are also increasingly used in education, prompting universities to deploy AI detectors to identify AI-generated content. As AI becomes woven into daily routines, it marks a significant cultural shift in technology use.
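Commercial detectors keep their methods private, but one widely discussed idea is statistical: text produced by a language model often looks unusually predictable to another language model. The snippet below is a rough sketch of that intuition only, written in Python with the open-source Hugging Face transformers library and the small GPT-2 model; it is not the method any university detector is confirmed to use.

```python
# Rough sketch of a perplexity check: AI-generated text often scores
# as "unsurprising" (low perplexity) under a language model.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # average cross-entropy
    return float(torch.exp(loss))

essay = "Artificial intelligence is transforming modern education."
print(f"Perplexity: {perplexity(essay):.1f} (lower can hint at AI text)")
```

Signals like this are unreliable on their own, and detectors are known to produce false positives, which is partly why their use in education remains contested.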
Kaur said: “People love it. My kids love it. Even when I’m going to go and do a debate show, I put so many bits in AI, it’s fantastic.
“But it’s almost like we’ve unleashed a monster. We can’t get that monster back in the can. It’s unleashed now. And it’s like, how, how do we stop it? You can’t stop it. We just need tougher laws around it.”
Creating sexually explicit deepfake images is now a criminal offence in England and Wales under a law introduced in April 2024. The legislation makes it illegal to produce such content even if there is no intention to share it, marking a significant step in addressing this form of abuse.
However, enforcement remains a challenge, particularly on platforms like X, which many argue do not do enough to remove harmful content promptly. Delays in taking images down compound the harm caused by deepfakes, leaving victims vulnerable to ongoing distress and reputational damage.
Cramer says: “Legislation such as the EU AI Act and GDPR already exists, but governments will need to ensure that laws are fit for purpose and continue to protect their citizens as criminals develop more elaborate AI-based methods and the technology gets smarter.”
He warns that while AI holds transformative potential, it also introduces significant risks that require immediate attention.
“The rapid development of artificial intelligence means more people will have access to tools and information that will help make their lives easier or more efficient.”
The bright side of AI
AI’s integration into industries like healthcare and marketing is set to revolutionise operations. Cramer predicts significant strides in generative AI, especially in healthcare and manufacturing, where it could work effectively alongside the workforce.
“We’re already seeing companies such as Coca-Cola using AI to enhance their TV advertising campaigns, so it’s likely that we’ll start to see more of this being rolled out in the future,” he explains.
“We could also see AI being used in healthcare more frequently, helping to aid with complex medical diagnoses and prescribing of medication.”
Over the past year, AI has brought creativity and fun to daily life. Platforms like Instagram and TikTok have embraced AI-powered filters, enabling users to transform their appearance or experiment with styles. Spotify’s AI DJ creates tailored playlists, adding a personal touch to music discovery. Even this article’s feature image was generated with ChatGPT’s help, showing how accessible AI tools make content creation for journalism and beyond.
Beyond entertainment, AI is advancing critical fields like cancer screening, where algorithms enhance early detection and save lives. From playful innovations to groundbreaking applications, AI continues to shape our routines, offering joy and utility alongside transformative potential in essential sectors.
The dark side of AI
With this progress comes the inevitability of AI being weaponised for criminal activity, such as highly convincing phishing schemes in which scammers use AI to craft personalised emails, text messages or even voice recordings that deceive victims into revealing sensitive information.
Deepfake content is also increasingly being used to manipulate public perception and spread misinformation, as seen with fake endorsements like AI-generated images of Taylor Swift supporting Donald Trump during the 2024 US Election.
Cramer also highlights data profiling, where AI analyses patterns in data to uncover deeply personal information, such as financial habits or medical conditions.
He said: “This information can be exploited for blackmail, identity theft, or social engineering attacks. By using AI to understand an individual’s or organisation’s preferences, habits, and weaknesses, attackers can craft highly convincing and personalised attacks that are more likely to succeed.”
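To see why profiling alarms security researchers, consider a deliberately simplified sketch in Python with NumPy and scikit-learn. Every feature and the ‘sensitive’ trait below are randomly generated for illustration; the point is that a model can infer something never disclosed from data that looks harmless on its own.

```python
# Synthetic illustration of data profiling: a model infers an
# undisclosed "sensitive" trait from innocuous-looking behaviour.
# All data is randomly generated; no real individuals are involved.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000
late_night_activity = rng.poisson(3, n)   # e.g. logins after midnight
weekly_spend = rng.normal(50, 15, n)      # e.g. card spend per week
# Invented trait that merely correlates with the behaviour above
trait = (late_night_activity + weekly_spend / 20 + rng.normal(0, 1, n)) > 6

X = np.column_stack([late_night_activity, weekly_spend])
X_tr, X_te, y_tr, y_te = train_test_split(X, trait, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"Trait inferred with {model.score(X_te, y_te):.0%} accuracy")
```

None of the individual data points is sensitive in isolation; it is the correlation, learned at scale, that creates the risk.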
Beyond criminal misuse, broader societal issues loom, such as job displacement due to automation and the risk of widespread social manipulation. A report by the Tony Blair Institute estimates that AI could displace one to three million UK jobs over time, with a peak annual loss of up to 275,000 positions.
However, the report also notes that many displaced roles could be offset by new AI sector jobs. Proactive measures like upskilling and workforce transition policies will be key to balancing AI’s disruptive potential with its promise for innovation and economic progress.
Striking the right balance
Transparency and accountability remain critical as AI evolves. Cramer points to LinkedIn’s controversial use of user-generated data to train its AI tools without consent.
He said: “Organisations should learn from LinkedIn’s mistakes and be completely transparent with how they collect, store, and use the data.”
AI scams, like fake voice calls, can exploit vulnerable groups, especially the elderly. Public education is essential to help individuals recognise and respond to potential threats effectively.
Human oversight remains vital against these risks, Cramer argues: people working alongside AI systems can intervene quickly and effectively when issues arise.
“Ensuring that a human is working alongside AI to monitor the efficiency of the outcome will allow any potential threats to be identified and dealt with as quickly as possible.”
The uncertain future of AI
In just a short time, AI like ChatGPT has become integral to daily life, aiding with tasks such as brainstorming and drafting documents. Its shift from a novelty to an essential tool highlights its rapid integration into everyday technology.
The evolution of AI tools has been particularly evident over the past year. Trends like AI-generated filters, voiceovers, and music mixes have gained immense popularity on platforms like TikTok, reshaping online interactions and creating viral moments.
AI’s constant, rapid growth also raises significant concerns. A Channel 4 News investigation in early 2024 revealed that while only one deepfake pornographic video was identified in 2016, more than 143,000 new deepfake porn videos were uploaded across the 40 most popular sites in the first three quarters of 2023 alone. This dramatic increase underlines the darker implications of AI’s progress and the uncertainty of its future.
As AI evolves, it offers incredible opportunities alongside complex challenges. Striking a balance between its innovations and risks will be crucial to ensure the technology benefits society without causing harm. Its future remains both promising and unpredictable.
Featured image: Image generated by ChatGPT specifically for this article.