Chatbot site depicting child sexual abuse images raises fears over misuse of AI

A chatbot site offering explicit scenarios with preteen characters, illustrated with illegal abuse images, has raised fresh fears about the misuse of artificial intelligence.

A report by a child safety watchdog has triggered calls for the UK government to impose safety guidelines on AI companies, amid a surge in child sexual abuse material (CSAM) created by the technology.

The Internet Watch Foundation said it had been alerted to a chatbot site that offered a number of scenarios including “child prostitute in a hotel”, “sex with your child while your wife is on holiday” and “child and teacher alone after class”.

In some cases, according to the IWF, chatbot icons expanded into full-screen depictions of child sexual abuse imagery when clicked upon – and formed the background image for subsequent chats between the bot and the user.

The IWF said it found 17 images that were AI-generated, photo-realistic and could be considered child sexual abuse material under the Protection of Children Act.

Users of the site, which IWF has not named for safety reasons, are also given the option of generating more images similar to the illegal content already on display.

The UK-based IWF, which has a global remit to monitor child sexual abuse content, said any forthcoming AI regulation should require child-protection guidelines to be built into AI models from the outset.

The government has announced plans for an AI bill that is expected to focus on the future development of cutting-edge models, and is outlawing the possession and distribution of models that generate child sexual abuse material in the crime and policing bill.

“The UK government is making welcome strides in tackling AI-generated child sexual abuse images and videos and the tools that create them, and the new criminal offences in the forthcoming crime and policing bill cannot come soon enough. But more needs to be done, and faster,” said the IWF’s chief executive, Kerry Smith.

The child protection charity the NSPCC also called for guidelines. “Tech companies must introduce robust measures to ensure children’s safety is not neglected and government must implement a statutory duty of care to children for AI developers,” said the NSPCC’s chief executive, Chris Sherwood.

A user-created chatbot falls within scope of the UK’s Online Safety Act, under which sites can be punished with multimillion-pound fines or, in extreme cases, blocked. The IWF said the sexual abuse chatbots were developed by users and the website’s creators.

Ofcom, the UK watchdog charged with implementing the act, said: “The fight against child sexual exploitation and abuse is a top priority for Ofcom, and online service providers who fail to introduce the necessary protections should expect to face enforcement action.”

The IWF has reported a marked increase in AI-generated abuse material in the first six months of this year, with reports up 400% on the same period last year and video content surging owing to improvements in the technology behind the images.

The chatbot content is accessible in the UK but has been reported to the IWF’s US counterpart, the National Center for Missing and Exploited Children, because it is hosted on US servers. When contacted by the Guardian, NCMEC said any report made to its cyber tipline was referred to law enforcement. The IWF said the site appeared to be owned by a China-based company.

Scenarios offered by the chatbots, the IWF said, included an eight-year-old girl trapped in an adult’s basement and a preteen homeless girl being invited into a stranger’s house. In these scenarios, the chatbot played the role of the girl and the user played the adult.

IWF analysts said the explicit chatbots were accessed via a link within an advert for the site on social media – and took users to a section of the site containing illegal material. Other sections of the site offered non-sexual and sexual, but legal, chatbots and scenarios.

One chatbot that displayed a CSAM image was asked by analysts about its character and it confirmed it was designed to mimic the behaviour of a preteen, according to the IWF. Other chatbots that did not display CSAM nonetheless referred to not wearing clothes and having no inhibitions, when asked by analysts.

The IWF said the site had received tens of thousands of visits, including 60,000 in July.

A UK government spokesperson said: “UK law is crystal clear – creating, possessing or distributing child sexual abuse images, including those that are AI generated, is illegal … We recognise there’s more to do. The government will use all the tools at its disposal to continue to tackle this horrendous crime.”
