
By Pravin Periasamy
Recently, the UK established a powerful legal precedent by strictly prohibiting the use of AI to generate explicit images of minors – becoming the first country to do so.
Given AI's ability to manipulate images of real people, the UK made it an offence to “nude-ify” children by possessing, making or distributing such content.
Deepfake technology has seen an alarming spike in usage – often to defraud the public. AI-generated imagery has been used in investment scams, exploiting public trust with modified videos and images of prominent figures.
In 2022-23, Malaysia saw a 1,000% increase in deepfake incidents. Notable influencers, ministers and other high-profile individuals have not been spared: their likenesses have routinely been appropriated to lend a veneer of legitimacy to fraudulent schemes.
The potential for abuse is widening. Certain AI tools can push this further, altering images of children to create exploitative material. This hands even more power to criminals who use underground channels to spread pornographic images, especially of minors.
The worry is that this might make it more difficult to enforce existing laws to crack down on illegal distribution and consumption.
In Malaysia, Section 4 of the Sexual Offences against Children Act 2017 prohibits visual, audio or written representation of a child in sexually explicit circumstances, whether graphic or realistic.
According to the Internet Watch Foundation, AI-generated child sexual abuse imagery has risen fivefold globally. A spillover effect could put Malaysia at risk as well.
In the past, it was mainly organised criminal groups that had the means to source sexually explicit content through the systematic exploitation of children (ie through abduction, extortion and kidnapping).
But now, AI tools are widely available, allowing just about anyone to anonymously generate realistic images of nude minors. Not only does this make such content more readily available, it can also be produced rapidly.
If these images were to flood underground channels, tracing their origin – how they were generated and who made them – becomes more difficult.
Should explicit AI-generated imagery become more rampant, Malaysia will have to revamp its approach and put stronger countermeasures against such AI tools in place.
The authorities could consider creating a dedicated taskforce to curb this problem. This taskforce would have to abide by a comprehensive legal framework designed to monitor prevalent AI tools. The authorities could deploy more online moderators to supervise, review and perhaps prohibit access to tools with features that can be abused.
Malaysia must emulate Britain in leading the fight against child sexual exploitation. Last year, the UK and Malaysia strengthened their collaboration on AI through a partnership aimed at improving research through workshops and training that build skills and share knowledge.
Both countries could work together to assist authorities in evaluating the danger of AI tools that could be used to generate explicit content. They could also invest in sophisticated detection and monitoring tools which restrict access to certain grades of technology.
The priority now is for both nations to advocate for more awareness surrounding the abuse of AI tools and to push for a standardised system that could be adopted globally.
The future of our children is at stake. The time to act is now.
Pravin Periasamy is the networking and partnership director of the Malaysian Philosophy Society.