As we enter an era in which technology can convincingly alter reality, a new battleground has emerged: deepfake pornography. These hyper-realistic, digitally manipulated videos are increasingly used to create harmful, non-consensual content targeting individuals, most often women. As the prevalence of deepfakes continues to rise, governments and lawmakers are grappling with how to regulate this emerging threat.
One such initiative is a proposed ban on deepfake pornography, which has sparked debate among legal experts, privacy advocates, and technologists. We caught up with Raymond Sun, an award-winning technology lawyer and full-stack developer, to discuss the potential implications of this legislation.
The Deepfake Sexual Material Act 2024: What you need to know
“The Deepfake Sexual Material Act was first unveiled in June 2024 as part of a series of law reforms to tackle gender-based violence, including concerns around insidious sexual deepfakes targeted against female victims,” Raymond explains.
“It has been in force since 2 September 2024. The Act amends our Criminal Code Act to make it illegal for a person (i.e. a criminal offence with 6-7 years imprisonment) to use a carriage service (e.g. social media, SMS, email, etc.) to transmit material of another person, which depicts or appears to depict the other person in a sexual pose or activity. The person commits this offence if they know that the other person does not consent to the transmission of the material or is reckless as to whether the other person consents to the transmission of the material.”
Deepfake Dilemmas: Can a ban be effective?
“There's always going to be an inherent practical challenge to enforcing any ban on digital content. But that is not to say we can't at least set the foundations,” Raymond says.
According to Raymond, Australia has made a positive first step by enacting the Criminal Code Amendment (Deepfake Sexual Material) Act, which uses the force of the criminal law to prohibit non-consensual sexual deepfakes targeting individuals. However, this new law has yet to be tested, and Raymond believes we need to go beyond legal means.
“We should not stop at individual legal protections but also pursue practical non-legal initiatives, particularly those targeting distribution channels (e.g. social media platforms, telco service providers) which have some level of control over the distribution of harmful deepfakes,” Raymond says. “Our regulators should continue working with overseas counterparts and industry to develop technologies and techniques for detecting deepfakes; preparing operationalisable guidelines; and promoting education and self-awareness for individuals to protect themselves from deepfake harms.”
How can the law keep pace with deepfake technology?
Technology, by its very nature, thrives by pushing change at pace. "Disrupt or die" is a mantra that defines both big tech and startups. By contrast, the law develops incrementally, through hard-won precedents set in the judicial process, or through law reform and legislation. This tension has often put the law at odds with regulating technology: how can we hope to regulate what is constantly changing?
“I think it's the same as how you would keep up with any other technology,” Raymond says.
“One way is to set up a dedicated task force of tech experts, legal professionals, and law enforcement officials to continuously monitor technological advancements, update detection methods, and recommend legislative amendments as needed,” Raymond explains. “Collaboration with academic institutions and tech companies for research and development of detection tools is also crucial. There should also be regular training (via qualifier, induction and/or CLE programs) for law enforcement and judicial personnel on the latest deepfake techniques and legal implications.”
Deepfake countermeasures: What could we do next?
“We already have content moderation algorithms that are quite effective at identifying pornographic content. These have been around for some time,” Raymond says.
“What's new is how we can uplift these existing techniques to also identify whether a piece of pornographic content was AI generated. One way is to use AI to fight AI - i.e. machine learning algorithms that are trained on real and fake media datasets to identify, flag or block malicious deepfakes prior to their publication on a platform.”
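The "AI to fight AI" idea Raymond describes is, at its core, a supervised classifier trained on labelled real and fake media. The following is a minimal sketch of that idea in Python: a toy perceptron trained on hypothetical artifact-score features (the feature names and values are illustrative assumptions, not a real detection pipeline, which would use deep networks over pixel- and frequency-level features).

```python
# Toy sketch of "AI to fight AI": a perceptron trained on labelled
# real vs fake samples. Features here are hypothetical artifact scores;
# production detectors learn far richer features from raw media.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Train a simple perceptron; labels are 1 (fake) or 0 (real)."""
    n = len(samples[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def classify(w, b, x):
    """Flag a sample as 'fake' or 'real' using the learned weights."""
    return "fake" if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else "real"

# Hypothetical features: [blending-artifact score, frequency-anomaly score]
real_samples = [[0.1, 0.2], [0.2, 0.1], [0.15, 0.05]]
fake_samples = [[0.9, 0.8], [0.8, 0.9], [0.85, 0.95]]
w, b = train_perceptron(real_samples + fake_samples, [0, 0, 0, 1, 1, 1])

print(classify(w, b, [0.9, 0.85]))  # high artifact scores -> "fake"
print(classify(w, b, [0.1, 0.1]))   # low artifact scores -> "real"
```

In a platform setting, a model like this would sit in the upload pipeline, flagging or blocking suspect content before publication, as Raymond suggests.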
Another approach focuses on transparency - i.e. implementing digital watermarks and metadata that record the authenticity and origin of a piece of content.
“This is where blockchain can assist with verification. These solutions are most effective when deployed by the distribution channels/platform, which is why collaboration with them is crucial,” Raymond says.
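The verification idea behind watermarks, metadata and ledger-based records can be sketched very simply: record a cryptographic fingerprint of the content at publication time, then check later content against that record. The snippet below is a minimal illustration using a SHA-256 hash (the "ledger" here is just a stored string; a blockchain deployment would anchor that record on a distributed ledger so it cannot be quietly altered).

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Fingerprint recorded at publication time (e.g. on a ledger)."""
    return hashlib.sha256(content).hexdigest()

def verify(content: bytes, recorded: str) -> bool:
    """True only if the content still matches its recorded fingerprint."""
    return fingerprint(content) == recorded

# Illustrative data: stand-ins for real video bytes.
original = b"frame data of an authentic video"
record = fingerprint(original)

print(verify(original, record))                   # unmodified -> True
print(verify(b"manipulated frame data", record))  # altered -> False
```

Any manipulation of the content changes its hash, so the mismatch exposes tampering - which is why, as Raymond notes, such checks are most effective when run by the platforms that control distribution.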
As the Deepfake Sexual Material Act 2024 takes effect, Australia has made a significant stride in combating the harmful effects of deepfake pornography. However, the challenges posed by this rapidly evolving technology demand a multifaceted approach. By fostering collaboration between legal professionals, tech experts, and law enforcement, and by investing in innovative detection and prevention technologies, Australia can stay ahead of the curve and protect individuals from the devastating consequences of deepfake abuse.
To further your learning and development our CPD Digital Subscription is available 24/7 over 12 months and delivers over 140 courses.
- Legal Industry Trends
- Criminal Law
- Privacy and personal information law
- Litigation
- AI in Legal Practice