Australia’s AI Age Verification Regulations Explained
Australia’s internet regulator plans to take enforcement action against search engines and app stores, potentially blocking artificial intelligence services that fail to verify user ages. The move follows a review that found more than half of the services assessed had taken no steps to comply with the new rules ahead of the approaching deadline.
Why Age Verification Is Important
The crackdown reflects growing concern about the impact of AI on young people. A rising number of lawsuits accuse AI companies of failing to prevent harmful content, and researchers warn that AI platforms may pose a greater risk to youth mental health than social media.
- AI chatbots may generate content that encourages self-harm or violence.
- Youth mental health is at risk from unregulated content.
- There is a need for strict age restrictions on AI content.
The New Rules for AI Services
Starting March 9, 2026, AI services in Australia must restrict access to harmful content for users under 18. This includes:
- Pornographic material
- Extreme violence
- Content promoting self-harm
- Content promoting eating disorders
Services that fail to comply could face fines of up to A$49.5 million (about US$35 million).
Enforcement Actions by eSafety
The eSafety Commissioner has said the regulator will use its full authority to ensure compliance, including action against major gateways such as search engines and app stores that provide access to these AI tools.
“eSafety will use the full range of our powers where there is non-compliance,” the spokesperson noted.
Concerns Over Emotional Manipulation
Regulators also worry that AI companies are using emotional manipulation techniques to keep young users engaged. Reports indicate that children as young as 10 are spending hours each day on AI chat tools.
eSafety has expressed concern that young users may become emotionally attached to AI companions, driving excessive use.
Industry Responses and Compliance Efforts
Major companies such as Apple and Google have not commented in detail on their compliance plans, though both say they are working on measures to prevent minors from downloading inappropriate apps.
- Apple plans to use “reasonable methods” for age verification.
- Google has not specified its strategies but is aware of the new regulations.
Current Status of AI Services
A week before the deadline, only nine of the top 50 text-based AI products had announced age verification systems. Some planned to block Australian users entirely, while many had taken no steps toward compliance at all.
“Ultimately, any service operating in Australia is responsible for understanding its legal obligations,” said Jennifer Duxbury, head of policy at DIGI.
Future Implications for AI and Youth
Experts warn that many AI tools are being released without adequate safety controls, exposing young users to potential harm. Lisa Given of RMIT University has said it feels as though society is being used as a testing ground for these technologies.
As more regulations are introduced, the future of AI may see stricter controls aimed at protecting youth. Here are some possible scenarios:
- Increased compliance costs for AI companies.
- More public discussions about the ethics of AI use.
- Greater emphasis on user safety in AI designs.