
AI Off the Leash

Ledax Team

This article offers a global, high-level view of emerging AI laws across India, the EU, Singapore, Australia, China and Japan.

In a world where your fridge might be smarter than your neighbour, Artificial Intelligence (AI) is no longer a futuristic fantasy; it's our new reality. From recommending what to binge-watch next to scanning résumés faster than you can say "You're hired," AI is everywhere. But with great algorithms comes great responsibility. And right now, our laws are struggling to keep up.

When AI Acts Like a Toddler with Superpowers

Imagine a toddler who learns everything from the internet and starts making major life decisions for you. Scary, right? That's basically what some AI systems are doing, only with less cuteness and a lot more unpredictability. These tools are now influencing who gets hired, who qualifies for a loan, or even who's flagged for extra airport screening.

Most of this happens without your knowledge, and without any clear rules. Welcome to the wild west of AI governance.

Cybersecurity and Privacy: The New Front Lines

AI isn't just processing data; it's hoarding it. Personal info, browsing habits, voice commands, facial recognition: you name it. Our growing dependence on AI has put privacy and cybersecurity under siege. One data breach can expose millions, and unlike human mistakes, AI errors happen at lightning speed and massive scale.

Laws like the EU's AI Act and its proposed AI Liability Directive aim to rein that in, preventing AI from manipulating emotions, surveilling public behaviour, or discriminating against job applicants. But here's the catch: many of these rules are still being phased in and are not yet fully enforced.

Meanwhile, in the U.S., states like Illinois and California have introduced AI-specific rules targeting facial recognition and algorithmic bias. Still, it's a patchwork system, like trying to cover a leaking dam with duct tape. But privacy breaches aren't the only concern. As AI grows smarter, it's also getting darker, powering a new wave of digital deception and crime.

AI and Crime: Not Just for Hackers Anymore

AI isn't just making life easier; it's also making crime cooler (and scarier). Think deepfakes, phishing emails written by bots, and voice-cloning scams that can fool your grandma. Cybercrime has evolved into cyber-AI-crime: faster, smarter, and harder to trace.

Some governments are already trying to contain the damage. China and the EU, for instance, have introduced early rules on generative AI and deep-synthesis technology, requiring strict identity checks, content monitoring, and watermarking of AI-generated media.

But these are still reactive moves: patches on a growing problem. What is really needed is a coordinated global framework to govern AI before it governs us.

So, What’s Being Done?

The good news? That global effort is finally taking shape. Around the world, governments are moving from discussion to action, each with its own flavour of AI regulation.

India is developing the Digital India Act to eventually replace the IT Act, 2000, with a more modern framework for platforms, AI and online harms. In parallel, the government has proposed mandatory labelling rules for AI-generated and deepfake content under the IT Rules and issued election-specific labelling directions, moving toward stronger transparency requirements even before the new Act is in place.

Europe has adopted a comprehensive, risk-based framework under the EU AI Act, supported by the Digital Services Act, Cyber Resilience Act, and GDPR. High-risk AI systems must meet strict safety, transparency, and fairness standards, while uses like emotion recognition in schools or workplace surveillance are banned.

China has implemented one of the world's toughest AI regimes through its Generative AI Measures and Deep Synthesis Regulations. These rules demand content controls, user verification, and safeguards against misinformation, with city-level regulations in Shanghai and Shenzhen adding extra layers of oversight.

Singapore has built a strong, largely soft-law AI governance model anchored in the PDPA, the PDPC's Model AI Governance Framework (including a dedicated generative-AI framework) and the IMDA-led AI Verify testing toolkit. Together, these instruments give organisations practical guidance and tools to test AI systems against principles of transparency, safety and accountability, aligned with international standards. In financial services, MAS's Veritas framework and Toolkit 2.0 operationalise the FEAT principles (Fairness, Ethics, Accountability and Transparency) for AI and data analytics, and are increasingly influencing broader industry practice.

Australia is developing more formal AI regulation on top of its 2021 AI Action Plan, centred on the government's "Safe and Responsible AI" agenda. A proposals paper on mandatory guardrails for high-risk AI settings sets out options for legally binding requirements focused on safety, human-rights impacts, accountability and transparency, with public consultation under way on whether these guardrails should sit in a new AI law, existing sectoral laws or a hybrid model. In parallel, guidance and assurance frameworks for AI use in the public sector are emerging as the government experiments with AI while trying to maintain public trust.

Japan continues to take a human-centric, voluntary approach through its Social Principles of Human-Centric AI and the AI Governance Report by METI. These frameworks promote ethical and transparent AI use, with updated business guidelines issued in 2025.

Why This Matters

If we don't regulate AI now, we risk living in a world where machines decide who gets a job, a loan, or even parole, with little transparency or accountability. This isn't just a legal issue. It's a human rights issue. Biased training data, lack of consent, opaque decision-making: these aren't bugs; they're built-in features of an underregulated AI ecosystem.

And let's be honest: trusting machines to always "do the right thing" is like expecting your cat to water your plants. Optimistic at best.

Bottom Line

AI is powerful, useful, and here to stay. But it's also flawed, biased, and capable of harm. As AI continues to evolve, so must our laws, because if we're going to live alongside smart machines, we'll need even smarter rules. Or else, we might just find ourselves governed by code we don't understand, written by systems we cannot control.
