AI Governance Is Growing in Asia-Pacific: Key Developments and Takeaways for Multinational Companies

As AI tools advance, multinational employers are grappling with a growing patchwork of governance frameworks. Notably in the Asia-Pacific (APAC) region, Japan, Singapore, India, and Australia are making moves that encourage AI use while seeking to curb the risks. Businesses operating in these locations should stay on top of recent developments and create an action plan to address the nuances in each applicable location. Here’s a breakdown of new developments in this region and what you can do to prepare as AI regulatory frameworks continue to take shape.

Japan Focuses on Innovation

Earlier this year, Japan’s legislature passed a bill that encourages research, development, and advancement in AI technology and commits the country to international leadership and cooperation. The new law sets expectations for businesses to act consistently with key principles for AI governance, including transparency and risk mitigation. Research institutions are expected to advance AI development and train skilled workers, and citizens are encouraged to learn more about AI. There are no sanctions for noncompliance; rather, the focus is on voluntary collaboration and making strides in this realm.

Moreover, the law charges Japan’s national government with creating AI policies. Specifically, it establishes an Artificial Intelligence Strategy Headquarters within the Cabinet, which will oversee the new Artificial Intelligence Basic Plan and coordinate all AI policy. Local governments are responsible for implementing a regional plan.

Action Items: Businesses operating in Japan should be encouraged by the innovation-focused approach. Still, you should review (or develop) your AI policies for transparency and alignment with the new guidance. You should also consider actively participating in voluntary initiatives that support responsible AI development and use.

Singapore Highlights Testing and Sector-Specific Compliance

Balancing innovation with effective risk management is also a priority in Singapore. The government published its first Model AI Governance Framework in 2019 – which established principles for ethical AI use – and has invested millions in AI efforts. This year, through the AI Verify Foundation, Singapore launched the Global AI Assurance Pilot, which served as a sandbox to test Generative AI applications and help establish international standards. The purpose was “to help codify emerging norms and best practices around technical testing of GenAI applications.”

In terms of regulations, Singapore has chosen not to pursue a comprehensive AI statute. Instead, it continues to follow a sector-specific regulatory model that addresses risks through existing frameworks for finance, healthcare, employment, and other compliance areas.

Action Items: Explore AI Verify Foundation initiatives and review the pilot report. For ongoing compliance, continue to ensure AI practices are aligned with sector-specific regulations.

India Issues Robust Guidelines

India released its AI Governance Guidelines in 2025, emphasizing safety, trust, and flexibility. Specifically, the seven core principles are:

  1. Trust is the Foundation
  2. People First
  3. Innovation over Restraint
  4. Fairness and Equity
  5. Accountability
  6. Understandable by Design
  7. Safety, Resilience, and Sustainability

The guidelines make six key recommendations, including education and skills training, as well as policies that support innovation, mitigate risks, and encourage compliance through transparency and voluntary measures.

The guidelines also recommend setting up an AI Governance Group – supported by a Technology and Policy Expert Committee – and enabling the AI Safety Institute to provide technical expertise on trust and safety issues.

As in most other countries, the recommendations also call for existing sector-specific regulations to continue to apply. For example, the Digital Personal Data Protection Act (DPDP Act), India’s first comprehensive data privacy legislation regulating digital personal data processing, governs the use of personal data to train AI models.

Action Items: Review AI practices and assess how they measure up to India’s core principles, particularly on fairness, accountability, and transparency. Audit processes to ensure compliance with existing laws like the DPDP Act and other rules that impact your business.

Australia Shifts Gears

In 2024, Australia initially proposed mandatory guardrails for AI use in high-risk settings and released its Voluntary AI Safety Standard to serve as guidance while the requirements were pending.

The country has since changed direction: rather than rolling out a stricter, EU-style AI framework, it is now pursuing an approach more like that of other APAC nations – one focused on safe and responsible AI use, innovation, and compliance with existing laws and regulations.

Last month, Australia’s National AI Centre issued new Guidance for AI Adoption, which replaces the prior voluntary standard and sets out six practices for responsible AI governance:

  1. Decide who is accountable 
  2. Understand impacts and plan accordingly 
  3. Measure and manage risks
  4. Share essential information
  5. Test and monitor
  6. Maintain human control

To learn more, you can visit the government’s resource page.

Action Items: Get familiar with the new Guidance for AI Adoption and the accompanying resources. Consider aligning internal AI practices with the six recommended governance steps – and continue to monitor developments as Australia shapes its approach to AI governance.

Key Takeaways for Multinational Businesses

  • Global AI Regulations Are Evolving Quickly: While Japan, Singapore, India, and Australia have all introduced frameworks that balance innovation with risk mitigation, the European Union has taken a stricter approach, and leading European companies are pushing back. You’ll want to track developments closely.
  • No One-Size-Fits-All Plan: Carefully review each applicable country’s approach to voluntary vs. mandatory guidance and to sector-specific rules vs. comprehensive AI statutes.
  • Themes Are Emerging: Although each approach is different, goals like safety, transparency, accountability, and human oversight are common.
  • Existing Laws Still Apply: AI regulatory frameworks generally do not replace compliance obligations under data privacy, finance, healthcare, employment, and other laws that may apply when AI tools are used. Be sure that your AI, Tech, Legal, HR, and other teams coordinate compliance efforts.
  • Privacy Rules Still Shape AI Use: Because AI use is often regulated through existing privacy frameworks, companies should watch how regulators interpret transparency and data use in AI contexts. These factors directly affect how AI models are trained and how they generate outputs.
  • Sector-Specific Rules May Affect Certain AI Use Cases More Than Others: Given the region’s reliance on sector-based oversight, areas like recruitment, financial services, and healthcare may face closer scrutiny when AI tools are used in those contexts.
  • Navigating AI Governance Can Be Challenging: A trusted legal or consulting partner can provide valuable guidance, ensuring compliance and aligning AI practices with business goals. From setting up policies to managing compliance, a partner experienced in AI governance can help prevent costly missteps.

Conclusion

We will continue to monitor legal changes affecting multinational companies, so make sure you are subscribed to Fisher Phillips’ Insight System to receive the latest updates directly to your inbox. If you have questions, contact your Fisher Phillips attorney, the authors of this Insight, or any attorney in our International Practice Group or our AI, Data, and Analytics Practice Group.
