Navigating AI Ethics, Guidelines, & Regulations Across Borders: A Global Snapshot for Vet Med Professionals
- Dr. Karen Bolten
- Jun 25
- 4 min read

AI ethics isn’t one-size-fits-all - especially not on a global scale.
If you’re evaluating AI tools for your veterinary clinic, it’s easy to assume all companies are held to the same standards. But in reality, AI regulation and ethical frameworks vary dramatically depending on where a product is developed, deployed, or marketed.
Some countries - like those in the EU and parts of Asia - have taken proactive steps to build trustworthy frameworks. Others are still developing their approach. This matters for you as a vet professional: a tool developed under strict regulatory oversight may be far more reliable than one that’s never been meaningfully reviewed. Fortunately, there are also several international organizations (like UNESCO, the OECD, and the WHO) that are contributing well-developed, globally applicable AI ethics guidance to help shape responsible development across borders.
Below is a curated list of global guidelines, frameworks, and medical device policies to help you understand what standards AI developers should be following.
For me - if in doubt, my rule of thumb is to default to the strictest standards. As I evaluate tools in practice, the shortcut (honestly) is often to look for GDPR-compliant tools, since the EU currently leads the international scene in AI regulations and guidelines. This is not a perfect methodology, but it can be a starting place for finding tools that hold themselves to the highest international standards.
International AI Ethics Guidelines
International Organizations

These organizations offer cross-border principles and recommendations:
UNESCO Recommendation on the Ethics of AI: UNESCO Guidelines - The first global standard on AI ethics, covering human rights, transparency, accountability, and sustainability.
OECD AI Principles: OECD Guidelines - Adopted by over 40 countries, including the U.S. and EU members, these focus on trustworthy and human-centered AI.
WHO Ethics & Governance of AI in Health: WHO Guidelines - Tailored for healthcare, with emphasis on fairness, safety, and informed consent.
UN System Principles for Ethical AI: UN Guidelines - Focused on public sector and humanitarian applications.
Regional and National Guidelines
Asia
China: Interim Measures for Generative AI Services - Regulatory controls with a focus on national security, data provenance, and ethical risks.
Japan: AI Regulation Tracker - Ongoing discussions; current focus is on voluntary principles.
South Korea: AI Basic Law Overview - Strong focus on both industrial innovation and ethical oversight.
Europe

EU Guidelines for Trustworthy AI: Full Guidelines PDF - One of the most comprehensive global frameworks, built around 7 core principles including human agency, transparency, and accountability.
North America
Canada: Generative AI Guidance - Public sector-focused guide to ethical and responsible adoption.
United States:
Federal Government
White House Executive Order on AI (2025) - Reversed many protections from the 2023 Biden order by prioritizing innovation and economic competitiveness over regulation; agencies are now directed to roll back rules that may “hamper” AI development.
NIST AI Risk Management Framework: View Framework - A technical guide for identifying and managing AI-related risks.
FTC AI Use Policy: FTC Policy PDF - Focuses on bias, transparency, and consumer protection.
State Governments
Several states have AI laws and regulations, many of which go beyond current federal policy by requiring transparency, bias audits, or human oversight. States like California, Colorado, Connecticut, Utah, Tennessee, Montana, and New York have enacted legislation targeting areas such as consumer rights, generative AI disclosures, automated decision-making, and deepfake misuse.
NGOs & Think Tanks
Center for AI and Digital Policy: Universal Guidelines for AI
Partnership on AI: Inclusive AI Framework
South America
Brazil: Brazil’s AI Act Overview - Currently in development, focusing on human rights, privacy, and democratic values.
Medical Devices: AI Guidelines & Device-Specific Regulations

AI used in medical settings (including diagnostic support tools and AI scribes) is often subject to additional scrutiny when it's considered a medical device. Here are some regional regulations, recent AI guideline publications, and device-specific guidance documents.
I intend to update this section as I find more, as this is a constantly developing area.
China
NMPA Guidelines for AI Medical Devices: View Summary - Includes submission requirements and validation protocols for AI-based tools.
European Union
EMA Qualification of AI in Histology: EMA Guidance PDF
United Kingdom
Overview of Global AI Medical Regulations: Full Article
United States
FDA Guidelines for AI/ML-Based Devices: FDA Policy
Good Machine Learning Practices (GMLP): GMLP Principles
South Korea
Approval Framework for AI Medical Devices: Radiology Journal Overview
WHO Global
AI in Medical Devices - Evidence Framework: WHO Report
What Does This Mean for You?
You don’t need to memorize every global regulation. But when evaluating an AI product for your clinic, ask:
Where was this product developed?
Which guidelines and regulations would apply to it?
Is the product actually compliant with those regulations or guidelines?
It's not a foolproof method, but if a product was developed in a location with more stringent regulations (like under the EU's GDPR), it will likely protect your data better than... no regulations at all. IDK, just a theory.
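If it helps to make that heuristic concrete, here's a minimal sketch in Python of how those three questions could become a simple screening checklist. Everything in it - the tool names, the fields, and the "strictness" tiers - is my own illustrative assumption, not an official scoring system.

```python
from dataclasses import dataclass, field

# Illustrative strictness tiers (my assumption, not an official ranking):
# higher number = developed under more stringent oversight.
STRICTNESS = {
    "none": 0,
    "voluntary_principles": 1,
    "national_law": 2,
    "eu_gdpr_ai_act": 3,
}

@dataclass
class AITool:
    name: str
    developed_in: str                    # Q1: where was it developed?
    applicable_frameworks: list[str] = field(default_factory=list)  # Q2
    verified_compliant: bool = False     # Q3: compliant in fact, not just in marketing
    regulatory_tier: str = "none"

def prefer_stricter(tools: list[AITool]) -> list[AITool]:
    """Sort candidates so verifiably compliant tools developed under the
    strictest oversight come first (the 'default to strictest' rule of thumb)."""
    return sorted(
        tools,
        key=lambda t: (t.verified_compliant, STRICTNESS.get(t.regulatory_tier, 0)),
        reverse=True,
    )

# Hypothetical entries, purely for illustration
candidates = [
    AITool("ScribeBot", "US", ["NIST AI RMF"], False, "voluntary_principles"),
    AITool("VetNotesEU", "EU", ["GDPR", "EU AI Act"], True, "eu_gdpr_ai_act"),
]
for tool in prefer_stricter(candidates):
    print(f"{tool.name}: tier={tool.regulatory_tier}, verified={tool.verified_compliant}")
```

Nothing magical here: the sort just formalizes "verified compliance first, strictest jurisdiction second," which is the order I work through when comparing tools.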
Coming Soon: My AI Transparency Index
I'm working on a searchable version of this information that links AI products in my database to their corresponding certifications and transparency levels. Stay tuned.
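For the curious, the records will look something like the sketch below - a purely hypothetical shape, with placeholder field names and values rather than real entries from my database.

```python
# Hypothetical record shape for the transparency index.
# Field names and values are placeholders, not real database entries.
index_entry = {
    "product": "Example AI Scribe",
    "developer_region": "EU",
    "certifications": ["GDPR", "ISO 27001"],    # claimed or independently verified
    "medical_device_status": "not classified",  # e.g., FDA/EMA/NMPA status, if any
    "transparency_level": "high",               # my own rating scale, still TBD
    "last_reviewed": "2025-06-25",
}
```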
👉 For now, bookmark this page and use it when evaluating tools - especially those handling high-stakes medical decisions and private client data.