Skip to content
Velthros

EU AI Act: a practical field guide for 2025

7/13/2025

Leaders now need a practical way to map their AI systems to the EU AI Act and make steady progress without slowing core work. This guide focuses on classification, prioritization, and execution for 2025, reflecting the EU’s goal of setting a global standard for trustworthy AI that protects fundamental rights and safety.

What the law does in brief

The EU AI Act regulates AI based on risk level. It creates broad categories:

In sum, the law draws lines based on risk: unacceptable = banned, high-risk = heavily controlled, general-purpose or low-risk = monitoring and best practices, but not much red tape.

Start with an inventory

You can’t comply if you don’t know what you have. Create a living inventory of all AI systems or AI components in your organization, both customer-facing products and internal tools: For each, note the purpose (what is it used for and criticality), inputs (data it takes), outputs (decisions or predictions it makes), human involvement (is there a human reviewing outputs or is it fully automated?), origin of model (did you build it? fine-tune an existing model? using a vendor API?), and user base (who is affected – employees, customers, the public?). Assign an owner for each system – someone responsible for its compliance and performance. Even if it’s a small feature, give it a name on your list. Make this inventory cross-functional: involve IT, the data science/AI team, and business unit heads. Often they’ll surface “oh, I didn’t think that little script was AI, but it uses machine learning.” Exactly the point – cast a wide net. Include any usage of third-party AI services as well (e.g., if you use an AI SaaS for, say, resume screening). Because AI can pop up anywhere (marketing using an AI image generator, ops using an optimization algorithm, etc.), consider a quick survey or attestation process: ask teams “Do you use any AI or automated decision tools? If yes, list them.” It helps flush out shadow AI projects. This inventory is your foundation.

Classify by risk and impact

Now map each inventoried system into a risk category as defined by the Act, and also consider your own impact lens:

Document your rationale briefly for each classification, especially for edge cases. If in doubt, flag it for legal review. Remember, classification is not purely a legal question - it’s also a management decision about how much risk you’re willing to tolerate in that AI’s performance. Better to slightly over-classify and then justify downgrading, than to be caught with a supposedly “low-risk” system that an auditor or incident proves was high-risk.

Core obligations for high-risk systems

For each system you deem high-risk, the Act imposes a set of obligations. It’s helpful to build a checklist or template that you’ll fill out per system:

  1. Data governance: Document the datasets used for training and testing. Where did the data come from? Did you assess its relevance, representativeness, any biases? The Act will require that you use training data that is appropriate, and that you’ve taken steps to ensure “bias monitoring” for protected attributes. Maintain documentation on data collection, processing, and any cleaning or augmentation done. Note any known data limitations. Perform and record bias or accuracy tests especially relevant to how the AI is used (e.g., error rates across different demographic groups, if applicable).
  2. Technical documentation: Write down, in a concise form, the essential information about the system: its intended purpose, an overview of its logic (you might not have to share the algorithm publicly, but you should be able to explain it at a high level), key performance metrics (accuracy, precision/recall, etc.), and known limitations or failure modes. Essentially, imagine an external auditor asking “Explain how this AI works and how you know it’s effective and safe.” You need a document that answers that. This doesn’t have to be extremely detailed initially, but cover the bases. Some companies adapt a model card or datasheet concept here, which is great.
  3. Logging and traceability: Ensure the system has the ability to log its operations - inputs, outputs, decisions made, any manual overrides, and when it was running which version. The Act will require keeping logs for high-risk AI to later investigate issues. Work with your IT team to set up log storage that is secure and meets retention requirements (the Act might say keep logs for X years, and consider other laws like GDPR on data retention). Check privacy implications - if the logs contain personal data, you need to protect and possibly anonymize them while still being useful.
  4. Human oversight: Define the points at which humans can intervene or override. For each high-risk AI, explicitly state: who (by role) is monitoring it in real time or periodically? Under what conditions are they expected to step in and turn it off or change its output? Do users have an easy way to appeal or contest an AI-driven decision? The Act likely requires that high-risk systems are designed so they can be effectively overseen by people. This might involve adding a simple “fail-safe” - e.g., a credit scoring AI that flags borderline cases for manual review, or an emergency off-switch for an autonomous machine. Document your oversight mechanism and provide training to those humans on their authority and how to exercise it.
  5. Accuracy and robustness: Set target performance metrics and tolerances for the AI and have a process for testing them periodically. For example, “This facial recognition system should have at least 98% identification accuracy in controlled conditions and no more than 1 in 10,000 false matches. We will test it on a fresh dataset from real-world conditions every 6 months.” Also test for cybersecurity - how does the AI handle malicious inputs or attempts to game it? Record the results of these tests. If any test fails or shows drift (performance degrading over time), have a plan to retrain or adjust. Robustness also means considering worst-case scenarios: what happens if the AI output is wrong - do you have mitigations so it doesn’t lead straight to disaster?
  6. Post-market monitoring: Once the AI is deployed, the obligations don’t stop. Set up a way to capture incidents, complaints, or near-misses. For instance, if users can report “the AI suggested something incorrect or harmful,” make sure those reports are logged and reviewed. Maintain a log of “errors/ incidents” and how they were addressed. Also monitor for “drift” - changes in data or context that could make the model less valid. A feedback loop should exist: if something goes wrong or conditions change, you update the system (either retrain the model, adjust thresholds, or even shut it down until fixed).

Add two cross-cutting elements across all high-risk compliance efforts:

Completing this checklist for each high-risk system will likely be the bulk of your compliance heavy lifting. It’s resource-intensive, so prioritize: which systems need to be brought up to code first (perhaps those currently in use in the EU market, or those most likely to face scrutiny)?

General-purpose and foundation models

A special category in the Act (as of drafts) concerns general-purpose AI (GPAI) and foundation models. This is tricky because if you build or integrate large models like GPT-style, the rules are evolving. If you build or fine-tune large models:

If you are a downstream deployer of a foundation model (i.e., you use someone else’s large model via API or open-source):

Think of it this way: the Act doesn’t want companies shrugging “well, it’s a general AI, who knows what it’ll do.” You’re expected to know, as far as reasonable, what it can do in your context and to manage that. Some ideas for testing general models:

Document all these tests as part of your compliance documentation for that AI system.

Vendor and tool diligence

Most likely, you’re not doing everything in-house. You rely on vendors for AI components or tools (from cloud AI services to off-the-shelf AI software). The Act doesn’t let you off the hook for compliance just because you bought something - you’ll have to ensure your vendors meet requirements or that you compensate for any gaps. Standardize an AI vendor questionnaire to send out (or include in procurement):

Also clarify contractual points:

Mapping vendor provided capabilities versus what the Act needs is a good exercise. For example, if the Act says “users must be informed when interacting with an AI system,” and you’re using a vendor’s chatbot, ensure it has a feature to display “I am an AI” message - if not, you’ll have to implement that. In summary, treat AI suppliers like you would critical component suppliers in other regulated industries - demand transparency, quality controls, and shared responsibility.

Documentation that scales

The nightmare scenario is you create binderfuls of paperwork that immediately go out of date and no one reads. To avoid that, make documentation lightweight and maintainable:

Avoid gigantic binders that no one reads. Instead, strive for concise, accessible documentation with pointers to evidence. Regulators (and your own execs) will appreciate brevity with substance over volume with fluff.

Communicate internally and externally

Don’t keep all this compliance work in a silo. You need buy-in and understanding from your leadership, and in some cases reassurance to customers or regulators:

In essence, don’t let all your good compliance work be invisible. Use it to strengthen your narrative that your organization uses AI responsibly. In a world of hype and fear around AI, that can be a competitive advantage. A template for an executive readout might include:

This keeps it action-oriented and shows progress.

The role of AI ethics

Beyond strict legal compliance, many leading organizations are adopting broader AI ethics principles – fairness, accountability, transparency, etc. While not required by law, these can bolster your program:

The bottom line is that a culture of accountability around AI can become a competitive differentiator. It builds trust with regulators (they may scrutinize you a bit less if they see you’re earnest), with customers (who may prefer your product because they feel safer with it), and with partners.

A 90-day plan

If you’re starting now (2025) to operationalize this, here’s a rough sequence to get moving quickly:

The goal for 2025 is not perfection - regulators themselves are just gearing up. It is to have a credible program that knows its AI systems, controls the biggest risks, and has a plan to continuously improve. If you get that in place, you’ll be in good shape to handle whatever specific guidance or enforcement comes.

Common gaps and how to close them

As you go through this, you’ll likely discover some recurring issues. Here are a few and strategies to handle them:

Finally, remember that compliance is ongoing. The EU AI Act enforcement will evolve; standards and best practices will develop. Keep an eye on updates from regulatory sandboxes or guidelines from the EU’s planned AI Office. Adapt your program as needed. The effort you put in now not only keeps you out of trouble, it genuinely improves your AI systems and can be showcased as a trust point. When your competitors are scrambling late or (worse) facing penalties, you’ll be able to say to customers and regulators, “We’ve got this under control.” And that’s a strong position to be in as trustworthy AI becomes a market expectation.

VSCode Visible Files

src/content/posts/eu-ai-act-field-guide-2025.mdx

VSCode Open Tabs

src/content/posts/us-privacy-2025-brief.mdx src/content/posts/ira-industrial-policy-operators-2025.mdx src/content/posts/market-brief.mdx src/content/posts/crisis-qa-blueprint-2025.mdx src/content/posts/first-briefing.mdx src/content/posts/media-interviews-high-stakes-2025.mdx src/content/posts/monitoring-without-crowdtangle-2025.mdx src/content/posts/stakeholder-mapping-deep-dive-2025.mdx src/content/posts/evidence-playbook-2025.mdx src/content/posts/eu-ai-act-field-guide-2025.mdx

Current Time

8/9/2025, 12:06:46 AM (America/New_York, UTC-4:00)

Context Window Usage

365,140 / 1,048.576K tokens used (35%)

Current Mode

ACT MODE