EU AI Act for US Tech Companies: A Practical Guide
Part 3 of 3: Sector-Specific Implementation & Strategic Positioning

Transforming regulatory requirements into operational excellence and competitive advantage.
Part 3 delivers sector-specific implementation guidance for three high-stakes industries:
healthcare and life sciences, financial services, and human resources technology.
This analysis examines how the Act’s requirements manifest in specific operational contexts, identifies regulatory intersections creating compliance complexity, and provides strategic frameworks for converting regulatory obligations into competitive differentiation.
Three Sectors, Many Unresolved Tensions
Healthcare
Medical AI systems that diagnose, recommend treatments, or support clinical decisions qualify as both high-risk AI under the AI Act (Article 6(1), read with Annex I) and medical devices under MDR 2017/745 (Article 2(1)). Both regulations require conformity assessments, but their requirements don’t perfectly align.
Article 2(8) of the AI Act states it defers to sectoral legislation when that legislation provides equivalent requirements. However, ‘equivalence’ remains incompletely defined. The Commission must issue implementing acts clarifying which MDR provisions satisfy AI Act obligations and where gaps remain.
What’s unclear?
- Validation standards: MDR emphasizes clinical evidence in patient populations. The AI Act requires dataset representativeness analysis, adversarial robustness testing, and subgroup performance evaluation. Do MDR clinical evaluations automatically satisfy AI Act Article 15 requirements?
- Incident reporting: MDR Article 87 requires reporting to national competent authorities. AI Act Article 73 requires reporting to market surveillance authorities. When a medical AI experiences a serious incident, which authority receives notification? Both? Through which coordination mechanism?
- Conformity assessment: High-risk medical AI potentially requires notified body assessment under both regulations. Can a single body conduct a combined assessment? The EU’s notified body designation process for the AI Act is ongoing, and clarity on dual-designation bodies remains limited.
The AI Act provides an extended deadline for AI embedded in regulated medical devices: August 2, 2027, versus August 2, 2026 for standalone high-risk systems (Article 111(2)), though that deadline may yet be pushed further.
Financial Services
Annex III, point 5(b) classifies as high-risk AI systems ‘intended to be used to evaluate the creditworthiness of natural persons or establish their credit score.’ However, Article 6(3) provides exceptions for systems performing ‘narrow procedural tasks’ or ‘improving the result of a previously completed human activity.’
When does credit scoring AI qualify as merely preparatory versus materially determinative?
Classification decisions require functional analysis of how AI operates within decision-making workflows.
Factors suggesting preparatory status (potential Article 6(3) exception):
- Decision-makers review multiple information sources beyond AI output.
- Humans possess genuine discretion to override AI recommendations.
- AI scores don’t automatically trigger application outcomes.
- Evidence demonstrates humans regularly deviate from AI suggestions.
- Quality metrics show human oversight influences final decisions.
Factors suggesting determinative status (high-risk classification):
- Strong correlation between AI output and final decisions in practice.
- System design makes deviation from AI difficult or requires justification documentation.
- Humans rarely override AI recommendations in operational deployment.
- Economic incentives or time constraints pressure humans to follow AI guidance.
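One concrete form of operational evidence for this functional analysis is the override rate: how often final decisions actually diverge from AI recommendations in deployment logs. A minimal sketch, assuming a hypothetical `DecisionRecord` log format (real evidence would need richer metrics, such as correlation with outcomes and time-to-decision):

```python
from dataclasses import dataclass

@dataclass
class DecisionRecord:
    ai_recommendation: str  # e.g. "approve" / "decline"
    final_decision: str     # decision actually issued to the applicant

def override_rate(records: list[DecisionRecord]) -> float:
    """Fraction of cases where the human decision diverged from the AI output."""
    if not records:
        raise ValueError("no decision records supplied")
    overrides = sum(1 for r in records if r.final_decision != r.ai_recommendation)
    return overrides / len(records)

records = [
    DecisionRecord("approve", "approve"),
    DecisionRecord("decline", "approve"),  # human overrode the AI
    DecisionRecord("decline", "decline"),
    DecisionRecord("approve", "approve"),
]
print(f"override rate: {override_rate(records):.0%}")  # override rate: 25%
```

An override rate near zero over a large sample would support determinative status; a materially higher rate, documented over time, is the kind of evidence an Article 6(3) exception claim could cite.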
HR Technology
Annex III, point 4(a) classifies as high-risk AI systems ‘intended to be used for recruitment or selection of natural persons, making decisions affecting terms of employment relationships, task allocation, and monitoring or evaluation of persons in employment relationships.’
This language captures most HR technology platforms offering AI-powered features.
Examples include applicant tracking systems with AI screening, video interview analysis platforms, performance evaluation systems with AI components, workforce analytics predicting turnover or identifying promotion candidates, and task allocation systems in various industries.
Article 5(1)(f) prohibits emotion recognition systems in workplace and educational settings, with narrow exceptions for medical or safety purposes.
Systems potentially affected:
- Sentiment analysis of employee communications.
- Engagement scoring derived from calendar or communication patterns.
- Tone analysis in customer service call monitoring.
- Systems identifying ‘disengaged’ employees from activity data.
What Organizations Can Do Now
1. Prioritize Documentation
Authorities will evaluate whether organizations made reasonable efforts based on available guidance.
Document comprehensively:
- Classification decisions and functional analysis supporting those decisions.
- Article 6(3) exception claims with supporting operational evidence.
- Bias testing methodologies chosen and acknowledged limitations.
- Risk assessments conducted and mitigation measures implemented.
- Interpretive decisions on ambiguous provisions with rationale.
2. Monitor Guidance Development
The Commission will adopt implementing acts clarifying ambiguous provisions. Harmonized standards organizations will develop technical specifications. This guidance will resolve many current uncertainties.
Establish processes to track:
- Commission implementing and delegated acts.
- EDPB guidelines and opinions.
- Harmonized standards development through CEN, CENELEC, ETSI.
- National market surveillance authority guidance and enforcement actions.
3. Engage Legal Counsel Early
Organizations facing dilemmas around Article 2(8) sectoral deferral, Article 6(3) exception boundaries, or Article 5(1)(f) emotion recognition scope should engage qualified EU legal counsel. Early engagement enables documentation of reasonable interpretation processes that demonstrate good-faith compliance efforts.
Enforcement Context
The Act establishes penalties calculated as the higher of fixed monetary caps or percentages of total worldwide annual turnover. Prohibited practices (Article 99(3)) face up to €35 million or 7% of global turnover. High-risk obligations violations (Article 99(4)) face up to €15 million or 3%. Information provision violations (Article 99(5)) face up to €7.5 million or 1%.
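The ‘higher of’ calculation can be made concrete with a short sketch (the tier labels are our own shorthand, not terms from the Act):

```python
def max_penalty_eur(annual_turnover_eur: int, tier: str) -> int:
    """Upper bound of an AI Act fine: the higher of a fixed cap or a
    percentage of total worldwide annual turnover (Article 99(3)-(5))."""
    tiers = {
        "prohibited_practices": (35_000_000, 7),   # Art. 99(3): EUR 35m or 7%
        "high_risk_obligations": (15_000_000, 3),  # Art. 99(4): EUR 15m or 3%
        "information_provision": (7_500_000, 1),   # Art. 99(5): EUR 7.5m or 1%
    }
    cap, pct = tiers[tier]
    return max(cap, annual_turnover_eur * pct // 100)

# For a company with EUR 2bn global turnover, 7% (EUR 140m) exceeds the EUR 35m cap.
print(max_penalty_eur(2_000_000_000, "prohibited_practices"))  # 140000000
```

The percentage term dominates for any large enterprise, which is why global turnover, not EU revenue, drives exposure analysis.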
Member States were required to designate national competent authorities by August 2, 2025. The European AI Office exercises supervisory authority over general-purpose AI models. Enforcement approaches will develop through 2025-2027 as authorities establish priorities and issue guidance.
Beyond monetary penalties, Article 94 empowers authorities to order market withdrawal of non-compliant systems, creating business continuity risks for companies dependent on AI technologies.
How Privacy Rules Helps
Privacy Rules can guide US technology companies through EU AI Act compliance by addressing the practical challenges outlined above.
We Focus on Defensible Positions
When regulations contain ambiguities, we help clients build defensible compliance positions: documented interpretations, evidence of good-faith efforts, and the flexibility to adjust as guidance emerges. This approach emphasizes regulatory defensibility over false certainty.
We Track Regulatory Developments
We monitor Commission implementing acts, EDPB draft guidance, national authority consultations, harmonized standards development, and enforcement actions across member states. Clients receive targeted alerts on developments affecting their systems.
We Translate Between Legal and Technical
AI Act compliance requires coordinating legal requirements with technical implementation. We work with engineering teams to design validation studies, bias testing methodologies, and documentation systems that satisfy regulatory requirements while preserving product functionality.
About Privacy Rules
Privacy Rules provides EU data protection and AI regulatory advisory for US technology companies. Led by Tanya Chib, we help organizations navigate GDPR, AI Act compliance, and cross-border data transfer requirements.
© 2026 Privacy Rules. This analysis does not constitute legal advice. Organizations should consult qualified legal counsel for compliance decisions.
