Your Company Uses AI Without Rules? AI Act Penalties Are Already Real

The European AI Act is already in force and penalties reach up to 35 million euros. Discover what obligations your company must respect, the deliverables required by the regulation and how to access funding for AI Act compliance training.

Gaetano Castaldo
17 Feb 2026
AI and Automation Digital Transformation Cybersecurity #AI Act #compliance #artificial intelligence #training #FRIA #AI governance #sanctions #SME #Lombardy Region #funded training
[Infographic: the European AI Act and compliance obligations for Italian companies in 2025-2026]

TL;DR

The European AI Act (EU Regulation 2024/1689) isn't a future topic: it's law now. The AI literacy obligation (article 4) has been in force since February 2, 2025, the sanctions regime became operative August 2, 2025, and from August 2, 2026 the complete requirements for high-risk systems will kick in. Penalties reach up to 35 million euros or 7% of annual global turnover. Yet most Italian SMEs continue to use AI tools like ChatGPT, Copilot and other generative tools without any governance, without documentation and without staff training. This article explains the concrete risks, what deliverables the regulation requires and how AI Act compliance training programs, some of which are fundable up to 90% thanks to Lombardy Region grants and Fondimpresa, represent the mandatory first step to protect your business.


The Elephant in the Room: Italian Companies and AI Without Governance

Useful prerequisite: if you're still evaluating which business processes to automate with AI, first read our practical guide on AI automation for Italian SMEs - includes real use cases, costs and a 3-step path.

Let's have an honest conversation. Right now, in thousands of Italian offices, employees are using generative artificial intelligence to write reports, analyze data, prepare presentations, filter job applications and even produce financial documents. They do this with good intentions and often with appreciable results. The problem is they do it without any rules, without structured supervision and without the company having the slightest awareness of what the European regulatory framework imposes.

EU Regulation 2024/1689, better known as the AI Act, entered into force on August 1, 2024. This isn't a directive to be transposed, but a regulation directly applicable in all member states. And the first obligations have already kicked in.

According to a RiskConnect study published in 2025, 60% of companies evaluating the adoption of agentic AI systems have yet to conduct any risk assessment. Meanwhile, an AllAboutAI report estimates that AI hallucinations caused losses of 67.4 billion dollars globally in 2024 alone, and 47% of corporate AI users admit to making at least one major strategic decision based on AI-generated content that later proved false.

We're talking about systemic risk that Italian companies are ignoring.


The Dates Every Entrepreneur Must Know

The AI Act follows gradual implementation. Here's the timeline every Italian company should have crystal clear.

February 2, 2025 (already in force): the bans on AI practices deemed unacceptable (article 5) kicked in and, most importantly, the mandatory requirement for AI literacy (article 4). This means every company using AI systems, even just ChatGPT for email writing, is already required to ensure its staff has sufficient AI competency. Companies that haven't yet adapted are already non-compliant.

August 2, 2025 (already in force): the sanctions regime became fully operative. Member states had to establish their own supervisory authorities and define sanctions rules. From that date, obligations for general-purpose AI models (GPAI) also apply, with requirements for technical documentation, transparency and risk management.

August 2, 2026 (next critical deadline): all requirements for high-risk artificial intelligence systems enter into force. Systems used in recruiting, credit scoring, diagnostics, financial services and critical infrastructure must comply with stringent conformity, documentation and human oversight requirements.

August 2, 2027: general-purpose AI models placed on the market before August 2, 2025 must be fully compliant by this date.


The Sanctions: It's Not a Theoretical Risk

Let's talk concrete numbers. The AI Act's sanctions regime is among the most severe ever introduced at European level.

For violations of banned AI practices (for example, social scoring or behavioral manipulation through AI), administrative penalties can reach up to 35 million euros or 7% of annual global turnover, whichever is higher.

For violations of high-risk system obligations, fines can reach 15 million euros or 3% of annual global turnover.

For those providing incorrect, incomplete or misleading information to competent authorities, the penalty can reach 7.5 million euros or 1.5% of annual global turnover.

For SMEs and startups, the lower of the two amounts applies for each category. But even the "lower" between 7.5 million and 1.5% of turnover is a figure that can cripple any small or medium-sized enterprise.


When Hallucinating AI Becomes a Legal Problem: Concrete Examples

To understand the scope of the risk, consider scenarios that are already occurring in companies.

Financial Reports Generated by AI Without Human Control

Imagine a company using a language model to generate quarterly financial analyses. The model produces a report stating that a key supplier posted net profit of 500 million in 2024, when in fact official data is completely different. A 2025 Deloitte report explains how in finance even a 0.5% difference can translate to millions in losses. If that analysis is used for an investment decision or due diligence in an M&A transaction, consequences can be devastating. According to latest data, the hallucination rate of AI models on financial data stands between 2.1% for best-performing models and 13.8% as general average. These aren't marginal errors.

CV Screening and Personnel Selection with Non-Compliant AI

AI systems used for CV screening and personnel selection fall among high-risk systems under the AI Act. An emblematic case occurred in 2025, when Workday became subject to a class action for alleged discriminatory bias in its AI-based recruiting system. Using automated selection tools without technical documentation, without periodic testing to identify bias and without human oversight mechanisms exposes the company to sanctions and litigation.

The Legal Case: Nonexistent Citations in AI-Generated Documents

In 2025, an American lawyer submitted a court filing drafted with artificial intelligence assistance. The document contained nearly 30 defective case citations, including cases completely invented by the model. The court stigmatized the conduct and the lawyer suffered serious professional consequences. This scenario isn't limited to the legal world: every time a company produces documentation based on AI output without verifying accuracy, it exposes itself to the same type of risk.


The 4 Risk Levels of the AI Act: Where Does Your Company Stand?

The AI Act classifies artificial intelligence systems into four risk levels, each with different obligations.

Unacceptable risk: systems banned without exception. This category includes social scoring, emotion recognition in the workplace, indiscriminate collection of facial images from the internet, and biometric categorization to infer sensitive data like race or political orientation. These bans have been operative since February 2, 2025.

High risk: systems subject to stringent obligations. They include AI systems used in recruiting and HR management, credit scoring and financial services, medical diagnostics, justice, critical infrastructure and surveillance. For these systems, from August 2, 2026, stringent conformity, documentation and human supervision requirements will be mandatory.

Limited risk: systems subject to transparency obligations. Chatbots, synthetic content generators (text, images, video), deepfakes. Users must be informed they're interacting with AI.

Minimal risk: no specific obligations. Calculators, antispam filters, generic recommendation systems.

Most corporate applications fall into limited or high risk. The boundary between the two categories isn't always clear: a customer care chatbot is limited risk, but if that chatbot influences access to financial services, it becomes high risk.
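To make the triage exercise concrete, here is a deliberately simplified first-pass sketch in Python. The keyword sets and the `triage` function are our own illustrative assumptions for an internal screening exercise, not the legal test: actual classification must follow Annex III of the Regulation and qualified legal review.

```python
# Simplified first-pass risk triage for an internal AI census.
# The use-case labels below are illustrative assumptions; real
# classification must follow Annex III of the AI Act and legal review.

PROHIBITED_USES = {"social_scoring", "workplace_emotion_recognition"}
HIGH_RISK_USES = {"recruiting", "credit_scoring", "medical_diagnostics",
                  "justice", "critical_infrastructure"}
LIMITED_RISK_USES = {"chatbot", "content_generation", "deepfake"}

def triage(use_case: str) -> str:
    """Map a use-case label to a provisional AI Act risk level."""
    if use_case in PROHIBITED_USES:
        return "unacceptable"
    if use_case in HIGH_RISK_USES:
        return "high"
    if use_case in LIMITED_RISK_USES:
        return "limited"
    return "minimal"

print(triage("recruiting"))  # high
```

Note how the chatbot example breaks a naive lookup: a customer-care chatbot that gates access to financial services belongs in the high-risk category even though "chatbot" suggests limited risk. That is exactly why automated triage can only ever be a first pass before human legal review.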


Mandatory AI Act Deliverables: What Your Company Must Produce

The AI Act doesn't just require good intentions: it requires concrete, verifiable, updated documentation. Here are the main deliverables every company using AI systems must prepare.

1. Inventory and Mapping of AI Systems

The first step is a complete census of all artificial intelligence systems in use in the organization, including those not formally approved (so-called "shadow AI"). The mapping must identify which systems are in use, who uses them, for what purposes and classify them according to the risk level set by the AI Act.
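As a starting point, the inventory can live in something as simple as a spreadsheet. The sketch below shows a minimal register exported to CSV; the field names are our own illustrative assumptions, not a format mandated by the Regulation.

```python
# Minimal sketch of an AI system inventory register exported to CSV.
# Field names are illustrative assumptions, not mandated by the AI Act.
import csv
from dataclasses import dataclass, asdict, fields

@dataclass
class AISystemRecord:
    name: str         # e.g. "ChatGPT", "internal CV screener"
    owner: str        # responsible function or person
    purpose: str      # what the system is used for
    users: str        # who uses it
    approved: bool    # False flags potential "shadow AI"
    risk_level: str   # unacceptable / high / limited / minimal

inventory = [
    AISystemRecord("ChatGPT", "Marketing", "email drafting", "all staff",
                   approved=True, risk_level="limited"),
    AISystemRecord("CV screener", "HR", "candidate ranking", "HR team",
                   approved=False, risk_level="high"),
]

with open("ai_inventory.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=[fld.name for fld in fields(AISystemRecord)])
    writer.writeheader()
    writer.writerows(asdict(r) for r in inventory)
```

Even a register this basic makes shadow AI visible: any row with `approved=False`, like the CV screener above, is a tool in use without formal sign-off and should be the first item escalated.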

2. Risk Classification

Based on the mapping, each system must be classified according to the regulation's four risk levels. This classification determines which obligations apply and with what priority.

3. Fundamental Rights Impact Assessment (FRIA)

For high-risk systems, the Fundamental Rights Impact Assessment is mandatory and must be completed before production deployment. FRIA isn't a simple checklist, but an in-depth analysis of how the AI system impacts people's rights: from non-discrimination to human dignity, from freedom of expression to service access. Unlike GDPR's DPIA, FRIA evaluates a broader spectrum of fundamental rights.

4. Technical Documentation

For high-risk systems, detailed technical documentation is necessary including: system description and purpose, information on development and training, performance and accuracy metrics, datasets used, testing and validation procedures.

5. Risk Management System

A structured framework to identify, assess, mitigate and continuously monitor risks associated with AI use in the company. It's not a static document, but an iterative process that must be updated over time.

6. Human Oversight Protocols

The AI Act requires that high-risk systems be designed to enable effective human oversight. This means defining who is responsible for output control, how frequently and with what tools.

7. AI Literacy and Staff Training Plan

Article 4 of the AI Act requires providers and deployers to ensure sufficient AI literacy for staff and anyone involved in the operation and use of artificial intelligence systems. This obligation isn't limited to technical staff: it involves all corporate functions, from top management to administration, from HR to marketing, from sales to external collaborators. Training must be proportionate to the role, use context and risk level of the systems employed.

8. Activity Log and Traceability Register

High-risk systems must be equipped with automatic log recording mechanisms to ensure traceability of results and the ability to reconstruct decisions made with AI assistance.
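A minimal version of such a register can be an append-only JSON-lines file. The schema below is an illustrative assumption, not a format prescribed by the Regulation; hashing inputs and outputs preserves evidence of what was used without storing potentially sensitive full text.

```python
# Minimal sketch of an append-only traceability log for AI-assisted
# decisions. The JSON schema is an illustrative assumption, not a
# format prescribed by the AI Act.
import json
import hashlib
from datetime import datetime, timezone

def log_ai_decision(path: str, system: str, prompt: str,
                    output: str, reviewer: str, approved: bool) -> None:
    """Append one AI-assisted decision to a JSON-lines audit log."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        # Hash input and output so the log can prove what was used
        # without storing potentially sensitive full text.
        "input_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "human_reviewer": reviewer,
        "approved": approved,
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

log_ai_decision("ai_decisions.jsonl", "CV screener",
                "rank these candidates...", "candidate A, B, C",
                reviewer="m.rossi", approved=True)
```

The key properties are that every entry records which system produced the output, who reviewed it and whether it was approved: exactly the reconstruction a supervisory authority would ask for.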


AI Literacy: The Obligation Italian Companies Are Ignoring

Among all AI Act obligations, the one on AI literacy deserves particular attention because it's the first to enter into force and the most systematically neglected.

Article 4 of the Regulation states that: "Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf."

The FAQs published by the AI Office of the European Commission in May 2025 clarified that, to comply with article 4, organizations should at minimum: ensure general understanding of AI within the organization, consider their role (provider or deployer), assess the risk level of systems used and build training programs proportionate to these factors.

In Italy, according to the European Union's DESI index, only 45.7% of individuals between 16 and 74 years have at least basic digital literacy level. The gap between the regulatory obligation and the reality of corporate skills is enormous.

While article 4 doesn't provide a direct financial sanction for breach of literacy obligation, lack of training represents an aggravating factor if challenged for other AI Act violations. In other words: if a high-risk AI system causes damage and the company hasn't trained staff, the company's position before the supervisory authority will be significantly weaker.


How Castaldo Solutions Is Helping Italian Companies

At Castaldo Solutions we work daily with Italian SMEs facing this transition. Our approach starts from a simple observation: AI Act compliance cannot be a bureaucratic exercise. It must translate into real increased awareness and operational capacity for the company.

Our AI Act compliance training and consulting programs are structured to accompany businesses through every phase of the process.

Initial assessment: we map all AI systems in use in the organization, identify the company's role (provider or deployer) and classify the risk level of each system.

Production of mandatory deliverables: we support the company in drafting the AI inventory, risk classification, FRIA (for high-risk systems), technical documentation and risk management plan.

Customized AI Literacy programs: we design and deliver training programs tailored to participants' roles (top management, operational staff, HR, marketing, finance functions), the company's specific context and the risk level of systems used. At program end, the company obtains documentation necessary to demonstrate article 4 compliance.

Integration with corporate governance: we help businesses integrate AI Act obligations into existing compliance systems, creating synergies with GDPR, cybersecurity policies and organizational models.


Funded Training: Concrete Opportunities for Lombard Enterprises

An element that makes our programs particularly accessible is the ability to fund them through public tools already operative.

Lombardy Region Continuous Training Grant: funds up to 90% of training costs for employees, with an annual grant cap of 50,000 euros per company. All companies headquartered in Lombardy can participate, regardless of size or sector. Fundable topics explicitly include artificial intelligence, process automation and cybersecurity.

Lombardy Region Skills and Innovation Grant: grant funding up to 80% of eligible costs, with a maximum of 50,000 euros. The grant has been so popular that the initial 7 million euros in resources were exhausted in just two months, leading the Lombardy Region to double the funding with an additional 8 million euros. Artificial intelligence is among the technologies explicitly indicated as a priority.

Fondimpresa Notice 4/2025: specifically dedicated to AI training, with a 5 million euro allocation. Funding applications can be submitted starting February 25, 2026. Training plans must involve at least 15 employees and are open only to companies already affiliated with the fund.

These tools make AI Act compliance training not only a regulatory obligation, but a near-zero-cost investment for the company. Our team also handles support in managing funding applications, from training plan design to final reporting.


Frequently Asked Questions on the AI Act for SMEs

Is the AI Act already in force in Italy? Yes. EU Regulation 2024/1689 entered into force on August 1, 2024 and is directly applicable in all member states without need for transposition. The first obligations - bans on unacceptable AI practices and AI literacy obligation (article 4) - have been operative since February 2, 2025. The sanctions regime is applicable from August 2, 2025.

What's the next AI Act deadline affecting SMEs? August 2, 2026 is the critical deadline: from that date complete requirements for high-risk AI systems become operative. If your company uses AI systems for recruiting, credit scoring, diagnostics or critical infrastructure, it must be fully compliant by that date.

What are the AI Act penalties for Italian SMEs? Up to 35 million euros or 7% of annual global turnover for the most serious violations (banned practices). Up to 15 million or 3% for violations of high-risk system obligations. Up to 7.5 million or 1.5% for false information to authorities. For SMEs the lower of the two amounts applies per category, but these remain potentially devastating figures even for small enterprises.

What constitutes a high-risk AI system under the AI Act? High-risk AI systems include those used for: personnel selection and HR management, credit access and financial scoring, medical diagnostics, judicial systems and law enforcement, critical infrastructure (energy, water, transport). From August 2, 2026 these systems require mandatory technical documentation, fundamental rights impact assessment (FRIA) and human oversight.

Does the AI training obligation (article 4 AI Act) apply to small companies too? Yes. Article 4 applies to all companies using AI systems, including commercial tools like ChatGPT and Copilot. The obligation concerns all staff interacting with these systems, not just IT. It's been in force since February 2, 2025: companies not yet adapted are already non-compliant.

What is the FRIA required by the AI Act? The Fundamental Rights Impact Assessment (FRIA) is a mandatory evaluation for high-risk AI systems, to be completed before production deployment. It analyzes how the AI system impacts fundamental rights: non-discrimination, human dignity, privacy, service access. It differs from GDPR's DPIA because it covers a broader spectrum of fundamental rights.

Are there funds for AI Act compliance training in Italy? Yes. Lombard SMEs can access Lombardy Region's Continuous Training Grant (up to 90% of costs, with a cap of 50,000 euros) and Fondimpresa Notice 4/2025 (5 million euros dedicated to AI training, applications from February 25, 2026). These tools make compliance economically sustainable even for small enterprises.


The Time to Wait Is Over

The message is clear: the AI Act isn't a distant regulation. The first obligations are already in force, sanctions are already applicable and deadlines for high-risk systems are around the corner.

Every day a company uses artificial intelligence systems without governance, without staff training and without documentation is a day when regulatory, operational and reputational risk accumulates.

The good news is that compliance is possible, and concrete funding tools make this transition sustainable even for small enterprises.

The first step? Contact Castaldo Solutions for a free assessment of your AI Act compliance situation. We'll analyze together which AI systems are active in your organization, your risk level and what priority actions to take. And we'll show you how to access available funds to cover training costs.


Want to know if your company is already at risk?

Most Italian SMEs are using AI without governance and without knowing it. The first step to protect yourself is understanding where you are today: which AI systems are active in your organization, at what risk level they operate and what you're missing to be compliant.

At Castaldo Solutions we help businesses transform the AI Act obligation into a competitive advantage - with concrete assessments, deliverables ready for the supervisory authority and training programs fundable up to 90%.

If you want to understand where you stand and what actions to take before the next deadlines:

Request a free AI Act assessment

30 minutes to map together the AI systems in use, assess your risk level and define operational priorities. We'll also show you how to access active grants to cover training costs. No theory: just a custom action plan for your company.

Because complying with the AI Act in 2026 isn't bureaucracy - it's business protection.

Gaetano Castaldo

Founder & CEO · Castaldo Solutions

Digital transformation consultant with enterprise experience. I help Italian SMEs adopt AI, CRM and IT architectures with measurable results in 90 days.
