AI accelerates, governance doesn't: the real risk for companies and people

While AI evolves at an exponential pace, governance moves linearly. The gap between technological capabilities and security widens: how to build AI on solid foundations without sacrificing people and rights on the altar of efficiency.

Gaetano Castaldo
16 Dec 2025
AI Governance Cybersecurity #Safe AI #AI governance #DPIA #NIS2 #compliance #superintelligence #ethical AI #data governance #cybersecurity #digital transformation
[Image: balance between AI innovation speed and the need for governance, compliance, and corporate security]

Over recent months I've had the same feeling repeatedly: as I work with clients on secure AI projects, the world around me moves faster than any guideline. While we discuss policy, DPIA and NIS2, every week a new model emerges, a more efficient architecture, a tool that raises the bar even higher.

Diving deeper into the topic, I came across a recent interview with Roman Yampolskiy on Diary of a CEO. His message, in summary, is clear: we should slow down the race toward systems ever closer to superintelligence, building solid foundations of security, control and governance first.

It's a call to realism: we can't keep increasing capabilities without seriously asking ourselves how to guarantee their safe use.

The gap between innovation and protection

The problem is that, outside the podcasts, reality moves in the opposite direction. A single Samsung researcher proposes a hierarchical reasoning architecture more efficient than Transformers, with less data and better performance. Tools like Sora 2 demonstrate, in a matter of days, how rapidly generative capabilities for content can evolve. And at the same time, vulnerable people (minors, exposed groups, those who struggle to recognize manipulated content) are already on the front lines, often without protection.

On one side, exponential growth in capabilities; on the other, adoption of security that proceeds linearly, through rounds of guidelines and checklists. The gap widens.

In this scenario, talking about "ethical AI" is not conference theory, but a concrete priority for those who design and buy solutions. The question is no longer whether to adopt AI, but how to do it without sacrificing data and personal security on the altar of efficiency.

Operating principles for safe AI

When I work with companies, I propose going back to a few operating principles.

1. Ethical and circumscribed use cases

The first is to start from ethical and circumscribed use cases, not from "let's do something with AI". That means asking: who benefits from this use case? Who could be harmed? What are the concrete risks, not the theoretical ones?

2. Clean and governed corporate data

The second is to leverage clean and governed corporate data. A powerful model fed by dirty or unmanaged databases amplifies errors and bias. Securing data from a technical, legal, and organizational standpoint is a fundamental piece of AI security.
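As a minimal sketch of what "governed data" can mean in practice, the check below blocks incomplete or non-consented records before they ever reach an AI pipeline. The field names (`customer_id`, `consent`) are assumptions for the example, not a standard schema.

```python
# Hypothetical data-quality gate: records with missing identifiers or
# without explicit consent are excluded before any AI processing.
# Field names are illustrative assumptions, not a real schema.

def validate_record(record: dict) -> list[str]:
    """Return a list of issues; an empty list means the record is clean."""
    issues = []
    if not record.get("customer_id"):
        issues.append("missing customer_id")
    if record.get("consent") is not True:
        issues.append("no explicit consent: exclude from AI processing")
    return issues

def filter_clean(records: list[dict]) -> list[dict]:
    """Keep only records that pass every check."""
    return [r for r in records if not validate_record(r)]
```

The point is not the code itself but the posture: data enters the AI pipeline only after an explicit, auditable check, instead of by default.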

3. Specificity over omnipotence

Third: choose a very specific use case, not an "omnipotent" assistant that does everything. A targeted scope makes impact easier to control, avoids reckless attempts to replace entire roles, and starts instead from well-defined tasks that support people rather than replace them.

4. Start small, grow through iterations

Fourth point: start small and grow through iterations. A well-designed pilot, with clear objectives and risk/benefit metrics, is much safer than a generalized rollout. You observe, correct, and decide whether to extend.

5. Active AI governance

Fifth: activate true AI governance, not just a document on the intranet. Clear guardrails are needed (what is allowed, what is prohibited, with what escalation logic) that can evolve with experience. Governance is not a bureaucratic brake: it's the way to prevent AI from becoming a "gray zone" where everything is possible and nothing is tracked.
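Guardrails of this kind can be made executable rather than left as a document. The sketch below shows one possible shape: an explicit allow/deny table per role, where anything not listed escalates to a human instead of passing through silently. Role and action names are invented for the example.

```python
# Illustrative guardrail table: what is allowed, what is prohibited,
# and escalation as the default for everything else.
# Roles and actions are hypothetical examples.

ALLOWED = {"analyst": {"summarize_document", "draft_email"}}
PROHIBITED = {"analyst": {"export_customer_data"}}

def decide(role: str, action: str) -> str:
    """Return 'allow', 'deny', or 'escalate' (the default for unknowns)."""
    if action in PROHIBITED.get(role, set()):
        return "deny"
    if action in ALLOWED.get(role, set()):
        return "allow"
    return "escalate"  # unknown actions go to a human, never through silently
```

The escalation default is the key design choice: the tables can evolve with experience, while anything new is reviewed before it becomes routine.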

6. Policy and audit trail

Finally, policy and audit trail: DPIA where necessary, alignment with regulations like NIS2, detailed logging on who does what, with which data and which models. It's not just compliance: it's the ability to reconstruct, explain, and correct when something goes wrong.
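A minimal audit-trail entry can be as simple as one JSON line per AI call, recording who did what, with which data and which model. The field names below are an assumption for illustration, not a standard.

```python
# Hypothetical audit-trail record for one AI call: who, what, which
# dataset, which model, when. JSON lines are easy to retain and query.
import json
from datetime import datetime, timezone

def audit_entry(user: str, action: str, dataset: str, model: str) -> str:
    """Serialize one audit record as a JSON line."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "action": action,
        "dataset": dataset,
        "model": model,
    }
    return json.dumps(record)
```

With entries like this appended to a log, "reconstruct, explain, and correct" stops being a principle and becomes a query.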

How Castaldo Solutions builds AI governance in companies

In this complex context, Castaldo Solutions works to put these governance pillars in place inside companies, helping entrepreneurs and decision makers build solid foundations before accelerating innovation.

It's not about slowing down out of fear, but about proceeding with awareness, integrating security, compliance, and data protection from the design phase. For this reason, we offer training and project pathways funded by the Lombardy Region, which allow companies to invest in critical skills without heavily impacting the budget.

One of our distinctive pathways is Cloud Innovators, a program focused on cybersecurity, compliance and governance in the cloud, with a personalized approach to the company's actual needs in digital transformation and regulatory compliance.

Unlike standardized courses, Cloud Innovators starts with a specific diagnosis of the business context: where sensitive data is located, what concrete risks exist, which regulations impact the business (GDPR, NIS2, sector-specific), what skills gaps exist in the team. Only after this assessment phase is a customized pathway built, which can include technical training, policy review, implementation of audit trail systems, incident response simulations, and support in managing DPIAs.

The pathway is funded up to 10,500 euros as a grant by the Lombardy Region, with coverage up to 100% of eligible costs for professionals and businesses investing in critical skills for the digital future. It's not a simple theoretical course, but an operational accompaniment that brings measurable innovation to the company while building the foundations of solid and sustainable governance.

The question every company must ask itself

In summary, while public debate oscillates between enthusiasm and fear, the real question for organizations is very practical: are we building AI we can afford to use for the long term, or are we just chasing capabilities, hoping security sorts itself out?

Technology alone will keep running. It's up to us, as companies, designers, and decision makers, to decide whether we want it to run on fragile foundations or on solid ground, where efficiency matters but never comes before people and their rights.


Getting ready now means not suffering innovation, but guiding it. With governance, skills, and the right tools, AI becomes a safe accelerator, not an uncontrolled risk.

Gaetano Castaldo Sole 24 Ore

Founder & CEO · Castaldo Solutions

Digital transformation consultant with enterprise experience. I help Italian SMEs adopt AI, CRM, and IT architectures with measurable results in 90 days.

