As 2025 draws to a close, firms have weathered another year of rapidly evolving technologies, risks, and regulatory change. While these are perennial challenges, two unique forces had an outsized impact on reshaping cybersecurity risk in 2025: diverging cybersecurity regulations and AI.
From Europe’s new AI and resilience regulations, and the SEC’s withdrawal of 14 proposed rules, to generative AI fueling an explosion in phishing, deepfakes, and social engineering attacks, these forces have required cybersecurity, compliance, and business leaders to shift how they manage risk and create value for their firms.
Diverging Paths: Regional Splits in Regulation
2025 made one thing clear: regulatory expectations for cybersecurity and privacy are diverging. The EU stuck to a more structured playbook, rolling out frameworks and regulations designed to require stricter oversight and risk management of cybersecurity and new technologies, including:
- The AI Act: a landmark, risk-based regulation intended to ensure that artificial intelligence systems used within the EU are safe, transparent, and non-discriminatory.
- The Data Act: gives users control over data from connected devices, requires secure, free access to that data in machine-readable formats, allows sharing with third parties and public bodies, and supports cloud switching to promote fair markets.
- The Digital Operational Resilience Act (DORA): aims to harden resilience by setting a common framework for managing information and communications technology (ICT) risk in finance, requiring governance, risk management, incident reporting, resilience testing, and oversight of ICT providers.
The UK and US took a different route; the UK adopted a flexibility-driven, innovation-first approach, as reflected in the FCA’s 2025–2030 strategy, which emphasizes adaptability and fostering innovation within financial markets.
The United States has shifted from a primarily enforcement-forward stance to a more collaborative rulemaking process. This transition is evident through several initiatives:
- Notice-and-comment procedures: Encouraging public input on proposed regulations.
- SEC webinars and roundtables: Facilitating dialogue between regulators and market participants.
- Public outreach initiatives: Aiming to simplify compliance and promote market efficiency, particularly in crypto assets and private markets.
Even as it makes this transition, the SEC continues to enforce rigorous cybersecurity standards, expecting firms to maintain consistent governance and practices that reasonably manage information security and operational risks.
For global businesses juggling customer data across borders, figuring out which rules apply is a high-stakes puzzle that only gets harder as these approaches continue to diverge. Missteps can be costly, both financially and reputationally. Under the EU AI Act, penalties for the most serious violations can reach €35 million or 7% of global annual turnover. And the complexity isn’t slowing down; it’s accelerating.
Every new framework adds layers of obligations, and every violation can cascade into multi-jurisdiction penalties, operational disruptions, and shareholder scrutiny. In a world where regulators are rewriting the rules faster than businesses can adapt, compliance gaps aren’t just a legal risk; they’re an existential threat.
Strategic Moves for an Evolving Landscape
Considering these trends, businesses must prioritize these strategic actions:
- Conduct a risk and gap assessment: Start by mapping all your digital assets and data repositories, then identify potential threats and their impact. Leaving any gaps against regulatory standards can lead to mishandled data, costly breaches, and severe penalties. In today’s regulatory landscape, missing these basics isn’t just a technical oversight; it’s a business risk that can hit hard.
- Implement continuous compliance monitoring: Regulations are evolving faster than annual reviews can keep pace. Automate compliance tracking and reporting to catch changes early and avoid last-minute, rushed adjustments (a minimal sketch follows this list).
- Prioritize data governance: Ensure robust data protection, proper data disposal, and safeguarding policies are in place. Global regulations mandate these practices, and neglecting them can lead to compliance failures, hefty fines, and reputational damage.
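For illustration, the sketch below shows one way automated compliance monitoring might work in practice: comparing observed control status against a baseline of required controls and flagging gaps. The control names and the fetch_control_status() helper are hypothetical placeholders, not a prescribed implementation; a real program would pull status from asset inventories, cloud APIs, or GRC tooling and map each control to specific regulatory obligations.

```python
# Minimal sketch of automated compliance-control monitoring (illustrative only).
# The control names and fetch_control_status() are hypothetical placeholders.

from datetime import datetime, timezone

# Baseline of controls the firm has mapped to its regulatory obligations.
required_controls = {
    "mfa_enforced": True,
    "backups_tested_last_90_days": True,
    "incident_response_plan_reviewed": True,
    "vendor_ict_risk_assessed": True,  # e.g. DORA-style third-party oversight
}

def fetch_control_status() -> dict:
    """Stand-in for queries against real systems (IdP, backup jobs, GRC records)."""
    return {
        "mfa_enforced": True,
        "backups_tested_last_90_days": False,
        "incident_response_plan_reviewed": True,
        "vendor_ict_risk_assessed": False,
    }

def report_gaps() -> list[str]:
    """Compare observed control status to the baseline and list any gaps."""
    observed = fetch_control_status()
    gaps = [name for name, required in required_controls.items()
            if required and not observed.get(name, False)]
    timestamp = datetime.now(timezone.utc).isoformat()
    for gap in gaps:
        print(f"[{timestamp}] COMPLIANCE GAP: {gap}")
    return gaps

if __name__ == "__main__":
    report_gaps()
```

Run on a schedule, a check like this surfaces drift between documented controls and actual practice well before an annual review would.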
The Rise of AI in 2025
In 2025, generative AI (GenAI) not only reshaped external cyber threats but also introduced new internal risks that organizations can’t afford to ignore. While attackers leveraged AI to automate and scale their social engineering and ransomware attacks, businesses exposed themselves to new risks and vulnerabilities by embracing GenAI without adequate cybersecurity and privacy controls.
On the inside, convenience became a double-edged sword. Employees using GenAI to draft sensitive documents or write code often unknowingly expose confidential data or introduce insecure snippets into production systems. These “silent leaks” can escalate into major risks and often stem from blind trust in GenAI systems; a recent example is the leak of private ChatGPT conversations online.
Externally, the picture is just as alarming. GenAI-powered phishing emails are now grammatically flawless, context-aware, and emotionally persuasive, making traditional red flags obsolete. The Arup deepfake scam is a chilling example: an employee was tricked into transferring $25.6 million during a video call with an AI-generated fake CFO. And it doesn’t stop there: Large Language Models (LLMs) are being used to generate executable malware and attack scripts that once required skilled human effort.
Together, these internal and external risks have made cyberattacks faster, cheaper, and harder to detect, marking a major evolution in the threat landscape. And the fallout isn’t just technical; it’s economic too. Breaches, fraud, and compliance failures can trigger multimillion-dollar losses, regulatory penalties, and reputational damage that ripple across markets. For many organizations, the cost of inaction could be catastrophic.
Watch our on-demand Scariest Cyber Breaches webcast to learn more about damaging cyber-attacks and how to avoid them.
Steps to Address Emerging GenAI Risks
To stay ahead of GenAI threats, businesses need to act now, with these key steps:
- Establish a strong AI use policy: Set clear guidelines for how GenAI tools should be used across the organization to prevent misuse, protect sensitive data, and promote ethical practices (see the illustrative sketch after this list).
- Establish formal processes for human oversight: Implement structured procedures to ensure that outputs from AI tools are consistently reviewed and validated by humans, and train employees on why this oversight is critical.
- Strengthen identity verification: Enhance authentication processes to guard against impersonation, especially in high-stakes interactions like video calls or financial approvals.
- Update incident response plans: Revise response protocols to include GenAI-specific threats such as deepfake scams and AI-generated malware, ensuring teams are prepared for new attack vectors.
- Educate employees on GenAI risks: With phishing attacks surging by 1,265%, training employees to spot AI-driven threats such as deepfakes and emotionally persuasive frauds that bypass traditional warning signs is now a priority, one echoed by the SEC in its 2026 Examination Priorities, which emphasize AI risk awareness and training across firms.
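As one illustration of how an AI use policy might be operationalized, the sketch below shows a simple redaction gate that scrubs obvious sensitive patterns from text before it is sent to an external GenAI service. The patterns and the send_to_genai() stub are hypothetical; a real deployment would rely on dedicated data loss prevention tooling, broader pattern coverage, and logging for human review.

```python
# Minimal sketch of a policy "gate" that scrubs obvious sensitive patterns before
# text leaves the organization for an external GenAI service (illustrative only).

import re

SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
}

def redact(text: str) -> tuple[str, list[str]]:
    """Replace matches with placeholders and report which categories were found."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(text):
            findings.append(label)
            text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
    return text, findings

def send_to_genai(prompt: str) -> str:
    """Stand-in for a call to an approved GenAI provider."""
    return f"(model response to: {prompt[:60]}...)"

if __name__ == "__main__":
    draft = "Summarize this note for client jane.doe@example.com, SSN 123-45-6789."
    clean, findings = redact(draft)
    if findings:
        print(f"Redacted before sending: {findings}")
    print(send_to_genai(clean))
```

Even a lightweight gate like this makes the policy enforceable rather than aspirational, and the findings log gives reviewers a record of near-miss “silent leaks.”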
Key Priorities for Staying Resilient and Compliant
The lessons of this year are clear: to thrive in 2026 and beyond, organizations must act now to build resilience and safeguard their future.
- Implement a well-defined AI governance framework: With AI risks climbing and both internal and external threats on the rise, having a solid AI governance framework is no longer optional; it’s essential. Clear policies ensure AI is used responsibly, ethically, and in full compliance across the organization.
- Apply access controls: Enforce clear access controls so only those who truly need sensitive data and systems can reach them, and limit everyone else’s access to minimize unnecessary exposure and reduce risk (a minimal sketch follows this list).
- Reinforce penetration testing and network segmentation: Adopt a proactive approach to penetration testing as new technologies come into play and regulatory requirements tighten. Regular testing uncovers vulnerabilities early, while network segmentation helps contain threats and protect critical systems; together, they create a resilient security posture. In many jurisdictions, penetration testing is a regulatory requirement, making it essential for compliance and risk reduction.
- Implement operational resilience measures: Establish an incident response plan with regular testing and timely updates, alongside robust backup protocols to ensure continuity and quick recovery. Regulations like DORA now mandate strong incident response capabilities, underscoring their critical role in resilience.
- Stay ahead of regulations: Track evolving compliance requirements closely; with regulations moving in different directions across regions, staying informed is critical to navigating risks confidently and avoiding costly pitfalls.
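To make the access control point concrete, the sketch below shows a minimal least-privilege check that allows an action only when a role’s permission set explicitly includes it. The roles, permissions, and check_access() helper are hypothetical; production systems would typically enforce this through an identity provider or policy engine rather than an in-code mapping.

```python
# Minimal sketch of a least-privilege access check (illustrative only).
# Roles, permissions, and check_access() are hypothetical placeholders.

ROLE_PERMISSIONS = {
    "analyst": {"read:market-data"},
    "compliance": {"read:market-data", "read:client-records"},
    "admin": {"read:market-data", "read:client-records", "write:client-records"},
}

def check_access(role: str, permission: str) -> bool:
    """Allow an action only if the role's permission set explicitly includes it."""
    allowed = permission in ROLE_PERMISSIONS.get(role, set())
    verdict = "ALLOW" if allowed else "DENY"
    print(f"{verdict}: role={role!r} permission={permission!r}")
    return allowed

if __name__ == "__main__":
    check_access("analyst", "read:client-records")     # denied: not granted to analysts
    check_access("compliance", "read:client-records")  # allowed: explicitly granted
```

The design choice to default to deny, granting nothing unless it is listed, is what keeps access aligned with genuine need as roles and systems change.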
These focus areas form the foundation for a secure and forward-looking technology and compliance strategy.
Resilience in the Age of Intelligent Threats
Closing out 2025, it is evident that resilience and adaptability are non-negotiable. AI is transforming both risks and protections, while global regulations add complexity. Success will hinge on proactive, ethical action and strategies that look beyond compliance toward long-term security.
ACA: Your Strategic Partner
ACA helps firms navigate complex cybersecurity and compliance challenges with confidence. From AI governance frameworks and penetration testing to operational resilience and regulatory tracking, our experts provide tailored solutions to keep your organization secure and compliant. Learn more about our cybersecurity services here.