
Fortinet 2025 Security Awareness Report: Key Findings

Analysis of the Fortinet 2025 Security Awareness and Training Report covering AI readiness gaps, training completion rates, and an action plan for MSP teams.


Fortinet’s 2025 Security Awareness and Training Global Research Report surveyed 1,850 senior IT security decision-makers across 29 countries in November 2025, conducted by Sapio Research with a ±2.3% margin of error at 95% confidence. The findings paint a familiar but increasingly urgent picture: organizations broadly agree that security awareness matters, yet the gap between that belief and measurable employee readiness refuses to shrink.

This article pulls out the findings that matter most for MSP teams and maps them to things you can actually do about them.

AI threats are reshaping how employees see security

The rise of AI-powered attacks has done something years of compliance training struggled to achieve: it made employees pay attention. 88% of respondents say the growing use of AI by threat actors has increased how seriously their workforce takes security awareness training. Nearly half (47%) describe that shift as significant.

But awareness and readiness are not the same thing. When asked whether employees are actually equipped to identify, avoid, and report AI-based cyberthreats in the next 12 months, the confidence drops sharply.

  • 88% see AI threats as a wake-up call (perceive greater importance of training)
  • 40% say staff are actually ready (highly trained for AI-specific threats)

Perception and preparedness are not the same metric

The regional breakdown makes the gap even more visible. Asia Pacific leads at 53% reporting high readiness, while EMEA trails at 37%, a 16-point spread that reflects different rates of AI adoption and training investment across geographies.

AI readiness by region: Asia Pacific 53%, North America 51%, Latin America 49%, EMEA 37%

What to do now: Audit your current training curriculum for AI-specific threat coverage. If AI social engineering, prompt injection risks, and data leakage through AI tools aren’t covered in dedicated modules, they need to be. Set a measurable baseline for “highly trained” and track it quarterly.

AI policy exists on paper, but enforcement lags behind

Organizations are not ignoring AI governance. 96% of respondents say they either have measures in place or are actively researching and implementing security policies for GenAI apps and other AI tools. Of that group, 68% report having already deployed those measures.

AI security policy status: measures already in place 68%, researching or implementing 28%, no measures 4%

The problem is that policy documents don’t enforce themselves. Only 42% of organizations say they actually have tools to monitor how employees use AI. That leaves a majority with written rules but no way to know whether anyone follows them.

  • 96% have or are building AI policies (researching or already implemented)
  • 42% can actually monitor AI use (visibility into employee AI activity)

A policy without enforcement tooling is a paper exercise

Meanwhile, 53% of organizations train employees on proper GenAI usage and 53% use technologies to monitor or block sharing of sensitive information with AI tools. Those are reasonable starting points, but they leave nearly half of organizations without even basic controls.

As documented in the Huntress 2026 Cyber Threat Report, AI is functioning as a productivity accelerator for attackers rather than an entirely novel attack vector. The same logic applies on the defensive side: AI governance doesn’t need to be a new discipline built from scratch. It needs to be integrated into existing data protection and acceptable use frameworks.

The awareness gap that refuses to close

Here is the most stubborn number in the report: 69% of leaders say their employees lack security awareness. Last year it was 67%. The year before that, the picture looked similar. Despite widespread investment in training programs, leadership confidence in employee readiness has barely moved.

This isn’t because the training doesn’t exist. 95% of decision-makers believe that more security awareness would help reduce cyberattacks. 70% say their employees see security as a shared responsibility. And 70% believe their users can spot a spoofed email, with 23% rating that ability as “very good.”

But then the contradictions surface: 26% of employees who acknowledge security as important don’t consistently act on that belief. That gap between attitude and behavior is wide enough for a phishing campaign to walk through.

Leaders who say employees lack security awareness, by region: Latin America 75%, Asia Pacific 71%, EMEA 67%, North America 62%

Even North America, the most confident region, has 62% of leaders reporting an awareness deficit. Latin America sits at 75%. No region is close to comfortable.

Training is happening, but completion is the crisis

On the surface, the training discipline numbers look solid. 94% of organizations hold security awareness sessions on a regular schedule: 46% quarterly, 32% monthly, 16% annually.

Training session frequency: quarterly 46%, monthly 32%, annually 16%, other 6%

88% tailor training to specific employee groups, with 64% focusing on frequently targeted roles and 58% on individuals who appear to have the least security knowledge. The intent and structure are there.

Then the completion data arrives and reframes the entire picture.

  • 94% schedule regular training (monthly, quarterly, or annually)
  • 6% achieve 100% completion (only 1 in 17 organizations)

Scheduling training and completing it are two very different things

Just over half (56%) of organizations report completion rates above 70%, roughly the survey average. And since only 6% achieve full completion, 94% of organizations have employees who never finish the training they’re given. This single data point likely explains more about the persistent awareness gap than any other finding in the report.
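The arithmetic behind these figures is easy to reproduce for your own tenants. Below is a minimal Python sketch that classifies per-tenant completion against the report’s thresholds; the tenant names and counts are hypothetical placeholders, not report data.

```python
# Minimal sketch: classify per-tenant training completion against the
# report's thresholds (above 70% average, 100% full completion).
# Tenant names and counts below are hypothetical, not from the report.
def completion_rate(completed: int, assigned: int) -> float:
    """Fraction of assigned employees who finished their training."""
    return completed / assigned if assigned else 0.0

tenants = {
    "acme": (182, 200),    # (completed, assigned)
    "globex": (200, 200),
    "initech": (95, 200),
}

for name, (done, total) in tenants.items():
    rate = completion_rate(done, total)
    band = "full" if rate == 1.0 else "above 70%" if rate > 0.7 else "below 70%"
    print(f"{name}: {rate:.0%} ({band})")
```

Feed it a real LMS export instead of the literal dictionary and the same banding gives you a per-client view of where the 94% problem lives.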

Training works when it actually happens

For organizations that do get training across the finish line, the results are measurable. 67% report moderate or significant reductions in intrusions, incidents, and breaches since implementing their programs.

The most common ways organizations measure effectiveness: reduced security incidents (53%), employee feedback (52%), and security audits (50%). A further 42% track training completion rates, and 40% use phishing simulation results. Respondents could select multiple measures, and most use a blend, which is the right approach.

The regional picture shows meaningful variation in outcomes:

Organizations reporting reduced incidents after training, by region: North America 73%, Latin America 69%, EMEA 65%, Asia Pacific 63%

North America leads at 73%, which correlates with its higher rates of monthly/quarterly training (86% combined). EMEA, which reports the lowest frequency of regular training (69% monthly/quarterly combined), also shows the lowest incident reduction at 65%.

The takeaway isn’t complicated: organizations that train more frequently and measure more rigorously see better outcomes. If your training program lacks a measured outcome (reduced incidents, improved phishing simulation scores, or at minimum a tracked completion rate), the data to justify or improve the program simply doesn’t exist.

Training priorities reflect the threat landscape

The topics organizations prioritize for awareness training broadly mirror the threats they face. Data security leads at 51%, followed by data privacy at 43% and AI-based tools and threats at 41%.

Topic | 2025 | 2024
Data security | 51% | 48%
Data privacy | 43% | 41%
AI-based tools and threats | 41% | —
Protection against malware and ransomware | 34% | 22%
Cloud and application security | 33% | —
Information security concepts | 29% | —
Reporting incidents | 23% | —
Phishing, smishing, and vishing | 23% | 28%

Several categories are new to the 2025 survey (AI-based threats, cloud security, information security concepts, and incident reporting), so direct year-over-year comparisons aren’t always possible. But the overall direction is clear: the training focus is broadening to cover a wider attack surface.

The more interesting shift is in what motivates training adoption in the first place:

  • External threat training as an adoption driver: 52% (2024) → 41% (2025)
  • Insider risk training as an adoption driver: 4% (2024) → 27% (2025)

Insider risk went from afterthought to first-class priority in a single year

External threats remain the top motivator, but the 11-point drop likely reflects the addition of new survey options rather than a genuine decrease in concern. The insider risk jump from 4% to 27%, however, is striking regardless of methodology changes. Organizations are increasingly recognizing that training must address internal behavior, not just external attacks.

30-day action plan for MSP teams

These findings point to specific, addressable gaps. Here’s how to close the most critical ones in four weeks.

Week 1: Audit your current training program

  • Pull completion rate data for every client tenant. If it’s not being measured, that’s the first thing to fix
  • Map existing training modules against the top three topic areas: data security, data privacy, and AI-specific threats
  • Identify which employee groups have role-tailored training tracks and which are getting one-size-fits-all content
  • If phishing simulations have been dropped or deprioritized, schedule a new baseline campaign this week
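The module-mapping step above can be scripted rather than done by hand. This sketch assumes a hypothetical per-tenant catalog export; the module names are stand-ins for however your LMS labels its content.

```python
# Minimal sketch: flag tenants whose curriculum misses the report's top
# three topic areas. Module names are hypothetical stand-ins for an LMS
# catalog export; match them to however your platform labels modules.
REQUIRED = {"data security", "data privacy", "ai threats"}

tenant_modules = {
    "acme": {"data security", "phishing", "ai threats"},
    "globex": {"data security", "data privacy", "ai threats", "insider risk"},
}

for tenant, modules in tenant_modules.items():
    missing = REQUIRED - modules
    status = "OK" if not missing else "missing: " + ", ".join(sorted(missing))
    print(f"{tenant}: {status}")
```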

Week 2: Close the AI readiness gap

  • Add a dedicated GenAI social engineering module covering AI-generated phishing, deepfake voice/video scams, and data leakage through AI tools
  • Inventory which AI tools employees are actually using, both sanctioned and shadow IT, across every tenant
  • Implement or audit DLP policies that flag sensitive data submitted to external AI services (Microsoft Purview, Google Workspace DLP, or your CASB)
  • Set a measurable target: what percentage of employees should score “highly trained” on AI threats by Q3?
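Tracking that measurable target takes only a few lines. In the sketch below, the quiz scores are hypothetical, and both the 80-point “highly trained” cutoff and the 70% goal are illustrative choices, not figures from the report.

```python
# Minimal sketch: measure progress toward a "highly trained on AI threats"
# target. Quiz scores (0-100) are hypothetical; treating 80+ as highly
# trained and setting a 70% goal are illustrative choices, not report figures.
TARGET = 0.70
THRESHOLD = 80

scores = [92, 85, 60, 78, 88, 45, 81, 90, 70, 83]

highly_trained = sum(s >= THRESHOLD for s in scores) / len(scores)
gap = TARGET - highly_trained
print(f"highly trained: {highly_trained:.0%}, gap to target: {gap:+.0%}")
```

Re-run the same calculation each quarter and the gap figure becomes the trend line you report to clients.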

Week 3: Fix completion rates

  • Enable manager-visible completion dashboards so managers can see exactly who finished and who didn’t, turning completion into an individual accountability item
  • Restructure long annual training into shorter mandatory modules (15–20 minutes each) delivered monthly or quarterly
  • Add mandatory completion gates for high-risk roles: finance, IT admin, and executives
  • Run a phishing simulation campaign and compare click rates against your last available baseline
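The baseline comparison in the last step might look like this; tenant names and click counts are hypothetical.

```python
# Minimal sketch: compare a new phishing simulation campaign against the
# last available baseline, per tenant. Click counts are hypothetical.
def click_rate(clicks: int, delivered: int) -> float:
    """Fraction of delivered simulation emails that were clicked."""
    return clicks / delivered if delivered else 0.0

baseline = {"acme": click_rate(34, 200), "globex": click_rate(18, 150)}
current = {"acme": click_rate(22, 200), "globex": click_rate(21, 150)}

for tenant in baseline:
    delta = current[tenant] - baseline[tenant]
    trend = "improved" if delta < 0 else "regressed"
    print(f"{tenant}: {baseline[tenant]:.0%} -> {current[tenant]:.0%} ({trend})")
```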

Week 4: Establish measurement and reporting

  • Define three core metrics: training completion rate, simulated phishing click rate, and security incident count
  • Baseline all three this month across every tenant if not already done
  • Set quarterly review checkpoints with a named owner responsible for reporting on each metric
  • If personnel limitations are the barrier for your clients, evaluate third-party awareness platforms that support MSP multi-tenant deployment and white-labeling
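One way to sketch that quarterly scorecard is a record per metric pairing a named owner with a baseline-vs-current delta. Every value and owner name below is a hypothetical placeholder.

```python
# Minimal sketch: a quarterly scorecard pairing each core metric with a
# named owner and a baseline-vs-current delta. All values and owner
# names are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    owner: str
    baseline: float
    current: float

    def delta(self) -> float:
        return self.current - self.baseline

metrics = [
    Metric("training completion rate", "j.smith", 0.62, 0.78),
    Metric("phishing click rate", "a.jones", 0.17, 0.11),
    Metric("security incidents / quarter", "j.smith", 9, 6),
]

for m in metrics:
    print(f"{m.name} ({m.owner}): {m.baseline} -> {m.current} ({m.delta():+g})")
```

Having an owner on the record, not just a number, is what makes the quarterly review checkpoint enforceable.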

Key takeaways

  • Perception is not preparedness. 88% of leaders say AI threats raised the importance of security awareness, but only 40% say their employees are actually ready. Audit readiness against a defined standard and don’t assume that caring about security translates to competence.
  • Policy without enforcement tooling is decorative. 96% have or are building AI policies, but only 42% have tools to monitor employee AI use. The gap between the document and the control is where risk accumulates.
  • The awareness gap is structurally stable. 69% of leaders say employees lack awareness, essentially unchanged from 2024. Without changing training design or enforcement mechanisms, reporting the same number next year is the expected outcome.
  • Training completion is broken at scale. 94% of organizations schedule regular training. 6% achieve full completion. This is an accountability and engagement problem, not a time or budget problem.
  • Training demonstrably reduces incidents. 67% of organizations report fewer intrusions as a direct result of awareness programs. The ROI argument is settled; the execution gap is where programs fail.
  • Insider risk is now a first-class priority. A jump from 4% to 27% as a training driver signals a genuine shift in how organizations model risk. Programs that don’t include insider risk modules are already behind.

What’s next

If you’re working through the security controls that complement a training program, these articles cover the threat landscape and operational side:

Huntress 2026 Cyber Threat Report: Key Findings for MSPs

Analysis of the Huntress 2026 Cyber Threat Report covering identity compromise, RMM abuse, ClickFix loaders, ransomware timelines, and a 30-day action plan.

Huntress Blocks Device Code Phishing from Railway

Huntress deployed a conditional access policy across ITDR-protected tenants to block device code phishing from Railway infrastructure using AI-generated lures.

Hosting a Monitoring Stack - Grafana, InfluxDB, and Telegraf

Deploy a complete self-hosted monitoring stack using Grafana, InfluxDB, and Telegraf with Docker Compose — from installation to your first dashboard.
