Certified Artificial Intelligence Resilience Professional
Governing AI Risk in Cybersecurity & Digital Operational Resilience

Enrol Now: Course Launches on March 30th 2026
"Certified AI Resilience Professional — Practical AI Governance, Risk & Digital Resilience Credential"
Why This Course Exists
Artificial Intelligence is no longer an experimental technology sitting at the edge of the organisation.
It is already embedded — often quietly — into cybersecurity tools, fraud detection, identity systems, HR platforms, customer analytics, and decision-making processes that materially affect customers, staff, and critical operations.
For boards, executives, and senior risk leaders, this creates a new category of operational and governance risk.
AI systems can:
- Make decisions that are opaque, difficult to explain, or impossible to audit
- Introduce bias, instability, or over-reliance on automated judgement
- Fail in ways that are non-deterministic and hard to predict
- Expose organisations to regulatory, legal, reputational, and resilience impacts
And crucially — many leaders are expected to oversee these risks without having been trained to do so.
Regulators are already moving.
- The EU AI Act introduces formal obligations around risk classification, governance, transparency, and accountability
- NIS2, DORA, and Operational Resilience frameworks increasingly assume that organisations understand and control the technologies underpinning critical services — including AI-enabled systems
- Boards are being asked not whether AI is used, but how it is governed, challenged, and controlled
In this environment, “we use reputable tools” is no longer a defensible position.
What is required now is:
- The ability to ask the right questions of vendors, technical teams, and advisors
- The confidence to challenge AI-driven outputs, assumptions, and recommendations
- A practical understanding of where AI risk sits alongside cyber, operational, and third-party risk
- The judgement to decide when AI should be trusted — and when it should not
The Certified AI Resilience Professional (CAIRP) course exists to close this gap.
It is not about building AI systems or becoming a data scientist.
It is about ensuring that those accountable for governance, resilience, and regulatory readiness can oversee AI use in a way that is defensible, proportionate, and aligned with real-world risk.
Because very soon, boards and regulators will not be asking:
“Do you understand AI?”
They will be asking:
“Can you justify the decisions your AI systems are influencing — and the risks you accepted by using them?”
CAIRP prepares you to answer that question with confidence.
At a Glance
Access: 60-day online access with on-demand modules and bi-weekly live EU Cyber Academy Learning Forums
Audience: Senior cybersecurity, risk, compliance, governance, and digital resilience professionals responsible for overseeing or advising on the use of AI
Outcome: Clear, defensible capability to understand AI in plain English, evaluate AI tools responsibly, and govern AI-related risk in cybersecurity and digital operational resilience contexts
Format: Self-paced online learning supported by bi-weekly live tutor-led Learning Forums and downloadable case books
Assessment: Single professional exam — 50 MCQs, 80% pass mark, up to three attempts
Certification: Certified AI Resilience Professional (CAIRP)
CPD: 10 CPD points
"CAIRP is not an AI awareness course, a technical deep-dive, or a tool-specific training programme. It is a practitioner-led professional credential designed for those accountable for governing, overseeing, and evidencing the responsible use of artificial intelligence in cybersecurity and digital operational resilience contexts."
Who Is This Course For?
The Certified AI Resilience Professional (CAIRP) is designed for professionals who must make informed decisions, provide oversight, and take accountability for the responsible use of artificial intelligence in cybersecurity and digital operational resilience contexts.
CAIRP equips you with the practical understanding, governance frameworks, and professional judgement required to evaluate AI use cases, assess AI-related risk, and integrate AI oversight into existing cybersecurity, risk, and resilience structures. The course supports professionals from initial understanding and tool evaluation through to governance, reporting, and board-level communication of AI-related decisions.
Ideal For
- Senior professionals accountable for cybersecurity, digital resilience, risk, or regulatory compliance
- CISOs, Heads of Security, CIOs, Risk & Compliance Leaders, and Governance & Assurance professionals
- Those sponsoring, directing, or overseeing the use of AI tools within security, risk, or business functions
- Professionals required to brief boards, regulators, auditors, or senior stakeholders on AI-related risks and controls
- Consultants and advisors supporting organisations with AI governance, risk management, and resilience strategy
This Course Is Not For
- Learners seeking a general AI awareness or introductory technology course
- Practitioners looking for technical AI development, coding, or system configuration training
- Individuals seeking vendor-specific AI tool certifications or data science qualifications
- Those with no responsibility for governance, oversight, or decision-making relating to AI use

CAIRP Enables You to
On successful completion of CAIRP, you can:
Govern AI Use with Credible Oversight
- Provide boards and senior leaders with clear assurance that AI use is responsible, proportionate, and aligned to resilience objectives.
Translate AI Concepts Into Plain English
- Explain core AI mechanisms (including generative AI, LLMs, RAG workflows) in terms executives and regulators understand.
Embed AI Governance into Risk & Resilience Frameworks
- Design and implement AI oversight arrangements that integrate into your organisation’s existing risk, compliance, and resilience structures.
Define and Defend AI Decision Boundaries
- Establish clear scoping, tool selection criteria, and boundaries of acceptable use — with accountability, ownership, and rationale documented.
Identify and Challenge AI-Related Risk
- Recognise meaningful cyber, operational, and resilience risks arising from AI use and prevent unmanaged (shadow) AI and over-reliance on outputs.
Justify and Govern AI Tool Use
- Select, justify, and govern AI tools (including commonly used generative and presentation tools) with confidence in capabilities, limitations, and risk trade-offs.
Build Evidence-Based Rationale for Decisions
- Compile credible evidence and professional judgement to support AI-assisted decisions suitable for engagement with boards, regulators, auditors, and customers.
Produce Board-Ready, Risk-Focused Reporting
- Prepare concise, clear summaries that explain how AI is being used, governed, and controlled — highlighting where human judgement remains essential.
Avoid Common Governance and Resilience Failures
- Prevent oversight breakdowns such as uncontrolled tools, misinterpreted outputs, opaque decisions, or compliance that exists only on paper.

Why CAIRP is Different
CAIRP is different because it:
- Focuses on governance-led AI oversight — equipping you to make, direct, and defend decisions about the use of artificial intelligence in cybersecurity and digital operational resilience, rather than delegating judgement to tools or technology teams.
- Explains AI mechanics without unnecessary technical complexity — you will understand how AI systems such as generative models, LLMs, and RAG work in practice, without being drawn into coding, system design, or vendor-specific implementation detail.
- Builds real professional capability, not just awareness — you will leave able to evaluate AI use cases, select and justify AI tools, define acceptable-use boundaries, and integrate AI oversight into existing risk, resilience, and governance arrangements.
- Is designed for accountability — ideal for professionals who sponsor, lead, or oversee the use of AI within cybersecurity, risk, compliance, or resilience programmes and must provide assurance to executive leadership and boards.
- Includes professional, competence-based assessment — a single scenario-based, judgement-focused examination that validates real-world decision-making capability, not just course completion.
- Is delivered in a premium professional format — self-paced expert instruction supported by applied case studies and practical exercises, enabling challenge, reflection, and confident application in senior roles.
CAIRP is not an AI awareness or technology overview course.
It is a professional resilience and governance credential designed to help organisations use AI responsibly, manage AI-related risk, and support defensible decision-making in cybersecurity and digital operational resilience contexts.

How CAIRP Is Delivered
The Certified AI Resilience Professional (CAIRP) is delivered as an evergreen, professionally structured programme designed to fit the realities of senior, governance-focused roles. It combines flexibility with rigour, ensuring learning translates directly into confident, defensible AI oversight and decision-making.
The programme is deliberately non-technical and focuses on judgement, governance, and real-world application rather than theory or tool-specific training.
The Programme Includes:
On-Demand Expert-Led Learning
Professionally recorded modules completed at your own pace, providing clear, plain-English guidance on:
- How AI works in practice (including LLMs, RAG, and generative tools)
- How AI is used — and misused — in cybersecurity and resilience contexts
- How to evaluate AI tools responsibly
- How to govern AI use in line with organisational and regulatory expectations
Content is grounded in real organisational scenarios rather than abstract theory.
Bi-Weekly EU Cyber Academy Learning Forums (Live Online)
Regular live Learning Forums providing direct access to tutors for discussion, clarification, and applied learning. These sessions focus on:
- Interpreting AI risks and governance challenges
- Evaluating real-world AI use cases
- Discussing regulatory and board-level considerations
- Challenging assumptions and sharing practitioner perspectives
Learners join the next available forum following enrolment.
Professional Assessment Exam
A single, professionally designed online assessment:
- 50 multiple-choice questions
- Scenario-based and judgement-focused
- 80% pass mark required
- Up to three attempts within the 60-day access period
The assessment validates applied understanding rather than technical or theoretical recall.
Certification on Successful Completion
Participants who successfully meet the assessment standard are awarded the Certified AI Resilience Professional (CAIRP) credential.
A Focused, Professional Learning Experience
This delivery model ensures CAIRP is not simply informative, but genuinely practical — building the confidence, judgement, and credibility required to oversee AI use responsibly, communicate AI-related risk to boards, and support resilient, defensible decision-making in practice.
Certification & Assessment
The Certified AI Resilience Professional (CAIRP) is a professional certification programme. The credential is awarded based on demonstrated competence, not attendance.
To achieve the Certified AI Resilience Professional (CAIRP) designation, participants must successfully complete a single professional assessment designed to validate applied understanding, judgement, and governance capability.
Professional Assessment Exam
The CAIRP assessment consists of a single online multiple-choice examination designed to test real-world professional judgement rather than technical detail or theoretical recall.
Assessment structure:
-
50 multiple-choice questions
-
Scenario-based and applied decision-making focus
-
Minimum pass mark: 80%
-
Up to three attempts within the 60-day access period
The exam assesses the ability to:
- Understand and explain AI concepts (including generative AI, LLMs, and RAG) in plain English
- Evaluate AI tools and use cases appropriately for senior and regulated environments
- Identify and challenge AI-related cyber and operational risks
- Apply governance, oversight, and accountability principles to AI use
- Interpret AI-related regulatory expectations, including the EU AI Act and relevant UK and international approaches
The assessment is intentionally non-technical and vendor-agnostic, reflecting the responsibilities of professionals accountable for oversight, governance, and assurance.
Certification Standard
CAIRP has been designed to ensure that successful candidates can credibly govern, oversee, and explain the use of artificial intelligence in cybersecurity and digital operational resilience contexts, and can support defensible decision-making with boards, regulators, auditors, and senior stakeholders.
Certification & Professional Recognition
On successful completion, participants are awarded the Certified AI Resilience Professional (CAIRP) credential and receive 10 CPD points.
Holders of the CAIRP designation may use the credential in professional contexts, subject to the certification terms and conditions.

Programme Leadership & Instructor Authority

The Certified AI Resilience Professional (CAIRP) programme is designed and delivered by Paul C Dwyer, a globally recognised authority in cybersecurity, digital resilience, and regulatory implementation, with a particular focus on governance, oversight, and board-level accountability.
Practical AI Governance & Resilience Expertise
In addition to his deep understanding of international cybersecurity and resilience frameworks, Paul has been directly involved in the practical governance and oversight of artificial intelligence in real organisational contexts. He is the architect of practical AI governance and digital resilience models used by organisations to evaluate AI use cases, assess AI-related risk, and integrate AI oversight into existing cybersecurity, risk, and resilience structures.
This work has provided him with a deep, practitioner-level understanding of how AI is actually used in organisations — including where AI adds value, where it introduces hidden fragility, and where governance and oversight commonly fail. His experience spans the evaluation of generative AI tools, decision-support systems, and AI-assisted workflows, with a strong focus on explainability, accountability, and defensible decision-making.
Board-Level and Regulatory Perspective
Paul works extensively with senior leadership teams and boards, advising on AI-related risk, cybersecurity, regulatory alignment, and digital operational resilience. His work regularly intersects with emerging regulatory expectations, including the EU AI Act, NIS2, DORA, and the UK Operational Resilience Framework, supporting organisations as they navigate regulatory scrutiny, customer assurance, and executive decision-making.
His experience spans both the design of governance models and the practical realities of regulatory engagement — including what regulators, auditors, and customers actually expect to see when organisations rely on AI in security, risk, and operational decision-making.
Why This Matters for CAIRP Participants
As a result, CAIRP is grounded not in abstract AI theory or technical experimentation, but in real-world governance experience, regulator-facing reality, and practical delivery. Participants benefit from instruction that reflects how AI is actually understood, evaluated, governed, and explained in practice — and what senior professionals must do to deliver credible, board-ready AI resilience outcomes.
HEAD OFFICE
ICTTF Ltd
ICTTF House
First Floor Unit 15
N17 Business Park
Tuam, Co Galway
H54 H1K2

info@icttf.org
support@icttf.org

+353 (0)1 905 3263

