Protect AI systems from evolving threats with hands-on, instructor-led training in AI Security.
These live courses teach how to defend machine learning models, counter adversarial attacks, and build trustworthy, resilient AI systems.
Training is available as online live training via remote desktop or onsite live training in Bonn, featuring interactive exercises and real-world use cases.
Onsite live training can be delivered at your location in Bonn or at a NobleProg corporate training center in Bonn.
Also known as Secure AI, ML Security, or Adversarial Machine Learning.
Our training facilities are located at Mozartstraße 4-10 in Bonn. The spacious training rooms lie southwest of the city centre and offer optimal conditions for your training needs.
Arrival
The NobleProg training facilities are conveniently located near Bonn main station, and the A565 motorway is easily reached to the west.
Parking
You will find numerous parking spaces around our training rooms.
Local Infrastructure
In downtown Bonn you will find numerous hotels and restaurants.
This instructor-led, live training in Bonn (online or onsite) is aimed at intermediate-level engineers and security professionals who wish to secure AI models deployed at the edge against threats such as tampering, data leakage, adversarial inputs, and physical attacks.
By the end of this training, participants will be able to:
Identify and assess security risks in edge AI deployments.
Apply tamper resistance and encrypted inference techniques.
Harden edge-deployed models and secure data pipelines.
Implement threat mitigation strategies specific to embedded and constrained systems.
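One tamper-resistance measure covered in this course, checking a model artifact's integrity before it is loaded on the edge device, can be sketched as follows. This is a minimal illustration; the function names and the fake model bytes are ours, not part of any specific framework.

```python
import hashlib
import hmac

def sha256_digest(data: bytes) -> str:
    # Digest of the serialized model, recorded by the build pipeline.
    return hashlib.sha256(data).hexdigest()

def verify_model(model_bytes: bytes, expected_digest: str) -> bool:
    # Constant-time comparison avoids leaking digest prefixes via timing.
    return hmac.compare_digest(sha256_digest(model_bytes), expected_digest)

model = b"\x00fake-model-weights"       # stand-in for a real model file
pinned = sha256_digest(model)           # pinned at build/signing time

print(verify_model(model, pinned))          # True: intact
print(verify_model(model + b"x", pinned))   # False: tampered, refuse to load
```

In production the pinned digest would itself be protected, for example by a signature verified against a key in a hardware security module, so that an attacker who can rewrite the model file cannot also rewrite the expected hash.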
This instructor-led, live training in Bonn (online or onsite) is aimed at advanced-level professionals who wish to implement and evaluate techniques such as federated learning, secure multiparty computation, homomorphic encryption, and differential privacy in real-world machine learning pipelines.
By the end of this training, participants will be able to:
Understand and compare key privacy-preserving techniques in ML.
Implement federated learning systems using open-source frameworks.
Apply differential privacy for safe data sharing and model training.
Use encryption and secure computation techniques to protect model inputs and outputs.
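The differential privacy outcome above can be illustrated with the classic Laplace mechanism for a numeric query. The helper names and the example count are illustrative; real pipelines would use a vetted library rather than hand-rolled sampling.

```python
import math
import random

def laplace_noise(scale: float, rng: random.Random) -> float:
    # Inverse-CDF sampling from the Laplace(0, scale) distribution.
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def laplace_mechanism(true_value: float, sensitivity: float,
                      epsilon: float, rng: random.Random) -> float:
    # Epsilon-DP release of a numeric query: add Laplace noise with
    # scale = sensitivity / epsilon (the query's L1 sensitivity).
    return true_value + laplace_noise(sensitivity / epsilon, rng)

rng = random.Random(7)
# Counting query: one person joining or leaving changes the count by at
# most 1, so the sensitivity is 1.
noisy = laplace_mechanism(42.0, sensitivity=1.0, epsilon=0.5, rng=rng)
print(round(noisy, 2))
```

Smaller epsilon means stronger privacy but more noise; choosing epsilon per release and accounting for the cumulative budget is a central topic in the course.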
Artificial Intelligence (AI) introduces new dimensions of operational risk, governance challenges, and cybersecurity exposure for government agencies and departments.
This instructor-led, live training (online or onsite) is aimed at public sector IT and risk professionals with limited prior experience in AI who wish to understand how to evaluate, monitor, and secure AI systems within a government or regulatory context.
By the end of this training, participants will be able to:
Interpret key risk concepts related to AI systems, including bias, unpredictability, and model drift.
Apply AI-specific governance and auditing frameworks such as NIST AI RMF and ISO/IEC 42001.
Recognize cybersecurity threats targeting AI models and data pipelines.
Establish cross-departmental risk management plans and policy alignment for AI deployment.
Format of the Course
Interactive lecture and discussion of public sector use cases.
AI governance framework exercises and policy mapping.
Scenario-based threat modeling and risk evaluation.
Course Customization Options
To request a customized training for this course, please contact us to arrange it.
This instructor-led, live training in Bonn (online or onsite) is aimed at advanced-level security professionals and ML specialists who wish to simulate attacks on AI systems, uncover vulnerabilities, and enhance the robustness of deployed AI models.
By the end of this training, participants will be able to:
Simulate real-world threats to machine learning models.
Generate adversarial examples to test model robustness.
Assess the attack surface of AI APIs and pipelines.
Design red teaming strategies for AI deployment environments.
This instructor-led, live training in Bonn (online or onsite) is aimed at intermediate-level enterprise leaders who wish to understand how to govern and secure AI systems responsibly and in compliance with emerging global frameworks such as the EU AI Act, GDPR, ISO/IEC 42001, and the U.S. Executive Order on AI.
By the end of this training, participants will be able to:
Understand the legal, ethical, and regulatory risks of using AI across departments.
Interpret and apply major AI governance frameworks (EU AI Act, NIST AI RMF, ISO/IEC 42001).
Establish security, auditing, and oversight policies for AI deployment in the enterprise.
Develop procurement and usage guidelines for third-party and in-house AI systems.
This instructor-led, live training in Bonn (online or onsite) is aimed at intermediate-level to advanced-level AI developers, architects, and product managers who wish to identify and mitigate risks associated with LLM-powered applications, including prompt injection, data leakage, and unfiltered output, while incorporating security controls like input validation, human-in-the-loop oversight, and output guardrails.
By the end of this training, participants will be able to:
Understand the core vulnerabilities of LLM-based systems.
Apply secure design principles to LLM app architecture.
Use tools such as Guardrails AI and LangChain for validation, filtering, and safety.
Integrate techniques like sandboxing, red teaming, and human-in-the-loop review into production-grade pipelines.
This instructor-led, live training in Bonn (online or onsite) is aimed at intermediate-level machine learning and cybersecurity professionals who wish to understand and mitigate emerging threats against AI models, using both conceptual frameworks and hands-on defenses like robust training and differential privacy.
By the end of this training, participants will be able to:
Identify and classify AI-specific threats such as adversarial attacks, inversion, and poisoning.
Use tools like the Adversarial Robustness Toolbox (ART) to simulate attacks and test models.
Apply practical defenses including adversarial training, noise injection, and privacy-preserving techniques.
Design threat-aware model evaluation strategies in production environments.
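The adversarial-example idea behind tools like ART can be shown from scratch with the Fast Gradient Sign Method (FGSM) against a toy logistic classifier. The weights and input below are made up for illustration; in the course, attacks like this are run with ART against real models.

```python
import numpy as np

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x: np.ndarray, y: float, w: np.ndarray, eps: float) -> np.ndarray:
    # FGSM against p(y=1|x) = sigmoid(w.x): nudge every feature by eps
    # in the sign of the loss gradient with respect to the input.
    grad = (sigmoid(w @ x) - y) * w     # d(log-loss)/dx for logistic loss
    return x + eps * np.sign(grad)

w = np.array([2.0, -1.0])               # toy "trained" weights
x = np.array([1.0, 0.2])                # clean input with true label 1
print(sigmoid(w @ x) > 0.5)             # True: classified correctly

x_adv = fgsm(x, y=1.0, w=w, eps=1.0)
print(sigmoid(w @ x_adv) > 0.5)         # False: small perturbation flips it
```

The same gradient-sign principle scales to deep networks, where the perturbation is computed by backpropagation through the model; defenses such as adversarial training reuse these generated examples during fitting.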
This instructor-led, live training in Bonn (online or onsite) is aimed at beginner-level IT security, risk, and compliance professionals who wish to understand foundational AI security concepts, threat vectors, and global frameworks such as NIST AI RMF and ISO/IEC 42001.
By the end of this training, participants will be able to:
Understand the unique security risks introduced by AI systems.
Identify threat vectors such as adversarial attacks, data poisoning, and model inversion.
Apply foundational governance models like the NIST AI Risk Management Framework.
Align AI use with emerging standards, compliance guidelines, and ethical principles.