Revised 8/2026
ITN 295 - Security of Artificial Intelligence Systems (3 CR.)
Course Description
Provides instruction in applying artificial intelligence (AI) technologies to cybersecurity operations, emphasizing secure AI deployment, risk management, and ethical governance. Covers AI fundamentals, protection of AI systems and data, AI-assisted security operations, and compliance frameworks for responsible AI use. Lecture 3 hours per week.
General Course Purpose
The purpose of this course is to equip students with a foundation in applying AI techniques to secure digital systems and enhance security operations. Students learn to protect AI-driven infrastructures, automate detection and response processes, and ensure compliance with ethical and regulatory frameworks. The course prepares learners for careers in AI-augmented cybersecurity and security operations centers (SOCs).
Course Prerequisites/Corequisites
Prerequisites: ITN 260 or equivalent networking and security knowledge. Basic familiarity with cybersecurity tools, scripting, or data analytics is recommended.
Course Objectives
Upon completing the course, the student will be able to:
- Explain core AI and machine learning principles relevant to cybersecurity operations.
- Identify current and emerging AI applications in defensive and offensive security contexts.
- Implement technical safeguards to secure AI models, training data, and deployment environments.
- Use AI tools to enhance threat detection, incident response, and security workflow automation.
- Integrate governance, risk, and compliance (GRC) frameworks into AI security projects.
- Describe legal, ethical, and regulatory considerations for the responsible use of AI in cybersecurity.
Major Topics to Be Included
- Basic AI Concepts Related to Cybersecurity
  - Principles and terminology of AI: machine learning, deep learning, natural language processing (NLP), and automation
  - AI use cases: threat detection, vulnerability management, security operations
  - AI-driven threats: automated phishing, polymorphic malware, adversarial ML, and malicious generative AI
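As a concrete illustration of the adversarial evasion theme listed under this topic, the sketch below shows how a naive keyword-based phishing detector can be defeated by a simple character-substitution attack. The keyword list, messages, and Unicode substitution are hypothetical teaching examples, not course materials:

```python
# Illustrative only: a naive keyword-based phishing detector and a
# homoglyph substitution that evades it, the kind of weakness that
# adversarial and generative techniques exploit at scale.

SUSPICIOUS_WORDS = {"urgent", "verify", "password", "account"}

def phishing_score(message: str) -> int:
    """Count suspicious keywords in a lowercased, lightly cleaned message."""
    words = message.lower().split()
    return sum(1 for w in words if w.strip(".,!:") in SUSPICIOUS_WORDS)

original = "Urgent: verify your account password now!"

# Adversarial variant: Cyrillic lookalikes for Latin 'a' and 'e'
# (U+0430, U+0435) break naive string matching without changing
# what a human reader sees.
evasive = original.replace("a", "\u0430").replace("e", "\u0435")

print(phishing_score(original))  # 4 keywords matched
print(phishing_score(evasive))   # 0: every keyword now misses
```

The point of the sketch is that detectors built on brittle surface features fail silently under small input perturbations, which motivates the robustness topics covered under "Securing AI Systems."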
- Securing AI Systems
  - Security controls for models, datasets, and pipelines
  - Securing deployment environments across on-premises, cloud, and hybrid systems
  - Mitigating adversarial attacks targeting neural networks, training data, and inference layers
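One basic control for the dataset and pipeline security topic above can be sketched with standard-library hashing: recording a digest of approved training data and re-verifying it before training, so that tampering (e.g., label-flipping poisoning) between pipeline stages is detected. The file name, manifest shape, and sample data are hypothetical:

```python
# Illustrative sketch: SHA-256 integrity verification of training data,
# one simple defense against data poisoning between pipeline stages.
import hashlib

def sha256_digest(data: bytes) -> str:
    """Return the hex SHA-256 digest of a byte string."""
    return hashlib.sha256(data).hexdigest()

# At ingestion time, record the digest of the approved dataset.
approved_dataset = b"label,feature\nbenign,0.1\nmalicious,0.9\n"
manifest = {"train.csv": sha256_digest(approved_dataset)}

def verify(name: str, data: bytes) -> bool:
    """Before training, confirm the data still matches its recorded digest."""
    return manifest[name] == sha256_digest(data)

# A label-flipping change, however small, produces a different digest.
tampered = approved_dataset.replace(b"malicious,0.9", b"benign,0.9")

print(verify("train.csv", approved_dataset))  # True
print(verify("train.csv", tampered))          # False
```

Digest manifests are only one layer; production pipelines would pair them with access controls and provenance tracking, which the deployment-environment subtopic addresses.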
- AI-Assisted Security
  - Using AI for anomaly detection, predictive defense, and incident response acceleration
  - Automation of SOC workflows: alert triage, correlation, and orchestration
  - Application of AI to behavioral analytics, threat modeling, and continuous monitoring
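The alert-triage automation named in this topic group can be sketched as a simple scoring queue: each alert is weighted by severity and by the criticality of the affected asset, and the queue is reordered so analysts see the highest-impact alerts first. The severity values, asset names, and weights below are illustrative assumptions, not from any specific SOC platform:

```python
# Illustrative sketch of automated SOC alert triage: score each alert
# by severity times asset criticality, then order the work queue.

ASSET_CRITICALITY = {"domain-controller": 3, "web-server": 2, "workstation": 1}

def triage_score(alert: dict) -> int:
    """Higher score = handle sooner; unknown assets default to weight 1."""
    return alert["severity"] * ASSET_CRITICALITY.get(alert["asset"], 1)

alerts = [
    {"id": "A1", "severity": 2, "asset": "workstation"},       # score 2
    {"id": "A2", "severity": 3, "asset": "domain-controller"}, # score 9
    {"id": "A3", "severity": 4, "asset": "web-server"},        # score 8
]

queue = sorted(alerts, key=triage_score, reverse=True)
print([a["id"] for a in queue])  # ['A2', 'A3', 'A1']
```

Real SOC orchestration replaces the hand-set weights with learned models and enrichment data, but the ordering principle, prioritize by expected impact, is the same one this topic introduces.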
- AI Governance, Risk, and Compliance
  - Regulatory frameworks (GDPR, NIST AI RMF, ISO/IEC 42001) and their security implications
  - Integration of GRC principles across the AI lifecycle
  - Responsible and ethical AI practices: bias mitigation, privacy preservation, accountability
Student Learning Outcomes
- Explain how AI models, data, and automation enhance cybersecurity operations.
- Identify and mitigate threats targeting AI systems and model integrity.
- Configure and manage secure AI deployment environments across varied infrastructures.
- Use AI-driven tools to improve detection accuracy and incident response efficiency.
- Evaluate adversarial risks and apply appropriate countermeasures to protect AI assets.
- Incorporate governance and compliance practices to ensure responsible and ethical AI security adoption.
- Demonstrate readiness for the CompTIA SecAI+ certification exam.