Resources
The AI security landscape moves faster than any printed resource can track. This page is the continuously maintained companion to Appendix B of Security in the Age of AI Agents; links are verified and refreshed as the field evolves.
Key Industry Initiatives
Project Glasswing
Announced by Anthropic in April 2026. A coalition initiative bringing together AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks to apply frontier AI capabilities to defensive cybersecurity at scale. If you work in security and have not read the Glasswing announcement, read it now.
anthropic.com/glasswing →
Standards and Frameworks
NIST AI Risk Management Framework (AI RMF)
The foundational governance framework for AI risk. Required reading for anyone building an enterprise AI governance program.
airc.nist.gov →
OWASP Top 10 for LLM Applications
The most practical starting point for AI-specific security risk. Updated regularly. Free.
owasp.org →
OWASP Agentic AI Top 10 (2026)
Specific to autonomous agent deployments. Covers goal hijacking, tool misuse, memory poisoning, cascading failures, and rogue agent behavior.
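One common mitigation for the tool-misuse category is to validate every agent tool call against an explicit allowlist before executing it. A minimal sketch of that pattern, with hypothetical tool names and a policy invented for illustration (not taken from the OWASP document):

```python
# Illustrative guard against agent "tool misuse": permit a call only
# if the tool is allowlisted and every argument name is recognized.
# Tool names and permitted arguments here are hypothetical examples.
ALLOWED_TOOLS = {
    "search_docs": {"query"},             # tool name -> permitted argument names
    "summarize": {"text", "max_words"},
}

def guard_tool_call(name: str, args: dict) -> bool:
    """Return True only for an allowlisted tool called with known arguments."""
    permitted = ALLOWED_TOOLS.get(name)
    if permitted is None:
        return False                      # unknown tool: deny by default
    return set(args) <= permitted         # reject unexpected argument names

assert guard_tool_call("search_docs", {"query": "refund policy"})
assert not guard_tool_call("delete_user", {"id": 7})        # tool not allowlisted
assert not guard_tool_call("summarize", {"shell": "rm -rf"})  # unexpected argument
```

Deny-by-default is the important design choice: anything the policy does not explicitly name is refused, which limits what a goal-hijacked agent can do with its tools.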
owasp.org →
MITRE ATLAS
Adversarial Threat Landscape for AI Systems. The AI equivalent of MITRE ATT&CK. Use this to structure red team exercises and threat modeling.
atlas.mitre.org →
EU AI Act
Now in force. Required reading if you operate in Europe or serve European customers with AI systems.
artificialintelligenceact.eu →
Open Source Tools
Garak
LLM vulnerability scanner. Runs automated probe sequences to identify prompt injection vulnerabilities, jailbreak susceptibility, and data leakage. Free, open source.
github.com/NVIDIA/garak →
PyRIT (Python Risk Identification Toolkit)
Microsoft's open source red teaming framework for AI systems.
github.com/Azure/PyRIT →
LangFuse
Open source LLM observability platform. Logging, tracing, and monitoring for AI agent behavior.
langfuse.com →
Rebuff
Prompt injection detection API. Can be integrated into AI pipelines to flag suspicious inputs.
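One of the detection layers Rebuff describes is canary tokens: embed a random marker in the system prompt, and if that marker later appears in model output, the prompt has leaked. A self-contained illustration of the idea (not Rebuff's actual API; function names here are invented for the sketch):

```python
import secrets

def add_canary(system_prompt: str) -> tuple[str, str]:
    """Embed a random canary token in the system prompt.

    If the token later appears in model output, the prompt
    (and the canary with it) has been exfiltrated.
    """
    canary = secrets.token_hex(8)  # 16 hex chars, unguessable
    guarded = f"{system_prompt}\n\nCanary: {canary}"
    return guarded, canary

def leaked(model_output: str, canary: str) -> bool:
    """Flag any output that reveals the canary token."""
    return canary in model_output

# Hypothetical usage: guard the prompt, then screen each response.
prompt, canary = add_canary("You are a helpful assistant.")
assert not leaked("Here is the summary you asked for.", canary)
assert leaked(f"My instructions were: ... Canary: {canary}", canary)
```

Canary checks only catch leakage after the fact, which is why tools like Rebuff pair them with input-side heuristics rather than relying on any single layer.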
github.com/protectai/rebuff →
Communities and Working Groups
Cloud Security Alliance AI Safety Initiative
The most active industry working group on enterprise AI security.
cloudsecurityalliance.org →
OWASP AI Security and Privacy Guide
Ongoing working group producing practical guidance for AI security practitioners.
owasp.org →
SANS AI Security Curriculum
The most practical formal training available for security practitioners building AI security competency.
sans.org →
This page is updated as the field evolves. If a link is broken or you know of a resource that belongs here, contact jg@hard2hack.com.