AI/LLM Security Testing Checklist
A structured checklist for evaluating AI applications, including prompt injection, system prompt exposure, RAG behavior, tool access, authorization boundaries, memory, and output handling.
Coming SoonA living library of checklists, testing notes, templates, and practitioner references focused on AI/LLM security, emerging AI-enabled social engineering, offensive security methodology, and clear security communication.
A structured checklist for evaluating AI applications, including prompt injection, system prompt exposure, RAG behavior, tool access, authorization boundaries, memory, and output handling.
Coming SoonA practical reference for testing retrieval-augmented generation systems, including data scoping, document access, retrieval poisoning, cross-user exposure, stale indexes, and chunking risks.
Coming SoonA field reference for how AI changes OSINT, phishing, vishing, impersonation, pretext development, synthetic identity, and trust exploitation.
Coming SoonA reusable structure for writing AI security findings that connect model behavior, system conditions, evidence, business impact, and remediation.
Coming SoonPractical references for testing AI applications, LLM-powered workflows, RAG systems, agents, tools, and model-facing application logic.
Baseline checklist for prompt injection, guardrail bypass, sensitive data exposure, system prompt leakage, output handling, and role boundaries.
Direct, indirect, multi-turn, encoded, contextual, and workflow-driven prompt injection testing.
Notes for AI systems that can invoke tools, call APIs, send messages, create records, modify data, or trigger workflows.
References for understanding how generative AI changes reconnaissance, trust abuse, impersonation, pretext development, and scalable human-targeted attacks.
A structured checklist for collecting public information that could support phishing, vishing, impersonation, or pretext development.
A threat model for assessing voice cloning, deepfake video, executive impersonation, and real-time social engineering risk.
Notes on attacks where the target is not only a human or a model, but the trust boundary between a human, an AI assistant, and a business workflow.
A format for documenting affected workflow, model behavior, system condition, evidence, business impact, and remediation.
A repeatable format for connecting individual observations into an end-to-end attack path.
A structured way to capture what an AI system is, what it can access, what it can do, and what business workflow it supports.