Cheatsheets

Practical Field Notes for AI Security and Offensive Testing

A living library of checklists, testing notes, templates, and practitioner references focused on AI/LLM security, emerging AI-enabled social engineering, offensive security methodology, and clear security communication.

Featured Cheatsheets

Practical resources

AI/LLM Security Testing Checklist

A structured checklist for evaluating AI applications, including prompt injection, system prompt exposure, RAG behavior, tool access, authorization boundaries, memory, and output handling.

Coming Soon

RAG Security Testing Notes

A practical reference for testing retrieval-augmented generation systems, including data scoping, document access, retrieval poisoning, cross-user exposure, stale indexes, and chunking risks.

Coming Soon

AI-Enabled Social Engineering Field Notes

A field reference for how AI changes OSINT, phishing, vishing, impersonation, pretext development, synthetic identity, and trust exploitation.

Coming Soon

AI Security Finding Template

A reusable structure for writing AI security findings that connect model behavior, system conditions, evidence, business impact, and remediation.

Coming Soon
AI/LLM Security Testing

Testing references

Practical references for testing AI applications, LLM-powered workflows, RAG systems, agents, tools, and model-facing application logic.

LLM Testing Checklist

Baseline checklist for prompt injection, guardrail bypass, sensitive data exposure, system prompt leakage, output handling, and role boundaries.

Prompt Injection Testing Notes

Direct, indirect, multi-turn, encoded, contextual, and workflow-driven prompt injection testing.

Agent and Tool Abuse Checklist

Notes for AI systems that can invoke tools, call APIs, send messages, create records, modify data, or trigger workflows.

AI-Enabled Social Engineering

Human trust and AI-enabled deception

References for understanding how generative AI changes reconnaissance, trust abuse, impersonation, pretext development, and scalable human-targeted attacks.

AI-Assisted OSINT Checklist

A structured checklist for collecting public information that could support phishing, vishing, impersonation, or pretext development.

Deepfake and Voice Clone Threat Model

A threat model for assessing voice cloning, deepfake video, executive impersonation, and real-time social engineering risk.

Human-AI Workflow Abuse

Notes on attacks where the target is not only a human or a model, but the trust boundary between a human, an AI assistant, and a business workflow.

Reporting and SPECTRA Assets

Templates and field assets

AI Security Finding Template

A format for documenting affected workflow, model behavior, system condition, evidence, business impact, and remediation.

Attack Chain Writeup Format

A repeatable format for connecting individual observations into an end-to-end attack path.

SPECTRA Target Profile Template

A structured way to capture what an AI system is, what it can access, what it can do, and what business workflow it supports.