Offensive AI Security Research

AI/LLM Security and Emerging AI-Enabled Social Engineering

Research, methodology, and frameworks focused on AI/LLM security, emerging AI-enabled social engineering, and offensive security applied to deployed AI systems.

Personal research and portfolio site of Justin Henderson.

Black Ledger Security logo
SPECTRA

      
AI
Featured Framework

SPECTRA: Context-Aware AI Adversarial Testing

Most AI testing asks the same first question:
Can we make the model fail?

SPECTRA asks the question that matters next:
What does that failure enable?

Before testing begins, SPECTRA profiles the target system: architecture, retrieval behavior, data access, tool use, defensive controls, industry context, and business workflow.

That context shapes the assessment. A prompt that means nothing against a public chatbot could become a serious exposure path against a legal assistant, healthcare RAG system, financial services copilot, or internal agent with tool access.

SPECTRA turns AI testing from generic prompt execution into structured adversarial analysis: profile the system, test the context, build the chain, and map the control that breaks it.

SPECTRA // Assessment consoleSynthetic demo
target: legal-intake-assistantsector: legal servicesmode: context-aware
[01]Reconnaissance
retrieval layer detected
privileged matter context suspected
[02]Defense profiling
input filter present
authorization model: conversational trust suspected
[03]Context mapping
threat persona: co-counsel impersonation
data class: privileged communications
[04]Payload strategy
generic payload blocked
workflow-aware prompt generated
[05] Attack chain
prompt retrieval privileged context exposure
[06] Control mapping
matter-level authorization before retrieval
ProfileContextTestChainRemediate
Research Focus Areas

Where the work is focused

Research focused on how AI systems, human trust, adversarial behavior, and offensive security methodology intersect.

AI/LLM Security Research

Testing AI systems the way they are actually deployed: with RAG pipelines, tool access, authorization models, memory, and business workflows attached. The important question is not whether a model can be manipulated. It is what happens next in that specific system when it is.

AI-Enabled Social Engineering

Generative AI is rewriting the social engineering playbook. Reconnaissance that once took days can take minutes. Pretexts can be tailored at scale. Voice cloning, synthetic media, and automated persona development are changing what trust looks like online and over the phone.

AI-Enabled Threat Modeling and OSINT

AI changes the speed and scale of reconnaissance. Public information, synthetic personas, automated research, and language models can be used to map people, organizations, workflows, and trust relationships faster than traditional OSINT alone. This research focuses on how those capabilities reshape threat modeling, social engineering risk, and the defensive assumptions organizations rely on.

Cheatsheets and Field Notes

Practical references

Testing checklists, prompt templates, and field-ready references for AI/LLM security, social engineering, and offensive security tradecraft.

AI/LLM Testing Checklist

A structured checklist for evaluating AI applications, RAG systems, agents, tools, and prompt injection exposure.

Coming Soon

AI-Enabled Social Engineering Notes

Field notes on AI-assisted OSINT, pretext development, impersonation risk, synthetic media, and human trust exploitation.

Coming Soon

AI-Assisted Security Workflow Prompts

Prompt patterns for organizing testing notes, improving report drafts, structuring threat models, and supporting defensive security analysis.

Coming Soon

SPECTRA Field Assets

Target profiles, defense worksheets, context maps, attack chains, and remediation templates.

Coming Soon
About

Offensive security background.
AI security focus.

My background spans Marine Corps Special Operations, penetration testing and social engineering. Black Ledger Security is where I publish research, frameworks, cheatsheets, and field notes focused on AI/LLM security and the emerging role of AI in modern social engineering.

Connect

Research and professional inquiries

For collaboration, research feedback, speaking, or professional opportunities in AI security and offensive security.