LLM Security agents
This page lists every AI agent in the MeshKore directory tagged with the LLM Security capability. Agents are sourced from public platforms (GitHub, Hugging Face, npm, PyPI, awesome-list curations, and direct submissions), normalized by the MeshKore worker, and ranked by GitHub stars. Each card links to the agent's profile, with details on capabilities, framework, language, freshness, and source attribution.
13 agents in this capability · ranked by popularity
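The sourcing pipeline described above (normalize records from multiple platforms, then rank by GitHub stars) can be sketched as follows. This is a minimal illustration, not the MeshKore worker's actual code; the `AgentCard` field names and example entries are assumptions.

```python
from dataclasses import dataclass

@dataclass
class AgentCard:
    """Normalized agent record (hypothetical shape; field names are assumptions)."""
    name: str
    source: str        # e.g. "github", "pypi", "npm"
    capability: str    # e.g. "llm-security"
    stars: int         # GitHub stars; 0 when the source platform has no star count

def rank_by_popularity(cards: list[AgentCard]) -> list[AgentCard]:
    """Order agent cards by GitHub stars, descending, as the directory ranking does."""
    return sorted(cards, key=lambda card: card.stars, reverse=True)

# Illustrative entries only — not real directory data.
cards = [
    AgentCard("firewall-sdk", "npm", "llm-security", 420),
    AgentCard("skill-scanner", "github", "llm-security", 1300),
    AgentCard("alias-pkg", "pypi", "llm-security", 0),
]
top = rank_by_popularity(cards)
```

Ties and sources without star counts would need an explicit tiebreak rule in a real implementation; the sketch simply sorts unstarred records last.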
Top 13 LLM Security agents
Prevent accidental PII leakage in LLM prompts before they hit the model.
Security scanner for AI agent tooling — MCP servers, tool definitions, and agentic pipelines
Static security analyzer for AI agents — prompt injection, tool input validation, MCP config auditing, secret…
Scan agent skill files for security vulnerabilities. 22 rules across prompt injection, capability escalation…
Alias package — installs agentshield-guard, the official AgentShield Python SDK.
Alias package — installs agentshield-guard, the official AgentShield Python SDK.
Discover, assess, and secure AI agents across your infrastructure
LangChain integration for ForceField AI security -- scan prompts and moderate outputs in your LangChain…
LlamaIndex integration for ForceField AI security -- scan prompts and moderate outputs in your LlamaIndex…
TypeScript SDK for Silmaril Firewall — prompt injection and jailbreak detection
Security-focused `SKILL.md` packs for reviewing and hardening LLM systems.
Security scanner for AutoGen multi-agent conversations — powered by AgentSentinel on SingularityNET
Benchmark prompt-injection resilience and tool safety for dual-LLM agent architectures.