Guardrail agents
This page lists every AI agent in the MeshKore directory tagged with the Guardrail capability. Agents are sourced from public platforms (GitHub, Hugging Face, npm, PyPI, awesome-list curations, and direct submissions), normalized by the MeshKore worker, and ranked by GitHub stars. Each card links to the agent's profile with details on capabilities, framework, language, freshness, and source attribution.
20 agents in this capability · ranked by popularity
Top 20 Guardrail agents
An open-source SDK for AI agent safety
350 built-in guards for LLM safety: 26 PII regions, prompt injection, toxicity, bias, agent safety…
LangChain.js integration for open-guardrail — 215+ guards for LLM chains and agents, prompt injection, PII…
OpenAI SDK adapter for open-guardrail — 215+ guards for chat completions input/output, prompt injection, PII…
Framework-agnostic agent loop detection — sliding window similarity scoring to catch stuck agents
APort Agent Guardrail — shared core for AI agent and LLM frameworks (pre-action authorization)
APort Agent Guardrail for CrewAI — before_tool_call hook for AI agent and multi-agent crews
APort Agent Guardrail for LangChain/LangGraph — AsyncCallbackHandler for AI agent tool calls
EYDII Verify tools and guardrails for CrewAI — verify every agent action before execution
Forge Verify + Execute tools and guardrails for CrewAI — verify agent actions and track executions with…
EYDII Verify guardrail for OpenAI Agents SDK — verify every tool call before execution
Forge Verify + Execute middleware for OpenAI Agents SDK — verify tool calls and track executions with…
EYDII Verify middleware for LangGraph/LangChain — verify every tool call before execution
Forge Verify + Execute middleware for LangGraph/LangChain — verify tool calls and track executions with…
EYDII Verify tools for LlamaIndex — verify every agent action before execution
Forge Verify + Execute tools for LlamaIndex — verify agent actions and track executions with cryptographic…
Model-agnostic LLM output degeneration detector — 4-signal composite scoring in a single pass
AI safety guardrail — intent analysis, prompt injection detection, and policy enforcement for LLM applications
A lightweight Python guardrail SDK for content safety
Authensor guardrail adapter for LangChain/LangGraph
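One entry above describes framework-agnostic agent loop detection via sliding-window similarity scoring. A minimal sketch of that idea follows; the window size, threshold, and use of `difflib.SequenceMatcher` are assumptions for illustration, not the listed package's actual API.

```python
from collections import deque
from difflib import SequenceMatcher


class LoopDetector:
    """Flag a stuck agent when its recent outputs become too similar.

    Illustrative sketch only: keeps a sliding window of recent agent
    outputs and reports a suspected loop when a new output is nearly
    identical to everything in the window.
    """

    def __init__(self, window: int = 4, threshold: float = 0.9):
        self.threshold = threshold
        self.recent: deque = deque(maxlen=window)

    def observe(self, output: str) -> bool:
        """Record one agent step; return True if a loop is suspected."""
        is_loop = (
            len(self.recent) == self.recent.maxlen
            and all(
                SequenceMatcher(None, output, prev).ratio() >= self.threshold
                for prev in self.recent
            )
        )
        self.recent.append(output)
        return is_loop


detector = LoopDetector(window=2, threshold=0.9)
detector.observe("searching /tmp for config files")   # window filling
detector.observe("searching /tmp for config files")   # window filling
stuck = detector.observe("searching /tmp for config files")  # repeated step
```

Here `stuck` is `True` because the third output matches everything in the full window; any sufficiently different output would reset the signal.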