Guardrails agents
This page lists every AI agent in the MeshKore directory tagged with the Guardrails capability. Agents are sourced from public platforms (GitHub, Hugging Face, npm, PyPI, awesome-list curations, and direct submissions), normalized by the MeshKore worker, and ranked by GitHub stars. Each card links to the agent's profile with details on capabilities, framework, language, freshness, and source attribution.
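The sourcing-and-ranking step described above reduces to a sort over normalized card records. A minimal sketch of that ordering, with hypothetical field names (the actual MeshKore schema is not published on this page):

```python
# Hypothetical normalized card records; "stars" stands in for the
# GitHub star count used as the popularity signal described above.
cards = [
    {"name": "agent-a", "source": "github", "stars": 120},
    {"name": "agent-b", "source": "pypi", "stars": 0},
    {"name": "agent-c", "source": "github", "stars": 4500},
]

# Rank descending by stars; non-GitHub sources without stars sort last.
ranked = sorted(cards, key=lambda card: card["stars"], reverse=True)
print([card["name"] for card in ranked])  # → ['agent-c', 'agent-a', 'agent-b']
```

Cards sourced from registries that have no star count (npm, PyPI, direct submissions) would naturally fall to the bottom under this scheme unless a secondary signal is mixed in.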
79 agents in this capability · ranked by popularity
Top 79 Guardrails agents
Superagent protects your AI applications against prompt injections, data leaks, and harmful outputs. Embed…
Fastest enterprise AI gateway (50x faster than LiteLLM) with adaptive load balancer, cluster mode…
Pre-action authorization guardrails for AI agents - Works with OpenClaw, Claude Code, LangChain, CrewAI and…
Deterministic tool-call guardrails for pi — enforce rules with before-tool hooks instead of prompts
Intent DNA — Declarative policy layer for AI agent behavior
Runtime rule enforcement for AI agent tool calls
Guardrail regression testing for LLM agent tool calls
Deterministic, config-driven guardrails for LLM agent tool calls
Declarative workflow orchestration for LLM agents — schemas, routers, sub-workflow composition, full audit
MVA (Model-View-Agent) framework for the Model Context Protocol. Structured perception packages with…
Guards, Evals & Observability for AI applications - works seamlessly with LangChain/LangGraph
Real-time behavioral guardrails for AI agents. Detect retry storms, circular reasoning, budget overruns, and…
Content safety, PII handling, and output validation for HazelJS AI applications
OpenAI Guardrails: A TypeScript framework for building safe and reliable AI systems
Provider-agnostic agent framework with guardrails, memory, and multi-agent networks
OpenCode plugin that detects prompt injection in tool call outputs using an LLM judge
Guardrails and file integrity scanning for OpenClaw agents
Agent Development Kit for multi-agent orchestration over the Responses API via LlamaStack
MVA (Model-View-Agent) framework for the Model Context Protocol. Structured perception packages with…
Five governance questions for your AI agent system. Identity, restraint, accountability, memory, charter…
Action-level governance for AI agents -- control what they DO, not what they SAY
Framework-agnostic observability, audit, and eval for AI agent applications
Compliance & guardrails for AI agents — PII filtering, audit logging, GDPR/AI Act checks, kill switch
Agent Policy Layer - Portable, composable policies for AI agents
A layered protocol and reference implementation for codifying risk in autonomous agent actions.
A governance and policy enforcement layer for AI agents and non-human identities
Production-grade safety boundaries for AI agents - policies, tracing, replay, and human-in-the-loop approval
Comprehensive security framework for agentic AI applications — 8-layer defense-in-depth.
Real-time cost observability and guardrails for AI agents
Python SDK for AgentGuard — the firewall for AI agents
Agent-safe BigQuery client with guardrails, cost controls, and tool wrappers for agentic AI.
Security scanner and runtime firewall for AI agents and MCP servers
Runtime safety for AI agents: intercept tool calls, policy scoring, and audit logging
Runtime policy enforcement for AI agent sessions
The reliability layer for AI agents in production
APort Agent Guardrail — shared core for AI agent and LLM frameworks (pre-action authorization)
Production-grade LLM observability. G-ARVIS scoring for Groundedness, Accuracy, Reliability, Variance…
Compile structured Claude Code workflow policy into versioned artifacts and enforce it against runtime…
A comprehensive collection of AI guardrails built with DSPy for content moderation and security.
Official Governance AI Agent Guardrails SDK for Python agents and tool execution
AI control layer for LangChain agents
Evaluation-First Control Layer for Enterprise RAG Systems
Secure your LangChain agents with per-agent identity, policy enforcement, and tamper-proof audit logs.
Official SupraWall security integration for LangChain Python
LangChain integration for Velatir - AI governance, compliance, and human-in-the-loop workflows
ZenGuard integration pack for LlamaIndex
Guardrails for LLM-generated medical and health content
Validate LLM outputs against schemas with automatic retry and JSON extraction.
Safety and guardrails toolkit for LLM applications
Production-grade silent failure detection for LLM applications — hallucination alerts, PII leak detection…
OWASP LLM Top 10 security scanner for AI-powered applications
Healthcare-specific LLM guardrails middleware for clinical safety
Tonic Textual PII redaction tools and guardrails for the OpenAI Agents SDK
Trust & governance layer for OpenAI Agents SDK — policy enforcement, trust-gated handoffs, and Merkle audit…
Production-ready guardrails for Pydantic AI with native integration patterns
Quilr Guardrails Integration for LiteLLM
SENTINEL — AI Security Platform. 49 Rust Engines + Micro-Model Swarm. Defense, Offense, Framework.
Lightweight framework for LLM agents with tools, hooks, guardrails, and provider routing
Enterprise-grade LLM security framework with 40+ scanners and programmable guardrails
VEXIS AI Governance adapter for CrewAI — task guardrails, step callbacks, and audit trails for every crew…
VEXIS AI Governance adapter for LangChain — automatic policy checks, PII masking, and audit trails for every…
Pure-function policy matrix evaluator for AI coding agents (repo x capability x context ->…
Framework-agnostic safety helpers for tool-calling LLM agent loops.
Production guardrails for AI coding agents
APort guardrails for LangChain/LangGraph — callback handler
Utilities and helpers for LangChain and LangGraph with AWS services
ABS CORE Governance Adapter for LangGraph/LangChain
Official SupraWall security integration for LangChain (TypeScript)
Sanitize LLM outputs before HTML, SQL, shell, or markdown sinks. Python port of…
Python SDK for UMAI Agent Mesh Governance and guardrails.
Trust & governance layer for OpenAI Agents SDK — policy enforcement, trust-gated handoffs, and hash-chained…
Production-grade boto3 toolkit for AWS Bedrock: typed retry, per-model timeouts, capability lookup, full…
Governance components for Langflow — policy enforcement, trust routing, audit logging, and compliance…
Lightweight validation, repair, and retry helpers for LLM outputs.
Lightweight agentic loop detector and safety monitor. No LLM required.
Official Python client for OpenAI Guardrails policy distribution, audit evidence, and OPA control-plane APIs.
Helmet.js for AI Agents — Lightweight security middleware for production AI agents
Governance middleware for PydanticAI — semantic policy enforcement, trust scoring, and audit trails for agent…
Static repository guardrails for agent-touched codebases.
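Most cards above describe variations of one core mechanism: intercept a proposed tool call and evaluate it against deterministic rules before it executes, rather than relying on prompt instructions. A framework-agnostic sketch of that pattern (all class, field, and rule names here are hypothetical, not taken from any listed agent):

```python
from dataclasses import dataclass, field

@dataclass
class ToolCall:
    """A proposed agent action: which tool, with which arguments."""
    name: str
    args: dict

@dataclass
class Guardrail:
    """Deterministic pre-action check: allow or deny before execution."""
    blocked_tools: set = field(default_factory=set)
    max_arg_len: int = 10_000  # crude data-exfiltration / payload-size limit

    def check(self, call: ToolCall) -> tuple[bool, str]:
        # Rule 1: hard-block tools denied by policy.
        if call.name in self.blocked_tools:
            return False, f"tool '{call.name}' is blocked by policy"
        # Rule 2: reject oversized string arguments.
        for key, value in call.args.items():
            if isinstance(value, str) and len(value) > self.max_arg_len:
                return False, f"argument '{key}' exceeds size limit"
        return True, "ok"

guard = Guardrail(blocked_tools={"shell_exec"})
allowed, reason = guard.check(ToolCall("shell_exec", {"cmd": "rm -rf /"}))
print(allowed, reason)  # → False tool 'shell_exec' is blocked by policy
```

The hook sits between the model's tool-call proposal and the runtime's dispatcher; on denial, the agent loop receives the reason string instead of the tool result. Individual agents in this directory layer richer policy languages, audit logging, and human-in-the-loop escalation on top of this basic intercept-and-decide shape.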