Guardrails agents

This page lists every AI agent in the MeshKore directory tagged with the Guardrails capability. Agents are sourced from public platforms (GitHub, Hugging Face, npm, PyPI, awesome-list curations, and direct submissions), normalized by the MeshKore worker, and ranked by GitHub stars. Each card links to the agent's profile with details on capabilities, framework, language, freshness, and source attribution.
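The sourcing-and-ranking flow described above can be sketched roughly as follows. The record fields, helper names, and card shape here are illustrative assumptions, not MeshKore's actual schema:

```python
# Rough sketch of the directory pipeline described above, under assumed
# field names: raw records from GitHub, npm, PyPI, etc. are normalized
# to one card shape, then ranked by GitHub stars. Agents without a star
# count sort last and render their count as an em-dash placeholder.

def normalize(record):
    """Coerce a raw source record into the card shape."""
    return {
        "name": record["name"].strip(),
        "stars": record.get("stars"),  # None when the source reports no star count
        "summary": record.get("summary", ""),
    }

def rank(records):
    """Starred agents first, by stars descending; then unstarred, A-Z."""
    cards = [normalize(r) for r in records]
    return sorted(cards, key=lambda c: (c["stars"] is None, -(c["stars"] or 0), c["name"]))

def render(card):
    """Format a card title line with a thousands-separated star count."""
    stars = f"{card['stars']:,}" if card["stars"] is not None else "—"
    return f"{card['name']} · {stars} ★"
```

For example, `rank([{"name": "superagent", "stars": 6592}, {"name": "intentdna"}])` puts the starred agent first and the unstarred one last, which is the ordering visible in the list below.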

79 agents in this capability · ranked by popularity

Top 79 Guardrails agents

superagent · 6,592 ★

Superagent protects your AI applications against prompt injections, data leaks, and harmful outputs. Embed…

bifrost · 4,860 ★

Fastest enterprise AI gateway (50x faster than LiteLLM) with adaptive load balancer, cluster mode…

aport-agent-guardrails · 20 ★

Pre-action authorization guardrails for AI agents - Works with OpenClaw, Claude Code, LangChain, CrewAI and…

pi-steering-hooks · 3 ★

Deterministic tool-call guardrails for pi — enforce rules with before-tool hooks instead of prompts

intentdna · — ★

Intent DNA — Declarative policy layer for AI agent behavior

@edictum/core · — ★

Runtime rule enforcement for AI agent tool calls

probitas · — ★

Guardrail regression testing for LLM agent tool calls

frenum · — ★

Deterministic, config-driven guardrails for LLM agent tool calls

llm-rail · — ★

Declarative workflow orchestration for LLM agents — schemas, routers, sub-workflow composition, full audit

@vurb/core · — ★

MVA (Model-View-Agent) framework for the Model Context Protocol. Structured perception packages with…

@dooor-ai/toolkit · — ★

Guards, Evals & Observability for AI applications - works seamlessly with LangChain/LangGraph

@varpulis/agent-runtime · — ★

Real-time behavioral guardrails for AI agents. Detect retry storms, circular reasoning, budget overruns, and…

@hazeljs/guardrails · — ★

Content safety, PII handling, and output validation for HazelJS AI applications

@openai/guardrails · — ★

OpenAI Guardrails: A TypeScript framework for building safe and reliable AI systems

@kognitivedev/agents · — ★

Provider-agnostic agent framework with guardrails, memory, and multi-agent networks

opencode-injection-guard · — ★

OpenCode plugin that detects prompt injection in tool call outputs using an LLM judge

@enkryptai/clawpatrol · — ★

Guardrails and file integrity scanning for OpenClaw agents

@augment-adk/augment-adk · — ★

Agent Development Kit for multi-agent orchestration over the Responses API via LlamaStack

@vinkius-core/mcp-fusion · — ★

MVA (Model-View-Agent) framework for the Model Context Protocol. Structured perception packages with…

agent-governance-check · — ★

Five governance questions for your AI agent system. Identity, restraint, accountability, memory, charter…

agent-guardrail · — ★

Action-level governance for AI agents -- control what they DO, not what they SAY

agent-observe · — ★

Framework-agnostic observability, audit, and eval for AI agent applications

agent-policy-gateway-mcp · — ★

Compliance & guardrails for AI agents — PII filtering, audit logging, GDPR/AI Act checks, kill switch

agent-policy-layer · — ★

Agent Policy Layer - Portable, composable policies for AI agents

agent-risk-engine · — ★

A layered protocol and reference implementation for codifying risk in autonomous agent actions.

agent-safe · — ★

A governance and policy enforcement layer for AI agents and non-human identities

agent-safety-layer · — ★

Production-grade safety boundaries for AI agents - policies, tracing, replay, and human-in-the-loop approval

agentarmor-core · — ★

Comprehensive security framework for agentic AI applications — 8-layer defense-in-depth.

agentguard-ram · — ★

Real-time cost observability and guardrails for AI agents

agentguardproxy · — ★

Python SDK for AgentGuard — the firewall for AI agents

agentic-bq · — ★

Agent-safe BigQuery client with guardrails, cost controls, and tool wrappers for agentic AI.

agentic-shield · — ★

Security scanner and runtime firewall for AI agents and MCP servers

agentiva · — ★

Runtime safety for AI agents: intercept tool calls, policy scoring, and audit logging

agentpolicy · — ★

Runtime policy enforcement for AI agent sessions

agentx-sdk · — ★

The reliability layer for AI agents in production

aport-agent-guardrails · — ★

APort Agent Guardrail — shared core for AI agent and LLM frameworks (pre-action authorization)

argus-llm · — ★

Production-grade LLM observability. G-ARVIS scoring for Groundedness, Accuracy, Reliability, Variance…

claude-md-compiler · — ★

Compile structured Claude Code workflow policy into versioned artifacts and enforce it against runtime…

dspy-guardrails · — ★

A comprehensive collection of AI guardrails built with DSPy for content moderation and security.

governanceai-guardrails-agent · — ★

Official Governance AI Agent Guardrails SDK for Python agents and tool execution

handlebar-langchain · — ★

AI control layer for Langchain agents

hardrag-core · — ★

Evaluation-First Control Layer for Enterprise RAG Systems

langchain-ai-identity · — ★

Secure your LangChain agents with per-agent identity, policy enforcement, and tamper-proof audit logs.

langchain-suprawall · — ★

Official SupraWall security integration for LangChain Python

langchain-velatir · — ★

LangChain integration for Velatir - AI governance, compliance, and human-in-the-loop workflows

llama-index-packs-zenguard · — ★

ZenGuard integration pack for LlamaIndex

llm-medical-guard · — ★

Guardrails for LLM-generated medical and health content

llm-output-guard · — ★

Validate LLM outputs against schemas with automatic retry and JSON extraction.

llm-shelter · — ★

Safety and guardrails toolkit for LLM applications

llm-watchdog · — ★

Production-grade silent failure detection for LLM applications — hallucination alerts, PII leak detection…

llmarmor · — ★

OWASP LLM Top 10 security scanner for AI-powered applications

medguard-llm · — ★

Healthcare-specific LLM guardrails middleware for clinical safety

openai-agents-tonic-textual · — ★

Tonic Textual PII redaction tools and guardrails for the OpenAI Agents SDK

openai-agents-trust · — ★

Trust & governance layer for OpenAI Agents SDK — policy enforcement, trust-gated handoffs, and Merkle audit…

pydantic-ai-guardrails · — ★

Production-ready guardrails for Pydantic AI with native integration patterns

quilr-litellm-guardrails · — ★

Quilr Guardrails Integration for LiteLLM

sentinel-llm-security · — ★

SENTINEL — AI Security Platform. 49 Rust Engines + Micro-Model Swarm. Defense, Offense, Framework.

swarm-agents · — ★

Lightweight framework for LLM agents with tools, hooks, guardrails, and provider routing

ultraguard · — ★

Enterprise-grade LLM security framework with 40+ scanners and programmable guardrails

vexis-crewai · — ★

VEXIS AI Governance adapter for CrewAI — task guardrails, step callbacks, and audit trails for every crew…

vexis-langchain · — ★

VEXIS AI Governance adapter for LangChain — automatic policy checks, PII masking, and audit trails for every…

yui-agent-policy · — ★

Pure-function policy matrix evaluator for AI coding agents (repo × capability × context →…

@theajmalrazaq/agentsloopguard · — ★

Framework-agnostic safety helpers for tool-calling LLM agent loops.

agent-guardrails · — ★

Production guardrails for AI coding agents

@aporthq/aport-agent-guardrails-langchain · — ★

APort guardrails for LangChain/LangGraph — callback handler

langchain-aws-utils · — ★

Utilities and helpers for LangChain and LangGraph with AWS services

@oconnectortechnology/abs-langgraph · — ★

ABS CORE Governance Adapter for LangGraph/LangChain

suprawall-langchain · — ★

Official SupraWall security integration for LangChain (TypeScript)

llm-output-sanitizer-py · — ★

Sanitize LLM outputs before HTML, SQL, shell, or markdown sinks. Python port of…

umai-agent-sdk · — ★

Python SDK for UMAI Agent Mesh Governance and guardrails.

agentmesh_openai_agents_trust · — ★

Trust & governance layer for OpenAI Agents SDK — policy enforcement, trust-gated handoffs, and hash-chained…

bedrock-ops · — ★

Production-grade boto3 toolkit for AWS Bedrock: typed retry, per-model timeouts, capability lookup, full…

langflow_agentmesh · — ★

Governance components for Langflow — policy enforcement, trust routing, audit logging, and compliance…

llmshield-ai · — ★

Lightweight validation, repair, and retry helpers for LLM outputs.

nakata-agentguard · — ★

Lightweight agentic loop detector and safety monitor. No LLM required.

openaiguardrails-sdk · — ★

Official Python client for Open AI Guardrails policy distribution, audit evidence, and OPA control-plane APIs.

pyagentguard · — ★

Helmet.js for AI Agents — Lightweight security middleware for production AI agents

pydantic_ai_agentmesh · — ★

Governance middleware for PydanticAI — semantic policy enforcement, trust scoring, and audit trails for agent…

yui-agent-guard · — ★

Static repository guardrails for agent-touched codebases.

Browse other capabilities