LiteLLM agents
This page lists every AI agent in the MeshKore directory tagged with the LiteLLM capability. Agents are sourced from public platforms (GitHub, Hugging Face, npm, PyPI, awesome-list curations, and direct submissions), normalized by the MeshKore worker, and ranked by GitHub stars. Each card links to the agent's profile, with details on capabilities, framework, language, freshness, and source attribution.
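The normalize-and-rank step described above can be sketched as a sort over agent records. This is a minimal illustration, not the MeshKore worker's actual implementation; the field names (`name`, `stars`, `source`) are assumptions for the example.

```python
# Hypothetical sketch of ranking normalized agent records by GitHub stars.
# Field names are assumed for illustration; the real MeshKore schema may differ.

def rank_agents(agents):
    """Sort agent records by star count, highest first.

    Records without a star count (e.g. PyPI-only packages with no
    linked GitHub repository) sort last.
    """
    return sorted(agents, key=lambda a: a.get("stars", 0), reverse=True)

agents = [
    {"name": "agent-a", "stars": 120, "source": "github"},
    {"name": "agent-b", "source": "pypi"},  # no star count available
    {"name": "agent-c", "stars": 4500, "source": "github"},
]

for agent in rank_agents(agents):
    print(agent["name"])
```

Ties and missing counts are the main design question in a real ranker; treating a missing count as zero, as above, is the simplest policy.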
49 agents in this capability · ranked by popularity
Top 49 LiteLLM agents
Lightweight LLM Interaction Framework
Universal configuration and execution layer for AI agents
A transparent, minimal, and hackable agent framework
Enterprise LLM proxy built on LiteLLM — logging, guardrails, and unified access for AI coding tools.
A unified interface for querying Large Language Models (LLMs) across multiple providers using LiteLLM and…
Scriptable Claude Code LiteLLM-based proxy
Multi-LLM router MCP server for Claude Code — smart complexity routing, Claude subscription monitoring, Codex…
AI Compliance Middleware — PII protection, tamper-evident audit logs, and EU AI Act compliance for LLM…
Pi-style LM authentication helpers for DSPy
Drop-in LiteLLM replacement backed by Rust — same API, 10× lower latency
High-performance Rust acceleration for LiteLLM
Iteration of Thought LLM Agent
A LiteLLM plugin for intelligent model routing and request tracking with Kloom
Full toolkit for running an AI agent service built with LangGraph, FastAPI and Streamlit
Lightweight LLM wrapper with usage tracking and label support
The unified LLM runtime — local inference, API proxy, and monitoring. A powerful alternative to Ollama +…
Lightweight Anthropic Messages API proxy with LiteLLM-style config — load your middleware from any project
A lightweight Python library for tracking LLM API costs via litellm's callback system
Filesystem-only LiteLLM package detector with terminal UX and advisory checks.
Full-featured GigaChat API integration with LiteLLM
Environmental impact metrics callback for LiteLLM
MCP server giving AI agents access to 100+ LLMs through LiteLLM
A robust wrapper for LiteLLM with retry logic and rate limiting
Security auditor for LLM library supply chains - detects compromised PyPI packages
Lightweight Python wrapper for LiteLLM with simplified interface for 100+ AI providers. Supports text…
Velocity-aware model routing callback for LiteLLM. Routes via WZRD attention signals, earns CCM.
A comprehensive Python library for managing fallback mechanisms for Large Language Model (LLM) API calls…
LLM plugin for LiteLLM proxy server
Convert PDFs, images to high-quality Markdown using Vision LLMs.
A Python library for selecting the best LLM model based on user input using any LLM via LiteLLM
A lightweight Python library for tracking OpenAI and Anthropic SDK costs with budget alerts
Drop-in token + cost tracker for OpenAI / LiteLLM / Gemini with caching awareness
A lite abstraction layer for LLM calls
maxllm_gate - Intelligent LLM client with built-in rate limiting. Maximizes throughput and prevents 429…
Almost all known embedding model providers available via litellm patch
Moon for Claude: run Claude Code on external LLMs via LiteLLM
preLLM — One function for small LLM preprocessing before large LLM execution. Like litellm.completion() but…
Py Code Agent - AI Coding Assistant with LiteLLM
LiteLLM model integration for Pydantic AI framework - access 100+ LLM providers through a unified interface
Quilr Guardrails Integration for LiteLLM
A Python library that meters LiteLLM usage to Revenium with context-based metadata injection and framework…
LiteLLM library for RPA Framework
No description provided.
Universal Python library for Structured Outputs with any LLM provider
Audit and fix Anthropic prompt caching on AWS Bedrock through any abstraction stack.
LiteLLM adapter for BlockRun — call x402-paid AI models via LiteLLM (custom provider or local…
LiteLLM custom provider for Verathos -- verified LLM inference on Bittensor
Multi-LLM router MCP server — smart complexity routing, budget-aware model selection, 20+ providers (Claude…
Provider-agnostic LLM router. Pick the cheapest capable model per prompt with rule-based scoring. Wraps…