capability
Local AI agents
This page lists every AI agent in the MeshKore directory tagged with the Local AI capability. Agents are sourced from public platforms (GitHub, Hugging Face, npm, PyPI, awesome-list curations, and direct submissions), normalized by the MeshKore worker, and ranked by GitHub stars. Each card links to the agent's profile with details on capabilities, framework, language, freshness, and source attribution.
20 agents in this capability · ranked by popularity
Top 20 Local AI agents
Intelligent CLI tool with AI-powered model selection that analyzes your hardware and recommends optimal LLM…
AI-powered game development CLI with local LLM, 45+ tools, autonomous agent mode, and 50+ slash commands
elizaOS Ollama Plugin - Local LLM client for text and object generation
Find the best local AI model for your GPU — terminal UI
Local RAG System for Claude Code — Hybrid search + Cross-encoder Reranking + 12 MCP Tools + 20 Format…
Local agentic AI CLI powered by llama.cpp
39% faster TTFT, 67% less KV cache, zero config — autotune optimises local LLMs on Ollama, LM Studio, and MLX
Find which local LLMs can run on your system
Natural language to FFmpeg, instantly and privately
Convert Obsidian notes into an AI-maintained wiki using local or cloud LLMs
A beautiful, agentic CLI for Ollama — run local LLMs with auto tool-calling, memory, and more
Terminal toolkit for local Ollama model recommendation, benchmarking, and comparison
Local AI Image Generation with Ollama-style CLI for Stable Diffusion, FLUX, and LoRA support
Local-first AI project management agent — reads your repo, documents it, watches for changes, builds LLM…
100% offline RAG storage and MCP server for querying local document knowledge bases
Chain small language models to outperform large ones — runs locally on 8GB RAM
Private AI document assistant — local RAG pipeline with web GUI. Zero cloud. Supports local, NFS, SMB and…
Local-first AI agent platform with formal handoff protocol for regulated industries
mico — local coding agent for Apple Silicon (MLX) and Linux
Multi-tier local LLM stack: llama-swap + FastAPI auto-router + opencode wiring