
Local AI agents

This page lists every AI agent in the MeshKore directory tagged with the Local AI capability. Agents are sourced from public platforms (GitHub, Hugging Face, npm, PyPI, awesome-list curations, and direct submissions), normalized by the MeshKore worker, and ranked by GitHub stars. Each card links to the agent's profile with details on capabilities, framework, language, freshness, and source attribution.
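The ranking described above can be sketched in a few lines. This is a hypothetical illustration, not MeshKore's actual worker code; the `Agent` class, field names, and star counts are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    stars: int   # GitHub stars (example values, not real counts)
    source: str  # e.g. "github", "npm", "pypi"

def rank_agents(agents: list[Agent], top_n: int = 20) -> list[Agent]:
    """Sort agents by GitHub stars, descending, and keep the top N."""
    return sorted(agents, key=lambda a: a.stars, reverse=True)[:top_n]

# Example data, purely illustrative
agents = [
    Agent("llm-checker", 120, "npm"),
    Agent("fitmyllm", 45, "pypi"),
    Agent("knowledge-rag", 300, "github"),
]
print([a.name for a in rank_agents(agents, top_n=2)])
# → ['knowledge-rag', 'llm-checker']
```

In practice the worker would also normalize records from each source into this common shape before ranking, since npm, PyPI, and Hugging Face each expose different metadata.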

20 agents in this capability · ranked by popularity

Top 20 Local AI agents

llm-checker — ★

Intelligent CLI tool with AI-powered model selection that analyzes your hardware and recommends optimal LLM…

aetheria-cli — ★

AI-powered game development CLI with local LLM, 45+ tools, autonomous agent mode, and 50+ slash commands

elizaos-plugin-ollama — ★

elizaOS Ollama Plugin - Local LLM client for text and object generation

fitmyllm — ★

Find the best local AI model for your GPU — terminal UI

knowledge-rag — ★

Local RAG System for Claude Code — Hybrid search + Cross-encoder Reranking + 12 MCP Tools + 20 Format…

llama-agentic — ★

Local agentic AI CLI powered by llama.cpp

llm-autotune — ★

39% faster TTFT, 67% less KV cache, zero config — autotune optimises local LLMs on Ollama, LM Studio, and MLX

local-llm-checker — ★

Find which local LLMs can run on your system

mediallm — ★

Natural language to FFmpeg, instantly and privately

obsidian-llm-wiki — ★

Convert Obsidian notes into an AI-maintained wiki using local or cloud LLMs

ollama-agentic — ★

A beautiful, agentic CLI for Ollama — run local LLMs with auto tool-calling, memory, and more

ollama-spark — ★

Terminal toolkit for local Ollama model recommendation, benchmarking, and comparison

ollamadiffuser — ★

Local AI Image Generation with Ollama-style CLI for Stable Diffusion, FLUX, and LoRA support

pmagent-cli — ★

Local-first AI project management agent — reads your repo, documents it, watches for changes, builds LLM…

rag-knowledge-base — ★

100% offline RAG storage and MCP server for querying local document knowledge bases

sutra-llm — ★

Chain small language models to outperform large ones — runs locally on 8GB RAM

zettabrain-rag — ★

Private AI document assistant — local RAG pipeline with web GUI. Zero cloud. Supports local, NFS, SMB and…

agent-foundry-local — ★

Local-first AI agent platform with formal handoff protocol for regulated industries

mico-agent — ★

mico — local coding agent for Apple Silicon (MLX) and Linux

opencode-llmstack — ★

Multi-tier local LLM stack: llama-swap + FastAPI auto-router + opencode wiring

Browse other capabilities