
Local LLM agents

This page lists every AI agent in the MeshKore directory tagged with the Local LLM capability. Agents are sourced from public platforms (GitHub, Hugging Face, npm, PyPI, awesome-list curations, and direct submissions), normalized by the MeshKore worker, and ranked by GitHub stars. Each card links to the agent's profile with details on capabilities, framework, language, freshness, and source attribution.

26 agents in this capability · ranked by popularity

Top 26 Local LLM agents

IBM-Grok4-UltraFast-Coder-1B-GGUF2 — ★

gguf llama 1b text-generation local-llm llama-cpp

@ruvector/ruvllm — ★

Self-learning LLM runtime — TurboQuant KV-cache (6-8x compression), SONA adaptive learning, FlashAttention…

local-cli-agent — ★

Local CLI - OpenAI-Compatible Local CLI Coding Agent

hanseol-dev — ★

Hanseol - OpenAI-Compatible Coding Agent

hanseol — ★

Hanseol - OpenAI-Compatible Coding Agent

cmdr-agent — ★

Open-source multi-agent coding tool for your terminal. Powered by Ollama.

olliecode — ★

An agentic coding tool powered by Ollama - local, private, and fast

bonsai-claude — ★

Run Claude Code locally on the Bonsai 8B 1-bit MLX model.

cafitac-hermit-agent — ★

MCP executor for Claude Code or Codex that offloads repetitive coding work to cheaper local or flat-rate…

corellm — ★

Framework for efficient local LLM interaction

llm-autotune — ★

39% faster TTFT, 67% less KV cache, zero config — autotune optimises local LLMs on Ollama, LM Studio, and MLX

llm-mask — ★

Mask sensitive data in documents using a local OpenAI-compatible LLM

llm-provider-factory — ★

A unified interface for multiple LLM providers with image generation, speech-to-text, and function calling…

lmstudio-agent-config — ★

Generate and update agent config files from LM Studio models for VS Code Copilot, OpenCode, Pi, and Codex.

local-rag-stack — ★

100% local RAG for Obsidian, Zotero, and Claude Code — LightRAG + Ollama + MCP

mcp-llama-swap — ★

MCP server for hot-swapping llama.cpp models in Claude Code sessions

mnemosyne-ollama — ★

Ask your codebase questions using Ollama and Mnemosyne — zero-config local code search

nnn-agent — ★

Multi-agent coding system powered by local LLMs

ollama-instructor — ★

Validate structured outputs from LLMs with Ollama and automatic retries.

ollama-mcp-bridge — ★

Bridge API service connecting Ollama with Model Context Protocol (MCP) servers

ragscore — ★

The Fastest RAG Audit - Generate QA datasets & evaluate RAG systems in Colab, Jupyter, or CLI. Privacy-first…

sequence-llm — ★

A sequence-based LLM orchestration framework

transcoder-llm — ★

Multilingual Parallel Translation Platform with Reflection-based Improvement using Local LLMs

whichllm — ★

Find the best LLM that runs on your hardware

nucleusiq-ollama — ★

Ollama inference provider for the NucleusIQ AI agent framework (official ollama Python SDK).

opencode-llama-cpp-launcher — ★

One command launcher for running OpenCode with a local llama.cpp model.

Browse other capabilities