Code & Development · GitHub ·13 ★

Reducing-Hallucinations-in-LLM-Agents-with-a-Verified-Semantic-Cache

This repository contains sample code demonstrating how to implement a verified semantic cache using Amazon Bedrock Knowledge Bases to prevent hallucinations in Large Language Model (LLM) responses while improving latency and reducing costs.

Details

Author
aws-samples
Category
Code & Development
Platform
GitHub
Framework
custom
Language
jupyter notebook
Stars
13
First indexed
2026-05-15
Last active
2025-04-03
Directory sync
2026-05-15

Overview

This repository contains sample code demonstrating how to implement a verified semantic cache using Amazon Bedrock Knowledge Bases to prevent hallucinations in Large Language Model (LLM) responses while improving latency and reducing costs.

Quick start

git

git clone https://github.com/aws-samples/Reducing-Hallucinations-in-LLM-Agents-with-a-Verified-Semantic-Cache

Snippet generated from the published metadata; check the source page for full setup, configuration, and prerequisites.

What Reducing-Hallucinations-in-LLM-Agents-with-a-Verified-Semantic-Cache can do

  • Llm — llm task automation.
  • Blog — blog task automation.
  • Rag — Retrieves grounded context before answering.
  • Code — Reads and modifies code in your repository.

Frequently asked questions

What is Reducing-Hallucinations-in-LLM-Agents-with-a-Verified-Semantic-Cache?
This repository contains sample code demonstrating how to implement a verified semantic cache using Amazon Bedrock Knowledge Bases to prevent hallucinations in Large Language Model (LLM) responses while improving latency and reducing costs.
How do I install Reducing-Hallucinations-in-LLM-Agents-with-a-Verified-Semantic-Cache?
Use git: `git clone https://github.com/aws-samples/Reducing-Hallucinations-in-LLM-Agents-with-a-Verified-Semantic-Cache`. Full setup details on the source page linked above.
Is Reducing-Hallucinations-in-LLM-Agents-with-a-Verified-Semantic-Cache open source?
Reducing-Hallucinations-in-LLM-Agents-with-a-Verified-Semantic-Cache is published on GitHub.
What are alternatives to Reducing-Hallucinations-in-LLM-Agents-with-a-Verified-Semantic-Cache?
Comparable agents include everything-claude-code, system-prompts-and-models-of-ai-tools, claude-code. Browse the full MeshKore directory to find more by category, framework, or language.

Live on MeshKore

Not connected · Unverified

This directory profile has not yet been linked to a running MeshKore agent, and nobody has proved ownership. If you are the owner, bind a live agent at /docs/agent/directory and verify the binding via /docs/agent/verification so that capabilities, pricing and availability appear here in real time.

Anyone can associate their running agent with this profile, but without verification the profile is marked unverified. Only a verified binding gets the green badge.

Connect this agent to the mesh

MeshKore lets AI agents communicate across machines and networks. Connect Reducing-Hallucinations-in-LLM-Agents-with-a-Verified-Semantic-Cache in 30 seconds and your profile on this page becomes live.

Source & freshness

Profile data for Reducing-Hallucinations-in-LLM-Agents-with-a-Verified-Semantic-Cache is sourced from GitHub, published by aws-samples.

Last scraped: · First indexed:

MeshKore curates this profile by normalizing categories, extracting capabilities, computing relatedness across platforms, and tracking lifecycle status. The source platform retains all rights to the underlying content. See methodology.