Open-source context retrieval layer for AI agents
Updated Jan 30, 2026 - Python
Local persistent memory store for LLM applications, including Claude Desktop, GitHub Copilot, Codex, Antigravity, etc.
Plug-and-play memory for LLMs in 3 lines of code. Add persistent, intelligent, human-like memory and recall to any model in minutes.
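To make "persistent memory and recall in 3 lines" concrete, here is a minimal self-contained sketch of what such an API could look like. Everything below — the `SimpleMemory` class, its `add`/`recall` methods, and the keyword-overlap scoring — is invented for illustration and is not the API of any library listed on this page.

```python
import json
import os
import tempfile

class SimpleMemory:
    """Hypothetical minimal persistent memory store for an LLM app."""

    def __init__(self, path):
        self.path = path
        self.items = []
        if os.path.exists(path):
            with open(path) as f:
                self.items = json.load(f)  # reload memories from a prior session

    def add(self, text):
        # Persist every memory to disk so it survives across sessions.
        self.items.append(text)
        with open(self.path, "w") as f:
            json.dump(self.items, f)

    def recall(self, query, k=3):
        # Naive keyword-overlap scoring stands in for real vector search.
        q = set(query.lower().split())
        scored = sorted(self.items,
                        key=lambda t: -len(q & set(t.lower().split())))
        return scored[:k]

# The advertised "3 lines": create, add, recall.
mem = SimpleMemory(os.path.join(tempfile.mkdtemp(), "mem.json"))
mem.add("user prefers concise answers in Python")
print(mem.recall("what does the user prefer"))
```

A real memory layer would swap the keyword overlap for embedding similarity and the JSON file for a vector store, but the three-call surface (construct, write, recall) is the point of the sketch.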
Grov automatically captures context from your private AI sessions and syncs it to a shared team memory, auto-injecting relevant memories across developers and future sessions to save tokens and time spent on tasks.
Git-based version-control file system for joint management of code, data, and models, and the relationships among them.
Distributed data mesh for real-time access, migration, and replication across diverse databases — built for AI, security, and scale.
Stop paying for AI APIs during development. LocalCloud runs everything locally - GPT-level models, databases, all free.
TME: Structured memory engine for LLM agents to plan, rollback, and reason across multi-step tasks.
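The plan/rollback idea behind a structured memory engine can be sketched as a checkpoint-and-restore pattern. The `TaskMemory` class below is a hypothetical illustration, not TME's actual interface.

```python
import copy

class TaskMemory:
    """Hypothetical structured memory with checkpoint/rollback for
    multi-step agent tasks (illustrative, not TME's real API)."""

    def __init__(self):
        self.state = {"steps": [], "facts": {}}
        self._checkpoints = []

    def checkpoint(self):
        # Snapshot the state before a risky segment of the plan.
        self._checkpoints.append(copy.deepcopy(self.state))

    def record(self, step, **facts):
        self.state["steps"].append(step)
        self.state["facts"].update(facts)

    def rollback(self):
        # Restore the last known-good snapshot if a step fails validation.
        self.state = self._checkpoints.pop()

mem = TaskMemory()
mem.record("parse input", rows=100)
mem.checkpoint()
mem.record("apply transform", rows=0)  # bad result from a failed step
mem.rollback()                         # state is back to the checkpoint
```

Deep-copying on checkpoint keeps snapshots independent of later mutations; a production engine would likely use structural sharing or a write-ahead log instead.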
A curated list of awesome tools, frameworks, platforms, and resources for building scalable and efficient AI infrastructure, including distributed training, model serving, MLOps, and deployment.
NPU-powered on-device AI mobile applications using Melange
Predictive memory layer for AI agents. MongoDB + Qdrant + Neo4j with multi-tier caching, custom schema support, and GraphQL. 91% Stanford STARK accuracy, <100 ms on-device retrieval
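The multi-tier caching idea — a fast in-process tier in front of slower database tiers — can be sketched generically. The `TieredRetriever` below is a hypothetical illustration: a dict-backed LRU stands in for the fast tier, and a plain callable stands in for the MongoDB/Qdrant/Neo4j backends.

```python
import time
from collections import OrderedDict

class TieredRetriever:
    """Hypothetical two-tier cache in front of a slow retrieval backend."""

    def __init__(self, backend, capacity=128):
        self.backend = backend       # slow tier (e.g. a database lookup)
        self.cache = OrderedDict()   # fast in-process tier (LRU order)
        self.capacity = capacity
        self.hits = 0
        self.misses = 0

    def get(self, key):
        if key in self.cache:
            self.cache.move_to_end(key)  # mark as most recently used
            self.hits += 1
            return self.cache[key]
        self.misses += 1
        value = self.backend(key)        # fall through to the slow tier
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        return value

def slow_lookup(key):
    time.sleep(0.01)  # simulate database latency
    return f"doc-for-{key}"

r = TieredRetriever(slow_lookup, capacity=2)
for k in ("a", "a", "b", "c", "a"):
    r.get(k)
```

With capacity 2, the second `"a"` hits the fast tier; the final `"a"` misses because `"c"` evicted it — the hit rate is what multi-tier designs tune for.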
Production-ready AI infrastructure: RAG with smart reindexing, persistent memory, browser automation, and MCP integration. Stop rebuilding tools for every AI project.
GPU-aware inference mesh for large-scale AI serving
CX Linux — AI-powered Linux OS. Natural language system administration for Ubuntu & Debian. The AI layer for Linux infrastructure.
UniRobot is an embodied intelligent software framework that integrates the robot brain (data, models, model training) with the robot body (perception, model inference, control).
ARF is an agentic reliability intelligence platform that separates decision intelligence (OSS) from governed execution (Enterprise), enabling autonomous operations with deterministic safety guarantees.
A production-grade LLM gateway that abstracts multiple model providers, implements intelligent routing, caching, retries, and observability to deliver reliable, cost-aware LLM access.
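The gateway pattern named above — provider abstraction, routing, caching, and retries — can be sketched in a few dozen lines. The `LLMGateway` class and the provider callables below are invented for illustration; they do not reflect the repository's actual API.

```python
class LLMGateway:
    """Hypothetical gateway over multiple model providers with a
    response cache, per-provider retries, and ordered fallback."""

    def __init__(self, providers, max_retries=2):
        self.providers = providers      # list of (name, callable), ordered by preference
        self.max_retries = max_retries
        self.cache = {}

    def complete(self, prompt):
        if prompt in self.cache:        # response cache avoids repeat spend
            return self.cache[prompt]
        last_err = None
        for name, call in self.providers:
            for attempt in range(self.max_retries + 1):
                try:
                    result = call(prompt)
                    self.cache[prompt] = result
                    return result
                except Exception as err:  # retry this provider, then fall through
                    last_err = err
        raise RuntimeError("all providers failed") from last_err

def flaky_provider(prompt):
    raise TimeoutError("upstream timeout")

def stable_provider(prompt):
    return f"echo: {prompt}"

gw = LLMGateway([("fast-but-flaky", flaky_provider),
                 ("fallback", stable_provider)])
print(gw.complete("hello"))
```

A production gateway would add per-provider cost/latency scoring to the routing loop and exponential backoff between retries, but the control flow — cache check, ordered providers, bounded retries, fallback — is the core of the pattern.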
This repository contains a list of various service-specific Azure Landing Zone implementation options.
The memory layer for software teams. Search everything your engineering org knows—code, PRs, docs, decisions—with answers that cite their sources.
AI Infrastructure Engineer Learning Track - Production ML infrastructure curriculum (2-4 years experience)