A developer describes building a private AI assistant memory system from three components: a Karpathy-style LLM Wiki for durable structured knowledge; gbrain, which distills personal work signals from GitHub PRs, emails, Slack, and other sources; and gstack, which applies that memory in daily Rails coding workflows. The architecture keeps raw private data local-only, redacts anything before public export, and enforces mandatory human review gates before merging. The result is an AI coding assistant grounded in the developer's own review preferences, Rails patterns, and shipping standards rather than generic defaults. The post also serves as a services pitch for CTO AI transformation work.
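The redact-before-export step described above could be sketched as a simple pattern pass over text before it leaves the local machine. This is an illustrative sketch only; the pattern set, the `redact` method, and the replacement labels are assumptions, not the post's actual tooling.

```ruby
# Hypothetical redaction pass: replace private identifiers with neutral
# labels before any text is allowed out of the local-only store.
# The patterns here are illustrative, not the author's real rule set.
REDACTION_PATTERNS = {
  /\b[\w.+-]+@[\w-]+\.[\w.]+\b/            => "[EMAIL]", # email addresses
  %r{\bhttps?://github\.com/\S+/pull/\d+}   => "[PR]",    # GitHub PR links
  %r{\bhttps?://\S+\.slack\.com/\S+}        => "[SLACK]"  # Slack links
}.freeze

def redact(text)
  # Apply each pattern in order; hashes preserve insertion order in Ruby.
  REDACTION_PATTERNS.reduce(text) { |t, (pattern, label)| t.gsub(pattern, label) }
end

puts redact("ping alice@example.com re https://github.com/foo/bar/pull/12")
```

A real pipeline would also need allowlists and a human check, since regex redaction alone misses context-dependent secrets; the post's mandatory review gates serve that role.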
Table of contents
The Shape
What Goes Into gbrain
Useful Signal Sources
Redact First
gbrain First
Adding an LLM Wiki
Importing gbrain Into llmwiki
What I Tested Locally
Local LLM Setup
How gbrain and llmwiki Work Together
How gstack Uses It
Mandatory Review Gates
Miru Review and Shipping Example
More Workflow Examples
Rails Workflow Example
Blog Workflow Example
What I Do Not Want
Why This Matters
Available for CTO AI Transformation and Rails AI Work