dltHub previews an ontology-driven data modeling toolkit within its AI Workbench. The core problem it addresses is the chaos of integrating multiple data sources (e.g., Slack, HubSpot, event databases) that use different naming conventions for the same real-world entities. Rather than patching together ad-hoc SQL queries or relying on LLMs that lack business context, the toolkit introduces two structured layers: a Taxonomy (defining what entities are) and a Business Ontology (encoding how they behave and relate). Together these generate a Canonical Data Model, a technology-neutral, semantically rich representation of business entities. This "definition-first" approach makes downstream LLM reasoning more reliable, documents tribal knowledge, and enables faster, higher-quality data modeling from day one. The toolkit integrates with AI coding assistants such as Claude Code and Cursor, and is currently in a design-partnership stage ahead of a Q2 release.
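To make the two-layer idea concrete, here is a minimal sketch of how a taxonomy (what an entity is) and an ontology (how source systems map onto it) could combine into a canonical model. All names here (`Entity`, `FieldMapping`, `CanonicalModel`, the HubSpot/Slack field names) are hypothetical illustrations, not dltHub's actual API:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Entity:
    """Taxonomy layer: defines what a business entity is."""
    name: str
    description: str

@dataclass
class FieldMapping:
    """Ontology layer: how a source system's field relates to the canonical name."""
    source: str           # e.g. "hubspot", "slack"
    source_field: str     # name used inside that system
    canonical_field: str  # technology-neutral canonical name

@dataclass
class CanonicalModel:
    """Canonical Data Model generated from taxonomy + ontology."""
    entity: Entity
    mappings: list[FieldMapping] = field(default_factory=list)

    def canonicalize(self, source: str, record: dict) -> dict:
        """Rename a source record's keys to their canonical equivalents."""
        rename = {m.source_field: m.canonical_field
                  for m in self.mappings if m.source == source}
        return {rename.get(key, key): value for key, value in record.items()}

# The "three-headed customer": each source names the same entity differently.
customer = CanonicalModel(
    entity=Entity("Customer", "A person or company that buys from us"),
    mappings=[
        FieldMapping("hubspot", "contact_email", "email"),
        FieldMapping("slack", "user_email", "email"),
    ],
)

print(customer.canonicalize("hubspot", {"contact_email": "a@example.com"}))
# {'email': 'a@example.com'}
```

The point of the sketch is the ordering of concerns: definitions come first, and the renaming code is a mechanical consequence of them, which is also what makes the model a stable context for LLM reasoning.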

8-minute read. From dlthub.com.
Table of contents
- The Three-Headed Customer
- Why "Just Use an LLM" Fails
- The what and the how: Taxonomy and Ontology
- From Ontology to a Canonical Data Model
- Why This Matters: Definition First, Code as Consequence
- Try It