Full project description
OpenGolin.AI is an on-premise AI platform that gives organisations full ChatGPT-level capabilities without a single byte of data ever leaving their network. Deploy with one command on any Docker-compatible Linux server. Connect any open-source LLM (Llama, Mistral, Qwen, DeepSeek, Phi, Gemma) running locally via Ollama or vLLM with GPU acceleration. Start chatting in minutes.

Core capabilities:
— Multi-model AI chat: switch between models per conversation, compare outputs, and run specialised models for code, legal, medical, or multilingual tasks.
— RAG (Retrieval-Augmented Generation): upload documents and get accurate, cited answers grounded in your own data, powered by Qdrant vector search with hybrid dense + keyword retrieval.
— Text-to-SQL: connect PostgreSQL, MySQL, or Oracle databases and query them in plain English via the Model Context Protocol (MCP). Business users ask questions; the AI writes and runs the SQL.
— AI Agents (OpenClaw): deploy autonomous agents that automate workflows such as report summarisation, system monitoring, and scheduled tasks, governed by a built-in policy engine with model allowlists, skill allowlists, and drift auto-healing.

Security is architectural, not procedural:
— Air-gap certified: all outbound connections to external AI services are actively blocked at the transport layer.
— Encryption at rest (AES-128-CBC + HMAC-SHA256) for conversations and credentials.
— Zero telemetry: no analytics, no usage tracking, no phone-home.
— Full governance layer: role-based access, per-model permissions, usage audit logs, and data-retention policies you configure.

Built for regulated industries where data sovereignty is not optional: healthcare (HIPAA), finance, legal, government, and manufacturing. OpenGolin.AI replaces per-user cloud AI subscriptions with a flat annual license. The software runs on your hardware, works fully offline, and keeps operating even after a license expires. Updates are delivered as secure OTA container pulls via GHCR.
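To make the hybrid retrieval idea concrete, here is a minimal sketch of one way dense and keyword signals can be fused into a single ranking score. The weighted-sum fusion, the `alpha` parameter, and the function names are illustrative assumptions, not OpenGolin's actual implementation (Qdrant supports richer fusion strategies):

```python
from math import sqrt

def cosine(a, b):
    # Dense (semantic) similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def keyword_score(query, doc):
    # Sparse (keyword) signal: fraction of query terms present in the document.
    q = set(query.lower().split())
    d = set(doc.lower().split())
    return len(q & d) / len(q) if q else 0.0

def hybrid_score(q_vec, d_vec, query, doc, alpha=0.7):
    # alpha weights the dense signal against the keyword signal.
    return alpha * cosine(q_vec, d_vec) + (1 - alpha) * keyword_score(query, doc)
```

Documents are then ranked by `hybrid_score`, so a passage that matches semantically but shares no exact terms can still outrank a purely lexical match, and vice versa.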
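The Text-to-SQL flow can be sketched end to end: question in, generated SQL out, results back. This stand-in uses an in-memory SQLite database and a hard-coded `generate_sql` in place of the local LLM and the MCP database connection; all table and function names are hypothetical:

```python
import sqlite3

def generate_sql(question: str) -> str:
    # Stand-in for the local LLM call; in the real flow the model
    # produces this SQL from the user's plain-English question.
    return "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"

def answer(question: str, conn: sqlite3.Connection):
    sql = generate_sql(question)
    return conn.execute(sql).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("EU", 100.0), ("EU", 50.0), ("US", 200.0)])
rows = answer("Total sales per region?", conn)
```

The business user only ever sees the question and the result rows; the generated SQL runs against the governed database connection.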
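An allowlist check of the kind the agent policy engine performs might look like the following sketch; `PolicyEngine`, its method names, and the example model/skill identifiers are assumptions for illustration, not OpenGolin's API:

```python
class PolicyEngine:
    """Minimal allowlist gate: an agent action runs only if both the
    model and the skill it wants to invoke are explicitly permitted."""

    def __init__(self, model_allowlist, skill_allowlist):
        self.models = set(model_allowlist)
        self.skills = set(skill_allowlist)

    def permits(self, model: str, skill: str) -> bool:
        # Deny by default: anything not on both allowlists is blocked.
        return model in self.models and skill in self.skills

policy = PolicyEngine(model_allowlist={"llama3:8b"},
                      skill_allowlist={"summarise_report"})
```

Deny-by-default is the key design choice: a newly added model or skill does nothing until an administrator explicitly allowlists it.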
One platform. Your models. Your data. Your servers. Your rules.
Why it works on Bittensor
OpenGolin.AI is already the interface enterprises use to interact with AI — multi-model chat, RAG, agents, and database queries, all behind a governed, on-premise UI their teams already know. We want to become the enterprise gateway to the Bittensor ecosystem.

Today, subnets produce powerful capabilities — inference, storage, data, fine-tuning — but enterprises cannot easily discover, connect to, or govern them. OpenGolin can bridge that gap. Imagine a single platform where a compliance officer queries a Bittensor-powered LLM, a data analyst runs Text-to-SQL against a decentralised storage subnet, and an AI agent orchestrates tasks across multiple subnets — all with role-based access, audit logs, and data policies enforced locally.
