Quasar is a long-context LLM subnet built by SILX AI to solve one of the most fundamental problems in modern artificial intelligence: memory.
Despite rapid progress, most large language models remain surprisingly forgetful. As context grows, attention costs explode, performance degrades, and systems are forced to rely on brittle shortcuts like summarisation or retrieval. Important details are lost, reasoning becomes shallow, and extended tasks break down. For SILX’s founders, this was not a UX flaw - it was an architectural failure.
Quasar is designed to break that bottleneck at the protocol level.
What it does
Quasar enables models to maintain coherence, accuracy, and positional understanding over very long sequences by replacing quadratic attention with linear-scaling memory mechanisms. Instead of compressing or discarding information, Quasar allows models to ingest and reason over entire books, codebases, research archives, or long-running agent states without collapsing under cost or latency.
This makes Quasar especially well suited to deep research, autonomous agents, continuous reasoning, and any workload where losing context means losing intelligence.
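To make the scaling gap concrete, the sketch below compares idealised quadratic and linear attention-score costs at a few context lengths. The unit-operation model is a deliberate simplification for illustration, not a Quasar benchmark.

```python
def attention_cost(seq_len, quadratic):
    """Rough per-layer score-computation cost in unit operations."""
    return seq_len * seq_len if quadratic else seq_len

for n in (8_000, 128_000, 1_000_000):
    quad, lin = attention_cost(n, True), attention_cost(n, False)
    print(f"{n:>9} tokens: quadratic ~{quad:.1e} ops, "
          f"linear ~{lin:.1e} ops ({quad // lin:,}x gap)")
```

At a million tokens the idealised gap is a millionfold, which is why quadratic attention becomes an economic ceiling long before it becomes a mathematical one.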
How it works
At the core of Quasar is a rethinking of how sequence position and memory are handled. Rather than relying on fragile position embeddings that hard-limit usable context length, SILX removes explicit positional dependence from the model and handles long-range structure with linear-time attention mechanisms such as Hierarchical Flow Anchoring and Flowing Context attention. This allows models to scale context length without retraining or architectural failure.
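The exact designs of Hierarchical Flow Anchoring and Flowing Context attention are SILX's own and are not specified in this document, but the broader family they belong to, kernelised linear attention, can be sketched. The minimal version below uses a standard feature-map formulation as an assumption; it is not Quasar's implementation. Note that no positional embedding appears anywhere in the computation.

```python
import numpy as np

def feature_map(x):
    # elu(x) + 1: a common positive feature map used in linear attention.
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(q, k, v, eps=1e-6):
    """Non-causal linear attention over q, k, v of shape (seq_len, dim).

    Cost is O(seq_len * dim^2): the whole sequence is folded into a
    fixed-size (dim, dim_v) summary, so doubling the context length only
    doubles the compute instead of quadrupling it.
    """
    phi_q, phi_k = feature_map(q), feature_map(k)
    kv = phi_k.T @ v              # (dim, dim_v): running memory of the sequence
    z = phi_k.sum(axis=0)         # (dim,): normalisation accumulator
    return (phi_q @ kv) / (phi_q @ z + eps)[:, None]

rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((4096, 64)) for _ in range(3))
print(linear_attention(q, k, v).shape)  # (4096, 64)
```

Because the (dim, dim_v) summary can also be built incrementally, token by token, the same idea makes long-running agent state cheap to maintain.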
On Bittensor, Quasar operates as a long-context evaluation subnet. Validators generate mixed workloads that test recall across long distances (“needle-in-haystack” tasks), positional consistency, coherence, and factual accuracy, while also measuring efficiency and throughput. Anti-gaming defences, perturbation checks, and diversity incentives ensure miners are rewarded for genuine long-context capability rather than shortcuts or monoculture optimisation.
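As a sketch of the evaluation side, a minimal needle-in-haystack probe might be constructed as below. The function names, task wording, and binary scoring are illustrative assumptions; the subnet's actual validator code is not reproduced here.

```python
import random
import uuid

def make_needle_task(filler_sentences, n_sentences=5_000, seed=None):
    """Bury a unique fact at a random depth in a long context, then ask for it back."""
    rng = random.Random(seed)
    code = uuid.uuid4().hex[:8]  # fresh needle each round resists memorisation
    needle = f"The access code for archive QX is {code}."
    haystack = [rng.choice(filler_sentences) for _ in range(n_sentences)]
    haystack.insert(rng.randrange(len(haystack) + 1), needle)  # random depth probes positional robustness
    prompt = " ".join(haystack) + "\nWhat is the access code for archive QX?"
    return prompt, code

def score_recall(response, expected):
    """Binary recall score; a real validator would also weigh coherence,
    factual accuracy, latency, and perturbation consistency."""
    return 1.0 if expected in response else 0.0
```

Randomising both the needle's content and its depth is what separates a genuine recall test from a pattern a miner could overfit.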
Why it matters
Long-term memory is not optional infrastructure for advanced AI - it is a prerequisite. Systems that cannot retain context cannot reason deeply, learn continuously, or operate autonomously over time. As context windows push into the tens or hundreds of thousands of tokens, the economic and architectural limitations of traditional attention become a hard ceiling on progress.
Quasar exists to raise that ceiling. By economically incentivising real breakthroughs in long-context performance, the subnet helps ensure that extended reasoning remains viable in open, decentralised systems - not just in closed, corporate models.
Just as importantly, SILX views this work as a moral and strategic choice. By making scalable memory available to open-source models and decentralised networks, Quasar helps close the gap between proprietary AI and community-driven infrastructure. The goal is intelligence that stays present, works locally, and compounds over time - AI that remembers what it has already seen, so humans can go further than before.
Raise successful
Join us on Telegram to keep up to date and to be notified when the next subnet drops.
Crowdfund terms
Quasar seeks to raise 400 TAO in exchange for 153,846 alpha of Subnet 24.
Pledgers can choose to receive alpha, plus APY, over a 3-month period at a 25% discount, or over a 6-month period at a 40% discount.
Alpha rate reduction
At launch, this raise set the alpha price at 0.0046 TAO:
Previous alpha price: 0.0046 TAO
25% discount: 0.00345 TAO
40% discount: 0.00276 TAO
Now, all pledgers - including those who have already pledged - will receive alpha at the new reduced rate below:
New alpha price: 0.0039 TAO
25% discount: 0.002925 TAO
40% discount: 0.00234 TAO
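As a worked example under the new rates, a hypothetical 100 TAO pledge converts as follows; this is illustrative arithmetic, not an official pledge calculator.

```python
NEW_ALPHA_PRICE = 0.0039  # TAO per alpha after the rate reduction

def alpha_received(pledge_tao, discount):
    """Alpha owed for a pledge at a given discount (0.25 or 0.40)."""
    return pledge_tao / (NEW_ALPHA_PRICE * (1 - discount))

for discount in (0.25, 0.40):
    print(f"100 TAO at {discount:.0%} discount -> "
          f"{alpha_received(100, discount):,.0f} alpha")
# 100 / 0.002925 ≈ 34,188 alpha; 100 / 0.00234 ≈ 42,735 alpha
```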
The TAO raised by Quasar will be used to fund operations, with a buyback of 160 TAO over the first month after launch to maintain emissions.

