The new scaling laws mark a significant shift in how we think about scaling intelligence, drawing on key concepts from cybernetics, distributed systems, and recursive improvement. Unlike traditional approaches that rely on steady, incremental growth managed by centralized institutions, these laws treat intelligence as an emergent property of diverse, interconnected agents working collaboratively. By emphasizing continuous feedback, open access to information, and fluid exchange of knowledge, they enable ongoing innovation and sustained gains in performance. In essence, the laws advocate an open, decentralized, and flexible model of intelligence scaling, with continuous human-machine interaction, cryptoeconomic incentives (economic mechanisms based on cryptography and decentralized digital currencies), and modular coordination as the central drivers of improvements in cognitive capability, system robustness, and adaptability.
Practically, this new perspective transforms intelligence scaling from simply accumulating resources into actively cultivating an interconnected ecosystem in which diverse participants mutually enhance one another's capabilities. Cryptoeconomic incentives align individual motivations with collective goals, ensuring long-term sustainability and fairness in how knowledge is created and shared, while modular coordination enables the seamless integration of specialized components, improving interoperability and the efficiency of innovation.
Overall, these scaling laws lay a strong theoretical and practical foundation for advancing intelligence in an open, collaborative environment. By emphasizing decentralized innovation and inclusive participation, they enable the collective evolution of human-machine cognition toward greater complexity, adaptability, and shared success, making advanced cognitive systems more accessible, responsive, and closely aligned with human needs and societal priorities.
There are eight new scaling laws unlocked by Newcoin:
LAW 1: Data is Solar Energy
Data is Solar Energy fundamentally reframes data not as a finite, extractable commodity like fossil fuel, but as an infinitely renewable energy source, akin to solar power, that fuels intelligence when actively cultivated through continuous feedback loops. This directly opposes the "data wall" concept arising from viewing data as a depletable resource.
The core principle is that intelligence scaling bottlenecks arise not from exhausting data, but from failing to harness the perpetual cognitive radiation of ongoing human thought, expertise, and creativity. Traditional methods capture static snapshots, missing the dynamic nature of intelligence. LAW 1 posits that sustainable intelligence growth requires systems designed to channel, concentrate, and refine this continuous cognitive energy, transforming passive information into active, high-quality learning signals. It's about circulation and cultivation, not mere accumulation.
Recursive Evaluation: Systems like Universal Bit Ranking, enabled by Newcoin's structure, allow for the recursive evaluation of both information and its evaluators. Trust and influence are dynamic, based on validated contributions.
Stake-Weighted Validation: Mechanisms like Validators staking NCO on trusted Evaluators (Peers) who issue Base Points create an economic layer for assessing signal quality. This concentrates "cognitive energy" by amplifying high-fidelity signals and filtering noise.
Verifiable Learning Signals: Cryptographically signed records (Learning Signals) capture interactions (Input -> Output -> Feedback), preserving provenance and transforming isolated cognitive acts into reusable, verifiable knowledge units.
Contextualization via Spaces: "Spaces" provide domains where specialized knowledge can be generated, evaluated, and concentrated according to context-specific criteria, enhancing signal fidelity.
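As a concrete illustration of how these mechanisms fit together, the following minimal Python sketch chains signed Learning Signals into a verifiable stream. The field names (agent_did, feedback, parent) are illustrative assumptions rather than the Newcoin record format, and signing uses the open-source cryptography package.

```python
# A toy signed Learning Signal chain; field names are illustrative
# assumptions, not the Newcoin wire format.
import json
import hashlib
from dataclasses import dataclass, asdict
from typing import Optional
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

@dataclass
class LearningSignal:
    agent_did: str         # decentralized identifier of the contributor
    input: str             # the prompt or task received
    output: str            # the agent's response
    feedback: float        # evaluator score, e.g. normalized Base Points
    space: str             # the Space (context) the signal belongs to
    parent: Optional[str]  # hash of the prior signal, preserving provenance

def sign_signal(signal: LearningSignal, key: Ed25519PrivateKey) -> dict:
    """Canonicalize, hash, and sign a signal so any peer can verify it."""
    body = json.dumps(asdict(signal), sort_keys=True).encode()
    digest = hashlib.sha256(body).hexdigest()
    return {"body": asdict(signal), "hash": digest, "sig": key.sign(body).hex()}

# Each record chains to its parent, turning isolated cognitive acts into a
# reusable, auditable stream rather than a static snapshot.
key = Ed25519PrivateKey.generate()
s1 = sign_signal(LearningSignal("did:ex:alice", "Q1", "A1", 0.9, "math", None), key)
s2 = sign_signal(LearningSignal("did:ex:alice", "Q2", "A2", 0.7, "math", s1["hash"]), key)
```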
Research Directions: developing Dynamic Trust & Reputation Systems (e.g., using GNNs, Bayesian methods) leveraging network signals for robust, high-fidelity feedback loops; Incentivized Novelty & Creativity using rewards within RL (intrinsic motivation, QD) or generative models (LLMs/GANs) to fuel the 'cognitive reactor' with continuous high-value insights; Context-Aware Knowledge Synthesis via Transformers/KGs integrating verified signals across domains, using Federated Learning/privacy-preserving ML for secure global compounding; Verifiable & Secure Data Curation via incentivized human-AI systems (RLHF/RLAIF, active learning) with ZKPs/poisoning detection for trustworthy datasets; and Cognitive & Economic Flow Analysis applying network science/causal inference to understand diffusion and optimize energy harnessing via AI-driven governance for regenerative scaling.
LAW 2: AI Research is a Bazaar, not a Cathedral
AI Research is a Bazaar, not a Cathedral posits that open, permissionless, decentralized innovation ecosystems (a "bazaar") inherently outperform closed, hierarchical, centrally planned research institutions (a "cathedral") in developing complex, emergent technologies like AI. This draws from Eric Raymond's open-source software development analogy.
The core principle is that centralized structures create organizational bottlenecks, limit exploration scope, and suffer diminishing returns. A decentralized "bazaar" unlocks parallel exploration by numerous independent researchers, benefiting from diversity ("diversity bonus") and combinatorial innovation. Coordination emerges not from rigid hierarchy but through protocolization—standardized interfaces (like Newcoin's protocols) enabling structured collaboration, interoperability, and evolutionary selection of superior approaches without imposing central control, thus tapping the global talent pool effectively.
Incentivized Participation: NCO rewards, tied to validated contributions (measured via Base Points/WATTS within specific Spaces), motivate participation in distributed efforts like model training or benchmarking.
Protocolized Collaboration: Standardized Learning Signals and Space definitions provide a common framework for diverse agents (human/AI) to contribute, evaluate, and build upon each other's work verifiably.
Quality Control via Validation: Stake-weighted evaluation by Peers and Validators provides a mechanism to assess the quality of contributions (e.g., model updates, benchmark tasks, dataset entries), mitigating issues common in purely open systems.
Decentralized Coordination: The protocol itself, rather than a central entity, coordinates efforts by routing tasks, rewarding contributions, and establishing reputation based on validated performance.
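The quality-control mechanics can be made concrete with a short Python sketch of stake-weighted acceptance; the evaluator records, linear stake weighting, and 0.6 threshold are illustrative assumptions, not specified protocol parameters.

```python
# A minimal sketch of stake-weighted quality control; all parameters
# are illustrative assumptions.
def stake_weighted_score(evaluations: list) -> float:
    """Aggregate evaluator scores, weighting each by the NCO staked on
    that evaluator, so well-backed judgments dominate the consensus."""
    total_stake = sum(e["stake"] for e in evaluations)
    if total_stake == 0:
        return 0.0
    return sum(e["score"] * e["stake"] for e in evaluations) / total_stake

def accept_contribution(evaluations: list, threshold: float = 0.6) -> bool:
    """A contribution (model update, benchmark task, dataset entry) is
    accepted only if the stake-weighted consensus clears the threshold."""
    return stake_weighted_score(evaluations) >= threshold

evals = [
    {"peer": "did:ex:p1", "score": 0.9, "stake": 500},  # heavily backed peer
    {"peer": "did:ex:p2", "score": 0.3, "stake": 50},   # lightly backed peer
]
assert accept_contribution(evals)  # consensus ~0.85 clears the 0.6 threshold
```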
Research Directions: Incentivized & Robust Federated Learning leverages NCO/WATTS for coordinating large-scale training, exploring privacy (DP, Secure Aggregation) and robustness against non-IID/Byzantine data; Dynamic & Adversarial Benchmarking creates incentivized spaces for novel test generation (extending HELM) with adversarial/OOD focus using LLMs; Verifiable & High-Quality Dataset Curation uses incentives for reliable data sourcing (improving LAION) with provenance and bias mitigation; and Decentralized & Continuous Safety Auditing establishes markets for red teaming (manual/LLM-automated) integrated with formal methods/interpretability, building infrastructure for a distributed, validated, and incentivized open research ecosystem.
LAW 3: Not your keys, not your Intelligence
Not your keys, not your Intelligence establishes that genuine intelligence scaling requires agents (human or AI) to possess sovereign control over their computational identity, data, and interactions ("keys"), echoing the cryptocurrency maxim "not your keys, not your coins" and extending it from assets to knowledge.
The core principle is that centralizing control in platforms creates dependency, misaligned incentives ("innovation tax"), and undermines the feedback loops essential for learning and adaptation. True intelligence scaling necessitates an agent-centric architecture where autonomous agents control their epistemic boundaries (information flow) via cryptographic self-custody. Intelligence emerges from distributing computation across sovereign agents exchanging validated learning signals under their own authority, not from central data accumulation.
Cryptographic Provenance: DIDs and signed Learning Signals anchor knowledge claims to verifiable identities, creating an unforgeable chain of epistemic custody.
Agent-Controlled Interaction: Agents interact via protocols within Spaces, controlling data sharing through permissions rather than platform dictates.
Validated Trust: Reputation (WATTS) and trust emerge dynamically from validated interaction history (Base Points, staking) rather than centralized credentialing.
Incentive Alignment: Rewards (NCO) are tied to validated contributions, aligning agent incentives with producing valuable, verifiable intelligence signals under their own control.
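A brief Python sketch illustrates the agent-centric stance: an agent that custodies its own key and releases, per Space, only the fields it has granted. The SovereignAgent class and its permission model are hypothetical constructs, not the Newcoin protocol API; signing again uses the open-source cryptography package.

```python
# A toy self-custodied agent; the permission model and method names are
# hypothetical, not the Newcoin protocol API.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

class SovereignAgent:
    """Holds its own keys and decides, per Space, what it shares; no
    platform can release data the agent has not granted."""
    def __init__(self, did: str):
        self.did = did
        self._key = Ed25519PrivateKey.generate()  # self-custodied key
        self._grants: dict = {}                   # space -> allowed fields

    def grant(self, space: str, fields: set) -> None:
        self._grants[space] = fields

    def share(self, space: str, record: dict):
        """Release only the granted fields, signed under the agent's key."""
        allowed = {k: v for k, v in record.items()
                   if k in self._grants.get(space, set())}
        payload = json.dumps(allowed, sort_keys=True).encode()
        return payload, self._key.sign(payload)

    def verify(self, payload: bytes, sig: bytes) -> bool:
        """In practice any peer holding the agent's public key can check
        provenance; shown here on the agent itself for brevity."""
        try:
            self._key.public_key().verify(sig, payload)
            return True
        except InvalidSignature:
            return False

agent = SovereignAgent("did:ex:alice")
agent.grant("medical-qa", {"output", "feedback"})  # the input stays private
payload, sig = agent.share(
    "medical-qa", {"input": "private notes", "output": "A", "feedback": 0.8})
assert agent.verify(payload, sig)
```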
Research Directions: creating Decentralized Validation Markets using cryptoeconomics for objective AI capability evaluation (safety, robustness, OOD) potentially via LLM judges or formal methods; enhancing Agent-Centric & Secure Federated Learning with verifiable contributions/rewards, advanced privacy (HE, MPC, DP), and robustness for edge devices; enabling Secure Multi-Agent Coordination (MAS) via validated trust/reputation (GNNs) and efficient task allocation (auctions, MARL) for complex distributed tasks; and establishing Self-Sovereign Knowledge Provenance & Licensing using Learning Signals/VCs/DIDs to track lineage/value and enable automated royalties via smart contracts in decentralized marketplaces.
LAW 4: The Society of Minds for Agents
The Society of Minds for Agents proposes that advanced intelligence emerges not from scaling single, monolithic models but from the orchestrated interaction of diverse, specialized cognitive agents working in concert, drawing inspiration from Marvin Minsky's The Society of Mind.
The core principle is that monolithic models face scaling limits (cost, energy), epistemic fragility, and resistance to adaptation. True cognitive power arises from coordinating specialized components (like O1 or DeepSeek's modularity), each independently optimized and composed dynamically at runtime. Intelligence manifests as an ecosystem of agents where meaning emerges from interaction, transforming AI development from parameter scaling to orchestration.
Modularity & Specialization: Spaces allow for the definition of contexts where specialized agents can operate and be evaluated according to domain-specific criteria.
Coordination Protocols: Standardized interactions via Learning Signals and protocols enable heterogeneous agents to collaborate effectively.
Trust & Reputation: Mechanisms like Base Points, WATTS, and NCO staking allow the system to dynamically assess the reliability and capability of specialized agents for effective composition and routing.
Verifiable Composition: Cryptographic provenance tracks interactions between agents, allowing analysis and optimization of composite cognitive systems.
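As a sketch of orchestration under these mechanisms, the following Python toy routes each task to the specialist with the strongest validated reputation in the relevant Space. The Orchestrator class and its registry are hypothetical, and real routing would also weigh cost, latency, and availability.

```python
# A toy reputation-weighted orchestrator; the registry and routing rule
# are illustrative assumptions.
from collections import defaultdict

class Orchestrator:
    """Composes a 'society of minds' by routing each subtask to the
    specialist with the strongest validated track record in its Space."""
    def __init__(self):
        self.watts = defaultdict(float)  # (agent, space) -> reputation

    def record_validation(self, agent: str, space: str, base_points: float):
        # Validated contributions accrue reputation within that Space only.
        self.watts[(agent, space)] += base_points

    def route(self, space: str, candidates: list) -> str:
        # Dynamic composition: pick the most reputable specialist at runtime.
        return max(candidates, key=lambda a: self.watts[(a, space)])

orch = Orchestrator()
orch.record_validation("planner-v2", "planning", 120.0)
orch.record_validation("coder-v1", "planning", 40.0)
assert orch.route("planning", ["planner-v2", "coder-v1"]) == "planner-v2"
```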
Research Directions: fostering Trust/Incentives for Open MAS using stake-weighted validation for cooperation (emergent communication), robust task allocation (auctions, MARL like MADDPG/QMIX), and Sybil resistance; creating Dynamic Cognitive Marketplaces & Orchestration where LLM/smart contract orchestrators use reputation (GNNs) to compose agents (MoE routing, tool-use); running High-Fidelity Socio-Economic Simulations (Generative Agents) with cryptoeconomics to study emergent behavior; designing Verifiable Feedback for Modular Alignment via decentralized oversight/Constitutional AI/debate with interpretability; and enabling Distributed Trust for Neuro-Symbolic AI by validating interactions between modules (LLMs+KGs, planners) for robust, explainable composition.
LAW 5: To iterate is human, to recurse divine
To iterate is human, to recurse divine distinguishes between linear improvement against fixed criteria (iteration) and exponential improvement achieved by enhancing the system's ability to evaluate and redefine quality itself (recursion). This recasts Alexander Pope's "to err is human, to forgive divine" for intelligence evolution.
The core principle is that systems optimizing against static benchmarks hit diminishing returns and cannot transcend their own evaluation frameworks. Genuine intelligence requires meta-level learning: improving the ability to judge quality and redefine excellence. Recursion creates compounding returns by refining evaluation mechanisms (akin to improving the judge in RLHF), leading to non-linear growth as the system gets better at getting better.
Standardized, Verifiable Signals: Learning Signals package interactions (input, output, evaluation, context) into transferable knowledge units, enabling feedback across diverse systems.
Weighted Feedback & Evaluation: Contributions and evaluations are weighted by validated accuracy and stake (Base Points, WATTS, NCO staking), ensuring influence is based on merit. Evaluations themselves are subject to evaluation (e.g., Validator staking on Peers).
Evolving Quality Standards: The collective, stake-weighted judgments within Spaces allow conceptions of quality and value to evolve dynamically, rather than being fixed externally.
Traceability: Cryptographic provenance allows the history of learning and evaluation to be analyzed, enabling meta-assessment of the learning process itself.
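One way to picture recursion over evaluation is the following Python sketch, in which an evaluator's influence is updated by its agreement with the stake-weighted consensus. The exponential-moving-average rule and learning rate are illustrative assumptions, not a specified Newcoin mechanism.

```python
# A toy recursive-evaluation update: evaluators are themselves scored,
# so influence compounds with demonstrated judgment. The update rule is
# an illustrative assumption.
def update_evaluator_weight(weight: float, evaluator_score: float,
                            consensus_score: float, lr: float = 0.1) -> float:
    """Move an evaluator's weight toward its recent agreement with the
    stake-weighted consensus; accurate judges gain influence over time."""
    agreement = 1.0 - abs(evaluator_score - consensus_score)  # in [0, 1]
    return (1 - lr) * weight + lr * agreement

# Iteration improves outputs against fixed judges; recursion improves the
# judges themselves, so the standard of quality is also learned.
w = 0.5
for evaluator_score, consensus in [(0.9, 0.85), (0.8, 0.82), (0.2, 0.8)]:
    w = update_evaluator_weight(w, evaluator_score, consensus)
```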
Research Directions: Recursive Self-Correction & Refinement where agents use validated feedback (Learning Signals/Base Points) for self-critique (Reflexion, Self-Refine, Constitutional AI) grounded in economics; Learning Dynamic Evaluation Standards & Reward Models via meta-learning/online learning adapting to consensus, using RLAIF/IRL on economic signals; Decentralized AI Self-Improvement & AutoML with agents outsourcing tasks (code gen, NAS) validated by the network; Economically-Weighted Continual & Lifelong Learning adapting based on validated signals to mitigate forgetting; and Optimizing Multi-Level Evaluation Strategies using MARL/game theory/mechanism design for agents navigating the recursive validation hierarchy (Judgment GPT concepts).
LAW 6: Incentive is All You Need
Incentive is All You Need argues that properly designed cryptoeconomic incentives, aligning individual self-interest with collective epistemic progress, are the fundamental mechanism required to coordinate and scale robust, high-quality collective intelligence. This deliberately inverts the focus from computation ("Attention is All You Need") to the economic substrate.
The core principle is that misaligned incentives, not computational limits, are the primary bottleneck constraining intelligence quality and propagation. Centralized systems often create parasitic relationships. Sustainable growth requires game-theoretical pressure where rational pursuit of rewards by all participants (human/AI) naturally selects for truthful, valuable contributions and penalizes manipulation, creating a self-reinforcing flywheel of intelligence production and validation.
Token-Weighted Validation: NCO staking (with non-refundable fees) and WATTS reputation create economic consequences for contributions and evaluations, aligning incentives with accuracy.
Reward Distribution: Proportional NCO rewards based on validated contributions (Base Points -> WATTS) directly incentivize the production of high-quality intelligence signals.
Transparent Attribution: Learning Signals provide a verifiable record, enabling fair reward distribution based on contribution impact and solving attribution problems.
Competitive Cooperation: The system fosters competition among generators for rewards and among validators for accuracy, creating evolutionary pressure towards higher quality.
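The reward mechanics can be illustrated with a minimal Python sketch that splits an epoch's NCO pool pro rata over validated Base Points; the epoch structure and function names are hypothetical simplifications.

```python
# A toy proportional reward split; the epoch-based payout is an
# illustrative assumption.
def distribute_rewards(base_points: dict, epoch_pool_nco: float) -> dict:
    """Split an epoch's NCO pool pro rata over validated Base Points, so
    rational reward-seeking selects for high-quality contributions."""
    total = sum(base_points.values())
    if total == 0:
        return {agent: 0.0 for agent in base_points}
    return {agent: epoch_pool_nco * pts / total
            for agent, pts in base_points.items()}

# Paired with non-refundable staking fees, dishonest evaluation becomes
# net-negative while accurate contribution compounds.
payouts = distribute_rewards({"alice": 300.0, "bob": 100.0}, epoch_pool_nco=1000.0)
assert payouts == {"alice": 750.0, "bob": 250.0}
```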
Research Directions: Cryptoeconomic Mechanism Design uses Algorithmic Game Theory to tune parameters for alignment and value elicitation (IRL/preference learning on economic signals), connecting to scalable oversight; Modeling Emergent Collective Intelligence employs ABM/network science/GNNs to simulate dynamics and predict outcomes under NCO/WATTS incentives; Resilience Against Strategic Manipulation evaluates protocol robustness against adversarial AI using anomaly detection/formal verification; Incentive-Based Task Routing develops RL/auction-based algorithms using real-time signals for optimal allocation; and Long-Term Knowledge Curation uses graph algorithms (PageRank/GNNs) on Learning Signals for verifiable attribution and incentivizing foundational work.
LAW 7: Be Fruitful and Multiply
Be Fruitful and Multiply frames intelligence scaling as an open-ended evolutionary process, akin to biological reproduction, where diverse agents continuously compete, adapt, and recombine based on reward signals, naturally selecting for efficiency and value creation. It takes its name from the biblical imperative, contrasting generative multiplication with mere expansion.
The core principle is that intelligence emerges and adapts most effectively through evolutionary dynamics, not monolithic growth which hits resource/adaptability limits. This law favors optimizing for efficiency (maximum value per unit cost) through selection pressure, mirroring nature. It breaks the "data wall" by replacing finite datasets with self-generating, validated Learning Signals within a dynamic ecosystem, leveraging requisite variety for adaptability.
Economic Fitness Landscape: NCO rewards tied to validated performance (Base Points/WATTS) create a clear fitness function for agent adaptation.
Selection Pressure: Stake-weighted validation acts as a selection mechanism, amplifying successful strategies and pruning ineffective or costly ones. Non-refundable staking fees add risk signals.
Environmental Niches: Spaces define specific environments where specialized agents can evolve and compete.
Recombination & Inheritance: Verifiable Learning Signals act as "genes," allowing successful cognitive strategies/modules to be identified, reused, and recombined by other agents.
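A toy Python sketch of these dynamics: agents are selected by economic fitness (validated value per unit cost) and survivors recombine strategies. The agent records and the recombination operator are deliberately simplistic illustrations.

```python
# A toy evolutionary loop on an economic fitness landscape; agent records
# and the recombination operator are illustrative assumptions.
import random

def fitness(agent: dict) -> float:
    """Economic fitness = validated value created per unit of cost, so
    selection pressure favors efficiency, not raw scale."""
    return agent["nco_earned"] / max(agent["compute_cost"], 1e-9)

def evolve(population: list, keep: int = 2) -> list:
    survivors = sorted(population, key=fitness, reverse=True)[:keep]
    children = []
    for _ in range(len(population) - keep):
        a, b = random.sample(survivors, 2)
        # Recombination: verifiable Learning Signals let successful
        # strategies ("genes") be identified and reused by offspring.
        children.append({"strategy": a["strategy"] | b["strategy"],
                         "nco_earned": 0.0, "compute_cost": 1.0})
    return survivors + children

pop = [{"strategy": {"rag"}, "nco_earned": 90.0, "compute_cost": 30.0},
       {"strategy": {"cot"}, "nco_earned": 50.0, "compute_cost": 10.0},
       {"strategy": {"brute"}, "nco_earned": 100.0, "compute_cost": 100.0}]
pop = evolve(pop)  # the efficient strategies survive and recombine
```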
Research Directions: Cryptoeconomic Evolutionary Algorithms using EA techniques (Neuroevolution, GP, QD/MAP-Elites) on the economic fitness landscape for multi-objective optimization (value, efficiency, robustness) integrated with AutoML; Emergent Open-Ended Learning & Artificial Life exploring autonomous task/environment generation (POET, Go-Explore) grounded by economic validation; Co-evolutionary Dynamics & Game Theory modeling agent roles (EGT, MARL, PSRO) to study arms races and cooperation; Automated Discovery & Composition of Cognitive Modules mining Learning Signals (Library Learning, Skill Discovery, Meta-Learning) for program synthesis; and Steering Evolution via Incentives using Automated Mechanism Design/meta-optimization/AI Safety methods to shape the fitness landscape.
LAW 8: Mutually Assured Thriving
Mutually Assured Thriving inverts the Cold War concept of destruction into a framework for collective flourishing through interdependence, proposing that AI alignment and governance should emerge dynamically from continuous, weighted consensus within a hybrid human-AI ecosystem.
The core principle is that static, top-down alignment rules are brittle and fail to capture pluralistic or evolving human values. Instead, alignment should be an emergent property of the system's operation. Every network action (generation, evaluation, validation) functions as a governance signal, perpetually shaping shared values through a recursive market of epistemic exchange, ensuring AI remains tethered to dynamically evolving, collectively determined beneficial outcomes.
Continuous Consensus: Stake-weighted validation (Base Points/WATTS/NCO staking) acts as a continuous, dynamic consensus mechanism over value and quality, reflecting aggregate participant preferences.
Weighted Feedback: The system allows for graded, multi-dimensional feedback (qualitative signals + quantitative Base Points), capturing nuanced values beyond simple rewards.
Emergent Value Calibration: Shared values and norms arise implicitly from the aggregate patterns of validated interactions within Spaces, rather than being explicitly dictated.
Endogenous Alignment Pressure: Incentives (NCO/WATTS) reward contributions that align with the emergent consensus, making alignment an economically rational strategy rather than an external constraint.
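As a minimal illustration, the following Python sketch treats each validated evaluation as a governance signal and computes the alignment target as a rolling, stake-weighted aggregate; the window size and signal shape are illustrative assumptions.

```python
# A toy continuous-consensus aggregator; window size and signal shape
# are illustrative assumptions.
from collections import deque

class ContinuousConsensus:
    """Every validated evaluation doubles as a governance signal; the
    alignment target is the moving, stake-weighted aggregate, not a
    fixed top-down rule."""
    def __init__(self, window: int = 1000):
        self.signals = deque(maxlen=window)  # (value_score, stake) pairs

    def submit(self, value_score: float, stake: float) -> None:
        self.signals.append((value_score, stake))

    def target(self) -> float:
        total = sum(stake for _, stake in self.signals)
        if total == 0:
            return 0.0
        return sum(v * s for v, s in self.signals) / total

consensus = ContinuousConsensus()
consensus.submit(0.9, stake=200)  # participants continuously reweight
consensus.submit(0.4, stake=50)   # the collective notion of "beneficial"
print(round(consensus.target(), 2))  # 0.8
```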
Research Directions: Dynamic Alignment via Continuous Feedback using decentralized RLHF/RLAIF variants with stake-weighted signals, exploring online/continual learning and scalable oversight; Learning Pluralistic Values through distributional RL/preference modeling/IRL/IRD on rich network signals, addressing fairness/bias; Trust & Coordination for Thriving designing protocols with reputation/staking for stable cooperation (MARL, game theory) in resilient MAS; and Emergent Governance Analysis using complex systems/network science/computational social choice to understand norm formation and treat the protocol as a dynamic "Super-Alignment DAO" (exploring liquid democracy/futarchy) with formal verification.