Research & Methodology

Ancient Algorithms
Tested Against Modern ML

We tested whether the King Wen I-Ching sequence's anti-habituation properties improve neural network training. The answer is no. The sequence has genuine statistical structure—confirmed by Monte Carlo analysis against 100,000 random baselines—but these properties destabilize gradient-based optimization rather than helping it.

By Augustin Chan with AI · Published January 2025 · Updated March 2026

Research Materials


Research Paper

Negative result: the King Wen sequence has genuine statistical properties, but they do not improve neural network training. Tested via learning-rate (LR) modulation, curriculum ordering, and seed-sensitivity analysis on two platforms.


arXiv Preprint

Published in cs.LG (Machine Learning). The paper reports a rigorous negative result: the King Wen sequence's anti-habituation properties are statistically real but do not help neural network training. Includes experiments on two platforms with a 30-seed sensitivity analysis.

Results

Experimental Findings

LR Modulation

Using the King Wen surprise profile as a learning-rate modulation signal degrades performance at all tested amplitudes (0.15, 0.3, 0.5), performing worse than both the random and Shao Yong controls.
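
A minimal sketch of this kind of LR modulation, assuming the surprise profile is a 64-element array normalized to [0, 1] that repeats every 64 optimizer steps. The stand-in profile below is random; the real profile is derived from the King Wen sequence, and the function names here are illustrative, not the paper's.

```python
import numpy as np

def modulated_lr(base_lr, surprise, amplitude, step):
    """Scale the base learning rate by a fixed, repeating surprise profile.

    surprise:  per-hexagram surprise scores, normalized to [0, 1].
    amplitude: modulation strength (the paper tests 0.15, 0.3, 0.5).
    The 64-step profile repeats for the whole training run.
    """
    s = surprise[step % len(surprise)]
    # Map surprise in [0, 1] to a multiplier in [1 - amplitude, 1 + amplitude].
    return base_lr * (1.0 + amplitude * (2.0 * s - 1.0))

# Illustrative stand-in profile (64 random values in [0, 1]).
rng = np.random.default_rng(0)
surprise = rng.random(64)

lrs = [modulated_lr(1e-3, surprise, amplitude=0.3, step=t) for t in range(200)]
```

A surprise score of exactly 0.5 leaves the base learning rate unchanged; scores near 0 or 1 push it to the edges of the modulation band.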

Curriculum Ordering

As a data-ordering strategy, King Wen is the worst non-sequential ordering on CUDA and within noise on MLX. Random shuffle beats everything.
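
A sketch of the curriculum comparison, assuming training batches are grouped into 64 buckets that are then visited in King Wen order. The bucket assignment and the identity placeholder for the hexagram order are illustrative, not the paper's implementation.

```python
import random

KING_WEN = list(range(64))  # placeholder: substitute the actual King Wen hexagram order

def order_batches(batches, strategy, rng=None):
    """Return training batches under a given ordering strategy.

    'sequential' - original dataset order
    'random'     - full shuffle (the baseline that wins in the experiments)
    'king_wen'   - batches grouped into 64 buckets, visited in King Wen order
    """
    if strategy == "sequential":
        return list(batches)
    if strategy == "random":
        shuffled = list(batches)
        (rng or random).shuffle(shuffled)
        return shuffled
    if strategy == "king_wen":
        buckets = [[] for _ in range(64)]
        for i, b in enumerate(batches):
            buckets[i % 64].append(b)  # illustrative round-robin bucket assignment
        return [b for idx in KING_WEN for b in buckets[idx]]
    raise ValueError(f"unknown strategy: {strategy}")
```

All three strategies are permutations of the same batches, so any performance difference comes purely from ordering.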

Statistical Properties

The sequence genuinely exhibits anti-habituation structure: high transition distance, negative autocorrelation, and yang balance, confirmed against 100,000 random permutations.
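
The Monte Carlo check can be sketched as a permutation test. Here the statistic is mean Hamming distance between consecutive hexagrams' 6-bit line codes; the encoding and the identity placeholder for the ordering are stand-ins, not the real King Wen data.

```python
import numpy as np

def mean_transition_distance(order, codes):
    """Mean Hamming distance between consecutive hexagrams' 6-bit line codes."""
    a = codes[order[:-1]]
    b = codes[order[1:]]
    return float(np.mean(np.sum(a != b, axis=1)))

def permutation_pvalue(order, codes, n_perm=10_000, seed=0):
    """Fraction of random orderings whose transition distance >= the observed one."""
    rng = np.random.default_rng(seed)
    observed = mean_transition_distance(order, codes)
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(len(order))
        if mean_transition_distance(perm, codes) >= observed:
            count += 1
    # Add-one smoothing so the p-value is never exactly zero.
    return (count + 1) / (n_perm + 1)

# Stand-in: the 64 hexagrams encoded as all 6-bit binary codes (yin/yang lines).
codes = np.array([[int(b) for b in f"{h:06b}"] for h in range(64)])
order = np.arange(64)  # placeholder for the true King Wen ordering
p = permutation_pvalue(order, codes, n_perm=2000)
```

A small p-value means the ordering's transition distance is unusually high relative to random permutations, which is the anti-habituation claim being tested.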

Why It Doesn't Work

The sequence's high variance—the property that makes it statistically distinctive—destabilizes gradient-based optimization. Negative autocorrelation disrupts optimizer momentum. Anti-habituation is premature for models still in early learning. A fixed 3,000-year-old sequence cannot adapt to the learner's state.
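
The momentum point can be made concrete with a toy heavy-ball simulation (illustrative parameters, not the paper's setup): a lag-1 anticorrelated learning-rate multiplier leaves the momentum buffer permanently oscillating, while a constant schedule settles to a steady value.

```python
import numpy as np

def momentum_steps(lr_scale, beta=0.9, grad=1.0):
    """Heavy-ball momentum buffer under a per-step learning-rate multiplier.

    Tracks v_t = beta * v_{t-1} + lr_scale[t] * grad, i.e. the effective
    update size applied at each step for a constant gradient.
    """
    v, out = 0.0, []
    for s in lr_scale:
        v = beta * v + s * grad
        out.append(v)
    return np.array(out)

steady = momentum_steps(np.ones(200))                           # constant schedule
jittery = momentum_steps(1.0 + 0.5 * (-1.0) ** np.arange(200))  # lag-1 anticorrelated

# Both schedules have the same mean multiplier (1.0), but the anticorrelated
# one keeps the momentum buffer oscillating instead of letting it converge.
```

With beta = 0.9 the constant schedule converges to 1 / (1 - 0.9) = 10, while the anticorrelated schedule bounces between two fixed points around that value, which is the destabilization mechanism described above.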

Comparison

King Wen vs Machine Learning

In the simulation framework, the two optimization approaches differ across six dimensions. The King Wen method and modern ML represent fundamentally different theories of strategic decision-making.

| Dimension | King Wen Approach | ML Approach |
| --- | --- | --- |
| Decision basis | Cosmological pattern recognition (3,000-year-old sequence) | Statistical optimization on training data |
| Adaptability | Fixed sequence, context-dependent interpretation | Dynamic retraining on new data |
| Historical grounding | Encodes millennia of observed strategic patterns | No historical priors |
| Computational cost | Near-zero (lookup in a 64-element sequence) | High (training, inference, retraining) |
| Interpretability | Human-readable hexagram judgments and line texts | Black-box neural network weights |
| Strategic philosophy | Holistic pattern matching (yin-yang balance) | Reward maximization |

Why Han?

The ultimate underdog test case

Historical Constraints

  • Smallest territory of the seven Warring States
  • Poor resources and limited strategic depth
  • Strategically boxed in by Qin, Wei, and Chu
  • First to fall to Qin (230 BC)

AI Opportunity

Early three-state trials eliminated Han in 93% of games within five rounds, confirming the need for full seven-state geopolitical complexity. If the King Wen method helps Han survive in that richer environment, it would suggest that ancient algorithms may have untapped potential in modern strategy optimization.

Potential Strategies

  • Overcome extreme geopolitical constraints
  • Form strategic alliances
  • Engineer asymmetric strategies
  • Leverage espionage and diplomacy

Scaling the Experiment

Alternative Test States

For testing a new learning algorithm, you want a historically disadvantaged but initially viable state. These candidates could serve as interesting underdogs:

Wei

Once Mighty, Then Declined

Challenge: Reverse-engineer successful Qin deterrence

Opportunity: Test long-term planning and strategic correction

Yan

Isolated and Slow to Act

Challenge: Build stable power base in the north

Opportunity: Use unconventional warfare and early alliances

Zhao

Brave but Overwhelmed

Challenge: Balance tactical vs. strategic skills

Opportunity: Leverage military talent with better strategy

Qi

Economic Powerhouse

Challenge: Translate wealth into lasting dominance

Opportunity: Leverage Jixia Academy and salt monopoly

Chu

Complex Geopolitics

Challenge: Manage vast territory effectively

Opportunity: Test complex alliance dynamics

Qin

The Eventual Conqueror

Challenge: Test restraint, not power

Opportunity: Avoid over-aggression and maintain stability

Recommendation: Use Han as the Test Bed

Han is historically the most constrained: minimal land, minimal power, and the first to fall. If your AI can lead Han to survive or even dominate, you have a powerful system.

Scaling Strategy: Start with Han (baseline AI), then scale to Wei/Yan (advanced challenge), Zhao/Chu (complex geopolitics), and Qin (test restraint, not power).