Research & Methodology

Can Ancient Algorithms
Outperform Modern ML?

Testing the King Wen sequence-based optimization method—derived from the I Ching—against contemporary machine learning in a complex multi-agent strategy simulation.

Research Materials

PDF

Research Paper

Complete hypothesis and methodology for testing ancient optimization against modern machine learning approaches.

cs.AI

arXiv Submission

Seeking endorsement for cs.AI. If you have arXiv endorsement credentials and an interest in ancient algorithms and modern AI, your support is appreciated. I'm also open to collaboration or feedback to strengthen the work and improve its chances of arXiv acceptance.

Endorsement Form

Endorsement Code

PY3OJU

Methodology

Experimental Setup

Control Game

Standard ML Optimization

All seven historical states use contemporary machine learning optimization techniques. This establishes the baseline for performance comparison.

Test Game

King Wen Sequence

Identical setup, except that Han (韓) uses the King Wen sequence-based optimization method derived from I Ching principles.
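The page does not spell out the optimization procedure itself; as one illustrative reading only, the King Wen sequence can be treated as a fixed visiting order over 64 discretized candidate strategies (by convention, hexagrams are numbered in King Wen order, so the interesting content is which strategy occupies each position). A minimal sketch under that assumption, where `evaluate_strategy` and `king_wen_search` are hypothetical names, not the paper's API:

```python
def king_wen_search(evaluate_strategy, sequence):
    """Greedy search that visits candidate strategies in a fixed
    King Wen order instead of a random or gradient-driven order.

    evaluate_strategy: hypothetical scoring function, higher is better.
    sequence: an ordering of the 64 discretized strategy indices.
    """
    best_idx, best_score = None, float("-inf")
    for idx in sequence:
        score = evaluate_strategy(idx)
        if score > best_score:
            best_idx, best_score = idx, score
    return best_idx, best_score

# Toy usage: 64 candidate strategies, the one at index 5 scores highest.
best_idx, best_score = king_wen_search(lambda i: -abs(i - 5), range(64))
```

The only claim encoded here is the structural one made above: the search order comes from a fixed ancient sequence rather than being learned.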

If Han performs better

The hypothesis is supported: ancient algorithms may have untapped potential for modern optimization problems.

If not

The hypothesis is falsified: at least in this simulation, modern machine learning methods remain superior.
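The support/falsification criterion above can be made concrete as a win-rate comparison between the two games. A minimal sketch, assuming each game is rerun many times and a hypothetical counter of Han's survivals is available; a two-proportion z-test via the normal approximation keeps it dependency-free:

```python
import math

def compare_han_outcomes(control_wins, test_wins, n_runs):
    """Two-proportion z-test (normal approximation) on Han's survival
    rate in the control game (all-ML) vs. the test game (King Wen Han).

    Returns (difference in survival rates, z statistic); a clearly
    positive z favors the King Wen hypothesis.
    """
    p_control = control_wins / n_runs
    p_test = test_wins / n_runs
    pooled = (control_wins + test_wins) / (2 * n_runs)
    se = math.sqrt(2 * pooled * (1 - pooled) / n_runs)
    z = (p_test - p_control) / se if se > 0 else 0.0
    return p_test - p_control, z

# Hypothetical numbers: out of 100 runs per condition,
# Han survives 12 control runs and 24 test runs.
diff, z = compare_han_outcomes(12, 24, 100)
```

The counts here are made-up placeholders; the point is that "performs better" should be a statistical statement over repeated runs, not a single game.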

Why Han?

The ultimate underdog test case

Historical Constraints

  • Smallest territory of the seven Warring States
  • Poor in resources and lacking strategic depth
  • Strategically boxed in by Qin, Wei, and Chu
  • First to fall to Qin (230 BCE)

AI Opportunity

If the King Wen method helps Han rise, it suggests ancient algorithms may have untapped potential in modern strategy optimization.

Potential Strategies

  • Overcome extreme geopolitical constraints
  • Form strategic alliances
  • Engineer asymmetric strategies
  • Leverage espionage and diplomacy

Scaling the Experiment

Alternative Test States

To test a new learning algorithm, you want a state that is historically disadvantaged yet initially viable. These candidates would make interesting underdogs:

Wei

Once Mighty, Then Declined

Challenge: Reverse-engineer successful Qin deterrence

Opportunity: Test long-term planning and strategic correction

Yan

Isolated and Slow to Act

Challenge: Build stable power base in the north

Opportunity: Use unconventional warfare and early alliances

Zhao

Brave but Overwhelmed

Challenge: Balance tactical vs. strategic skills

Opportunity: Leverage military talent with better strategy

Song

Dark Horse Option

Challenge: Lead minor state to major power

Opportunity: Use ideology-based diplomacy

Chu

Complex Geopolitics

Challenge: Manage vast territory effectively

Opportunity: Test complex alliance dynamics

Qin

Test Restraint

Challenge: Test restraint, not power

Opportunity: Avoid over-aggression and maintain stability

Recommendation: Use Han as the Test Bed

Han is historically the most constrained state: minimal land, minimal power, and the first to fall. If your AI can lead Han to survive or even dominate, you have a powerful system.

Scaling Strategy: Start with Han (baseline AI), then scale to Wei/Yan (advanced challenge), Zhao/Chu (complex geopolitics), and Qin (test restraint, not power).
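The scaling strategy above is a simple staged curriculum, and can be sketched as a driver loop. `run_experiment` is a hypothetical callback that reruns the control/test comparison for one state and reports whether the King Wen agent beat the ML baseline:

```python
# Staged curriculum from the scaling strategy: advance to harder
# stages only while the method keeps beating the baseline.
CURRICULUM = [
    ["Han"],          # baseline test bed
    ["Wei", "Yan"],   # advanced challenge
    ["Zhao", "Chu"],  # complex geopolitics
    ["Qin"],          # test restraint, not power
]

def run_curriculum(run_experiment):
    """run_experiment(state) -> bool (hypothetical): True if the
    King Wen agent outperformed the ML baseline for that state."""
    results = {}
    for stage in CURRICULUM:
        for state in stage:
            results[state] = run_experiment(state)
        if not all(results[s] for s in stage):
            break  # stop scaling once the method fails a stage
    return results
```

This just encodes the ordering stated in the text; the stopping rule (halt after a failed stage) is one reasonable choice, not something the page specifies.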