
Can Ancient Algorithms
Outperform Modern ML?
Testing the King Wen sequence-based optimization method—derived from the I Ching—against contemporary machine learning in a complex multi-agent strategy simulation.
Research Materials
Research Paper
Complete hypothesis and methodology for testing ancient optimization against modern machine learning approaches.
arXiv Submission
Seeking endorsement for cs.AI. If you have credentials and an interest in ancient algorithms and modern AI, your support is appreciated. I'm open to collaboration or feedback to strengthen the work and improve its chances of arXiv acceptance.
Endorsement Form
Endorsement Code: PY3OJU
Methodology
Experimental Setup
Standard ML Optimization
All seven historical states use contemporary machine learning optimization techniques. This establishes the baseline for performance comparison.
King Wen Sequence
Same setup, but Han (韓) uses the King Wen sequence-based optimization method—derived from I Ching principles.
If Han outperforms the baseline states, the hypothesis is supported: ancient algorithms may have untapped potential for modern optimization problems.
If Han shows no advantage, the hypothesis is falsified: modern machine learning methods remain superior for this class of problems.
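The two conditions above can be sketched as a simple A/B harness. Everything below is illustrative, not the paper's implementation: `evaluate` stands in for one run of the multi-agent simulation, the baseline condition is approximated by random search, and `KING_WEN_ORDER` is a placeholder permutation where a faithful version would use the actual hexagram ordering from the I Ching.

```python
import random

STATES = ["Qin", "Chu", "Qi", "Yan", "Zhao", "Wei", "Han"]
N_STRATEGIES = 64  # one candidate strategy per hexagram (assumption)

# Hypothetical stand-in: the King Wen sequence as a fixed permutation
# of strategy indices. A real implementation would map each of the 64
# hexagrams, in King Wen order, to a concrete strategy.
KING_WEN_ORDER = list(range(N_STRATEGIES))  # placeholder permutation

def evaluate(strategy, rng):
    """Stand-in for one simulation episode: returns Han's survival
    score for the chosen strategy (random here, for illustration)."""
    return rng.random()

def baseline_search(budget, rng):
    # Baseline condition: random search over candidate strategies,
    # a common ML-style optimizer when gradients are unavailable.
    return max(evaluate(rng.randrange(N_STRATEGIES), rng)
               for _ in range(budget))

def king_wen_search(budget, rng):
    # Test condition: deterministic exploration in King Wen order
    # instead of random sampling.
    return max(evaluate(KING_WEN_ORDER[i % N_STRATEGIES], rng)
               for i in range(budget))

rng = random.Random(0)
trials = 200
baseline = sum(baseline_search(32, rng) for _ in range(trials)) / trials
king_wen = sum(king_wen_search(32, rng) for _ in range(trials)) / trials
print(f"baseline mean best score: {baseline:.3f}")
print(f"king wen mean best score: {king_wen:.3f}")
```

With a random `evaluate`, both conditions score the same in expectation; the interesting comparison only appears once the simulator rewards real strategic structure.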
Why Han?
The ultimate underdog test case
Historical Constraints
- Smallest territory of the seven Warring States
- Poor in resources and lacking strategic depth
- Strategically boxed in by Qin, Wei, and Chu
- First to fall to Qin (230 BC)
AI Opportunity
If the King Wen method helps Han rise, it suggests ancient algorithms may have untapped potential in modern strategy optimization.
Potential Strategies
- Overcome extreme geopolitical constraints
- Form strategic alliances
- Engineer asymmetric strategies
- Leverage espionage and diplomacy
Scaling the Experiment
Alternative Test States
For testing a new learning algorithm, you want a historically disadvantaged but initially viable state. These candidates could serve as interesting underdogs:
Wei
Once Mighty, Then Declined
Challenge: Reverse-engineer successful Qin deterrence
Opportunity: Test long-term planning and strategic correction
Yan
Isolated and Slow to Act
Challenge: Build stable power base in the north
Opportunity: Use unconventional warfare and early alliances
Zhao
Brave but Overwhelmed
Challenge: Balance tactical vs. strategic skills
Opportunity: Leverage military talent with better strategy
Qi
Economic Powerhouse
Challenge: Translate wealth into lasting dominance
Opportunity: Leverage Jixia Academy and salt monopoly
Chu
Complex Geopolitics
Challenge: Manage vast territory effectively
Opportunity: Test complex alliance dynamics
Qin
Test Restraint
Challenge: Exercise restraint rather than raw power
Opportunity: Avoid over-aggression and maintain stability
Recommendation: Use Han as the Test Bed
Han is historically the most constrained of the seven states: minimal land, minimal power, and the first to fall. If your AI can lead Han to survive or even dominate, you have a powerful system.
Scaling Strategy: Start with Han (baseline AI), then scale to Wei/Yan (advanced challenge), Zhao/Chu (complex geopolitics), and Qin (test restraint, not power).
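The scaling plan above can be written down as a staged curriculum. The stage goals and the promotion rule below are illustrative assumptions, not part of the paper's methodology.

```python
# Hypothetical curriculum schedule for scaling the experiment,
# following the order suggested above: Han first, Qin last.
CURRICULUM = [
    {"stage": 1, "states": ["Han"],         "goal": "survive past 230 BC"},
    {"stage": 2, "states": ["Wei", "Yan"],  "goal": "recover from decline or isolation"},
    {"stage": 3, "states": ["Zhao", "Chu"], "goal": "manage complex geopolitics"},
    {"stage": 4, "states": ["Qin"],         "goal": "maintain stability without over-aggression"},
]

def next_stage(current, passed):
    """Advance through the curriculum only when the current stage's
    goal is met; otherwise repeat the same stage."""
    if passed and current < len(CURRICULUM):
        return current + 1
    return current
```

The promotion rule keeps the agent on Han until it demonstrates survival, which matches the recommendation to treat Han as the baseline test bed.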