@craig showed how he built a browser-based tool for debugging unit micro that visualizes unit movement, attack orders, and kiting paths from structured log data, all without writing a single line of JavaScript. The tool downloads AI Arena micro ladder results automatically, displays per-round win/loss breakdowns, and lets him step through individual unit decisions to pinpoint exactly where his micro logic breaks down. He also discussed migrating micro skills from the simplified ladder back into his main Sharky-based bot, the challenges of terrain and concaves, and his early experiments with closed-loop, AI-driven bot improvement using Cursor and Claude Code.
Key Takeaways:
- A browser-based replay visualizer can show attack orders, kiting paths, and unit priorities frame by frame, giving far more insight than the raw win/loss numbers from the micro ladder
- Per-round breakdowns (e.g. 4-6 vs 2-8) reveal how close your losses actually are, helping you prioritize which matchups deserve tuning effort
- Micro ladder skills transfer to melee, but flat-map kiting doesn't account for chokes, concave width, or terrain, so expect rework when migrating
- The entire tool was built with AI coding assistants (Cursor/Claude) and zero hand-written JavaScript: structured prompting can replace language-specific expertise
- Structured log emission (not replay parsing) gives you bot-internal reasoning, like target-prioritization scores and combat-simulator predictions, that replays simply can't provide
- Closing the loop: Craig is working toward ticket-driven AI agents that commit changes, run test matches, and evaluate results autonomously, but token costs and log file sizes remain the main bottlenecks
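The per-round breakdown idea above (e.g. 4-6 vs 2-8) takes only a few lines to compute from downloaded ladder results. This is a minimal sketch: the record shape (`opponent` and `won` keys) is an assumption for illustration, not the actual AI Arena payload format.

```python
from collections import defaultdict

def per_round_breakdown(results):
    """Aggregate raw match results into per-opponent win/loss tallies.

    `results` is a list of dicts with 'opponent' and 'won' keys
    (an assumed shape, not the real AI Arena response format).
    Returns e.g. {'KitingBot': '4-6', 'MarineBot': '2-8'}.
    """
    tally = defaultdict(lambda: [0, 0])  # opponent -> [wins, losses]
    for r in results:
        tally[r["opponent"]][0 if r["won"] else 1] += 1
    return {opp: f"{w}-{l}" for opp, (w, l) in tally.items()}
```

Seeing "4-6" rather than just "lost the matchup" is what makes tuning effort easy to prioritize: a 4-6 opponent is one small fix away, a 2-8 opponent may need a rethink.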
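The flat-map kiting caveat above can be made concrete. A typical flat-map rule is: attack when the weapon is off cooldown, otherwise step directly away from the nearest threat. The sketch below (hypothetical names throughout, not Sharky's API) shows why this needs rework on melee maps: the retreat point is pure open-space geometry and ignores chokes, cliffs, and concave width entirely.

```python
import math

def kite_step(unit_pos, enemy_pos, weapon_ready, step=1.0):
    """Flat-map kiting: attack if the weapon is ready, otherwise
    move straight away from the enemy.

    Returns ('attack', enemy_pos) or ('move', retreat_point).
    The retreat point is computed as if in open space, which is
    exactly the assumption that breaks near terrain.
    """
    if weapon_ready:
        return ("attack", enemy_pos)
    dx = unit_pos[0] - enemy_pos[0]
    dy = unit_pos[1] - enemy_pos[1]
    dist = math.hypot(dx, dy) or 1.0  # guard against stacked units
    retreat = (unit_pos[0] + dx / dist * step,
               unit_pos[1] + dy / dist * step)
    return ("move", retreat)
```

On a real map the retreat point would additionally need pathability checks and awareness of how wide a concave the surrounding terrain allows.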
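The structured-log approach above (emitting the bot's own reasoning rather than parsing replays) might look something like the following JSON Lines logger. The schema, field names, and class are assumptions for illustration; the talk does not specify Craig's actual log format.

```python
import json

class MicroLogger:
    """Append-only JSON Lines logger for per-frame combat decisions.

    Illustrative schema only, not the actual format from the talk.
    """

    def __init__(self, path):
        self.f = open(path, "a", encoding="utf-8")

    def log_decision(self, frame, unit_tag, candidates, chosen_tag):
        # candidates: (enemy_tag, priority_score) pairs from the bot's
        # target-prioritization logic -- internal state that a replay
        # file alone could never recover.
        record = {
            "frame": frame,
            "unit": unit_tag,
            "candidates": [{"enemy": t, "score": s} for t, s in candidates],
            "chosen": chosen_tag,
        }
        self.f.write(json.dumps(record) + "\n")

    def close(self):
        self.f.close()
```

One JSON object per line keeps the file streamable, so a browser-based visualizer can step through decisions frame by frame without loading the whole log, though at ladder scale file size becomes the bottleneck the recap mentions.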