About this project

Adam Jesionkiewicz

AI Researcher & Creator

I’m focused on independent AI research, experimentation, and building ambitious ideas at the edge of science, intelligence, and creativity. Space, technology, and discovery are what drive me forward. Autochess is one of those ideas.

It started as a personal experiment: can I build a chess engine from scratch that learns to play purely from neural network inference? No Stockfish, no handcrafted evaluation, no opening books — just a neural network trained on high-Elo games, endgame tablebases, and self-play reinforcement learning.

The whole thing is an exercise in curiosity. I wanted to understand how AlphaZero-style systems work — not by reading papers, but by building one myself, debugging every gradient, and watching it slowly go from random moves to something that actually plays chess. It’s equal parts frustrating and magical.

Goals

Traditional engines are extraordinarily strong, but they don’t play chess the way humans do. There’s no sense of strategy, no subtlety, no personality — just exhaustive search over billions of positions. Model V4 already changes this: it develops plans, sets traps, and shifts between tactical and positional play depending on the position. Model Ω will take this much further — an AI that adapts its style to the opponent, plays with mood and momentum, adjusts its aggression to the game state, and brings genuine variety and joy to every match. Not an engine. A playing agent.

How it works

Every model is trained from scratch — no Stockfish, no opening books. The network takes an AlphaZero-style 19-plane 8×8 board encoding and outputs a policy over 4672 possible moves plus a value estimate. Training follows a three-phase pipeline: supervised learning on high-Elo human games, endgame training on Syzygy tablebases, and self-play reinforcement learning.
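To make the input format concrete, here is a minimal sketch of a 19-plane encoding built from a FEN string. The exact plane layout is my assumption (12 piece planes, side to move, four castling planes, en passant, and a fifty-move plane), not the project's actual ordering:

```python
import numpy as np

PIECES = "PNBRQKpnbrqk"  # plane index = position of the FEN piece letter

def encode_fen(fen: str) -> np.ndarray:
    """Encode a FEN position as 19 8x8 planes (assumed layout).

    Planes 0-11 : one binary plane per (colour, piece type)
    Plane 12    : side to move (all ones if white)
    Planes 13-16: castling rights K, Q, k, q
    Plane 17    : en-passant target square
    Plane 18    : reserved for the fifty-move counter (left zero here)
    """
    placement, turn, castling, ep = fen.split()[:4]
    x = np.zeros((19, 8, 8), dtype=np.float32)
    for rank, row in enumerate(placement.split("/")):
        file = 0
        for ch in row:
            if ch.isdigit():
                file += int(ch)          # run of empty squares
            else:
                x[PIECES.index(ch), rank, file] = 1.0
                file += 1
    if turn == "w":
        x[12] = 1.0
    for i, flag in enumerate("KQkq"):
        if flag in castling:
            x[13 + i] = 1.0
    if ep != "-":
        x[17, 8 - int(ep[1]), ord(ep[0]) - ord("a")] = 1.0
    return x
```

The policy head then maps this tensor to logits over the 4672-move action space, and the value head to a single scalar.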

Since V3, the model uses learned thought tokens — trainable embeddings processed by transformer layers before the final move prediction. Think of it as an internal “pause to think” that lets the model reconsider its instinctive choices. V4 takes this further with a fully transformer-based architecture, adaptive attention bias, and nearly 5× more parameters.
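The thought-token idea could look roughly like this PyTorch sketch: learned embeddings cross-attend to the board features, then a small transformer refines them before the heads read them out. Module name, dimensions, and layer counts are my assumptions, not the project's code:

```python
import torch
import torch.nn as nn

class ThoughtTokens(nn.Module):
    """Hypothetical sketch of learned 'thought tokens':
    8 trainable embeddings query the board via cross-attention,
    then a small transformer stack refines them."""

    def __init__(self, d_model=256, n_tokens=8, n_heads=4, n_layers=2):
        super().__init__()
        self.tokens = nn.Parameter(torch.randn(n_tokens, d_model) * 0.02)
        self.cross = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True)
        self.refine = nn.TransformerEncoder(layer, n_layers)

    def forward(self, board_feats):                 # (B, 64, d_model)
        B = board_feats.size(0)
        q = self.tokens.unsqueeze(0).expand(B, -1, -1)
        # thought tokens "look at" the board features
        t, _ = self.cross(q, board_feats, board_feats)
        return self.refine(t)                       # (B, n_tokens, d_model)
```

The "pause to think" comes from the extra transformer passes over the token states before any move logits are produced.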

Model generations

V4 is a ground-up redesign. The CNN-only backbone of earlier models gives way to eight residual CNN layers feeding a 20-layer deep transformer with adaptive attention bias, where each layer dynamically adjusts its attention patterns based on the position. Nearly 5× larger than V3, with deeper reasoning through expanded thought-token processing.
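One plausible reading of "adaptive attention bias" is a small network that, per layer, produces a position-dependent additive bias on the attention logits. This is a sketch under that assumption, not the project's actual mechanism:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveBiasAttention(nn.Module):
    """Hypothetical: a pooled summary of the position drives an MLP
    that emits one additive (64x64) attention-logit bias per head."""

    def __init__(self, d_model=256, n_heads=4, n_sq=64):
        super().__init__()
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.n_sq = n_sq
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)
        self.bias_mlp = nn.Sequential(
            nn.Linear(d_model, 128), nn.GELU(),
            nn.Linear(128, n_heads * n_sq * n_sq))

    def forward(self, x):                           # (B, 64, d_model)
        B, S, _ = x.shape
        qkv = self.qkv(x).view(B, S, 3, self.n_heads, self.d_head)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)        # each (B, H, S, d_head)
        # position-dependent bias, one (S, S) logit map per head
        bias = self.bias_mlp(x.mean(dim=1)).view(B, self.n_heads, S, S)
        y = F.scaled_dot_product_attention(q, k, v, attn_mask=bias)
        return self.out(y.transpose(1, 2).reshape(B, S, -1))
```

A float `attn_mask` in `scaled_dot_product_attention` is added to the raw attention scores, which is exactly the "bias the attention pattern per position" behaviour described above.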

Architecture: Deep transformer + adaptive attention bias
Parameters: 45M (≈5× V3)
Layers: 8 CNN residual + 20 transformer
Thought tokens: 8 latent tokens with cross-attention
Training data: 2400+ Elo games + Syzygy tablebases + 2M puzzles
Search: 2-ply negamax with quiescence
Inference: CPU, pure Python server
Elo: ~2700 (60% score vs. Stockfish)
Board encoding: 19-plane 8×8 (AlphaZero-style)
Action space: 4672 moves
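The "2-ply negamax with quiescence" line can be sketched generically: a shallow negamax whose leaves are extended along capture sequences, using the network's value as the static evaluation. The `evaluate`, `legal_moves`, `captures`, and `apply` callables stand in for the engine's real position interface (names assumed, not from the project):

```python
INF = float("inf")

def quiescence(pos, alpha, beta, evaluate, captures, apply):
    """Extend the search along captures until the position is quiet."""
    stand_pat = evaluate(pos)        # network value as static evaluation
    if stand_pat >= beta:
        return beta
    alpha = max(alpha, stand_pat)
    for move in captures(pos):
        score = -quiescence(apply(pos, move), -beta, -alpha,
                            evaluate, captures, apply)
        if score >= beta:
            return beta
        alpha = max(alpha, score)
    return alpha

def negamax(pos, depth, alpha, beta, evaluate, legal_moves, captures, apply):
    """Fixed-depth negamax with alpha-beta pruning; depth=2 gives 2-ply."""
    if depth == 0:
        return quiescence(pos, alpha, beta, evaluate, captures, apply)
    best = -INF
    for move in legal_moves(pos):
        score = -negamax(apply(pos, move), depth - 1, -beta, -alpha,
                         evaluate, legal_moves, captures, apply)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:            # opponent would avoid this line
            break
    return best
```

At 2 ply this is a thin tactical safety net around the policy head rather than a deep search, which is consistent with CPU-only, pure-Python inference.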

AI-accelerated research

This project is also an experiment in a broader question: what becomes possible when AI agents participate in the research process itself?

Every model architecture was designed, debugged, and iterated with AI-assisted code generation. Dozens of scientific papers were analyzed, cross-referenced, and synthesized in days instead of months or years. Training pipelines that would have taken months to build by hand were prototyped in days. The result is a pace of experimentation that would have been economically, logistically, and intellectually impossible just six months ago.

Autochess is a proof of concept: one person, working with AI agents, can conduct serious research — training 45M-parameter models, building interactive learning platforms, and exploring novel architectures — at a speed and depth that previously required a funded lab.

Looking for partners

The next phase of this research — full MCTS integration, graph search distillation, and the Model Ω experiments — requires serious compute. Training runs that take days on a single RTX 4090 need to scale to multi-node distributed training for deeper models and longer self-play.

I’m looking for partners who can provide or sponsor access to high-performance GPU clusters.

If you’re interested in supporting independent AI chess research, or want to collaborate on the Model Ω direction, reach out: adam@jesion.pl

Model version V4 · App version v1.45