lichess.org

AI Slop is Invading the Chess World

AI as we intend it nowadays will never be "intelligent"; it's just a bunch of written code thrown together that scrapes words here and there. The problem lies in the definition of "intelligent" (lit. from Lat. intelligentia = to understand) and in (most) humans who watch too many sci-fi films and read too many sci-fi books.

Humanity itself hasn't evolved much: we still kill each other for resources, money, etc., and we still have the same existential questions people had 2000 years ago. E.g. electric cars already existed in the 19th century, the ancient Egyptians had dental implants 3000 years ago, the Chinese had electricity 5000 years ago. It's the same stuff over and over again; only the appearance changes, but the essence of human beings hasn't changed at all.

Nowadays we're just better at wrapping basic stuff in nicer gift boxes and with words that sound good at the moment but are, in fact, empty packages without content.

Yeah, great article. I hope it stands out among all the pitches from sneaky salesmen trying to sell things that don't work. Like, I have a hard time believing they themselves believe that these AI comments on every move are particularly insightful...

There is this thing about automatic game analysis that nobody seems to talk about, but which seems really important to me: a chess game is not just the sum of its individual moves. Like, an awesome Q-sacrifice leading to a forced mate is only brilliant if you actually play the mate. Otherwise it's just blundering a Q. Ideas stretch beyond individual moves. Everything is intertwined.

So every analysis based on move-by-move evaluation changes just feels fundamentally wrong to me. The only thing it does is filter out one-move blunders, which is sometimes somewhat helpful, and these AI thingies can kinda describe those, but it doesn't reach any depth at all...
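For what it's worth, that kind of move-by-move filtering is trivial to sketch. Here is a minimal, illustrative Python version; the evaluation numbers and the 200-centipawn threshold are invented, not taken from any real engine run:

```python
# Minimal sketch of move-by-move blunder filtering: flag any move
# where the evaluation (centipawns, from White's perspective) swings
# against the mover past a fixed threshold. Illustrative numbers only.

def flag_blunders(evals, threshold=200):
    """evals[i] is the engine eval after move i; evals[0] is the start."""
    blunders = []
    for i in range(1, len(evals)):
        white_moved = i % 2 == 1          # odd-numbered moves are White's
        delta = evals[i] - evals[i - 1]
        loss = -delta if white_moved else delta
        if loss >= threshold:
            blunders.append(i)
    return blunders

# Black's first move here costs 285 centipawns and gets flagged.
print(flag_blunders([20, 15, 300, 290, 280]))  # [2]
```

This is the whole depth of the approach: it sees numbers move and nothing else. The ideas connecting one move to the next are invisible to it.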

Thank you for the interesting and thought-provoking article. Here are my two cents.

  • A lot of what is currently branded as AI in chess is really just large language models. General artificial intelligence does not exist yet, and realistically we are still decades away from it. Commercial LLMs do have some limited reasoning capability, but it is fragile and inconsistent. Left on their own, they operate at a very low level when it comes to actual chess understanding, and as your example shows, they can confidently reach the wrong conclusion or even contradict themselves within the same answer.

  • That said, LLMs can have real value when used properly. They should not be treated as the source of chess insight. Their strength is in turning structured information into readable explanations. If you feed them concrete inputs like engine evaluations, candidate moves, and factual positional data, and keep randomness close to zero, they can help present analysis in a more human-friendly way. Doing this well is not easy and requires a lot of work outside the model itself. Most AI coach products do not do this, which is why the output is sometimes wrong.

  • As for replacing human coaches, I do not see that happening any time soon. Coaching is about more than pointing out moves. It involves understanding a player’s habits, prioritizing what actually matters, and adapting explanations over time. That is far beyond what LLMs can do today. Where they can be genuinely useful is with beginners or in very structured areas like basic opening principles, or learning opening courses, where the task is repetitive, rule-based, and progress can be tracked mechanically.
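A purely illustrative sketch of that "feed the model facts, keep randomness low" approach: the helper below and its engine data are invented for this example, and the actual LLM call (with temperature at 0) would sit where the final comment is. A real pipeline would pull the facts from an engine such as Stockfish via UCI.

```python
# Hypothetical sketch: turn structured engine output into a constrained
# prompt, so the LLM only verbalizes given facts instead of inventing
# its own analysis. All data below is made up for illustration.

def build_coach_prompt(position_facts):
    """Render engine facts as a prompt that forbids invented analysis."""
    lines = [
        "Explain this position to a club player. Use ONLY the facts below;",
        "do not add your own evaluations or variations.",
        "",
        f"FEN: {position_facts['fen']}",
        f"Engine eval: {position_facts['eval_cp'] / 100:+.2f}",
        "Best line: " + " ".join(position_facts['best_line']),
        "Themes: " + ", ".join(position_facts['themes']),
    ]
    return "\n".join(lines)

facts = {
    "fen": "r1bqkbnr/pppp1ppp/2n5/4p3/4P3/5N2/PPPP1PPP/RNBQKB1R w KQkq - 2 3",
    "eval_cp": 35,
    "best_line": ["Bb5", "a6", "Ba4"],
    "themes": ["pin potential", "center tension"],
}
prompt = build_coach_prompt(facts)
# The prompt would then go to an LLM with temperature near 0, e.g.:
# response = client.chat.completions.create(model=..., temperature=0, ...)
print(prompt)
```

The point is that the chess knowledge lives entirely outside the model; the model is only a renderer, which is the narrow role where it is actually reliable.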

LLMs are useful tools, but they have been aggressively misbranded as intelligent systems. On their own, they produce low-quality output. To build something genuinely helpful, they need to be paired with real data, real analysis, and strong constraints. That is where most products in the chess AI space currently fall short.

My two cents.

One of the most jarring things about the advent of LLM AIs is that they're often wrong and people still treat them as expert authorities without a second thought.

It's not even that they're sometimes wrong; they're wrong painfully often.

@DavidMoscoe said in #15:

> is that they're often wrong

Code can never be "right" or "wrong"; it doesn't think.
Most people have no real understanding of how an LLM functions.

> and people still treat them as expert authorities without a second thought.

That just demonstrates the level of intelligence of such people; most of them have no basic understanding at all of how an LLM (not AI; real AI will take another couple of centuries, if ever) functions.
Be prepared for the next decades, lol

chess.com uses AI during analysis

People get at best what they pay for, sometimes less.

Never mind just the AI-assisted chess coaches; the actual chess content mill seems heavily AI-generated. For example, I gave ChatGPT the following prompt:

"Write me a very short article or note that could take the form of a Substack article/note or Lichess blog about the underrated idea of thinking about king safety as a way to actually keep your pieces active. Use a specific writing voice, like Yasser Seirawan-esque*."

ChatGPT replies:

King Safety Isn’t Cowardice — It’s an Engine

There’s a persistent myth among ambitious players that worrying about king safety is a sign of timidity. “I’ll castle later,” they say, “I’m developing aggressively.” Or worse: “If I attack first, my king problems won’t matter.” This is a charming belief, and it loses games with impressive regularity.

Here’s the quieter truth: good king safety is not a defensive posture. It’s a way of keeping your pieces active.

When your king is unsafe, your army becomes its own security detail. A rook that wants to swing across the third rank stays home to guard f2. A knight that belongs on d5 must loiter on f3, staring nervously at checks that might never come. The queen, that most ambitious of pieces, suddenly finds herself babysitting instead of creating trouble.

Castling — or otherwise securing the king — frees your pieces psychologically as much as tactically. Once the king is tucked away, your pieces stop asking, “What if?” and start asking, “Where do I belong?”

I used to tell students: your pieces take their cues from your king. If he looks comfortable, they fan out with confidence. If he looks exposed, they huddle, whispering, and accomplish very little.

Think of classic games where one side delays king safety “just a little too long.” The punishment often isn’t a direct mate. It’s worse: a slow strangulation. Every active idea is met with a tempo-gaining check. Every improvement is postponed because something must be covered first. The opponent doesn’t even have to attack well — the position attacks itself.

Strong players understand that king safety is a form of prophylaxis for activity. You castle not because you’re afraid, but because you want permission to play freely. You want rooks that can lift, bishops that can abandon diagonals, knights that can jump forward without checking over their shoulders.

So the next time you’re deciding between “one more developing move” and “just castling,” remember: castling isn’t the end of your ambitions. It’s the beginning of your pieces’ careers.

A safe king doesn’t make you passive. It makes you dangerous.

I didn't even read it, but I could easily publish it on Lichess and it might even be popular. Except that the more typical subject matter is even more generic: smeary pieces about chess psychology.

*(With apologies to Yasser!)

I use AI for lots of things and appreciate what it lets me do that I couldn't before, especially rapid code prototyping, but it obviously also has important limitations.

I'm gonna go try to play some games now and increase the pressure and focus on central control. Maybe I'll even master the tempo. Wish me luck.
