lichess.org

AI Slop is Invading the Chess World

I would say that for AI Slop to actually be beneficial, the following problems need to be addressed.

  1. Chess engines like Stockfish use algorithms that humans can't reproduce: blindly calculating millions of nodes isn't a practical thought process for a human brain.
  2. If you use Stockfish to give an evaluation, you're starting from a result and working backwards to explain the starting position. Unfortunately a human has to do the opposite and find the result from the starting position, so even cheating doesn't explain why the result holds. Players can already see in analysis when the engine says there's a better move, and can work through the variations to understand why, so unless AI commentary is close to GM level it's unlikely to be useful.
  3. AIs have a tendency to hallucinate. If people blindly believed AI, things like the Argentine disaster at the 1955 Gothenburg Interzonal tournament would be common.

It's probably possible to make a semi-viable AI trainer if you have an engine whose thought process works closer to a human brain's. However, hallucination is a big issue: if people memorise content that is 90% good and 10% made up, they can end up believing the made-up nonsense is true.


@ninguno said in #16:

Code can never be "right" or "wrong", it doesn't think.
Most people have no real understanding how LLM functions.

If I make a script that outputs "2" for every input and someone asks me what 1 + 3 equals, the output, "2," would be wrong. I understand that LLMs don't "think"; that's also completely, utterly irrelevant when the outputs they generate are often wrong.
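The analogy above can be sketched in a few lines of Python (the function name here is hypothetical, just to make the point concrete):

```python
def constant_script(_prompt):
    """A 'model' that answers "2" to every input. It never thinks,
    yet its outputs can still be judged right or wrong."""
    return "2"

# The output is wrong for this input, regardless of whether
# anything "thought" about the question.
answer = constant_script("What is 1 + 3?")
print(answer)        # "2"
print(answer == "4") # False: the output is simply incorrect
```

Whether the system thinks is beside the point; correctness is judged against the question, not the mechanism.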


Imagine using AI to make the thumbnail for a blog complaining about AI...


I once asked an AI to give me an 8-letter word, but it repeatedly gave me 7- and 6-letter words, and this was on its newest model. If AI can't get a simple thing like that right, imagine all the times it gets chess analysis wrong. I'm not saying AI is stupid or dumb, I'm just saying that AI is often wrong and that everyone has to be careful with it.
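One way to be careful, as suggested above, is to never take an LLM's claim about a hard constraint on trust when it can be checked in code. A minimal sketch (the validator function is hypothetical):

```python
def is_valid_answer(word, required_length=8):
    """Check an LLM's claimed 8-letter word instead of trusting it."""
    return word.isalpha() and len(word) == required_length

# Outputs like the ones described above would fail the check:
print(is_valid_answer("notebook"))  # True: genuinely 8 letters
print(is_valid_answer("printer"))   # False: only 7 letters
```

The same principle applies to chess analysis: anything verifiable (move legality, material counts, evaluations) should be verified programmatically rather than believed.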


@DavidMoscoe said in #62:

If I make a script that outputs "2" for every input and someone asks me what 1+3 is equal to, the output, "2," would be wrong

makes sense to write such a program

@DavidMoscoe said in #62:

it's also completely utterly irrelevant when the outputs they generate are often wrong

you forgot to add "for me", "IMO" or similar at the end or the beginning of the sentence.
Others have a different opinion than yours.

@Mshibani said in #64:

I once asked AI to give me an 8-letter word, but it repeatedly gave me 7 and 6-letter words, and the AI was in its newest model

lol. As of now, an LLM (not AI) has been appointed as a virtual minister in Albania. Good luck with that :)


@ninguno said in #65:

If I make a script that outputs "2" for every input and someone asks me what 1+3 is equal to, the output, "2," would be wrong

makes sense to write such a program

It does for the sake of an analogy to get through your exceptionally thick skull, but I underestimated what a massive dunce you are and/or how much AI has atrophied your brain

it's also completely utterly irrelevant when the outputs they generate are often wrong

you forgot to add "for me", "IMO" or similar at the end or the beginning of the sentence.
Others have a different opinion than yours.

It's not a matter of opinion: anyone with any degree of expertise can simply ask any LLM a question in their area of expertise and quickly spot things that are either slightly or seriously incorrect. Unfortunately it seems you have no areas of expertise, so you may not be able to verify this.


@TotalNoob69 said in #2:

Large Language Models encode, spoiler alert, language, not knowledge. The architecture of an LLM-enabled chess tutor has to have at least two components:

  • a chess analyser
  • an LLM that translates what the analyser extracted into English

There is nothing inherently bad in this design, with each component playing to its strengths and creating a useful tool that everybody loves.

The problem is that no one has made a good chess analyser :D It has nothing to do with AI, and LLMs cannot power the machine learning architecture such a module requires. As always, it's not AI slop, it's sloppy humans using AI wrong.

That being said, just like LLMs taught us a lot about how humans function and what language does for them, these AI assistants will show how chess coaches function and what they really do. Because I am pretty sure there are a lot of chess coaches out there spouting the same kind of authoritative nonsense to make a buck off people who don't understand chess, as it's not the chess that matters, but the coaching skill.

I tried building something like that: https://github.com/romit-basak/chess-explainer

It works with a small, local LLM and Stockfish. Stockfish is the main chess brain; the LLM doesn't try to do any analysis by itself, it just presents Stockfish's output in natural language. I am in no way claiming it is perfect: even with constraints, LLMs sometimes hallucinate (especially small models). But it is an attempt to create what the AI "marketers" claim as the benefits of LLMs for chess analysis.
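The two-component split described above can be sketched with stand-ins (all names here are hypothetical and not from the linked repository; a real version would query Stockfish over UCI and prompt an actual LLM):

```python
def toy_analyser(position):
    """Stand-in for the engine component: returns structured facts,
    not prose. A real analyser would get these from Stockfish."""
    return {
        "best_move": position["best_move"],
        "eval_cp": position["eval_cp"],
        "motif": position["motif"],
    }

def template_translator(facts):
    """Stand-in for the LLM component: it only verbalises the
    analyser's facts, so it cannot invent moves or evaluations."""
    side = "White" if facts["eval_cp"] >= 0 else "Black"
    return (f"The engine prefers {facts['best_move']} "
            f"({abs(facts['eval_cp']) / 100:.1f} pawns for {side}), "
            f"based on the motif: {facts['motif']}.")

position = {"best_move": "Nxe5", "eval_cp": 150, "motif": "undefended pawn"}
print(template_translator(toy_analyser(position)))
# The engine prefers Nxe5 (1.5 pawns for White), based on the motif: undefended pawn.
```

The design point is that the translator never sees the board, only the analyser's structured output, which is one way to constrain what the language side can hallucinate.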


In the Reddit board, the king is blocking its own rook.

HOW IS THAT POSSIBLE????


@RomitBasak said in #67:

The problem is that no one made a good chess analyser :D It has nothing to do with AI and LLMs cannot power the necessary machine learning architecture required for such a module. As always, it's not AI slop, it's sloppy humans using AI wrong.

I tried building something like that: https://github.com/romit-basak/chess-explainer

This basically works with a small, local LLM and Stockfish. Stockfish is the main chess brain, the LLM doesn't try to do any analysis by itself, just presents stockfish output in natural language. I am in no way claiming that it is perfect, even with constraints LLMs hallucinate sometimes(especially small models). But this is an attempt to create what the AI "marketers" claim as the benefits of LLMs for chess analysis.

SF only shows lines ordered by evaluation. It doesn't explain what the problems in the position are or what the plans are (it doesn't have any). That's the main alignment problem between chess engines and what people want from a chess tutor.


@TotalNoob69 I'd argue otherwise: my own app does explain the planning process, threats to king safety, and weaknesses such as doubled pawns, including whether you should be concerned about them or not. If you don't believe it, then by all means see it in action.
