And just to spam even more with my unsolicited opinions, slop is safe! Maybe it's correlation and not causation that so many brilliant people end up having mental disorders, but you can't argue that there is a higher incidence of mental disease in technical domains that require consistent reasoning and true intelligence. Chess being one of them.
What happens? Brains crack! They are not designed for that kind of work. You push them too hard into this unfamiliar territory of pure abstract reason and they malfunction. Slop on the other hand? It's "environmentally friendly". Uses almost no energy. Just recycle what someone else said. Even better if a machine says it, because you can either pretend machines are not supposed to get things wrong or you can blame them for your lack of effort. But hey, at least your brain remains intact!
LLMs are uniquely unsuited for chess analysis. At their core they're predictive models operating on probabilities of what the next word is. They don't actually understand anything; they're just making an educated guess based on similar exchanges. That does not mesh well with the endless nuances and variations (unpredictability) in chess. All the model has to work with is some chess notation and engine evaluations.
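To make that concrete, here's a toy sketch of what "predicting the next token" means. The vocabulary and probabilities below are invented purely for illustration; a real model computes them with a neural network over a huge vocabulary, but the sampling step is the same idea:

```python
import random

# Toy "model": a table mapping the previous token to a probability
# distribution over possible next tokens. The entries are made up;
# a real LLM computes these probabilities with a neural network.
NEXT_TOKEN_PROBS = {
    "Nf3": {"Nc6": 0.45, "d6": 0.25, "e6": 0.20, "a6": 0.10},
    "Nc6": {"Bb5": 0.50, "Bc4": 0.30, "d4": 0.20},
}

def sample_next(context: str) -> str:
    """Sample the next token from the model's distribution."""
    dist = NEXT_TOKEN_PROBS[context]
    return random.choices(list(dist), weights=list(dist.values()))[0]

# The "model" never evaluates the position; it only knows which token
# tended to follow which context in its training data.
print(sample_next("Nf3"))  # e.g. "Nc6"
```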
I would guess the notation is approximately useless in determining what's going on, as there's an endless number of move sequences in chess, and in every position the logic and evaluation could be different. There is nothing there that the LLM can predict. The best it could do is copy-paste human analysis. When you combine this with engine evaluations, the models seem to just take the top engine line and then make something based on the next few moves that lines up with the engine evaluation. But the engine's top line might have nothing to do with the actual idea behind a move, so they usually output complete nonsense. To reiterate: LLMs don't understand chess.
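That "take the top engine line and paraphrase it" pipeline is easy to picture. A minimal sketch, assuming the python-chess package and a Stockfish binary on the PATH (both my assumptions; I have no idea how any given site actually wires this up):

```python
# pip install chess -- also requires a UCI engine binary such as Stockfish.
import chess
import chess.engine

board = chess.Board()
board.push_san("e4")  # position after 1.e4, purely as an example

engine = chess.engine.SimpleEngine.popen_uci("stockfish")
info = engine.analyse(board, chess.engine.Limit(depth=18))
score = info["score"].white()               # evaluation from White's view
line = board.variation_san(info["pv"][:6])  # first moves of the top line
engine.quit()

# Everything the language model would ever "see" is this thin string:
# notation plus a number. The idea behind a move is nowhere in it.
prompt = (
    f"Position (FEN): {board.fen()}\n"
    f"Engine eval: {score}, top line: {line}\n"
    "Explain the idea behind the last move."
)
print(prompt)
```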
The amount of high-quality training data (positions, moves and their human analysis) to train an LLM to actually give decent feedback on chess would be absolutely astronomical and completely unfeasible to produce. A prime example of people trying to use them for things they're not suited for.
@TotalNoob69 said in #30:
> You didn't understand my post @RuyLopez1000. What I am saying is that if what we consider slop was good enough for school and hard to grade - not grading poorly, but having difficulty deciding what grade it deserves, then there surely must be a problem there, in the school system and what we are teaching kids and how we are training teachers.
Good point.
> The first student wrote their paper with ChatGPT. The structure was perfection. The content? Recycled slop. Not only did it not reflect the student's grasp of knowledge, it even contained factual errors, only they were confidently expressed and even cited a non existent paper for the source.
>
> The second student wrote their paper with their own head. They are good at hard sciences, but have difficulty navigating the intricacies of language. Maybe they are not native speakers of the local language or maybe they're just not into writing anything but equations. Their paper not only perfectly solved the problem, but also expressed the inner workings of the student's mind.
>
> The result: sneaky student gets a boost of confidence. They did it! They tricked the teacher! And if they can trick the teacher like this, who else can they trick in the same way and get ahead in life? Science student instead is disappointed. They learn that expressing themselves naturally is wrong, that in order to get anywhere they have to adopt or fake a specific culture, with structured rules that involve little thinking.
But how could the sneaky student trick the teacher? You said *'The structure was perfection. The content? Recycled slop. Not only did it not reflect the student's grasp of knowledge, it even contained factual errors, only they were confidently expressed and even cited a non existent paper for the source.'*
There's no way they could pass, as structure is normally a very small part of the marking scheme. If what they wrote is wrong, they certainly could not beat the other student. Also, universities check whether cited sources are real.
>
> Was it a lazy teacher? No. Their brain just naturally discerns patterns and once those patterns seem to get the job done, they stop thinking about it and use them as a good tool. After all, you don't always think of how a magnetron works when you're reheating your food in the microwave. The student does the same thing, they learn patterns, mainly the behavioral ones of the teacher. They adapt to what gets the easiest grade with the least effort. This is not laziness - as defined relative to the average workload of community members - but how brains work. Parents do the same thing. Brains find patterns and they settle into them. Just like LLMs are trained.
>
> And that is why 20 years later the sneaky student is the one who hires the science student to make them money.
>
> We have been training people for slop for centuries. We are not demanding or rewarding intelligence, reasoning skills, chain of thought, but rote memorization, social cues and finding the easiest way forward. School is not an elevator of minds, but a conveyor belt filter of fitness in the structure of society. If they can't get through school, they certainly won't be good employees. And if they do get through school, it means they can handle whatever other lazy brains will throw at them.
Nice observation, very true. Great explanation of why there is *'a problem there, in the school system and what we are teaching kids and how we are training teachers.'*
> And when LLMs reveal this for all to see, what do we do? We label AI output as slop and complain how they "took 'r jubz!". Because the scariest thing to contemplate is that we will be building AI that will not be sloppy. And then what do WE do?
Interesting point, and perhaps part of the motivation for calling AI sloppy. But AI is indeed sloppy at the moment, so it is a justified thing to say.
What do you think will happen when AI is not sloppy? Do you think jobs will decrease, or that people will continue their jobs but basically become puppets who make the AI do things for them while they vegetate?
@TotalNoob69 said in #31:
> And just to spam even more with my unsolicited opinions, slop is safe! Maybe it's correlation and not causation that so many brilliant people end up having mental disorders,
> but you can't argue that there is (me - did you mean isn't?) a higher incidence of mental disease in technical domains that require consistent reasoning and true intelligence. Chess being one of them.
>
> What happens? Brains crack! They are not designed for that kind of work. You push them too hard into this unfamiliar territory of pure abstract reason and they malfunction.
Sounds like a stereotype. Do their brains crack? Where's the evidence that working with pure abstract reason causes this? Chess players cracking is not common at all. Just a few examples get brought out, ignoring all the others who showed no outward evidence of mental disorders. Some even claim that Morphy 'went crazy' (false) to play into the stereotype.
> Slop on the other hand? It's "environmentally friendly". Uses almost no energy. Just recycle what someone else said.
Recycling or Plagiarizing?
> Even better if a machine says it, because you can either pretend machines are not supposed to get things wrong or you can blame them for your lack of effort. But hey, at least your brain remains intact!
Would your brain remain intact if you grew up in a world where everything was AI Slop? Even now AI Slop is being used for brainwashing people, their brains certainly didn't remain intact.
@RuyLopez1000 said in #33:
> What do you think will happen when AI is not sloppy? Do you think jobs will decrease, or that people will continue their jobs but basically become puppets who make the AI do things for them while they vegetate?
Well, there are several alternatives. Historically, we rebel against the tyrants who demand quality :) But we also adapt to new situations.
If AI ceases to be sloppy, the default human will just trust it unquestioningly and offload their intelligence to it. They will revel in their sloppiness and even mock machines for the lack of it.
Frank Herbert postulated a "Butlerian Jihad" against machines, but also a stratification of society into the ones who are culturally forced to train their excellence and those who live under them in perpetual squalor and stupidity. He was a lot smarter than me, but somehow I suspect he was overly optimistic.
The most optimistic idea is that we will build our gods and somehow make them right, so they will take care of humanity like we do pets. The most pessimistic is that a minority of people will have control over AIs and robots and thus consider and treat the rest of the population as superfluous and eventually... subhuman.
What I believe? Not sure I am qualified. But if I give it a try, people will start communicating in machine dream language and then live there in machine dreamscapes, for better or for worse. Instead of expressing thoughts they will meme a feeling across using ad-hoc generated emojimagery, which will resonate pleasantly with people who feel a connection with said imagery and generate that necessary tribal emotion. Most people will find no use in reality and avoid it altogether if they have the option. And complain violently if they don't. And then the pessimistic scenario above.
@RuyLopez1000 said in #35:
> > but you can't argue that there is (me - did you mean isn't?) a higher incidence of mental disease in technical domains
Yes, that is what I meant, thanks for the correction.
> Sounds like a stereotype. Do their brains crack? Where's the evidence that working with pure abstract reason causes this? Chess players cracking is not common at all. Just a few examples get brought out, ignoring all the others who showed no outward evidence of mental disorders. Some even claim that Morphy 'went crazy' (false) to play into the stereotype.
Most chess players are amateurs. And most chess masters have resilient minds. But the stress is there and the evidence of its effect on human minds is clear. And I am not even talking about the autistic personality traits of technically minded people, which I feel just means people label smart people early, just to be safe :) Slop will never stress you to insanity.
> > Slop on the other hand? It's "environmentally friendly". Uses almost no energy. Just recycle what someone else said.
> Recycling or Plagiarizing?
What happened when AI started generating medleys of human art that looked amazing? We invoked vague copyright claims, as if a human doing the same thing (trying to emulate his betters and combining what he learned) were not exactly the same process. I don't believe in copyright, but then again I think most of art and thought out there is just recycling.
> > Even better if a machine says it, because you can either pretend machines are not supposed to get things wrong or you can blame them for your lack of effort. But hey, at least your brain remains intact!
>
> Would your brain remain intact if you grew up in a world where everything was AI Slop? Even now AI Slop is being used for brainwashing people, their brains certainly didn't remain intact.
Intact and smooth like a baby's bottom. I stopped using my brain for a few years and I got really stupid. It doesn't last unless you work continuously at it. That's biological brains for you. They can't possibly win this.
who the f actually uses AI to learn chess tho? let natural selection handle them
There’s a lot of misinformation in this article. For instance:
- ChatGPT is a product, not a model.
- 1.8B parameters would be considered an SLM, not an LLM. In early 2023, GPT-4 was rumored to have ~1.76T parameters, so I’m guessing the author rounded that to 1.8 but used the wrong unit (B instead of T). Later iterations don’t have any widely accepted leaks about parameter counts.
- A token isn’t a word; it’s closer to ~0.75 words on average. Many common words are one token, but it depends on capitalization and context (see the tokenizer sketch below the transcript). Either way, the article’s claim is false.
- GPT-3’s training data was described as ~300B tokens, not words (July 2020; source: https://arxiv.org/pdf/2005.14165.pdf).
- It predicts the next token, not the next word.
- The author tries to familiarize the reader with the term “token” by defining it as 1:1 with “word,” which is misleading.
- Correlating “parameters” with adjusting the output doesn’t make any sense.
- The author keeps repeating “Train the AI,” but most products today are built by using existing models via prompting/context (and broader LLM application design), not by training from scratch. You could train SLMs to be part of the pipeline, though, and it’s likely that they can outperform LLMs on many tasks while being more cost-efficient and faster.
- Using the internet as training data is not clearly “theft”; the question is just not legally settled. I think that only content behind a paywall should be illegal to scrape, but again, this is just my opinion.
- It’s true that LLMs can make logical errors (especially older or non-reasoning models). However, as an example, here is what ChatGPT’s default mode produces even with reasoning turned off (I don’t know which model the author used, and the site might be utilizing smaller/older ones):
```
Q: Is this statement about chess true or false?:
There are never less than 32 empty squares on a chess board.
True.
A chessboard has 64 squares. Under the rules of chess:
At the start, there are 32 pieces, so 32 empty squares.
Captures reduce the number of pieces, increasing empty squares.
Promotions do not increase the number of occupied squares—they replace a pawn on its square with another piece.
Therefore, the number of occupied squares is never more than 32, and the board always has at least 32 empty squares in any legal chess position.
```
Since models are non-deterministic, enabling reasoning may reduce the chance of these basic mistakes.
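On the token-vs-word bullet above, you can check the ratio yourself. A quick sketch, assuming OpenAI’s tiktoken package; “cl100k_base” is the encoding used by GPT-3.5/4-era models:

```python
# pip install tiktoken
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # GPT-3.5/4-era encoding

for text in ["hello world", "Hello World", "unbelievable", "Qxf7+"]:
    tokens = enc.encode(text)
    print(f"{text!r}: {len(text.split())} word(s) -> {len(tokens)} token(s)")

# Common lowercase words are often one token each, but capitalization,
# rare words, and chess notation split differently -- which is exactly
# why "1 token = 1 word" is misleading.
```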
It’s true that some companies are building bad products with AI while capitalizing on the hype. However, there’s no doubt the technology can be utilized, especially when the product is built well.
It seems that even the AI caught some of the errors in the article. I’d suggest that you rewrite it and apologize for spreading misinformation earlier.
edit: minor grammar fixes