AI Slop is Invading the Chess World

@HarpSeal said in #20:

> Never mind just the AI-assisted chess coaches; the actual chess content mill seems heavily AI.

We may be about to enter an era where the internet builds karma scores, cliques, etc. I wonder.

AI aside (don't you worry, I'm definitely crashing out), reading this was low-key entertaining.

Agreed! I'm surprised there is not more pushback against this nonsense. Hustlers out there trying to make a quick buck from gullible people.

@TotalNoob69 said in #2:

> There is nothing inherently bad in this design, with each component playing to their strength and creating a useful tool that everybody loves.

Semantics. Slop is slop, and currently worthless products advertised as AI coaching are popping up all over the place. The product itself is slop.

The revolution will be... monetized! I'm sure they're already creating a Magnus AI, so we can all learn from the GOAT, but it will be a novelty, as most people treat coaches as learning companions who have a favourite soccer team, or not.
...AI will never hate soccer.

Whatever the tech, it will be guided by the needs of humans. A few years ago Napster 2.0 was going to snuff out the music industry, but it just created a hundred million songs I don't listen to.

@HarpSeal said in #20:

> Never mind just the AI-assisted chess coaches; the actual chess content mill seems heavily AI. Example, I prompted ChatGPT the following:

I read somewhere that the majority of LinkedIn is now AI slop, and that 95% of all articles and blogs published in the AI era were heavily edited by AI.

AI trying to play chess:

ChatGPT, first move: "e4"
Me: "e5"
ChatGPT: "King d1"
Me: "uh, you can't move there"
ChatGPT: "I apologize, Rook b1"
Me: "uhh, you can't move there either"
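
The exchange above illustrates a real failure mode: a chat model emits free-form text, and nothing in the model itself enforces the rules of chess. Any serious wrapper has to validate each proposed move against the current position. A minimal sketch of that idea (the `referee` name and the hardcoded legal-move set are made up for illustration; a real program would generate legal moves with a chess library or engine rather than listing them by hand):

```python
# Sketch: an LLM's "move" is just text, so a wrapper must check it
# against the position before playing it on the board.

def referee(proposed: str, legal_moves: set[str]) -> str:
    """Accept a move only if it is legal in the current position."""
    if proposed in legal_moves:
        return f"playing {proposed}"
    return f"uh, you can't move there ({proposed} is not legal)"

# A subset of White's options after 1.e4 e5, hardcoded for illustration:
legal_now = {"Nf3", "Nc3", "Bc4", "Qh5", "d4"}

print(referee("Nf3", legal_now))   # a normal developing move
print(referee("Kd1", legal_now))   # the dialogue's illegal king move
print(referee("Rb1", legal_now))   # the dialogue's illegal rook move
```

With a check like this in the loop, the model's "King d1" gets rejected before it ever reaches the user, which is exactly the step the products being mocked here appear to skip.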

@HollowLeaf said in #26:

> I read somewhere that the majority of LinkedIn is now AI slop, and that 95% of all articles and blogs published in the AI era were heavily edited by AI.

It "started" with teachers and students, didn't it? Suddenly students were coming up with well written essays and problem solutions because they were using (the very earliest!) ChatGPT. Teachers foamed at the mouth then... started using ChatGPT to grade students.

Not for a second did the anti-AI crowd stop to think why the output of ChatGPT, which is so underwhelmingly inferior in most situations, was so good at solving school problems, and why teachers were so bad at grading papers written by AI. Not for a second did the "worried parents" consider what the school system was teaching and demanding of their children, and what teachers had adapted to spot in order to grade a paper high or low.

Strange what people choose to focus on when looking in a mirror.

@TotalNoob69 said in #28:

> > I read somewhere that the majority of LinkedIn is now AI slop, and that 95% of all articles and blogs published in the AI era were heavily edited by AI.

> It "started" with teachers and students, didn't it? Suddenly students were coming up with well written essays and problem solutions because they were using (the very earliest!) ChatGPT. Teachers foamed at the mouth then... started using ChatGPT to grade students.

But teachers are different. Some do and some don't use ChatGPT to grade.

> Not for a second did the anti-AI crowd stop to think why the output of ChatGPT, which is so underwhelmingly inferior in most situations, was so good at solving school problems, and why teachers were so bad at grading papers written by AI.

The point is to teach people to be capable of doing things themselves. Writing papers is a means of assessing people's capabilities; writing papers is not an end in itself.

Plus, LLMs certainly make a lot of errors and produce incoherent text.

> Not for a second did the "worried parents" consider what the school system was teaching and demanding of their children and what teachers had adapted to spot in order to grade a paper high or low.

What can they do? Most probably aren't aware of the scale of the issue. And how do you know they're fine with what's going on?

@RuyLopez1000 said in #29:

> > > I read somewhere that the majority of LinkedIn is now AI slop, and that 95% of all articles and blogs published in the AI era were heavily edited by AI.

> > It "started" with teachers and students, didn't it? Suddenly students were coming up with well written essays and problem solutions because they were using (the very earliest!) ChatGPT. Teachers foamed at the mouth then... started using ChatGPT to grade students.

> But teachers are different. Some do and some don't use ChatGPT to grade.

> > Not for a second did the anti-AI crowd stop to think why the output of ChatGPT, which is so underwhelmingly inferior in most situations, was so good at solving school problems, and why teachers were so bad at grading papers written by AI.

> The point is to teach people to be capable of doing things themselves. Writing papers is a means of assessing people's capabilities; writing papers is not an end in itself.

> Plus, LLMs certainly make a lot of errors and produce incoherent text.

> > Not for a second did the "worried parents" consider what the school system was teaching and demanding of their children and what teachers had adapted to spot in order to grade a paper high or low.

> What can they do? Most probably aren't aware of the scale of the issue. And how do you know they're fine with what's going on?

You didn't understand my post, @RuyLopez1000. What I am saying is that if what we consider slop was good enough for school and hard to grade - not grading poorly, but having difficulty deciding what grade it deserves - then surely there must be a problem in the school system, in what we are teaching kids, and in how we are training teachers.

A story: a teacher takes the stack of papers they have to grade. There are at least 30, because that's how class sizes work today, so they take them one by one and decide how good each is. At first they read the entire paper, finding where the student struggled to maintain coherence or display knowledge, and point those out as areas for improvement.

But after a few of these they get tired and lose focus, and instead of looking at specifics they look at the general structure of the paper. Is it articulate and nicely organized? Surely that's a sign of quality. 10! A! Excellent! Does a student struggle to even find the words to describe their thought patterns? Surely that means they have to do better: 8! B-! Do better!

The first student wrote their paper with ChatGPT. The structure was perfection. The content? Recycled slop. Not only did it not reflect the student's grasp of the material, it even contained factual errors, only they were confidently expressed and even cited a non-existent paper as the source.

The second student wrote their paper with their own head. They are good at hard sciences, but have difficulty navigating the intricacies of language. Maybe they are not native speakers of the local language or maybe they're just not into writing anything but equations. Their paper not only perfectly solved the problem, but also expressed the inner workings of the student's mind.

The result: the sneaky student gets a boost of confidence. They did it! They tricked the teacher! And if they can trick the teacher like this, who else can they trick in the same way and get ahead in life? The science student, instead, is disappointed. They learn that expressing themselves naturally is wrong, that in order to get anywhere they have to adopt or fake a specific culture, with structured rules that involve little thinking.

Was it a lazy teacher? No. Their brain just naturally discerns patterns, and once those patterns seem to get the job done, they stop thinking about it and use them as a tool. After all, you don't always think of how a magnetron works when you're reheating your food in the microwave. The student does the same thing: they learn patterns, mainly the behavioral ones of the teacher. They adapt to what gets the easiest grade with the least effort. This is not laziness - as defined relative to the average workload of community members - but how brains work. Parents do the same thing. Brains find patterns and they settle into them. Just like LLMs are trained.

And that is why 20 years later the sneaky student is the one who hires the science student to make them money.

We have been training people for slop for centuries. We are not demanding or rewarding intelligence, reasoning skills, chain of thought, but rote memorization, social cues and finding the easiest way forward. School is not an elevator of minds, but a conveyor belt filter of fitness in the structure of society. If they can't get through school, they certainly won't be good employees. And if they do get through school, it means they can handle whatever other lazy brains will throw at them.

And when LLMs reveal this for all to see, what do we do? We label AI output as slop and complain that they "took 'r jubz!". Because the scariest thing to contemplate is that we will be building AI that will not be sloppy. And then what do WE do?
