

Scrolled way too far down to see this kind of post.
I am live.


Dude. He makes that gun look like a toy from the dollar tree.
They can be fat and they can be fucks. Just not the thing in the middle.


The reason he should learn about it is because he’s talking about it as though he’s informed and he is not.
I don’t have to be an LLM programmer working at OpenAI to have a working knowledge of how these machines function. It’s literally just a Google search.
He made an unreasonable, ignorant comment and I called him out. He should feel ashamed, and I have absolutely no reason to water down what I’m saying under the guise of being nice.


Calling an LLM a Wikipedia regurgitator is factually and objectively incorrect.
Is there anything that you can say to refute the facts that I presented in my above comment?
(I rolled my eyes so hard at your comment that I pulled my back out)


No. You’re not just wrong, you’re aggressively uninformed.
Repeating the same tired “AI is just regurgitating data” line makes it clear you don’t understand what you’re criticizing. Calling large language models “AI” the way you are doing it just exposes that you do not know what you are talking about. It is like a creationist smugly saying “orangutang” instead of “orangutan” and thinking they sound informed. You are not demonstrating insight. You are advertising ignorance.
What you’re describing, reading a paragraph off Wikipedia, is literal retrieval. That is not how modern language models operate. They are not databases with a search bar attached. They are probabilistic systems trained to model patterns, structure, and relationships across massive datasets. When they generate a response, they are not pulling a stored paragraph. They are constructing output token by token based on learned representations.
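To make the “token by token” point concrete, here is a toy sketch of how probabilistic generation works. This is not a real language model and every name in it is made up for illustration: a real model computes the scores with billions of learned weights, not a hand-written lookup table. The shape of the loop is the point: score candidates, turn scores into probabilities, sample one token, repeat.

```python
import math
import random

# Hypothetical stand-in for a trained model: given the context so far,
# assign a score (logit) to each candidate next token. A real LLM
# derives these scores from learned weights, not a table like this.
def toy_logits(context):
    last = context[-1] if context else "<start>"
    table = {
        "<start>": {"the": 2.0, "a": 1.0, "cat": 0.1},
        "the": {"cat": 2.5, "dog": 2.0, "the": -3.0},
        "cat": {"sat": 2.0, "ran": 1.5, "<end>": 0.5},
        "dog": {"sat": 1.5, "ran": 2.0, "<end>": 0.5},
        "sat": {"<end>": 3.0},
        "ran": {"<end>": 3.0},
    }
    return table.get(last, {"<end>": 1.0})

def sample_next(context, rng, temperature=1.0):
    # Softmax: convert raw scores into a probability distribution.
    logits = toy_logits(context)
    exps = {t: math.exp(s / temperature) for t, s in logits.items()}
    total = sum(exps.values())
    probs = {t: e / total for t, e in exps.items()}
    # Sample one token according to those probabilities.
    r = rng.random()
    acc = 0.0
    for tok, p in probs.items():
        acc += p
        if r < acc:
            return tok
    return tok  # numeric fallback for rounding at the boundary

def generate(max_tokens=10, seed=0):
    # Build output one token at a time; nothing is "looked up" whole.
    rng = random.Random(seed)
    context = []
    for _ in range(max_tokens):
        tok = sample_next(context, rng)
        if tok == "<end>":
            break
        context.append(tok)
    return " ".join(context)
```

Note that nothing in this loop retrieves a stored paragraph: each token is drawn from a distribution conditioned on everything generated so far, which is why the same model can phrase the same idea many different ways.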
If it were just regurgitation, you would constantly see verbatim copies of training data. You do not. What you see instead is synthesis. Concepts are recombined, abstracted, and adapted to context. The system can explain the same idea multiple ways, shift tone, handle novel prompts, and connect ideas that were never explicitly paired in the source material. That is fundamentally different from reading something out loud.
Your analogy fails because it assumes nothing is being transformed. In reality, transformation is the entire mechanism. Information is compressed into weights and then expanded into new outputs.
Is it human intelligence? No. Is it perfect? No. But reducing it to “just reading Wikipedia out loud” is not skepticism. It is a basic failure to understand how the technology works.
If you are going to criticize something, at least learn what it is first.


It really isn’t. But you do you boo.


I know Lemmy’s very anti-AI, but this is really fascinating stuff.


No, as a matter of fact, the subject of this particular post fits the sub exactly. It is a stupid question.
It is incredibly stupid. There is no real way to answer it, and any answer would be superficial because it is such a massive hypothetical that the answer itself does not actually matter.
Although, as stated above, it technically fits the sub, it violates the spirit of what this is supposed to be.
Great movie btw.