You are viewing a single comment's thread from:

RE: A Description of What's Under the Hood of AI from Stephen Wolfram

in #life · 2 years ago

Wolfram makes it obvious what ChatGPT actually is - simple machine learning applied as a large language model. It does what most people do when they make small talk - something I would call "buzzwording".

Throwing in words that statistically fit the given topic. "Bitcoin? Uhh yeah, the energy expenditure of the proof-of-work algorithm is a disaster."

Ignoring the fact that an algorithm itself has no energy expense - only the hardware running it does... The bigger the language model, the better it obfuscates the fact that it deepfakes understanding. It is authority bias: WE project assumptions onto the bot. There is no underlying model of the topic. But most people will not care, they just want to talk. Publish or perish.
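That "statistically fitting words" mechanism can be sketched with a toy bigram model (my own illustration, not anything from Wolfram's article - real LLMs are vastly more sophisticated, but the principle is the same): pick the word that most often followed the previous word in some training text. There is no model of the topic anywhere, only word-follows-word frequencies.

```python
# Toy "buzzwording" sketch: choose the next word purely by how often
# it followed the previous word in a tiny training corpus.
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count how often each word follows each other word."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def next_word(follows, word):
    """Return the statistically 'best fitting' next word, or None."""
    if word not in follows:
        return None
    # The most frequent follower is "what statistically fits the topic".
    return follows[word].most_common(1)[0][0]

corpus = ("bitcoin uses proof of work and proof of work "
          "uses energy so bitcoin uses energy")
model = train_bigrams(corpus)
print(next_word(model, "proof"))  # "of" - it always followed "proof"
print(next_word(model, "uses"))   # "energy" - the most frequent follower
```

The output looks topical without any understanding behind it - which is exactly the point.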

Generative AI for art is one thing, word prediction for saving some seconds is another, but for empirical results we need tools that address the problem, not tools that merely look like they address the problem. A screwdriver that only looks like it is spinning doesn't put a screw into a wall. No matter how real the spin looks, if there is no torque it is useless. Using an LLM for everything is a clear case of man-with-a-hammer syndrome. I use it as a highly unqualified but highly motivated assistant. It lists trivia for me: "All environmental laws in California by date". With more APIs it becomes more useful for such paperwork.


"It is an authority bias where WE project assumptions onto the bot."

I agree. The more impressed people are by AI, the less I find they understand what it is, or understand our own consciousness.

Thanks!