While I am not expecting the Singularity any time soon, or at all, I have been taken aback by recent developments in AI. The former limits of their output have suddenly and significantly been surpassed. The weird, garbled, quasi-faceless 'art' they created, which really left me queasy, and the Jack-and-Jill sentence structure of their written output have quite noticeably become far more competent. My ability to manipulate pixels, to make art, has been quite surpassed. I don't think my original ideas are close to being threatened, but my ability to write concise and readable prose certainly has been left in the dust - as I'm sure everyone who struggles through this wall of text will agree!
Given that the capabilities we're being shown are quite targeted, and blatantly neutered, I am left to consider that similar capabilities in more substantive matters - software development, facial recognition, and far more disturbing fields - have improved no less. It's also obvious that the neutering is done out of desperation. The propaganda narratives around race, gender, and the whole panoply of political power would be quickly and utterly destroyed without that neutering. (I find it hilarious that they term it 'safety'. It's throttling - literally the opposite of safety.) Obviously overlords and their pet minions are availed of these tools without that throttling, and that's disturbing because of what it says about what they want for us: it's not to derive benefit, or to make society more felicitous.
I also find it remarkable how manageable certain classes of society are going to be by AI. They were always well suited to psychological manipulation, so I'm not going to shed a lot of tears lamenting their loss to their machine overlords, but I'm impressed how well they take to it. The closest comparisons I can think of are the social-deprivation experiments that gave baby monkeys wire frames draped in terry cloth as surrogate mothers, and the Romanian orphanages of the 1980s, which produced horribly psychologically injured children because they were not interacted with beyond feeding and rudimentary hygiene. I'll point out that we're all psychologically manipulable, but some of us are more liable to be driven into opposition, and very unlikely to become willingly compliant with given policies or AI services, and it is that submission that strikes me as masterfully orchestrated. It's easy to make noise that people dislike. It's far more demanding to craft music some enjoy.
I am confident a lot of folks are giving these matters thought too, and as is my wont, I sniffed about to learn more about the edge cases, flaws, and weaknesses of these bots. In that process I ran across this essay by Stephen Wolfram, of Wolfram Alpha fame, and it really clarified for me how neural nets and machine learning work, and what's gone so suddenly right with them.
I wanted to share it with you because I am sure many of you will be glad to gain this insight. There's potential in these tools for real benefit and utility in ways people meaningfully need, and despite the scary or obscene pics, dirty limericks, and prompt-injection exploits that seem to be riveting us initially, the best uses of these tools are barely even conceivable right now. I'd like to see facilitation of automation, and I am certain that's possible. What more that's good, I haven't put much thought into yet. Maybe reading Wolfram will jog a few noggins and you'll share some best use cases with me here.
I'd appreciate it if you do.
Wolfram makes it obvious what ChatGPT actually is - simple machine learning applied as a large language model. It does what most people do when they make small talk - something I would call "buzzwording".
Throwing in words that statistically fit the given topic: "Bitcoin? Uhh, yeah, the energy expenditure of the proof-of-work algorithm is a disaster."
Never mind that an algorithm as such has no energy expense... The bigger the language model, the better it obfuscates the fact that it deepfakes understanding. It is authority bias: WE project assumptions onto the bot. There is no underlying model of the topic. But most people will not care; they just want to talk. Publish or perish.
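To make the "buzzwording" point concrete, here is a toy bigram sketch in Python (the corpus and the function name are mine, purely illustrative): it emits words in proportion to how often they followed the previous word, with no model of the subject at all - which is the mechanism, scaled down by many orders of magnitude, that the comment above is describing.

```python
import random
from collections import Counter, defaultdict

# Tiny made-up corpus; a real LLM trains on billions of words
# and conditions on long contexts, not just the previous word.
corpus = (
    "bitcoin proof of work wastes energy . "
    "bitcoin mining wastes energy . "
    "proof of work secures bitcoin ."
).split()

# Count which word follows each word (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def babble(word, n=6):
    """Emit words that statistically fit, with no model of the topic."""
    out = [word]
    for _ in range(n):
        options = follows.get(out[-1])
        if not options:
            break
        # Pick the next word in proportion to how often it followed.
        words, weights = zip(*options.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(babble("bitcoin"))
```

The output sounds vaguely on-topic precisely because every step is locally plausible, even though nothing anywhere represents what proof-of-work actually is.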
Generative AI for art is one thing, word prediction to save a few seconds is another, but for empirical results we need tools that address the problem, not tools that look like they address the problem. A screwdriver that only looks like it is spinning doesn't put a screw into a wall. No matter how real the spin looks, if there is no torque it is useless. Using an LLM for everything is a clear case of man-with-a-hammer syndrome. I use it as a highly unqualified but highly motivated assistant. It lists trivia for me: "All environmental laws in California by date". With more APIs it will become more useful for such paperwork.
I agree. The more impressed people are by AI, the less I find they understand what it is, or about our own consciousness.
Thanks!
Thanks for the pointer to that essay; it is a very clear and precise introduction to the field of large language models. A bit lengthy, but a very worthwhile read.
Anything at all. I am grateful I can serve.