The past is mostly wrong
If you go back and look at the predictions of things like Popular Mechanics and Omni, or just about any writer, thinker, or science communicator for that matter, the one consistent thing you see is how goddamn wrong they are. About everything.
The flying car is the poster child for this. It was obvious - obvious! - we’d have them in no time. Energy is cheap, technology is growing by leaps and bounds. We’ve harnessed the power of the atom, put stuff in orbit, and we’ll be on the moon soon: flying cars are gonna happen any moment!
We still do not have flying cars.
This goes both ways, à la “I think there is a world market for maybe five computers.” You can control someone else’s sex toy from half a world away with the computer that lives in your pocket. Great prediction, Watson.
The reason no one can get this shit right should be obvious: the externalities, the black swans, all conspire to derail what to any sensible technologist is a clear and obvious roadmap. Flying cars were the next obvious innovation, I guess, but there’s a lot more to it than just building the thing.
All of this is to say: No, I don’t think we’ll get AGI any time soon, and the weird religio ex machina freaks hoping to invent God out of math are just as delusional as the ones who are … hoping to invent God out of math.
Again, look at the evidence: we’re very, very bad at predicting where stuff goes. We get it wrong approximately 100% of the time. Most innovations don’t arrive with a big thunderclap - the future is unevenly distributed.
I think there are technical problems a lot of people are handwaving away - we don’t have a real theory of mind that answers all the questions about it; we don’t even know (in any great detail, anyway) how anesthesia works. But you’re telling me we can just skip all that stuff and make a brain, just like ours but better? Sure thing!
But ignoring the technical parts, I think the reality is that AI, or even AGI, is going to look (and act, and be) nothing like what we all seem to think it is going to look like. Because that’s just how this stuff goes.
Dramatic, sci-fi tales of malevolent machine gods escaping into the real world? Stories. Humans being their generally awful selves, and building a machine that hates black people, because it was trained on a bunch of unexamined biases of rich white people? That’s happening right now, and will continue to happen.
Which is why they spend all their time talking about WOPR, by the way. It’s much better to freak you out about the end of the world than about “you will be denied a car loan by a racist machine”, or whatever. They can dismiss their “safety” teams, where “safety” means “don’t mess with humans”, and really “focus” on safety defined as “keep it from starting World War 3, or something”.
Generative AI has real-world uses right now; people are doing interesting things with it. But it’s not going to come alive and revolutionize the world - or destroy it. Humans will remain the sole owners of their destiny.