- cross-posted to:
- hackernews@lemmy.bestiver.se
cross-posted from: https://lemmy.bestiver.se/post/930220
It’s nice to see some people actually get it.
Meanwhile, the actual research community tells a different story. A 2025 survey by the Association for the Advancement of Artificial Intelligence (AAAI), surveying 475 AI researchers, found that 76% believe scaling up current AI approaches to achieve AGI is “unlikely” or “very unlikely” to succeed. The researchers cited specific limitations: difficulties in long-term planning and reasoning, generalization beyond training data, causal and counterfactual reasoning, and embodiment and real-world interaction.
I am not at their level yet, but this is my take too.
IMO until we truly understand human intelligence / consciousness, we don’t have a benchmark for whether the machine has achieved AGI.
Not to say the current approach of brute-forcing would never work. IMO more work can be done in areas beyond vision and natural language. Personally, I am interested in somatosensation.
Another subfield of AI that looks promising is reinforcement learning. Not sure if these are the correct terms, but all these models do “offline” learning. Yeah yeah, there’s RLHF and whatnot, but my understanding is that it has always been split into a training phase and an inference phase. I wonder if it’s possible to do “online” learning, in which the model actually incorporates new information into its weights in real time and uses that information right away.
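To make the distinction concrete, here is a minimal sketch of “online” learning: a tiny linear model whose weights are updated the moment each new observation arrives, so it can use that observation right away. The data, model, and learning rate are all hypothetical; nothing here resembles a real LLM training pipeline.

```python
import random

def predict(w, b, x):
    return w * x + b

def online_update(w, b, x, y, lr=0.1):
    """Fold one new (x, y) observation into the weights immediately (SGD step)."""
    error = predict(w, b, x) - y
    w -= lr * error * x
    b -= lr * error
    return w, b

# A stream of observations arriving one at a time; the hidden relation is y = 2x + 1.
w, b = 0.0, 0.0
for _ in range(2000):
    x = random.uniform(-1, 1)
    y = 2 * x + 1
    w, b = online_update(w, b, x, y)

print(round(w, 2), round(b, 2))  # the weights approach 2.0 and 1.0
```

The contrast with the usual train-then-deploy split is that there is no separate training phase: every prediction-time observation changes the weights.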
Reinforcement learning is a whole different animal from LLMs. You do need to pre-train agents to understand base patterns, but they work by iteration: testing whether the output matches the patterns they know. Sure, they are guessing too, but each guess yields new inputs. They iterate until the outputs stabilize and converge on the correct answer.
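That guess-and-update loop can be shown with a toy example: a hypothetical two-armed bandit with tabular value estimates (far simpler than any real RL system). Each guess produces a new observation, which nudges the estimate, and the estimates settle over time.

```python
import random

random.seed(0)
rewards = {0: 0.2, 1: 0.8}   # hidden payoff probability of each action
q = {0: 0.0, 1: 0.0}         # the agent's current value estimates
lr, eps = 0.1, 0.1

for _ in range(5000):
    # Mostly exploit the best-known action, occasionally explore at random.
    a = random.choice([0, 1]) if random.random() < eps else max(q, key=q.get)
    r = 1.0 if random.random() < rewards[a] else 0.0
    # Each guess yields a new input that updates the estimate for that action.
    q[a] += lr * (r - q[a])

print(max(q, key=q.get))  # the agent settles on the better action, 1
```

The estimates never stop being “guesses”, but iteration drives them toward the hidden payoffs until the chosen output becomes constant.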
I’m quite convinced AGI cannot be achieved on a Turing machine, whatever its size or complexity.
Arguments against it are mainly based on Gödel’s incompleteness theorems and are described in books like “Minds and Machines - Alan Ross Anderson”, “Consciousness in the universe. A review of the ‘Orch OR’ theory” or even “Mind” from Alan Turing himself.
I’m not claiming I understand all the arguments written in these books, but it seems that Gödel’s incompleteness theorems also apply to the universe and consciousness. To briefly summarize Gödel’s incompleteness theorems: a formal system cannot describe everything. There will always be things which are beyond its reach. A Turing machine is a formal system. This means that a Turing machine will never be able to simulate our universe or replicate consciousness, and thus never replicate a human brain.
However, it could be feasible with quantum computers, which are not based on a formal system.
I think you’re misunderstanding the incompleteness theorems.
Gödel’s incompleteness theorems also apply to universe and consciousness
Sure, if you assume the universe can be described by a computable formal system. Gödel’s theorems apply only to computable formal systems.
To briefly summarize Gödel’s incompleteness theorems, it states that a formal system cannot describe everything.
That’s a gross oversimplification. It really says that (1) there are true statements about formal system S which cannot be proven within S and (2) S cannot prove its own consistency.
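Stated a bit more carefully (informal notation; here S is any consistent, effectively axiomatized system containing basic arithmetic):

```latex
% First incompleteness theorem: there is a sentence G_S that S can
% neither prove nor refute.
\exists\, G_S:\quad S \nvdash G_S \quad\text{and}\quad S \nvdash \neg G_S
% Second incompleteness theorem: S cannot prove its own consistency.
S \nvdash \mathrm{Con}(S)
```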
This means that a Turing Machine will never be able to simulate our universe or replicate consciousness, and thus to replicate a human brain.
You’ve previously assumed that the universe is a computable formal system. But all computable formal systems can be modeled by a Turing machine, so that assumption contradicts your conclusion.
However, it could be feasible with Quantum Computer that are not based on formal system.
How would a quantum computer even work if it weren’t described by a formal system?
That’s a gross oversimplification.
Of course it’s a gross simplification! It’s a one-line comment about one of the most fundamental theorems of modern mathematics. If a mathematician came here, they would say your comment is still a gross oversimplification too. Stop nitpicking.
You’ve previously assumed that the universe is a computable formal system.
I’m paraphrasing what I understood from the three books I read. A Turing machine is deterministic: given the same inputs, you get the same outputs. But quantum mechanics is not. First, because you cannot put a quantum state into exactly the same state as another one (the no-cloning theorem); second, because quantum results are intrinsically probabilistic and are not the consequence of a mechanical procedure. So the universe cannot be fully simulated by a finite Turing machine (and maybe not even by an infinite one?). This has recently been proven, and the proof relies on Gödel’s theorem: https://arxiv.org/abs/2507.22950
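For what it’s worth, the no-cloning theorem mentioned above has a short standard proof from the linearity of quantum mechanics (a sketch, not taken from the books cited):

```latex
% Suppose a unitary U could clone two arbitrary states:
U\,|\psi\rangle|0\rangle = |\psi\rangle|\psi\rangle,\qquad
U\,|\phi\rangle|0\rangle = |\phi\rangle|\phi\rangle
% Unitaries preserve inner products, so
\langle\psi|\phi\rangle = \langle\psi|\phi\rangle^{2}
% which forces \langle\psi|\phi\rangle \in \{0,1\}: only identical or
% orthogonal states can be cloned, never arbitrary unknown ones.
```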
How would a quantum computer even work if it weren’t described by a formal system?
Seems like there is still no formal system that fully describes quantum mechanics. There are mathematical models, but they are models, not exact descriptions. And even Feynman said it may be impossible to fully understand quantum mechanics: https://www.youtube.com/watch?v=SczWCK08e9k
I’m putting conditionals everywhere because I’m not a physicist. If I’m wrong, please provide sources.
Then there is the Orch OR theory, which states that consciousness arises from quantum processes. This theory is currently heavily criticized, so for now it’s more a question of belief than of proven statements. That’s why I started my first comment with:
“I’m quite convinced AGI cannot […]” and not “AGI cannot […]”
I’m still reading the article, but I must bring two observations into the loop:
“Mary held a ball.”
Not sure if it’s due to my English as a second language, my neurodivergence, or my personal taste for ominous musical aesthetics, but I immediately thought of a meaning that the author didn’t mention: Mary (a person) “held” (as in “organized”) a “ball” (as in a “masked ball”, a gala event). I immediately thought of that Kubrick movie and its ominous theme song, which I often listen to. “Mary held a ball” can become a rabbit hole if we really think about it.
But even in this, we are trying to learn the physical and logical constraints of the real world from visual data.
Isn’t that what all living beings do, essentially? When a dog instinctively anticipates the likely trajectory of a frisbee before it’s thrown by a human hand, does the dog understand the physical and logical constraints by pulling parameters directly from the spacetime continuum (as if it were plugged into and feeding from “The Matrix”)? Or did it simply learn, by watching objects being thrown (and they don’t even need to be frisbees), the expected behavior of said object?
Sure, as living beings (notice I avoid an anthropocentric view of intelligence, because I believe intelligence is far from a human exclusivity: see, for example, the New Caledonian crows), we also have other “inputs”, such as tactile feedback, proprioception (the sense of one’s own balance, alongside the “brain homunculus” keeping track of the current pose), and hearing (an object being thrown makes a sweeping noise as it collides with the air molecules, leading to a Doppler effect that can be instinctively gauged by ear), all of which converge to build a cognitive model of what’s going on.
But just as we can infer expected behavior/movement merely by watching a video (and other animals do it too: cats, for example, not only see objects on a screen (simulacra of fish, butterflies, and other prey in “videos for cats”) but also try to follow any abrupt movement), why couldn’t the same principle apply to algorithms?
Not to mention how brains are, essentially, biological machines. Unless a person believes in spirits/souls (which I, paradoxically, do), living beings are merely carbon-based biological automata, not that different from silicon-based automata.
And even when we consider animism/spiritism, there’s nothing truly “special” separating humans (and, by extension, organic living beings) from ML-imbued robots when it comes to this baryonic realm. Just as our meat has this “link” with something from the transcendental realm, with the conception behaving as some kind of a ritualistic summoning leading to the birth of a biological body tied to a spirit pulled from the Cosmic Abyss, nothing really stops a machine from being an electronic Ouija board, just as how EVP was already a thing before computers existed.
Everyone everywhere already knows AGI isn’t ever coming.
isn’t ever
Quoting the article:
I’m not saying that AGI is impossible, or even that it won’t come within our lifetime. I fully believe neural networks, using appropriate architectures and training methods, can represent cognitive primitives and reach superhuman intelligence.
Isn’t ever.