
>So why restrict it to very high level bins?

I don't think it's a quantitative issue (the level of the bin). It's a qualitative issue. When people say that ML lacks intelligence, what they're saying is that it lacks robustness, common sense, agent-like behaviour, the ability to reason, and so on.

Intelligence (in humans or animals) does not appear to be just data driven pattern matching. I think we can say this with some confidence given that even the fanciest ML algorithm still hopelessly sucks at performing tasks that are trivial even for barely intelligent animals.



> Intelligence (in humans or animals) does not appear to be just data driven pattern matching. I think we can say this with some confidence given that even the fanciest ML algorithm still hopelessly sucks at performing tasks that are trivial even for barely intelligent animals.

Do you mind expanding on this? A lot of the tasks primitive animals can do can be replicated by Deep RL algorithms given the same environment.

Humans adapt based on sensor information, memory, and a reward/penalty system, just like RL. We're just much more advanced and have sophisticated systems to sense and act.
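
To make the analogy concrete, here is a minimal sketch of that sense/act/reward loop: a toy one-dimensional environment and a tabular Q-learning agent. Both the environment and all the numbers are invented purely for illustration, not taken from anywhere in this thread.

    # Toy sketch: sense -> act -> reward -> update, with tabular Q-learning.
    import random

    class ToyEnv:
        """Agent starts at position 0 and is rewarded for reaching 5."""
        def reset(self):
            self.pos = 0
            return self.pos                            # "sensor information"

        def step(self, action):                        # action is -1 or +1
            self.pos += action
            reward = 1.0 if self.pos == 5 else -0.01   # "reward/penalty system"
            return self.pos, reward, self.pos == 5

    env = ToyEnv()
    q = {}                                             # "memory": learned value estimates

    for episode in range(200):
        obs = env.reset()
        for _ in range(100):                           # cap episode length
            if random.random() < 0.2:                  # explore
                action = random.choice([-1, 1])
            else:                                      # exploit learned values
                action = max((-1, 1), key=lambda a: q.get((obs, a), 0.0))
            next_obs, reward, done = env.step(action)
            best_next = max(q.get((next_obs, a), 0.0) for a in (-1, 1))
            old = q.get((obs, action), 0.0)
            q[(obs, action)] = old + 0.5 * (reward + 0.9 * best_next - old)
            obs = next_obs
            if done:
                break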


>A lot of the tasks primitive animals can do can be replicated by Deep RL algorithms *given the same environment*.

Highlighted the last part because this is the key difference. Humans and animals can navigate unknown, unstructured, and open-ended environments. You can identify a dangerous predator without having to see ten thousand images of mauled bodies.

Humans and animals can generalise in a way that is robust and independent of 'data' because they understand what they see. RL algorithms have no understanding of the world. If you let an RL system play Breakout but resize the paddle by five pixels and tilt it by two degrees, it cannot play the game.
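
As a rough sketch of the kind of perturbation test I mean: score a policy that was trained on standard Breakout frames against frames that have been shifted a few pixels sideways. The env id, the older Gym reset/step API, and the placeholder trained_policy below are all assumptions for illustration, not anyone's actual setup.

    import gym
    import numpy as np

    class ShiftFrames(gym.ObservationWrapper):
        """Shift every frame horizontally so the paddle no longer appears
        exactly where it did during training."""
        def __init__(self, env, dx=5):
            super().__init__(env)
            self.dx = dx

        def observation(self, frame):
            return np.roll(frame, self.dx, axis=1)     # horizontal pixel shift

    def evaluate(env, policy, episodes=10):
        total = 0.0
        for _ in range(episodes):
            obs, done = env.reset(), False
            while not done:
                obs, reward, done, _ = env.step(policy(obs))
                total += reward
        return total / episodes

    base_env = gym.make("BreakoutNoFrameskip-v4")

    def trained_policy(obs):
        # placeholder: substitute the output of an actual training run here
        return base_env.action_space.sample()

    # the score on shifted frames typically collapses relative to the score
    # the same policy gets on the unmodified environment
    print(evaluate(ShiftFrames(base_env), trained_policy))
    print(evaluate(base_env, trained_policy))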

Daniel Dennett helped popularise the notion of Umwelt, which loosely translates as an organism's self-centered world as it subjectively perceives it, filled with meaning and with a notion of how the agent relates to it. It's distinct from an objective 'environment' that everyone shares.

Machines lack this notion; they have no real concept of anything, even the fanciest algorithms. This is also why conversational agents have only really made advances on one front: recognising speech and turning text into natural-sounding speech. They have made virtually no progress in understanding irony, ambiguity, or anything else that requires an understanding of the human Umwelt. We don't even have any idea how the mind constructs this sort of interior representation of the world, and my prediction is that we're not going to as long as we keep talking about layers of neurons instead of talking about what the possible structure of a human or animal mind is.


I couldn't agree more.

I think our Umwelt is formed by our senses, especially touch: our skin allows us to form a model, in our brain, of our body and its movement abilities.

That's the reason why we have the Phantom Limb phenomenon: https://en.wikipedia.org/wiki/Phantom_limb

I think that the brain doesn't simply do pattern matching, it also forms models of things based on external stimuli, and then does pattern matching on properties of those models.

And this model constitutes a small 'universe' inside our brains, with us at the center.

And that's how consciousness emerges: we know of an entity that is us because our brain has an object in it that represents us.

In your Breakout example, there is really no formation of an object 'paddle' inside the 'AI' that plays the game, and hence if the paddle is changed even a little, the algorithm doesn't know how to handle it. Whereas in our brain, we see the pixels on the screen as an actual paddle representation, and hence we can easily play any version of Breakout no matter how the paddle looks.

EDIT:

Same thing with vision: seeing things allows us to build 3D models of them in our brain. Then we can look at a photograph and recognize objects, because we have these models in our brain.


Well, what if you trained on the unbelievable amount of data that enters the human brain?

Like, say inputting years of high definition video, audio, proprioception, introspection, debug and error logs, data from a bunch of other sensors, etc. Then put that in a really tight loop with high precision motion and audio output devices, and keep learning. Also do it on a really fast computer with lots of memory. Also make the code itself an output.
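
Just for a sense of scale, here's a crude back-of-envelope estimate of that raw sensory stream. Every rate below is my own rough assumption, not a measurement.

    seconds_per_year  = 60 * 60 * 24 * 365
    video_bytes_per_s = 1920 * 1080 * 3 * 30      # ~30 fps RGB "high definition"
    audio_bytes_per_s = 44_100 * 2 * 2            # 16-bit stereo
    other_bytes_per_s = 1_000_000                 # proprioception etc., a guess

    years = 20
    per_second = video_bytes_per_s + audio_bytes_per_s + other_bytes_per_s
    total = years * seconds_per_year * per_second
    print(f"~{total / 1e15:.0f} petabytes over {years} years")   # roughly 118 PB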

If that's not enough, you could always try self-replicating to create more successful versions over million-year timescales.


That's trivially true in the sense that this is presumably how humans and animals came to be, but I don't think there's any guarantee that it can be replicated in a silicon-based software architecture, which is very different from analog, chemical biological organisms. Already today energy and computational costs are high, with model training costs running into five or six figures even in a single domain.

But more importantly, I think the problem with this approach is that it's essentially a Hail Mary of sorts, with potentially zero scientific insight into how the brain and the mind work. It's a little bit like behaviourism before the cognitive revolution, with AI models being the equivalent of a Skinner box.


I don't know the history of it and am not an expert in the field, but it seems to me that it's valid to call the things ML can already do "intelligence" in a generalized sense, and that there is nothing categorically different between that and human intelligence; it's just a matter of how complicated humans are.

That seems like a problem you can sort of throw hardware at for a while until it gets good enough to help you figure out how to make something smarter.


I think there is little evidence to suggest that neural networks and human minds behave in the same way modulo complexity.

In science there is a tendency to come up with models of the world - simplifications which we can observe and quantify - and then fall into the trap of thinking that these models explain the world.

While neural networks are inspired by the biological connections between neurons and synapses, it does not follow that neural networks are therefore intelligent.


I'm not suggesting NNs should be considered intelligent because of their inspiration from biology. Just that they should be considered intelligent because of what they are currently capable of (though that is way less intelligent than humans or animals of course).

But it seems like there is a plausible path to increasing their "intelligence" by dumping more data and hardware at them.

Like GPT2 shows quite a surprising amount of structural complexity even though it's very dumb in other ways – it feels like if you could pump a million times more data into it you'd get something that seemed really quite intelligent.



