Discussion about this post

Harry Martin:

Well, once again, you've prevented me from getting to the rest of my inbox on this rainy Saturday morning. Great piece! And astounding developments in OI. Keep up the great work. Your ability to explain complex concepts like this is wonderful.

Fager 132:

That explanation of the difference between brains and LLMs--how the prediction loop closes--is really interesting. Mostly because it validates my contention that LLMs only simulate learning and intelligence, but also because it explains why: their feedback loops don't run "at the timescale of experience." They don't update in real time; the internal model doesn't change.

With respect to the DishBrain experiment, though, I don't get it. I get the neurons-growing-on-a-chip part, but they're "connected to a software simulation of Pong"? How, physically, I mean? The neurons navigate toward Doom enemies and shoot at them...how? A game that requires spatial memory requires perceiving spatial relationships, which requires a sense of touch or vision or both. So without perception, how is there a directed response (an output), as opposed to just a preference for orderly input over chaotic input? Aren't all the other elements of learning absent--perception, conceptualization, extrapolation, integration, automatization?
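For what it's worth, the DishBrain paper (Kagan et al., 2022) describes the "connection" as purely electrical: the neurons grow on a multi-electrode array, Pong exists only in software, the ball's position is written into the culture as the place and rate of stimulation on "sensory" electrodes, and spiking in predefined "motor" regions is decoded into paddle movement. The loop closes with feedback: predictable stimulation after a hit, unstructured noise after a miss. A minimal schematic sketch of that loop, with every name and parameter below hypothetical:

```python
import random

# Schematic of the DishBrain closed loop (Kagan et al., 2022).
# All function names and numbers here are illustrative stand-ins;
# the real system is a multi-electrode array sampled at kHz rates.

def encode_ball_position(ball_y, paddle_y):
    """'Sensory' input: ball position becomes the place (which electrode)
    and rate (stimulation frequency) of electrical stimulation."""
    return {"electrode": int(ball_y * 8),
            "rate_hz": 4 + abs(ball_y - paddle_y) * 40}

def read_motor_activity():
    """'Motor' output: spike counts in two predefined electrode regions
    are compared; the sign of the difference moves the paddle."""
    up, down = random.random(), random.random()  # stand-in for spike counts
    return up - down                             # >0 move up, <0 move down

def feedback(hit):
    """Loop closure: a hit triggers predictable stimulation, a miss
    triggers unpredictable noise, which (on the paper's free-energy
    account) the culture acts to minimize."""
    return "predictable_burst" if hit else "random_noise_burst"
```

So there is no perception in the ordinary sense; the "sense" is patterned current, and the "response" is whatever spiking the decoder chooses to read as paddle motion.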

This link https://www.science.org/doi/10.1126/science.adk4858 appeared in another Substack article, and I'm not going to pretend to understand the implications of the paper's research. But this plain-text summary of it (on X), giving context on the complexity of the human brain, is fascinating:

"The math on this project should mass-humble every AI lab on the planet. 1 cubic millimetre. One-millionth of a human brain. Harvard and Google spent 10 years mapping it. The imaging alone took 326 days. They sliced the tissue into 5,000 wafers each 30 nanometres thick, ran them through a $6 million electron microscope, then needed Google’s ML models to stitch the 3D reconstruction because no human team could process the output. The result: 57,000 cells, 150 million synapses, 230 millimetres of blood vessels, compressed into 1.4 petabytes of raw data. For context, 1.4 petabytes is roughly 1.4 million gigabytes. From a speck smaller than a grain of rice. Now scale that. The full human brain is one million times larger. Mapping the whole thing at this resolution would produce approximately 1.4 zettabytes of data. That’s roughly equal to all the data generated on Earth in a single year. The storage alone would cost an estimated $50 billion and require a 140-acre data centre, which would make it the largest on the planet...One neuron had over 5,000 connection points. Some axons had coiled themselves into tight whorls for completely unknown reasons. Pairs of cell clusters grew in mirror images of each other. Jeff Lichtman, the Harvard lead, said there’s 'a chasm between what we already know and what we need to know.' This is why the next step isn’t a human brain. It’s a mouse hippocampus, 10 cubic millimetres, over the next five years. Because even a mouse brain is 1,000x larger than what they just mapped, and the full mouse connectome is the proof of concept before anyone attempts the human one. We’re building AI systems that loosely mimic neural networks while still unable to fully read the wiring diagram of a single cubic millimetre of the thing we’re trying to imitate. The original is 1.4 petabytes per millionth of its volume. Every AI model on Earth fits in a fraction of that. The brain runs on 20 watts and fits in your skull."

5 more comments...
