Quick Summary
Researchers at Tufts University, led by biologist Michael Levin, have published a landmark paper in Advanced Science describing “neurobots”—living robots constructed from biological cells that spontaneously organize into self-directed systems, complete with neurons that wire themselves into functional circuits. These are not robots inspired by biology; they are robots made from biology. The work builds on years of research into xenobots (living machines built from frog cells) and represents a fundamental leap from mechanically driven biological machines to ones with genuine internal control via neural networks.
What Happened
The paper, published in Advanced Science in March 2026 and covered by IEEE Spectrum on April 2, documents what the researchers call the next generation of living machines. Previous iterations—xenobots, first described in 2020—were built from frog-derived structural cells and could propel themselves through water using cilia. They could survive for days without nutrients, repair minor damage, and in some cases even self-replicate by sweeping up loose stem cells. But their behavior was essentially mechanical: driven by anatomy and physics rather than anything resembling internal control.
Neurobots change this equation entirely. By incorporating actual neurons into the cellular assemblies, the researchers created systems that can process information and make decisions internally. The neurons spontaneously wire themselves into functional circuits—no programming required. This is, as synthetic biologist Kate Adamala of the University of Minnesota put it, a moment where engineers are “putting the engineering component into bioengineering.”
The implications are staggering. These aren’t simulations of neural networks running on silicon chips. These are actual biological neural networks, grown from living cells, performing computation in real time to control the behavior of a living machine. The neurobots can sense their environment through chemical and physical cues and respond in ways that emerge from the interaction of their neural circuits with their physical embodiment.
The work draws on a preprint posted on bioRxiv on March 17, 2026, which suggests the research pipeline is active and ongoing. The neurobots represent the convergence of several cutting-edge fields: synthetic biology, developmental biology, neuroscience, and robotics. Carlos Gershenson of Binghamton University, who studies artificial life, noted that “these things don’t occur naturally—they’re made with natural cells, but we’re the ones arranging them.”
Why It Matters
This is one of those rare research results that genuinely expands what we think is possible. To understand why, consider the trajectory of robotics and AI.
The dominant paradigm in robotics for the last decade has been: build a mechanical body (metal, plastic, actuators), strap on sensors (cameras, lidar, IMUs), and control it all with software running on silicon chips. This approach has produced remarkable results—Tesla’s Optimus, Boston Dynamics’ Atlas, Agility’s Digit—but it has fundamental limitations. Silicon chips are rigid. Sensors are discrete. The control loop between sensing, thinking, and acting involves translating between completely different physical substrates (light to electrical signals to digital computation to electrical signals to mechanical motion).
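That chain of translations can be made concrete with a toy sketch. This is a minimal illustration of the conventional sense–digitize–compute–act loop described above, not code from any real robot stack; all function names (`sense`, `digitize`, `think`, `act`) and parameters are hypothetical.

```python
import numpy as np

def sense(world_state: np.ndarray) -> np.ndarray:
    """Camera stage: light becomes a grid of noisy analog pixel values."""
    noisy = world_state + np.random.normal(0.0, 0.01, world_state.shape)
    return np.clip(noisy, 0.0, 1.0)

def digitize(analog: np.ndarray, bits: int = 8) -> np.ndarray:
    """Analog-to-digital conversion: continuous signal quantized to integers."""
    levels = 2 ** bits - 1
    return np.round(analog * levels).astype(np.int32)

def think(pixels: np.ndarray) -> float:
    """Policy stage: digital computation produces a steering command
    (here, a trivial brightness-following rule)."""
    left, right = pixels[: pixels.size // 2], pixels[pixels.size // 2 :]
    return float(right.mean() - left.mean())  # positive means turn right

def act(command: float) -> float:
    """Actuation stage: the digital command is translated back into
    a bounded motor torque."""
    return float(np.tanh(command))

# One tick of the loop: four explicit substrate translations.
world = np.linspace(0.0, 1.0, 8)  # scene is brighter on the right
torque = act(think(digitize(sense(world))))
print(f"motor torque: {torque:+.3f}")  # positive: steers toward the light
```

Every arrow in this pipeline is a boundary between physical substrates, and each one adds quantization, latency, and a place for information to be lost; the article’s point is that a neurobot has none of these boundaries.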
Neurobots dissolve these boundaries entirely. The sensing, computation, and actuation are all performed by the same biological substrate. A chemical gradient in the water doesn’t need to be detected by a camera, digitized, processed by a neural network, and translated into a motor command. The neurobot’s neurons respond directly and continuously to the chemical environment, and the response flows directly into the contractile cells that produce motion. No translation layers. No analog-to-digital conversion. No substrate-hopping latency.
This biological integration has potential applications that silicon-based robots can’t easily match. Michael Levin’s team envisions precision tissue repair—neurobots that could navigate through the body to damaged tissue and coordinate cellular repair at a scale that no mechanical surgical robot could achieve. Environmental cleanup is another possibility: fleets of tiny living machines that could sense and remediate chemical pollutants in water or soil.
But perhaps the most profound implication is foundational: neurobots could help us understand how simple neural networks give rise to complex behaviors. This is one of the deepest questions in neuroscience and AI. We know that the human brain, with its 86 billion neurons, produces consciousness, creativity, and language. But we have remarkably little understanding of how even the simplest neural circuits produce purposeful behavior. Neurobots give researchers a ground-truth system where every neuron can be observed, every connection mapped, and every behavior quantified.
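A flavor of what “every neuron observed, every connection mapped” buys you can be had from a toy simulation. The sketch below is a hedged illustration in the spirit of a Braitenberg vehicle, not a model from the paper: two hypothetical sensor neurons sample a chemical gradient on either side of an agent, one explicit synaptic rule steers it, and because the whole “circuit” is written out, the purposeful-looking behavior (homing in on the source) is fully explainable. The gradient shape and the `GAIN`, `SPEED`, and `OFFSET` constants are all invented for the example.

```python
import math

def concentration(x: float, y: float) -> float:
    """Chemical gradient: a single diffusing source at the origin."""
    return math.exp(-(x * x + y * y) / 200.0)

# The entire "connectome": two sensor neurons and one steering rule.
GAIN, SPEED, OFFSET = 4.0, 0.3, 0.5  # illustrative constants

x, y, heading = 8.0, 6.0, 0.0
closest = math.hypot(x, y)
for _ in range(250):
    # Each sensor neuron samples the gradient slightly to one side.
    s_left = concentration(x + math.cos(heading + OFFSET),
                           y + math.sin(heading + OFFSET))
    s_right = concentration(x + math.cos(heading - OFFSET),
                            y + math.sin(heading - OFFSET))
    # One observable synaptic rule: turn toward the stronger sensor.
    heading += GAIN * (s_left - s_right)
    x += SPEED * math.cos(heading)
    y += SPEED * math.sin(heading)
    closest = min(closest, math.hypot(x, y))

print(f"closest approach to source: {closest:.2f}")
```

Even this two-neuron caricature produces behavior that looks goal-directed, which is exactly why a real, fully mappable biological circuit is such a valuable experimental object: the gap between wiring and behavior can be closed by direct observation rather than inference.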
My Assessment
Let me be clear about what this is and what it isn’t. Neurobots are not about to replace Atlas or Optimus on factory floors. These are tiny, fragile, short-lived assemblies of cells operating in controlled laboratory conditions. A neurobot is not going to be delivering packages or folding laundry anytime soon—or likely ever.
What they are is a proof of concept for an entirely different approach to building intelligent machines. And that proof of concept has profound implications for the long-term trajectory of both robotics and AI.
The current paradigm in AI is dominated by scaling: make models bigger, train on more data, get better results. This has worked spectacularly well for language and image generation. But it’s starting to show diminishing returns in areas that require real-time interaction with the physical world—exactly the domain where robots need to operate.
Neurobots represent a philosophical alternative: instead of trying to simulate intelligence in silicon, grow actual intelligence in biology. It’s the difference between writing a weather simulation and building a terrarium. Both can tell you about the weather, but only one IS the weather.
The practical timeline for any commercial application is probably 10-15 years at minimum. The regulatory hurdles alone for living machines used in medicine would be enormous. And the engineering challenges of scaling neurobots from tiny lab specimens to systems capable of performing useful work in uncontrolled environments are immense.
But here’s the thing: Michael Levin’s lab has been consistently producing results that push the boundary of what we thought living machines could do, and they’ve been doing it for years. First, xenobots could move. Then they could self-repair. Then they could self-replicate. Now they have nervous systems. The trajectory is clear, and the pace is accelerating.
I’d rate this as one of the most important robotics papers of 2026—not because of what it enables today, but because of the door it opens for tomorrow. If you’re investing in long-term robotics R&D, biohybrid systems like neurobots deserve serious attention alongside more conventional approaches. The robots of the 2040s might not be built from aluminum and silicon. They might be grown from cells.
The question isn’t whether this technology will mature. It’s whether the rest of us are ready for machines that are alive.