Let's Talk About the Humanoid Robot in the Room


I love robots. I’ve worked on AI and robotics for ~5 years. So naturally I would very much love to have a humanoid robot in my home that does my dishes and washes my clothes and perhaps even does my grocery shopping. Who wouldn’t?

But I really don’t think that we’re close to having general purpose humanoid robots at home. Somehow, in 2026, that seems to be a controversial opinion, so I figured I’d write it down.

Here’s my claim: General purpose humanoid robots are like self-driving cars, but actually much harder. Before I explain why, let’s talk about the good news first.

There have been some very impressive improvements in humanoid robotics over the last 5 years.

But building general purpose humanoid robots remains extremely difficult. The closest thing we have is self-driving cars—those are basically autonomous robots on wheels. And since we’ve worked on self-driving for a while now, we can look at how that went.

Well, self-driving turned out to be really hard, and it’s still not fully solved. Waymo is arguably mostly there after 17 years and billions of dollars, but only in a handful of cities, and they still need to do things like hiring DoorDash drivers to close car doors. There’s also a long list of companies that tried and failed: Cruise, Uber’s self-driving unit, and Argo AI. The long tail of problems is, well, long.

And yet, self-driving is actually the easier problem. Compared to humanoid robots, self-driving cars have significant structural advantages:

  • Mature hardware. Humans have built cars for more than 100 years. Modern cars are incredibly reliable and safe.
  • A constrained environment. Cars drive on public roads. Public roads are heavily regulated environments. Of course there’s some level of chaos and a wide range of possible events, but the environment overall is very structured.
  • A single, well-defined task. Self-driving cars need to go from A to B. Driving is complicated because you need to handle complex and diverse situations, not because there’s diversity in the task domain.
  • Self-driving is a feature, not a requirement. A regular car that you drive yourself has a lot of utility. You don’t need it to drive itself for it to be useful.
  • Scalable data collection. Because of the aforementioned property, there’s a very scalable path to data collection: Record what the human drivers do day-to-day. There’s no need to hire or pay someone for this; this data can be naturally collected as a by-product of the intended use.

Now, contrast that with general purpose humanoid robots:

  • Immature hardware. Yes, there has been progress here, but we have not built and deployed humanoid robots at scale for 100 years. They will be unreliable. They will break a lot. Somewhat tellingly, 1X’s Neo broke during the demo for Joanna Stern at the Wall Street Journal. The hardware problems here are far from solved. There is also no infrastructure for maintenance and repairs yet, something that is ubiquitous for cars.
  • An extremely open-ended environment. You’re deploying these things in people’s homes. Homes are extremely different: Some are very small, some are very large, some are neat, some are messy, some are colorful, some have multiple stories. There are also very few rules and regulations around how people set up their homes. And if you want your robot to go to the grocery store, it has to deal with public roads and the store as well. That’s a lot!
  • A very large variety of tasks. Today I want you to do my laundry. Tomorrow, can you cook me a dish? Also, can you vacuum the place, pick up some groceries, fetch my mail from the mailbox, and maybe walk the dog? The breadth of tasks a general purpose robot is expected to do is really, really large!
  • Full autonomy is table stakes, not just a feature. In contrast to a car, the robot itself is utterly useless if it’s not autonomous. Imagine buying a robot for $20k and then doing the dishes yourself by teleoperating it. Yeah, nobody wants that. The whole point of a robot is that it does something I do not want to do myself. So you either a) have it work autonomously via AI, or b) pay a person to teleoperate the robot in your home on your behalf. There’s also a social aspect here: The robot operates in an environment with humans and therefore has to adapt to social conventions and behaviors, which raises the bar even more.
  • Data is very scarce. This is the big one. Tesla collects driving data from every car on the road—millions of miles, for free, as a by-product of people just driving. That data advantage is real; it’s a big part of why FSD has gotten as far as it has. Humanoid robots don’t have this. The robot is useless without AI, so there are no human users generating data. You either pay someone to teleoperate the robot (doesn’t scale), pay someone to collect human demonstrations,3 collect data from the robot’s own autonomous operations (only works if the AI is already decent), or rely on simulation. World models might be a big unlock here, but we don’t know yet. For homes in particular, there are also obvious issues around privacy.4

To be fair, humanoid robots also have some advantages: There are fewer laws and regulations around their deployment, the cost of individual failures is lower (a dropped plate vs. a car crash), a home robot can pause and ask the human for help in a way that a car on a highway cannot, and perhaps most importantly, perception and foundation models are dramatically better today than when Waymo started in 2009.

That said, I think these advantages are more than offset by the challenges above, especially the data problem. So, all told, the odds still seem really stacked against general purpose humanoid robots happening anytime soon.

But that doesn’t mean nothing is happening in home robotics. Look at how much robot vacuums have improved over the years; Roborock is now even attaching an arm to their flagship model. Robot lawn mowers from companies like Husqvarna and Mammotion are now a thriving market. These work precisely because they sidestep some of the problems above: simpler and more mature hardware, well-defined tasks, and existing deployment fleets that generate data. Less exciting than a humanoid butler, sure. But they’re real and they’re shipping today. I think these companies have a real shot at scaling to much more powerful home robots over time. They also raise a key question: Do we really need the humanoid form factor?

There are also companies like Physical Intelligence and Generalist AI who are building foundation models for industrial robots. This might turn out to be another viable path: Start with proven industrial hardware and operate in a more constrained environment. Then gradually scale your capabilities.

If you’re working on general purpose humanoid robots: godspeed. The problems I outlined above are real and they’re hard. The key challenge is to figure out how to overcome these structural deployment and data flywheel issues. But someone has to try, and I hope you succeed!

In the meantime, let’s stay level-headed. The vision is exciting. I want it. But wanting something doesn’t make it close. And a cool demo does not yet make for a real product.

Footnotes

  1. Tesla, Agility, Figure, 1X and a few others are also doing stuff (but probably not at the same level).

  2. I personally think this is mostly yet another attempt by Elon Musk to pump up the stock. With car sales declining, they had the spare capacity anyway.

  3. Examples of this are Tesla’s motion capture suits or Nvidia’s very recent research on using egocentric video. This might turn out to be more scalable than teleoperation because you do not need as many physical robots.

  4. I do not like the thought of a person teleoperating a robot in my home and thereby effectively granting them read/write access to a very personal space. But I recognize this differs across people and cultures.

Embrace your Laziness in the Age of AI


I have a confession: I’m a bit lazy. Not dysfunctionally lazy. But definitely a bit.

Early in my career, I thought this was a problem. There’s so much more I could do if I weren’t lazy! But, over time, I realized it’s the opposite: My laziness is an asset. It’s a regularizer.

When you write code, you have an infinite blank canvas. You could build anything. This is pretty distinct from lots of other domains: Code is inherently less constrained than, say, the work of a mechanical engineer because you don’t have to fight the constraints of physics every step of the way (of course a computer is still physical, so there are some real constraints; but you get the point). This is fantastic. But it’s also daunting.

Enter AI agents. These things are great and I use them all the time. The amazing thing is you can build anything now. The trouble is also that you can build anything now. They never get tired, they never get frustrated, they are infinitely patient and motivated. In other words, they reduce the execution cost of building something towards zero.

So a natural question arises: What should you build? This was always a question, of course, but AI agents make it a lot more urgent.

When it comes to deciding on what to build and how, taste is an important concept. A lot has been written about this. For example, there’s Paul Graham’s famous “Taste for Makers” essay and John Schulman’s excellent guide to ML research, which identifies taste as a critical skill as well.

But what they don’t talk about is laziness. Taste is about quality and direction and simplicity and beauty. Laziness is about restraint and friction and effort. Laziness regularizes taste—taste without laziness gives you beautifully overengineered systems nobody needed. Also, tastelessness without friction is by far the worst: that’s slop.
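The regularizer framing can be made literal. As a playful sketch (my own formalization of the metaphor, nothing rigorous), deciding what to build looks like a regularized objective, where your laziness sets the penalty weight:

```
% A toy objective, not a real model: taste scores the idea,
% effort is the cost of building it, and lambda is your laziness.
x^* = \arg\max_{x} \; \mathrm{taste}(x) \;-\; \lambda \cdot \mathrm{effort}(x)
```

With λ = 0 you get the beautifully overengineered systems nobody needed; a healthy λ > 0 is what makes you ask whether the work is worth doing at all.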

Remember when you had to write all code and do all research yourself and you went “ugh”? Because you really didn’t feel like doing something? That’s healthy! That “ugh” carries information. It’s telling you: “This is going to be work.” And it’s asking: “Is this work really worth doing or should I just… not?” The answer might be yes or no, but it really matters that this question gets raised in the first place.

AI agents remove this natural friction. They are so fast. They never stop. But that speed comes at a price: You have to remember your very human laziness. You have to remember to ask yourself: “Should I really build this?” Or should you just be lazy today.

Hello, World!


After years of maintaining a simple static about page, I’ve decided to rebuild this site as a proper blog.

I plan to use this as an outlet for ideas I find interesting in AI, machine learning, and technology more broadly. Expect to read short personal opinions on those topics.

If you want to get notified when I publish more, you can subscribe to my newsletter.