90% of Robot Demos Never Make It to a Real Factory. Here's Why.
Show notes
90% of robot prototypes never make it to real factories.
They work in a closed lab. They look impressive on video. And then reality hits.
In this episode, we break down what actually separates a convincing prototype from a system that runs reliably in production. And why that gap is much harder to close than most people think.
You'll gain insights into:
- what makes a prototype fail in real deployment
- why 99% reliability is harder than it sounds
- how the digital twin works inside a neural net
- where humanoid robots really stand today
More about RobCo:
Website: https://www.rob.co
LinkedIn: https://www.linkedin.com/company/robco-therobotcompany/
Instagram: https://www.instagram.com/robco_therobotcompany/
Chapter markers
00:00 Intro
02:00 Why robot prototypes are often misleading
05:41 Reliability beats impressive capabilities
07:46 Why RobCo builds end-to-end solutions
09:27 48-hour testing & real-world data loops
12:23 How closed learning loops actually work
14:09 Digital twins explained simply
17:49 The digital twin as a map of the real world
19:57 How physical AI filters relevant information
22:32 What people will misunderstand about physical AI
24:51 Humanoid robots: hype vs. reality
Show transcript
00:00:00: So one of the things we always see is people really having great demos.
00:00:03: And I'm pretty sure you've seen it all over the internet as well: demos of robots doing crazy things, AI producing amazing visuals, demos coming out from companies. And then in real life they never get used; sometimes the work even gets shelved.
00:00:18: I talked to someone a few weeks ago who told me that ninety percent of those demos being created, which work in a closed environment, never see real life.
00:00:30: Why?
00:00:30: Because there are several problems with this: for one, they don't work in real life, and the other thing is, they often don't even have a good application.
00:00:36: And that's why we have this topic here today with Clemens from RobCo, where we talk about lab demos versus real work and real factories.
00:00:53: Because that is something which I've seen... I mean, something that's really special about RobCo is that you're creating these robots, these autonomous machines, in the office where you're all sitting right now.
00:01:08: And basically the interesting part is that it's also integrated and very close to real-life application.
00:01:15: What I'm interested in is the real challenge when robots move from controlled labs into these massive factories, because I always think factories are clean
00:01:25: spaces where everything works under control, but in reality that's not the case.
00:01:29: No, I mean, in the end
00:01:31: the proof is in the pudding. You know, it's always a hypothesis how you build things; once you move, the environment changes.
00:01:40: The challenges are bigger than you had before, and then you need to make it work.
00:01:46: Absolutely. So welcome to the RobCo podcast, because that's what we talk about: physical AI and autonomous robots.
00:01:54: And I just want to say hello to you again, Clemens from RobCo.
00:01:58: He is one of the principal engineers, and he's leading the whole space where the robots are being created and programmed so that they can work for you
00:02:07: and create the future of physical AI.
00:02:10: Let us move on to the questions.
00:02:12: Why are impressive robot demos often misleading when it comes to real industrial deployment?
00:02:17: That's also something I led in with in the intro, because I love this topic.
00:02:21: Tell me more about that.
00:02:23: Well, if you have a demo, especially a video, then obviously it can be cherry-picked.
00:02:29: Yes.
00:02:30: And oftentimes, especially when you're dealing with data-driven demos, it's relatively easy to get to success rates of eighty to ninety percent, and then it takes the same development effort again to get to ninety-nine percent, and so on.
00:02:51: So recently, a very famous researcher in the field said that adding a nine always adds as much development effort again.
00:03:01: What we're trying to do is make this process easy and fast, but we need to be able to show that ourselves when we do developments ourselves.
00:03:13: I think something really important is that when you're optimizing for ninety-nine point nine
00:03:18: nine percent, every nine
00:03:21: that you add takes a lot of effort, right?
00:03:24: And that's why autonomous driving isn't fully solved yet. Because it is at ninety-nine percent, at ninety-nine point nine nine, I don't know how many nines already, but not fully.
00:03:34: That's why some of the accidents still happen.
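To make the "adding nines" point concrete, here is a back-of-the-envelope sketch (the numbers are illustrative, not figures from the episode): a per-cycle success rate compounds over repeated cycles, so a rate that sounds high in a demo still fails almost surely over a production shift.

```python
# Probability that n consecutive cycles all succeed, given per-cycle rate p
# (illustrative model: cycles treated as independent).
def all_succeed(p: float, n: int) -> float:
    return p ** n

# A "99% demo" vs. added nines, over 1,000 production cycles:
for p in (0.90, 0.99, 0.999, 0.9999):
    print(f"p={p}: P(1000 clean cycles) = {all_succeed(p, 1000):.6f}")
# at 0.99 a failure-free run is essentially impossible;
# only around "four nines" does a full shift become likely to pass clean
```

This is why each extra nine matters so much more than the jump from 80% to 90%.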
00:03:42: When you're solving for this in a demo environment, it's very easy, because there is nobody running through.
00:03:47: There are no changes happening, no edge cases happening either, and real life is different.
00:03:56: You obviously deploy into factories and then get feedback loops too.
00:03:59: But what makes this so much harder?
00:04:04: What are the biggest challenges that happen in the factory?
00:04:09: So I mean, it's easier and harder.
00:04:11: So I think autonomous driving is a case where you have to solve the AI completely in order to deploy anything.
00:04:19: So what we've seen is that it's actually doable if you just work really hard for many years.
00:04:28: For us, we try to simplify the problem by restricting the type of problems.
00:04:35: So for example, if you deploy in a particular setting in the factory,
00:04:40: you have to solve a much easier problem than if you deployed out into the streets or into a kitchen, where things are messy
00:04:51: with changing environments
00:04:52: Exactly, so we can still control the type of problems that we want to solve.
00:04:57: It's not that there are no problems left.
00:04:59: We're getting bombarded with customer requests: build us the AI robot, can you solve this?
00:05:04: There are things we cannot automate with traditional robotics. But in the end it's all about atoms and the real world.
00:05:20: You find these situations where something can go wrong, either on the hardware side or on the software side.
00:05:29: Simple things like dealing with a very diverse set of objects, and suddenly the gripper solution that you had built doesn't work anymore, so you certainly need to adapt.
00:05:42: There we're sometimes talking about cases where ninety-nine point nine nine percent is not even possible.
00:05:51: Why is reliability often more valuable for factories than impressive robotic capabilities?
00:05:58: I think it's the expectation from somebody who wants to automate a solution.
00:06:04: It is often that it works a hundred percent of the time, and then the value diminishes very quickly when it's below that.
00:06:12: Because now you have to have a person on staff who can deal with all the issues.
00:06:17: You might have production lines that cannot stop easily.
00:06:20: And so suddenly the whole question: couldn't you just have a human there,
00:06:24: because humans are adaptable and can solve these problems on the spot?
00:06:29: That might actually then make more sense.
00:06:31: Yeah, it's really interesting, because it is not about doing it once but many times. The imperfections,
00:06:41: obviously we want to minimize them; it's the reliability across thousands and thousands of repetitions
00:06:49: of the same task, right?
00:06:51: And yeah, so the thing that I always find interesting is: how many real-world deployments does it really take?
00:06:59: How many real-world deployments in the factory does it really take for the system to become truly robust?
00:07:06: Is it like, the first time it already works,
00:07:09: or does it take many iterations until it becomes really robust?
00:07:13: It depends on how much experience you have in a certain case, and obviously also on how challenging the task is.
00:07:23: We have certain tasks, like palletizing for example, where we have a very robust system that can deal with all sorts of challenges, if we put in additional development effort and very strong tracking of metrics in order to make it work.
00:07:48: Yeah, the tracking of metrics is, I think, something very interesting. And maybe that's also the reason why RobCo implements systems end-to-end, so from production to deployment, whereas many other robotics companies sell their robots to resellers and to integrators.
00:08:03: Is that one of the reasons
00:08:04: why you do the end-to-end integration?
00:08:06: Yes. For me, a company like RobCo is an information-processing system.
00:08:12: So for us the best way to get the feedback loop going is to make the feedback really short, and by short I mean literally, because right now the people who are deploying our robots are sitting on the same floor as the people who are developing them. And so we have this loop in place where we learn all the time, every day.
00:08:36: Yeah, that's really interesting.
00:08:37: And so the acceleration that you get from these system improvements must be really high, because you have a connection of people sitting together.
00:08:48: But also to get data from the real world, right?
00:08:51: How can you integrate this data, or how do you use it?
00:08:55: Yeah, I mean, it obviously depends on the customer case, but there is a lot to be said about how much extra value you get by being able to process the data and make the system overall more robust.
00:09:10: And so for that we have the possibility to record the data, select cases where something interesting happens, and then be able to use it
00:09:23: for further improvements.
00:09:25: I've seen your testing lab at RobCo; I've seen the arms move and everything.
00:09:32: So that's a pre-production setup right?
00:09:34: That you use before you deploy.
00:09:36: Right
00:09:37: Yes, it's after production, but we run every robot for forty-eight hours to make sure there is no initial breakage of any component.
00:09:48: And do you already have data coming back from those machines, so it can feed into this system?
00:09:52: Yes, we are storing all the telemetry logs, and so we can react accordingly.
00:10:01: Very interesting.
00:10:02: So, looking especially at the data generated in the field that you can use, which we just talked about, it definitely helps a lot, because then you're getting... I mean, I think it's one of the most important things for autonomous cars as well, generating all this data.
00:10:18: The camera or infrared data to understand certain situations better. Is it the same with robots at RobCo? Obviously, it's the same thing over and over.
00:10:30: The data stays relatively the same.
00:10:32: But is it the exceptions that are interesting,
00:10:34: that you're working with and that can improve the models that you're working with?
00:10:41: So what we're currently doing is working on these level-three or level-four autonomy cases we talked about last time, where it's absolutely vital, a core part of the whole system, that we get all the sensor data, because suddenly you're not just a robot that executes something.
00:11:03: You are a robot processing information from sensors many times per second, and so obviously it is important to keep the data that is necessary for improving a certain task.
00:11:20: This might also include the operator being able to step in and correct errors.
00:11:27: And these are the most interesting cases, because then we immediately get feedback if something goes wrong. But it might also be that we can connect it to an external sensor
00:11:38: that gives us a feedback signal on whether something was successful or not.
00:11:42: Interesting right?
00:11:43: So this is, you know, funneled back into the feedback loop.
00:11:46: Oh that is very interesting.
00:11:47: So basically you take all of what you've seen
00:11:50: that was wrong or off by a certain measurement, and then feed it back, correct it, and play it back to the customer?
00:11:58: And this is probably just a software update, right?
00:12:01: So the customer doesn't have to redeploy anything, usually, I guess, but gets an update and it's ready again.
00:12:09: For us it's important that our customers are in control, but we can then update... Usually it's a model update, and so this way we can put in new versions of the model.
00:12:23: If it is corrected right away on the robot, it's even simpler, because the correction is done directly there.
00:12:31: The learning takes place on the robot, and basically what we're updating is just the weights in a neural net.
00:12:38: Maybe explain to me a little bit more how this system works with closed learning loops.
00:12:44: How can I understand this?
00:12:45: So physical AI obviously improves through closed learning loops, and how does that work?
00:12:52: Can I understand this better?
00:12:54: Yeah, so when you correct a robot, the first bit of information is that something went wrong. And so then the robot can compare how it moved with how you made it move.
00:13:17: And so from this delta... That's what learning is all about: finding out the deltas between what you would have done and what you did.
00:13:24: So now the delta can be propagated back, which is called backpropagation.
00:13:30: We can propagate all the errors through the neural net; at every step there is a slight weight update, and at the end of the update the neural net will behave differently than before.
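The delta-and-update mechanism described here can be sketched as a toy gradient-descent step on a single weight (purely illustrative; this is not RobCo's training code, just the textbook idea in miniature):

```python
# Toy version of "compare how it moved with how you made it move":
# the delta between prediction and correction drives a small weight update.
def train_step(w: float, x: float, target: float, lr: float = 0.1) -> float:
    pred = w * x              # what the robot did
    delta = pred - target     # difference from the operator's correction
    grad = delta * x          # backpropagated error for this weight
    return w - lr * grad      # slight weight update

w = 0.0
for _ in range(200):
    w = train_step(w, x=2.0, target=6.0)
# after repeated corrections, w * x reproduces the corrected motion
```

In a real network the same delta is propagated through many layers of weights, but each individual update has exactly this shape.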
00:13:43: Interesting!
00:13:43: I never thought about the process being like that, but it's basically like having a human trainer who is telling you, okay,
00:13:50: you have to do this, and then you know your deviation, or at least the trainer sees the deviation.
00:13:56: But obviously as a machine you can't see it yourself, even when you're deviating from the course.
00:14:00: It's very interesting.
00:14:01: It is all modeled after human learning, although of course the implementation is very different.
00:14:08: But we're trying to find out, we as in the community of people who deal with learning problems, how to do it as efficiently as a human
00:14:18: does it.
00:14:19: So, when we're talking about humans,
00:14:20: we have this topic of the digital twin. How does that work with the robots at RobCo?
00:14:29: Yeah, so here we need to distinguish between the different levels of autonomy that you find.
00:14:37: At its core, RobCo makes modular robots.
00:14:43: And so even if you just assemble a robot, the robot controller already gets the information about how it is composed, and certain aspects of its physics that it needs to operate the robot in the first place.
00:15:00: Because if you don't do that, then when you turn off the brakes, the arm would just fall down, right?
00:15:06: So the simplest thing that it needs to be able to do is know its weight and counter the forces from gravity.
00:15:14: Okay.
00:15:15: And this applies to all of the robot control problems.
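That gravity-countering idea fits in one formula for the simplest case; the sketch below models a single rigid link with assumed numbers (a real controller handles full multi-joint dynamics, so treat this only as the core intuition):

```python
import math

# Holding torque for one link: tau = m * g * l_c * cos(theta),
# with theta the joint angle measured from horizontal and
# l_c the distance from the joint to the link's centre of mass.
def gravity_torque(mass_kg: float, com_dist_m: float,
                   theta_rad: float, g: float = 9.81) -> float:
    return mass_kg * g * com_dist_m * math.cos(theta_rad)

# Assumed example: a 3 kg link with its centre of mass 0.25 m out.
tau = gravity_torque(3.0, 0.25, 0.0)            # held horizontal: worst case
tau_up = gravity_torque(3.0, 0.25, math.pi / 2)  # pointing straight up: ~0
```

The controller must supply at least this torque at every instant, which is why it needs the mass and geometry information the moment the robot is assembled.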
00:15:19: Then when you talk about, for example, path planning or computer vision, it's the integration of the physical world into this digital twin, so that you can deal with it and react to it.
00:15:37: This is often done as certain optimization problems. So we want to find the shortest path from A to B, given that there's some kind of obstacle.
00:15:48: I would say that's the digital twin when we talk about these first layers that are in production, that are usable by clients, and that are also still being improved.
00:16:03: We have active teams working on improving them, making them simpler.
00:16:08: Now, when it comes to physical AI, which is the next step of development and will be the dominant piece over the next few years,
00:16:19: then we talk about a much more subtle representation, in some sense baked into the neural nets themselves.
00:16:30: So... In order to execute a certain task, the model needs to understand physical aspects of the world: how geometry works, how physics works, and how materials work when I want to interact with them.
00:16:49: So the digital twin is not expressed in code and vectors and points anymore.
00:16:56: It's inside the neural net as an emergent property, because we have trained it on a lot of data.
00:17:03: Interesting!
00:17:03: Do I need to understand the digital twin as being just a virtual representation of the robot doing the same things?
00:17:12: And then basically they're combined... So the software, well... the digital twin is the brain and the robot is the body of that brain.
00:17:23: Is that a correct representation at level one or two?
00:17:28: And then at level three it gets more towards physical AI?
00:17:31: Basically you don't really need this digital
00:17:39: representation, where it has an exact path and exact movements
00:17:43: and everything, but rather it evolves into being an open model that can understand its environment fully.
00:17:50: So I would say, I would phrase it
00:17:52: this way: the digital twin is a little bit like what a map is to the real world.
00:17:57: Okay
00:17:58: Okay, so that's a good example.
00:18:00: So what do you put in the map?
00:18:02: You put in a map the information that is relevant.
00:18:04: Yes. For example, if you have a road map, you don't care about, maybe, the restaurants along the way, right?
00:18:14: If you're talking about a traditional road map.
00:18:17: But you care about distances, I don't know...
00:18:19: Nobody uses road maps anymore, everybody... It's
00:18:22: a great example, to be honest, because you could have a road map that is specifically for restaurants, and one that is obviously specifically for distances.
00:18:31: Yeah exactly.
00:18:32: And so the term digital twin comes from aerospace, where you need to understand the physics of a wing or an airplane.
00:18:42: Or how the engine works?
00:18:43: So this is like forty, fifty years ago, when these first simulations came about, and obviously that digital twin was only as good as it represented the environment accurately.
00:18:54: okay
00:18:55: right.
00:18:55: And so we have all the tools for doing that.
00:19:00: For robotics, it's probably a little bit simpler.
00:19:02: We have the physics of the robot, we have certain objects which we want to represent, but it is important for them to be able to execute a task physically now.
00:19:18: As I said, all this isn't given up front.
00:19:25: And the interesting thing that we see from large language models is that, in order to solve the problems that you want it to solve, it needs to build a digital twin inside.
00:19:39: Otherwise it wouldn't be able to solve the task.
00:19:41: Interesting!
00:19:42: Very interesting.
00:19:43: So let me ask one more question about these topics, because I find it really fascinating to understand them.
00:19:48: We as humans can walk into a crowded place like an airport and focus only on the relevant information. Shops are there,
00:19:56: we have escalators, we have people running around in different areas, all having their own agenda, and we can focus on the timetable and the gate to go to.
00:20:07: So we're very good at, you know, canceling out noise, or canceling out unimportant information.
00:20:12: How do you do that?
00:20:14: In physical AI?
00:20:15: In the digital twin?
00:20:17: Because when you're getting sensory data, I believe it's definitely a task to filter all of that, because you need to be focusing on what is relevant.
00:20:26: And how do you identify it?
00:20:28: The processing power must be crazy as well.
00:20:31: That's a very good example.
00:20:33: Also, I'm totally on autopilot in airports, and, uh... I never know which airport I'm in, but I can find my way around.
00:20:42: So this is the big leap the learning community has dealt with: being able to build models that do exactly this kind of filtering while learning things.
00:20:59: This is the core of the so-called transformer architecture: there's a part where there is a list of things to select from, which might be the tokens in a long text.
00:21:16: But it might also be the internal representation of some abstract concept that has come up along the way, and it learns how to put these little filter masks around the pieces that are relevant for this particular thing.
00:21:34: And every time it does it wrong, it figures out, you know, how to correct things in the best possible way
00:21:43: in order to make it better next time.
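The "filter masks" described here correspond to attention weights in a transformer. Below is a minimal NumPy sketch of scaled dot-product attention, with random data standing in for real inputs (illustrative only, not any production model):

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Scaled dot-product attention: each query learns a soft "filter mask"
# (the weights) over the list of things it could attend to.
def attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray):
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # relevance of each item
    weights = softmax(scores)                # rows sum to 1: the mask
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))   # 2 queries
K = rng.normal(size=(5, 4))   # 5 items to select from (e.g. tokens)
V = rng.normal(size=(5, 4))
out, w = attention(Q, K, V)   # each row of w is a soft selection over 5 items
```

Because the weights are soft rather than hard selections, errors can flow back through them, which is exactly what lets the model learn which things to filter for.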
00:21:45: And this correcting is again the closed loop, right?
00:21:48: It is the closed loop.
00:21:49: Yeah, it's all about getting towards an end state, and then we need to have a means to find out how far away it is from the desired state.
00:22:00: And then you have this kind of delta, and the delta flows back through the whole thing, and if you can do that multiple times per second, then this is the way it learns.
00:22:09: Clemens, thank you so much again for explaining this to me, because that's something I've read about
00:22:14: so much, the digital twin, but it is really sometimes hard to understand what it actually means.
00:22:19: Thank you for diving deep into this topic with me, and for digging into the difference between lab results and real-life results; I think that's important.
00:22:28: RobCo is one of the manufacturers out there that are full end-to-end solution providers: they do everything and deploy not just a lab result, but real manufacturing results.
00:22:42: And obviously, if you're listening right now, we'd love for you to subscribe to our podcast, and also to our newsletter, and join us again next time on the RobCo Podcast.
00:22:52: But I have one last question for you.
00:22:54: It is a bit more of an open-world, big-picture question, because I think that's a good way to
00:22:59: close this podcast off.
00:23:02: What will people misunderstand about physical AI over the next five years?
00:23:07: So I think there is a common misconception that if you see a humanoid, then it's like a human.
00:23:18: And obviously a humanoid needs to use physical AI to do anything, because you can't program it.
00:23:24: But a human has evolved for millions of years and has skin that self-repairs.
00:23:33: It has all these sensors in every finger.
00:23:38: And the misconception is that just by having a humanoid do something, it can replace humans.
00:23:47: We see these demos where there are humanoids doing kung fu dances. But they're not even zombies.
00:23:58: They have really no brain at all.
00:24:03: We will hopefully learn to make that differentiation, and then really be able to use the benefits of physical AI: to expand the things we can do without having scores of cheap workers somewhere working in terrible conditions.
00:24:33: Interesting. And much better products this way.
00:24:39: Thank you so much, I love that, because most people think we're really close to having humanoids doing everything for us, but I think we're quite far away... We talked another time about how difficult it is to get touch working on a humanoid robot.
00:24:55: Also, having an understanding of the world is much, much harder than it seems.
00:25:01: And obviously all the videos you see on YouTube are amazing, showing that this robot can do this, but in reality it could do it once or twice and in effect needed resetting.
00:25:12: It's required to do these things hundreds of thousands of times with real reliability.
00:25:18: That isn't a given with these demos that we've seen.
00:25:21: Thank you so much, Clemens!
00:25:22: Anything else?
00:25:26: No, thank you for having
00:25:28: me.
00:25:30: We hope to see you all again next time on the RobCo podcast, where we talk about physical AI and the advancements in robotics.
00:25:51: Thanks.