Robotics and machine vision with professor Peter Corke


26 Sep, 2019

Peter Corke


Peter Corke is well known for his work in computer vision and has written one of the books that define the area. We get to hear about his long and interesting journey into giving robots eyes to see the world.

Peter Corke is an Australian roboticist known for his work on visual servoing, field robotics, online education (including the online Robot Academy), and the Robotics Toolbox and Machine Vision Toolbox for MATLAB. He is currently director of the Australian Research Council Centre of Excellence for Robotic Vision, and a Distinguished Professor of Robotic Vision at Queensland University of Technology. His research is concerned with robotic vision, flying robots and robots for agriculture.

Corke is a Fellow of the Australian Academy of Technological Sciences and Engineering and of the Institute of Electrical and Electronics Engineers. He is also a founding editor of the Journal of Field Robotics, and a former member of the executive editorial board of The International Journal of Robotics Research. 


In this interview, Peter talks about how serendipity made him build a checkers playing robot and then move on to robotics and machine vision. We get to hear about how early experiments with “Blob Vision” got him interested in analyzing images and especially moving images.

Interview: Robotics and machine vision with professor Peter Corke

Per Sjöborg, host of the Robots In Depth podcast, interviews Peter Corke to learn more about his views on developments in the field of robotics. Below is a transcript of the interview.

Podcast transcript

Per Sjöborg: Welcome to the podcast version of Robots in Depth, and this launch episode with Peter Corke, in cooperation with Wevolver. Today I'm honored to have Peter Corke from Queensland University of Technology, and computer vision is your thing. How did you get started in robotics?

Peter: My first job after I graduated (I did electrical engineering at the University of Melbourne) was a research assistant job in that same school. It was a control systems lab, and I think maybe it was the second year I was there that a university open day was coming up. The school wanted something a bit visual and interesting to show parents and potential future students coming through. This was a long time ago, probably 1983 or something like that. We bought a little 5-axis robot with stepper motors in it. That was pretty cool. I connected it to a computer, what in those days we called a minicomputer, a great big rack of stuff. Interfaced to this was an LSI-11, which is still my favorite computer ever, and I wrote a whole bunch of software, probably in Fortran. It played a game of checkers. This little robot just sat on the side of a checkers board; someone would make their move, I think I'd have to type that in on the terminal, and then the robot would make its move, and so on. We're talking 1983, so you need to lower your expectations a bit.

Per: But still it was a robot playing chess.

Peter: Not chess, it was checkers, which is a simpler game. It did very simple manipulation, picked the pieces up and so on. I was pretty happy with that and how it went; it was really my first exposure to robotics and kinematics and things like that. Then some time not long after, there was an advertisement in the newspaper for Australia's federal research organization, an organization called CSIRO, and they were looking for roboticists. They were just five blocks away from the university where I was working. I applied and I got that job. I stayed there for 25 years, and during that time we did a whole bunch of different robotics projects. We started in manufacturing robotics, and the first project there was concerned with deburring: that's where you have a robot with a grinding wheel and you're trying to take the rough edges off a piece of metal. That's pretty challenging, because most robots are position controlled, and to do deburring you need to use force control. We were trying to do force control, but we had a grinding wheel on the end of the robot, and that was injecting a ton of noise into the force sensors. It took a lot of signal processing and control engineering to get a PUMA robot holding a very small grinding tool to grind metal.

Per: Very advanced, because again we're talking mid-eighties here. As you say, the noise in that signal has to be horrible.

Peter: It is pretty horrible. There was a lot of filtering in that, and the robot had to be able to react very quickly to changes in force. The PUMA robot came with a control box, the Unimate controller, and the VAL programming language and all of that. We stripped all of that away and developed our own robot control architecture. At this time we were experimenting with very early 32-bit microprocessors, the National Semiconductor 32000 series, then the 68000 and 68020, and then they came out with floating-point units. We're talking probably 16 megahertz processors with a few megabytes of RAM.

Per: Very hard to do such a hard problem in such a constrained environment.

Peter: Absolutely, we wrote everything from schedulers all the way up. This code was probably all written in C at this stage. We wrote kinematic modules, forward and inverse kinematics, and simple trajectory planners. We were very influenced at that time by a software package called RCCL, which was developed initially at the University of Pennsylvania and then later in other places. We built a more modular, more portable version of that. We called it ARCL and attempted to open-source it. A couple of people used it, I think, but it didn't have a particularly big impact. It was a good enough tool for us to be able to do our work.

The other thing that was happening at that lab, at the same time as we were doing this force control work, was that another group of people were looking at doing very high-speed image processing. You take a video stream and then you threshold it, so you get a binary image stream. Then you want to be able to describe the binary objects in the scene, so this is very simple blob vision. They were starting to develop some custom microchips that would do this blob processing in real time. That seemed like a cool project, so I hung out with them for a bit and got very much involved in that project. The result was a big VMEbus card with custom chips on it, semi-custom and full custom ICs. I got a little bit involved in that sort of stuff; I haven't really touched it again since. What it did is take a stream of video from a camera and produce an interrupt every time a blob in the scene was complete. It would tell you its area, its perimeter, and its first and second moments, and from that you could say something about its shape. It gave me the ability to process visual information at 25 frames a second with very low latency, and being a control systems guy I thought that was kind of cool. I could actually use that to close the loop on the robot.
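The statistics that hardware reported per blob (area, centroid from the first moments, shape from the second central moments) can be sketched in a few lines of plain Python. This is purely illustrative, not the original chip's implementation:

```python
def blob_stats(binary):
    """binary: 2D list of 0/1 pixels containing a single segmented blob."""
    area = cx = cy = 0.0
    for y, row in enumerate(binary):
        for x, v in enumerate(row):
            if v:
                area += 1
                cx += x
                cy += y
    cx /= area          # centroid = first moments / area
    cy /= area
    # second central moments describe the blob's spread and orientation
    mxx = myy = mxy = 0.0
    for y, row in enumerate(binary):
        for x, v in enumerate(row):
            if v:
                mxx += (x - cx) ** 2
                myy += (y - cy) ** 2
                mxy += (x - cx) * (y - cy)
    return {"area": area, "centroid": (cx, cy),
            "moments": (mxx / area, myy / area, mxy / area)}

# a 4x2 rectangle of set pixels
img = [[0, 0, 0, 0, 0, 0],
       [0, 1, 1, 1, 1, 0],
       [0, 1, 1, 1, 1, 0],
       [0, 0, 0, 0, 0, 0]]
print(blob_stats(img))
```

The moments here show the blob is wider than it is tall, which is the kind of cheap shape cue the real-time hardware made available every frame.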

At 20 to 25 hertz, that's good enough to close a loop on a robot. That's when I got interested in this whole area of vision-based control. I took the technology from these two projects, brought them together, and demonstrated closed-loop performance, but initially the performance was pretty lousy. It was very laggy; the closed-loop bandwidth was very poor. I did some of this work when I was on a fellowship at the University of Pennsylvania. That was 1988-89. Then, looking at how well it did from a control systems perspective, I was unhappy with the closed-loop performance. When I went back to Australia I embarked on a PhD, because I didn't have a PhD previously.

I started a PhD, and the topic was the dynamics of closed-loop visual control systems. I looked at much more sophisticated controllers, looked at predictive control, and it's the prediction that's really important, because by the time a camera sees something, the image is transmitted from the camera into the computer, and then it's processed and you get a result. Even if you use all this cool hardware in between, there's still quite a delay. The robot is always reacting to what was, rather than what is. The only way to get around that is to have models of how things are moving in the world, predict where the thing will be in the near future, and react to that rather than the old information coming from the sensor.
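To make the idea concrete, here is a hedged sketch (not Peter's actual controller) of the simplest possible predictor: assume constant velocity, estimate it from two stale measurements, and extrapolate across the known sensing delay:

```python
def predict(prev, curr, dt, delay):
    """Constant-velocity prediction. prev and curr are positions measured
    dt seconds apart, and curr is already `delay` seconds old by the time
    the controller sees it. Return the estimated position right now."""
    velocity = (curr - prev) / dt
    return curr + velocity * delay

# object moving at 2.0 units/s, sampled 40 ms apart, with 100 ms of
# sensing-plus-processing latency
now = predict(prev=4.00, curr=4.08, dt=0.04, delay=0.10)
```

Reacting to `now` rather than `curr` is what lets the closed loop keep up with a moving target; real systems use richer motion models (a Kalman filter, for instance) rather than this two-sample estimate.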

Per: Very interesting. You're trying not just to interpret the image, which I find mind-bogglingly hard, but also, if an object is moving along a trajectory in the image, it is safe to presume that it might continue to do so.

Peter: That's right, and you have to use pretty strong assumptions here about how the object will move into the future. I think we do this too: our whole ability to play any kind of sport that involves dynamic objects, the ability to catch something, really relies on us having an internal mental model of the dynamics of moving objects. We've got a lot of delays in our visual processing system and our motor control system. We absolutely couldn't function unless we were able to do prediction. You can argue that perhaps one essential capability requirement of intelligence is to be able to reason about the world not just as it is now, but as it will be in the short-term future.

Per: Depending on what you see and what you have seen you can predict the future.

Peter: Absolutely, and I think that's critical to what goes on in here. This is to a large extent a prediction engine for what will go on in the future, and if what you predict comes to be, then you pay it no heed. But if what you predict doesn't happen, you are surprised. You have a learning moment, and then your skill changes.

Per: Then your next prediction will be better. And in this world, tennis balls just don't change trajectory mid-air without hitting anything. There are many contexts where we actually can tell how an object is going to behave.

Peter: I think this is the thing that's really important about robotics: robot devices are embodied in the physical world. The laws of physics apply to their motion, but also to the motion of everything that's around them. In computer vision a lot of the effort goes into trying to interpret a particular image. In a robot, yes, we may need to interpret a particular image, but then we need to interpret the next image, and the next image, and the next image. There's not going to be very much difference from one image to the other. It's not like we have to process a thousand images in a row that are all completely different. We have this temporal continuity in the sensory perception coming to the robot, and that I think is what's critical about robotics: we can rely on temporal dynamics and physics.

"I think this is the thing that's really important about robotics: robot devices are embodied in the physical world. The laws of physics apply to their motion, but also to the motion of everything that's around them"

Per: You can look at the image and see what changed and maybe focus on that.

Peter: If there's something of interest in one frame, you can be pretty sure where it will be in the next frame, and then you just have to process those particular pixels. Although you might argue that processing one image is hard, and processing a stream of images is going to be a lot harder, I actually think it's simpler to process the stream of images. The quicker you take those images, the shorter the inter-frame time, the less difference there is from frame to frame, and the easier that interpretation becomes. It's somewhat unintuitive that when a stream of images comes at you at a really high rate, the processing problem is actually not so hard on a frame-by-frame basis.
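A hypothetical illustration of that frame-to-frame shortcut, not any specific tracker: rather than scanning the whole image for a feature, search only a small window around where it was last seen, since at high frame rates it cannot have moved far:

```python
def find_in_window(frame, template, last_xy, radius):
    """frame: 2D list of pixel values; template: smaller 2D list;
    last_xy: (x, y) of the previous match; radius: how far the feature
    can plausibly move between frames. Sum-of-absolute-differences match."""
    th, tw = len(template), len(template[0])
    best, best_xy = None, last_xy
    x0, y0 = last_xy
    for y in range(max(0, y0 - radius), min(len(frame) - th, y0 + radius) + 1):
        for x in range(max(0, x0 - radius), min(len(frame[0]) - tw, x0 + radius) + 1):
            sad = sum(abs(frame[y + j][x + i] - template[j][i])
                      for j in range(th) for i in range(tw))
            if best is None or sad < best:
                best, best_xy = sad, (x, y)
    return best_xy

frame = [[0] * 6 for _ in range(6)]
frame[2][3] = 9   # the feature moved one pixel right since the last frame
print(find_in_window(frame, template=[[9]], last_xy=(2, 2), radius=2))  # -> (3, 2)
```

The cost per frame scales with the search window, not the image, which is why a fast stream of similar images can be cheaper to process than one image from scratch.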

Per: Especially if you want to separate an item from a background, for instance. If the robot is stationary, or you can predict the robot's motion because it knows how it's moving its own body, then you can discern a moving object from the background by simply saying: okay, this is a blob that's moving, and the other blobs are standing still, or maybe not exactly still, but moving in a predictable way depending on how I move.

Peter: One of the limitations of this early blob-vision work was that you segment the world into objects and not-objects, white things against a black background. That was probably acceptable in the eighties, when we were struggling with all manner of problems; it was a nice simplification to be able to make. But where visual perception has always struggled is: how do we deal with much more complex, much more realistic worlds and visual scenarios, and how do we deal with the fact that as the lighting changes, things look different? They will certainly look different to a vision system. It's really only in the last few years, I think, that we've made a lot of progress, using techniques like deep learning, deep neural networks, to quite robustly understand objects in the world irrespective of lighting, irrespective of the viewpoint that we have. That makes it very exciting: we've now got very robust perception. We know how to process information about where objects are and use that to control how robots move. We've got almost all the pieces in place now, I think, to have robots that can robustly react to their visual world.

"It's very exciting that we've now got very robust perception. We know how to process information about where objects are and use that to control how robots move. We've got almost all the pieces in place now, I think, to have robots that can robustly react to their visual world"

Per: We're talking here about images, which you haven't really defined. Of course this can be a visual image, like you and I see the world, in color, but it could also be from heat cameras. Is that also what you call an image?

Peter: I'm probably not terribly consistent on this, but certainly one of the most exciting sensors to land in the roboticist's toolkit in the last five or more years is these so-called RGB-D sensors. For every pixel they give you red, green, blue and depth. The Kinect sensor, for instance, was really the first commercially available low-cost RGB-D sensor. That provides very rich information. Sadly it doesn't work very well outdoors, where there's a lot of infrared illumination from the sun, but it works adequately in an indoor environment. Having that depth information is really important to a robot, because if you've got the depth information about a scene you can reconstruct the geometry, and most robot problems are posed in terms of geometry. An object has got a pose, the robot arm has got a pose, and you want to make the pose of the robot end-effector move toward the pose of the object. It's all geometry, and RGB-D sensors give you that directly, as do LiDAR sensors. What's always intrigued me is the fact that we can do these things, and I can guide my hand to pick up an object, just using my eyes, and my eyes are effectively a pair of projective RGB cameras. We're able to do fantastic geometric reconstruction.
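The geometry he refers to can be illustrated with textbook pinhole back-projection: given a pixel and its metric depth, recover the 3D point in the camera frame. The intrinsic values below are made-up, Kinect-ish numbers, not from any specific sensor:

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with metric depth to a 3D point in the
    camera frame. fx, fy: focal lengths in pixels; (cx, cy): principal
    point. Standard pinhole model, not any particular SDK's API."""
    X = (u - cx) * depth / fx
    Y = (v - cy) * depth / fy
    return (X, Y, depth)

# a pixel 100 columns right of the principal point, 1.5 m away
point = backproject(u=420, v=240, depth=1.5, fx=525.0, fy=525.0, cx=320.0, cy=240.0)
```

With such a point available for every pixel, the scene geometry falls out directly, which is why depth sensors make pose-and-geometry problems so much easier for robots.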

Per: Catching the ball flying by us.

Peter: We can do it not just statically but dynamically. I won't say these are cheap sensors, but cameras that can sense RGB pixel values come at very low cost. In the quantities that somebody like Apple buys them and puts them in phones, they probably cost only a couple of dollars each. The visual capability that we have, and that almost all organisms have, is phenomenal. We can do all of these things just using a pair of cameras. What we don't think about so much is the one third of our brain, the back third, the visual cortex, which is doing amazing processing of that raw visual information and turning it into actionable information.

Per: Sharing that with the rest of the brain.

Peter: Our eyes are probably hundred-megapixel sensors each, with very high dynamic range, but we've got half a kilogram of gray matter at the back of our heads. It consumes six watts, and it's able to process very complex scenes, work out how far away things are, recognize faces, and do, as you say, the visual prediction that allows me to catch the tennis ball. All of that for six watts. That's the amazing thing. We're doing great things now in vision by throwing GPUs at the problem, and they're churning through hundreds of watts of electrical power.

"Our eyes are probably hundred-megapixel sensors each, with very high dynamic range, but we've got half a kilogram of gray matter at the back of our heads"

Per: They're not 600 grams and they're very far away from the same capabilities.

Peter: Yes, but I think we're starting, at least, to be able to create algorithms that can solve these problems. The way we do it is perhaps crude compared to the way evolution has solved the problem for us, but once we've got a handle on what the right algorithms are, I think we'll come up with better computing architectures that will execute those algorithms at low weight, low dollar cost and low power cost, so I think that will all come. The other reason to be kind of excited at the moment is a couple of very big projects looking at the way the human brain is structured. There's a big project in Europe and a big project in the US, using a lot of new technology to basically map the neural structure of our brain. As we do that, it'll give us some insights into the way we're wired, and maybe we'll learn about some of the tricks that we use to solve these very tricky problems. These are really interesting times: computer scientists are coming up with fantastic algorithms, we've got great GPU hardware coming out from companies and getting better and better, and we've got other sorts of scientists trying to understand how we work.

Per: Then we combine that.

Peter: Absolutely. In the next 5 to 10 years I think we will make great progress.

Per: We might also come to this critical point where we are actually able to put these systems into robots out there, get all that feedback from them used in the field, and actually provide value. Do you think we're close to that inflection point where we can use vision-based systems?

Peter: I would like to say we're very close to that, and the mission of this center that I'm director of is to equip robots with a sense of vision. Our tagline is "creating robots that see", because we believe that until robots can see as competently as, or better than, we can, a whole lot of jobs that we do effortlessly will be out of reach of robots. Until they've got similar perceptual capability, they won't be able to do the work we want them to do.

Per: I am totally with you there. I say absolutely the same thing. The eye and the hand are the keys to robotics in the future for many applications, and both today are very limited. Robots don't see well enough and they aren't dexterous enough.

Peter: Here's an interesting example that was given to me by one of my colleagues. Consider the problem of chess. Chess was once considered the pinnacle of human intellectual achievement. Computers were able to beat the best human chess player more than a decade ago, maybe two decades ago; I can't remember exactly when it was. Everyone said, well, that's it. We're done. But think about the problem of chess: okay, we can solve the algorithmic problem of chess, but a two-year-old child can pick up a chess piece, and I think robots are still not able to pick up a chess piece on a cluttered board very quickly or very reliably. Then you've got the perception problem. I'm not sure we could come up with a very robust vision system that could tell you the state of the board for any kind of chess set. If I gave you a chess set you'd never seen before and asked you to pick up the white queen, you would just reach over and pick it up.

Per: Although I've never seen it before and it doesn't look like a queen.

Peter: Yes, but you'd be able to figure it out partly from its appearance and partly from what you know about chess: there's only one queen, and it's probably the tall piece.

Per: Relationships to the other pieces, it's not one of the small ones, that kind of high-level reasoning.

Peter: I think we can't do that. I think we would struggle to come up with a system that could read a chess board it had never seen before and indicate which was that particular piece, and then have a robot come over, pick that piece up and move it. It is funny that we consider chess a solved problem; the algorithmic part is, but the manipulation and the vision parts are not. The general public don't quite understand the difference between artificial intelligence and robotics. They are related in many ways, but to my mind artificial intelligence is disembodied intelligence, whereas robotics is that intelligence embodied: interacting with the physical world, sensing the physical world, manipulating the physical world. Those things are really hard, yet we're pre-wired at birth to be able to discover the abilities to do that. We're struggling to get robots to do that, though in the last few years there's been fantastic progress in deep learning, and at this particular conference we've seen great results in deep learning for understanding scenes and also for manipulation.

Per: I also think, since we are so good and we learn, we train our hardware: we're born with the hardware and then we train it at a time when we're not self-reflective. So we have a hard time relating to why robots have such a hard time doing this. It seems simple because we don't remember the six or ten years we spent learning to do it ourselves. We were born with amazing hardware to start with. Here we have a robot that has inferior hardware and is two months old, because it hasn't had this enormous growth period that a human has.

Peter: But the sad thing is that every single human has to go through that learning phase. That's not the case for robots. Only one robot needs to go through the learning phase, and then it can share its learning with all the other robots. If one of those other robots has a surprise, where what it's learned doesn't gel with reality, it's going to have an increment of learning, and it could share that with all of the other robots. Collectively, the population of robots is going to be able to learn at a phenomenal rate. We need to understand how to represent that learning, what it is that they have as a representation of the world and of the skills they need to interact with the world and perceive the world, and then how that would be communicated. I'm not sure anyone's looking at that problem just yet.

Per: That sounds fascinating.

Peter: It is fascinating. It sounds a little flaky, and it sounds a little scary, Terminator-like.

Per: They can even share each other's sensors. They could determine: I'm here to perform my task, I need to know what's going on over there, my sensors aren't good enough, or there's something in the way. And they could say: but you're over there, could you share your image with me so I can have a look? Humans can't do that. If you're around the corner and I need to see what you see, you have to take a picture and send it to me, but robots can share that, with more lag, but anyway.

Peter: Not necessarily more lag, but this is an interesting point you raise. We have this, I guess, anthropomorphic view of sensors being wired to our brains, and they all are. All our tactile sensors, all our eyes, are wired to our brains. Your eyes are completely useless to me as a resource. One of my PhD students is looking into this problem, and the use case we have is blind robots navigating around an environment. They just make a request for views: anyone near me can give me a view of what's going on. But it's not just sending the images directly; the cameras are doing some processing. Show me where there's some clear space. Can you see me? Can you see which direction it is that I should go? If we were doing a task, say I was a robot and you were a robot, and I needed to see what was around the back, I'd just get some features coming from your eyes and fold them into my algorithm along with the features coming from my eyes. If that wasn't enough, there's another robot over there: you over there, tell me what you see.

"All our tactile sensors, all our eyes, are wired to our brains. Your eyes are completely useless to me as a resource"

Per: That's spooky. As you say, it's a little like, whoa. But that's because we relate it to ourselves. I mean, it's spooky for us, but it's going to be common, like standard, for robots.

Peter: I think we're just not thinking sufficiently laterally when it comes to sensing resources. I don't have to own the eyes; as long as I've got access to the eyes to help me do the job, that's enough. The other thing: you look at a lot of these drones, and they all have gimbal mechanisms to stabilize the camera. A gimbal is a slow, heavy mechanical thing, and I really wonder why we don't just plaster on a whole bunch of very lightweight fixed cameras pointing in different directions, and then post hoc warp out the stable image that you want. The one thing that I have learned of recently that really surprised me is consumer-grade cameras with phenomenally high ISO ratings, ISO 50,000 say. You can take them out in the dark; it might be pitch-black, you can't see anything, and these cameras are forming images. They're a little bit grainy, a bit noisy, but it's a decent image that you can actually run feature detectors and so on over. Sony recently released a commercial camera with this very high ISO rating, and I believe others, Canon for instance, have one.

People are talking about cameras now with ISO ratings of millions. That's potentially a game-changer. Outdoor robots are going to need to function day and night, and as a community I've always been a little bit amused by the fact that we just kind of ignored night. We just pretended it doesn't exist. We had an agricultural robotics project running for the last few years at the university, and we had to deal with the night problem. We dealt with it in a very old-school way: we just put very bright lights on the robot. But engineering a decent light source that allowed stereo vision to work 5 to 10 meters away from the robot was pretty hard and consumed a lot of power. It was very bright; it hurt your eyes if you got too close to it.

Per: And that attracts every bug in the area, so your vision is suddenly dealing with 10,000 flying things around it, some of them glowing themselves.

Peter: You have that problem, but it's just a very clumsy and inelegant solution. Our eyes have fantastic dynamic range. We can get dark-adapted if we wait a quarter of an hour; the gain in our eyes increases through some very slow chemical processes. These robots could just switch on night vision. That would be fantastic.

Per: I see the same thing when I study cameras for this video project: the ISO steps up half a stop or a stop every time somebody releases a new camera giving you the equivalent image. As you say, at the higher ISOs they're still not very good, but I have seen, was it the four-million-ISO Canon? That's not a consumer camera, it's still many thousands of dollars, but it exists. You can buy it over the counter, and it's just amazing what they can do.

Peter: This is a capability that once only Special Forces had access to, that kind of night vision capability. Now it's in the professional camera market, and it will trickle down into the regular smartphone market. Maybe it'll be in your smartphone in a few years' time, and that's staggering. To me that's the most surprising sensory capability I've learned about in recent months.

Per: This was amazing. We've learned so much about vision processing. Of course I can recommend Peter's book. Say that you're not a university student, not a PhD or an undergraduate, you're maybe a younger person going to high school: how would you recommend somebody get into the field of computer vision?

Peter: Of computer vision, or robotics, or both?

Per: Both actually. I mean, do you have a primer for the 15-year-old, or the person who doesn't have that background? Because we want to bring everybody into robotics, and they need somewhere to start.

Peter: If you look at what's available online and in the book marketplace, there are really three categories. There's a lot of material aimed at the hobbyist: how do you build robots using Arduinos and RC servo motors and whatever? Those books are a lot about doing; how do you take the tech and make it do something? Then there's the graduate level: quite advanced textbooks with a ton of maths in them, which are going to be inaccessible to the demographic that you specifically mentioned. Then there are a number of textbooks designed for university level. Some of those tend to be quite theoretical, quite principled, with a lot of mathematical formalism in them. My book was designed for mid undergraduates, so someone doing an engineering degree, maybe second-year engineering. I tried to be pretty light on the formalism.

It's a very hands-on book, so it allows you just to write code and experiment. I think the best way to learn anything is by doing, and it tries to take you by the hand and gently lead you through just the minimal level of formalism needed to understand the concepts. It introduces you to a lot of the algorithms in a hopefully fairly painless way. That's what the book is about. It's a very chatty book, quite conversational. It doesn't try to be very formal, and it mixes text with diagrams with pieces of code and really leads you through. Because it's aimed at engineering students it may be beyond people who are still in high school, but if you are in that demographic, you are in high school and you want to get into robotics, give it a crack. You can't do robotics without the formalism. You need to learn the math. If you're going to get into robotics, learn the math. I'd like to think my book is probably one of the most painless ways in. You need to know a little bit of linear algebra; you need to know about vectors and matrices. But if you understand vectors and matrices, then absolutely give it a go. The worst thing that can happen is you don't understand it, and then maybe you have to go off and look at some stuff on Khan Academy.

Per: I was just about to mention Khan Academy and that's a great resource.

Peter: It's a fantastic resource.

Per: It's going to change the world. It might change the world in the same magnitude as robotics, actually, because it's going to be able to teach people the things they need to get access to your kind of material and then build robots. That's what we want. We want more people to be part of it. We want more people to do robotics. I think it's great that your book exists and that you have this approach. You can start with something simpler, building your robots from, as you say, more hobby-grade stuff. Then you can try your book out, add a little bit of Khan Academy to that, and you're well on your way to using computer vision in your robot.

Peter: You've got to be able to code. That’s a fundamental message. You’ve got to be competent coding in C++ or Python.

Per: Is Python fast enough for vision processing in real time?

Peter: There are libraries that work with Python, OpenCV for instance, which is a very well-known, very complete set of primitives for image processing, and it has really excellent Python bindings. You write in Python, and the underlying work happens in C++ code under the hood.
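The division of labor Peter describes, Python on the surface with compiled code doing the heavy lifting, is easy to see in a small sketch. OpenCV's `cv2` module is the real tool for this; the sketch below uses NumPy instead (whose array operations likewise run in compiled C) to threshold a tiny synthetic grayscale image, a one-line operation that would be a slow per-pixel loop in pure Python. The image values here are made up for illustration.

```python
import numpy as np

# A synthetic 4x4 grayscale "image" (0-255), standing in for a camera frame.
image = np.array([[ 10, 200,  30, 220],
                  [ 90, 140,  60, 250],
                  [  5, 180, 100, 130],
                  [ 40,  70, 210,  20]], dtype=np.uint8)

# Binary threshold as one vectorized expression: the per-pixel loop runs in
# compiled C inside NumPy, not in the Python interpreter. With OpenCV the
# equivalent call would be cv2.threshold(image, 127, 255, cv2.THRESH_BINARY).
binary = np.where(image > 127, 255, 0).astype(np.uint8)

print(binary)
print("bright pixels:", int((binary == 255).sum()))
```

The same pattern, a thin Python expression driving a compiled inner loop, is what makes OpenCV's Python bindings fast enough for real-time use.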

Per: You can start using it in Python, and when you realize the library is not doing what you want, you can write that little piece yourself. Then you have to learn C++ or another language, but you can get there gradually rather than having to start there.

Peter: If you look at almost all robotic systems, there's some mixture of C++ and Python code. The percentages will vary, but if you've got those two languages you're very well placed to get into robotics. The other tool you really need to know about if you get into robotics is ROS. There are quite a few good introductory books for ROS now. That didn't used to be the case; you just went to the website and did the tutorials. Now there are some really nice books that will again take you by the hand and help you on that journey.

Per: The two-language thing, the C++ and the Python: as a programmer I am definitely of the opinion that that's the best way to go, because writing all of it in C++ is overkill and will make you less productive, yet you cannot write all of it in Python, so you have to have both.

Peter: Python gives you the productivity but you lose some performance, and C++ is the converse. It always comes down to some suitable mixture.

Per: Towards the end of this interview we are going to go to the fashion area. I'm going to ask you to show your very nice t-shirt here. This episode is brought to you by ICRA 2018 in Australia. Peter here is wearing the t-shirt.

Peter: Do you want to see the t-shirt?

Per: Yes.

Peter: That is the front of the t-shirt and that's the back of the t-shirt. We did it like a road show: we put the date and the location of every ICRA that has happened.

Per: Now you can add fashion model to your CV next to professor.

Peter: In 2018, ICRA will be held in my hometown, Brisbane, Australia. Brisbane is in the subtropics and the conference will be held in May 2018, approaching winter; because we're in the southern hemisphere we're moving from autumn into winter. A very lovely season, lovely weather. Brisbane is a great city and I think it will be a wonderful conference. It's the first one ever in the southern hemisphere: in the whole history of ICRA, none has been held in the southern hemisphere. Some of the big vision conferences have gone down under, but not yet robotics.

Per: We're very much looking forward to it and I hope to speak to you again then. Thank you.

Peter: Thanks very much Per.

Per: I hope you liked this episode of the podcast version of Robots in Depth. This episode is produced together with Wevolver. Wevolver is a platform and community providing engineers informative content to help them innovate and stay cutting edge. Aptomica is the founding sponsor of Robots in Depth. Aptomica runs anything in modular robotics. Dream, rent, build. Visit to connect.

End of transcript

Follow the podcast on Apple, and Spotify.

The Wevolver Robots in Depth podcast is published multiple times per week. If you or your company are interested in supporting this podcast and reaching an audience of professional engineers and engineering students, please contact us at richard [at]

More by Per Sjoborg

I interview CEOs, founders and CTOs from successful and up and coming robotics companies, as well as researchers, investors and thought leaders within robotics and AI. You'll learn about the latest technologies that are being developed at innovative robotics companies, the lessons learned by incredi...

Wevolver 2022