
How Mixed Reality Will Shape the Design of Intelligent Systems and Robots

by Himanshu Ragtah, April 9th, 2020

Too Long; Didn't Read

Engineers who design robots cannot get a holistic, full-scale sense of how a robot will look in their lab or workplace by looking at a 3D model on a 2D computer screen. They must account for the orientation of every part, the measurement of each side and angle, each part's location within the whole assembly, which surfaces should meet and which should not, and the appropriate torque for every screw. This process is extremely time-consuming and error-prone. Mixed reality lets engineers see the model at full scale and design the robot in the context of its surroundings.


As of now, if you are an intelligent-systems design engineer who wants to design, fabricate, and assemble a robot for your lab or factory, you must go through a laborious process: first designing a 3D model on a 2D screen (problematic without full-scale context), then producing a crammed engineering drawing that lists the bill of materials and shows the design in multiple views (isometric, right, left, section, exploded). Depending on the robot's complexity, this takes the engineer hours to days. One must account for the orientation of every part, the measurement of each side and angle, each part's location within the whole assembly, which surfaces should meet and which should not, the appropriate torque for every screw, and so on.

Image source: Repair Robots (KUKA KR6 C1 arm assembly)

How it’s done now: Engineers who design such robots get no holistic sense of how the robot will actually look in real life, at full scale, in their lab or workplace by looking at a 3D model on a 2D computer screen. They often end up wasting money on 3D-printed prototypes.

It’s hard to visualize the precise length, breadth, and height of a 3D model on a 2D screen; it just doesn’t translate well into real life. This is why people often print full-scale 3D mockups to get an idea of how the model or robot will fit into the bigger picture (e.g., a robot that is supposed to sit next to a lab stand, with other objects within its grasp).
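As a rough illustration of the full-scale fit problem, here is a minimal sketch that checks a model's real-world footprint against the clearance it must fit into. It assumes the trimesh Python library, a hypothetical file arm_assembly.stl modeled in millimeters, and made-up envelope numbers:

```python
# A minimal sketch: compute a model's real-world footprint and check
# it against a clearance envelope. "arm_assembly.stl" and the envelope
# numbers are hypothetical; the model is assumed to be in millimeters.
import trimesh

mesh = trimesh.load("arm_assembly.stl")

# Axis-aligned bounding box gives overall length/breadth/height.
length_mm, breadth_mm, height_mm = mesh.extents

# Clearance next to the lab stand (hypothetical numbers, in mm).
envelope_mm = (600.0, 400.0, 900.0)
fits = all(dim <= limit for dim, limit in zip(mesh.extents, envelope_mm))

print(f"Footprint (mm): {length_mm:.1f} x {breadth_mm:.1f} x {height_mm:.1f}")
print(f"Fits next to the lab stand: {fits}")
```

Even a check like this only produces numbers; the engineer is still left to imagine the result, which is exactly the gap mixed reality closes.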

Many such robot designers end up 3D printing multiple iterations of a prototype before finalizing the design for production.

Image source: RoboSimian, JPL (case in point: expensive prototypes at each iteration, just to get a holistic idea of what the model will look like in real life at full scale)

How it’s done now: Engineers who design such robots cram too much information into small engineering drawings for fabrication and assembly

Engineers who design such robots often spend hours making a dozen or more drawings, with the tiniest of details crammed into each engineering drawing.

The current standard practice involves inserting a bill of materials that lists the names and quantities of all the parts, alongside an isometric view of the whole assembly.

Further, engineers make side views, section views, cutout views, and exploded views to somehow convey the location and orientation of each part with respect to the whole assembly.

To make matters worse, certain parts are occluded by the nature of the view. 

Image source: Cyril Kenward
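To make the bill-of-materials step above concrete, here is a minimal sketch that derives part names and quantities from a flat part list; the part names are hypothetical stand-ins for whatever a CAD assembly export would contain:

```python
# A minimal sketch of the bill-of-materials step. The part list is
# hypothetical; in practice it would come from a CAD assembly export.
from collections import Counter

parts = [
    "base_plate", "servo_bracket", "servo_bracket",
    "M3x8_screw", "M3x8_screw", "M3x8_screw", "M3x8_screw",
    "wrist_joint",
]

bom = Counter(parts)  # name -> quantity
print(f"{'Part':<16}{'Qty':>4}")
for name, qty in sorted(bom.items()):
    print(f"{name:<16}{qty:>4}")
```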

How it’s done now: Engineers putting the robot together spend hours trying to decode and analyze scores of views (section, isometric, side, hole markings, orientation details, etc.) in the engineering drawings

Image source: Behance (an example of an exploded view of a KUKA robot with a huge number of subparts)

Imagine having to put together the KUKA robot above from such an exploded view. The questions that run through the mind of the engineer assembling it: Which part should be assembled first? Which part goes where? Which surfaces meet? What is the exact orientation of the part in question?

An exploded view of a 3D model on a 2D drawing makes it insanely hard to figure these things out efficiently.

The new trend is for engineers to refer to the 3D model on the 2D screen in addition to the intricate drawings while trying to figure out what goes where. This is extremely time-consuming and prone to multiple errors along the way.

How it’s done now: Intelligent-systems hardware that is not CNC-cut but manually fabricated requires multiple measurements and checks to ensure accuracy

As of now, every measurement, side, and curve is painfully double-checked with a vernier caliper to ensure accuracy. This back-and-forth between measuring and cutting consumes precious time in the fabrication and production stages of such robots.
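The check itself is simple arithmetic; here is a minimal sketch of it, assuming hypothetical nominal dimensions and tolerances taken from the drawing, with made-up measured values standing in for vernier caliper readings:

```python
# A minimal sketch of the measure-and-check loop. Nominal dimensions
# and tolerances (in mm) are hypothetical values from the drawing;
# the measured values stand in for vernier caliper readings.
NOMINALS = {
    "side_a": (120.00, 0.05),  # (nominal, +/- tolerance)
    "side_b": (80.00, 0.05),
}

def within_tolerance(feature: str, measured_mm: float) -> bool:
    nominal, tol = NOMINALS[feature]
    return abs(measured_mm - nominal) <= tol

print(within_tolerance("side_a", 120.03))  # True: within +/- 0.05 mm
print(within_tolerance("side_b", 80.09))   # False: re-cut or re-measure
```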

The future: bringing 3D models to life, at full scale, with mixed reality.

1. See the 3D model in real life, at full scale, and tweak it from the best perspective

Autodesk and Microsoft are experimenting with new mixed reality technology that projects a 3D model into real life at full scale. This gives you a better perspective and lets you design the robot in the context of the surroundings and environment it may interact with. This can speed up the design stage of the robot many times over.

Image source: Microsoft HoloLens
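The core of full-scale projection is a unit-correct placement transform. The sketch below is not any vendor's actual API; it assumes the CAD model is authored in millimeters and the headset's world units are meters (the convention in Unity-based HoloLens apps, but treat it as an assumption here), with a hypothetical anchor point and yaw:

```python
# A minimal sketch of the full-scale placement math, not any vendor's
# actual API. Assumes a CAD model in millimeters and world units in
# meters; the anchor position and yaw angle are hypothetical inputs.
import numpy as np

MM_TO_M = 0.001  # scale factor so the hologram appears at true size

def placement_matrix(anchor_position_m: np.ndarray, yaw_rad: float) -> np.ndarray:
    """4x4 transform that scales mm -> m, rotates the model about the
    vertical (y) axis, and translates it to a point on the real floor."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    T = np.eye(4)
    T[:3, :3] = MM_TO_M * np.array([[c, 0.0, s],
                                    [0.0, 1.0, 0.0],
                                    [-s, 0.0, c]])
    T[:3, 3] = anchor_position_m
    return T

# e.g. place the robot 1.5 m in front of the user, facing back at them
print(placement_matrix(np.array([0.0, 0.0, 1.5]), np.pi))
```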

2. Easily fabricate parts using an accurate mixed reality overlay

An accurate mixed reality overlay on stock material that is meant to be cut into a desired shape can greatly reduce the amount of time spent measuring each side. This speeds up the production of robot parts many times over.
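One way such an overlay can stay registered to the stock is a planar homography between the cut template and the camera view. Here is a minimal sketch using OpenCV; all the corner coordinates are hypothetical, and detecting the fiducial markers themselves is out of scope:

```python
# A minimal sketch of registering a 2D cut template to a stock sheet
# via a homography. All coordinates are hypothetical; detecting the
# fiducial markers on the stock is out of scope here.
import numpy as np
import cv2

# Template corners in the drawing's coordinates (mm).
template_pts = np.array([[0, 0], [300, 0], [300, 200], [0, 200]],
                        dtype=np.float32)
# The same corners as detected in the camera image (pixels).
detected_pts = np.array([[412, 118], [905, 131], [898, 468], [405, 452]],
                        dtype=np.float32)

# The homography maps any point of the cut outline onto the live view,
# so the overlay hugs the stock even when viewed at an angle.
H, _ = cv2.findHomography(template_pts, detected_pts)

outline_mm = np.array([[[50, 50]], [[250, 50]], [[150, 150]]], dtype=np.float32)
outline_px = cv2.perspectiveTransform(outline_mm, H)
print(outline_px.round(1))
```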

3. See a full-scale exploded view to analyze which part goes where with respect to the whole assembly

The ability to see a full-scale exploded view of the robot, thanks to mixed reality, will give engineers the best perspective and will, in turn, speed up the assembly of robots. It will let engineers easily see what goes where, in what orientation, what is stacked on top of what, and so on.

Image source: Microsoft HoloLens
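Generating such a view is geometrically straightforward: push each part away from the assembly's center. A minimal sketch follows, with part centroids as hypothetical stand-ins for full part geometry:

```python
# A minimal sketch of exploded-view offsets. Each part is reduced to
# its centroid here; the three example parts are hypothetical.
import numpy as np

def exploded_offsets(centroids, factor: float = 0.5) -> np.ndarray:
    """Push each part away from the assembly center by `factor` times
    its distance, so parts separate without changing their order."""
    centroids = np.asarray(centroids, dtype=float)
    center = centroids.mean(axis=0)
    return centroids + factor * (centroids - center)

parts = [[0.0, 0.0, 0.0],   # base
         [0.0, 0.2, 0.0],   # joint
         [0.0, 0.45, 0.1]]  # wrist (meters)
print(exploded_offsets(parts))
```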

4. Isolate subassemblies in mixed reality

The ability to isolate subassemblies in mixed reality will allow intelligent-systems engineers to build the full assembly one subassembly at a time. As mentioned above, having an exploded view of each subassembly will really help in putting the robot together at a much faster pace.
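Isolation amounts to walking the assembly tree and hiding everything outside the chosen branch. A minimal sketch, with a hypothetical parent-to-children mapping standing in for a real CAD assembly structure:

```python
# A minimal sketch of isolating a subassembly. The tree is a
# hypothetical parent -> children mapping, not a real CAD structure.
assembly = {
    "robot_arm": ["base", "shoulder", "wrist"],
    "shoulder": ["servo_1", "link_1"],
    "wrist": ["servo_2", "gripper"],
}

def isolate(root: str, tree: dict) -> list:
    """Collect the named subassembly and everything beneath it; the
    rest of the model can then be hidden while this part is built."""
    visible = [root]
    for child in tree.get(root, []):
        visible.extend(isolate(child, tree))
    return visible

print(isolate("wrist", assembly))  # ['wrist', 'servo_2', 'gripper']
```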