
Convergent Futures

by Blake Hudelson, May 28th, 2017

Every year I attend CES — the largest technology conference in the world — to get a peek at new technology breakthroughs. There is always a cacophony of useless devices, such as smart hairbrushes that remind you how bald you’ve become, or connected pillows that still can’t help your insomnia. But there are always some amazing technologies to see as well. What struck me this year wasn’t the tech du jour or the thousands of drones and VR headsets, but the technologies that are starting to intermix and create integrated experiences. Think autonomous vehicles + augmented reality + dispersed workspaces. Think VR headsets + eye tracking sensors + voice interfaces + 360° cameras. Think wearables + AI + health data analytics.

When singular technologies become integrated, unexpected convergent effects start to occur, unlocking capabilities that none of the individual technologies had on their own. In this post, I’ll play out a scenario in which the integration of certain technologies creates experiences that were previously unimaginable.

In the near future…

It’s 8:15 in the morning. I just arrived at my office in San Francisco to prepare for my 8:30 meeting with my new client, who’s headquartered in Germany. I’ve been working as an architect for 20 years and have seen the full progression of design, from hand drawing to immersive 3D design. Aside from a few plants and my favorite mid-century modern desk, my office doesn’t have much in it, because augmented reality glasses eliminated the need for screens and hardware accessories years ago. I slip on my AR glasses and am greeted by Sydney, my AI assistant. Sydney shows me an overlay of my calendar events for the day. With a few grunts and flicks (it’s still early), Sydney activates a virtual conference room that overlays my existing office space. My client from Germany, Maya, appears across the table from me.

Maya runs a non-profit, building schools all over the world for communities in need. Her current project is located in Tanzania, where she just secured four acres of land to start building a new K-12 facility.

Maya excitedly greets me. Our avatars are able to shake hands, make eye contact, and recognize subtle things like posture and level of attentiveness — just as if we were in the same room together. This level of nuance has taken a while to perfect, but it really makes a difference when interacting virtually. I remember the day when AR glasses hit 8K resolution, making graphics indistinguishable from reality and bringing AR into the mainstream.

Maya only speaks German and I only speak English, but fortunately the software in our glasses can translate each language in real time. She remarks that I haven’t been to Germany (in person) for over a year. I used to have to travel there every month, which was so exhausting that I almost left my job. Now that AR technology is so advanced, I only travel when I choose to, because I can virtually visit anywhere in the world from the comfort of my home or office.
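Under the hood, real-time translation like this is a pipeline: speech recognition, machine translation, and speech synthesis chained together with low enough latency to feel conversational. Below is a minimal toy sketch of that loop in Python; the tiny phrasebook stands in for a real translation model, and none of the names correspond to an actual SDK.

```python
# Toy sketch of the speech-to-speech pipeline: recognize -> translate ->
# synthesize. Every name here is illustrative, not a real API.

PHRASEBOOK = {
    ("de", "en"): {"guten morgen": "good morning"},
    ("en", "de"): {"good morning": "guten morgen"},
}

def recognize(audio: bytes, lang: str) -> str:
    # Placeholder: a real system would run streaming speech-to-text here.
    return audio.decode("utf-8")

def translate(text: str, src: str, dst: str) -> str:
    # Placeholder lookup; a real system would call a neural MT model.
    return PHRASEBOOK[(src, dst)].get(text.lower(), text)

def synthesize(text: str, lang: str) -> bytes:
    # Placeholder: a real system would run text-to-speech here.
    return text.encode("utf-8")

def relay(audio: bytes, src: str, dst: str) -> bytes:
    """One hop of the conversation: hear in src, speak out in dst."""
    return synthesize(translate(recognize(audio, src), src, dst), dst)

print(relay(b"Guten Morgen", "de", "en"))  # b'good morning'
```

The hard part in practice isn’t the chaining, it’s keeping end-to-end latency low enough that the conversation still feels natural.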

My company pays for a monthly subscription that gives us access to a network of camera drones from DJI anywhere in the world. I ask my AI assistant to bring up the DJI drone interface that I use often. A large graphical interface appears in front of me, showing the network of drones around the world to choose from. I find an available drone stationed within a few miles of the site, which is on the outskirts of Arusha, a small city in northern Tanzania. I assume control of the drone and give it the address of our property to go explore. Once the drone arrives at the vacant property, Maya and I both have an immersive bird’s-eye view of the site in real time.
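A drone network like this doesn’t exist as a public service today, but the dispatch logic behind it is straightforward: filter the fleet for available drones and pick the one closest to the site. Here’s a hypothetical sketch; the fleet data, drone IDs, and coordinates are all invented for illustration.

```python
# Hypothetical drone-network dispatch: choose the nearest available
# drone to a requested site. Not a real DJI API.

import math
from dataclasses import dataclass

@dataclass
class Drone:
    drone_id: str
    lat: float
    lon: float
    available: bool

def distance_km(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance between two coordinates, in km."""
    r = 6371.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_available(fleet, site_lat, site_lon):
    """Pick the closest idle drone, or None if the fleet is busy."""
    return min((d for d in fleet if d.available),
               key=lambda d: distance_km(d.lat, d.lon, site_lat, site_lon),
               default=None)

# Invented fleet near Arusha, Tanzania (approximate coordinates).
fleet = [Drone("dji-041", -3.35, 36.65, True),
         Drone("dji-107", -3.40, 36.70, False)]
drone = nearest_available(fleet, -3.3869, 36.6830)
print(drone.drone_id if drone else "no drone in range")
```

A real service would also weight by battery level, airspace restrictions, and handoff time, but nearest-available is the core of the matchmaking.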

The drone, which is equipped with 360° cameras, takes a few minutes to scan the site and then creates a 3D holographic model for Maya and me to view. We pan around it and study the topography, water-collection areas, and the types of vegetation currently on the site. We can even switch into X-ray mode and see whether there are any infrastructural services already at the site, such as plumbing or underground electricity. Together we identify an ideal part of the property for the school’s main building: it’s flat, out of the flood plain, and gets plenty of sun for a solar array to be installed. I make a few gestures and place a digital model of a modular building on the site to make sure it fits within the constraints of the land. Maya is able to test some ideas for the site as well, inserting a parking lot, new trees, and some wind turbines. Once Maya is happy with the “test fit” of new elements on the site, she captures a few key snapshots to show her colleagues later.
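That siting decision is essentially a terrain-analysis computation: take the heightmap from the drone scan, mask out cells that are too steep or below the flood line, and check whether the building footprint fits inside what remains. Here’s a rough sketch, with invented thresholds and a toy terrain in place of real scan data.

```python
# Toy "test fit" on a scanned heightmap: keep cells that are above the
# flood line and nearly flat, then check a footprint against the mask.
# All thresholds and the terrain itself are invented for illustration.

import numpy as np

def buildable_mask(heightmap, cell_m=1.0, max_slope=0.05, flood_elev=10.0):
    """True where terrain is above flood elevation and gentle enough to build on."""
    dzdy, dzdx = np.gradient(heightmap, cell_m)  # rise per metre in each direction
    slope = np.hypot(dzdx, dzdy)
    return (heightmap > flood_elev) & (slope < max_slope)

def fits(mask, row, col, rows_needed, cols_needed):
    """Can a rows_needed x cols_needed footprint sit at (row, col)?"""
    patch = mask[row:row + rows_needed, col:col + cols_needed]
    return patch.shape == (rows_needed, cols_needed) and bool(patch.all())

# Toy 100 m x 100 m site: a gentle slope with a low, steep eastern edge.
terrain = np.fromfunction(
    lambda r, c: 12 + 0.01 * r - 0.5 * (c > 70) * (c - 70), (100, 100))
mask = buildable_mask(terrain)
print(fits(mask, 10, 10, 30, 20))   # True: flat, dry part of the site
print(fits(mask, 10, 75, 30, 20))   # False: steep and below the flood line
```

The real version would work on a photogrammetry mesh rather than a uniform grid, but the masking idea is the same.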

The reality is that all of the technologies mentioned in this story already exist today, but only as isolated entities. Once these singular pieces of technology become integrated, new experiences will emerge that were inconceivable just a few years prior.

Thanks for reading! Part II of this series can be found here. If you’d like to continue the conversation, leave a comment or message me on Twitter @blakehudelson.