Joe Zeoli


Using Virtual Reality to Brand Interiors

Often after one of our branding engagements, the design team at 20nine will be tasked with infusing the new visual identity into the client's office design. It usually involves taking pictures of the space, importing them into Photoshop, and designing on top of them. This helps the client visualize how their new brand will come to life in the spaces they call home.

Below are a couple of examples to illustrate how some of the deliverables usually look.

As a developer at a creative agency, I try to merge technology with our current process to enhance the experience. With the rise of cheap, accessible VR headsets like Cardboard and increased smartphone processing power, I thought it would be awesome to create VR environments with our designs to really immerse the client in their potential new office. My initial idea involved just editing a regular panoramic image that would be viewable in a VR headset…but then I found Google’s Cardboard app.

What's really interesting about the Google Cardboard Camera app is that it creates 360-degree panoramic images with depth perception: it algorithmically generates a right-eye image that recreates depth. It also grabs a clip of ambient audio to further add to the sensation of being in the picture. I was blown away by the stunning experience it created, and I wondered if I could edit the VR photos the way we do with our clients' office spaces.

At 20nine, we had been updating our own office over the last few months, so I already had some internal designs created to start testing with. I grabbed a panoramic shot of the 20nine office using the app and saved it to Google Photos so that I could download it onto my computer. The Cardboard Camera software saved it as a jpg, so before doing any research I brought it directly into Photoshop. I zoomed into one of the empty walls and overlaid one of the proposed designs on the wall. I saved it out and imported it back into the Cardboard app, but I noticed the right eye didn't have the updated image.

I jumped onto Google's developer site to see if I could find out how the files are put together. It turns out that the VR elements are stored within the image itself. The main image is the left eye, while the right eye and the audio are stored as base64-encoded binary blobs in the image's metadata. The metadata is stored using Adobe's XMP standard. Since the base64-encoded image is large, it is written into the metadata in chunks based on the XMP specification, with each chunk marked by an identifying namespace URL.
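To make that concrete, here is a simplified sketch of what the embedded XMP packet looks like. The GImage/GAudio namespaces and property names come from Google's developer documentation, but the layout below is abbreviated: in the real file the large Data values are split across extended-XMP chunks, and the ellipses stand in for the actual base64 payloads.

```xml
<x:xmpmeta xmlns:x="adobe:ns:meta/">
  <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
    <rdf:Description rdf:about=""
        xmlns:GImage="http://ns.google.com/photos/1.0/image/"
        xmlns:GAudio="http://ns.google.com/photos/1.0/audio/"
        GImage:Mime="image/jpeg"
        GImage:Data="...base64 of the right-eye jpg..."
        GAudio:Mime="audio/mp4"
        GAudio:Data="...base64 of the ambient audio clip..."/>
  </rdf:RDF>
</x:xmpmeta>
```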

Initially I planned to use JavaScript's FileReader and some carefully crafted regular expressions to strip out the data and download the images, but merging the images back together would have been an issue. So I landed on using Python and the command line to manage both the split and the join.

I found the Python XMP Toolkit, and it was as easy as getting and setting the properties documented on Google's developer site through the XMP API. After grabbing the relevant base64 data, I saved the images and audio in the same folder. Below is an example of getting the right-eye image data.

right_image_base64 = xmp.get_property(XMP_NS_GPHOTOS_IMAGE, u'GImage:Data')
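Filling in the rest of the extraction side, here is a hedged sketch. The decoding and file writing are plain standard-library Python; the Python XMP Toolkit calls that fetch the blobs are shown as comments, and the namespace constants and filenames are my own assumptions based on Google's documented property names.

```python
import base64

# Assumed namespace URLs, matching the GImage/GAudio properties
# described in Google's Cardboard Camera developer documentation.
XMP_NS_GPHOTOS_IMAGE = 'http://ns.google.com/photos/1.0/image/'
XMP_NS_GPHOTOS_AUDIO = 'http://ns.google.com/photos/1.0/audio/'

def save_blob(b64_data: str, out_path: str) -> None:
    """Decode a base64 metadata blob and write it out as a binary file."""
    with open(out_path, 'wb') as f:
        f.write(base64.b64decode(b64_data))

# With python-xmp-toolkit, fetching the blobs looks roughly like this
# (filename is hypothetical):
#   from libxmp import XMPFiles
#   xmpfile = XMPFiles(file_path='office_pano.vr.jpg')
#   xmp = xmpfile.get_xmp()
#   right_image_base64 = xmp.get_property(XMP_NS_GPHOTOS_IMAGE, u'GImage:Data')
#   audio_base64 = xmp.get_property(XMP_NS_GPHOTOS_AUDIO, u'GAudio:Data')
#
# save_blob(right_image_base64, 'right_eye.jpg')
# save_blob(audio_base64, 'ambient.mp4')
```

Once the right-eye jpg and the audio clip are sitting next to the left-eye image on disk, they can all be opened and edited like any other files.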

Once the images were downloaded, I could overlay the same graphic onto the left and right images, save the jpgs, and merge everything back together using the same XMP properties. Importing the result back into the app was as easy as connecting my phone over USB and dropping the image into the app's folder. Once the image is back inside the Google Cardboard app, you can create a shareable link to send out to clients as a deliverable.

This is an awesome (and surprisingly easy) way to WOW the client with something more than a static design.

If you download the app, you can use the links below to check out the before and afters!


