As a design studio focused on new media, we used to spend our days obsessing over the synchronization of cameras and visual effects using robots. This required careful planning, very precise machines, and hours of calibration, all in an attempt to automate everything on set to perfect a single shot. Since then, a number of technological advances, from computer vision algorithms to more powerful processing, have made us rethink how we experience mixed reality. This post explains why we traded in our video cameras for a different medium: the game engine.
Computer games have become a major cultural and economic force within contemporary digital culture, transporting us to fantastical fictional worlds. However, these worlds don’t exist without the medium through which they are created, a finite set of algorithmically generated strings of machine code often referred to as “game engines”. This code embodies the fundamental rule systems of games, along with procedural instructions for how the game world is to be constructed and played in real time¹. Game engines have recently ventured outside their intended use, from architectural visualization and simulation training for the medical and military industries to the previsualization of film². All of these applications, however, have one thing in common, and it is the main focus of this post: they are all restricted to the space of the screen.
What are the qualities of game engines that make them so relevant today? Game technologies are interactive, collaborative, customizable, and variable³. Images are generated in real time, algorithmically. But beyond their remarkable imaging capabilities, they inherently begin with a structure. They are rule-based, merely a series of instructions and behaviors attached to virtual objects. Although we are excited about advances in rendering capabilities like everyone else, we are more interested in the potential of the game engine to influence not only the virtual world we see on our screens, but also the physical environment.
Albeit much more complicated, our world behaves much like a game engine. It is just a really good game engine. Darwin said the greatest simulation of them all is nature herself: infinitely mutating, never finished, and always evolving⁴. The game of life, a multiplayer game of about 7.5 billion players, is full of complex interactions between organisms of approximately 8.7 million different species.
Games also have a long history in art, from Dada to Fluxus. John Cage, Duchamp, and others embraced the generative nature of game-like processes by constructing a set of guidelines or constraints and encouraging acts of improvisation within their devised rules⁵. These works parallel the unpredictability and chance-driven behaviors found in game engine simulations.
Rotary Demisphere, Duchamp (left), Fluxus Table Tennis (middle), Play It By Trust, Yoko Ono (right)
This sort of thinking highlights the potential of interactive experiences that are non-linear and not dictated by a singular view or perspective. The game as an art form is both generative and participatory, often creating a deeper engagement with the viewer.
As mediums such as augmented reality become ever more present in our daily lives, the virtual and the physical will blur even further. So why do we restrict the power of game engines to the limitations of a rectangle? We see a future in which objects have both virtual and physical behaviors, radically changing how we experience the world: virtual interactions that have a real effect on the physical environment around you, and vice versa.
Here at Novel, we developed an interface inside Unity3d for the real-time control of robots and IoT devices. Below are just a few of the reasons why we believe the game engine is the creative medium of the future.
Robotic Interactive Control inside Unity3d — Novel
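The interface itself isn’t public, but the core idea is simple: treat the robot as just another object the engine updates every frame. Below is a minimal sketch of that pattern in Unity C#, assuming a hypothetical robot controller that accepts pose packets over UDP; the class name, address, port, and message format are illustrative assumptions, not our production protocol.

```csharp
using System.Net.Sockets;
using System.Text;
using UnityEngine;

// Hypothetical bridge: streams the pose of a Unity object to a robot
// controller listening for UDP packets. Address, port, and message
// format are assumptions for illustration only.
public class RobotBridge : MonoBehaviour
{
    [SerializeField] string controllerHost = "192.168.0.10"; // assumed robot IP
    [SerializeField] int controllerPort = 9000;              // assumed port
    [SerializeField] Transform target;                       // object the robot should follow

    UdpClient client;

    void Start()
    {
        client = new UdpClient();
        client.Connect(controllerHost, controllerPort);
    }

    void FixedUpdate()
    {
        // Serialize position + rotation as a simple CSV packet each physics tick.
        Vector3 p = target.position;
        Quaternion q = target.rotation;
        string msg = $"{p.x},{p.y},{p.z},{q.x},{q.y},{q.z},{q.w}";
        byte[] bytes = Encoding.ASCII.GetBytes(msg);
        client.Send(bytes, bytes.Length);
    }

    void OnDestroy() => client?.Close();
}
```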
We have recently tested some of these concepts through a series of experiments that aim to produce more meaningful exchanges between the digital and physical worlds. We believe a truly immersive experience requires a dynamic interplay between both realities, a mixture of the mechanical and the digital, and each experiment aims to challenge the user’s perception by suspending them between these two worlds. To test this theory, we use a combination of machines and devices driven by video game software in communication with AR apps. On one hand, the user is confronted with the physical environment they are accustomed to, full of real objects made of matter and governed by physical law. Overlaying this familiar world is the virtual, made up of objects whose behavior is coded in a game engine and played out in a simulation.

Each object or entity has its own behaviors; some exist only in simulation, and some have an effect on the physical space (e.g. audio, lighting, motion). We call these objects with dual properties ‘non-objects’. In our experiments they often appear glossy, black, and mysterious in their behavior. Non-objects resemble things we see in everyday life, but behave differently than what we are accustomed to, an advantage of being half physical, half digital. This duality offers unique opportunities for experience.
Example of a Non-Object — Novel
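In engine terms, a non-object is simply a component whose state lives in the simulation but leaks out into hardware. The sketch below shows the shape of that idea: a purely virtual behavior (a slow bob) is broadcast as a normalized signal that a physical device could subscribe to. The event-based wiring and names are illustrative assumptions, not the exact code behind the piece pictured above.

```csharp
using UnityEngine;
using UnityEngine.Events;

// Sketch of a 'non-object': one entity with a simulated behavior in the
// game engine and a hook through which that behavior reaches physical
// hardware (lights, motors, audio). Illustrative only.
public class NonObject : MonoBehaviour
{
    [SerializeField] float bobAmplitude = 0.25f; // virtual-only behavior
    [SerializeField] float bobFrequency = 0.5f;  // oscillations per second

    // Normalized 0..1 signal for whatever physical device is listening,
    // e.g. a dimmable light or a motor speed controller.
    public UnityEvent<float> OnIntensity;

    Vector3 origin;

    void Start() => origin = transform.position;

    void Update()
    {
        // Virtual half: the object bobs inside the simulation.
        float phase = Mathf.Sin(2f * Mathf.PI * bobFrequency * Time.time);
        transform.position = origin + Vector3.up * bobAmplitude * phase;

        // Physical half: the same motion is broadcast as an intensity
        // signal, so the room can respond to the simulation.
        OnIntensity?.Invoke(Mathf.InverseLerp(-1f, 1f, phase));
    }
}
```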
Through a series of short experiments (which can be found here), we at Novel are constantly searching for interesting behaviors found in the translation between mediums, from digital to physical and back. We begin each experiment with a simple rule: each AR app prototype has to highlight a unique type of interaction that is half digital, half physical. For example, a virtual pendulum whose acceleration affects the lights in the room, a modular synthesizer that drives the motion of a dancing cube, or a sinusoidal reaction of a virtual sphere driven by robotic manipulation.
Reactive Lighting, Robotic Manipulation — Novel
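The pendulum experiment shows how little code this kind of translation takes. As a hedged sketch: estimate the bob’s acceleration each physics tick and remap its magnitude to a light’s intensity. A Unity Light stands in for the physical fixture here, and the mapping constants are illustrative.

```csharp
using UnityEngine;

// Sketch of the pendulum experiment: the acceleration of a simulated
// pendulum bob is remapped to the intensity of a light. In the physical
// version the same value would be sent to the room's fixtures.
public class PendulumLight : MonoBehaviour
{
    [SerializeField] Rigidbody bob;         // pendulum bob, e.g. on a HingeJoint
    [SerializeField] Light roomLight;       // stand-in for a physical fixture
    [SerializeField] float maxAccel = 20f;  // accel (m/s^2) mapped to full brightness

    Vector3 lastVelocity;

    void FixedUpdate()
    {
        // Estimate acceleration by differencing velocity between ticks.
        Vector3 accel = (bob.velocity - lastVelocity) / Time.fixedDeltaTime;
        lastVelocity = bob.velocity;

        // Remap |acceleration| to a 0..1 brightness for the light.
        roomLight.intensity = Mathf.Clamp01(accel.magnitude / maxAccel);
    }
}
```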
Another Novel project, entitled “There, Not Here”, is a prototypical multiplayer experience that aims to challenge the user’s perception by suspending them between the virtual and the real. The user is first confronted with a robot and a table, the robot hovering over the surface as if searching for something. By opening the AR app, the user unlocks the other half of the experience and becomes an active participant in a game that highlights various laws of physics. Real-time control of the robot is driven by the spatial interactions of the participants.
There, Not Here — Novel
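To make “driven by the spatial interactions of the participants” concrete, here is one plausible version of that mapping, assuming each participant’s AR device exposes a tracked Transform: the robot’s hover target follows the smoothed centroid of everyone around the table, clamped to the table’s bounds. The tracking setup, bounds, and smoothing are assumptions; the installation itself may weight interactions differently.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch of the multiplayer mapping in "There, Not Here": participants'
// tracked positions steer the point the robot hovers over. The robot
// itself would be driven by a bridge like the one sketched earlier.
public class SharedTargetController : MonoBehaviour
{
    [SerializeField] List<Transform> participants; // tracked AR devices (assumed)
    [SerializeField] Transform robotTarget;        // pose streamed to the robot
    [SerializeField] Vector2 tableSize = new Vector2(1.2f, 0.8f); // metres, assumed
    [SerializeField] float hoverHeight = 0.15f;
    [SerializeField] float smoothing = 2f;

    void Update()
    {
        if (participants.Count == 0) return;

        // The shared target is the centroid of all participants,
        // projected onto the table plane and clamped to its bounds.
        Vector3 centroid = Vector3.zero;
        foreach (Transform p in participants) centroid += p.position;
        centroid /= participants.Count;

        Vector3 local = transform.InverseTransformPoint(centroid);
        local.x = Mathf.Clamp(local.x, -tableSize.x / 2f, tableSize.x / 2f);
        local.z = Mathf.Clamp(local.z, -tableSize.y / 2f, tableSize.y / 2f);
        local.y = hoverHeight;

        // Ease toward the target so the physical robot moves smoothly.
        robotTarget.position = Vector3.Lerp(
            robotTarget.position,
            transform.TransformPoint(local),
            smoothing * Time.deltaTime);
    }
}
```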
Game engines have the potential to transform how we interact with the world beyond the confines of rectangles. The experiments shown above only begin to touch on what we see as a rich area for human experience, one that we believe expands the potential of AR to become something more than just a visual overlay. AR will become more integrated into the spaces and devices we interact with every day, making our virtual experiences richer and more consequential than ever. We are excited about this new frontier, and we intend to keep chasing objects that we can’t quite see.