Sounds in Space was a 3.5-year project exploring the nature of sound, location, and orientation in space: essentially, what happens when we allow sound to exist relative to you and have a tangible presence.
The Sounds in Space technology allows you to place virtual sounds in a 3D space using a mobile phone. Once the sounds are placed, audience members can walk around the space wearing headphones to hear them, all of which are binaural and respond to the person's movement within the space.
We open-sourced all the code so that it can be used for your own experiments. It is (surprisingly) well documented on the GCL GitHub: https://github.com/googlecreativelab/sounds-in-space
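The released project is a Unity app built on ARCore, so the sketch below is not the repo's actual code, just a minimal illustration of the core idea in Python: every frame, take the phone's estimate of your head pose and re-express each anchored sound in head-relative coordinates, which is the input a binaural renderer needs. The function and variable names are my own, and the "identity orientation faces -Z" convention is an assumption.

```python
import numpy as np

def quat_conjugate(q):
    """Conjugate (inverse, for unit length) of a quaternion [w, x, y, z]."""
    w, x, y, z = q
    return np.array([w, -x, -y, -z])

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = [w, x, y, z]."""
    w, x, y, z = q
    u = np.array([x, y, z])
    return v + 2.0 * np.cross(u, np.cross(u, v) + w * v)

def source_in_head_frame(listener_pos, listener_quat, source_pos):
    """
    Transform a world-space sound anchor into the listener's head frame.
    A binaural renderer uses this relative vector (direction + distance)
    to choose spatial filters and distance attenuation.
    """
    offset = np.asarray(source_pos, dtype=float) - np.asarray(listener_pos, dtype=float)
    # Rotating by the inverse of the head orientation maps world
    # coordinates into head-relative coordinates.
    return quat_rotate(quat_conjugate(listener_quat), offset)

# Example: listener at the origin with identity orientation (assumed to
# face -Z), and a sound anchored one metre to their left.
rel = source_in_head_frame([0, 0, 0], np.array([1.0, 0.0, 0.0, 0.0]), [-1.0, 0.0, 0.0])
azimuth = np.degrees(np.arctan2(rel[0], -rel[2]))  # 0 deg = straight ahead
print(rel, azimuth)  # -> [-1. 0. 0.] -90.0 (i.e. 90 degrees to the left)
```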
Tea: So we're in your studio, but we're wearing headphones, so the sound gets very distorted. It's not natural sound, and it feels unnatural to be wearing headphones in this little room. But you can move between those sounds, and that allows you to understand the sound, and to understand the context of the space you're in, differently. It can affect how you experience the information.
Tea: Just the idea of moving the sounds around meant we could make it sound like we're in a vast concert hall. And then you begin to understand what happens if you can do that with objects in space. With all of this AR and VR stuff, we're always listening. We might be able to see the thing on a screen, but it doesn't feel real unless you can hear it in exactly the right place, and we can't hear it in exactly the right place, because the sound isn't coming from that place.
Tea: It's being generated artificially, and it can't easily be generated specifically for you, because your ears are a funny shape. So, because sound is subjective to you, that business of putting a sound in space precisely enough that it works becomes an incredibly challenging exercise. And it's one that I'm enjoying enormously. I think we'll get our ears scanned.
Tea: In a few years' time, I think you'll get your ears scanned just like you get your eyes tested, and you'll have headphones which literally adapt to the structure of your ear. Mainly, entirely, for the purpose of augmented reality, so that sounds can begin to come from very precise places in space. And they're going to need to know the shape of your ear in order for a sound to come from that precise place.
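What Tea is describing is, in effect, a personalised head-related transfer function (HRTF). As a hedged illustration (not anything from the project): a binaural renderer convolves a mono signal with a left/right head-related impulse response (HRIR) pair for the source's direction, and "ear scanning" would mean deriving those HRIRs from your own ear geometry rather than a generic dummy-head measurement. The placeholder filters below are made up for the example.

```python
import numpy as np

def binauralize(mono, hrir_left, hrir_right):
    """
    Render a mono signal binaurally by convolving it with the HRIR pair
    for the source's direction. Personalised spatial audio would swap in
    HRIRs derived from *your* ear shape instead of a generic measurement.
    """
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=-1)  # shape: (samples, 2)

# Toy example with random placeholder 64-tap HRIRs; a real renderer
# looks these up per direction from a measured (or scanned) dataset.
rng = np.random.default_rng(0)
mono = rng.standard_normal(48000)          # 1 s of noise at 48 kHz
hrir_l = rng.standard_normal(64) * 0.10
hrir_r = rng.standard_normal(64) * 0.05    # quieter right ear: source on the left
stereo = binauralize(mono, hrir_l, hrir_r)
```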
Debbie: So, customized listening.
Tea: Yeah, customized listening. Because then information doesn't need to come into you from a screen. It can begin to exist as sound.
Debbie: Is this something that you're working on at Google?
Tea: We're really interested in all the ARCore stuff, and anchoring sounds in space, and how you orient. There are things that humans do really well: filtering sensory perception being one, biases and filtering being another, and understanding your location in space being another.
You begin to realize how our understanding of reality, objective reality, is really reliant on those skills happening in our brain. So until we can allow the little supercomputers in our pockets to understand where they are in space, and to understand sensory perception the way we experience it by default as humans, these kinds of experiences will stay limited.
And when we do begin to play with those spaces, it's amazing, because you start to put information into the real world in the way that you actually experience it. I always talk about sliding doors, the doors that open and shut automatically, as a great example of augmented reality. You walk towards the door, the door understands your intent, which is to walk through it, and the mechanics of the door open it. Or sometimes they don't.
There's no real reason why that can't ... That sort of intent, when you think about how much information we know about you, and the patterns that you exist in on a daily basis. And for me, I like playing in cultural spaces. So theater gets massively changed when you move into a space where you've got whole other meta-layers of information around the performance, and books move into an interesting space too.