Architecture of Language and Sound

There are many ways we can talk about media, many lenses through which we can analyze the way the information and signals around us, both physical and digital, impact our lives.

I’ve long had an interest in different conceptual (and real!) dimensions of space: from the narrative worlds that can be constructed around and inferred from photographs, to the nested layers and networks of links and code that define and expand our digital spaces, to the Situationists’ playful, philosophy-driven exploration and cartography of urban landscapes. The idea of “space” (mathematical, physical, imaginative, abstract) has been a fertile framework and jumping-off point for exploring many of my other interests, from narrative to design to technology.

I’d like to focus here on a few related strands of spatial thinking, centered on the idea that language, and even more so sound, can construct physical space in different, often more direct ways than other media. I’m still thinking through many of these things, meandering from point to point as I go, so forgive me if this comes across as more a meditation on possibility than a thesis-bound statement of fact. (That seems to be one of the great benefits of writing in this format, and an approach I plan to continue taking; I’m more excited to learn from your responses than I am to simply publish a post and forget about it!)

When I think about the architecture of words and language, I’m focusing not primarily on the structure of words themselves (linguistic patterns, etymological building blocks) but on the superstructures that words, and the larger structures they form (sentences, paragraphs, poems, essays), potentiate.

My typical conception of the published text is one of books and magazines. It’s mostly flat. Even the hyperlinked structure of the Internet, the most obvious emergent text-based layer of information architecture, is mostly flat. So is the experience of reading on a mobile device, whether via magazine apps or bookmarked articles. Some of these may have multiple layers, but overall they’re still bound to a flat image: the paradigm of the page. So I’ve been thinking: what greater abilities might language have to construct a more robust architecture (in that grand spatial-conceptual sense), to create enhanced systems (narrative, perceptual, relational) all around us?

The poetry embedded in much great music is one way I’ve experienced language viscerally. Rap lyrics in particular: woven amidst other sonic elements, they can create what, at its best, amounts to a 3D sound-space (I’ll return to this idea later), imbued with spatial signifiers and the building blocks of imagery and narrative alike. Great rap has always been, for me, a wonderful medium of the imagination. Another is street art and public signage: text integrated into the built environment. It’s still a flat overlay without much interactivity, but it at least moves beyond the substrate of the page or screen. There must be further avenues for spoken language and the physicality of words to not just persist but explore new territories. The verbal narrative tradition is being brought back from its historical roots (oral storytelling and whatnot) through mediums like, as I mentioned, rap music, and notably through the expansion of radio into the digital omnisphere in the form of podcasts and digitally distributed audio shows (such as Radiolab).

These mediums have the potential to construct imagined spaces through sound, which I mention here as an important corollary to and extension of language, though I recognize that the two are conceptually quite different, and that I’m shifting directions a bit. Space created by sound can be significantly different from space created visually, because it can enable fully realized, multidimensional “mind-spaces” potentially more powerful and “real” than even 3D-ified video images. Whereas the latter remain mostly flat renderings of visual space, their illusion of dimension appreciably illusory (requiring the suspension of disbelief), sound can create spaces that feel viscerally immersive and, it seems to me, in certain ways much more able to provoke the extension of the imagination.

To take a brief detour into the mechanics of hearing (one for which I’m rather unqualified, but which will help to ground this discussion): one big reason I think of sound-based media as having greater dimensional potential than visual media is that the act of perceiving sound relies more on three-dimensional space than vision does. Sound is physical: it consists of vibrating waves that our ears capture mechanically (the human ear is an amazing machine!), and we determine the direction of a sound partly from the way those waves reflect through the curves of our outer ears. By analyzing some combination of intensity and delay between the two ears, our brains can locate sounds in space to a surprising degree of accuracy. Our eyes, by contrast, sit very close together; depth in our vision is more nuanced and more easily tricked. Furthermore, by processing echoes we can even infer information about the larger spaces in which sounds are created: differences in the boundaries (distances) and materials of, say, a phone booth and an airplane hangar produce radically different sonic signatures. With sound-design tools such as Logic’s Space Designer (a convolution reverb), we can realistically create sounds that seem to originate in any number of spaces.
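To put a rough number on that “intensity and delay” cue, here is a minimal sketch, assuming the idealized spherical-head approximation usually attributed to Woodworth and round-number values for head radius and the speed of sound, of the interaural time difference a listener experiences for a source at a given azimuth:

```typescript
// Interaural time difference (ITD) under the spherical-head approximation:
// ITD ≈ (a / c) * (theta + sin(theta)), for azimuth theta in radians.
// Assumed values: head radius a ≈ 0.0875 m, speed of sound c ≈ 343 m/s.
const HEAD_RADIUS_M = 0.0875;
const SPEED_OF_SOUND_M_S = 343;

function interauralTimeDifference(azimuthRadians: number): number {
  return (
    (HEAD_RADIUS_M / SPEED_OF_SOUND_M_S) *
    (azimuthRadians + Math.sin(azimuthRadians))
  );
}

// A source directly to one side (90 degrees) reaches the far ear only about
// 0.66 ms later than the near ear, yet the auditory system resolves that
// difference reliably enough to place the sound in space.
console.log(interauralTimeDifference(Math.PI / 2) * 1000); // ≈ 0.66 ms
```

A fraction of a millisecond, in other words, is all it takes for the brain to start building a map of where things are.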

All this is to say that sound carries greater interpretive and creative potential, with the onus of imaginatively extending a space placed closer to the listener than to the creator. This of course doesn’t always happen; far from it. Often sound seems just as flat and fixed in place as any image. But I’m interested to see web apps, native iOS apps, media installations, and the like take on a greater capacity to help users construct their own interpretation of a space. Backtracking just a bit: words have much greater potential than we often give them credit for to be the root of experiences that break beyond the prototypical planar iterations of information display. Sound has even more, and so much of this potential remains unrealized. It amazes me how many people still rely on crappy iPod headphones or built-in computer speakers for almost all their listening needs, when soundspaces created with intentionality can open up such a rich complement to visual experience and expand it so dramatically. But as things stand, there aren’t many works or experiences that require audio nuance, so I suppose this lack of infrastructure, so to speak, should really be no surprise.
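On the web-app front, here’s a minimal sketch of what I mean, using the standard Web Audio API: an HRTF-panned source positioned around the listener, run through a convolution reverb to place it inside a sampled acoustic space. The audio file and impulse-response URLs are placeholders, not real assets.

```typescript
// Minimal Web Audio sketch: an HRTF-panned source placed in a virtual room
// via convolution reverb. "voice.mp3" and "hangar-ir.wav" are placeholders.
const ctx = new AudioContext();

async function loadBuffer(url: string): Promise<AudioBuffer> {
  const response = await fetch(url);
  return ctx.decodeAudioData(await response.arrayBuffer());
}

async function play() {
  const source = ctx.createBufferSource();
  source.buffer = await loadBuffer("voice.mp3");

  // HRTF panning approximates the interaural intensity and delay cues
  // described above.
  const panner = new PannerNode(ctx, {
    panningModel: "HRTF",
    distanceModel: "inverse",
    positionX: 2,  // two meters to the listener's right
    positionY: 0,
    positionZ: -1, // slightly in front
  });

  // Convolution with a recorded impulse response places the source in a
  // sampled space, much as Logic's Space Designer does.
  const reverb = ctx.createConvolver();
  reverb.buffer = await loadBuffer("hangar-ir.wav");

  source.connect(panner).connect(reverb).connect(ctx.destination);
  source.start();
}

play();
```

In practice the call would need to come from a user gesture (a tap or click), since browsers won’t start audio playback without one.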

There are, however, several great examples I’d like to point to of excellent spatial audio work, examples that reach beyond the typical surround sound of games and film and point to paths worth exploring further. I don’t claim any deep knowledge of this field, and I welcome additional suggestions for examples and resources that deserve a second look.

Apps: I don’t have an iPad or iPhone (yet), so I can’t vouch for any of these firsthand, but they look great: Strange Attractor, Voco, Dimensions The Game and Inception the App (both by RJDJ), and a few others here

Games, in particular games for (or at least playable by) the visually impaired: Troopanum, Papa Sangre (read more about it here), and the work-in-progress Blindside

Binaural audio: Virtual Barber Shop; examples from Princeton’s 3D Audio & Applied Acoustics Lab