Lyraflo
A VR music theory learning experience, developed in Unity.
Skills: Unity, FMOD, Transformational Design, Sound Design, Music Composition, Music Theory
Lyraflo: Music Theory in VR, 2021 https://projects.etc.cmu.edu/lyraflo/
Game Designer, Sound Designer, Composer
Project Description:
Lyraflo aims to explore how the unique properties of VR could be used to convey music theory concepts to music beginners.
My Role:
On this 5-person team, I worked as a game designer and sound designer, collaborating closely with the artist and programmers to create interactions that integrate physical, visual, and audio feedback to teach music theory.
A Quick Overview:
Being constrained to certain music theory concepts for each prototype was a big design challenge. For example, one of our prototypes explored how to convey the concept of major and minor using a mix of visuals and audio transformations. I was tasked with composing a series of musical pieces that could dynamically switch between major and minor based on player input.
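As a rough illustration of how this kind of switch can be wired up in Unity with FMOD (the event path, parameter name, and script below are placeholders rather than Lyraflo's actual implementation), a single music event can expose a tonality parameter that crossfades between the major and minor arrangements:

```csharp
using UnityEngine;
using FMODUnity;
using FMOD.Studio;

// Minimal sketch: drives a hypothetical "Tonality" parameter on a single FMOD music event,
// letting FMOD Studio crossfade between the major and minor arrangements of the piece.
public class TonalitySwitcher : MonoBehaviour
{
    [SerializeField] private string musicEventPath = "event:/Music/TonalityTheme"; // illustrative path
    private EventInstance musicInstance;

    private void Start()
    {
        musicInstance = RuntimeManager.CreateInstance(musicEventPath);
        musicInstance.start();
    }

    // Called from the player interaction, e.g. grabbing or flipping an object in VR.
    public void SetTonality(bool major)
    {
        // 0 = minor, 1 = major; the parameter itself is assumed to be authored in FMOD Studio.
        musicInstance.setParameterByName("Tonality", major ? 1f : 0f);
    }

    private void OnDestroy()
    {
        musicInstance.stop(STOP_MODE.ALLOWFADEOUT);
        musicInstance.release();
    }
}
```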
Lyraflo Trailer
This is an early version of the tonality prototype. I composed the music that can shift between various tonalities.
Music Composition
Composing Musical Pieces that Sound Good in Both Tonalities
For players to clearly hear the difference between tonalities, each piece had to stay almost entirely within a single tonality at any given moment, sounding unambiguously major or unambiguously minor. When composing, I first mapped out the chord progressions so that no matter when the player switched the tonality, the music would still sound good. This meant I could only use chords that share the same root in both the major and minor keys, such as those built on the first, fourth, and fifth scale degrees. The melodies also had to emphasize thirds, since the third is the interval that distinguishes major from minor and makes the two clearly discernible.
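To make the constraint concrete, the small standalone snippet below (illustrative only, not project code) builds the triads on the first, fourth, and fifth degrees of C major and C natural minor; the roots and fifths match in both keys, and only the thirds differ:

```csharp
using System;
using System.Linq;

// Illustrative only: compares the triads on scale degrees 1, 4, and 5 in C major and C natural minor.
// The roots (C, F, G) and fifths are identical; only the thirds (E/Eb, A/Ab, B/Bb) change,
// which is why those chords keep working no matter which tonality is currently active.
class TriadComparison
{
    static readonly string[] NoteNames = { "C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B" };
    static readonly int[] MajorScale = { 0, 2, 4, 5, 7, 9, 11 }; // semitones above the tonic
    static readonly int[] MinorScale = { 0, 2, 3, 5, 7, 8, 10 }; // natural minor

    // Stack two diatonic thirds on top of the given scale degree (1-based).
    static string Triad(int[] scale, int degree)
    {
        int i = degree - 1;
        var pitches = new[] { scale[i % 7], scale[(i + 2) % 7], scale[(i + 4) % 7] };
        return string.Join("-", pitches.Select(p => NoteNames[p]));
    }

    static void Main()
    {
        foreach (int degree in new[] { 1, 4, 5 })
            Console.WriteLine($"Degree {degree}: major {Triad(MajorScale, degree)} | minor {Triad(MinorScale, degree)}");
        // Degree 1: major C-E-G | minor C-Eb-G
        // Degree 4: major F-A-C | minor F-Ab-C
        // Degree 5: major G-B-D | minor G-Bb-D
    }
}
```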
Making the Music More Tolerable
As you can imagine, music written under these chord constraints could become a bit monotonous. My first instinct was to write polyphonic music to make things more interesting, but doing so confused playtesters. I opted instead to keep the music mainly monophonic, playing with timbre and pitch to create more sonically interesting compositions. After these adjustments, playtesters had much more success hearing the difference between major and minor, with 20 out of 22 testers correctly identifying the tonality of the pieces they heard.
Drawing Players' Attention to Auditory Cues
Another challenge we faced was that many players did not pay enough attention to the audio. To remedy this, here are two examples of how I used sound design to make audio changes the first thing players notice.
Creating Feedback Sequences
The first example is the use of time: offsetting when the visual and audio feedback happen. When players triggered an interaction, we would often play the visual and audio feedback in sequence rather than simultaneously. Presenting too much feedback at once forces players to subconsciously choose one stimulus to focus on, and in most cases that choice is the visual one. Sequencing the feedback created interactions with dedicated moments for players to focus solely on the audio, which conveyed the musical concepts far better.
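As a rough sketch of the idea (the component, field names, and timing below are illustrative, and the audio-first ordering is just one possibility), a simple coroutine can stagger the two stimuli so the audio gets a moment to itself:

```csharp
using System.Collections;
using UnityEngine;

// Illustrative sketch of sequenced feedback: when an interaction fires, the audio cue
// plays first with nothing competing for attention, and the visual response follows
// after a short gap. Field names and timing are placeholders.
public class SequencedFeedback : MonoBehaviour
{
    [SerializeField] private AudioSource audioCue;      // hypothetical audio feedback
    [SerializeField] private ParticleSystem visualCue;  // hypothetical visual feedback
    [SerializeField] private float gapSeconds = 0.6f;   // space between the two stimuli

    public void OnPlayerInteraction()
    {
        StartCoroutine(PlayFeedbackSequence());
    }

    private IEnumerator PlayFeedbackSequence()
    {
        audioCue.Play();                              // let the player focus solely on the sound
        yield return new WaitForSeconds(gapSeconds);
        visualCue.Play();                             // then reinforce with the visual response
    }
}
```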
Creating Soundscapes with Intent
Initially, Lyraflo's soundscape was always quite full, with music, ambience, and sound effects in every scene. After playtesting, however, it became clear that the soundscapes needed more deliberate design. For example, when players are introduced to a concept such as major or minor for the first time, they only need to hear music showcasing those tonalities; any other sound is simply a distraction. On the other hand, when players are taught how different tonalities create different moods and atmospheres, ambience and sound effects become crucial. Considering what information the player needs at a given moment, and building a soundscape that supports that need, was key to Lyraflo's success. A few of the right sounds at the right moments communicated information far more effectively than a panoply of sound assets.
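One way this kind of gating could be expressed in Unity with FMOD (the bus paths below are assumptions, not the project's actual mixer layout) is to mute or unmute entire mixer buses depending on what the current scene is trying to teach:

```csharp
using UnityEngine;
using FMODUnity;
using FMOD.Studio;

// Illustrative sketch: strips the soundscape down to just music while a tonality concept
// is being introduced, then brings ambience and sound effects back for mood-focused scenes.
// Bus paths are placeholders and would need to match the FMOD Studio mixer.
public class SoundscapeController : MonoBehaviour
{
    private Bus ambienceBus;
    private Bus sfxBus;

    private void Start()
    {
        ambienceBus = RuntimeManager.GetBus("bus:/Ambience"); // hypothetical bus path
        sfxBus = RuntimeManager.GetBus("bus:/SFX");           // hypothetical bus path
    }

    // Music-only: used when a concept such as major vs. minor is introduced for the first time.
    public void FocusOnMusic()
    {
        ambienceBus.setMute(true);
        sfxBus.setMute(true);
    }

    // Full soundscape: used when teaching how tonality shapes mood and atmosphere.
    public void FullSoundscape()
    {
        ambienceBus.setMute(false);
        sfxBus.setMute(false);
    }
}
```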