AI Generated Sound Effects

This research explores using AI to generate sound effects for games. As gaming platforms continue to expand, so does the demand for fresh game audio content: content that must be rich in variety and flexible enough to adapt to the complexity and diversity of game settings and environments. This application, AISFX, uses a variational autoencoder (VAE) to model and synthesize “procedural audio” as an alternative to the prevailing contemporary approach of playing back pre-recorded audio files. Demonstrations of the approach are at the links below:

AISFX Demo: AISFX
GameSoundCon Demo: AI Monster Sounds
Full Presentation: AI SFX Talk
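
To give a flavor of the underlying technique (a minimal sketch, not the AISFX implementation itself), the PyTorch model below is a bare-bones VAE over spectrogram frames; the layer sizes, latent dimension, and loss weighting are hypothetical choices made for this example.

```python
import torch
import torch.nn as nn

class SpectrogramVAE(nn.Module):
    """Minimal VAE over fixed-size spectrogram frames (illustrative only)."""

    def __init__(self, n_bins=513, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_bins, 256), nn.ReLU())
        self.to_mu = nn.Linear(256, latent_dim)
        self.to_logvar = nn.Linear(256, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, n_bins), nn.Softplus(),  # non-negative magnitudes
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterization trick: sample z while keeping gradients flowing.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.decoder(z), mu, logvar

def vae_loss(recon, x, mu, logvar, beta=1.0):
    # Reconstruction error plus KL divergence to the unit-Gaussian prior.
    recon_err = ((recon - x) ** 2).sum(dim=-1).mean()
    kl = (-0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=-1)).mean()
    return recon_err + beta * kl
```

Once trained, new frames come from sampling z from the unit Gaussian and decoding, with the magnitudes inverted back to audio (e.g., via Griffin-Lim); interpolating and perturbing z provides the parametric, procedural control that playback of fixed recordings lacks.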

Feature-Based Delay Line Using Real-Time Concatenative Synthesis

This research introduces real-time concatenative synthesis for a novel Feature-Based Delay Line (FBDL). Unlike a traditional delay, the FBDL segments and concatenates the wet signal based on specific audio features. The paper outlines the process, emphasizing targeting, feature extraction, and concatenative synthesis, with insights on use cases, performance evaluation, and potential advances in digital delay lines.
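
To make the mechanism concrete, here is a loose offline sketch in Python, not the real-time algorithm from the paper: the signal is cut into fixed-length segments, one feature (spectral centroid) is extracted per segment, and each output slot is filled by the best-matching segment from at least one delay time in the past. The segment length, feature choice, and matching rule are assumptions made for this illustration.

```python
import numpy as np

def spectral_centroid(frame, sr):
    """One feature per segment; a real FBDL can draw on a richer feature set."""
    mag = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    return float((freqs * mag).sum() / (mag.sum() + 1e-12))

def feature_based_delay(dry, sr, delay_sec=0.35, seg_len=2048, mix=0.5):
    """Offline sketch of targeting, feature extraction, and concatenative
    synthesis: the current segment's feature is the target, and past
    segments form the corpus of candidate units. `dry` is a mono float array."""
    n_seg = len(dry) // seg_len
    segs = [dry[i * seg_len:(i + 1) * seg_len] for i in range(n_seg)]
    feats = [spectral_centroid(s, sr) for s in segs]
    delay_segs = max(1, int(delay_sec * sr / seg_len))

    wet = np.zeros_like(dry)
    for i in range(delay_segs, n_seg):
        # Candidates lie at least one delay time in the past, so the match
        # can never trivially be the current segment itself.
        candidates = np.array(feats[:i - delay_segs + 1])
        j = int(np.argmin(np.abs(candidates - feats[i])))
        wet[i * seg_len:(i + 1) * seg_len] = segs[j]
    return (1.0 - mix) * dry + mix * wet
```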

The research was authored by UC Santa Cruz student Niccolo Abate under my mentorship. It was presented at the 26th International Conference on Digital Audio Effects (DAFx) at Aalborg University Copenhagen in 2023.

The paper is available in the full conference proceedings: DAFx 2023 Proceedings

Quasar Spectroscopy Sonification

This research presents sonification approaches to support research in astrophysics, using sound to enhance exploration of the intergalactic medium and the circumgalactic medium. The approaches convey key spectral features used in absorption line spectroscopy. The result is a novel software tool, Quasar Spectroscopy Sound, that enables researchers to analyze cosmological datasets via the sonification techniques presented.
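
As a simplified illustration of this kind of mapping (hypothetical, and not necessarily the mapping used in Quasar Spectroscopy Sound), the Python sketch below sweeps across a spectrum, mapping wavelength to pitch on a logarithmic scale and absorption depth to loudness, so that absorption lines emerge as accented tones.

```python
import numpy as np

def sonify_spectrum(wavelengths, flux, sr=44100, dur_per_bin=0.01,
                    f_lo=110.0, f_hi=1760.0):
    """Parameter-mapping sketch: one short tone per spectral bin, with
    wavelength -> pitch (log scale) and absorption depth -> loudness."""
    wavelengths = np.asarray(wavelengths, dtype=float)
    flux = np.asarray(flux, dtype=float)
    depth = np.clip(1.0 - flux / np.max(flux), 0.0, 1.0)  # 0 at the continuum
    w = (wavelengths - wavelengths.min()) / np.ptp(wavelengths)
    freqs = f_lo * (f_hi / f_lo) ** w                      # log-frequency map

    t = np.arange(int(sr * dur_per_bin)) / sr
    out, phase = [], 0.0
    for f, a in zip(freqs, depth):
        out.append(a * np.sin(2 * np.pi * f * t + phase))
        # Carry phase across bins to reduce clicks at tone boundaries.
        phase = (phase + 2 * np.pi * f * len(t) / sr) % (2 * np.pi)
    return np.concatenate(out)
```

Played end to end, a flat continuum stays near silence while deep absorption lines ring out, so a listener can scan a spectrum for features by ear.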

The paper was published in the November 2020 issue of the Journal of the Audio Engineering Society (AES) and is available here: Quasar Spectroscopy Sonification

Network Sonification

This research explores how sonic features can be used to represent network data structures that define relationships between elements. Through a set of pilot studies, it presents initial findings on users' ability to understand, decipher, and re-create sound representations in support of primary network tasks: counting the elements in a network, identifying connections between nodes, determining the relative weight of those connections, and recognizing which category an element belongs to.
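
As a toy example of such a mapping (a hypothetical design, not the exact one from the studies), the Python sketch below assigns each node a pitch according to its category and renders each edge as a two-note dyad whose loudness encodes the connection weight.

```python
import numpy as np

def sonify_network(nodes, edges, sr=44100, note_sec=0.4):
    """`nodes` maps node -> category index (semitones above a base pitch);
    `edges` is a list of (a, b, weight) triples. Each edge becomes a dyad
    of the two nodes' pitches, scaled by the edge weight."""
    base = 220.0
    pitch = {n: base * 2.0 ** (cat / 12.0) for n, cat in nodes.items()}
    t = np.arange(int(sr * note_sec)) / sr
    env = np.hanning(len(t))                 # soft attack/decay envelope
    clips = []
    for a, b, w in edges:
        dyad = np.sin(2 * np.pi * pitch[a] * t) + np.sin(2 * np.pi * pitch[b] * t)
        clips.append(w * env * dyad / 2.0)   # weight -> loudness
    return np.concatenate(clips)

# A three-node network with two weighted connections, played edge by edge.
audio = sonify_network({"A": 0, "B": 4, "C": 7},
                       [("A", "B", 1.0), ("B", "C", 0.5)])
```

Played sequentially, the number of dyads supports counting connections, shared pitches across consecutive dyads hint at shared nodes, and relative loudness conveys edge weight.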

The accompanying paper was published in the 2019 proceedings of the International Conference on Auditory Display (ICAD) and can be downloaded here: Network Sonification ICAD

Sensations of Tone

This research explores sonic domains of sensory dissonance in a spatial context. The project was realized using the AlloSphere facility at the California NanoSystems Institute, University of California, Santa Barbara. The result was an interactive sound installation that allowed a user to control and position sonic fields of varying intensities of sensory dissonance and harmonicity.

Concepts and details relating to the piece are documented in my Ph.D. dissertation, available here: Dissertation.

Sensory Dissonance and Sonic Sculpture

This research explores sonic applications of sensory dissonance. In particular, it aims to develop a model for representing sensory dissonance in three-dimensional space. With such a model established, three-dimensional dissonance fields can be constructed, producing “contours” of sound that form sonic sculpture.
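
A standard way to quantify sensory dissonance between two partials is the Plomp-Levelt roughness curve in Sethares' parameterization; the Python sketch below uses it to evaluate a dissonance field at points in 3D space. The 1/distance amplitude falloff is an assumption of this example, not necessarily the spatial model developed in the paper.

```python
import numpy as np

def pair_dissonance(f1, f2, a1=1.0, a2=1.0):
    """Plomp-Levelt roughness of two partials (Sethares' approximate fit)."""
    b1, b2 = 3.5, 5.75
    s = 0.24 / (0.021 * min(f1, f2) + 19.0)   # critical-band scaling
    d = abs(f2 - f1)
    return a1 * a2 * (np.exp(-b1 * s * d) - np.exp(-b2 * s * d))

def field_dissonance(point, sources):
    """Total dissonance heard at a 3D point: sum pairwise roughness over all
    partials, attenuating each source's amplitudes by 1/distance."""
    partials = []
    for pos, freqs, amps in sources:
        dist = np.linalg.norm(np.asarray(point, float) - np.asarray(pos, float))
        partials.extend((f, a / (dist + 1e-3)) for f, a in zip(freqs, amps))
    total = 0.0
    for i in range(len(partials)):
        for j in range(i + 1, len(partials)):
            (f1, a1), (f2, a2) = partials[i], partials[j]
            total += pair_dissonance(f1, f2, a1, a2)
    return total

# Example: two harmonic sources a minor second apart; sampling
# field_dissonance over a grid of points traces out the dissonance contours.
harmonics = lambda f0: ([f0 * k for k in range(1, 7)],
                        [1.0 / k for k in range(1, 7)])
sources = [((0.0, 0.0, 0.0), *harmonics(220.0)),
           ((1.0, 0.0, 0.0), *harmonics(233.1))]
print(field_dissonance((0.5, 0.0, 0.0), sources))
```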

The accompanying paper was published in the 2014 proceedings of the International Computer Music Conference (ICMC) and can be downloaded here: Sensory Dissonance and Sonic Sculpture