
The Rise of the Machines: Surviving the A.I. Music Apocalypse

GPU Audio

A.I. is here and it's changing the fundamental ways in which we interact with and use computers, with some spectacularly powerful and accessible tools in industries stretching far beyond music and audio.

In an expansive conversation on A.I., covering its implications, applications, and prospective futures, we brought together a selection of luminaries in the field: Ale Koretzky from Splice, Andrew Fyfe of Qosmo, and Zach Evans from Harmonai. The conversation was chaired by Jonathan Wyner and took place amongst a number of performances, presentations, and panels at the GPU Audio Innovation Lounge, recorded live at the NAMM Show 2023.

We begin with some context, comparing the emergence of A.I. to the advent of MIDI, and how, at the time, the idea was so disruptive to the status quo that for many it seemed like an unnerving development for musical artists - many fearing being made obsolete or redundant by the technology. We're acutely aware that MIDI has paved the way for exponential development in modern music, with version 1 still the go-to standard for interaction between hardware and software instruments over 40 years later. So, with A.I., there's nothing to be scared of, right?

Your Creative Identity

One key element the panel covers early on is the importance of maintaining your creative identity when working with A.G.I. systems (Artificial General Intelligence). Ale Koretzky explains: "The end-to-end nature of A.I. models, most of the time, leaves little to no room for creative input along the way. And what ends up happening is that humans get disengaged, because it's not the type of thing that you want to see as part of a creative tool." He continues: "There's no one right or wrong way to do things, but if we're talking about building tools for human creation, you need to pay special attention to how you deploy A.I. systems".

Imaging software has long had the ability to interact with content to remove or customise specific elements; Zach Evans compares this approach to generating new audio in his music-making process: "I was working in the image space, the image A.I. space, where I'm taking a picture and getting a variation of that by just saying 'vary this semantically, nothing else'. That would be a great use case for audio - if I need more drum fills, or I need more bass sounds and I've got one that I like, I need ones that are similar to those, not the same".

A.I. models also, to some extent, have their own personality, which can be developed and refined to suit the user's needs, or even to bring them unexpected results. Andrew Fyfe believes that: "The good thing about working with A.G.I. right now is that they all have their own behaviours and characteristics and all of these inflections to the sound and characteristics. And that's all highly subjective... that could be something that you really like. They'll have artefacts and characteristics and that can be really inspiring as an artist or musician as well".

With a myriad of processing technologies available, it's vitally important to keep the human at the centre, as Ale Koretzky points out with some developmental questions: "Where's my place now in this new scheme of things? How do I keep the human at the centre, but that centre moves with technology? What we're seeing more and more with A.I. is that creation starts looking a lot more like curation, where there's more elements of curation within what's going to be considered creation. It's an evolution of what can be considered a creative action". Andrew Fyfe continues: "If you have an A.I. that sort of plays with you, which is very playful with what it creates - that can open up a world of opportunities to express yourself".

The Use & Misuse of A.I.

A.I. can be used with specific goals in mind and, like many processes, can be misused for creative good. With the technology still in the infancy of its development, there is huge potential for positive changes and improvements using these platforms, which still allow for those serendipitous happy accidents.

Zach Evans gives us some real-world examples of misuse which have radically changed the development of music: "It was before my time, but things like the 808 and 303 are my go-to examples here - the TB-303 bass synthesiser by Roland was made so that you could have a bassline playing next to you while you were playing guitar and singing near an open mic... it was made to replace bassists, with sounds like a bass guitar. Not really, right, it was the 80s - it sounded like a Roland bass machine! But take that resonance knob, turn it all the way up, pick that sequence and you've got the beginnings of acid house - and that is what created rave culture." He continues: "Auto-Tune is a classic example - the guy that made Auto-Tune was making that to help singers sing better, and then Cher turned the timing all the way to zero to create a new effect, and that created two decades of vocal sound design".

So we can be sure that A.I., like most things, will bring us surprises along the way, and could, in theory, shape the future landscape of the music scene in ways we cannot foresee.

Challenges of Developing A.I.

Looking at and overcoming the challenges of developing A.I. is also key to how it ends up shaping our world. There are technological, logistical, and computational challenges - especially when it comes to the audio fidelity we've come to expect, with early models developed at low-fidelity sample rates for lightweight processing, which don't reflect the high-fidelity industry we now operate in.

Zach Evans suggests that "trying to get the stuff from the research side and getting that sounding good and then delivering it in ways that are intuitive, but don't lose a lot of expressivity by the modelling is one of the big challenges right now". Ale Koretzky continues: "To get to the quality that generative neural synthesis is intended to get, there's a lot more investment that needs to happen. We're not there yet. I'm sure it's gonna happen. But again, what is it gonna take to get there, right?". Perhaps GPU Audio processing can help.

With a greater emphasis on audio fidelity and latency than ever before, Koretzky explains: "We're moving into a place where you see larger and larger models - we're talking billions of parameters. Those models, in many cases, cannot run on a client environment; they each run in the cloud, and that costs money. If you're trying to build a product, you want that product to be cost effective, right?"

Community: The Cyclic Nature of A.I. Development

Andrew Fyfe's Qosmo has released a free A.I. platform called Neutone, making this incredible technology accessible for experimentation on both Windows and macOS: "With Neutone, we tried to bridge the gap by allowing an easy way for researchers to deploy models, and a familiar interface for musicians and artists. So I think what we've seen from that is that there seems to be this cyclic relationship between the two, where the research goes to the artists, and the artists use that in interesting ways. And that informs the research".

The end user is often the focus, so communities are exceptionally important places for people to come and share ideas, as Andrew explains: "In the last few years, I've seen the emergence of all these communities - communities that are loaded with researchers, academics, artists, and musicians. They're coexisting together, and working together and building or informing the development of all these new tools in a way that we didn't see in the past". He continues: "There's a lot of artists and musicians attending talks by academics, because you can see how practical the research is, and I feel like they're a lot closer now than they've ever been."

Zach Evans explains how this fusing of worlds is being approached by Harmonai and with open source software: "As far as Stability A.I. goes, we provide compute grants to different researchers and groups trying to push forward the open source stuff. And you know, open source is really nice, because if it's just a thing you can download it and learn - it's pretty much free, minus the cost of having a GPU and knowing how to run these things".

And the benefits increase when we consider people starting out, too: "In terms of access, a lot of the really cool tools in music production could be an awesome VST that costs $300, and for the bedroom producer who doesn't have a job, or you know, is 15 years old, or a student in high school, they can't really afford it. But having these technologies available in a way that's accessible is really nice for that".

Accessibility & Diversity in A.I.

A diverse landscape of users and potential developers is forming, and Jonathan Wyner goes further in the discussion: "You know, one of the things that I've been keenly aware of is that a lot of this space is occupied by electronic music production. Partly because I think it's less expensive. It's easier. I also think the kids love their electronic music. And they're sort of the ones developing the datasets and the tools that we're working with. But I also think about language, I think about different identities being represented in whatever way that's possible".

Zach Evans makes the point that "If you've got artists whose whole style is putting together weird glitchy sounds, then that's a great product fit for them. I don't really have anything yet for the person playing acoustic guitar and singing in a cafe! You know, a lot of the time it comes down to the technology, what it is good at making, and which genres and artists are at the hub of that work".

Ale Koretzky elaborates: "There are going to be tools and technology that work for a very local use - for someone who only records a guitar, or for an electronic music producer. I think there's always been this notion of 'we're going to build this tool and, you know, people are going to figure out how to benefit from it'. There has been a lot of that in the past two decades. I think now we're seeing more granularity in terms of who you are building for".

Can The System Learn Me?

Before closing, Jonathan Wyner poses a superb question to the group: "One of the questions I remember coming up early and often was: can the system learn me? And if there was some way of incorporating the idea of training a system, putting the user at the centre of the training, that could be a shortcut to dealing with some of this too, because by definition, you're not representing that person". Interesting stuff, and I'm sure we'll see more of this humanised customisation and personalisation in the future.

Andrew Fyfe closes with his views on the value of open source software: "For that reason of inclusivity as well, and making sure everyone's part of the conversation - like having access to these technologies. I think, with Harmonai, they're making a lot of their technologies open source, and with Neutone as well, we have open source components to that. It means everybody can be involved in the thing, which is super critical for making sure there's all these representations, culturally, across the world".

Ale Koretzky reflects on the focus of the panel: "Looking back to the title, there's not really a lot to worry about - there's no apocalypse coming, I think. But it's up to us - coming back to the point - how we keep the human at the centre, and how we use A.I. for good and for empowering creators".

And a final thought from Zach: "I would say, if you're interested in this stuff, now is the time to get into it, it's gonna keep being more and more, bigger and bigger. So it's not like you're gonna miss any trains. But now the community is still pretty small, very willing to talk. And they're just starting this stuff - you can get in now on, let's say, the ground floor, but it'll all keep going. So yeah, now is the time to do this stuff!"

Jonathan Wyner closes the discussion: "I absolutely encourage everybody here to pay attention to the space, if for no other reason than to just be informed about what the tools are. Without that, I think we can easily fall into the trap of being very, very afraid of the apocalypse". Indeed.

Thanks to all of our panellists, participants, and people who made the Innovation Lounge the creative hub which it was. Watch the conversation in full on the GPU Audio YouTube channel, and look out for more from the GPU Audio NAMM Show coming online soon.

Watch the panel on YouTube here
