How will AI impact the next generation of DAWs? These developers have their say

As Apple, Google and Microsoft integrate machine learning and artificial intelligence into their consumer tech products en masse, the music production space is loudly finding novel ways to harness AI — with mixed results. From impressive stem separation tools and smart plugins to legally embattled text-to-music services such as Suno and Udio, you can bet there’s some form of AI already infiltrating your studio. But is it a threat?

READ MORE: Will RIAA’s lawsuit against Udio and Suno really be the win we’re hoping for?

While major DAWs are taking tentative steps towards incorporating AI and machine learning — like Logic Pro 11’s generative Session Players — it’s the smaller developers who seem to be doing the truly innovative, out-there thinking right now. Notable among them are Hit’n’Mix, developer of RipX DAW; Moises.AI, which makes a multi-platform app for pulling apart and modifying existing music; and WavTool, a browser-based system for generating, editing and remixing music.
AI has been on the mind of Martin Dawe, founder of Hit’n’Mix, for a very long time; his roots go back more than 30 years, to when he first started toying with the idea of audio separation. “In about 1997, I came up with the idea of scanning sheet music, and the guys at Sibelius contacted me and asked me to meet up. We still sell PhotoScore with Sibelius. A few years later, I started doing audio separation, trying to convert it into notation, and throughout the ’00s that developed into achieving higher and higher quality. By about 2010, I had the basic bones of Hit’n’Mix and we launched it to see if the technology could start to deconstruct music tracks.” Dawe continues, “About four years ago, we released Hit’n’Mix Infinity, which was the first pro version of the technology, and then machine learning algorithms were incorporated and we moved further into audio source separation as well as audio cleanup.”
Geraldo Ramos, co-founder and CEO at Moises.AI, had a much more recent conversion to the cause. “The development of Moises began in late 2019. I’m a drummer and always wanted to find a way to take a song and separate the tracks so I could mute the drum part and play it myself alongside the rest of the song. I thought AI could be used to make this happen, and then I thought, ‘Why not start a business?’ It was like a weekend hackathon. That Monday, I shared my work with Eddie Hsu (my co-founder and COO) because he’s an accomplished musician and knows music so well. He was pretty excited about it, so we kept working on it.”
Hit’n’Mix RipX DAW. Image: MusicTech
Keith Chia, developer of WavTool, focused his efforts on the browser platform. “We launched in March 2023, but development on WavTool started quite a while before that. The idea for WavTool came about when Sam Watkinson was working on a number of web audio projects, experimenting with what was possible with audio manipulation in a browser. Eventually, he created a platform to tie together all these experiments, which took the rough shape of a proto-DAW.”
Why has AI taken off so rapidly in the last two years? Part of it is purely technical: Apple is building ever-more-powerful neural processors into its Apple Silicon computers, while certain graphics cards available for PCs can hugely accelerate the power-hungry processes that make AI work. But there has also been a corresponding surge in the development of software tools, as people start to see what is possible, especially in terms of generating music, and demand increases.
Moises’ Ramos expands on this: “AI has been powering music software for a while — we’ve been using AI since 2019. But new and improved AI models are coming to market constantly, and that’s definitely accelerated in the last couple of years. Part of it is the growth in R&D, but also strong interest and demand from the marketplace. There’s a big appetite for quality AI tools in creative fields.”
WavTool’s Chia gave us his thoughts on the rise of AI: “Generative AI has definitely been having its moment in the last year by removing the line between creation and consumption. The act of making art has been simplified by removing entire chains of complicated tooling from the process — people can create a song just by inputting a text prompt. It’s a very simple way of igniting the creative spark in people by showing them that they can flex their creative muscles without having to spend thousands of hours mastering an instrument or learning complicated production software. We’re giving them an easy way to rely on AI to navigate the difficult parts of the music creation process while enhancing their expressiveness, creativity, and play.”

RipX DAW is arguably the most powerful AI music suite out there at the moment, and also the closest to the kind of desktop DAW you might recognise — though it has tools that go way beyond what others currently offer. Riley Knapp, a drummer, producer and songwriter now working as an AI business strategist for Hit’n’Mix, explains that, in his view, the product solves an important problem for catalogues and tracks you don’t have stems for: you can now separate them out and clean up the audio in ways that weren’t possible five years ago. In other words, you can deconstruct mixed tracks and pull out whatever you need to edit, clean up or otherwise modify.
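For a sense of how accessible this class of technique has become, here’s a minimal sketch of pulling stems out of a finished mix with the open-source Demucs library. It illustrates the general approach, not Hit’n’Mix’s own engine, and the input file name is a placeholder.

```python
# Stem separation with the open-source Demucs library: a sketch of
# the general technique, not Hit'n'Mix's engine. "mix.wav" is a
# placeholder input file.
import torch
import torchaudio
from demucs.pretrained import get_model
from demucs.apply import apply_model

model = get_model("htdemucs")  # pretrained 4-stem model
model.eval()

wav, sr = torchaudio.load("mix.wav")
wav = torchaudio.functional.resample(wav, sr, model.samplerate)
if wav.shape[0] == 1:          # the model expects stereo input
    wav = wav.repeat(2, 1)

with torch.no_grad():
    # apply_model returns (batch, sources, channels, samples)
    stems = apply_model(model, wav[None], device="cpu")[0]

for name, stem in zip(model.sources, stems):  # drums, bass, other, vocals
    torchaudio.save(f"{name}.wav", stem, model.samplerate)
```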
The experience of using AI to generate music is very different to learning an instrument or even programming MIDI in a grid. At first, it can seem alien, but as Chia explains, it’s just progress.
“Voice and text interactions are now becoming more commonplace — verbal instructions can be translated to precise technical adjustments using AI. Interfaces and processing power are now increasingly decoupled with cloud computing, and devices won’t need to be tethered to the studio any more. It’s an exciting time to experiment with all sorts of new ways to make music, and it’s difficult to tell what will stick around, but technological innovation has been a constant in the progression of the music industry.”
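As a deliberately toy illustration of that last point, translating a verbal instruction into a technical adjustment ultimately means mapping text onto a parameter change. A real system such as WavTool’s would use a language model for this; the pattern and parameter names below are entirely hypothetical.

```python
# Toy sketch: mapping a plain-English mix instruction onto a parameter
# change. The pattern and parameter names are hypothetical; a real
# system would use a language model, not a regex.
import re

PATTERN = re.compile(
    r"(boost|cut) the (bass|mids|treble) by (\d+(?:\.\d+)?) ?db", re.I
)

def parse_instruction(text):
    match = PATTERN.search(text)
    if match is None:
        return None
    direction, band, amount = match.groups()
    sign = 1.0 if direction.lower() == "boost" else -1.0
    return {"param": f"eq.{band.lower()}.gain_db", "value": sign * float(amount)}

print(parse_instruction("Boost the bass by 3 dB"))
# -> {'param': 'eq.bass.gain_db', 'value': 3.0}
```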
Even if you haven’t used it yourself yet, you might have heard more AI-generated or AI-assisted music than you realise. Riley from Hit’n’Mix fires off some surprising stats: 60 per cent of independent artists are already using AI in their music creation workflow, he says, but only 15 per cent want to discuss it. “It’s very taboo right now,” Riley says. “That’s the other beauty behind AI-generated music: a lot of these companies are being very ethical with it. AI music is built in a way that essentially allows copyright-free material to be generated from other material for which those users can be paid. That’s another area that can really be valuable, because you’re able to go in and see the DNA within the sounds, and you’re able to track and trace and understand what that musical DNA is, even within AI-generated music. It’s a very exciting time.”

And do these developers worry about the possible copyright implications of tools that let you unpick, unmix and remix other people’s music? Chia, on the whole, thinks not. “Human music is not going away, even though fully AI-generated music is going to be extremely good. The fact is, humans have strong tendencies towards community, creativity, fandom, and personal growth, and creating and sharing original music is one of the most powerful ways these tendencies are expressed. We don’t think the presence of new kinds of music will change these facts of human nature. With the right tools, AI will make the creation of original, human-authored music much more accessible by helping creative people focus on the parts of the production process that they do best.”
While AI-generated music might not always match human composition for style and quality, Hit’n’Mix thinks RipX’s hyper-detailed editing tools can remedy that. Says Riley: “It’s transformed into a creative tool that solves the problem of generative AI music being unlistenable a lot of the time — chaotic or robotic.” That could aid wider adoption, producing AI-assisted results that sit more easily on most people’s ears. Auto-generated music is also likely to become less robotic over time as the tools used to make it improve.
We ask Martin Dawe to explain, in layman’s terms, how RipX DAW actually works. “The ripping is split into two main processes. The first uses machine learning, where the algorithm works out how to separate the full track into different stems. The second converts it into the RipX audio format, which gives you the separated notes. That’s a bit trickier to explain, but essentially it finds all the frequencies that are present and then groups them into notes. And each of these notes contains not just vibrations but things like the frequency of the harmonics and their level, stereo position, all this type of thing.”
Credit: Hit’n’Mix
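That second stage, grouping detected frequencies into note objects, can be approximated with open-source tools too. Below is a minimal sketch that uses librosa’s pYIN pitch tracker to turn a separated, monophonic stem into a list of notes; RipX’s actual format captures far more per note (harmonic levels, stereo position and so on), so treat this as a simplified illustration.

```python
# Sketch: turn a separated, monophonic stem into discrete notes by
# tracking the fundamental frequency frame by frame and grouping
# consecutive frames with the same (rounded) pitch. A simplified
# illustration, not Hit'n'Mix's algorithm.
import librosa
import numpy as np

def stem_to_notes(path, fmin=65.0, fmax=2093.0):
    y, sr = librosa.load(path, sr=None, mono=True)
    # Frame-wise fundamental-frequency estimates (pYIN); f0 is NaN
    # in unvoiced frames.
    f0, voiced, _ = librosa.pyin(y, fmin=fmin, fmax=fmax, sr=sr)
    midi = np.round(librosa.hz_to_midi(f0))
    times = librosa.times_like(f0, sr=sr)

    notes, start, current = [], None, None
    for t, v, m in zip(times, voiced, midi):
        if v and m == current:
            continue  # still inside the same note
        if current is not None:
            notes.append({"midi": int(current), "start": start, "end": float(t)})
        start, current = (float(t), m) if v else (None, None)
    if current is not None:
        notes.append({"midi": int(current), "start": start, "end": float(times[-1])})
    return notes
```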
Once the data has been analysed, it’s presented in a unique way that’s quite unlike what you might be used to in a conventional DAW, but it’s completely malleable and doesn’t need to be re-processed each time you edit it. According to Dawe, changing the pitch or other characteristics of that data requires no complex digital signal processing algorithms once it’s in the RipX format.
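It’s easy to see why in miniature: once a note is stored as a fundamental frequency plus harmonic levels, transposing it is just arithmetic on the stored frequencies before resynthesis. A minimal sketch, with made-up harmonic levels:

```python
# Sketch: additive resynthesis of a parametric note. Because the note
# is stored as a fundamental plus harmonic levels, a pitch shift is
# plain arithmetic on frequencies, not heavy DSP. Values are made up.
import numpy as np

def render_note(f0, harmonic_levels, duration, sr=44100, semitones=0.0):
    f0 = f0 * 2 ** (semitones / 12)      # the entire "pitch edit"
    t = np.arange(int(duration * sr)) / sr
    out = np.zeros_like(t)
    for k, level in enumerate(harmonic_levels, start=1):
        out += level * np.sin(2 * np.pi * f0 * k * t)
    return out / max(1.0, float(np.max(np.abs(out))))

# An A3 (220 Hz) with a gentle harmonic rolloff, shifted up a minor third:
note = render_note(220.0, [1.0, 0.5, 0.25, 0.125], duration=1.0, semitones=3)
```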
So it’s capable of deconstructing whole mixes into an almost uniquely flexible format. But we’re interested to know who this toolkit is aimed at. Dawe says “it’s a very, very broad product.
“There’s a lot of remixers and DJs but an equal number of musicians who use it to look at songs and work out what notes are played so they can learn a tricky guitar part. Plus, obviously, producers wanting to adjust sounds inside a mix. I don’t think there’s any kind of musician that wouldn’t get some sort of benefit out of it.”
Just a couple of years into AI and machine learning becoming more mainstream in music production, we are already able to do things that were impossible not so long ago, especially the ‘unmixing’ of tracks and the generation of music based on simple prompts. But where do things go from here? Geraldo Ramos says there will be “tremendous growth in generative AI music applications.” He warns that “only those who can find a strong market fit and business model will survive.
Geraldo Ramos, CEO at Moises.AI
“It may also depend on how the legal landscape evolves. For those companies following the infringement-as-business-model strategy, changes in the legal landscape may determine the winners and losers. Companies will see incredible growth in the same way the industry benefited from the MIDI standard and the opportunity it created for synthesizers, samplers, and drum machines. It’s my hope that as music tech becomes more capable with AI, it will have a democratizing effect that empowers creators.”
Chia, meanwhile, explains why he thinks AI is going to become a crucial part of music production. “For the next few years, we expect the quality of models that generate and manipulate audio to improve dramatically. We expect AI song generators to get much more creative and controllable. We expect to see new kinds of audio manipulation becoming possible, as existing techniques are applied in novel ways and new techniques are developed. These improvements will give experts more options, while improvements in more broadly applicable tech (like multi-modal LLMs) will make natural language interfaces for music creation more and more effective for beginners. The forms of AI we’re working with today will evolve and change, but AI as a category of technology is only going to become more important.”
The last word goes to Dawe: “We’ve got our finger on the pulse of AI. With this Rip format, we have the advantage of being able to build it from the ground up rather than having to go back through 30 years of code and try to reimagine it. So we’re very aware of all the different tools coming out, and we’re really just thinking about what the best integrations would be, what the best ways to go about collaborating with some of these technologies would be, and making sure that we’re at the forefront of it, whatever it is. Because as we all know, these things change day to day. Whatever it may be, we will be at the forefront of having the best interface to use AI music in an ethical, creative and fun way.”
AI is only going to get more advanced and more commonplace — and, despite concerns voiced by some, it seems likely that it will augment rather than replace human input in the music-making process. We can expect major DAW developers to take further steps towards implementing sophisticated AI tools in their software, while smaller companies continue to innovate. Not everyone will think AI belongs in the studio, but producers should be prepared for it to become part of the musical landscape, whether they choose to embrace it or not. Those who do can expect development to continue at speed.
Read more about AI in music production.