I’ve used Cubase for 36 years — here’s what I honestly think about Cubase 15’s AI tools

Yes, you read the headline correctly. I first started creating music with Cubase all the way back in 1989 when, as a budding young music technology student, my college updated the MIDI sequencing software on its suite of Atari STs from Steinberg Pro24 to the new, shiny, and much more advanced Cubase. While I have acquired wrinkles and grey hair (and wisdom, I hope!) since that time, Cubase has remained evergreen, accumulating features and abilities that have kept it close to the cutting edge of technology.
READ MORE: Universal Audio’s LUNA 2.0 DAW, as reviewed by a Pro Tools user
The newly released Cubase 15 has received its first sprinkling of AI technology in the form of stem separation and has gained the ability to produce vocals without going near an actual singer. But are these tools useful, and will they put anybody out of a job?
Stem separation in Cubase 15. Image: Press
How effective is Cubase 15’s stem separation?
As with all recent Cubase releases, Cubase 15 comes with SpectraLayers Go, a cut-down version of Steinberg’s advanced spectral editor. This can run as an ARA extension, lending various abilities to the DAW, including separation of vocals from a mix. Separating other stems – drums, bass, and so on – has previously required an upgrade to SpectraLayers Pro. Not so with Cubase 15, though, because it has stem separation built in, and this recognises vocals, drums, bass and ‘other’ stems.
The separation operates directly in the timeline, placing the separated stem tracks into a folder track directly beneath the source audio, and I find this much more convenient than running an ARA extension or opening an external app. Stem recognition is decent too but, as with all such tools, accuracy depends in large part on the source material — the predictable instrumentation of a traditional rock band being much easier for AI to identify than the sonic smorgasbord of contemporary electronic styles.
The separation process itself is surprisingly swift, but there’s no way of telling Cubase to slow things down in order to produce better results, and, unfortunately, I don’t think the fixed speed/accuracy trade-off is quite where it should be. Results are easily respectable enough for sketching out ideas, or for adding reinforcement to other parts in a mix, but I find they have a more processed sound than that delivered by other stem separators I use. Hopefully, Steinberg will add a ‘quality’ slider to the stem separation in a future update; until they do, I’m likely to still be calling on SpectraLayers Pro more often than not.
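If you’re curious what the same four-way vocals/drums/bass/other split looks like outside of a DAW, here’s a minimal sketch using the open-source Spleeter library, which performs an equivalent separation. This is purely an illustration of the concept; it is not the engine Steinberg uses, and the file paths are placeholders.

```python
# Minimal sketch: four-stem separation (vocals / drums / bass / other)
# using the open-source Spleeter library. Purely illustrative; Cubase 15
# uses its own built-in separation engine, not Spleeter.
from spleeter.separator import Separator

# Load the pre-trained 4-stem model (vocals, drums, bass, other).
separator = Separator('spleeter:4stems')

# Placeholder paths: point these at a real mix and an output folder.
# Writes vocals.wav, drums.wav, bass.wav and other.wav into that folder.
separator.separate_to_file('my_mix.wav', 'separated_stems/')
```

Cubase does essentially the same job in place, dropping the resulting stems into a folder track beneath the source audio rather than writing separate files to disk.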
Omnivocal beta in Cubase 15. Image: Press
Will Omnivocal allow me to fire my singer?
Omnivocal is developed by Steinberg’s parent company, Yamaha, specifically for use in Cubase (and, presumably, the next Nuendo release). As the name suggests, it’s a vocal plugin. No, it isn’t a vocal effect processor, but an instrument that synthesises vocals entirely from scratch. And it’s astonishing.
Despite what you may have heard online, Omnivocal isn’t actually an AI-based system but rather a highly specialised synthesiser. However, it’s rooted in the same technology used to give AI systems a speaking voice, and raises many of the same questions about putting creative humans out of work, so I think it’s fair to describe it as ‘AI-aligned’ and to consider it as part of the bigger AI technology picture.
The instrument lets you choose between male and female voices, and allows the character of that voice – formant, attack, air, and such – to be tweaked and automated. It’s incredibly simple to use too: just record (or input) a melody, open the Key Editor, select a note and then enter the desired lyric for that note into the Text field on the editor’s status bar. Each piece of text you enter is automatically converted into IPA (International Phonetic Alphabet) notation, and this is shown in square brackets following the plain text so that you can edit and adjust it as needed (‘cos we all know the IPA, right?).
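If you’ve never dealt with the IPA before, the conversion Cubase performs automatically is similar in spirit to what the open-source phonemizer library does: plain lyric text goes in, phonetic symbols come out, ready for hand-editing. The sketch below is only there to illustrate that step; Omnivocal’s own text-to-IPA conversion happens inside the Key Editor and needs no code at all.

```python
# Illustrative only: converting lyric text to IPA with the open-source
# phonemizer library (espeak backend, which must be installed on the
# system). Omnivocal performs this conversion automatically in Cubase.
from phonemizer import phonemize

lyric = 'shine on'
ipa = phonemize(lyric, language='en-us', backend='espeak', strip=True)

# Prints an IPA transcription of the lyric in square brackets, much as
# Cubase displays it; the exact symbols depend on the espeak version.
print(f'[{ipa}]')
```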
Listened to in isolation, it’s relatively plain to hear that Omnivocal’s voice isn’t real, but once it’s lathered with effects and perhaps a harmony or four, the results are far more convincing. And this is only the beta version – apparently, there’s a lot more to come for this remarkable instrument.
Vocals can be crucial when composing, providing context and structure that keep your ideas focused and the music moving forward. I could lay down guide vocals, but as someone who can hit the notes but sounds awful when doing so, I find listening to my own singing to be massively off-putting. Omnivocal is a compelling solution to this – it’s much nicer to listen to than my own caterwauling. I still intend to replace lead vocals with a real singer’s unique voice and interpretation, but I may well leave backing vocals to Omnivocal.
On the flip side, simple though it is to use, I find creating an entire vocal part with Omnivocal to be quite a chore, necessitating consultation of IPA tables, plenty of text entry, and a fair slab of automation to bring the voice to life. It’s certainly nothing like as spontaneous and fun as working with an actual musician, but it’s an amazing tool that I’m already making a lot of use of.
Cubase 15’s AI tools: The verdict
There is, naturally, a lot more to Cubase 15 than just these two features, so keep an eye out for our full review.
Some people may be unnerved by seeing AI creeping into Cubase, or by Omnivocal’s surprisingly realistic voice, but I see these features as nothing more than tools.
There’s room for improvement, sure, but they keep Cubase on the crest of that technological wave, and as youthful and sprightly today as it was 36 years ago.