Vlad Masslove's Liked content

  • Neural DSP Technologies Introduces 'TINA' Robotic Amp Modeling
    As part of its enduring commitment to pioneering industry-leading amplifier modeling technology, Neural DSP announced the arrival of TINA, the company’s proprietary data-collection robot, which takes the authentic and faithful modeling of a guitar amplifier’s sonic nuances to an unprecedented level. TINA – a Telemetric Inductive Nodal Actuator – marries mechanical robotics with machine learning to digitally replicate analog devices, such as guitar amplifiers, like never before.

    Leveraging the advanced capabilities of TINA, Neural DSP also launched CorOS 3.0.0, which introduces Quad Cortex Plugin Compatibility (PCOM), giving creators the ability to access the virtual devices in their Neural DSP plugins on their Quad Cortex. This initial release includes compatibility for two plugins: Archetype: Plini X and Archetype: Gojira X, with additional QC-compatible plugins set to launch in subsequent CorOS updates.

    "TINA represents a groundbreaking integration of robotics and state-of-the-art machine learning for audio processing, furthering Neural DSP's commitment to redefining the standards in guitar amplifier modeling accuracy,” says Douglas Castro, CEO at Neural DSP Technologies. “This feat is the result of collaboration between our respective plugin and Quad Cortex teams, who have worked tirelessly to significantly improve the architecture of both platforms to ensure plugins can run on Quad Cortex. We’ve successfully removed all human intervention within the amplifier modeling process – ensuring an unparalleled level of precision in every model by capturing every subtle detail in the amplifier's controls.”

    TINA: Leveraging Robotics for Authentic Sound Replication

    In the spaces between the markings on an amplifier’s controls — gain, bass, mid, treble, presence, master — there is an entire universe of complex interactions and sonic distinctions. Guitarists will naturally play with these controls in a subjective manner, dialing in changes instinctively. But in trying to digitally emulate that process, even snapshot models of different configurations of those controls cannot translate all of the possible fine-tuned interactive combinations, and managing those million or more combinations would be impossible.

    What Neural DSP does with TINA is robotically access the entire spectrum of every control’s range by physically connecting with those controls via actuator arms. Every control is systematically turned with its output recorded. With enough recorded examples (typically thousands of control positions), a neural network is trained to replicate the behavior of the device for each one of these settings. Through this training process, the finished model will also generalize and precisely infer the sound of the device in any unseen control setting and input signal.
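    Neural DSP hasn’t published TINA’s architecture, but the general technique described here (a neural network conditioned on annotated control positions) can be sketched in a few lines of PyTorch. Everything below, from the LSTM core to the layer sizes and the six-knob layout, is an illustrative assumption, not the company’s implementation:
    ```python
    import torch
    import torch.nn as nn

    class ConditionedAmpModel(nn.Module):
        """Maps a dry guitar signal plus knob positions to the amp's output."""
        def __init__(self, n_controls=6, hidden=32):
            super().__init__()
            self.rnn = nn.LSTM(input_size=1 + n_controls,
                               hidden_size=hidden, batch_first=True)
            self.out = nn.Linear(hidden, 1)

        def forward(self, audio, controls):
            # audio:    (batch, samples, 1) -- the dry input signal
            # controls: (batch, n_controls) -- knob positions scaled to 0..1
            cond = controls.unsqueeze(1).expand(-1, audio.shape[1], -1)
            x = torch.cat([audio, cond], dim=-1)
            y, _ = self.rnn(x)
            return self.out(y)

    # Training pairs would come from the robot's annotated recordings:
    # (dry signal, knob positions) -> measured amp output.
    model = ConditionedAmpModel()
    dry = torch.randn(4, 2048, 1)   # batch of dry input clips
    knobs = torch.rand(4, 6)        # gain/bass/mid/treble/presence/master
    wet_pred = model(dry, knobs)    # (4, 2048, 1) predicted amp output
    ```
    Because the knob positions are ordinary inputs to the network, a model trained this way can interpolate to settings it never measured, which is how the finished model can generalize to unseen control settings.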

    TINA does the tedious part: it deduces which control positions need to be recorded, plans the knob-turning sequence so as to minimize wear and tear, and finally returns a collection of recordings with all of the related control positions carefully and precisely annotated.
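    The article doesn’t detail TINA’s planner, but one simple wear-minimizing schedule is a reflected mixed-radix Gray code, in which consecutive measurements differ by a single one-step knob move. A minimal sketch, assuming a hypothetical resolution of 11 positions per knob:
    ```python
    def snake_order(radices):
        """Enumerate every knob-position combination so consecutive entries
        differ by one knob moved one step (reflected mixed-radix Gray code).
        Illustrative only -- not TINA's actual planning algorithm."""
        combos = [[]]
        for r in radices:
            nxt = []
            for i, prefix in enumerate(combos):
                # alternate sweep direction so no knob ever snaps back
                steps = range(r) if i % 2 == 0 else range(r - 1, -1, -1)
                nxt.extend(prefix + [s] for s in steps)
            combos = nxt
        return combos

    # Hypothetical resolution: 11 positions on each of six controls.
    schedule = snake_order([11] * 6)
    print(len(schedule))  # 1771561 -- the "million or more" combinations
    print(schedule[:3])   # [[0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 1], ...]
    ```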

    By combining robotic data collection with machine learning, Neural DSP can distill the full range of an amp’s continuous controls into a single neural network model with unparalleled precision. It also removes the need for painstaking and often biased human analysis and design. The collected data is always a complete representation of the device and its history: every tube, every transformer, every pot, every ding, and every scratch; anything you can hear and feel will be a part of the data the models are trained on.

    “TINA is the backbone of our robust and automated modeling pipeline, pushing the boundaries of model fidelity,” says Aleksi Peussa, Machine Learning Team Lead and Researcher at Neural DSP Technologies. “The collected data provides the ground-truth for the sound and feel of the device. No assumptions, no preferences, no limitations. Purely data. The vast amount of data along with advanced machine learning approaches can systematically push model accuracy to unparalleled levels of realism. Our goal is always to create models that are indistinguishable from the real thing, even by experts.”

    To learn more about PCOM and how to update to CorOS 3.0.0, click here and here.

  • Real-time voice morphing with Dreamtonics Vocoflex
    Dreamtonics have announced the release of a new AI-powered plug-in that’s capable of transforming the sound of a vocalist in real time.

  • This might be the most customisable delay plugin ever released
    Audio plugin innovator Sound Particles has launched InDelay, a “modern take on a traditional delay plugin” with never-before-seen features.
    Featuring the brand’s 3D Particles interface and functionality – seen in spatial audio synth SkyDust 3D, which we gave a 9/10 last year – InDelay allows users to play with up to 100 different delays in a whole host of configurations.

    Features include the ability to position up to 100 delays anywhere around you, plus the option to move them dynamically while a track plays.

    InDelay allows users to “particle-ize” delays with a click, add Air simulation to create realism, or dive under the hood and customise independent channel sources for each of the 16 taps available.
    “This new audio effects plugin is designed for musicians, mixers, sound designers and all sound professionals and enthusiasts who wish to expand their toolset with new cutting-edge software,” says Sound Particles.
    “A delay is one of the most used plugins by sound professionals, so we had to create something that could fit this daily need, but keep pushing the boundaries of creative expressiveness,” says Nuno Fonseca, founder and CEO of Sound Particles.
    “So we decided to create our modern take on a classic delay, with plenty of new features never done before inside a delay. We’re sure that it will captivate a lot of musicians and sound designers and that they’ll find tonnes of inspiration on InDelay.”
    InDelay is available now for a July-only price of £115.43. For more information, head to Sound Particles.

  • Link Rot: Why your Digital Links won’t last forever
    Unfortunately, link deterioration, often called link rot, is inevitable. Here are some effective strategies for managing its impact on online content, by Bobby Owsinski of Music 3.0.

  • “Once tech like stem separation is inside the CDJ, people will get a bit more creative”: Richie Hawtin on how real-time stem separation will impact live shows
    Richie Hawtin has shared his thoughts on the impact real-time stem separation might have on contemporary DJs, especially their approach to live sound manipulation.
    Hawtin debuted his new concert series, DEX EFX XOX, at this year’s Movement Festival Detroit and Sónar Barcelona. The show sees him focus less on grandeur and visual spectacle, and far more on the most important element – the music.

    For DEX EFX XOX, Hawtin uses Traktor, Bitwig, his own MODEL 1 mixer, two A&H Xone K2 MIDI controllers, a Novation Launchpad, and a bunch of “custom scripts” that allow on-the-fly control over a suite of Roland software emulations, including the TR-808, TR-909, and the SH-101.
    In the future, the use of stem separation – something he’s avoided up until now – may also make its way into his sets. In a new interview feature for MusicTech, he explains, “My shows are all pretty spontaneous. I’ve been reluctant to use any stem separation because it all has to be done beforehand. But real-time, high-quality stem separation is coming very shortly, and I’m excited because that will allow for even more fluid mixing.”
    With his shows, Hawtin wants to revive the hypnotic state that immersive sound and lighting alone can induce. He wants his shows to feel more like a club experience, rather than a concert that focuses more on feeding the eyes than the ears. Of the current DJ sphere, he says, “On one level, I see that the scene has exploded with the TikTok DJ generation who maybe think that DJing is just two CDJs and a mixer, but I’m starting to see some of the DJs who’ve been around longer really jumping into these hybrid setups.

    “Once tech like stem separation is inside the CDJ, people will get a bit more creative — but will we see a whole generation of DJs working on their own unique setup? I’m not sure that that’s going to happen.”
    He adds, “I don’t want to sound like I’m slagging off the new-school DJs. Really, the production etiquette and technique of young, modern producers is fucking mind blowing. The music they’re making crosses and combines genres more than ever before — there used to be the house lane, the techno lane, the minimal lane. Now, it’s all going back into the melting pot and that’s where a lot of the energy and excitement is coming from.”
    So, why does he feel rising DJs are not experimenting with their setup as much as they perhaps should? “Part of it is just the convenience of jumping on a plane with a USB stick and jamming out some great tunes,” he says. “I would have been excited if I could have done that 30 years ago, instead of dragging around three 50kg cases and a friend to help me.”

  • IK Multimedia offers FREE Mesa Mark III guitar amp this month
    IK Multimedia is giving away a free Mesa Mark III and matching cab for new AmpliTube 5 Custom Shop (CS) users throughout July. AmpliTube 5 CS is the free version of IK Multimedia’s virtual guitar and bass workstation for Mac and Windows. Mesa/Boogie launched the Mark III in 1985, and the amp is a three-channel, [...]

  • YouTube’s revamped eraser tool uses AI to remove copyrighted music without impacting other audio
    YouTube has launched an updated eraser tool which allows its creators to remove copyrighted music from their videos without affecting speech, sound effects or other audio.
    Previously, videos flagged for copyrighted audio were muted or taken down entirely. The updated tool is still in its early stages, however, and YouTube does warn that the “edit might not work if the song is hard to remove”.

    As first reported by TechCrunch, the updated eraser tool was launched on 4 July. It utilises an AI-powered algorithm to cut just the song used and leave the rest of the video intact. YouTube chief Neal Mohan shared news of the launch on X along with a video.
    In the footage, it is explained that the company had been testing the eraser tool for a while, but it wasn’t as accurate in removing copyrighted tracks as they wanted it to be. Now, the use of an AI-powered algorithm brings a “big improvement”, and users have the choice to mute all sound or simply erase just the music within their videos.
    Find out more below:

    Good news creators: our updated Erase Song tool helps you easily remove copyright-claimed music from your video (while leaving the rest of your audio intact). Learn more… https://t.co/KeWIw3RFeH
    — Neal Mohan (@nealmohan) July 3, 2024

    YouTube is attempting to make big strides with AI right now – last week it was reported that the company had been offering lump sums of money to the “big three” major labels in hopes of rolling out AI music licensing deals with them.
    The move follows the launch of its AI tool Dream Track last year. The feature allowed users to create music using AI voice imitations of famous artists. The platform has allegedly been in talks with Sony, Warner and Universal to try to convince more artists to allow their music to be used in training AI software.
    To find out more about YouTube’s eraser tool, including step-by-step instructions for its use, visit YouTube Support.

  • 40 resources for aspiring musicians
    Explore these resources for aspiring musicians that can help you unlock inspiration, finish your productions, and share them with the world.

  • Master Your Music Library: Top Tips for Organizing Music Files
    Transform your music library with these expert tips for organizing your files. Learn the secrets to seamless file management and boost your productivity quickly, by Cristina Cano of DIY Musician.

  • WebSampler allows you to sample any audio from the internet right within your DAW
    Sampling and music production go hand in hand, so it’s no surprise that there’s a demand among producers for tools to make the sampling process easier.
    While there are already a number of audio capture tools available, many producers still rely on online YouTube-to-MP3 websites, which themselves seem to appear and get taken down as regularly as clockwork.

    WebSampler, a new tool from WXAudio, aims to eliminate the need for YouTube-to-MP3 tools by offering producers the ability to record audio from any website from directly within a DAW.

    How it works is simple: WebSampler is a VST plugin with an internet browser built right in. Head to any website, record a sample, and insert it as an audio clip directly in your DAW’s timeline.
    While WebSampler definitely streamlines the process of creating samples for your mixes, it should be remembered that samples, more often than not, require permission to be used in songs and other projects. WebSampler doesn’t claim to have anything to do with clearing rights for samples, but for what it does – quick and easy sampling from anywhere on the web – it really does seem like a knockout idea.
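    For a rough feel of the underlying capture step without the plugin (a DIY approximation, not WXAudio’s code), you can record whatever is playing through a system loopback device with Python’s sounddevice library and drop the resulting WAV into your DAW. The same rights caveat applies:
    ```python
    # Capture system audio via a loopback input (e.g. "Stereo Mix" on
    # Windows, or a virtual device such as BlackHole on macOS).
    import sounddevice as sd
    import soundfile as sf

    SAMPLE_RATE = 44100
    SECONDS = 8  # length of the sample to grab

    # print(sd.query_devices())            # find your loopback device first
    # sd.default.device = LOOPBACK_INDEX   # then select it by index

    recording = sd.rec(int(SECONDS * SAMPLE_RATE),
                       samplerate=SAMPLE_RATE, channels=2)
    sd.wait()  # block until the capture finishes
    sf.write("web_sample.wav", recording, SAMPLE_RATE)
    ```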
    WebSampler costs a very reasonable $10 and is available in VST3, AU and standalone formats. For more info, head to WXAudio.

  • “I never expected anyone to listen to it”: Moby says Play was made in a “completely unpressurised environment”
    Moby has spoken about the “completely unpressurised environment” that led to the making of his breakthrough electronica album Play.
    Released in 1999, Play got off to a slow commercial start, only to explode in popularity after it began to be licensed for commercials and other projects. The album has since become the best-selling electronic music album of all time, with over 12 million copies sold to date.

    Speaking to MusicRadar about the making of the album, Moby recalls: “The interesting thing is that the music on Play was made in a completely unpressurised environment because I never expected anyone to listen to it.”
    “In 1997/98 when I was finishing the music for Play, I’d been dropped by my American record label, and Daniel Miller of Mute Records hadn’t dropped me but it felt like that was because he felt sorry for me.”
    “Play was made in my bedroom on cheap equipment and the commercial expectations were so low as to be non-existent, so it was a very unpressurised environment,” he adds.
    According to Moby, things changed during the next few records, where he “put a lot of pressure on [himself] to try and make music that would be creatively interesting and commercially successful.”
    “But I realised pretty quickly that I’m not good at that,” he says. “Some producers, especially now, are very good at accommodating the commercial marketplace, but whenever I’ve tried to do that the end result has been mediocre.”
    The musician, who recently released his 22nd solo album Always Centered at Night, also admits that his younger self would not have expected such success.
    “For most of my life, up until a certain point, I assumed I was going to make music in my spare time that no one ever listened to,” he says. “I never expected to have a record deal or play concerts or shows to more than 20 or 30 people and certainly never expected to have anything resembling commercial awareness or success.”

  • “I began to feel like I had no control over the whole thing”: Imogen Heap on making an AI voice model
    Imogen Heap has constantly been ahead of the curve when it comes to technological innovation in music. So, now that she’s returning from a 10-year hiatus, it seems fitting that she’s diving into the world of AI.
    In April, she released her first remix using her AI voice model, ai.mogen, collaborating with Slovakian alt-pop singer Karin Ann on false gold. Heap produced the remix herself, but the vocals were generated by an AI model that she developed with her team.
    In an interview with MusicRadar, Heap revealed that she’d had numerous offers from companies to make an AI model of her voice. “They knew that I’m interested in technology, and they knew my answer probably wouldn’t be a flat no,” she tells the outlet.
    Her excitement was dampened, however, by the list of caveats and terms and conditions attached to the offers. “I began to feel like I had no control over the whole thing. Everybody kept saying how hard it is to create an AI voice model… but I just thought, it can’t be that hard.”
    Heap turned the offers down and worked with an audio engineer on an open-source model, which they trained on recordings from throughout her career. “You know what? It came out pretty good,” Heap says. “After that, I was feeling more empowered, like I had a leg to stand on.”

    Heap fed the entirety of false gold through ai.mogen, working with over twenty instrumental and vocal stems. “It was the weirdest thing, but it sounded amazing,” she enthuses. “It was my voice trying to sing the kick and snare, the bassline, the keys. My voice became a kind of aura surrounding everything and it really decided the direction I wanted to take the remix.”
    The artist is also asked whether she’ll ever use ai.mogen’s text-generation capabilities to write song lyrics. “I mean… yeah? Perhaps I wouldn’t use her because there are other services out there that do a much better job right now,” she says. “If someone wanted to generate something in the style of my lyrics that would be fine, though I would like to be credited at some point.”
    She also has plans to expand ai.mogen’s capabilities to eventually become a songwriting and production assistant. “Every single scrap of unused or used audio that I ever create goes into a folder,” she says. “We’re preparing to semantically describe all of it so that, in the future, I can come into my studio and Mogen will say ‘may I suggest this thing that you created in 1998 as a good place to start?’”
    Elsewhere in the interview, she discusses some of the issues AI poses in the music industry, particularly unauthorised voice models. “It does worry me. I will eventually release ai.mogen so that everyone can use it, but I don’t want my voice to say hateful things. So, I need to find a way to do it on acceptable terms.”
    She adds: “I’m in the middle of creating an app that enables musicians to train their own vocal models with privacy and security. The hope is that we can educate and protect people and help them feel a bit more in control of their voice.”

  • Creative sound design with mix correction plugins
    The music production world is awash with mix correction plugins, with many using ground-breaking AI that can take a less-than-stellar vocal recording from dud to ‘dude!’ with the click of a button. Elsewhere, de-reverb and stem separation software can pull active sonic elements from a finished track. However, have you ever wondered if they could be used in less corrective and more creative ways? Long answer short: they sure can.

    You can use a variety of correction plugins as part of the sound design process. While many will yield crunchy and funky lo-fi artefacts, others can elicit unexpected — and extremely useful — results.
    This tutorial heads into experimental territory so be sure to don your cleanest lab coat. You may want to have a few beakers of your favourite spirits on hand as well to encourage out-of-the-box thinking. And don’t worry: if you make a mess, just fix it with the same plugin!
    Lossy melodies with stem separation
    Stem separation is a popular and powerful type of correction software that splits a piece of audio up into its constituent parts — drums, bass, vocals — and lets you work them individually. RipX DAW Pro from Hit’n’Mix does one better, turning the stems into malleable audio that you can adjust on a per-note basis. What happens if you feed it just a single instrument, like a marimba line with baked-in delay effects?
    Start by loading your audio into RipX DAW Pro. It will do its thing, analyzing the audio and then separating it out into individual notes. Move the pieces of audio up and down the piano roll to create a new melody, deleting unnecessary ones as you go. There are a number of pitch effects as well, such as Pitch to Scale, Quantize Pitch and Flatten Pitch. Play around with these until you end up with something you like. Finally, bounce it out and import it into your main DAW project.
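    RipX’s note-level engine is proprietary, but you can experiment with the general idea in code: slice a monophonic line at detected onsets and re-pitch each slice. A crude sketch with librosa; the filename and the set of transposition intervals are placeholders:
    ```python
    import librosa
    import numpy as np
    import soundfile as sf

    y, sr = librosa.load("marimba_line.wav", sr=None, mono=True)
    onsets = librosa.onset.onset_detect(y=y, sr=sr, units="samples")
    bounds = np.concatenate([[0], onsets, [len(y)]])

    rng = np.random.default_rng(7)
    out = []
    for start, end in zip(bounds[:-1], bounds[1:]):
        seg = y[start:end]
        if len(seg) < 2048:          # too short to re-pitch cleanly
            out.append(seg)
            continue
        # transpose each detected note by a random interval (semitones)
        step = rng.choice([-12, -7, -5, 0, 2, 4, 7])
        out.append(librosa.effects.pitch_shift(seg, sr=sr, n_steps=float(step)))

    sf.write("new_melody.wav", np.concatenate(out), sr)
    ```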
    While RipX DAW Pro does have a Repair section to reduce artefacts, you can ignore this for this technique. After all, swimmy, low-bit MP3 effects are gaining popularity thanks to plugins such as Goodhertz Lossy and Lese Codec. It’s an extreme effect but could be just what your next lo-fi creation needs.

    Transient enhancing with de-reverb
    De-reverb plugins are a handy way to remove room sound from vocals, particularly recordings for interviews and podcasts. However, there’s no rule saying they can’t be used on other types of material.
    In this example, the De-reverb module from iZotope’s RX 10 Elements does its best to clean up the reverb from a noisy tambourine loop. By tweaking the controls, you can emphasise the transient attack of the tambourine — the portion when the hand strikes the skin — and bring out some lo-fi artefacts in the process.
    First, click the Learn button and let the in-built AI listen to the signal. Next, bring up the Reduction slider and adjust the different frequency bands in the Reverb Profile until you’ve brought out the strike of the tambourine. Engage the Enhance dry signal button for a more pronounced effect. Try lowering the Artifact smoothing slider to bring out lo-fi goodness. It’s not a bug, it’s a feature!
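    Under the hood, tricks like this amount to gain-riding based on the gap between a fast and a slow envelope follower. The sketch below is a DIY cousin of the effect, not iZotope’s algorithm: a positive amount exaggerates attacks, as here, while a negative amount smooths them away, roughly like the de-click trick in the next section.
    ```python
    import numpy as np
    import soundfile as sf

    def envelope(x, sr, attack_ms, release_ms):
        """One-pole envelope follower with separate attack/release times."""
        atk = np.exp(-1.0 / (sr * attack_ms / 1000.0))
        rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
        env, level = np.empty_like(x), 0.0
        for i, v in enumerate(np.abs(x)):
            c = atk if v > level else rel
            level = c * level + (1.0 - c) * v
            env[i] = level
        return env

    def transient_shape(x, sr, amount=0.8):
        fast = envelope(x, sr, attack_ms=1.0, release_ms=100.0)
        slow = envelope(x, sr, attack_ms=30.0, release_ms=100.0)
        # fast rises ahead of slow during an attack; the gap flags transients
        gain = 1.0 + amount * np.clip((fast - slow) / (slow + 1e-9), 0.0, 3.0)
        return np.clip(x * gain, -1.0, 1.0)

    y, sr = sf.read("tambourine_loop.wav")
    y = y.mean(axis=1) if y.ndim > 1 else y   # mono for simplicity
    sf.write("shaped.wav", transient_shape(y, sr, amount=0.8), sr)
    # try amount=-0.6 to smooth the transients away instead
    ```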

    Transient smoothing with de-click
    In the same way that de-reverb can remove the sustain portion of a signal, so can de-clicking take away the transient. Here, RX 10 Elements De-click from iZotope is confusing the attack portion of a clave in a loop with a click and doing its best to wipe it away.
    Start by placing De-click on the track you want to affect. It’s a pretty simple plugin; just bring up the Sensitivity slider until the transient is suitably smooshed. Try using the Click widening control and changing the algorithm for different results.
    For an extreme lo-fi effect, try strapping De-click across an entire drum bus.

    Psychedelic drums with vocal processing plugins
    Designed to correct pitch imperfections, vocal processing plugins can work wonders on the human voice. But can they be used on non-vocal material? Percussion sounds tend to not have much pitch information — sounds like the perfect opportunity for experimentation.
    While full-on pitch correction like Auto-Tune might not have too much of an effect, there are plenty more plugins in Antares’ Auto-Tune Unlimited suite that do, such as Choir.
    Auto-Tune Choir, as the name suggests, is a vocal multiplier. Instead of voices, try running percussion through it, like this conga loop. Turning up the Choir Size to 32 voices creates a psychedelic and tightly delayed drum line. Use the controls in the Variation section – Vibrato, Pitch and Timing – to further tweak out the drums. Results are sure to be unique and unexpected.
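    Antares doesn’t document Choir’s internals, but the basic voice-multiplying idea (stacking copies of the signal with small pitch and timing variations) is easy to approximate. A rough sketch; the filename, voice count and variation ranges are all assumptions:
    ```python
    import librosa
    import numpy as np
    import soundfile as sf

    y, sr = librosa.load("conga_loop.wav", sr=None, mono=True)
    rng = np.random.default_rng(3)
    n_voices = 32
    out = np.zeros(len(y) + sr // 10)   # headroom for delayed voices

    for _ in range(n_voices):
        detune = rng.uniform(-0.4, 0.4)       # semitones of pitch variation
        voice = librosa.effects.pitch_shift(y, sr=sr, n_steps=detune)
        delay = rng.integers(0, sr // 50)     # up to ~20 ms timing variation
        out[delay:delay + len(voice)] += voice

    out /= np.max(np.abs(out))                # normalise to avoid clipping
    sf.write("conga_choir.wav", out, sr)
    ```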

    Drum loop tightening with drum removal technology
    It may seem paradoxical, but slapping a drum removal plugin onto a drum bus can yield some surprisingly useful results. Tightening, levelling and punch-ifying are all possible with judicious use of the technology.
    Zynaptiq’s Unmix::Drums is a top-quality plugin for removing or reducing the level of drums in mixed stems. When you put it on a drum bus or percussion loop, as in this example, you can make some interesting adjustments.
    Start with the big Drum Level control in the middle and fine-tune it until you have a nice balance of punch and room sound. Use the Attack and Release controls to affect the transient and tail. In the Fine-Tune area, bring up the Bass Synth slider to add power back to the kick drum, if necessary. Finally, engage the compressor and limiter functions at the top for extra punch.
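    The Bass Synth idea in particular is easy to prototype outside the plugin. A DIY analogue, not Zynaptiq’s method: detect hits in the low band, then layer a short decaying sine under each one. The filenames, the 50 Hz pitch and the mix level are illustrative assumptions:
    ```python
    import librosa
    import numpy as np
    import soundfile as sf
    from scipy.signal import butter, sosfilt

    y, sr = librosa.load("drum_bus.wav", sr=None, mono=True)

    # detect onsets on the lowpassed signal so only kick-range hits register
    sos = butter(4, 120, btype="low", fs=sr, output="sos")
    lows = sosfilt(sos, y)
    hits = librosa.onset.onset_detect(y=lows, sr=sr, units="samples")

    burst_len = int(0.15 * sr)                # 150 ms sub burst per hit
    t = np.arange(burst_len) / sr
    burst = np.sin(2 * np.pi * 50.0 * t) * np.exp(-t / 0.05)

    out = y.copy()
    for h in hits:
        end = min(h + burst_len, len(out))
        out[h:end] += 0.4 * burst[: end - h]  # mix the sub layer under the kick

    sf.write("reinforced.wav", out / np.max(np.abs(out)), sr)
    ```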


  • “Human-created works must be respected”: 50 major music tech brands sign Principles for Music Creation with AI
    Over fifty global music technology companies and associations have penned their support for Roland and UMG’s Principles for Music Creation with AI. Per the principles, the participating companies advocate for the responsible use of AI in music creation, to “protect the essence of music — its human spirit”.
    BandLab Technologies, Splice, Beatport, Focusrite, Output, LANDR, Waves, Eventide, Native Instruments, NAMM, Sequential, Oberheim and more have united in a bid to protect the rights of musicians as the industry sees an acceleration of generative AI tech.
    The guidelines were established to encourage key figures in the music technology space to be mindful of the potential risks of AI. In a statement, AIformusic says that it’s crucial to responsibly manage the impact of machine learning tools and adhere to the Principles to ensure that the music industry is protecting the integrity of artists. Still, it acknowledges that AI can be an empowering tool for musicians and creators when applied with caution.
    The statement continues to say that the alignment of music industry leaders “cannot be understated and plays an invaluable role in shaping a responsible future for AI in music creation.”
    AIformusic also says it “strongly encourages” further organisations and brands around the globe to endorse the principles.
    The seven Principles for Music Creation with AI are as follows:

    “We believe music is central to humanity.”
    “We believe humanity and music are inseparable.”
    “We believe that technology has long supported human artistic expression, and applied sustainably, AI will amplify human creativity.”
    “We believe that human-created works must be respected and protected.”
    “We believe that transparency is essential to responsible and trustworthy AI.”
    “We believe the perspectives of music artists, songwriters, and other creators must be sought after and respected.”
    “We are proud to help bring music to life.”

    BandLab Technologies CEO and co-founder Meng Ru Kuok says of the principles, “We are at a pivotal moment in the evolution of music creation. As leaders, it is our responsibility to thoughtfully ensure that AI supports artists and respects their creative integrity. As we develop new tools, we must remember that technology is at its best when it enhances, not overshadows, human creativity.”
    Splice CEO Kakul Srivastava adds, “AI brings new opportunities to our industry and many musicians are being inspired by these tools. But this is a critical time to support responsibility around new technology and respect for the rights of creators everywhere. This is about the human at the center.”
    “As with all technologies, the Focusrite Group desires to see AI become another toolset to further the creativity of artists vs. a threat to our industry,” said Focusrite PLC CEO Tim Carroll. “We are proud to support AI For Music and to do our part to help ensure this technology is used in a responsible manner.”
    Earlier today, the RIAA announced that it had filed a lawsuit against AI music generation platforms Udio and Suno, with the plaintiffs including Sony Music Entertainment, Warner Records, and Universal Music Group. The lawsuit seeks damages of up to $150,000 for each piece of infringed work, and to stop the two AI companies from training on the labels’ copyrighted songs.

  • Soundtheory announce Kraftur
    Soundtheory's second release is a multi-band saturation plug-in capable of avoiding the artifacts that come with more traditional approaches to soft clipping.
