Reactions

  • Analog vs. digital synthesizers: What’s the difference and which should you choose?
    Learn about the strengths and limitations of analog vs. digital synthesizers, and when you'd want to reach for each.
  • RealOpen and TRON verify $9.4M in USDT for crypto-enabled real estate purchases
    RealOpen, the leading platform for buying real estate with crypto, today announced the conclusion of its collaborative "Fast Moves, Fast Payments" Holiday Campaign with TRON.

  • Google gains 25M subscriptions in Q1, driven by YouTube and Google One
    Google added 25M paid subscriptions in Q1, reaching 350M total, as YouTube and Google One grow.

  • Using a VT-100 Today
    You may not know what an ADM-3, a TV910, or an H1420 are, but you probably have at least heard of a VT-100. They are all terminals from around the same time, but the DEC VT-100 is the terminal that practically everything today at least somewhat emulates. Even though a real VT-100 is rare, since it defined what have become ANSI escape sequences, most computers you’ve used in the last few decades speak some variation of the VT-100’s language. [Nikhil] wanted to see if you could use a VT-100 for real work today.
    While the VT-100 wasn’t a general-purpose computer, it did have an 8080 inside. It only had about 3K of RAM, which was enough to act as a serial terminal. A USB serial port and a terminal with modern Linux, how hard could it be?

    As it turns out, there were a few issues. macOS apparently assumes terminals can take data at 9600 baud with no handshaking. That also means any application that assumes redrawing the whole terminal is fast will be sorry for that choice.
    Of course, there are commands modern VT-100-like terminals accept that the original didn’t. However, as you’ll see in the post, all of these things you can either live with or solve.
    It is easy to make your own VT-100 replica. While the VT-100 may seem simple today, it was a marvel compared to even older terminals.
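
    As a concrete illustration of the escape-sequence "language" mentioned above, here is a small Python sketch of three control sequences from the original VT-100 repertoire (cursor addressing, screen erase, and bold text). This is standard ANSI/VT-100 material as a sketch, not code from [Nikhil]'s project.

```python
# Three escape sequences from the original VT-100 repertoire,
# still understood by virtually every modern terminal emulator.
ESC = "\x1b"

def cursor_to(row: int, col: int) -> str:
    """Cursor Position (CUP): ESC [ row ; col H, 1-indexed from top-left."""
    return f"{ESC}[{row};{col}H"

def clear_screen() -> str:
    """Erase in Display (ED), whole screen: ESC [ 2 J."""
    return f"{ESC}[2J"

def bold(text: str) -> str:
    """Select Graphic Rendition (SGR): bold on (1), then reset (0)."""
    return f"{ESC}[1m{text}{ESC}[0m"

if __name__ == "__main__":
    print(clear_screen() + cursor_to(1, 1) + bold("Hello, VT-100"))
```

    The baud-rate complaint above is easy to quantify: a full 24x80 redraw is roughly 1,920 bytes, and at 9600 baud (about 960 bytes per second) that is around two seconds per screen, which is why full-screen redraws feel so slow on the real hardware.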

  • Audio Fusion Bureau releases RoomDiY, a FREE acoustic room simulation plugin
    From developer Audio Fusion Bureau comes RoomDiY, a free acoustic room simulation plugin for macOS and Windows. RoomDiY offers advanced real-time acoustic modelling and room analysis. In short, the plugin allows you to design the ideal acoustic space for any given project. We’ve covered many convolution reverb plugins that offer impulse responses of real-world spaces, [...]

  • UMG generated $3.39 billion in Q1, up 8.1% YoY – driven by BTS, Olivia Dean, Taylor Swift, and more
    Universal Music Group has published its Q1 2026 results for the three months ending March 31.

  • Recording industry Renaissance man David Goggin (aka Mr. Bonzai) passes at 78
    Music Connection was saddened to learn of the passing of David "Mr. Bonzai" Goggin this week:

    David Goggin (often known by his pen name “Mr. Bonzai”), whose journalism, photography, visual art, and advocacy chronicled the golden age of recording studios, has died peacefully after a valiant fight with two cancers and a stroke. He was 78.

    David is survived by his wife of 42 years, acclaimed artist Keiko Kasai, with whom he shared a long and intimate personal and artistic partnership. She was the muse for over 1,000 of his drawings and portraits.

    A true Renaissance man, Goggin was an accomplished artist, writer, photographer, journalist, filmmaker, and poet. He was best known for his monthly interviews with producers, engineers, and musicians for Mix magazine and later EQ magazine from the late 1970s through the 1990s. He produced over 250 interviews for both these magazines, offering quirky, insightful, and vivid portraits of studio life, where some of the era’s most iconic albums were recorded. His work documented the voices of producers, engineers, and session musicians often overlooked in mainstream music reporting.

    “I just kind of fell into it. I was always around music,” Goggin told podcaster Daniel Keller. “I wasn’t thinking about a career; I was just doing what I loved. Suddenly, I’m in the studio with these legends, documenting them making their music. This became my life—capturing these moments. I realized I had a front-row seat to history.”

    Born in Kingston, New York in 1947 to cartoonist Edward James Goggin and Anna Marie Farrell, David Goggin graduated from the University of California at Irvine (UCI) with a degree in English Literature. After producing light shows for concerts by Janis Joplin and Buffalo Springfield at UCI, he spent a year studying abroad at the University of Edinburgh and traveled extensively in the UK, where he met John Lennon in 1968-1969, and witnessed a session where The Beatles recorded “I Am the Walrus.” This experience ignited Goggin’s lifelong passion for the craft of recording and the people behind it.

    While at UCI, Goggin studied drawing with David Hockney; it was a pursuit he continued throughout his lifetime. Building from his drawing technique, his art practice expanded to include delicate wire sculptures that are widely collected by an eclectic group of Hollywood luminaries including Norman Lear.

    Goggin started his career in media in the late sixties, hosting a late-night comedy radio show in Montreal. When the show was cancelled, David returned to Orange County and began work in the recording industry as the studio manager at the Lyon Recording Studio, while doing publicity for an affiliated company, Lyon Lamb Video Animation Systems.

    Goggin’s first break as a music journalist came with the then-startup Mix magazine in 1979, where editor and soon-to-be lifelong friend David Schwartz invited him to write a monthly column about the pressured, offbeat life inside a small Orange County recording studio. Writing under the pen name Mr. Bonzai, his columns became a staple of the magazine, evolving into his first book and the popular Lunching with Mr. Bonzai series. Over his career, Goggin wrote more than 1,000 articles and interviews for major publications in the U.S., Europe, and Asia, including Rolling Stone, The New York Times, Billboard, The Hollywood Reporter, and Sound & Recording Japan.

    Among his many skills was his ability to elicit brain-scratching quotes from pressured artists. Film composer CJ Vanston called him “the mother of all flies on the wall,” Suzanne Ciani said he was “always a charming and clever centerpiece at any industry convention,” “Weird Al” Yankovic said that Mr. Bonzai “got inside my mind when I wasn’t looking,” Graham Nash observed that his greatest talent was “being invisible,” and George Massenburg described him simply as “curiosity and joy.”

    Many of his articles featured his award-winning photography, establishing him as Los Angeles’ preeminent recording studio photographer. The sight of Goggin, in his pork pie hat, metallic glasses, lanyard Montblanc fountain pen, and multi-colored shirt, working booths at industry conventions with a Leica camera and ladder in hand, made him one of the most recognized figures in the pro-audio industry.

    Goggin’s early studio stories were compiled into his first book, Studio Life: The Other Side of the Tracks (1984), and his life’s work included seven more books: Santa’s Secret Sled (1980), co-written with Bruce Lyon; Hal Blaine and The Wrecking Crew (1990), co-written with legendary session drummer Hal Blaine; The Sound of Money (2000), co-written with his friend and client Chris Stone; Faces of Music (2006); Music Smarts (2009); and John Lennon’s Tooth (2012).

    In 2025, he co-authored Buzz Me In: Inside the Record Plant Studios with music journalist Martin Porter, reconstructing the wild and innovative history of Record Plant Studios in New York, Sausalito, and Los Angeles, where Goggin worked as a press agent.

    In addition to his work at Record Plant, Goggin collaborated with the studio’s owner Chris Stone on industry advocacy groups such as SPARS and the World Studio Group. He co-founded, with producer/engineer Ed Cherney and Stone, the Music Producers Guild of the Americas, which later became the Producers & Engineers Wing of the Recording Academy.

    He was also active in the National Association of Music Merchants (NAMM) community, producing conference sessions with audio-industry pioneers and hosting the Technical Excellence & Creativity (TEC) Awards. He appeared on NAMM’s TEC Tracks stage in January 2026, with Devo frontman Mark Mothersbaugh and producer Bob Margouleff, to discuss the making of the 1980 hit “Whip It” at Record Plant. Mothersbaugh once called him “a master of modern music photojournalism,” obliquely adding that “Mr. Bonzai is the future of the past.”

    Goggin’s company, Communication Arcs, provided PR and photographic services to leading pro-audio manufacturers and recording studios, including Sony, Telefunken, Sommer Cable, Ocean Way Recording, United Recording, and Bernie Grundman Mastering.

    For half a century, David Goggin’s work compiled the audio and visual history of the recording studio era, bridging the early creative chaos of the analog studio age and the digital birth of personal music production, always focusing the lens on the people and technology behind the scenes that made the music happen.

    "David 'Mr. Bonzai' Goggin was a friend of mine, and to countless others in the Pro Audio/MI industries," says MC publisher Eric Bettelli. "Dave's contribution to our industry spans decades. He was a top-notch professional music journalist, publicist and author, whose work speaks for itself. He just never missed a beat. And most of all, Dave was one of the good guys, who was always eager to share his incredible talent with all. A real Mensch. Dave, RIP, and hope to see you again on the other side."

    The post Recording industry Renaissance man David Goggin (aka Mr. Bonzai) passes at 78 first appeared on Music Connection Magazine.

  • No Type No Tag Beats Stuttermation
    Stuttermation is a dynamic, timeline-based stutter, glitch, and time-manipulation multi-effect built for precise rhythmic audio editing inside a DAW-synced plugin. Instead of relying on MIDI triggers or static LFO patterns, Stuttermation lets you draw audio manipulation events directly onto a timeline and shape each moment independently. The plugin is built around Blocks. Each block can have its own rate, buffer behavior, gate shape, pitch movement, probability, direction, filtering, drive, volume, pan, and envelope curves. This makes it possible to chain tight stutters, halftime chops, reverse cuts, pitch ramps, gated 808 tail edits, accelerating glitches, distorted rhythmic effects, and full-loop transformations inside one continuous sequence.

    Main Features

    Timeline-Based Block Sequencing: Draw, resize, duplicate, delete, and arrange stutter blocks on a DAW-synced timeline with customizable snap settings, zooming, smooth scrolling, and an interactive minimap. The timeline is capped at 500 bars for stable large-session behavior.
    Per-Block Slice Engine: Each block renders through a strict slice-local playback core, so the block's rate controls slice count while pitch, halftime, reverse, and buffer length shape what happens inside each slice without changing the number of slices.
    Dynamic Rate Morphing: Set independent Start and End rates for a block, from slow rhythmic chops to rapid glitch bursts, for smooth accelerations, decelerations, and evolving stutter movement.
    Musical Buffer Length Control: Choose Match Rate or shorter buffer lengths per block. Shorter buffers play only that portion of each slice, then smooth to silence without stretching or repeating the audio, preserving pitch and transient integrity.
    Advanced Audio Grabbing: Choose how each block captures audio: Start of Block for classic stutter sampling, Moving Playhead for live granular behavior, or Fixed Offset for repeatable delayed capture.
    Gate Shape Modes: Shape every slice with dedicated gate modes: Truncate for tight cuts, Fade for smoother exits, and T-Safe for transient-conscious slicing. Gate percentage, Attack, Release, Reverse, and Halftime are all set per block.
    Bass and 808 Tail Handling: Designed to work cleanly on sustained low-end material, including 808 tails and bass notes, with adaptive entry smoothing, safer slice transitions, and pitch-aware rendering.
    Transient-Aware Pitch Rendering: Pitch ramps and pitch-shifted blocks use improved slice handling to preserve punch on transient-heavy material while keeping sustained bass and 808 content stable.
    High-Quality Interpolation: Includes multiple interpolation modes, including an upgraded HQ sinc mode for smoother pitch movement, cleaner high-rate stutters, and improved resampling quality.
    Per-Block Curve Envelopes: Draw automation curves directly onto blocks with adjustable tension for ease-in/ease-out movement. Available lanes include Volume, Pitch, Pan, High-Pass Filter, Low-Pass Filter, and Overdrive.
    Envelope Workflow Tools: Copy and paste envelope shapes between lanes, apply built-in shapes such as ramps, saws, squares, and pulses, and use slice-aligned visual guides for faster rhythmic editing.
    Drive and Distortion Control: Shape distortion per block with the Overdrive envelope, optional drive-only 2x/4x oversampling, and a 2x Drive option for more aggressive saturation when needed.
    Output Safety Stage: A transparent final safety stage helps catch stacked blocks, heavy drive, and dense overlapping effects before they create unexpected output spikes.
    Generative Probability: Use Block Probability to decide whether an entire block triggers, and Stutter Probability to control whether individual slices play, creating evolving rhythms from static loops.
    Randomize and Humanize Tools: Quickly generate new ideas with randomize options for selected block settings and envelopes, plus humanize tools for subtle rate and probability variation.
    Polyphonic Micro-Grain DSP: A 16-voice slice/grain engine with explicit voice allocation, click-conscious transitions, and stable voice stealing for dense overlapping blocks and fast stutter patterns.
    Channel-Safe Automation: Volume, pan, filter, wet, and drive transitions are smoothed and audited to avoid block-entry leaks, center blips, and opposite-channel artifacts.
    Global Playback Modes: Fine-tune the overall engine behavior with global Transition, Interpolation, Timing, and Mix modes for different editing styles and playback needs.
    Visual Slice Feedback: Timeline blocks show slice shapes, gate behavior, probability markings, envelope overlays, reverse direction, and compact gate-mode badges for quick visual editing.
    Built-In Preset Manager: Save, load, and browse XML-based stutter sequences across sessions and DAWs, including block settings, gate modes, envelopes, probability, global modes, and timeline data.

    Support: For support or inquiries, contact NO.TYPE.NO.TAG.BEATS@gmail.com.

    FAQ

    Does Stuttermation use MIDI triggering? No. Stuttermation is timeline-based. You draw blocks directly where you want the effect to happen.
    Can each block have different settings? Yes. Every block can have its own rate, buffer length, envelopes, gate mode, reverse, halftime, drive, filter, pan, probability, and more.
    Does it work on 808s and bass tails? Yes. The engine has been tuned for sustained low-end material, including 808 tails and bass notes.
    Can I use it for glitch effects? Yes. Use rate morphing, pitch envelopes, buffer length, probability, reverse, drive, and filter envelopes to create glitch builds and rhythmic edits.

  • Inspired by Four Tet and Bonobo: Excite Audio unveils Bloom Drum Kits, the latest addition to its much-loved Bloom plugin series
    Excite Audio has expanded its much-loved Bloom series of plugins and virtual instruments, this time foraying further into the world of live drums with Bloom Drum Kits.
    Inspired by the raw drum sounds used by the likes of Four Tet, Bonobo, Nicolas Jaar and Geese, Bloom Drum Kits has been built using kits played “in the room”, offering a “rough sense of motion and individuality” to your tracks.

    Bloom Drum Kits offers up both raw, closed mic’d drums and “tape-worn, processed hits”, with a collection of professionally played rhythms and one-shots spanning detuned toms, snare sounds, and so much more.
    It sports a similar minimalist user interface featured on the rest of Excite Audio’s Bloom line, and even allows producers to upload their own samples and play them using the Bloom Drum Kits interface.

    The plugin arrives with 250 presets organised into eight categories:

    Basic (BA) – Straightforward drum beats built from each factory kit.
    Experimental (EXP) – Abstract, offbeat presets with a more sound design-focused feel.
    Kits (KIT) – Single hits and phrases created entirely from one-shots.
    High Energy (HI) – Fast, busy, and more aggressive beats for added momentum.
    Low Energy (LO) – Laid-back, minimal grooves for softer and more downtempo tracks.
    Percussion (PC) – Rhythmic sequences built entirely from percussion loops and samples.
    Processed (PRO) – Heavily treated beats featuring distortion, effects, and macro-driven movement.
    Top Loops (TOP) – Snare and hi-hat loops for layering and groove.

    Bloom Drum Kits is available now at an introductory price of just £19 / $19 until 31 May.
    Learn more at Plugin Boutique.
    The post Inspired by Four Tet and Bonobo: Excite Audio unveils Bloom Drum Kits, the latest addition to its much-loved Bloom plugin series appeared first on MusicTech.

  • iZotope RX 12 is here
    RX 12 introduces two entirely new modules, Stems View and Scene Rebalance, and boasts a number of improvements to its existing tools thanks to some behind-the-scenes tweaks.

  • Expressive E launch the Osmose CE
    The latest additions to the Osmose family deliver the same playing experience as the originals, but with a companion software suite rather than an internal synth engine.

  • Sentinel AV and Bladerunner release Folda, a 4-group morphing distortion plugin
    Sentinel AV and UK drum-and-bass producer Bladerunner have released Folda, a 4-group morphing distortion plugin. Folda is available as VST3 for Windows and as VST3/AU for macOS (Universal Binary), with an intro price of $67 for the first seven days after launch (until May 2), regular price $99 thereafter. The main idea behind Folda is [...]

  • Claude can now be plugged into Ableton to assist with your music projects
    Claude – the AI assistant and chatbot from Anthropic – can now be directly plugged into Ableton, as well as a raft of other creative platforms, including Blender and Photoshop.
    The move follows the launch of Claude Design, a new product by Anthropic Labs that lets you collaborate with Claude to create “polished visual work” like designs, one-pagers and more.
    With the new set of connectors for Claude, the popular chatbot is able to plug into Ableton and act as an AI assistant within your music projects. Anthropic says it has teamed up with a “coalition of partners” that also includes Blender, Adobe (Photoshop and Premiere Pro) and Affinity by Canva.

    Interestingly, Splice is also named in the list of brands integrating Claude into their products. It means producers can now search Splice’s catalogue of royalty-free samples directly within Claude.
    According to a blog post on the Anthropic website, within these platforms, Claude can be used in a variety of ways. Users can ask Claude complex questions about the software, with the chatbot acting as a virtual tutor to help you better understand your workflow.
    Elsewhere, Claude Code can write scripts, plugins, and generative systems for these platforms.
    And perhaps most importantly for creatives, Claude can be used to take care of manual, repetitive tasks that get in the way of the creative process.
    “Claude can’t replace taste or imagination, but it can open up new ways of working – faster and more ambitious ideation, a more expansive skillset, and the ability for creatives to take on larger-scale projects,” Anthropic says [via The Verge]. 
    “AI can also help shoulder the parts of the creative process that eat up time by handling repetitive tasks and eliminating manual toil.”

    Anthropic has also now become a Corporate Patron of the Blender Development Fund, helping the open-source platform to stay free, and to allow developers to “keep pursuing projects independently, and to focus on building tools for artists and creators”. Anthropic will give Blender €240,000 every year.
    The post Claude can now be plugged into Ableton to assist with your music projects appeared first on MusicTech.

  • MIT engineers’ virtual violin produces realistic sounds
    There is no question that violin-making is an art form. It requires a musician’s ear, a craftsperson’s skill, and a historian’s appreciation of lessons learned over time. Making a violin also takes trust: violin makers, or luthiers, often must wait until the instrument is finished before they can hear how all their hard work will sound. But a new tool developed by MIT engineers could help luthiers play around with a violin’s design and tweak its sound even before a single part is carved.

    In a study appearing today in the journal npj Acoustics, the MIT team reports on a new “computational violin” — a computer simulation that captures the detailed physics of the instrument and realistically produces the sound of a violin when its strings are plucked. While there are software programs and plug-ins that enable users to play around with virtual violins, their sounds are typically the result of sampling and averaging over thousands of notes played by actual violins. In contrast, the new computational violin takes a physics-based approach: it produces sound based on the way the instrument, including its vibrating strings, physically interacts with the surrounding air.

    As a demonstration, the researchers applied the computational violin to play two short excerpts: one from Bach’s “Fugue in G Minor,” and another from “Daisy Bell” — a nod to the first song ever produced by a computer-synthesized voice.

    The computational violin currently simulates the sound of plucked strings — a type of playing that musicians know as “pizzicato.” Violin bowing, the researchers say, is a much more complicated interaction to model. However, the computational violin represents the first physics-based foundation of a strung violin sound that could one day be paired with a model of bowing to produce realistic, bowed violin music.

    For now, the team says the new virtual violin could be used in the initial stages of violin design. Luthiers can tweak certain parameters such as a violin’s wood type or the thickness of its body, and then listen to the sound that the instrument would make in response. “These days, people try to improve designs little by little by building a violin, comparing the sound, then making a change to the next instrument,” says Yuming Liu, senior research scientist at MIT. “It’s very slow and expensive. Now they can make a change virtually and see what the sound would be.”

    “We’re not saying that we can reproduce the artisan’s magic,” adds Nicholas Makris, professor of mechanical engineering at MIT. “We’re just trying to understand the physics of violin sound, and perhaps help luthiers in the design process.” Makris and Liu’s MIT co-authors include Arun Krishnadas PhD ’23 and former postdoc Bryce Campbell, along with Roman Barnas of the North Bennet Street School.

    Sound matrix

    The quality of a violin’s sound is determined by its dimensions and design. The instrument is made from thoughtfully crafted parts and materials that all work to generate and amplify sound. In recent years, scientists have sought to understand what artisans have intuited for centuries, in terms of what specific parameters shape a violin’s sound.

    In one early effort in 2006, scientists, as part of the Strad3D project, put a rare Stradivarius violin through a CT scanner. The violin was crafted in 1715 by the master violinmaker Antonio Stradivari, during what is considered the “Golden Age” of violin making. To better understand the violin’s anatomy and its relation to sound, the scientists scanned the instrument and produced 600 “slices,” or views, of the violin. The CT scans are available online for people to view and use as data for their own experiments.

    For their study, Makris and his colleagues first imported the CT scans into a solid modeling software program to generate a detailed three-dimensional model of the violin. They then ran a finite element simulation, essentially dividing the violin into millions of tiny individual cubes, or “elements.” For each cube, they noted its material type, such as whether a cube from the violin’s back plate is made from maple or spruce, or whether a string is made from steel or natural fibers. They then applied physics-based equations of stress and motion to predict how each material element would move in relation to every other element across the instrument. They also carried out a similar process for the air surrounding the violin, dividing up a roughly cubic-meter volume of air and applying acoustic wave equations to predict how each tiny parcel of air would move and contribute to generating sound.

    “The entire thing is a matrix of millions of individual elements,” explains Krishnadas. “And ultimately, you see this whole three-dimensional being, which is the violin and the air all connected and interacting with each other.”

    A plucky model

    The team then simulated how the new computational violin would sound when plucked. When a violinist plucks a string, they pull the string sideways and let it go, causing the string to vibrate. These vibrations travel across the instrument and inside it; the air’s vibrations are amplified as they travel out of the violin and into the surroundings, where a listener hears the vibrations as sound. For their purposes, the engineers simulated a simple string pluck by directing one of the virtual violin’s strings to stretch out and then rebound. The simulation computed all the resulting motions and vibrations of the millions of elements in the violin, and the sound that the pluck would produce. For notes that require pressing down on a violin’s fingerboard, they simulated the same plucking and, in addition, included a condition in which the string is held fixed at the point on the fingerboard where a violinist’s finger would press down.

    The researchers carried out this computational process to virtually pluck out the notes in several measures of “Daisy Bell” and Bach’s “Fugue in G Minor.”

    “If there’s anything that’s sounding mechanical to it, it’s because we’re using the exact same time function, or standard way of plucking, for each note,” says Makris, who is himself a lute player. “A musician will adapt the way they’re plucking, to put a little more feeling on certain notes than others. But there could be subtleties which we could incorporate and refine.”

    As it is, the new computational model is the first to generate realistic sound based on the laws of physics and acoustics. The researchers say that violin makers could use the model to test how a violin might sound when certain dimensions or properties are changed. For instance, when the researchers varied the thickness of the virtual violin’s back plate or changed its wood type, they could hear clear differences in the resulting sounds.

    “You can tweak the model, to hear the effect on the sound,” Makris says. “Since everything obeys the laws of physics, including a violin and the music it makes, this approach can add an appreciation to what makes violin sound. But ultimately, we get most of our inspiration from the artisans.”

    This work was supported, in part, by an MIT Bose Research Fellowship.
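
    The physics-based idea scales down to a toy example: stepping the 1D wave equation for a plucked string with finite differences, a much-simplified cousin of the team's 3D finite-element model. The string parameters, grid resolution, and pluck shape below are illustrative assumptions, not values from the study.

```python
import numpy as np

# Toy 1D plucked-string simulation via finite differences.
# Illustrative only; not the MIT team's finite-element model.
L = 0.328            # string length in metres (typical violin scale, assumed)
c = 2 * L * 440.0    # wave speed chosen so the open string sounds A4 (f = c / 2L)
N = 200              # spatial grid points
courant = 0.9        # Courant number < 1 keeps the explicit scheme stable

dx = L / (N - 1)
dt = courant * dx / c
r2 = (c * dt / dx) ** 2

# Initial condition: a triangular "pluck" a quarter of the way along the string.
x = np.linspace(0.0, L, N)
pluck = 0.25 * L
y = np.where(x < pluck, x / pluck, (L - x) / (L - pluck)) * 1e-3
y[0] = y[-1] = 0.0   # fixed ends (nut and bridge)

y_prev = y.copy()    # zero initial velocity
trace = []
for _ in range(2000):
    y_next = np.zeros_like(y)
    # Standard second-order update for the wave equation y_tt = c^2 * y_xx.
    y_next[1:-1] = (2 * y[1:-1] - y_prev[1:-1]
                    + r2 * (y[2:] - 2 * y[1:-1] + y[:-2]))
    y_prev, y = y, y_next
    trace.append(y[N // 2])  # "listen" to displacement at the string's midpoint

signal = np.array(trace)
print(signal.shape)  # 2000-sample displacement trace
```

    A real model must also couple the string to the body and the surrounding air, which is exactly the part the finite-element treatment described above handles; this sketch only captures the vibrating string itself.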

  • iZotope RX 12’s focus on improved accuracy and quality pays off
    Elements: $99
    Standard: $399 (update from RX 11 Standard $129)
    Advanced: $1399 (update from RX 11 Advanced $269)
    iZotope.com
    Despite impressing me upon its launch in May 2024, RX 11 had already gained a patina of age by the turn of the year thanks to the ever-growing crop of machine-learning-based audio tools hitting the market.

    Steinberg SpectraLayers Pro is RX’s closest competitor — it quickly trumped RX 11 with its own v11 release in June 2024, followed a year later by the even more impressive Pro 12. Also, given that one of RX’s biggest draws is its stem-splitting tools, the growth and quality of services such as LANDR Stems and LALAL.AI, not to mention native stem splitting within DAWs, had made RX 11’s stem splitting look – and sound – increasingly dated in comparison.
    This left users wondering how and when iZotope was going to respond, and what that response would look like. Well, wonder no more, because RX 12 is here. Is there enough in the update for it to regain its premier position? Let’s see…
    What’s new in iZotope RX 12?
    The obligatory user interface update expected of all software updates is, in RX 12’s case, fairly subtle. So much so that one aspect flagged by iZotope – namely a larger spectrogram – is so marginal that I wouldn’t have noticed had it not been pointed out. Nevertheless, features such as an ever-present monitor volume slider, resizable History panel, and a small reworking of the colour palette, make for a pleasing refresh.
    I’m more interested in the consequential stuff, though, and here there’s a lot more meat on the bones…
    The majority of processing modules now offer difference (delta) monitoring/processing, which reverses what is output from the module. For example, with Dialogue Isolate, engaging difference processing means the dialogue will be removed from the audio, leaving background noise. While this is handy as a way of extending the functionality of modules (sticking with the example, Dialogue Isolate also becomes Background Foley Isolate!), it’s perhaps most useful when configuring module parameters, making it easier to judge the impact of those parameters.
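    In signal terms, a difference (delta) output is just the original audio minus the module's processed output, i.e. everything the module removed. A minimal sketch, using a crude moving-average stand-in rather than any actual RX module:

```python
import numpy as np

# Hedged sketch of difference (delta) monitoring: the delta signal is the
# original audio minus the module's processed output. The moving-average
# "processor" below is a stand-in for illustration only.
rng = np.random.default_rng(0)
original = rng.standard_normal(48000)        # 1 second of noise at 48 kHz

kernel = np.ones(8) / 8                      # crude low-pass "processor"
processed = np.convolve(original, kernel, mode="same")

difference = original - processed            # the delta: what was removed

# The processed signal and its delta always sum back to the original.
assert np.allclose(processed + difference, original)
```

    This identity is why delta monitoring is so useful for judging parameters: whatever you hear in the delta is, by construction, exactly what the module is taking away.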
    Moving to specific modules, there’s an all-new Trim Silence processor that’s particularly useful when editing podcasts, voiceover tracks and field recordings, making it easier to move between sections of dialogue (or whatever you’ve recorded) during editing. It’s a big time-saver too and, unlike stripping silence in a non-destructive environment like a DAW or NLE, Trim Silence produces entirely new – and often much smaller – audio files, and so can significantly reduce the total file size of a project.
    Trim Silence in iZotope RX 12. Image: Adam Crute
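    The basic mechanism behind any silence trimmer can be sketched as a frame-by-frame RMS gate; this is a hypothetical illustration of the concept, not iZotope's algorithm, and the frame size and threshold are arbitrary assumptions:

```python
import numpy as np

# Illustrative silence trimming: drop frames whose RMS level falls below
# a threshold, keeping only the active audio. Frame size and threshold
# are arbitrary; a real tool would also apply fades and hysteresis.
def trim_silence(audio, frame=1024, threshold=1e-3):
    n = len(audio) // frame * frame          # truncate to whole frames
    frames = audio[:n].reshape(-1, frame)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    return frames[rms >= threshold].reshape(-1)

signal = np.concatenate([np.zeros(4096),     # leading silence
                         0.5 * np.ones(4096),  # "speech"
                         np.zeros(4096)])      # trailing silence
trimmed = trim_silence(signal)
# Only the active section survives, so the resulting file is smaller.
assert len(trimmed) == 4096
```

    Discarding the silent frames outright, rather than merely hiding them behind edit points, is what lets a destructive editor like RX shrink the files themselves.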
    Machine learning enhancements in RX 12
    Of course, RX’s biggest attractions are its various machine-learning-powered rebalancing and separation modules. New here is the Scene Rebalance module, which does for video production what Music Rebalance does for music production. The new module recognises dialogue, music and effects, allowing the volumes of these elements to be rebalanced in-place or split into separate audio streams. This it does with an impressive degree of accuracy and a minimum of audible artefacts. Moreover, as with all of RX’s separators, Scene Rebalance is 100% lossless – that is, if you separate a source and play it back alongside a phase-reversed copy of the original audio, all you hear is silence. Unfortunately, Scene Rebalance is only included in RX 12 Advanced, the pricing of which I’ll return to later.
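    The lossless claim can be expressed as a null test: sum the stems, add a phase-reversed copy of the original, and listen for silence. A toy sketch with synthetic signals (RX's actual separators are ML-based and far more sophisticated):

```python
import numpy as np

# Null test for lossless separation: if the separated stems sum exactly
# to the source, mixing them with a phase-inverted original leaves
# silence. Toy signals stand in for real dialogue, music and effects.
rng = np.random.default_rng(1)
dialogue, music, effects = (rng.standard_normal(48000) for _ in range(3))
original = dialogue + music + effects        # the mixed source

stem_sum = dialogue + music + effects        # sum of the separated stems
null = stem_sum + (-original)                # add a phase-reversed original

# Lossless separation means the null mix is silence.
assert np.allclose(null, 0.0)
```

    Separators that fail this test have smeared energy between stems or discarded it entirely, which is audible as artefacts when stems are recombined.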
    Music Rebalance and Dialogue Isolate, whose results had not kept up with those of competing stem separators, now deliver markedly higher quality than previously – as convincing as any I’ve heard! I’m somewhat disappointed that Music Rebalance can still only recognise vocals, drums, bass and ‘other’, but this is because iZotope’s focus was on improving the quality and accuracy of separation in RX 12, something it’s achieved effectively. Extended instrument recognition is very much on RX’s development roadmap, however.
    Scene Rebalance in iZotope RX 12. Image: Adam Crute
    The De-bleed and Breath Control modules have both received ground-up rebuilds to embed ML-based features within them. In De-bleed’s case, this allows the module to automatically isolate a variety of common sources from the mic-bleed captured from other instruments, thereby saving you from having to train the module yourself (although this mode is still available). With Breath Control, machine learning makes for a far faster setup, and much more accurate recognition and removal of unwanted breath sounds than previously. Once again, the classic operation mode is still available for those who want to use it.
    Alongside improved results, the overhauled ML processing is noticeably faster than previously, even when operating at the highest quality level. For example, splitting a 4’30” test mix in RX 11 took around 2’30”, but the same task in RX 12 took around 1’45”. The results sounded significantly better too. Not only does this mean less thumb-twiddling, it’s also allowed Music Rebalance and Dialogue Isolate to join the suite of real-time RX plugins that can be used natively in a DAW. Nice!
    De-bleed in iZotope RX 12. Image: Adam Crute
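    For a sense of scale, the quoted timings work out to roughly a 1.4x speedup:

```python
# Arithmetic on the timings quoted above for splitting a 4'30" test mix:
rx11 = 2 * 60 + 30      # RX 11: 2'30" = 150 seconds
rx12 = 1 * 60 + 45      # RX 12: 1'45" = 105 seconds
speedup = rx11 / rx12
print(f"RX 12 is roughly {speedup:.2f}x faster")  # ~1.43x
```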
    Stems View
    Another common gripe about RX’s stem splitting has been that separated stems are opened in their own tabs. Playback of those tabs could of course be synchronised, but editing meant a lot of fiddling and switching between different screens.
    RX 12 addresses this with a new Stems View that displays stems as lanes within a single tab. Lanes can be muted and soloed as needed, and selections – both time and frequency – affect all lanes simultaneously. When you need detailed control over a single stem, you can select that stem from a dropdown menu to switch to a standard full-window view of the audio. Not only is Stems View infinitely better than how previous RX versions handled things, it’s more natural and intuitive than the approach taken by SpectraLayers.
    Stems View is also massively useful when working with RX as an ARA plugin in your DAW. Annoyingly, though, RX 12 still only supports ARA in Apple Logic and PreSonus Studio One 7 / Fender Studio Pro 8. Extended ARA support is on iZotope’s to-do list, though.
    Stem Split View in iZotope RX 12. Image: Adam Crute
    What’s the difference between the different RX 12 editions?
    As previously, RX 12 comes in three editions. Elements is the most affordable, providing a set of six RX plugins for use in your DAW, although the standalone RX audio editor and modules are not included. Elements is useful for dealing with common problems like clicks and hums, and includes the Repair Assistant that combines various types of corrective processing into a single plugin.
    RX 12 Standard is solid value, including the standalone editor and the vast majority of RX modules along with their plugin counterparts. It only lacks the processors and modules aimed at TV and film post-production – Scene Rebalance, Ambience Match, and so on.
    These additional modules are only found in RX Advanced, but are they really worth an additional $1,000 over the cost of Standard? They’re impressive tools, for sure, and while they aren’t widely useful in a music production context, they are useful to podcasters, streamers, independent filmmakers, and many others who won’t have access to the big studio budgets this pricing assumes. Given that RX isn’t the only rooster in the henhouse, this premium may be costing iZotope sales.
    Nevertheless, Advanced is an incredibly powerful proposition, as is RX generally, and this latest version is a big step up from its predecessor. There remain some gaps in its capabilities compared to competing systems, such as the limited number of stem types recognised by Music Rebalance, but RX is very much back to the top of its game.
    Key features

    Spectral audio editor with advanced processing modules
    Many processing modules included as plugins (AU, AAX and VST3 formats)
    NEW Scene Rebalance, Stems View and Trim Silence
    REBUILT De-bleed and Breath Control
    IMPROVED Music Rebalance, Dialogue Isolate, Difference (delta) monitoring and processing, user interface and workflow
    Requires macOS Sonoma (14.7.x) and upwards; Windows 10 (22H2) or Windows 11 (24H2)
    ARA plugin compatibility with Apple Logic Pro and PreSonus Studio One 7 / Fender Studio Pro 8

    The post iZotope RX 12’s focus on improved accuracy and quality pays off appeared first on MusicTech.

    In the realm of machine learning technology, two years is a long time. Can iZotope’s RX 12 put a spring back into its step?