Reactions

  • Lucy Dickins, long-time agent for Adele and Mumford & Sons, to exit WME after seven years
    Current Co-Head Kirk Sommer will continue to lead WME’s Contemporary Music division.

  • Rhodes offers three classic virtual pianos with new Pianology plugin
    Rhodes has expanded its lineup of virtual pianos and keyboard instruments with Pianology, a collection of three “timeless pianos”: a concert grand, acoustic upright and vintage electric grand.
    Offering producers and composers access to instruments which are “rare and seldom encountered outside renowned studios, concert halls and private collections”, Pianology’s three software instruments were sampled by Rhodes’ engineering and sound design teams “with a focus on instant playability”.

    READ MORE: “Our own take on the iconic reed-based electric piano”: Rhodes packs the classic Wurlitzer into plugin form

    First of the three pianos included in Pianology is a Japanese Upright Piano based on a Yamaha U-series – known for its bright tone and compact, midrange-focused sound, and tailored to jazz playing and “understated songwriting”.
    Next up is a classic American Grand Piano, which boasts a broad dynamic range and balanced tonal profile, as well as harmonic depth and sustain that make it suitable for a vast range of playing styles.

    And finally, you’ve got the Japanese Electric Grand, a virtual electro-acoustic piano sampled from an instrument with real strings and pickups. This one’s reminiscent of the sound of 1980s pop, rock and electronic music, Rhodes says.
    Meanwhile, Pianology features a simplified set of controls to further shape the tone of your piano sounds, with global options including Timbre Shift, Global Tune, Mechanical Noise and Lid Position on the grand piano.
    There’s also an integrated Amp and Mic section, offering access to amplifier models and microphones derived from Rhodes’ V8 Pro plugin, plus compression, EQ, modulation, delay, stereo tremolo, panning and reverb controls.
    “Building on the worldwide success of V8 Pro, Wurli, and Anthology, Pianology marks the next chapter in our evolving Rhodes sound library,” said Dan Goldman, Chief Product Officer at Rhodes. 
    “With three carefully prepared and deeply sampled pianos, Pianology is designed for expressive playability, organic tone, and a close connection between player and instrument. Paired with our effects engine and sound-shaping tools, Pianology reflects Rhodes’ approach to tone and expressivity, placing the playing experience at the heart of the sound.”
    Pricing & availability
    Rhodes Pianology is available now in VST, AU and AAX formats. Until 7 April, you can get it at 30% off, so you’ll pay just £99.95 / $104.95 instead of £139.95 / $149.95.
    For more information, head to Rhodes. A number of demo sounds are also available on Rhodes’ official SoundCloud page.
    The post Rhodes offers three classic virtual pianos with new Pianology plugin appeared first on MusicTech.

  • The Crow Hill Company releases Icky Bass, a free Fender Rhodes Bass fuzz library
    The Crow Hill Company has added Icky Bass to its free Vaults collection. Icky Bass is more than just a cool name for a bass instrument. It’s actually named after the White Stripes album “Icky Thump.” The instrument uses recordings of a rare Fender Rhodes Piano Bass run through Jack White’s signature Knife Drop fuzz [...]

  • Noisebud Shape2 Advanced
    Shape2 Advanced is an extreme low-end shaper and compressor designed to give you precise control over the lowest part of the spectrum. Instead of traditional EQ or compression, Shape2 Advanced reshapes the behaviour of the bass itself. The core processing is based on a waveshaper that compresses and reshapes the low frequencies so they use the available headroom more efficiently while keeping the rest of the spectrum untouched.
    In version 1.50 the plugin became much more flexible. The waveshaper can now be turned off completely, allowing Shape2 Advanced to function as a dedicated low-end envelope shaper and limiter. This opens up a new workflow where you can design how the bass moves over time rather than just compressing it. The expanded envelope section includes Attack, Release, and Curve controls, giving detailed control over how the low frequencies react and recover. Together with the built-in low peak limiting and dry/wet blending, this allows anything from subtle tightening of the bass to extremely controlled low-end shaping. The original waveshaper workflow is still fully available, so the plugin can be used exactly like before or in completely new ways.
    Shape2 Advanced is particularly useful for mastering and mix bus work, where controlling the lowest frequencies can dramatically improve headroom, punch, and translation.
    Key Features
    – Waveshaper-based low-end compression and reshaping.
    – Optional waveshaper bypass for envelope-based low-end control.
    – Envelope shaping with Attack, Release, and Curve controls.
    – Low peak limiter for controlling maximum bass peaks.
    – Dry/Wet control for parallel processing.
    – Designed for precise low-frequency control without affecting the rest of the spectrum.
    Formats: Mac: VST3, AU. Windows: VST3.
    Demo video: https://youtu.be/iOfncZ8_on4
    How to Get Noisebud Plugins
    Noisebud plugins are distributed through Patreon. The idea is simple: you can either subscribe for access to almost everything, or purchase a single plugin.
    Patreon subscription – $6/month. Subscribing gives you access to almost the entire Noisebud plugin library. You do not need to stay subscribed: once you download something, it is yours to keep. If you want to download future updates later on, you simply re-subscribe for a month.
    One-time purchase. If you prefer not to subscribe, you can purchase individual plugins directly via Patreon: plugins cost $30 and Reaper scripts $15. A single purchase includes all minor updates up to the next major version; for example, buying version 5.00 includes free updates up to 5.99.
    Visit the Noisebud Patreon page.
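    The core trick, splitting off the low band, soft-clipping it, and recombining, can be illustrated with a generic sketch. To be clear, this is not Noisebud's actual algorithm, just the textbook form of low-band waveshaping, with an invented filter and test signal:

```python
import math

def lowpass_1pole(x, alpha=0.2):
    """Crude one-pole low-pass (alpha ~0.2 at a 1 kHz rate puts the cutoff near 35 Hz)."""
    y, lows = 0.0, []
    for s in x:
        y += alpha * (s - y)
        lows.append(y)
    return lows

def shape_low_end(x, drive=4.0, mix=1.0):
    """Soft-clip only the low band; the residual (everything above it) passes untouched."""
    lows = lowpass_1pole(x)
    highs = [s - l for s, l in zip(x, lows)]
    shaped = [math.tanh(drive * l) / drive for l in lows]   # gain-compensated tanh
    wet = [h + l for h, l in zip(highs, shaped)]
    return [(1 - mix) * d + mix * w for d, w in zip(x, wet)]  # dry/wet blend

# A loud 5 Hz "sub" at a 1 kHz sample rate: its peaks get tamed, freeing
# headroom, while content outside the low band would pass through unchanged.
sig = [0.9 * math.sin(2 * math.pi * 5 * n / 1000) for n in range(1000)]
out = shape_low_end(sig)
peak_in = max(abs(s) for s in sig)
peak_out = max(abs(s) for s in out)    # well below peak_in
```

    A real plugin would use a far steeper crossover and envelope-dependent gain, but the headroom argument is the same: the nonlinearity only ever touches the band the filter isolates.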

  • Galcher Lustwerk: “You can’t pinpoint the Lustwerk sound on any specific gear; I’m constantly switching it up”
    Twelve years on from 100% Galcher, the mixtape that quietly inspired a generation of underground house artists, Galcher Lustwerk isn’t interested in nostalgia. “I’m concentrating on the next release,” says the Cleveland-born producer and vocalist. “Not trying to chase any trends, anybody or group of people, and keep my head down.” That focus is palpable in his work, with a distinct sonic identity built around considered instrumentation and production.
    Vestibule, his first EP in two years, comprises three dance cuts that place you in a hazy, dimly-lit basement club. Galcher moves between woozy introspection and autobiographical rap, recording sounds from a collection of retro ROMplers in a small New York home studio that he describes as feeling like the Mother computer from Alien.
    In this Studio Files, Galcher walks us through the techniques that helped him craft the sound of Vestibule, tells us why he doesn’t rely on one piece of gear for the “Lustwerk sound”, and how a Pro Tools mentor taught him the art of reduction.
    Vestibule EP by Galcher Lustwerk
    Hey Galcher. Loving the sounds and refined minimalism of Vestibule. What’s your approach to selecting instruments and sounds, and being intentional about each one?
    The sounds on Vestibule were tracked from ROMplers I’ve gained slowly over the course of a few years — specifically, a Yamaha Motif ES Rack, E-Mu Ultra Proteus, and two Roland JV-1010s. All my favourite presets on everything are loaded up and linked to a power supply. I can simply turn everything on and start writing. I like to start with the Yamaha or the E-Mu, then multi-track the rest in.
    This is your first release in a couple of years. It’s a fast-moving scene; has anything changed for you in that time?
    I’m still making music and DJing. Not trying to chase any trends, anybody or group of people, and keep my head down.
    How do you think about the balance of bars and house beats when you’re writing? Is it a conscious decision, or does it just flow out?
    I like starting with no words. The instrumental can establish itself. I also like to repeat one verse over and over again. A Lustwerk track to me falls into the house tradition, not the hip-hop tradition, so repetition is fair game. Each beat tells its own story in due time; lyrics find their way in, or they don’t.
    Tell us a bit about your studio.
    My studio is in a small room in New York City. It reminds me of the Mother computer from Alien. There’s a single 12U rack, with a Novation MIDI keyboard and a Streamdeck. The Stream Deck is hot-keyed up with my favourite plugins, so I can look at the screen less. Keyboard and mouse swivel with the office chair. And as many bass traps as I possibly can, because the low end in this room is a mess, even with these little speakers (IK Multimedia MTM). My Yamaha WX5, which I use with the PLG150-VL on the Motif, sits in the corner. I’ve got a Subpac on the chair, which I mean to use, but keep tripping the cable and breaking adapters.
    What’s your latest gear purchase?
    RME Babyface Pro for live sets. I needed better gain performance plus the ability to EQ/ring out the mic before it hits Ableton. Since I like to keep the mic gain high, I deal with a lot of feedback issues in clubs. The Zoom interface I had before wasn’t cutting it. It’s difficult to replicate a spoken-word voice in a 100 dB environment.
    How do you see your sound and studio evolving in the next two years?
    I’ll have sold everything and got a whole other workflow! The GAS is never-ending. Next stop is an iZ RADAR. You can’t pinpoint the Lustwerk sound on any specific gear; I vary the sound by constantly switching my gear and plugins up.
    Were there any instruments or gear that were crucial to Vestibule?
    The Yamaha Motif ES and its two PLG150 expansion cards, AN (virtual analogue modelling) and VL (wind-controller-compatible sounds), were crucial, especially on the tenor sax solos for Wet Bulb and Vestibule. For Shorty Out, I used an AN sawtooth as a base layer, and combined it with three other pad presets and effects on the Motif. Once tracked in, I used VSTs sparingly, with FabFilter Pro-Q 4 for EQ, and Eventide H3000 or Acustica Firethepan to get extra space where needed.
    The piano on Wet Bulb is a nice contrast to the gritty synth bass. A lot of your other releases also have tension between the instrumentation and notes. Is this always intentional?
    I just start with a nice sound and go from there. Any contrast is serendipitous. I’ll gravitate towards playing certain keys, of course, which evoke a dramatic mood. The piano and orchestra sounds on Wet Bulb are from the Motif, and the synth bass is from a Waldorf Blofeld, which I’ve since sold.
    100% Galcher has been called an “all-timer” by fans online. How do you view that record now, and does the legacy weigh on you when you start a new project?
    It can feel like fighting against nostalgia. I’m concentrating on the next release while people are hung up because of personal preferences and what-not. I get it. I can only hope they feel something special and current with my new music.
    Do you have a dream piece of gear?
    Some exorbitantly expensive AD/DA converter like a JCF Latte, just for the hell of it. Dr. Dre’s known to use a Lavry converter for clipping but I don’t really clip stuff.
    What’s a music production myth you think needs debunking?
    Oversampling. They just want you to buy more RAM and upgrade your computer for no reason. If you can hear the difference between 2x and 8x oversampling, I feel bad for you.
    Who gave you the biggest lesson in your career? Can you tell us about how it impacted you?
    I credit Morgan Louis with taking my production to another level. He was already Pro Tools certified when I met him. When we worked on music together, I would add something, and he’d delete half of it. I’d add more, and he’d delete even more. He had a reductive way of working, and I learned to listen closer and find the foundation of the groove before filling things in.

  • Get Ink Slide, a $60 Kontakt slide guitar library, FREE on Audio Plugin Deals
    Audio Plugin Deals is offering Ink Slide by Ink Audio for free for the next two weeks. The library normally costs $60 and requires the full version of Native Instruments Kontakt. This is one of the better Audio Plugin Deals freebies I’ve seen in a while. They sometimes give away stuff that I wouldn’t necessarily [...]

  • How I built a live track from scratch using only the Roland TR-1000
    The Roland TR-1000 is a hulking, titanic piece of kit. But with that comes immense power and masses of hands-on control. Although it’s big, you could take this drum machine on the road and perform complete tracks without much else.
    Before I sent my review unit back to Roland, I wanted to see if I could build an entire track to perform live with the TR-1000. I used the built-in sounds and samples, and the various editing and performance tools this behemoth drum machine has: knob assignments, effect sends, master effects, filters, snapshots and step loop stutter edits.

    READ MORE: Roland’s TR-1000 might be the world’s greatest drum machine

    Below you’ll find details of the different elements I programmed and the choices I made, culminating in a final jam performance before I sadly parted ways with the machine. Hopefully, it will inspire anyone else looking to perform with the TR-1000 or any other programmable drum machine.

    Sequencing
    I first select sounds and then sequence them into a beat. I could opt to use a pre-defined kit, which has the bonus of having parameters already mapped to the main knob panel. However, I’m going to assign specific controls for live tweaking later, so I start with a blank slate and choose characterful sounds for each track.
    The fundamental beat uses a mixture of analogue drum sounds, sampled hits and a chopped percussion loop. Although the sequence is only 16 steps, I use the Cycle feature to make certain hits play on different repetitions. This adds subtle variation and makes the loop feel less repetitive. Alternatively, you could use VARI CHAIN to have up to 8 different 16-step patterns playing back to back.
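    The Cycle idea, a step that only fires on certain passes through the loop, is easy to picture in code. This is a generic illustration, not Roland's firmware, and the pattern data is invented:

```python
# Hypothetical pattern data: step index -> (which pass, out of how many).
# "(2, 2)" means the hit sounds only on the 2nd of every 2 passes.
pattern = {3: (1, 4), 11: (2, 2)}

def steps_fired(loop_index, n_steps=16):
    """Return which of the 16 steps trigger on this pass through the loop."""
    fired = []
    for step in range(n_steps):
        if step in pattern:
            which, out_of = pattern[step]
            if loop_index % out_of == which - 1:
                fired.append(step)
    return fired

# Across four passes, the same 16 steps yield a varying pattern:
# pass 0 -> [3], pass 1 -> [11], pass 2 -> [], pass 3 -> [11]
variation = [steps_fired(i) for i in range(4)]
```

    The payoff is exactly what the feature offers on the hardware: one stored sequence that doesn't repeat identically every bar.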

    Melodic sounds and effects
    Next, I add a few extra elements that include a simple single note bass, a pulsating synth sound, a stab chord, a menacing sustained bass, and a splashy noise hit. These elements can be brought in and out on top of the beat to build up a more complete-sounding track. To give a deeper and more polished sound, I also send some of these to the Reverb and Delay send effects.

    Volume throws
    The sustained synth and the splashy noise are overkill as continual sounds. However, they sound great with lashings of delay and short throws of the volume sliders. It can be useful to have a couple of tracks that play continually and can be thrown in when needed. Another option might be a vocal loop that can be chopped in like a DJ doing cuts.

    Balancing and mixing
    Volume balancing is important to get right and can be done with the faders or with adjustments in the Amp section. Ideally, you don’t want to be worrying about balancing on the fly by remembering the fader placements, so using the Amp gain is preferable.
    The TR-1000 is incredibly flexible when it comes to refining your sounds. Each track comes with its own EQ/filter, compressor and effects slot. With careful use of volume balancing, pan placement, carving out space in the low-end using the filter or EQ, and the effects, you can create a surprisingly mix-ready output. Effects like the spreader and chorus can give stereo width and 3D depth to mono sounds, and the drive and saturation can add character and thickness.

    Master effects, drive and filter
    The analogue drive can be used on the entire output to add some final harmonic richness, volume and grit. You could just send specific tracks to it, but I prefer using it to add glue to the entire output.
    There’s also an analogue filter that can affect the whole output and be set to LP, HP or BP. The bandpass cuts out too much signal, but the lowpass is great for resonant filtered disco build-ups. Alternatively, the highpass is a quick and easy way to reduce low end for an edit or a sparser section. I’ve opted to use the highpass as it works well with the Morph slider.
    When it comes to master effects, you have a lot to choose from. Chorus, flanger, and phaser can add familiar sounding character, while more experimental and glitchy options include DJFX, Scatter and Sideband Filter. Alternatively, you could choose additional distortion, bitcrush, compressor or transient effects to help shape the overall sound.
    It’s a shame that there’s no separate master compressor though. I decide to go with the Sideband filter set at 80% wet as it can be used for dramatic edit and build-up type sounds.

    Main Panel Knobs
    It’s hard to miss the 42 knobs that control the ten tracks of the TR-1000. Each one can be freely assigned to up to 4 different parameters, which is plenty of mappable control to keep you busy. I decide to use the bottom row for filtering and sound shaping, and the second row for send effects. Having a repeatable system like this helps with muscle memory. Rather than over-complicate things, I decide to only map a handful of parameters: Kick Drum Decay, Snare Drum Mix, 909 Hat Decay, Bass LP Filter and Delay Send, Synth Sound LP Filter and FREQSHIFT FX, and Synth Chord Stab.

    Morph slider
    The Morph slider is a fun way to create more complex macros that shift multiple parameters across the entire machine. You can save up to 16 different Morph sweeps, which are accessible using the 16-step buttons after you activate the Morph button. I program a filter sweep on the whole output that cuts the lows, but doesn’t completely thin out the track. I also shorten the kick decay and increase the reverb and delay amounts to create a more washed-out sound. You could potentially mess with the controls and tuning to create a completely different section of a track that can easily be switched back and forth.
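    Conceptually, a Morph sweep is a macro that blends every stored parameter between two snapshots as the slider moves. A rough sketch, with invented parameter names and ranges rather than the TR-1000's actual engine:

```python
# Two invented snapshots of a few parameters (not real TR-1000 ranges):
state_a = {"hp_cutoff": 20.0, "kick_decay": 400.0, "reverb_send": 0.10}
state_b = {"hp_cutoff": 350.0, "kick_decay": 120.0, "reverb_send": 0.55}

def morph(a, b, t):
    """Blend every parameter from state a toward state b as t goes 0 -> 1."""
    return {k: a[k] + t * (b[k] - a[k]) for k in a}

halfway = morph(state_a, state_b, 0.5)
# hp_cutoff rises, kick_decay shortens, reverb_send grows, all from one control
```

    One slider position therefore moves the highpass, the kick envelope and the send level together, which is what makes a single Morph throw feel like switching to a different section of the track.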

    Motion recording, step loop and snapshot
    The Motion Recording, Step Loop, and Snapshot functions provide different ways to add and perform edits.
    Motion Recording lets you record dial movements across the length of your loop. You could create a subtle sense of movement or do more dramatic moves that breathe life into static parts. You can choose if this is something that’s pre-recorded or created new for each performance/jam.
    Step Loop lets you perform stutter-like edits in real-time by hitting the steps that you’d like to loop. It could be a single snare hit that gets repeated, or 3 hits taken from different parts of the sequence that create a cool syncopated edit. You have to be a bit careful with this, as it can sound great to you but a bit awkward and gimmicky to the audience. With some practice, though, it can be a useful way to inject flair and detail into your performance.
    The Snapshot function lets you take a snapshot of the dials for a particular track and then save it to one of the 16 step buttons. This offers a different and more immediate way to flip sounds, and is also the best (current) way to get tracks to play chromatically, as you can set each key to play a different tuning for the instrument or sample.
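    The Step Loop behaviour described above, replaying only the steps you grab in the order you grab them, amounts to re-indexing the sequence. A toy sketch with an invented pattern, not Roland's implementation:

```python
# An invented 16-step pattern (None = rest):
sequence = ["kick", None, "hat", None, "kick", None, "hat", "snare",
            "kick", None, "hat", None, "kick", "snare", "hat", None]

def step_loop(selected, repeats):
    """Replay only the selected steps, in the order they were grabbed."""
    return [sequence[s] for s in selected] * repeats

single_stutter = step_loop([7], 4)      # one snare hit repeated four times
chop = step_loop([0, 7, 10], 2)         # a syncopated kick/snare/hat figure
```

    Picking steps from different parts of the bar is what produces the syncopated feel: the looped order no longer matches the written order.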

    Final jam
    With all these control options in place, I finally get to have a jam and get a feel for how the parts interact. I instantly find that I’m coming up with structures and edits that I would never have thought of when in front of a DAW. It also leads to ideas for extra parameters that could be controlled or refined.
    The TR-1000 offers a flexible collection of tools for performance, so there are many other ways in which it could be tailored to your play style. With time, effort and patience, you could become a drum-machine performance master.

  • Sonora Cinematic introduce Pure Steel String
    In contrast to Pure Nylon, which was designed with warmth and intimacy in mind, Pure Steel is said to sound “bright, crispy and resonant”, capturing the unmistakable sound of a steel-string acoustic guitar.

  • “Music and art should not be easy”: Damon Albarn is writing the score for a movie about OpenAI – and now believes it “isn’t possible for AI to make soulful music”
    Damon Albarn has confirmed that he’s writing the score for Artificial, Luca Guadagnino’s upcoming movie about OpenAI and the creators of ChatGPT.
    In a recent interview with The Needle Drop, the Blur and Gorillaz frontman opens up about his work on the film’s score and the wider role of AI in music.

    READ MORE: “AI, when done right, isn’t here to replace musicians”: Charlie Puth joins AI music platform Moises as Chief Music Officer

    “I’ve been quite involved with AI because I’ve been making a score for a movie called Artificial at the moment, which is all about the founders of ChatGPT,” Albarn says. “So I’ve had a lot of time to think about it.”
    “Music and art should not be easy. Once it becomes easy, it’s meaningless. In a way, it’s the things you don’t see or hear that make it art. You know what I mean, in a way. It’s a weird intuition that the listener has that picks up on the journey that the artist has been through to make that particular thing with the tone of the voice, etc. You can’t replace that.”
    The frontman also reflects on the limitations of AI in music, which the project has made even clearer to him: “I think there was a foolish moment where the big corporations thought AI was going to make their life easier and more money,” Albarn adds. “And well, that is not the case. It’s just going to… I don’t think it’s possible for AI to make soulful music.”
    Albarn first confirmed his involvement in Artificial earlier this year during an interview with Uncut. At the time, he revealed he would be “singing some songs and writing some electronic and orchestral backing” for the film.
    The musician’s interest in AI isn’t new. He has previously weighed in on posthumous AI releases, including projects like the Beatles’ Now and Then, noting that, “if enough people are interested, there could be hundreds of my songs released after my death, including songs that I would never have wanted to release.”
    Last month, Gorillaz released their ninth studio album The Mountain. The band will also embark on their first full North American tour in nearly four years in support of the record.

  • Apple Music’s new “Transparency Tags” aim to flag AI-generated content – but labels have to self-report
    Apple Music is rolling out what it calls “Transparency Tags,” a system for flagging AI-generated content on its platform. Before celebrating the dawn of radical honesty in streaming, though, there’s a catch: the system appears to rely largely on record labels choosing to actually use it.
    On Wednesday (4 March), Apple sent a newsletter to industry partners announcing that AI disclosures would now be a “delivery requirement” for content submitted to the service.
    The tags cover four categories: Artwork, Track, Composition, and Music Video, each intended to indicate when AI contributed a “material portion” of the work.

    READ MORE: Apple Music demonetised 2 billion fraudulent streams in 2025 – that’s nearly $17 million in royalties

    “Proper tagging of content is the first step in giving the music industry the data and tools needed to develop thoughtful policies around AI,” the newsletter states. “We believe labels and distributors must take an active role in reporting when the content they deliver is created using AI.”
    “These new tagging requirements provide a concrete first step toward the transparency necessary for the industry to establish best practices and policies that work for everyone.”
    Exactly how that transparency will be enforced remains unclear.
    Apple’s technical specification describes the tags as “optional” – at least for now – and the system does not appear to include any visible enforcement mechanism or verification process. “If omitted, none is assumed,” the notes state.
    In practice, that likely means labels can tag AI-generated elements – be it a drum loop, lyric line, or album artwork – if they choose to disclose it. If they don’t, nothing changes.
    Given the sheer scale of AI-generated uploads, that limitation could prove significant. Last September, Spotify introduced similar AI disclosure labels, alongside a policy allowing the removal of tracks with unauthorised AI-generated voice clones.
    Other platforms have also taken a more proactive approach. Deezer, for one, implemented an automated AI-detection system more than a year ago. The company says it now receives over 60,000 AI-generated songs every day, and its detection tools have identified more than 13.4 million AI-created tracks on the service.
    Nevertheless, Apple’s Transparency Tags represent a step toward clearer disclosure – though relying on self-reporting alone is unlikely to slow the flood of AI-generated music.

  • When I played the Fallout games, I noticed there is unfortunately no perfect ending, because... war never changes

  • SEC ends case against Justin Sun with $10M settlement
    The Securities and Exchange Commission has ended its long-running fraud and securities violation lawsuit against Justin Sun in a $10 million settlement.

  • Cluely CEO Roy Lee admits to publicly lying about revenue numbers last year
    The $7 million annual recurring revenue figure that Cluely CEO Roy Lee shared last summer was a lie, Lee admitted on Thursday on X.

  • Recreating the forms and sounds of historical musical instruments
    What if there were a way to create accurate replicas of ancient and historical instruments that could be played and heard? In late 2024, senior MIT postdoc Benjamin Sabatini wrote to MIT Professor Eran Egozy to ask just that, proposing a collaborative research project between the Center for Materials Research in Archeology and Ethnology (CMRAE) and the MIT School of Humanities, Arts, and Social Sciences (SHASS) to CT scan, chemically and structurally characterize, and produce replicas of the ancient and historical musical instruments housed at the Museum of Fine Arts, Boston (MFA).
    He was soon introduced to Mark Rau, a newly hired MIT professor in music technology and electrical engineering. Sharing similar interests, the two together contacted Jared Katz, the Pappalardo Curator of Musical Instruments at the MFA, to propose a cross-institutional project. Rau, an avid museum-goer, particularly of musical instrument collections, has always wanted to hear the instruments on display, commenting that “my biggest qualm is often there are no accompanying audio examples. I want to hear these instruments; I want to play these instruments.” Katz, fortuitously, specializes in ancient musical practices and has developed a technique for 3D scanning and printing playable replicas of ancient instruments for his research. He had long dreamed of having access to a CT scanner to better understand how ancient instruments were constructed. The MFA was also an ideal institution for the project: according to Katz, the MFA’s musical instrument collection began in 1917 and has since grown to just over 1,450 instruments from six continents, with the earliest dating to approximately 1550 BCE. Soon after, Rau and Sabatini applied to and were funded by the MIT Human Insight Collaborative (MITHIC) with Katz’s support.
    The team of five, including Nate Steele, program associate in the MFA’s Department of Musical Instruments, and MIT postdoc Jin Woo Lee, now meets regularly at the MFA to scan and acoustically measure the instruments.
    Using a CT scanner from Lumafield, a company founded by MIT alumni, the team measures both internal and external dimensions. When combined with non-destructive vibration and acoustic testing and numerical simulations, these measurements are used to digitally replicate the instruments’ sound accurately. “For example, if we’re trying to recreate a violin, we can use an impact hammer — a very small hammer with a transducer in it — so we’re imparting a known force signal into the instrument, and then measure the resulting [surface] vibrations with a laser Doppler vibrometer,” says Rau.
    The team then uses 3D-printed copies of the instruments to create plaster mold negatives, which are then slip-cast to replicate the instruments physically, as with the Paracas whistle, a ceramic artifact from Peru dating from 600-175 BCE. The team demonstrated a playable replica at the MITHIC Annual Event in November.
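    One ingredient of this kind of vibration testing, recovering a resonant mode from a measured ringdown, can be sketched with a toy model: a single damped oscillator with invented parameters, not the team's actual measurement pipeline:

```python
import math

# One vibration mode modelled as a damped oscillator (numbers invented):
f0, zeta, fs = 440.0, 0.01, 48000   # resonance (Hz), damping ratio, sample rate
wd = 2 * math.pi * f0 * math.sqrt(1 - zeta ** 2)   # damped frequency (rad/s)
decay = 2 * math.pi * f0 * zeta                    # exponential decay rate (1/s)

# The "measured" ringdown after an ideal hammer tap (unit impulse response):
x = [math.exp(-decay * n / fs) * math.sin(wd * n / fs) for n in range(fs // 10)]

# Recover the modal frequency from the spacing of upward zero crossings:
crossings = [n for n in range(1, len(x)) if x[n - 1] < 0 <= x[n]]
avg_period = (crossings[-1] - crossings[0]) / (len(crossings) - 1) / fs
f_est = 1 / avg_period    # lands within a fraction of a hertz of f0
```

    A real frequency-response measurement instead divides the vibrometer signal's spectrum by the hammer's force spectrum to map out every mode at once; the sketch only shows why a known excitation makes an instrument's resonances recoverable.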
They also intend to build replicas of wooden instruments using old-growth wood in collaboration with local luthiers.Sabatini, a member of CMRAE, sees the humanistic implications of the project and the importance of studying the instruments from a materials and archaeological perspective, which is to explore and understand the cultures that were involved in their production, stating that “[from our] perspective, we want to understand the people who made these instruments through both the materials that they’re made of, but also the sound that they have.”With his team of Undergraduate Research Opportunities Program (UROP) students, including Irene Dong and Mouhammad Seck, Sabatini reproduced several ancient and historical clay instruments in the CMRAE archaeology lab, including the Paracas whistle, which was showcased at the MITHIC event.So far, the team has scanned approximately 30 instruments from the MFA’s collection, with the goal of scanning at least 100 instruments over the duration of the project, documenting them, and supporting future study. The data from the scans are used to reconstruct the instruments, both physically and in software, matching their physical form and sound.“They’re both visually beautiful and striking objects, but they are meant to be heard,” Katz says. 
Katz adds that his “hope for this research is to provide us with a way to protect the original instrument while still allowing them to be heard and experienced in the way they were intended to be experienced.”

Katz also sees potential for outreach and community engagement through these playable replicas, a goal written into the project’s proposal: “[I]t shows how powerful it can be when art and science come together to create new understandings and to help us reactivate these instruments in exciting ways.”

Students have also been drawn to the project. Victoria Pham, a second-year undergraduate in materials science and engineering, is working with Sabatini as a UROP student. Pham was “drawn to this project because I love history,” she says. “I love wandering through the halls of the MFA and immersing myself in the descriptions of paintings and artifacts. I find learning about ancient peoples to be fascinating, especially in how their legacy affects us today.”

Her work involves finite element modeling of a Veracruz poly-glabular flute, dating to 500-900 CE, to investigate its acoustics non-destructively. “[M]y work is fulfilling because I was able to learn new software and problem-solve to improve my model, which was very satisfying,” she notes. She hopes that “contributing to the new, budding field of music technology scratches an itch in my brain, and I hope that my work inspires others to get interested in archaeology, material science, or music technology.”

Alexander Mazurenko, a second-year undergraduate majoring in music and mathematics, has also been working on the project.
He began last summer and continued during this year’s Independent Activities Period in January. Mazurenko notes that the project has furthered his interdisciplinary education at MIT: “[T]he opportunity to participate in this UROP with Professor Rau was the perfect chance to begin to work in the intersection of my passions.” His work and Pham’s will be presented at upcoming conferences and are expected to produce academic papers under the guidance of Sabatini and Rau.
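Full finite element models like the one Pham built resolve a flute's geometry in detail; for intuition, the lowest resonance of a whistle-like cavity can be roughed out with the classic lumped-element Helmholtz formula f = (c / 2π) · sqrt(A / (V · L_eff)). A sketch with purely illustrative dimensions, not measurements of the Veracruz flute or any MFA artifact:

```python
import math

def helmholtz_hz(neck_area, cavity_volume, neck_length, c=343.0):
    """Lumped-element Helmholtz resonance f = (c / 2*pi) * sqrt(A / (V * L_eff)),
    with an end correction L_eff = L + 1.7*r applied to the neck length."""
    r = math.sqrt(neck_area / math.pi)
    l_eff = neck_length + 1.7 * r
    return (c / (2 * math.pi)) * math.sqrt(neck_area / (cavity_volume * l_eff))

# Illustrative dimensions only: 5 mm diameter mouth hole,
# 10 cm^3 chamber, 4 mm wall thickness.
f = helmholtz_hz(math.pi * 0.0025**2, 10e-6, 0.004)
print(f"estimated resonance: {f:.0f} Hz")   # roughly 840 Hz
```

The lumped model ignores wall compliance and higher modes, which is exactly what the finite element approach recovers, so a quick estimate like this mainly serves as a sanity check on a full simulation.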

    Through an interdisciplinary collaboration between MIT and Boston's Museum of Fine Arts, researchers are creating playable physical and synthesized replicas of historical and prehistoric musical instruments.

  • Pr.Germux Speaker Diarization Pro

    Automatically split mixed-speaker audio into separate tracks, right inside your DAW. Transform any mono or stereo recording into isolated speaker stems, subtitles, and timeline files for podcasts, interviews, post-production, and research workflows. With a single plug-in instance, Speaker Diarization Pro uses embedded diarization model assets to detect speaker boundaries and export per-voice outputs, saving hours of manual editing.

    Key Features

    - Advanced speaker segmentation: choose the number of speakers, from 1 to 20, or enable Auto mode for automatic speaker-count detection.
    - Expanded input formats: Pro supports WAV, MP3, AIFF/AIF, FLAC, and OGG; Basic supports WAV only.
    - Higher speaker-identity accuracy: Pro uses full 512-dimensional speaker embeddings, a 167% richer representation than Basic's 192 dimensions, removing the earlier 63% embedding truncation. In practice, diarization quality is more stable on difficult multi-speaker recordings.
    - Cleaner turns: adjust sensitivity, minimum segment length, and merge gap for better speaker-boundary behavior.
    - Hardware modes: run Auto mode (GPU when available, with CPU fallback) or force CPU-only mode.
    - Multi-export workflow: export WAV stems, SRT subtitles, and a CSV diarization timeline in one run.
    - Fully local processing: runs inside your DAW with no cloud upload and no external app round-trip.

    Pro vs Basic (Quick Contrast)

    | Capability | Basic | Pro |
    | --- | --- | --- |
    | Input formats | WAV only | WAV, MP3, AIFF/AIF, FLAC, OGG |
    | Max speakers | up to 10 | up to 20 (+ Auto mode) |
    | Exports | WAV stems | WAV stems + SRT + CSV |

    How It Works

    1) Copy your Speaker Diarizer folder to the system VST3 folder (Windows 64-bit: C:\Program Files\Common Files\VST3\; macOS: /Library/Audio/Plug-Ins/VST3/), or point your DAW directly at the plug-in folder.
    2) Open the Speaker Diarization plug-in in your DAW.
    3) Browse to your recording in WAV format and choose the number of speakers in it.
    4) Adjust sensitivity, minimum segment length, or expected speaker count.
    5) Speaker stems are exported automatically to the root folder.

    System Requirements

    - Windows 10 or later (64-bit or 32-bit), or macOS 10.15+ (Intel or Apple Silicon).
    - A DAW supporting VST3 (Audition supports only effects, not instruments).
    - CPU with SSE4.1 or later (most CPUs since 2010); optional compatible GPU for accelerated Auto mode.
    - ~100 MB disk space for the plug-in and model files.

    What's Included

    - Speaker Diarization Pro.vst3 (x86, x64, arm64).
    - ONNX models (.onnx) pre-optimized for real-time use.
    - Runtime components required by the plug-in.
    - Lifetime license with free minor updates.

    Licensing & Support

    Perpetual license: purchase once, use forever. Email support: pr.germux@gmail.com.

    Take your podcast, interview, and post-production workflow to the next level: stop manual chopping and let Speaker Diarization Pro do the hard work. All sales are final, and no refunds will be issued due to the digital nature of this product. If you encounter any issues or need assistance, contact pr.germux@gmail.com.
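The plug-in's internals aren't documented beyond the embedding sizes quoted above, but embedding-based diarization typically works by embedding short audio segments and clustering them by cosine similarity, so segments from the same voice share a label. A toy sketch of that idea, with a hypothetical threshold and 4-dimensional vectors standing in for real 512-dimensional speaker embeddings:

```python
import numpy as np

def cluster_speakers(embeddings, threshold=0.35):
    """Greedy clustering of per-segment speaker embeddings by cosine
    distance: a segment joins the first cluster whose reference vector
    is within `threshold`, otherwise it starts a new speaker."""
    centroids, labels = [], []
    for e in embeddings:
        e = e / np.linalg.norm(e)                     # unit-normalize
        dists = [1 - float(e @ c) for c in centroids]  # cosine distances
        if dists and min(dists) < threshold:
            labels.append(int(np.argmin(dists)))
        else:
            centroids.append(e)                        # new speaker
            labels.append(len(centroids) - 1)
    return labels

# Toy "embeddings": two distinct voices with small perturbations.
rng = np.random.default_rng(0)
a = np.array([1.0, 0.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0, 0.0])
segs = [a, a + 0.1 * rng.normal(size=4), b, b + 0.1 * rng.normal(size=4), a]
labels = cluster_speakers(segs)
print(labels)  # → [0, 0, 1, 1, 0]
```

Higher-dimensional embeddings separate similar voices more reliably, which is the intuition behind the 512-vs-192 comparison in the listing; production systems usually use more robust clustering (e.g. agglomerative or spectral) than this greedy sketch.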