Reactions
- in the community space Music from Within
I imagine this is quite a logical thing, and generally people don't want to listen to #music that is simply and automatically generated by #AI machines. There is some good stuff for #creativity support, and it is interesting in general as well. But I think it is good to smartly combine the analog, digital, and AI worlds... or not... there is no one right answer.
- in the community space Music from Within
Music is Getting Physical Again (in the Age of AI) - AlgoRhythms 2026
What if the most surprising thing in music right now isn't what AI is creating, but what fans are reaching for instead? This week we bring highlights from AlgoRhythms 2026: Olivia Jones of MIDiA Research on why...
- in the community space Education
Analog vs. digital synthesizers: What’s the difference and which should you choose?
Learn about the strengths and limitations of analog vs. digital synthesizers, and when you'd want to reach for each.
Analog vs. Digital Synthesizer - Blog | Splice
splice.com
RealOpen and TRON verify $9.4M in USDT for crypto-enabled real estate purchases
RealOpen, the leading platform for buying real estate with crypto, today announced the conclusion of its collaborative "Fast Moves, Fast Payments" Holiday Campaign with TRON.
cointelegraph.com
Google gains 25M subscriptions in Q1, driven by YouTube and Google One
Google added 25M paid subscriptions in Q1, reaching 350M total, as YouTube and Google One grow.
techcrunch.com
Using a VT-100 Today
You may not know what an ADM-3, a TV910, or an H1420 are, but you have probably at least heard of a VT-100. They are all terminals from around the same era, but the DEC VT-100 is the terminal that practically everything today at least somewhat emulates. Even though a real VT-100 is rare, since it defined what have become ANSI escape sequences, most computers you've used in the last few decades speak some variation of the VT-100's language. [Nikhil] wanted to see if you could use a VT-100 for real work today.
While the VT-100 wasn't a general-purpose computer, it did have an 8080 inside. It only had about 3K of RAM, which was enough to act as a serial terminal. A USB serial port and a terminal with modern Linux: how hard could it be?
As it turns out, there were a few issues. macOS apparently assumes terminals can take data at 9600 baud with no handshaking. The slow line speed also means that any application that assumes redrawing the whole terminal is fast will be sorry for that choice.
Of course, there are commands modern VT-100-like terminals accept that the original didn't. However, as you'll see in the post, all of these things can either be lived with or solved.
It is easy to make your own VT-100 replica. While the VT-100 may seem simple today, it was a marvel compared to even older terminals.
hackaday.com
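As a concrete taste of the "language" the VT-100 defined, here is a minimal sketch of a few of the escape sequences modern terminals still honor. The helper names are my own; only the byte sequences themselves come from the VT-100/ANSI lineage.

```python
# A few VT-100-era control sequences that modern terminal emulators still
# understand. ESC (0x1b) starts each sequence; "[" begins a CSI sequence.
ESC = "\x1b"

def clear_screen() -> str:
    return ESC + "[2J"                  # ED 2: erase the entire display

def move_cursor(row: int, col: int) -> str:
    return ESC + f"[{row};{col}H"       # CUP: cursor to row;col (1-based)

def reverse_video(text: str) -> str:
    return ESC + "[7m" + text + ESC + "[0m"   # SGR 7 on, SGR 0 reset

if __name__ == "__main__":
    # Clear the screen, home the cursor, and print highlighted text.
    print(clear_screen() + move_cursor(1, 1) + reverse_video("VT-100 says hello"), end="")
```

Any terminal that "somewhat emulates" a VT-100, from xterm to the macOS Terminal, will interpret these same bytes.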
- in the community space Tools and Plugins
Audio Fusion Bureau releases RoomDiY, a FREE acoustic room simulation plugin
From developer Audio Fusion Bureau comes RoomDiY, a free acoustic room simulation plugin for macOS and Windows. RoomDiY offers advanced real-time acoustic modelling and room analysis. In short, the plugin allows you to design the ideal acoustic space for any given project. We’ve covered many convolution reverb plugins that offer impulse responses of real-world spaces, [...]
bedroomproducersblog.com
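For background on the convolution reverb plugins mentioned above: applying a room's recorded impulse response (IR) to a dry signal is literally a convolution. A minimal sketch, using a toy decaying-noise burst as the IR rather than a measured space:

```python
import numpy as np

# Toy convolution-reverb sketch: the "room" is captured as an impulse
# response (IR); convolving any dry signal with the IR plays it in that room.
rng = np.random.default_rng(0)
ir = rng.standard_normal(2000) * np.exp(-np.linspace(0, 8, 2000))  # fake IR

dry = np.zeros(4000)
dry[0] = 1.0                        # a single click as the dry signal

wet = np.convolve(dry, ir)          # convolution applies the "room" to the click
wet = wet / np.max(np.abs(wet))     # normalize to avoid clipping
```

Real plugins do the same operation with measured IRs and FFT-based (partitioned) convolution for speed, but the principle is this one line of `np.convolve`.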
- in the community space Music from Within
UMG generated $3.39 billion in Q1, up 8.1% YoY – driven by BTS, Olivia Dean, Taylor Swift, and more
Universal Music Group has published its Q1 2026 results for the three months ending March 31.
www.musicbusinessworldwide.com
- in the community space Music from Within
Recording industry Renaissance man David Goggin (aka Mr. Bonzai) passes at 78
Music Connection was saddened to learn of the passing of David "Mr. Bonzai" Goggin this week:
David Goggin (often known by his pen name “Mr. Bonzai”), whose journalism, photography, visual art, and advocacy chronicled the golden age of recording studios, has died peacefully after a valiant fight with two cancers and a stroke. He was 78.
David is survived by his wife of 42 years, acclaimed artist Keiko Kasai, with whom he shared a long and intimate personal and artistic partnership. She was the muse for over 1,000 of his drawings and portraits.
A true Renaissance man, Goggin was an accomplished artist, writer, photographer, journalist, filmmaker, and poet. He was best known for his monthly interviews with producers, engineers, and musicians for Mix magazine and later EQ magazine from the late 1970s through the 1990s. He produced over 250 interviews for both of these magazines, offering quirky, insightful, and vivid portraits of studio life, where some of the era's most iconic albums were recorded. His work documented the voices of producers, engineers, and session musicians often overlooked in mainstream music reporting.
"I just kind of fell into it. I was always around music," Goggin told podcaster Daniel Keller. "I wasn't thinking about a career; I was just doing what I loved. Suddenly, I'm in the studio with these legends, documenting them making their music. This became my life—capturing these moments. I realized I had a front-row seat to history."
Born in Kingston, New York in 1947 to cartoonist Edward James Goggin and Anna Marie Farrell, David Goggin graduated from the University of California at Irvine (UCI) with a degree in English Literature. After producing light shows for concerts by Janis Joplin and Buffalo Springfield at UCI, he spent a year studying abroad at the University of Edinburgh and traveled extensively in the UK, where he met John Lennon in 1968-1969, and witnessed a session where The Beatles recorded “I Am the Walrus.” This experience ignited Goggin’s lifelong passion for the craft of recording and the people behind it.
While at UCI, Goggin studied drawing with David Hockney; it was a pursuit he continued throughout his lifetime. Building from his drawing technique, his art practice expanded to include delicate wire sculptures that are widely collected by an eclectic group of Hollywood luminaries including Norman Lear.
Goggin started his career in media in the late sixties, hosting a late-night comedy radio show in Montreal. When the show was cancelled, David returned to Orange County and began work in the recording industry as the studio manager at the Lyon Recording Studio, while doing publicity for an affiliated company, Lyon Lamb Video Animation Systems.
Goggin's first break as a music journalist came with the then-startup Mix magazine in 1979, where editor and soon-to-be lifelong friend David Schwartz invited him to write a monthly column about the pressured, offbeat life inside a small Orange County recording studio. Writing under the pen name Mr. Bonzai, his columns became a staple of the magazine, evolving into his first book and the popular Lunching with Mr. Bonzai series. Over his career, Goggin wrote more than 1,000 articles and interviews for major publications in the U.S., Europe, and Asia, including Rolling Stone, The New York Times, Billboard, The Hollywood Reporter, and Sound & Recording Japan.
Among his many skills was his ability to elicit brain-scratching quotes from pressured artists. Film composer CJ Vanston called him "the mother of all flies on the wall." Suzanne Ciani said he was "always a charming and clever centerpiece at any industry convention," while "Weird Al" Yankovic said that Mr. Bonzai "got inside my mind when I wasn't looking." Graham Nash observed that his greatest talent was "being invisible," and George Massenburg described him simply as "curiosity and joy."
Many of his articles featured his award-winning photography, establishing him as Los Angeles' preeminent recording studio photographer. The sight of Goggin, in his pork pie hat, metallic glasses, lanyard Montblanc fountain pen, and multi-colored shirt, working booths at industry conventions with a Leica camera and ladder in hand, made him one of the most recognized figures in the pro-audio industry.
Goggin’s early studio stories were compiled into his first book, Studio Life: The Other Side of the Tracks (1984), and his life’s work included seven more books: Santa’s Secret Sled (1980), co-written with Bruce Lyon; Hal Blaine and The Wrecking Crew (1990), co-written with legendary session drummer Hal Blaine; The Sound of Money (2000), co-written with his friend and client Chris Stone; Faces of Music (2006); Music Smarts (2009); and John Lennon’s Tooth (2012).
In 2025, he co-authored Buzz Me In: Inside the Record Plant Studios with music journalist Martin Porter, reconstructing the wild and innovative history of Record Plant Studios in New York, Sausalito, and Los Angeles, where Goggin worked as a press agent.
In addition to his work at Record Plant, Goggin collaborated with the studio’s owner Chris Stone on industry advocacy groups such as SPARS and the World Studio Group. He co-founded, with producer/engineer Ed Cherney and Stone, the Music Producers Guild of the Americas, which later became the Producers & Engineers Wing of the Recording Academy.
He was also active in the National Association of Music Merchants (NAMM) community, producing conference sessions with audio-industry pioneers and hosting the Technical Excellence & Creativity (TEC) Awards. He appeared on NAMM's TEC Tracks stage in January 2026, with Devo frontman Mark Mothersbaugh and producer Bob Margouleff to discuss the making of the 1980 hit "Whip It" at Record Plant.
Mothersbaugh once called him "a master of modern music photojournalism," obliquely adding that "Mr. Bonzai is the future of the past."
Goggin's company, Communication Arcs, provided PR and photographic services to leading pro-audio manufacturers and recording studios, including Sony, Telefunken, Sommer Cable, Ocean Way Recording, United Recording, and Bernie Grundman Mastering.
For half a century, David Goggin’s work compiled the audio and visual history of the recording studio era, bridging the early creative chaos of the analog studio age and the digital birth of personal music production, always focusing the lens on the people and technology behind the scenes that made the music happen.
"David 'Mr. Bonzai' Goggin was a friend of mine, and to countless others in the Pro Audio/MI industries," says MC publisher Eric Bettelli. "Dave's contribution to our industry spans decades. He was a top-notch professional music journalist, publicist and author, whose work speaks for itself. He just never missed a beat. And most of all, Dave was one of the good guys, who was always eager to share his incredible talent with all. A real Mensch. Dave, RIP, and hope to see you again on the other side."
www.musicconnection.com
- in the community space Tools and Plugins
No Type No Tag Beats Stuttermation
Stuttermation is a dynamic, timeline-based stutter, glitch, and time-manipulation multi-effect built for precise rhythmic audio editing inside a DAW-synced plugin. Instead of relying on MIDI triggers or static LFO patterns, Stuttermation lets you draw audio manipulation events directly onto a timeline and shape each moment independently.
The plugin is built around Blocks. Each block can have its own rate, buffer behavior, gate shape, pitch movement, probability, direction, filtering, drive, volume, pan, and envelope curves. This makes it possible to chain tight stutters, halftime chops, reverse cuts, pitch ramps, gated 808 tail edits, accelerating glitches, distorted rhythmic effects, and full-loop transformations inside one continuous sequence.
Main Features
Timeline-Based Block Sequencing: Draw, resize, duplicate, delete, and arrange stutter blocks on a DAW-synced timeline with customizable snap settings, zooming, smooth scrolling, and an interactive minimap. The timeline is capped at 500 bars for stable large-session behavior.
Per-Block Slice Engine: Each block renders through a strict slice-local playback core, so the block's rate controls slice count while pitch, halftime, reverse, and buffer length shape what happens inside each slice without changing the number of slices.
Dynamic Rate Morphing: Set independent Start and End rates for a block, from slow rhythmic chops to rapid glitch bursts, for smooth accelerations, decelerations, and evolving stutter movement.
Musical Buffer Length Control: Choose Match Rate or shorter buffer lengths per block. Shorter buffers play only that portion of each slice, then smooth to silence without stretching or repeating the audio, preserving pitch and transient integrity.
Advanced Audio Grabbing: Choose how each block captures audio: Start of Block for classic stutter sampling, Moving Playhead for live granular behavior, or Fixed Offset for repeatable delayed capture.
Gate Shape Modes: Shape every slice with dedicated gate modes: Truncate for tight cuts, Fade for smoother exits, and T-Safe for transient-conscious slicing. Gate percentage, Attack, Release, Reverse, and Halftime are all set per block.
Bass and 808 Tail Handling: Designed to work cleanly on sustained low-end material, including 808 tails and bass notes, with adaptive entry smoothing, safer slice transitions, and pitch-aware rendering.
Transient-Aware Pitch Rendering: Pitch ramps and pitch-shifted blocks use improved slice handling to preserve punch on transient-heavy material while keeping sustained bass and 808 content stable.
High-Quality Interpolation: Includes multiple interpolation modes, including an upgraded HQ sinc mode for smoother pitch movement, cleaner high-rate stutters, and improved resampling quality.
Per-Block Curve Envelopes: Draw automation curves directly onto blocks with adjustable tension for ease-in/ease-out movement. Available lanes include Volume, Pitch, Pan, High-Pass Filter, Low-Pass Filter, and Overdrive.
Envelope Workflow Tools: Copy and paste envelope shapes between lanes, apply built-in shapes such as ramps, saws, squares, and pulses, and use slice-aligned visual guides for faster rhythmic editing.
Drive and Distortion Control: Shape distortion per block with the Overdrive envelope, optional drive-only 2x/4x oversampling, and a 2x Drive option for more aggressive saturation when needed.
Output Safety Stage: A transparent final safety stage helps catch stacked blocks, heavy drive, and dense overlapping effects before they create unexpected output spikes.
Generative Probability: Use Block Probability to decide whether an entire block triggers, and Stutter Probability to control whether individual slices play, creating evolving rhythms from static loops.
Randomize and Humanize Tools: Quickly generate new ideas with randomize options for selected block settings and envelopes, plus humanize tools for subtle rate and probability variation.
Polyphonic Micro-Grain DSP: A 16-voice slice/grain engine with explicit voice allocation, click-conscious transitions, and stable voice stealing for dense overlapping blocks and fast stutter patterns.
Channel-Safe Automation: Volume, pan, filter, wet, and drive transitions are smoothed and audited to avoid block-entry leaks, center blips, and opposite-channel artifacts.
Global Playback Modes: Fine-tune the overall engine behavior with global Transition, Interpolation, Timing, and Mix modes for different editing styles and playback needs.
Visual Slice Feedback: Timeline blocks show slice shapes, gate behavior, probability markings, envelope overlays, reverse direction, and compact gate-mode badges for quick visual editing.
Built-In Preset Manager: Save, load, and browse XML-based stutter sequences across sessions and DAWs, including block settings, gate modes, envelopes, probability, global modes, and timeline data.
Support: For support or inquiries: NO.TYPE.NO.TAG.BEATS@gmail.com
FAQ
Does Stuttermation use MIDI triggering? No. Stuttermation is timeline-based. You draw blocks directly where you want the effect to happen.
Can each block have different settings? Yes. Every block can have its own rate, buffer length, envelopes, gate mode, reverse, halftime, drive, filter, pan, probability, and more.
Does it work on 808s and bass tails? Yes. The engine has been tuned for sustained low-end material, including 808 tails and bass notes.
Can I use it for glitch effects? Yes. Use rate morphing, pitch envelopes, buffer length, probability, reverse, drive, and filter envelopes to create glitch builds and rhythmic edits.
Read More
https://www.kvraudio.com/product/stuttermation-by-no-type-no-tag-beats
Inspired by Four Tet and Bonobo: Excite Audio unveils Bloom Drum Kits, the latest addition to its much-loved Bloom plugin series
Excite Audio has expanded its much-loved Bloom series of plugins and virtual instruments, this time foraying further into the world of live drums with Bloom Drum Kits.
Inspired by the raw drum sounds used by the likes of Four Tet, Bonobo, Nicolas Jaar and Geese, Bloom Drum Kits has been built using kits played "in the room", offering a "rough sense of motion and individuality" to your tracks.
Bloom Drum Kits offers up both raw, close-mic'd drums and "tape-worn, processed hits", with a collection of professionally played rhythms and one-shots spanning detuned toms, snare sounds, and so much more.
It sports a minimalist user interface similar to that featured on the rest of Excite Audio's Bloom line, and even allows producers to upload their own samples and play them using the Bloom Drum Kits interface.
The plugin arrives with 250 presets organised into eight different categories:
Basic (BA) – Straightforward drum beats built from each factory kit.
Experimental (EXP) – Abstract, offbeat presets with a more sound design-focused feel.
Kits (KIT) – Single hits and phrases created entirely from one-shots.
High Energy (HI) – Fast, busy, and more aggressive beats for added momentum.
Low Energy (LO) – Laid-back, minimal grooves for softer and more downtempo tracks.
Percussion (PC) – Rhythmic sequences built entirely from percussion loops and samples.
Processed (PRO) – Heavily treated beats featuring distortion, effects, and macro-driven movement.
Top Loops (TOP) – Snare and hi-hat loops for layering and groove.
Bloom Drum Kits is available now at an introductory price of just £19 / $19 until 31 May.
Learn more at Plugin Boutique.
musictech.com
- in the community space Tools and Plugins
iZotope RX 12 is here
RX 12 introduces two entirely new modules, Stems View and Scene Rebalance, and boasts a number of improvements to its existing tools thanks to some behind-the-scenes tweaks.
www.soundonsound.com
- in the community space Tools and Plugins
Expressive E launch the Osmose CE
The latest additions to the Osmose family deliver the same playing experience as the originals, but with a companion software suite rather than an internal synth engine.
www.soundonsound.com
- in the community space Tools and Plugins
Sentinel AV and Bladerunner release Folda, a 4-group morphing distortion plugin
Sentinel AV and UK drum-and-bass producer Bladerunner have released Folda, a 4-group morphing distortion plugin. Folda is available as VST3 for Windows and as VST3/AU for macOS (Universal Binary), with an intro price of $67 for the first seven days after launch (until May 2), regular price $99 thereafter. The main idea behind Folda is [...]
bedroomproducersblog.com
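The post above is truncated, so Folda's actual algorithm isn't described here. But the classic "wavefolding" family of distortion that the name evokes can be sketched in a few lines; this is purely illustrative, not Folda's code:

```python
import math

def wavefold(x: float, threshold: float = 1.0) -> float:
    """Reflect a signal back on itself whenever it exceeds +/- threshold,
    the basic move behind 'folding' distortion. Purely illustrative."""
    # Triangle-fold: map x into [-threshold, threshold] by repeated reflection.
    period = 4.0 * threshold
    x = (x + threshold) % period
    if x > 2.0 * threshold:
        x = period - x
    return x - threshold

# Driving a sine past the threshold folds its peaks back down, adding harmonics.
folded = [wavefold(2.0 * math.sin(2 * math.pi * n / 64)) for n in range(64)]
```

Unlike hard clipping, which flattens peaks, folding reflects them, which is why folded signals sound harmonically rich rather than simply squashed.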
Claude can now be plugged into Ableton to assist with your music projects
Claude – the AI assistant and chatbot from Anthropic – can now be directly plugged into Ableton, as well as a raft of other creative platforms, including Blender and Photoshop.
The move follows the launch of Claude Design, a new product by Anthropic Labs that lets you collaborate with Claude to create “polished visual work” like designs, one-pagers and more.
With the new set of connectors for Claude, the popular chatbot is able to plug into Ableton and act as an AI assistant within your music projects. Anthropic says the rollout comes via a "coalition of partners", which also includes Blender, Adobe (Photoshop and Premiere Pro) and Affinity by Canva.
Interestingly, Splice is also named in the list of brands integrating Claude into its products. It means producers can now search Splice’s catalogue of royalty-free samples directly within Claude.
According to a blog post on the Anthropic website, within these platforms, Claude can be used in a variety of ways. Users can ask Claude complex questions about the software, with the chatbot acting as a virtual tutor to help you better understand your workflow.
Elsewhere, Claude Code can write scripts, plugins, and generative systems for these platforms.
And perhaps most importantly for creatives, Claude can be used to take care of manual, repetitive tasks that get in the way of the creative process.
“Claude can’t replace taste or imagination, but it can open up new ways of working – faster and more ambitious ideation, a more expansive skillset, and the ability for creatives to take on larger-scale projects,” Anthropic says [via The Verge].
“AI can also help shoulder the parts of the creative process that eat up time by handling repetitive tasks and eliminating manual toil.”
Anthropic has also now become a Corporate Patron of the Blender Development Fund, helping the open-source platform stay free and allowing developers to "keep pursuing projects independently, and to focus on building tools for artists and creators". Anthropic will give Blender €240,000 every year.
musictech.com
- in the community space Education
MIT engineers' virtual violin produces realistic sounds
There is no question that violin-making is an art form. It requires a musician's ear, a craftsperson's skill, and an historian's appreciation of lessons learned over time. Making a violin also takes trust: Violin makers, or luthiers, often must wait until the instrument is finished before they can hear how all their hard work will sound.
But a new tool developed by MIT engineers could help luthiers play around with a violin's design and tweak its sound even before a single part is carved.
In a study appearing today in the journal npj Acoustics, the MIT team reports on a new "computational violin" — a computer simulation that captures the detailed physics of the instrument and realistically produces the sound of a violin when its strings are plucked.
While there are software programs and plug-ins that enable users to play around with virtual violins, their sounds are typically the result of sampling and averaging over thousands of notes played by actual violins. In contrast, the new computational violin takes a physics-based approach: It produces sound based on the way the instrument, including its vibrating strings, physically interacts with the surrounding air.
As a demonstration, the researchers applied the computational violin to play two short excerpts: one from Bach's Fugue in G Minor, and another from "Daisy Bell" — a nod to the first song that was ever produced by a computer-synthesized voice.
The computational violin currently simulates the sound of plucked strings — a type of playing that musicians know as "pizzicato." Violin bowing, the researchers say, is a much more complicated interaction to model. However, the computational violin represents the first physics-based foundation of a strung violin sound that could one day be paired with a model of bowing to produce realistic, bowed violin music.
For now, the team says the new virtual violin could be used in the initial stages of violin design. Luthiers can tweak certain parameters such as a violin's wood type or the thickness of its body, and then listen to the sound that the instrument would make in response.
"These days, people try to improve designs little by little by building a violin, comparing the sound, then making a change to the next instrument," says Yuming Liu, senior research scientist at MIT. "It's very slow and expensive. Now they can make a change virtually and see what the sound would be."
"We're not saying that we can reproduce the artisan's magic," adds Nicholas Makris, professor of mechanical engineering at MIT. "We're just trying to understand the physics of violin sound, and perhaps help luthiers in the design process."
Makris and Liu's MIT co-authors include Arun Krishnadas PhD '23 and former postdoc Bryce Campbell, along with Roman Barnas of the North Bennet Street School.
Sound matrix
The quality of a violin's sound is determined by its dimensions and design. The instrument is made from thoughtfully crafted parts and materials that all work to generate and amplify sound. In recent years, scientists have sought to understand what artisans have intuited for centuries, in terms of what specific parameters shape a violin's sound.
In one early effort in 2006, scientists, as part of the Strad3D project, put a rare Stradivarius violin through a CT scanner. The violin was crafted in 1715 by the master violinmaker Antonio Stradivari, during what is considered the "Golden Age" of violin making. To better understand the violin's anatomy and its relation to sound, the scientists scanned the instrument and produced 600 "slices," or views, of the violin.
The CT scans are available online for people to view and use as data for their own experiments. For their study, Makris and his colleagues first imported the CT scans into a solid modeling software program to generate a detailed three-dimensional model of the violin. They then ran a finite element simulation, essentially dividing the violin into millions of tiny individual cubes, or "elements."
For each cube, they noted its material type, such as if a cube from the violin's back plate is made from maple or spruce, or if a string is made from steel or natural fibers. They then applied physics-based equations of stress and motion to predict how each material element would move in relation to every other element across the instrument.
They also carried out a similar process for the air surrounding the violin, dividing up a roughly cubic-meter volume of air and applying acoustic wave equations to predict how each tiny parcel of air would move and contribute to generating sound.
"The entire thing is a matrix of millions of individual elements," explains Krishnadas. "And ultimately, you see this whole three-dimensional being, which is the violin and the air all connected and interacting with each other."
A plucky model
The team then simulated how the new computational violin would sound when plucked. When a violinist plucks a string, they pull the string sideways and let it go, causing the string to vibrate. These vibrations travel across the instrument and inside it; the air's vibrations are amplified as they travel out of the violin and into the surroundings, where a listener hears the vibrations as sound.
For their purposes, the engineers simulated a simple string pluck by directing one of the virtual violin's strings to stretch out and then rebound. The simulation computed all the resulting motions and vibrations of the millions of elements in the violin, and the sound that the pluck would produce.
For notes that require pressing down on a violin's fingerboard, they simulated the same plucking, and in addition, included a condition in which the string is held fixed in the section of the fingerboard where a violinist's finger would press down.
The researchers carried out this computational process to virtually pluck out the notes in several measures of "Daisy Bell" and Bach's Fugue in G Minor.
"If there's anything that's sounding mechanical to it, it's because we're using the exact same time function, or standard way of plucking, for each note," says Makris, who is himself a lute player. "A musician will adapt the way they're plucking, to put a little more feeling on certain notes than others. But there could be subtleties which we could incorporate and refine."
As it is, the new computational model is the first to generate realistic sound based on the laws of physics and acoustics. The researchers say that violin makers could use the model to test how a violin might sound when certain dimensions or properties are changed. For instance, when the researchers varied the thickness of the virtual violin's back plate or changed its wood type, they could hear clear differences in the resulting sounds.
"You can tweak the model, to hear the effect on the sound," Makris says. "Since everything obeys the laws of physics, including a violin and the music it makes, this approach can add an appreciation to what makes violin sound. But ultimately, we get most of our inspiration from the artisans."
This work was supported, in part, by an MIT Bose Research Fellowship.
news.mit.edu
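The MIT team's full finite-element model couples millions of elements with the surrounding air, far beyond a short snippet. But the physics-based idea at its core, simulating a plucked string from equations of motion rather than from recorded samples, can be illustrated with a 1-D finite-difference form of the ideal wave equation. All parameters here are toy values, not the paper's:

```python
import numpy as np

# Minimal 1-D plucked-string sketch (not the MIT finite-element model):
# finite-difference update of the wave equation y_tt = c^2 * y_xx,
# with pinned ends and a triangular "pluck" as the initial shape.
N = 100                      # spatial samples along the string
c, dx, dt = 1.0, 1.0, 0.5    # wave speed and grid steps (stable: c*dt/dx <= 1)
r2 = (c * dt / dx) ** 2

x = np.arange(N)
pluck_at = 25                # where the "finger" pulls the string sideways
y = np.where(x <= pluck_at, x / pluck_at, (N - 1 - x) / (N - 1 - pluck_at))
y_prev = y.copy()            # zero initial velocity: string released from rest

for _ in range(200):         # step the string forward in time
    y_next = np.zeros_like(y)
    y_next[1:-1] = (2 * y[1:-1] - y_prev[1:-1]
                    + r2 * (y[2:] - 2 * y[1:-1] + y[:-2]))
    y_prev, y = y, y_next    # ends stay pinned at zero displacement
```

Recording the displacement at one point over time would give the "sound" of this toy string; the MIT model does the analogous computation in 3-D for the whole instrument plus the air around it.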