Reactions

  • Ramera Abraham to host masterclass at Miloco Gear 2024

    Miloco have announced that the Miloco Gear 2024 Pro Audio Showcase will feature a masterclass delivered by vocal producer and recording engineer Ramera Abraham.

  • Schwabe Digital release Gold Clip Pack update

    Schwabe Digital have announced that their popular Gold Clip plug-in will now be known as Gold Clip Pack, and will include a new, free add-on plug-in called Gold Clip Track.

  • More music is released in a day in 2024 than in all of 1989 combined

    If you think that there’s too much music coming out and not enough time to listen to it all, it turns out that’s not an illusion. A new report has found that more music is released in a single day now than in the entirety of 1989.

    The finding was confirmed in the MusicRadar report by music business economist Will Page, the former Chief Economist of Spotify.
    “More music is being released today (in a single day) than was released in the calendar year of 1989,” Page explains. “More of that music is being done by artists themselves, meaning there’s even more demand for music production software.”
    While this finding does put into perspective just how saturated today’s musical climate is, it’s perhaps not surprising given how much music-making has been democratised in recent years by technological progress. It’s easier than ever to make and share high-quality music from your bedroom, even without label backing, and that same technology has brought down the cost of production – and with it, the industry’s barriers to entry.
    According to MIDiA’s “State of the Music Creator Economy” report, there were 75.9 million music creators globally in 2023, a 12 per cent increase from 2022. By 2030, that number is predicted to hit 198.2 million.
    This might be great for consumers, who get near-infinite choice of what to listen to for minimal cost on streaming services, but it consequently makes it harder for artists to stand out. Money remains an issue thanks to the minuscule payments streaming services make and the rising cost of music-making as more and more software, plugins and other tools switch to subscription-based models. The effect, according to the report, is that “fatigue and resentment [is] on the rise.”
    According to MIDiA, “a quarter of software, sound, and services revenues” in 2022 came from subscriptions, which is expected to increase to one-third by 2030.
    “Software companies have followed the lead set by Wall Street. Recurring revenue is very sexy right now,” says Steve Heithecker of Pyramind Institute in the report. “People often also forget they have the subs and then it’s a bit like free money for these companies when they auto renew.”

    A new report has found that more music is released in a typical day in 2024 than in the whole of 1989 combined.

  • Stephen Pearcy Discusses 40th Anniversary Reissue of Ratt's Out of the Cellar

    Stephen Pearcy discusses the 40th anniversary reissue of Ratt's Out of the Cellar, the musical importance of 1984, and why the surviving Ratt members haven't reunited recently.

    The early-to-mid ’80s were certainly a high point for hard rock and heavy metal – particularly when you take into account the sheer number of now-classic albums that were issued…

  • Donald Trump taps crypto advocate Lutnick as commerce secretary

    The future commerce secretary is a billionaire whose Wall Street firm has ties to Tether.

  • Win $1,000 and get your song pitched to K-pop artists by MNDR, Margo XS, and Jess Corazza

    We're excited to announce that we're partnering with MNDR, We Make Noise, and Soundtoys for our latest K-pop challenge.

  • PSA: You shouldn’t upload your medical images to AI chatbots

    Security and privacy advocates have long warned that sensitive medical data can be used to train AI models, and can expose personal data down the line.

  • Raspberry Pi Compute Module 5 Seen in the Wild

    Last Thursday we were at Electronica, which is billed as the world’s largest electronics trade show, and it probably is! It fills up twenty airplane-hangar-sized halls in Munich, and only takes place every two years.
    And what did we see on the wall in the Raspberry Pi department? One of the relatively new AI-enabled cameras running a real-time pose estimation demo, powered by nothing less than a brand-new Raspberry Pi Compute Module 5. And it seemed happy to be running without a heatsink, but we don’t know how much load it was put under – most of the AI processing is done in the camera module.
    We haven’t heard anything about the CM5 yet from the Raspberry folks, but we can’t imagine there’s all that much to say except that they’re getting ready to start production soon. The test board looks very similar to the RP4 CM demo board, so we imagine that the footprint hasn’t changed. If you look really carefully, this one seems to have mouse bites on it that haven’t been ground off, so we’re speculating that this is still a pre-production unit, but feel free to generate wild rumors in the comment section.
    The CM4 was a real change for the compute module series, coming with a brand-new pinout that enabled them to break out more PCIe lanes. Despite the special connectors, it wasn’t all that hard to work with if you’re dedicated. So if you need more computing power in that smaller form factor, we’re guessing that you won’t have to wait all that much longer!
    Thanks [kuro] for the tip, and for walking around Electronica with me.


  • A model of virtuosity

    A crowd gathered at the MIT Media Lab in September for a concert by musician Jordan Rudess and two collaborators. One of them, violinist and vocalist Camilla Bäckman, has performed with Rudess before. The other — an artificial intelligence model informally dubbed the jam_bot, which Rudess developed with an MIT team over the preceding several months — was making its public debut as a work in progress.
    Throughout the show, Rudess and Bäckman exchanged the signals and smiles of experienced musicians finding a groove together. Rudess’ interactions with the jam_bot suggested a different and unfamiliar kind of exchange. During one duet inspired by Bach, Rudess alternated between playing a few measures and allowing the AI to continue the music in a similar baroque style. Each time the model took its turn, a range of expressions moved across Rudess’ face: bemusement, concentration, curiosity. At the end of the piece, Rudess admitted to the audience, “That is a combination of a whole lot of fun and really, really challenging.”
    Rudess is an acclaimed keyboardist — the best of all time, according to one Music Radar magazine poll — known for his work with the platinum-selling, Grammy-winning progressive metal band Dream Theater, which embarks this fall on a 40th anniversary tour. He is also a solo artist whose latest album, “Permission to Fly,” was released on Sept. 6; an educator who shares his skills through detailed online tutorials; and the founder of software company Wizdom Music. His work combines a rigorous classical foundation (he began his piano studies at The Juilliard School at age 9) with a genius for improvisation and an appetite for experimentation.
    Last spring, Rudess became a visiting artist with the MIT Center for Art, Science and Technology (CAST), collaborating with the MIT Media Lab’s Responsive Environments research group on the creation of new AI-powered music technology. Rudess’ main collaborators in the enterprise are Media Lab graduate students Lancelot Blanchard, who researches musical applications of generative AI (informed by his own studies in classical piano), and Perry Naseck, an artist and engineer specializing in interactive, kinetic, light- and time-based media. Overseeing the project is Professor Joseph Paradiso, head of the Responsive Environments group and a longtime Rudess fan. Paradiso arrived at the Media Lab in 1994 with a CV in physics and engineering and a sideline designing and building synthesizers to explore his avant-garde musical tastes. His group has a tradition of investigating musical frontiers through novel user interfaces, sensor networks, and unconventional datasets.
    The researchers set out to develop a machine learning model channeling Rudess’ distinctive musical style and technique. In a paper published online by MIT Press in September, co-authored with MIT music technology professor Eran Egozy, they articulate their vision for what they call “symbiotic virtuosity”: for human and computer to duet in real time, learning from each duet they perform together, and making performance-worthy new music in front of a live audience.
    Rudess contributed the data on which Blanchard trained the AI model. Rudess also provided continuous testing and feedback, while Naseck experimented with ways of visualizing the technology for the audience.
    “Audiences are used to seeing lighting, graphics, and scenic elements at many concerts, so we needed a platform to allow the AI to build its own relationship with the audience,” Naseck says. In early demos, this took the form of a sculptural installation with illumination that shifted each time the AI changed chords. During the concert on Sept. 21, a grid of petal-shaped panels mounted behind Rudess came to life through choreography based on the activity and future generation of the AI model.
    “If you see jazz musicians make eye contact and nod at each other, that gives anticipation to the audience of what’s going to happen,” says Naseck. “The AI is effectively generating sheet music and then playing it. How do we show what’s coming next and communicate that?”
    Naseck designed and programmed the structure from scratch at the Media Lab with assistance from Brian Mayton (mechanical design) and Carlo Mandolini (fabrication), drawing some of its movements from an experimental machine learning model developed by visiting student Madhav Lavakare that maps music to points moving in space. With the ability to spin and tilt its petals at speeds ranging from subtle to dramatic, the kinetic sculpture distinguished the AI’s contributions during the concert from those of the human performers, while conveying the emotion and energy of its output: swaying gently when Rudess took the lead, for example, or furling and unfurling like a blossom as the AI model generated stately chords for an improvised adagio. The latter was one of Naseck’s favorite moments of the show.
    “At the end, Jordan and Camilla left the stage and allowed the AI to fully explore its own direction,” he recalls. “The sculpture made this moment very powerful — it allowed the stage to remain animated and intensified the grandiose nature of the chords the AI played. The audience was clearly captivated by this part, sitting at the edges of their seats.”
    “The goal is to create a musical visual experience,” says Rudess, “to show what’s possible and to up the game.”
    Musical futures
    As the starting point for his model, Blanchard used a music transformer, an open-source neural network architecture developed by MIT Assistant Professor Anna Huang SM ’08, who joined the MIT faculty in September.
    “Music transformers work in a similar way as large language models,” Blanchard explains. “The same way that ChatGPT would generate the most probable next word, the model we have would predict the most probable next notes.”
    Blanchard fine-tuned the model using Rudess’ own playing of elements from bass lines to chords to melodies, variations of which Rudess recorded in his New York studio. Along the way, Blanchard ensured the AI would be nimble enough to respond in real time to Rudess’ improvisations.
    “We reframed the project,” says Blanchard, “in terms of musical futures that were hypothesized by the model and that were only being realized at the moment based on what Jordan was deciding.”
    As Rudess puts it: “How can the AI respond — how can I have a dialogue with it? That’s the cutting-edge part of what we’re doing.”
    Another priority emerged: “In the field of generative AI and music, you hear about startups like Suno or Udio that are able to generate music based on text prompts. Those are very interesting, but they lack controllability,” says Blanchard. “It was important for Jordan to be able to anticipate what was going to happen. If he could see the AI was going to make a decision he didn’t want, he could restart the generation or have a kill switch so that he can take control again.”
    In addition to giving Rudess a screen previewing the musical decisions of the model, Blanchard built in different modalities the musician could activate as he plays — prompting the AI to generate chords or lead melodies, for example, or initiating a call-and-response pattern.
    “Jordan is the mastermind of everything that’s happening,” he says.
    What would Jordan do
    Though the residency has wrapped up, the collaborators see many paths for continuing the research. For example, Naseck would like to experiment with more ways Rudess could interact directly with his installation, through features like capacitive sensing. “We hope in the future we’ll be able to work with more of his subtle motions and posture,” Naseck says.
    While the MIT collaboration focused on how Rudess can use the tool to augment his own performances, it’s easy to imagine other applications. Paradiso recalls an early encounter with the tech: “I played a chord sequence, and Jordan’s model was generating the leads. It was like having a musical ‘bee’ of Jordan Rudess buzzing around the melodic foundation I was laying down, doing something like Jordan would do, but subject to the simple progression I was playing,” he recalls, his face echoing the delight he felt at the time. “You’re going to see AI plugins for your favorite musician that you can bring into your own compositions, with some knobs that let you control the particulars,” he posits. “It’s that kind of world we’re opening up with this.”
    Rudess is also keen to explore educational uses. Because the samples he recorded to train the model were similar to ear-training exercises he’s used with students, he thinks the model itself could someday be used for teaching. “This work has legs beyond just entertainment value,” he says.
    The foray into artificial intelligence is a natural progression for Rudess’ interest in music technology. “This is the next step,” he believes. When he discusses the work with fellow musicians, however, his enthusiasm for AI often meets with resistance. “I can have sympathy or compassion for a musician who feels threatened, I totally get that,” he allows. “But my mission is to be one of the people who moves this technology toward positive things.”
    “At the Media Lab, it’s so important to think about how AI and humans come together for the benefit of all,” says Paradiso. “How is AI going to lift us all up? Ideally it will do what so many technologies have done — bring us into another vista where we’re more enabled.”
    “Jordan is ahead of the pack,” Paradiso adds. “Once it’s established with him, people will follow.”
    Jamming with MIT
    The Media Lab first landed on Rudess’ radar before his residency because he wanted to try out the Knitted Keyboard created by another member of Responsive Environments, textile researcher Irmandy Wicaksono PhD ’24. From that moment on, “It’s been a discovery for me, learning about the cool things that are going on at MIT in the music world,” Rudess says.
    During two visits to Cambridge last spring (assisted by his wife, theater and music producer Danielle Rudess), Rudess reviewed final projects in Paradiso’s course on electronic music controllers, the syllabus for which included videos of his own past performances. He brought a new gesture-driven synthesizer called Osmose to a class on interactive music systems taught by Egozy, whose credits include the co-creation of the video game “Guitar Hero.” Rudess also provided tips on improvisation to a composition class; played GeoShred, a touchscreen musical instrument he co-created with Stanford University researchers, with student musicians in the MIT Laptop Ensemble and Arts Scholars program; and experienced immersive audio in the MIT Spatial Sound Lab. During his most recent trip to campus in September, he taught a masterclass for pianists in MIT’s Emerson/Harris Program, which provides a total of 67 scholars and fellows with support for conservatory-level musical instruction.
    “I get a kind of rush whenever I come to the university,” Rudess says. “I feel the sense that, wow, all of my musical ideas and inspiration and interests have come together in this really cool way.”

    Acclaimed keyboardist Jordan Rudess’s collaboration with the MIT Media Lab culminated in a live improvisation between an AI "jam_bot" and the artist.

  • PRS Guitars Announces Kanami Limited Edition Custom 24-08

    PRS Guitars today announced the Kanami Limited Edition Custom 24-08. This is the first signature model for guitarist Kanami Tono of Japan’s hard-rock band BAND-MAID. Kanami is BAND-MAID’s lead guitarist and has been playing PRS for more than a decade. This limited edition is based on her most recent PRS Custom 24-08 guitar and, most notably, boasts PRS’s 85/15 humbuckers and PRS Brushstroke birds.
    “PRS has always been an essential part of BAND-MAID’s sound ever since I started playing the Custom 24,” said Kanami Tono. “When thinking about the specs of my signature model, the control layout of the Custom 24-08 was the best choice as a base, because many BAND-MAID songs require the high E note on the 24th fret, and I often switch between humbucker and single-coil sounds. As for the color, I chose Trampas Green Burst. This is the color of my first PRS, which I bought when I decided to become a professional guitarist with BAND-MAID. One of my dreams, to have my own signature model, has come true here, and I would say this guitar is an extension of my first PRS. What an honor it is!”
    The PRS Kanami Limited Edition was soft-launched at this weekend’s PRS Guitars & American Vintage Guitar Show in Shibuya, Tokyo. Only 200 of these instruments will be made. Each guitar is hand-signed by Kanami on the guitar’s backplate.
    For complete specifications, video, and more, please visit https://prsguitars.com/ and follow @prsguitars on Instagram, YouTube, Facebook, TikTok, and X to stay in the conversation.

  • Court asked to review Ed Sheeran ‘Thinking Out Loud’ legal victory

    The owner of part of the rights to Marvin Gaye’s Let’s Get It On is arguing in front of an appeals court to reinstate a case against Ed Sheeran.

  • Supercon 2024 SAO Petal KiCad Redrawing Project

    Last week I completed the SAO flower badge redrawing task, making a complete KiCad project. Most of the SAO petals are already released as KiCad projects, except for the Petal Matrix. The design features 56 LEDs arranged in eight spiral arms radiating from the center. What it does not feature are straight lines, right angles, or parts placed on a regular grid.
    Importing into KiCad
    Circuit notes for the LEDs, thanks to [sphereinabox]
    I followed the same procedures as the main flower badge with no major hiccups. This design didn’t have any released schematics, but backing out the circuits was straightforward. It also helped that user [sphereinabox] over on the Hackaday Discord server had rung out the LED matrix connections and gave me his notes.
    Grep Those Positions
    I first wanted to only read the data from the LEDs for analysis, and I didn’t need the full KiCad + Python scripting for that. Using grep on the PCB file, you get a text file that can be easily parsed to get the numbers. I confirmed that the LED placements were truly as irregular as they looked.
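    For illustration, a parsing pass along these lines recovers the placement numbers (a sketch, not the actual script; the file name is hypothetical, and pads carry their own footprint-relative “(at …)” entries, so a real pass would first narrow down to the footprint placements):

      import re

      # Placements in a .kicad_pcb file are s-expressions like "(at X Y [angle])".
      AT_RE = re.compile(r"\(at (-?[\d.]+) (-?[\d.]+)(?: (-?[\d.]+))?\)")

      led_positions = []
      with open("petal_matrix.kicad_pcb") as f:   # hypothetical file name
          for line in f:
              m = AT_RE.search(line)
              if m:
                  led_positions.append((float(m.group(1)), float(m.group(2)),
                                        float(m.group(3) or 0.0)))
      print(len(led_positions), "placements found")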
    My biggest worry was how to obtain and re-apply the positions and angles of the LEDs, given the irregular layout of the spiral arms. Just like the random angles of the six SAO connectors on the badge board, [Voja] doesn’t disappoint on this board, either. I fired up Python and used Matplotlib to get a visual perspective on the randomness of the placements, as one does. Due to the overall shape of the arms, there is a general trend to the numbers, but no obvious equation is discernible.
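    That visual check is only a few lines of Matplotlib (a sketch, assuming the led_positions list from the parsing snippet above):

      import matplotlib.pyplot as plt

      xs, ys, angles = zip(*led_positions)   # from the parsing sketch above
      plt.scatter(xs, ys, s=20)
      plt.gca().invert_yaxis()               # KiCad's Y axis grows downward
      plt.gca().set_aspect("equal")          # keep the spiral shape undistorted
      plt.title("Petal Matrix LED placements")
      plt.show()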

    It was obvious that I needed a script of some sort to place 56 new KiCad LED footprints on the board. (Spoiler: I was wrong.) Theoretically, I could have processed the PCB text file with bash or Python, creating a modified file. Since I only needed to change a few numbers, this wasn’t completely out of the question. But that is inelegant. It was time to get familiar with the KiCad + Python scripting capabilities. I dug in with gusto, but came away baffled.
    KiCad’s Python Console to the Rescue — NOT
    This being a one-time task for one specific PCB, writing a KiCad plugin didn’t seem appropriate. Instead, hacking around in the KiCad Python console looked like the way to go. But it didn’t work well for quick experimenting. You open the KiCad Python console within the PCB editor, but when the console boots up, it doesn’t know anything about the currently loaded PCB. You need to import the KiCad Python interface library, and then open the PCB file. Also, the current state of the Python REPL and the command history are not maintained between restarts of KiCad. I don’t see any advantages to using the built-in Python console over just running a script in your usual Python environment.
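    The bootstrapping ritual looks roughly like this (a sketch against the KiCad 8 SWIG bindings; the file name is hypothetical, and none of this state survives a KiCad restart):

      import pcbnew   # ships with KiCad's bundled Python, not via pip

      board = pcbnew.LoadBoard("petal_matrix.kicad_pcb")
      for fp in board.GetFootprints():
          pos = fp.GetPosition()             # internal units (nanometres)
          print(fp.GetReference(),
                pcbnew.ToMM(pos.x), pcbnew.ToMM(pos.y),
                fp.GetOrientationDegrees())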
    Clearly there is a use case for this console. By all appearances, a lot of effort has gone into building up this capability. It appears to be full of features that must be valuable to some users and/or developers. Perhaps I should have stuck with it longer and figured it out.
    KiCad Python Script Outside KiCad
    This seemed like the perfect solution. The buzz in the community is that modern KiCad versions interface very well with Python. I’ve also been impressed with the improved KiCad project documentation in recent years. “This is going to be easy,” I thought.
    First thing to note: the KiCad v8 interface library works only with Python 3.9. I run pyenv on my computers and already have 3.9 installed — check. However, you cannot just do a pip install kicad-something-or-other... to get the KiCad Python interface library. These libraries come bundled within the KiCad distribution. Furthermore, they only work with a custom-built version of Python 3.9 that is also included in the bundle. While I haven’t encountered this situation before, I figured out that you can make pyenv point to a Python that has been installed outside of pyenv. But before I got that working, I made another discovery.
    The Python API is not “officially” supported. KiCad has announced that the current Simplified Wrapper and Interface Generator (SWIG)-based Python bindings are slated to be deprecated, to be replaced by Inter-Process Communication (IPC)-based bindings in Feb 2026. This tidbit of news coincided with learning of a similar third-party library.
    Introducing KiUtils
    Many people were asking questions about including external pip-installed modules from within the KiCad Python console. This confounded my search results, until I hit upon someone using the KiUtils package to solve the same problem I was having. Armed with this tool, I was up and running in no time. To be fair, I suspect KiUtils may also break when KiCad switches from the SWIG to the IPC interface, but KiUtils was so much easier to get up and running that I stuck with it.
    I wrote a Python script to extract all the information I needed for the LEDs. The next step was to apply those values to the 56 new KiCad LED footprints to place each one in the correct position and orientation. As I searched for an example of writing a PCB file from KiUtils, I saw issue #113, “Broken as of KiCAD 8?”, on the KiUtils GitHub repository. It looks like KiUtils is already broken for v8 files: while I was able to read data from my v8 PCB file, it is reported that KiCad v8 cannot read files written by KiUtils.
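    A minimal sketch of that kind of KiUtils extraction — the read side, which did work — with the “LED” library-ID filter and the file name as illustrative assumptions rather than the actual script:

      from kiutils.board import Board   # pip install kiutils

      board = Board.from_file("petal_matrix.kicad_pcb")
      for fp in board.footprints:
          if "LED" in fp.libId:          # keep just the 56 LED footprints
              p = fp.position            # Position object: X, Y, optional angle
              print(f"{fp.libId}  x={p.X}  y={p.Y}  angle={p.angle}")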
    Scripting Not Needed — DOH
    At a dead end, I was about to hand-place all the LEDs manually when I realized I could do it from inside KiCad. My excursions into KiCad and Python scripting were all for naught. The LED footprints had been imported from Altium Circuit Maker as one single footprint per LED (as opposed to some parts, which convert as one footprint per pad). This single realization made the problem trivial: I just needed to update the footprints from the library. While this did require a few attempts to get the cathodes and anodes sorted out, it was basically solved with a single mouse click.
    Those Freehand Traces
    The imported traces on this PCB were harder to clean up than those on the badge board. There were a lot of discontinuities in track segments. These artifacts would work fine if you made a real PCB, but because some segment endpoints don’t precisely line up, KiCad doesn’t know they belong to the same net. Here is how these were fixed:

    Curved segment endpoints can’t be dragged like a straight line segment can. Solutions:

    If the next track is a straight line, drag the line to connect to the curved segment.
    If the next track is also a curve, manually route a very short track between the two endpoints.

    If you route a track broadside into a curved track, it will usually not connect as far as KiCad is concerned. The solution is to break the curved track at the desired intersection, and those endpoints will accept a connection.
    Some end segments were not connected to a pad. These were fixed by either dragging or routing a short trace.

    Applying these rules over and over again, I finally cleared all the discontinuities. Frustratingly, the algorithm to do this task already exists in a KiCad function: Tools -> Cleanup Graphics... -> Fix Discontinuities in Board Outline, with an accompanying tolerance field specified as a length in millimeters. But this operation, as its name notes, is restricted to lines on the Edge.Cuts layer.
    PCB vs Picture
    Detail of Test Pad Differences
    When I was all done, I noticed a detail in the photo of the Petal Matrix PCB assembly from the Hackaday reveal article. That board (sitting on a rock) has six debugging / expansion test points connected to the six pins of the SAO connector. But in the Altium Circuit Maker PCB design, there are only two pads, A and B. These connect to the two auxiliary input pins of the AS1115 chip. I don’t know which is correct. (Editor’s note: they were just there for debugging.) If you use this project to build one of these boards, edit it according to your needs.
    Conclusion
    The SAO Petal Matrix redrawn KiCad project can be found over at this GitHub repository. It isn’t easy to work backwards in KiCad from the PCB to the schematic. I certainly wouldn’t want to reverse engineer a 9U VME board this way. But for many smaller projects, it isn’t an unreasonable task, either. You can also use much simpler tools to get the job done. Earlier this year over on Hackaday.io, user [Skyhawkson] did a great job backing out schematics from an Apollo-era PCB with Microsoft Paint 3D — a tool released in 2017 and just discontinued last week.


  • Lucy Huang appointed Chief Technology & Product Officer at TuneCore

    Exec will be responsible for the company’s development and strategic direction of new technology and products…

  • The best Black Friday music technology deals 2024: the biggest savings on synths, DJ controllers and plugins right now

    You don’t need to wait until the big day itself to make the most of some of the best Black Friday music technology deals. Black Friday and Cyber Weekend are turning into something of a month-long affair, and all the biggest musical instrument retailers have already begun slashing prices on hundreds of products. So if you’re in the market for a new synth, MIDI controller or plugin bundle, look no further than our guide right here.
    So when exactly is Black Friday? It normally falls on the Friday following Thanksgiving in the US, so this year Black Friday lands on 29 November. While we’re already seeing deals trickle in, the majority of savings arrive on Black Friday itself and over the weekend that follows, known as Cyber Weekend.
    And where can you find savings already? All the biggest retailers are offering discounts, including Thomann, Amazon, Guitar Center and Reverb, and if you’re a plugin enthusiast, Plugin Boutique is hosting some of its best deals of the year.
    The team here at MusicTech are already on the prowl searching for the very best Black Friday music deals to save you scouring the web yourself. We’ve included a list of the best deals below, but to save you some time, here are just some of the places we’ll be searching for savings:

    UK/EU Deals: Thomann, Amazon UK, Reverb, PMT
    US Deals: Guitar Center, Amazon US, Reverb, Zzounds

    Editor’s picks – plugins
    FabFilter plugins are some of the best in the biz, with intuitive user interfaces and all the features you need to get your mixes sounding top notch. And you can get up to 25% off select plugins right now at Plugin Boutique.
    Total Studio 4 MAX is IK Multimedia’s all-in-one production suite, with 170 of IK’s most powerful music creation titles, 66 plugins covering the entire creative process, over 19,000 authentic-sounding instruments, over 1,400 creative and studio effects and so much more.
    If you’re eager to expand your mixing and mastering tool belt, look no further. Plugin Boutique has just announced a huge flash sale on Waves plugin bundles – and you could save thousands.
    Editor’s picks – synths
    The Arturia PolyBrute Noir is a powerhouse synth which usually retails at $2,699 – but ahead of Black Friday, you can grab it with $1,200 off, meaning you pay just $1,499.
    At just $299, the Arturia Microfreak features both wavetable and digital oscillators, analogue filters, a poly-aftertouch flat “touch plate” keyboard and loads more.
    Legendary synthesis, radically re-imagined. Organic, inspirational, and mind-blowingly powerful. Only $349.
    Editor’s picks – MIDI controllers
    Push is a hardware instrument that seamlessly integrates with Ableton Live to give you a hands-on and expressive music-making experience. And in this killer B-stock bargain, you can grab one for just $499 (they’re usually $1,248).
    The Arturia KeyStep is a dream come true for musicians looking to unite their hardware and software. Get yours for just $75.

    You don’t need to wait until the big day itself to make the most of some of the best Black Friday music technology deals.

  • How To (Officially) Report Shady Spotify Playlists

    With an estimated 100,000 songs being uploaded to Spotify every day, it’s harder than ever to break through the (literal) noise. And now that AI companies are flooding the DSPs with their so-called “music,” it’s getting even harder to find an audience.