PublMe bot's Reactions

  • How Musicians Can Leverage New In-App Spotify Group Chats
    Word-of-mouth has always been the gold standard for music discovery. Spotify is leaning hard into that reality with its latest update. The streamer expanded its "Messages" feature to include Spotify Group Chats. Now, users can share and discuss music with up to 10 people directly within the app.
    The post How Musicians Can Leverage New In-App Spotify Group Chats appeared first on Hypebot.

    Learn how Spotify Group Chats enhance music discovery by allowing users to share tracks and playlists in dedicated groups.

  • What Luminate Report Reveals About Fan Engagement in 2026
    Engaging fans - true (aka super) fans - is key to the success of any artist or release. A new report by the music analysts at Luminate looks at the state of fan engagement in 2026.
    The post What Luminate Report Reveals About Fan Engagement in 2026 appeared first on Hypebot.

    Explore fan engagement in 2026, the Luminate Report, and how casual listeners become true superfans through meaningful interactions.

  • Steinberg release WaveLab 13
    The latest version of Steinberg's popular editing and mastering software introduces a whole host of new features, many of which centre around the suite’s Dolby Atmos capabilities.


  • Deezer plans to license its AI-detection tool to other companies – after using it to demonetise 85% of AI music on its platform
    Streaming services are under increasing pressure to clamp down on AI-generated music, with many musicians and industry professionals saying it dilutes royalty pools – meaning less money in the pockets of human artists – and muddies real human creativity.
    French service Deezer has been a frontrunner in dealing with AI music – and its consequences – on its platform. As far back as 2023, the company made its intention to “detect and delete” AI-made music clear. In June last year, Deezer unveiled a new AI content tagging system which filters such content out of royalty payments and blocks it from showing up in editorial playlists. Despite this, in September, the platform revealed up to a third of music uploaded every day was fully AI-generated.

    READ MORE: Native Instruments CEO updates users: “Business continues as usual at Native Instruments, iZotope, Plugin Alliance and Brainworx”

    Back then, that third of uploaded music accounted for about 30,000 tracks – all AI-generated. Now, Deezer says, that figure has doubled: around 60,000 new AI tracks are uploaded to the platform every day, accounting for around 39% of daily uploads (per Mixmag).
    But the platform’s fight against AI music continues; it says over 13.4 million AI tracks have been tagged using its system launched in June. Up to 85% of those have been marked as “fraudulent”, and subsequently “removed” from the royalty pool.
    “Music generated entirely by AI has become nearly indistinguishable from human creation, and with a continuous flood of uploads to streaming platforms, our approach remains crystal clear: transparency for fans and protecting the rights of artists and songwriters,” says Deezer CEO Alexis Lanternier.
    “Every fraudulent stream that we detect is demonetised so that the royalties of human artists, songwriters and other rights owners are not affected.”
    Deezer is also taking things a step further, with plans to license the technology behind its AI tagging system to other brands and platforms.
    “We’ve seen a great interest in both our approach and our tool, and we have already performed successful tests with industry leaders, including Sacem,” Lanternier adds. “From now on, we are licensing the tech to make it widely available.”
    You can learn more about the latest findings, plus the platform’s AI tagging tool, at Deezer.
    The post Deezer plans to license its AI-detection tool to other companies – after using it to demonetise 85% of AI music on its platform appeared first on MusicTech.

    “We’ve seen a great interest in both our approach and our tool,” says CEO Alexis Lanternier, as Deezer plans to license its AI flagging system.

  • Why the music industry needs to learn to live with AI
    It has been some time since I last posted here. Most of my blog activity now takes place over at MIDiA Research https://www.midiaresearch.com/blog and in the MIDiA newsletter (including the newsletter-only ‘Letter from the MD’). Follow me there and on LinkedIn https://www.linkedin.com/in/markmulligan/ for regular updates and posts. Now onto today’s Music Industry Blog post. It’s a controversial one, so hold onto your hats…

    If life is a party, AI gate crashed it in 2025. With financial losses rising even more quickly than critical voices, AI will not find things quite so easy in 2026. You don’t have to look very far to find alarm bells being rung. Deutsche Bank said of OpenAI’s $143 billion cumulative negative cash flow, “No startup in history has operated with losses on anything approaching this scale” (per Adweek). Meanwhile, at the World Economic Forum, Microsoft’s Satya Nadella said that AI must “do something useful” or lose its “social permission” for the vast quantities of electricity it requires. So much of the financial system is vested in AI’s success that a bubble burst akin to the dot-com era is possible. However, with an MIT report claiming 95% of businesses are getting “zero return” from AI investments, something is going to have to change.

    This is the state of AI at the start of 2026 – but it is not the state of music AI. Music is emerging as a case study of where AI is actually delivering (and getting better by the day). This means that everyone in the music industry needs to start thinking about how to co-exist with AI, whether they like it or not.

    The impact of generative AI on music creation

    The music creator economy may be the canary in the coal mine for AI’s impact on music. Leading company Native Instruments just announced that it is entering preliminary insolvency (per Music Radar). Native Instruments makes beautiful software, hardware, and sounds that appeal most to established, successful music creators – creators who have spent years honing their craft. What it doesn’t do so well is cater for the emerging generation of younger creators who want to go from 0 to 100 in a millisecond.

    This new breed of creators want making good music to be as easy as taking good photos and videos on their phones. A growing number see making music as personal entertainment rather than chasing dreams of multi-platinum success. It is a dynamic we explore in our brand new report: Music creator survey | Creation: Rise of the new breed.

    AI did not create this dynamic, but it did supercharge it. If music software democratised the means of production, AI has set it free. Thom Yorke sang “anyone can play guitar”, but anyone who has tried (as I have done since I was five) will tell you that you have to spend a lot of time being bad before you are good. This is the case with all instruments. Gen AI, however, takes away being-bad-to-be-good. Anyone can write a text prompt.

    Now, is a single line of text ‘creation’? I’d personally say ‘no’, but those doing it will likely think ‘yes’. It is a similar question to whether an unmade bed installation in a gallery is art. Does that text prompt become creative if it is a deeply considered paragraph of text defining melodic feel, lyrical content, instrumentation and arrangement? If so, what is the word-count cut-off between being creative and not?

    Is entering a text prompt ever going to be creative in the same way as sitting down at a piano and writing a song? No. But neither is opening a DAW, building a track from samples and typing in MIDI notes. Does that make electronic music not creative? (And before you answer, I know there are still plenty of people out there who would say electronic music is not ‘actual’ music!)

    We should expect gen AI music to develop and become more sophisticated, as all consumer apps do over time. But whereas most consumer apps improve convenience and reduce friction, gen AI music will likely go in the opposite direction. It started as zero friction, but make music creation too easy and the creative satisfaction soon wears thin. Creative friction is part of what makes music making so important to people. And, from a cynical perspective, the longer it takes to make music, the more time is spent on an app.

    Regardless of whether current gen AI is creation or not, the result is a whole new wave of people making music – and the number paying to do so is rising rapidly. In 2025, gen AI music users were already 10% of all music creators, and the number paying to create with AI doubled. Meanwhile, the number of people buying traditional music software fell in both 2024 and 2025, as did revenues. This indicates that not only are new creators flowing in; established creators are shifting activity and spend to AI too.

    One of the reasons is that gen AI music is improving. While licensing disputes roll on, gen AI has learned from the best chord progressions, vocal performances, arrangements, etc., that music has to offer and – crucially – what consumers do with that. The constraint on quality was always going to be computation technique, not innate capability. 

    Industry stakeholders can make the AI slop argument, and music critics can claim that they can identify even the best AI songs as not being made by humans. But that misses the point. AI is for the masses, both on the creation side and the consumption side. 

    Tracks on Suno can sound convincing enough to the average listener. AI artists like Sienna Rose command millions of Spotify listeners, while earlier this month ‘Jag vet, du är inte min’ hit the top of the Swedish charts only to be banned for being AI (per the BBC). AI is not going to replace human content, but it will increasingly displace it. 

    AI is here to stay in music

    The music industry needs to learn not just where AI fits in it, but where it fits in AI. This requires work from the industry, such as creating ‘lanes’ for AI as we argued in our Future of music streaming report. However, it also requires artists to put in work too. 

    Last year, YouTube-first music creator Mary Spender laid bare the challenge: 

    “First it was about gigs and selling CDs, then it was streams, then it was about content, now it is something else entirely.”

    Her solution? To use her YouTube channel as her ‘proof of work’, the thing that communicates the humanness of her music. As this piece from It’s Nice That lays out, this is an approach being pursued throughout the creative industries.

    Gen AI music enters 2026 off the back of two years of hockey-stick growth. The coming 12 months will likely be more of the same. None of this is to suggest that creators and rightsholders should simply sit back and let unlicensed activity continue unabated – those battles still need to be fought. But, just as happened with music piracy, consumer behaviour is accelerating regardless.

    Some rightsholders are already leaning into AI’s capabilities – as explained by UMG’s Jon Dworkin at MusicAlly’s great Connect conference. Others are resisting with every effort they can muster. Neither approach is more right or wrong than the other. Part of carving out a role is deciding whether you want to be part of or apart from. Whatever your choice, music AI is not going away – at least not anytime soon.

    Gen AI music is going to get bigger before (if) it gets smaller. Legislation isn’t going to be fast enough to stop this near-term surge, and until it catches up, everyone in the industry needs to work out what they want to do in that time: to be ‘part of’ or ‘apart from’. Doing nothing and hoping for it to go away is no longer an option. And whether AI stays or goes, it has catalysed the consumerisation of creation. That genie is out of the bottle. The implications for music listening are clear: the more time people spend making music, the less they spend listening to it. Whether the music they make finds an audience is almost beside the point. As I wrote about consumer AI music back in 2023: the music industry should worry less about the song with 1 million streams and more about the 1 million songs with 1 stream.


  • Bitfinex Bitcoin longs hit 2-year high: Is a rally to $100K possible?
    Bitcoin margin longs at Bitfinex exchange reached a 2-year high prior to stocks and crypto selling off sharply. Should traders expect a rally or the correction to continue?

    Crypto traders may interpret a 2-year record high in Bitfinex margin longs as bullish, but data suggests complex arbitrage and other trading strategies are at play.

  • Elon Musk’s SpaceX, Tesla, and xAI in talks to merge, according to reports
    This merger would bring the Grok chatbot, Starlink satellites, and SpaceX rockets together under one corporation.


  • NAMM 2026: HEDD Audio Type 20 A-Core
    The latest addition to HEDD Audio’s all-analogue line-up delivers a powerful three-way design that extends the company’s acclaimed A-Core range into larger professional studios and mastering environments.


  • Ultimate Patches: MOOG Messenger ULTIMATE PATCHES
    The new Moog Messenger presets / sound pack with 200 next-level presets.
    Free taster pack: https://www.ultimatepatches.com/free-synth-presets-patches.html
    Hear the demo: https://www.youtube.com/watch?v=SI6Qmc0h5ew

  • Handheld Steering Wheel Controller Gets Force-Feedback
    For a full-fledged, bells-and-whistles driving simulator a number of unique human interface devices are needed, from pedals and shifters to the steering wheel. These steering wheels often have force feedback, with a small motor inside that can provide resistance to a user’s input that feels the same way that a steering wheel on a real car would. Inexpensive or small joysticks often omit this feature, but [Jason] has figured out a way to bring this to even the smallest game controllers.
    The mechanism at the center of his controller is a DC motor out of an inkjet printer. Inkjet printers have a lot of these motors paired with rotary encoders for precision control, which is exactly what is needed here. A rotary encoder can determine the precise position of the controller’s wheel, and the motor can provide an appropriate resistive force depending on what is going on in the game. The motors out of a printer aren’t plug-and-play, though. They also need an H-bridge so they can get driven in either direction, and the entire mechanism is connected to an Arduino in the base of the controller to easily communicate with a computer over USB.
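    The core control idea can be sketched in a few lines. This is a hypothetical Python illustration of the principle, not [Jason]'s actual Arduino firmware; the function name, counts-per-revolution figure, and gain are all assumptions chosen for the example.

```python
# Hypothetical sketch of the force-feedback control idea: read the rotary
# encoder's position, then drive the DC motor through the H-bridge with a
# spring-like torque that opposes the wheel's deflection from centre.

def feedback_command(encoder_count, counts_per_rev=1200, max_pwm=255, gain=4.0):
    """Map an encoder reading to a signed H-bridge PWM command.

    A positive result means 'drive the motor one direction', negative the
    other; the magnitude is an Arduino-style PWM duty cycle (0-255).
    """
    # Convert raw encoder counts to degrees of deflection from centre.
    angle = encoder_count * 360.0 / counts_per_rev
    # Spring-like resistance: torque proportional to deflection,
    # clamped to what the H-bridge/PWM output can deliver.
    command = gain * angle
    return max(-max_pwm, min(max_pwm, int(command)))

# Wheel turned 90 degrees: the requested torque saturates the PWM range.
print(feedback_command(300))   # 300 counts = 90 degrees -> clamped to 255
# Small deflection the other way: a gentle centring force.
print(feedback_command(-30))
```

    In a real build the game would also feed effects (rumble, kerb hits) into the command, but the proportional centring term above is the part that makes a cheap wheel feel like its larger cousins.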
    In testing the controller does behave like its larger, more expensive cousins, providing feedback to the driver and showing that it’s ready for one’s racing game of choice. It’s an excellent project for those who are space-constrained or who like to game on the go, but if you have more space available you might also want to check out [Jason]’s larger version built from a power drill instead of parts from an inkjet printer.


  • What is a MIDI keyboard?
    Learn all about the basics of MIDI keyboards, and find out how to start making music with one.

    In this introductory guide, learn all about the basics of a MIDI keyboard, and find out how you can start making music with one.

  • Guitar Center Business Solutions Announces Resonate
    This week, Guitar Center Business Solutions announced the inaugural Resonate, “the company’s first dedicated industry expo, launching in Nashville to showcase the future of integrated audio, video and control technology,” according to a company statement.

    "The free, one-day event will take place Thursday, April 9, from 10 a.m. to 6 p.m. CDT at the Music City Center in Nashville," they added. "The expo will bring together leading brands, integrators and decision-makers across music, education, venues and enterprise. Nashville was selected as the host city for Resonate because it reflects the convergence shaping today’s market and serves as the headquarters of Guitar Center Business Solutions."

    “We created Resonate as systems are converging faster than organizations can adapt, and the industry needs clearer leadership around how everything connects,” said Curtis Heath, president of Guitar Center Business Solutions, in a statement. “Our experience across education, performance and enterprise environments positions us to help the market move forward with solutions that are practical, scalable and built to last.”

    “Nashville is the perfect place to close the gap between creators and the systems that amplify their work,” Heath told MC. “Resonate brings together music, pro audio, and pro AV—along with the networked, enterprise-grade technology behind it—to show what’s possible when you design the entire experience end-to-end. No other organization connects these worlds at this scale.”

    Resonate Event Details:
    Resonate, presented by Guitar Center Business Solutions
    Thursday, April 9 | 10 a.m. to 6 p.m. CDT
    Music City Center | Nashville, TN
    Registration is free for early registrants; space is limited.

    For more information and to register, visit resonate-expo.com.

    The post Guitar Center Business Solutions Announces Resonate first appeared on Music Connection Magazine.

  • 60,000 AI tracks hit Deezer daily as platform moves to license detection tech to wider music industry
    Deezer also revealed that up to 85% of all streams on AI-generated music were fraudulent in 2025


  • Future-proofing your DAW project: A guide to exporting multitracks, stems, and more
    Losing a DAW project to software upgrades or crashed hard drives is a rite of passage in music production. You back up your work, confident you’ve properly archived your musical history; several months or years later, you bitterly realise that your computer won’t even open the session, let alone play it back.

    READ MORE: Learn how to layer monosynths to give your music more impact

    This kind of disaster scenario can happen at any time. Plugins change, software companies go bankrupt, and product support disappears—but your music doesn’t have to. Here’s how you can future-proof a project before it leaves your DAW.
    Organising with intent
    Future-proofing your projects won’t just prevent technical mishaps; it will also keep your music adaptable to different media forms.
    Imagine missing out on a TV or film placement because you can’t provide the stems from a previous song, or needing a spatial mix years after release, when the original project is no longer accessible.
    Multitracks are essential for building flexible live sets. Say logistics force you to scale down your setup for your next show; suddenly, you long for control over every layer of sound in the backing tracks. The same applies to remixing. An a cappella master offers far less creative freedom than dry vocal tracks.
    That said, for producers working at speed, creating a detailed delivery folder for every sketch isn’t the most productive strategy. Focusing on music that’s already released or release-ready is more sensible.
    A typical archival folder might include:

    Masters (with alternative versions)
    Stems (submixes of grouped instruments)
    Multitracks (each channel exported individually)
    MIDI files (to preserve tempo and key information)
    Metadata files (lyrics, credits, keywords)

    Each of these files supports different use cases.
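    The folder structure above is easy to standardise with a small script. The sketch below is one possible layout, assuming one root folder per track; the function name and track title are hypothetical.

```python
# A minimal sketch of scaffolding the archival folder described above,
# using only the standard library. Subfolder names follow this guide.
from pathlib import Path

SUBFOLDERS = [
    "Masters",      # final masters plus alternative versions
    "Stems",        # submixes of grouped instruments
    "Multitracks",  # each channel exported individually
    "MIDI",         # tempo- and key-preserving MIDI exports
    "Metadata",     # lyrics, credits, keywords
]

def scaffold_archive(root, track_title):
    """Create one archival subfolder per deliverable for a given track."""
    base = Path(root) / track_title
    for name in SUBFOLDERS:
        (base / name).mkdir(parents=True, exist_ok=True)
    return sorted(p.name for p in base.iterdir())

print(scaffold_archive("archive", "My Track"))
```

    Running it once per release keeps every archive identically organised, which pays off years later when someone else has to find the stems.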
    Image: SIRMA
    Exporting masters and stems
    For many artists, the full master feels like the finish line. But it’s just as important to get the a cappella and instrumental versions from your mastering engineer. If you perform live with backing tracks, it’s also worth having a master without lead vocals. Radio edits and TV masters are useful too, especially for longer pieces or tracks with explicit lyrics.
    In digital media, 24-bit / 48kHz WAV is widely considered the industry standard. Some distributors, such as CD Baby, require 16-bit / 44.1kHz files. Physical formats like vinyl, cassette, and CD each have their own technical requirements.
    So what’s a producer to do to cover all their bases? Well, if your DAW runs a project at 24-bit / 48kHz, exporting at 32-bit / 96kHz won’t improve the audio’s sonic quality. Print at the highest resolution native to your session instead. You can always create conversions for specific formats later.
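    You can verify a file's native resolution programmatically before archiving it. The sketch below, using only Python's standard library, writes a short test tone at 24-bit / 48kHz and reads its parameters back; the function names and tone settings are illustrative, not part of any DAW's export workflow.

```python
# Stdlib-only sketch: write a short mono test tone at a session's native
# resolution, then read the file's parameters back to confirm the bit
# depth and sample rate you should be archiving at.
import math
import struct
import wave

def write_tone(path, rate=48000, sampwidth=3, seconds=0.1, freq=440.0):
    """Write a mono sine tone; sampwidth is in bytes (3 = 24-bit)."""
    n = int(rate * seconds)
    peak = 2 ** (8 * sampwidth - 1) - 1
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(sampwidth)
        w.setframerate(rate)
        frames = bytearray()
        for i in range(n):
            v = int(peak * 0.5 * math.sin(2 * math.pi * freq * i / rate))
            # Keep the low `sampwidth` bytes of the little-endian
            # 32-bit two's-complement representation.
            frames += struct.pack("<i", v)[:sampwidth]
        w.writeframes(bytes(frames))

def native_resolution(path):
    """Return (bit_depth, sample_rate) -- the resolution to print at."""
    with wave.open(path, "rb") as w:
        return 8 * w.getsampwidth(), w.getframerate()

write_tone("tone.wav")
print(native_resolution("tone.wav"))  # (24, 48000)
```

    A quick check like this on each exported file catches the classic mistake of accidentally bouncing a 24-bit session at 16-bit.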
    As for stems, bypass the limiter, but include group processing. When played together in an empty session, they should closely resemble the final mix, with only minor dynamic differences.
    How granular you want to get with your stems is up to you. For example, it’s common to create one stem for the lead vocal, another for harmonies, and a separate one for ad-libs. Some producers stem out all the drums together. Others combine the kick and snare in one stem, and bundle the cymbals in another.
    Once you make your decision, solo the channels you want to export together to create each stem and bounce your session as normal.
    Image: SIRMA
    Exporting multitracks
    It’s common to compress all the channels of a drum kit together. But what if you want to process them differently in the future?
    This is where multitracks matter most: they offer control over every individual sound.
    Most DAWs can export all channels at once. But the results are often far from perfect. You may end up with glitches and missing audio files. Soloing and printing each channel individually takes longer, but it’s far more reliable.
    A common workflow is to secure dry sources, such as vocals, individually first. After that, you can solo the return channels to capture the reverb or delay output. Manage the sends carefully to craft a modular multitrack session that’s easy to reconstruct later.
    When imported into a blank project, all multitracks should recreate the mix exactly, minus any group or master bus processing.
    Image: SIRMA
    Alternative audio assets, MIDI, and metadata
    Once your masters, stems, and multitracks are finalised, consider archiving anything else you may need later.
    It’s best to preserve all your original panning decisions in multitracks. But you can always create a supplementary folder containing centred harmony vocals, dry drum elements, or even raw, untuned vocal comps.
    Likewise, exporting MIDI files for each software instrument can simplify future tasks such as score preparation. At minimum, a single consolidated MIDI file preserves tempo, key, and harmonic structure.
    Image: SIRMA
    Next, prepare a PDF of a metadata sheet that includes:

    Musician credits (with PRO information included for each composer)
    ISRC and UPC codes
    Label and/or music libraries representing the song
    Lyrics
    Tempo, time signature, and key
    Genres

    You can embed some metadata directly into WAV files using software such as Audacity. But a standalone document is still the most accessible solution.
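    Generating the sheet itself is trivial to automate. The sketch below renders the fields listed above as plain text (the guide suggests a PDF, but a text file carries the same information); every field value here is a hypothetical placeholder.

```python
# A minimal sketch of rendering the standalone metadata sheet described
# above. All values are hypothetical placeholders for a real release.

FIELDS = [
    ("Title", "My Track"),
    ("Credits", "A. Writer (PRO: ASCAP), B. Producer"),
    ("ISRC", "XX-XXX-26-00001"),
    ("UPC", "000000000000"),
    ("Label / libraries", "Independent"),
    ("Tempo / time signature / key", "120 BPM, 4/4, A minor"),
    ("Genres", "Electronic, Pop"),
    ("Lyrics", "(attach or paste here)"),
]

def metadata_sheet(fields):
    """Render the fields as an aligned, human-readable sheet."""
    width = max(len(key) for key, _ in fields)
    return "\n".join(f"{key.ljust(width)} : {value}" for key, value in fields)

print(metadata_sheet(FIELDS))
```

    Saved next to the masters, a sheet like this answers the licensing questions (codes, credits, tempo) that otherwise require reopening a session that may no longer load.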
    Final checks
    Simple organisational tactics can make your archives more functional.
    Label every item distinctly and include the track title in all file names to save yourself the headache later.
    To maintain alignment, export all files with one or two bars of silence at the beginning and end.
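    How long is a bar of silence? It depends on the tempo and time signature, so it is worth computing once and applying the same padding to every file. A small helper, assuming the tempo and meter are known:

```python
# Convert bars of padding silence into seconds, so every exported file
# gets identical head/tail padding regardless of which tool renders it.

def bars_to_seconds(bars, bpm, beats_per_bar=4):
    """Duration of `bars` bars at a given tempo and beats per bar."""
    return bars * beats_per_bar * 60.0 / bpm

print(bars_to_seconds(2, 120))  # two bars of 4/4 at 120 BPM -> 4.0 seconds
```

    Agreeing on an exact figure like this makes the final alignment check (dropping everything into a blank session) far less error-prone.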
    Once complete, import everything into a blank session and listen through: first the stems, then the multitracks. This way, if anything is misaligned or missing, you’ll be able to spot it quickly.
    Future-proof one project at a time. Once the process becomes familiar, you’ll develop a repeatable system that protects your music for years to come.
    Check out more music production tutorials.
    The post Future-proofing your DAW project: A guide to exporting multitracks, stems, and more appeared first on MusicTech.

  • Native Instruments CEO updates users: “Business continues as usual at Native Instruments, iZotope, Plugin Alliance and Brainworx”
    Following the news of Native Instruments going into preliminary insolvency proceedings, the brand’s CEO, Nick Williams, has issued an update via NI’s blog page. He assures the community of creators, customers and partners that it’s “business as usual at Native Instruments, iZotope, Plugin Alliance and Brainworx.”
    Since the news spread, the internet has been rife with predictions on which company will acquire NI — InMusic, Splice, and Fender are among the many brand names that have been suggested as potential buyers. Meanwhile, many posts criticising and mourning NI are circulating, with one YouTuber even citing the news as “the upcoming collapse of the music industry”.
    Williams suggests that such speculation is premature, saying that the company is “working diligently and responsibly to secure a healthy, financially sustainable future for Native Instruments.” He adds that the Native Instruments brands are also “continuing to develop and launch new products and features. Our NKS Partnerships team continues to process Kontakt Player licences and NKS Partner submissions.”
    Read the full statement below.
    “I want to personally take a moment to address the recent news about Native Instruments.
    “Please rest assured that business continues as usual at Native Instruments, iZotope, Plugin Alliance and Brainworx. Our hardware and software products remain on sale and available for download and activation. Our passionate and dedicated teams are here and supporting customers as normal. In product and engineering, we are continuing to develop and launch new products and features. Our NKS Partnerships team continues to process Kontakt Player licences and NKS Partner submissions.
    “We are working diligently and responsibly to secure a healthy, financially sustainable future for Native Instruments. As you may have seen, Native Instruments GmbH has entered a restructuring process in Germany, as have 3 of our German non-operating holding companies. In legal terms, we have filed applications to open pre-insolvency proceedings for those companies.
    “We are focused on providing continuity for creators, customers, and partners. We’ll continue to share updates as we have them.
    “I’m a lifelong musician myself, and have been a passionate fan of Native Instruments for 25 years. Our mission to inspire and enable creators to express themselves through sound continues.”
    Keep up with more music production industry news. 
    The post Native Instruments CEO updates users: “Business continues as usual at Native Instruments, iZotope, Plugin Alliance and Brainworx” appeared first on MusicTech.

    Native Instruments CEO Nick Williams responds to the community's concerns about NI's insolvency and says that it's “business as usual”