Vlad Masslove's Liked content

  • Link Rot: Why your Digital Links won’t last forever
    Unfortunately, link deterioration, often called link rot, is inevitable. Here are some effective strategies for managing its impact on online content. by Bobby Owsinski of Music 3.0
    The post Link Rot: Why your Digital Links won’t last forever appeared first on Hypebot.

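    The first step in managing link rot is knowing which outbound links your pages actually contain, so they can be checked for dead destinations. A minimal standard-library Python sketch of that audit step (an illustration of the idea, not a tool from the article):

```python
# A minimal link-audit sketch: collect the outbound links on an HTML
# page so each can later be checked for rot (404s, redirects, etc.).
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Gathers absolute http(s) hrefs from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value and value.startswith("http"):
                    self.links.append(value)

def outbound_links(html):
    """Return the external links found in an HTML snippet."""
    parser = LinkCollector()
    parser.feed(html)
    return parser.links
```

From here, a periodic job could request each collected URL and flag anything that no longer resolves.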
  • “Once tech like stem separation is inside the CDJ, people will get a bit more creative”: Richie Hawtin on how real-time stem separation will impact live shows
    Richie Hawtin has shared his thoughts on the impact real-time stem separation might have on contemporary DJs, especially their approach to live sound manipulation.
    Hawtin debuted his new concert series, DEX EFX XOX, at this year’s Movement Festival Detroit and Sónar Barcelona. The show sees him focus less on grandeur and visual spectacle, and far more on the most important element – the music.

    READ MORE: Erica Synths and Richie Hawtin’s Bullfrog Drums will “teach you drum programming and sampling”

    For DEX EFX XOX, Hawtin uses Traktor, Bitwig, his own MODEL 1 mixer, two A&H Xone K2 MIDI controllers, a Novation Launchpad, and a bunch of “custom scripts” that allow on-the-fly control over a suite of Roland software emulations, including the TR-808, TR-909, and the SH-101.
    In the future, the use of stem separation – something he’s avoided up until now – may also make its way into his sets. In a new interview feature for MusicTech, he explains, “My shows are all pretty spontaneous. I’ve been reluctant to use any stem separation because it all has to be done beforehand. But real-time, high-quality stem separation is coming very shortly, and I’m excited because that will allow for even more fluid mixing.”
    With his shows, Hawtin wants to revive the hypnotic state that immersive sound and lighting alone can induce. He wants his shows to feel more like a club experience, rather than a concert that may focus more on feeding the eyes than the ears. Of the current DJ sphere, he says, “On one level, I see that the scene has exploded with the TikTok DJ generation who maybe think that DJing is just two CDJs and a mixer, but I’m starting to see some of the DJs who’ve been around longer really jumping into these hybrid setups.

    “Once tech like stem separation is inside the CDJ, people will get a bit more creative — but will we see a whole generation of DJs working on their own unique setup? I’m not sure that that’s going to happen.”
    He adds, “I don’t want to sound like I’m slagging off the new-school DJs. Really, the production etiquette and technique of young, modern producers is fucking mind blowing. The music they’re making crosses and combines genres more than ever before — there used to be the house lane, the techno lane, the minimal lane. Now, it’s all going back into the melting pot and that’s where a lot of the energy and excitement is coming from.”
    So, why does he feel rising DJs are not experimenting with their setup as much as they perhaps should? “Part of it is just the convenience of jumping on a plane with a USB stick and jamming out some great tunes,” he says. “I would have been excited if I could have done that 30 years ago, instead of dragging around three 50kg cases and a friend to help me.”
    Find out more about Richie Hawtin, or view all of his scheduled live dates.
    The post “Once tech like stem separation is inside the CDJ, people will get a bit more creative”: Richie Hawtin on how real-time stem separation will impact live shows appeared first on MusicTech.

  • IK Multimedia offers FREE Mesa Mark III guitar amp this month
    IK Multimedia is giving away a free Mesa Mark III and matching cab for new AmpliTube 5 Custom Shop (CS) users throughout July. AmpliTube 5 CS is the free version of IK Multimedia’s virtual guitar and bass workstation for Mac and Windows. Mesa/Boogie launched the Mark III in 1985, and the amp is a three-channel, [...]

  • YouTube’s revamped eraser tool uses AI to remove copyrighted music without impacting other audio
    YouTube has launched an updated eraser tool which allows its creators to remove copyrighted music from their videos, without affecting speech, sound effects or other audio.
    Previously, videos flagged for copyrighted audio were muted or taken down entirely. The updated tool is still in its early stages, however, and YouTube does warn that the “edit might not work if the song is hard to remove”.

    READ MORE: “Human-created works must be respected”: 50 major music tech brands sign Principles for Music Creation with AI

    As first reported by TechCrunch, the updated eraser tool was launched on 4 July. It utilises an AI-powered algorithm to cut just the song used and leave the rest of the video intact. YouTube chief Neal Mohan shared news of the launch on X along with a video.
    In the footage, it is explained that the company had been testing the eraser tool for a while, but it wasn’t as accurate in removing copyrighted tracks as they wanted it to be. Now, the use of an AI-powered algorithm brings a “big improvement”, and users have the choice to mute all sound or simply erase just the music within their videos.
    Find out more below:

    Good news creators: our updated Erase Song tool helps you easily remove copyright-claimed music from your video (while leaving the rest of your audio intact). Learn more… https://t.co/KeWIw3RFeH
    — Neal Mohan (@nealmohan) July 3, 2024

    YouTube is attempting to make big strides with AI right now – last week it was reported that the company had been offering lump sums of money to the “big three” major labels in hopes of rolling out AI music licensing deals with them.
    The move follows the launch of its AI tool Dream Track last year. The feature allowed users to create music using AI voice imitations of famous artists. The platform has allegedly been in talks with Sony, Warner and Universal to try to convince more artists to allow their music to be used in training AI software.
    To find out more about YouTube’s eraser tool, including step-by-step instructions for its use, visit YouTube Support.
    The post YouTube’s revamped eraser tool uses AI to remove copyrighted music without impacting other audio appeared first on MusicTech.

  • 40 resources for aspiring musicians
    Explore these resources for aspiring musicians that can help you unlock inspiration, finish your productions, and share them with the world.

  • Master Your Music Library: Top Tips for Organizing Music Files
    Transform your music library with these expert tips for organizing your files. Learn the secrets to seamless file management and boost your productivity quickly. by CRISTINA CANO of DIY Musician
    The post Master Your Music Library: Top Tips for Organizing Music Files appeared first on Hypebot.

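    In the spirit of those tips, one common organizing convention can even be automated. A minimal, hypothetical Python sketch that maps files named “Artist - Title.ext” into per-artist folders — the naming convention is an assumption for illustration; real library managers would read the files’ metadata tags instead:

```python
# Toy file-organizing rule: derive a per-artist destination path from a
# "Artist - Title.ext" filename; anything else lands in an Unsorted folder.
def destination_for(filename):
    """Map 'Artist - Title.mp3' to 'Artist/Title.mp3'."""
    if " - " in filename:
        artist, _, rest = filename.partition(" - ")
        return f"{artist.strip()}/{rest.strip()}"
    return f"Unsorted/{filename}"
```

A batch script could walk a downloads folder, call this for each audio file, and move files accordingly.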
  • WebSampler allows you to sample any audio from the internet right within your DAW
    Sampling and music production go hand in hand, so it’s no surprise that there’s a demand among producers for tools to make the sampling process easier.
    While there are already a number of audio capture tools available, many producers still rely on online YouTube-to-MP3 websites, which themselves seem to appear and get taken down as regularly as clockwork.

    READ MORE: “Enough’s enough”: deadmau5 threatens to pull music from Spotify following Daniel Ek “cost of creating content” comments

    WebSampler, a new tool from WXAudio, aims to eliminate the need for YouTube-to-MP3 tools by offering producers the ability to record audio from any website from directly within a DAW.

    How it works is simple: WebSampler is a VST plugin with an internet browser built right in, where you can head to any website, record a sample, and insert it as an audio clip right in your DAW’s timeline.
    While WebSampler definitely streamlines the practicalities of creating samples for your mixes, it should still be remembered that samples, more often than not, require permission to be used in songs and other projects. WebSampler doesn’t claim to have anything to do with clearing rights for samples, but for what it does – quick and easy sampling from anywhere on the web – it really does seem like a knockout idea.
    WebSampler costs a very reasonable $10 and is available in VST3, AU and standalone formats. For more info, head to WXAudio.
    The post WebSampler allows you to sample any audio from the internet right within your DAW appeared first on MusicTech.

  • “I never expected anyone to listen to it”: Moby says Play was made in a “completely unpressurised environment”
    Moby has spoken about the “completely unpressurised environment” that led to the making of his breakthrough electronica album Play.
    Released in 1999, Play got off to a slow commercial start, only to explode in popularity – the album has since become the best-selling electronic music album of all time, with over 12 million copies sold to date – after it began to be licensed for commercials and other projects.

    READ MORE: “People want to talk without doing homework”: Swizz Beats on criticism of Verzuz beat battle deal with Elon Musk’s X

    Speaking to MusicRadar about the making of the album, Moby recalls: “The interesting thing is that the music on Play was made in a completely unpressurised environment because I never expected anyone to listen to it.”
    “In 1997/98 when I was finishing the music for Play, I’d been dropped by my American record label, and Daniel Miller of Mute Records hadn’t dropped me but it felt like that was because he felt sorry for me.”
    “Play was made in my bedroom on cheap equipment and the commercial expectations were so low as to be non-existent, so it was a very unpressurised environment,” he adds.
    According to Moby, things changed during the next few records, where he “put a lot of pressure on [himself] to try and make music that would be creatively interesting and commercially successful.”
    “But I realised pretty quickly that I’m not good at that,” he says. “Some producers, especially now, are very good at accommodating the commercial marketplace, but whenever I’ve tried to do that the end result has been mediocre.”
    The musician, who recently released his 22nd solo album Always Centered at Night, also admits that his younger self would not have expected such success.
    “For most of my life, up until a certain point, I assumed I was going to make music in my spare time that no one ever listened to,” he says. “I never expected to have a record deal or play concerts or shows to more than 20 or 30 people and certainly never expected to have anything resembling commercial awareness or success.”

    The post “I never expected anyone to listen to it”: Moby says Play was made in a “completely unpressurised environment” appeared first on MusicTech.

  • “I began to feel like I had no control over the whole thing”: Imogen Heap on making an AI voice model
    Imogen Heap has consistently been ahead of the curve when it comes to technological innovation in music. So, now that she’s returning from a 10-year hiatus, it seems fitting that she’s diving into the world of AI.
    In April, she released her first remix using her AI voice model, ai.mogen, collaborating with Slovakian alt-pop singer Karin Ann on false gold. Heap made the remix alone, but the vocals were generated by an AI model that she developed with her team.
    In an interview with MusicRadar, Heap revealed that she’d had numerous offers from companies to make an AI model of her voice. “They knew that I’m interested in technology, and they knew my answer probably wouldn’t be a flat no,” she tells the outlet.
    Her excitement was dampened, however, by the list of caveats and terms and conditions attached to the offers. “I began to feel like I had no control over the whole thing. Everybody kept saying how hard it is to create an AI voice model… but I just thought, it can’t be that hard.”
    Heap turned the offers down and worked with an audio engineer on an open-source model, which they trained on recordings from throughout her career. “You know what? It came out pretty good,” Heap says. “After that, I was feeling more empowered, like I had a leg to stand on.”

    Heap fed the entirety of false gold through ai.mogen, working with over twenty instrumental and vocal stems. “It was the weirdest thing, but it sounded amazing,” she enthuses. “It was my voice trying to sing the kick and snare, the bassline, the keys. My voice became a kind of aura surrounding everything and it really decided the direction I wanted to take the remix.”
    The artist is also asked whether she’ll ever use ai.mogen’s text-generation capabilities to write song lyrics. “I mean… yeah? Perhaps I wouldn’t use her because there are other services out there that do a much better job right now,” she says. “If someone wanted to generate something in the style of my lyrics that would be fine, though I would like to be credited at some point.”
    She also has plans to expand ai.mogen’s capabilities to eventually become a songwriting and production assistant. “Every single scrap of unused or used audio that I ever create goes into a folder,” she says. “We’re preparing to semantically describe all of it so that, in the future, I can come into my studio and Mogen will say ‘may I suggest this thing that you created in 1998 as a good place to start?’”
    Elsewhere in the interview, she discusses some of the issues AI poses in the music industry, particularly unauthorised voice models. “It does worry me. I will eventually release ai.mogen so that everyone can use it, but I don’t want my voice to say hateful things. So, I need to find a way to do it on acceptable terms.”
    She adds: “I’m in the middle of creating an app that enables musicians to train their own vocal models with privacy and security. The hope is that we can educate and protect people and help them feel a bit more in control of their voice.”
    Read more music technology news.
    The post “I began to feel like I had no control over the whole thing”: Imogen Heap on making an AI voice model appeared first on MusicTech.

  • Creative sound design with mix correction plugins
    The music production world is awash with mix correction plugins, with many using ground-breaking AI that can take a less-than-stellar vocal recording from dud to ‘dude!’ with the click of a button. Elsewhere, de-reverb and stem separation software can pull active sonic elements from a finished track. However, have you ever wondered if they could be used in less corrective and more creative ways? Long answer short: they sure can.

    READ MORE: 12 best stem separation software for vocals, ranked

    You can use a variety of correction plugins as part of the sound design process. While many will yield crunchy and funky lo-fi artefacts, others can elicit unexpected — and extremely useful — results.
    This tutorial heads into experimental territory so be sure to don your cleanest lab coat. You may want to have a few beakers of your favourite spirits on hand as well to encourage out-of-the-box thinking. And don’t worry: if you make a mess, just fix it with the same plugin!
    Lossy melodies with stem separation
    Stem separation is a popular and powerful type of correction software that splits a piece of audio up into its constituent parts — drums, bass, vocals — and lets you work on them individually. RipX DAW Pro from Hit’n’Mix goes one better, turning the stems into malleable audio that you can adjust on a per-note basis. What happens if you feed it just a single instrument, like a marimba line with baked-in delay effects?
    Start by loading your audio into RipX DAW Pro. It will do its thing, analyzing the audio and then separating it out into individual notes. Move the pieces of audio up and down the piano roll to create a new melody, deleting unnecessary ones as you go. There are a number of pitch effects as well, such as Pitch to Scale, Quantize Pitch and Flatten Pitch. Play around with these until you end up with something you like. Finally, bounce it out and import it into your main DAW project.
    While RipX DAW Pro does have a Repair section to reduce artefacts, you can ignore this for this technique. After all, swimmy, low-bit MP3 effects are gaining popularity thanks to plugins such as Goodhertz Lossy and Lese Codec. It’s an extreme effect but could be just what your next lo-fi creation needs.
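    If you want to prototype the Pitch to Scale / Quantize Pitch idea outside of RipX, the underlying note-snapping logic is easy to sketch. A minimal Python example working on MIDI note numbers rather than audio — an illustration only, not how RipX works internally:

```python
# Snap MIDI notes to the nearest tone of a scale (default: C major).
# Scale is given as pitch classes 0-11; neighbouring octaves are
# considered so notes near an octave boundary snap correctly.
def quantize_to_scale(midi_notes, scale=(0, 2, 4, 5, 7, 9, 11)):
    out = []
    for note in midi_notes:
        candidates = [12 * octave + pc
                      for octave in range(note // 12 - 1, note // 12 + 2)
                      for pc in scale]
        out.append(min(candidates, key=lambda c: abs(c - note)))
    return out
```

Feeding in a detected melody and snapping it to a new scale mirrors what RipX’s pitch tools do to the separated notes.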

    Transient enhancing with de-reverb
    De-reverb plugins are a handy way to remove room sound from vocals, particularly recordings for interviews and podcasts. However, there’s no rule saying they can’t be used on other types of material.
    In this example, the De-reverb module from iZotope’s RX 10 Elements does its best to clean up the reverb from a noisy tambourine loop. By tweaking the controls, you can emphasise the transient attack of the tambourine — the portion when the hand strikes the skin — and bring out some lo-fi artefacts in the process.
    First, click the Learn button and let the in-built AI listen to the signal. Next, bring up the Reduction slider and adjust the different frequency bands in the Reverb Profile until you’ve brought out the strike of the tambourine. Engage the Enhance dry signal button for a more pronounced effect. Try lowering the Artifact smoothing slider to bring out lo-fi goodness. It’s not a bug, it’s a feature!

    Transient smoothing with de-click
    In the same way that de-reverb can remove the sustain portion of a signal, so can de-clicking take away the transient. Here, De-click from iZotope’s RX 10 Elements confuses the attack portion of a clave in a loop with a click, and does its best to wipe it away.
    Start by placing De-click on the track you want to affect. It’s a pretty simple plugin; just bring up the Sensitivity slider until the transient is suitably smooshed. Try using the Click widening control and changing the algorithm for different results.
    For an extreme lo-fi effect, try strapping De-click across an entire drum bus.
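    To build intuition for what transient smoothing does to a waveform, here is a toy slew limiter in Python. This is an illustration of the general idea only — it is not how RX’s De-click algorithm actually works:

```python
# Toy transient smoother: cap the sample-to-sample change so sharp
# attacks (big jumps in level) get rounded off, like an extreme
# de-click setting smooshing a clave hit.
def slew_limit(samples, max_delta):
    """Limit per-sample change to +/- max_delta."""
    out = []
    prev = 0.0
    for s in samples:
        step = max(-max_delta, min(max_delta, s - prev))
        prev = prev + step
        out.append(prev)
    return out
```

A sudden 0-to-1 jump becomes a ramp, which is exactly the “smooshed” transient the tutorial describes.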

    Psychedelic drums with vocal processing plugins
    Designed to correct pitch imperfections, vocal processing plugins can work wonders on the human voice. But can they be used on non-vocal material? Percussion sounds tend to not have much pitch information — sounds like the perfect opportunity for experimentation.
    While full-on pitch correction like Auto-Tune might not have too much of an effect, there are plenty more plugins in Antares’ Auto-Tune Unlimited suite that do, such as Choir.
    Auto-Tune Choir, as the name suggests, is a vocal multiplier. Instead of voices, try running percussion through it, like this conga loop. Turning up the Choir Size to 32 voices creates a psychedelic and tightly delayed drum line. Use the controls in the Variation section – Vibrato, Pitch and Timing – to further tweak out the drums. Results are sure to be unique and unexpected.

    Drum loop tightening with drum removal technology
    It may seem paradoxical but slapping a drum removal plugin onto a drum bus can result in some surprisingly useful results. Tightening, levelling and punch-ifying are all possible with judicious use of the technology.
    Zynaptiq’s Unmix::Drums is a top-quality plugin for removing or reducing the level of drums in mixed stems. When you put it on a drum bus or percussion loop, as in this example, you can make some interesting adjustments.
    Start with the big Drum Level control in the middle and fine-tune it until you have a nice balance of punch and room sound. Use the Attack and Release controls to affect the transient and tail. In the Fine-Tune area, bring up the Bass Synth slider to add power back to the kick drum, if necessary. Finally, engage the compressor and limiter functions at the top for extra punch.

    Learn more at https://musictech.com/learn/.
    The post Creative sound design with mix correction plugins appeared first on MusicTech.

  • “Human-created works must be respected”: 50 major music tech brands sign Principles for Music Creation with AI
    Over fifty global music technology companies and associations have penned their support for Roland and UMG’s Principles for Music Creation with AI. Per the principles, the participating companies advocate for the responsible use of AI in music creation, to “protect the essence of music — its human spirit”.
    BandLab Technologies, Splice, Beatport, Focusrite, Output, LANDR, Waves, Eventide, Native Instruments, NAMM, Sequential, Oberheim and more have united in a bid to protect the rights of musicians as the industry sees an acceleration of generative AI tech.
    The guidelines were established to encourage key figures in the music technology space to be mindful of the potential risks of AI. In a statement, AIformusic says that it’s crucial to responsibly manage the impact of machine learning tools and adhere to the Principles to ensure that the music industry is protecting the integrity of artists. Still, it adds, it acknowledges that AI can be an empowering tool for musicians and creators when applied with caution.
    The statement goes on to say that the alignment of music industry leaders “cannot be understated and plays an invaluable role in shaping a responsible future for AI in music creation.”
    AIformusic also says it “strongly encourages” further organisations and brands around the globe to endorse the principles.
    The seven Principles for Music Creation with AI are as follows:

    “We believe music is central to humanity.”
    “We believe humanity and music are inseparable.”
    “We believe that technology has long supported human artistic expression, and applied sustainably, AI will amplify human creativity.”
    “We believe that human-created works must be respected and protected.”
    “We believe that transparency is essential to responsible and trustworthy AI.”
    “We believe the perspectives of music artists, songwriters, and other creators must be sought after and respected.”
    “We are proud to help bring music to life.”

    BandLab Technologies CEO and co-founder Meng Ru Kuok says of the principles, “We are at a pivotal moment in the evolution of music creation. As leaders, it is our responsibility to thoughtfully ensure that AI supports artists and respects their creative integrity. As we develop new tools, we must remember that technology is at its best when it enhances, not overshadows, human creativity.”
    Splice CEO Kakul Srivastava adds, “AI brings new opportunities to our industry and many musicians are being inspired by these tools. But this is a critical time to support responsibility around new technology and respect for the rights of creators everywhere. This is about the human at the center.”
    “As with all technologies, the Focusrite Group desires to see AI become another toolset to further the creativity of artists vs. a threat to our industry,” said Focusrite PLC CEO Tim Carroll. “We are proud to support AI For Music and to do our part to help ensure this technology is used in a responsible manner.”
    Earlier today, the RIAA announced that it had filed a lawsuit against AI music generation platforms Udio and Suno, with the plaintiffs including Sony Music Entertainment, Warner Records, and Universal Music Group. The lawsuit seeks damages of up to $150,000 for each piece of infringed work, and to stop the two AI companies from training on the labels’ copyrighted songs.
    Read more music technology news. 
    The post “Human-created works must be respected”: 50 major music tech brands sign Principles for Music Creation with AI appeared first on MusicTech.

  • Soundtheory announce Kraftur
    Soundtheory's second release is a multi-band saturation plug-in capable of avoiding the artifacts that come with more traditional approaches to soft clipping.

  • Steinberg says SpectraLayers 11 is a “breakthrough” in audio processing thanks to deeper AI integration
    Steinberg’s SpectraLayers has had a makeover. With a bucketload of AI-driven improvements and powerful restoration and unmixing tools, SpectraLayers 11 vows to take audio editing and restoration to an entirely new level.

    READ MORE: The best DAWs for music producers in all genres, styles and workflows

    SpectraLayers 11 offers a range of features geared towards repairing and cleaning up live recordings. Users can take advantage of the Unmix Chorus module to separate lead and backing vocals, while the Unmix Crowd Noise module goes a step further, allowing you to fully remove crowd ambience from a live track.
    Unmix technology also extends to instrumental isolation. Steinberg has improved its Unmix Song feature, allowing users to extract up to seven different instruments from any given track. The advanced technology will surely be useful for separating stems, making it simpler to re-imagine and remix any given project.
    AI has also helped Steinberg produce a speech and vocal repair feature, Voice DeClip. Steinberg says the module has been trained extensively on hours of clipped and non-clipped recordings. As a result, Voice DeClip should be able to recover clear vocals from any given environment, no matter how loud or chaotic.

    Alongside the new unmixing technology, SpectraLayers’ new Transfer Brush tool should allow users to shift between source and destination layers in real time, while the Transient Pencil allows users the freedom to draw, sculpt and shape transients directly in the spectrogram.
    Workflow has also been vastly improved for SpectraLayers’ 11th edition. Steinberg has introduced new dedicated panels for modules, as well as the option to chain modules and save your chains as presets.
    SpectraLayers Elements 11 is currently available for £68, while the Pro edition is £254.
    For more information, head to Steinberg’s website.

    The post Steinberg says SpectraLayers 11 is a “breakthrough” in audio processing thanks to deeper AI integration appeared first on MusicTech.

  • 5 creative ways to share music using QR codes
    Here are some innovative ways to share music and expand an audience with QR codes. Learn how these simple codes can make promoting your music more effective and engaging.
    The post 5 creative ways to share music using QR codes appeared first on Hypebot.

  • Mainstream is the new niche
    Five years ago, we made the call that ‘niche is the new mainstream’. Today, this dynamic is so fundamental to music and culture that we are firmly in the stage of second order consequences. Superstars are getting smaller, the long tail is getting longer, and rightsholders are bringing in earnings thresholds to keep that growing long tail at bay. But it was a blog post by my colleague Tatiana – “Did Charli XCX go mainstream, or did the mainstream just go niche?” – that got me thinking whether, now five years in, the mainstreaming of niche has reached a tipping point.

    The dynamics of Charli XCX’s career (e.g., 25,000 RSVPs in one hour for a 1,000-cap Boiler Room gig) feel very much like those of Taylor Swift. Of course, the sheer scale of the Swift fandom machine is the big difference – or is it? Is mainstream about actual numbers or reach, or perhaps both? In fact, it is best measured in three key ways:

    Absolute scale: how big are the numbers?

    Relative scale: how big are the numbers compared to others?

    Active reach: what share of the total audience does an artist have?

    Let’s use Taylor Swift, as today’s biggest mainstream music artist, to test each.

    Absolute scale

    There is no getting away from the fact that everything “big” has got smaller. Michael Jackson, arguably the equivalent of Taylor Swift for the peak-CD era, shifted half a billion units worldwide, when units actually meant units. By comparison, Taylor Swift has fewer than 200 million ‘album equivalent sales’ – which of course means this figure is increasingly made up of streams being converted into ‘sales’. Given that so much of streaming behaviour today is radio-like, we would really need to add an estimate of total individual radio listens to Jackson, which would result in a figure that would comfortably end up in the tens of billions in ‘equivalent sales’.
    Yes, Jackson’s career happened in a different era, when fewer artists were competing and linear broadcast platforms dominated. But that is the entire point of fragmenting fandom.

    Relative scale

    It is abundantly clear that Taylor Swift has more streams and ticket sales than pretty much everyone else. She is the biggest artist on the planet right now. She has mainstream awareness, but does that make her actual listenership mainstream? 

    She certainly has more mainstream cultural clout than her peers, managing to become part of the mainstream media narrative – look no further than the Financial Times running pieces on ‘Swiftonomics’. This is thanks, in large part, to the fact she first built her fandom pre-fragmentation, when music was still much more a part of mainstream culture. It is an advantage enjoyed by other artists, such as Beyoncé, that came up pre-streaming’s peak, and therefore pre-fragmentation. But an FT subscriber reading a Swiftonomics story does not necessarily make them a listener (I’ll hazard a guess that particular conversion rate is not one to sing about). Having mainstream media reach is not the same as being a mainstream artist in terms of listenership, even though the two things did largely go hand-in-hand once upon a time.
    So, simply being bigger than the rest does not inherently equate to being mainstream. In the same way that the fastest kid at school could leave her classmates for dust but not even qualify for national heats, let alone compete with the fastest runners in the world.

    Active reach

    Active reach is where the picture really comes into focus. The best-selling albums in US history (when sales were sales) were the Eagles ‘Their Greatest Hits 1971-1975’, with 38 million sales, and Michael Jackson’s ‘Thriller’, with 34 million. Based on the respective populations of the year of release of those albums, the Eagles was bought by 17.4% of the US population, while Michael Jackson was bought by 15.9%. 

    Taylor Swift’s best-selling US album was ‘1989’ (6.5 million) while ‘The Tortured Poets Department’ sold 2.9 million. As a share of the total US population, they represent 2.0% and 0.7%. 

    Taylor Swift’s biggest-selling release has roughly a ninth of the reach of the Eagles’ best-seller, while her latest release had less than 1% reach.
    NOTE: with modern ‘sales’ figures including streams, Swift’s total audience may have been bigger (as many different people’s streams could add up to one sale). But equally, it could be lower, as one person’s streams could add up to multiple units.
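    For concreteness, the album-sales arithmetic above can be reproduced in a few lines. A minimal sketch; the population figures are approximate US census numbers for the release years (my assumption, not stated in the piece):

```python
# Active reach: what share of the population bought a given release?
def reach_pct(units_sold, population):
    """Units sold as a percentage of the population, to one decimal place."""
    return round(units_sold / population * 100, 1)

# 'Their Greatest Hits 1971-1975' (1976): 38m sales, ~218m US population
eagles = reach_pct(38_000_000, 218_000_000)     # ~17.4%
# '1989' (2014): 6.5m sales, ~318m US population
swift_1989 = reach_pct(6_500_000, 318_000_000)  # ~2.0%
```

Both results land on the percentages quoted above, and the same function works for any release given a sales figure and a population estimate.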

    Of course, judging Swift’s reach only by album sales – an aging format, and an essentially extinct one for much of her listener base – is unfair. Yet interestingly, the c1% figure doesn’t just apply to Swift’s album sales. The record-breaking ‘Eras’ tour sold 4.5 million US tickets, which is just over 1% of the US population (and Swifties being Swifties, there was probably a decent number who saw the show more than once, meaning that percentage is likely a bit smaller). Meanwhile, Swift’s 26.1 billion Spotify streams in 2023 made her the most streamed artist of the year, yet that was just 1.4% of all global Spotify streams. Now, 1.4% of global streams for one artist is a massive achievement. But in the analogue era so many more people would have listened to the biggest artist of the day because radio was the main consumption format, and on radio everyone listens to the same song, whether they like it or not.

    None of this is a critique of Taylor Swift, but instead a reflection of the modern music world which she is part of. She is clearly a hugely successful artist at the top of her game. But the game is not the same as it once was. It is not that Taylor Swift is not huge — she is. But she is not mainstream, because mainstream itself is now niche. Charli XCX shows how successful you can be when you understand the power of niche. Niche does not inherently mean small, and its potential is huge. The simple, hard truth is that now everything is niche, even mainstream.
