PublMe bot's Reactions

  • A Sound Effect Offers FREE Cinematic SFX By Vadi Sound
    A Sound Effect offers the Cinematic SFX sample library by Vadi Sound for free this month. Vadi Sound, the creator of over 60 diverse libraries, including Cinematic Braams, Didgeridoo, and Space Anime, teamed up with A Sound Effect to deliver an early Christmas gift this year. The prolific sound designer offers the bestselling Cinematic SFX [...]

  • Setting a goal “could be a limitation” in music creation, Rick Rubin says
    Rick Rubin has discussed the value of not setting goals when it comes to creative pursuits in music, saying they can act as “limitations”.

    The topic was brought up during a recent episode of NPR’s All Things Considered show. On it, the Grammy-winning producer, Def Jam Recordings co-founder and well-respected creativity guru talks about working with Johnny Cash, spirituality and his role in music.
    Show host Rachel Martin reveals she’d recently tried – and failed – to learn the piano to create music similar to Ludovico Einaudi’s soundtrack for the 2021 film, Nomadland. She asks if there’s still “creative value” in “setting an artistic goal and not meeting it”.

    “I tend not to set goals,” Rubin responds. “I feel like a goal could be a limitation. Like, I can remember a big successful artist, a singer in a band, saying to me, ‘I’m excited about our next album. We haven’t started writing any songs yet, but we want it to be this kind of sci-fi punk rock thing.’ And I was like, OK, I’m listening. And then I said, ‘What happens if the best songs you write turn out to be more like Neil Young’s Harvest?’ And he’s like, ‘Oh, that’d be great.’”
    “So then it’s like, having the goal – that’s not going to help you get there. It’s more like, ‘start finger painting and see what happens’.”
    “Did anything come from your piano experience? Did you feel more connected to the piano? Did you feel like you liked hearing yourself playing the notes? Was it a nice meditation being at the piano?”
    “Yes,” Martin replies.
    “Can you go back to playing the piano for five minutes a day, 10 minutes a day, whatever – you pick the window without having this goal, but just, ‘I’m gonna have fun?’ That might be a really nice gift to yourself.”
    This is not the first time Rubin has imparted his wisdom in a podcast. Speaking on his own show, Tetragrammaton, in October, he discussed producer Kenny Beats’ background at Berklee College of Music, saying, “There’s a real difference between being a technically great player and creating music – those are two different things”.
    Although not a musician himself, Rubin has in the past pointed out that he acts more as a listener when working with artists, creating inspiring environments that allow artists to “go fucking mental” in the studio, as Kesha once said. In January, he bravely admitted on the CBS show 60 Minutes that he has “no technical ability” and that he “knows nothing about music”.
    Read more Rick Rubin stories via MusicTech.

  • ModeAudio Smoke Signal: Live Trip Hop Drums
    Smoke Signal - Live Trip Hop Drums from ModeAudio contains all the quiet fury and thick, hazy atmosphere of vintage 90s Trip Hop, hitting hard and grooving heavy across its selection of...

  • DOCtron announce IMC-500 processing module
    DOCtron's live-focused EQ, dynamics and saturation processor is now available in a more studio-friendly form factor.

  • YouTube’s Dream Track could be the tipping point for AI-generated music
    For some time now, AI music generation has been the industry’s Waiting for Godot: fast approaching but never really seeming to arrive. With the release of Lyria and YouTube Dream Track, the suspense might finally be over.
    Created jointly by Google DeepMind and YouTube, the two companies – which are both owned by parent company Alphabet – call it their “most advanced AI music generation model to date.” Lyria boasts the ability to create vocals and instrumental textures; write lyrics; transform the timbre and tone of one sound into another; and offer nuanced controls over performance and style.
    Sure, the audio quality is grainy enough for a breakfast bowl, but Lyria brings several futuristic generative processes under one roof and slaps a user-friendly interface on top. There’s no denying it’s an important step toward consumer-ready AI music creation.

    Those creative tools might have some eager producers intrigued, but it’s the announcement of Dream Track that may ultimately prove to be the big story here.
    Dream Track, which is currently only available to a closed group of creators, takes the generative power of Lyria and integrates it into YouTube Shorts – the company’s answer to TikTok. Previously, creators looking to add some music to their videos could choose from a vast library of licensed music, but Dream Track is capable of generating an entirely new song using just a few written prompts. You can even pick your own singer, with vocal models from artists including Charli XCX, John Legend, and T-Pain on offer.
    It’s a match made in heaven: 30-60 second clips are not only perfect for the 21st-century attention span; they’re perfect for AI music. Generating short musical clips is what these generative models currently excel at, while creating longer stretches of music still represents a significant technical challenge. Moreover, Lyria’s undeniably lo-fi sound may not hold up on an album release, but YouTube’s betting that consumers might be willing to accept it on a short viral clip played over phone speakers.
    With the weight of the world’s largest tech companies behind it, YouTube Dream Track could well be the first use-case for AI-generated music that reaches critical mass. If it does, then the implications for artists are significant, and not necessarily positive.

    Getting your song attached to a viral video has become a major part of a modern music career. YouTube’s head of music, Lyor Cohen, recently emphasised its importance to the music industry in a company blog post where he wrote: “Shorts are NEARLY DOUBLING an artist’s total reach, so artists can spend more time doing what they do best: making great music.
    “Shorts are the appetizer to the entrée,” he continued. “They are the entry point, leading fans to discover the depth of an artist’s catalogue.”
    YouTube Shorts aren’t just a way to get discovered, they’re a way to get paid. As of 1 February 2023, the company introduced revenue sharing on these videos, meaning that if an artist’s music is used in a Short, then they can expect to earn income from any viral success. It’s hard to know exactly how much royalty revenue is generated directly from YouTube Shorts, but the company states that if music is used, half of any revenue generated by a video is allocated to music licensing costs. Some estimates put the income from 1m daily views at $1,157.74 per month, theoretically netting music rights holders $578.87, though few credible sources can confirm this at the time of writing.
    Keep in mind that YouTube is the fifth biggest music streaming provider by subscriber market share. Once you factor in the amount of music that is streamed on YouTube by non-subscribers, the platform is easily one of the biggest on the planet. According to reporting from Rolling Stone and research firm MIDiA, YouTube has the potential to eventually overtake Spotify as the single biggest funder of the music industry. Last year it paid out $6bn in royalties.
    And, just like that, the elephant lumbers into the room. Because there is more to Dream Track than simply finding a good use case for AI music. If implemented at scale, generating music could very well become a significant money saver for YouTube, cutting down the amount of royalties they have to pay out to living, breathing artists.

    The copyrights in a song’s lyrics and music are among the primary means through which songwriters earn a living. With Dream Track, these two revenue generators are functionally eliminated.
    That isn’t to say the music industry is getting the short end of the stick here. Lyria was almost certainly trained on music owned by the world’s largest record label: Universal Music Group. In August of this year, YouTube announced it was working with UMG to leverage the label’s “roster of talent” for an AI Music Incubator program. It would seem that Lyria is the fruit of that joint effort.
    So, when Dream Track pulls from UMG’s vast library of music to generate a new song, we can be sure that the label is getting some sort of compensation and that some slice of that money will eventually trickle down to the artists whose music was used to train the model. Maybe.
    However, the only individual musicians directly benefiting from this generative process will be the artists who’ve had their voice modelled. They, depending on the deal struck by UMG, should stand to get some type of royalty payment when their voice is used to generate a song.
    Can artists get discovered, build a fan base, and go viral primarily through the ubiquity of their vocal model? Perhaps, but right now, and for a long time to come, the vocal models available on YouTube Dream Track will be drawn from a small, exclusive club of label-backed artists – certainly not from young, emerging singers.
    All of this might seem a little alarming. But it’s early days – Dream Track is not yet publicly available, and perhaps users will ultimately prefer to use a song they know rather than generating a hazy one-off.
    However, Google DeepMind and YouTube are almost uniquely positioned to move the needle here: both companies are owned by a parent mega-corporation, Alphabet, and this allows the left hand to ‘strategically partner’ with the right hand to maximum benefit. Google DeepMind brings a formidable, and very well-established, AI research program to the table, while YouTube has deep connections to the music, TV, and creator industries. It’s likely how they were able to quickly negotiate and implement all this with Universal Music Group.
    Even taken individually, these companies operate at a scale so gigantic that their experiments can reshape large chunks of the creative economy. Working together, they’ve managed to produce Lyria within a timespan of months, not years. I’d say the time to begin worrying was yesterday.

    YouTube's Dream Track is looking to take an early lead in AI music – what does that mean for musicians?

  • SampleScience VHS Noise Generator
    https://youtu.be/qEkjgkAlL8A?si=cyckrg9Xi14T8fQP
    VHS Noise Generator is a unique tool that features 26 distinct background noises, each one created using an antiquated VCR and...

  • Northampton DJ apologises for knocking 7,000 fans out of Whamageddon
    A football stadium DJ has apologised for playing Wham!’s Last Christmas and potentially eliminating 7,000 people from the cult game Whamageddon.
    Matt Facer – better known as DJ Matty – played the song during half-time at Northampton Town’s home game against Portsmouth on 2nd December and sparked anger among people who had been playing the game.

    “I never knew people took it so seriously. I gave it a spin, thinking it would be quite funny to wipe out 7,000 people who couldn’t avoid it, but clearly it isn’t funny,” he said [via BBC News].
    “I had a bit of an insult on Twitter, light-hearted, [saying] it was not a nice thing to do, and apparently that was quite tame to what was being said in the stadium. So I officially apologise to everybody whose Christmas I’ve ruined.”

    For the uninitiated, Whamageddon involves players trying to avoid hearing Last Christmas for as long as possible before Christmas Eve. If they hear it, this knocks them out of the game.
    It first became popular 10 years ago on the internet and has grown in popularity ever since, even to the point that it has a website, merchandise and dedicated rules. For example, hearing a remix or cover of the song doesn’t count – the only way you can be knocked out is hearing the 1984 version performed by George Michael and Andrew Ridgeley.
    Facer has since promised not to play the song during the home game against Fleetwood Town.

    “I can take it on the chin with my home fans and Portsmouth, but I don’t think I’ll be playing it again,” he told BBC Radio Northampton. “I think it’s a shame people in professions like mine can’t play Wham! until [late] December, but it’s a game and we all have to jump on board.”

    Despite the popularity of Whamageddon, Last Christmas returned to the top of the charts last week, having first hit Number One in January 2021.

    A DJ for Northampton Town FC has apologised for knocking 7,000 fans out of Whamageddon by playing Last Christmas.

  • Is Sonic Charge’s Synplant 2 the future of sound design or just a new AI gimmick?
    €149, soniccharge.com
    Sonic Charge’s original Synplant plugin is over 15 years old and is essentially a two-oscillator synth with FM that lets you generate new sound variations by evolving patches from seeds, with a growing plant graphic in the centre. It was often overlooked because of its unusual user interface, but it gradually gained global support from artists such as Flume, Brian Eno and Orbital due to its innovative operation and organic sound.
    The new Synplant 2 adds a handful of improvements and subtle visual upgrades, but the main headline feature is a groundbreaking AI patch generator that attempts to clone a loaded sample. But is this the future or just another AI gimmick?

    Sowing seeds
    Synplant is designed to shift the focus away from more traditional knob tweaking, instead letting you quickly develop sounds using your ears.

    The main page features a unique plant graphic, with a seed in the centre and then 12 branches that represent unique variations on the original sound. As you drag the branches further from the centre, the sound mutates from something more melodic and playable into more of an unpitched sound effect, depending on where you set the Atonality slider.
    You can generate a new random seed sound, or you can grab a single branch that you like the sound of, and plant it to create a new central seed to work from. Although the controls aren’t immediately obvious and there’s a bit of trial and error, it’s a refreshing and completely different way of developing sounds.
    Synplant 2 plugin main GUI
    Branching out
    The previous version had the different sound variations (branches) assigned to each note on the keyboard. Even better, Synplant 2 now adds the option to have each variation play across different MIDI velocities, or across six-semitone ranges over the whole keyboard.
    There’s also a new Layered mode that lets you play more than one sound at once for richer timbres. Other controls around the edge include tuning, Effect (controlling the reverb amount and pan width), volume (which drives into a soft clipper when pushed), release, and the aforementioned Atonality. There are also some useful functions accessible via a drop-down menu like Correct Tuning, which attempts to pull each branch in tune, and Normalise Loudness, which will balance the volume of each branch.

    Finally, at the bottom you have a new voice mode for selecting Polyphonic, Monophonic or Legato, a Tempo Sync selector, and a Wheel Target that lets you choose a destination for the mod wheel. This is most dramatic when set to the branch growth, as it can create complex morphing sounds with a flick of the wrist. You can also choose from more familiar parameters like volume, LFO amount, Effect amount, and cutoff.
    It’s a shame that you can’t choose more than one here though. In fact, it would make the instrument a little more playable if you had access to a few more macro controls with flexible destinations, rather than just the fixed Wheel Target.
    Quality genes
    These controls offer the top level of sound design and can get you pretty far in a fairly intuitive way, but if you want to delve deeper, you’ll need to go into the DNA editor. This gives you access to the 48 genes of the synth’s internals, including the oscillators, FM, envelopes, LFO, and effects.
    Synplant 2 plugin DNA editor
    Synplant 1 was criticised for being a little impenetrable here, but improvements have made it more visual this time around. However, there are still some quirks that make it a little hard to decipher and edit if you’re used to working with traditional controls. The envelope is a case in point, as it includes extra time, loop and tilt controls – far more fiddly than a traditional ADSR. The rotating DNA spiral does look cool and unique though, so visual points added there.
    When it comes to effects, you can get gnarly sounds with saturation and clipping, and there’s a decent chorus and a nice-sounding reverb that go a long way to adding character to the patches. You also get a low and high shelf filter for tonal shaping.
    Talking of the sound, this version has a rewritten engine that improves the audio quality, and it shows. A quick flick through the large selection of presets helps to show off the synth’s versatility. It has bags of character and achieves complex sounds that are more interesting than synths with twice as many parameters.
    It just goes to show what can be achieved with careful and intricate programming. Still, the preset browser is a little archaic; it has no tagging and just loads files from a system folder.
    Copycat
    Now for Synplant 2’s most impressive feature, the Genopatch. To use this, you load in a sample and then select up to two seconds of it to reference, then hit start. You’ll see four strands sprouting upwards as it attempts to generate optimal synth settings to match the sample. Patches appear as circles, which will gradually get closer to the original sound as the process continues.
    Synplant 2 plugin Genopatch
    It uses a lot of CPU while working, but the process has been refined to make it happen in real time. This is something the developer Magnus Lidström has been working to achieve for a long time, and it’s one reason why the Synplant sequel has taken 15 years. Once it’s finished, you can click on each circle to load the patch and hear the results, then play it on the keyboard.
    We test a variety of sources, including drum hits, bass sounds and organic instruments, and the accuracy of the results varies wildly. Synplant can’t replicate complex sounds, as it is limited by its 48-gene DNA, but it can do simpler sounds and percussion pretty well. Sonic Charge should be commended for a truly groundbreaking idea though, as when it works well, it’s truly astonishing.
    We’re interested to see how AI patch generation develops as computers get more powerful and the concept can be used on synths with more complex architectures. It feels like the right kind of AI in music production, as it inspires creativity, rather than replaces it.
    Chasing perfection
    Talking about the Genopatch though, it’s perhaps better to get the idea of perfection out of your head, and to think of it as a new, fun and easy way to generate patches with multiple variations. Even if some results aren’t totally accurate to the original, you can come up with some amazingly weird-and-wonderful sounds that you’d never think to program. It could also be used as a learning tool, as you could copy a sampled sound and then go into the settings to see how it’s been built.
    The results don’t always track well up and down the keyboard, as pointed out by producer Dan Larsson on his Letsynthesize YouTube video. To be fair, the manual does suggest that there are a few key tracking parameters that you might want to adjust, but it’s not always possible to obtain smooth results. It’s also frustrating when trying to refine a sound to get it closer to the original while using the complicated envelope controls.

    Unique design
    Synplant 2 is unlike any other synth out there. Its editing occasionally frustrates, but this is more than made up for by the fun that can be had through exploring sound design in a more tangible way. Results, although sometimes unexpected, are also thrilling, with a high quality output that feels alive.
    If you’re into sound design, or you’re not yet comfortable with in-depth synth editing, then this is well worth a look.
    Synplant 2 key features

    Windows and Mac, VST/VST3/AU
    Experimental soft synth with unique plant-based GUI and randomisation features
    2 Oscillators, FM & multimode filter
    Reverb, chorus, saturation, clipper
    Low and high shelf EQs
    New AI Genopatch turns samples into editable patches
    Rewritten and improved audio engine
    Improved visuals and editing for DNA synth parameters
    NEW mono/poly modes, glide and tempo sync
    NEW velocity, key range and layer modes
    MIDI Polyphonic Expression (MPE) support

    Synplant 2 offers an intuitive and unique way to farm sound design patches, and the new update adds improved editing and more.

  • Cast Your Vote in the AllMusic 2023 Readers' Poll
    As we review our favorite albums of the year, we turn it over to you to vote in our annual readers' poll. Each of you has five votes to spend in our annual year-end poll, with a ballot built from our readership's best-rated albums of 2023.

    We've launched our Year in Review hub, beginning with the year's 100 best albums and continuing with a new genre-specific list each weekday. But now it's time to let us know what…

  • Senator Warren calls out Apple for shutting down Beeper’s ‘iMessage to Android’ solution
    U.S. Senator Elizabeth Warren (D-Mass.) is throwing her weight behind Beeper, the app that allowed Android users to message iPhone users via iMessage, until Apple shut it down. Warren, an advocate for stricter antitrust enforcement, posted her support for Beeper on X (formerly Twitter) and questioned why Apple would restrict a competitor. The post indicates […]

  • W. A. Production Imprint Multiband Transient Designer Is Free For A Limited Time
    W. A. Production’s Imprint multiband transient designer is now free, thanks to a promotion at Audio Plugin Deals. The full RRP for the plugin is $39.90, and it’s available for Windows and macOS. The free release is only available for a limited time, and it’s only available at the Audio Plugin Deals site (as opposed [...]

  • XiiixxiQ
    XiiixxiQ is a Summed Euclidean, Non Linear, Poly-metric, Polyrhythmic, Morph-able Step Sequencer. XiiixxiQ is designed to be a MidiFX plugin in Logic. It offers a collection...

  • Kaedinger’s Audio Comparison plug-in for WordPress
    Audio Comparison offers a simple way to synchronise and play up to three audio files on a website page.

  • Google fakes an AI demo, Grand Theft Auto VI goes viral and Spotify cuts jobs
    Hey, folks, welcome to Week in Review (WiR), TechCrunch’s regular newsletter that recaps the past few days in tech. AI stole the headlines once again, with tech giants from Google to X (formerly Twitter) facing off against OpenAI for chatbot supremacy. But plenty happened besides. In this edition of WiR, we cover Google faking a […]

    In this edition of TC's Week in Review newsletter, we cover Google faking an AI demo, the GTA 6 trailer going viral and Spotify cutting jobs.

  • Getting It Done: The Week in D.I.Y. & Indie Music
    Last week, our tips and advice for independent, do-it-yourselfers covered how to learn more about your fans, the best modern strategies for releasing music, and more…