All posts by Noa Dohler

Ivors Academy demands removal of Suno's AI lyrics generator amid copyright concerns, sparking debate over creative ownership in music industry

AI Lyrics Generator Sparks Industry Outrage

Suno’s AI lyrics generator faces fierce backlash after allegedly reproducing copyrighted song lyrics.

The battle between AI and human creativity has reached a critical turning point. The Ivors Academy, representing Europe’s largest community of music creators, has issued a stern warning against Suno’s controversial ReMi lyrics generator. As we’ve seen with previous AI music innovations, this technology raises serious ethical concerns.

As a composer and performer, I’ve spent countless nights crafting lyrics that capture raw emotions and genuine experiences. Last month, I discovered an AI had generated lyrics eerily similar to my unreleased work. The experience left me questioning the future of authentic songwriting in this AI-driven age.

AI Lyrics Generator Threatens Songwriters’ Livelihood

The Ivors Academy has demanded Suno remove its ReMi lyrics generator, following disturbing discoveries of copyright infringement. In a shocking demonstration reported by MusicRadar, the system reproduced entire lyrics from Midnight Oil’s iconic ‘Beds Are Burning’ using simple prompts.

‘Suno’s new tool is the latest example of unethical AI firms stealing the work, art and livelihood of lyricists, songwriters and composers,’ declares Roberto Neri, CEO of the Ivors Academy. The controversy follows last year’s lawsuits from major labels Sony, Universal, and Warner against Suno for ‘unimaginable scales’ of copyright infringement.

The UK government’s proposal to allow AI companies to train on copyrighted content adds another layer of complexity. While MP Chris Bryant suggests this could create new revenue streams through licensing, creators fear losing control over their intellectual property.

Shape Tomorrow’s Music Industry

The future of songwriting stands at a crossroads. While AI offers incredible possibilities, we must ensure it enhances rather than exploits human creativity. What role do you think AI should play in songwriting? Have you experienced AI-generated lyrics that crossed ethical boundaries? Share your thoughts and let’s protect the art of authentic musical expression together.


Quick FAQ Guide

Q: What is Suno’s ReMi lyrics generator?

A: ReMi is an AI tool that generates song lyrics based on user prompts. It’s currently under scrutiny for allegedly reproducing copyrighted lyrics from existing songs.

Q: How are major record labels responding to AI lyrics generators?

A: Sony, Universal, and Warner filed lawsuits against Suno in 2024, claiming extensive copyright infringement through its AI technology.

Q: Can AI companies legally train on copyrighted lyrics?

A: The question is currently contested. While Suno claims ‘fair use’, organizations like the Ivors Academy are pushing for stronger copyright protections against AI training.

The Who's Roger Daltrey sparks debate: Is music technology killing creativity? His controversial take on modern music production methods.

The Who Singer Challenges Music Technology

Roger Daltrey claims modern technology strips the heart from music making.

In a stirring critique that echoes recent debates about AI’s impact on creativity, legendary The Who frontman Roger Daltrey ignites controversy by challenging the role of technology in modern music production. His perspective sparks essential questions about authenticity versus innovation in our digital age.

His words hit home. While recording at Stanford’s CCRMA, I’ve often caught myself getting lost in endless plugin options, forgetting that my best vocal takes came from raw emotion rather than perfect technical settings. Sometimes, less really is more.

When Technology Threatens Musical Soul

Daltrey’s bold statement comes during the launch of a hardback Buddy Holly biography, where he argues that boundless tech innovation is diminishing music’s essential heartbeat. The rock legend reminisces about the raw early days when ‘throwing stuff at the wall’ led to authentic breakthroughs.

Despite his critique of modern production methods, Daltrey acknowledges the democratizing power of social media platforms. He celebrates how TikTok and Instagram have revolutionized artist independence, allowing creators to share their stories directly with audiences without traditional industry gatekeepers.

Interestingly, while lamenting technology’s impact on rock music’s essence, Daltrey sees hope in the genre’s future. He believes rock is becoming ‘hysterical again’ and ‘younger again,’ suggesting a potential return to its rebellious roots despite – or perhaps because of – technological advancement.

Bridge the Digital Divide

Whether you’re team Daltrey or embrace every new plugin and DAW feature, there’s wisdom in finding balance. Technology should enhance, not replace, human creativity. What’s your take on this eternal debate? Have you ever felt lost in technical options while creating music? Share your experiences and let’s explore how we can harness technology while preserving music’s soul.


Quick FAQ Guide

What is Roger Daltrey’s main criticism of modern music technology?

Daltrey believes that excessive technology is making musicians lose the ‘heartbeat’ of music, arguing that the abundance of technical options can detract from authentic musical expression.

How does social media benefit musicians according to Daltrey?

Despite his tech criticism, Daltrey praises platforms like TikTok and Instagram for democratizing music distribution, allowing artists to share their work directly with audiences without industry intermediaries.

What’s Daltrey’s view on current rock music?

While critical of technology’s impact, Daltrey sees positive signs in contemporary rock music, noting it’s becoming more ‘hysterical’ and ‘younger,’ suggesting a return to its energetic roots.

Spotify Premium evolves with groundbreaking advertising model, promising enhanced revenue streams and AI-powered tools for music creators.

Spotify Premium Unlocks Revolutionary Advertising Model

Spotify Premium users rejoice as groundbreaking changes transform music streaming forever.

Music streaming is evolving at breakneck speed, and Spotify’s latest advertising model proves it. While some artists question the platform’s commitment to creators, this game-changing update promises to reshape how we experience music altogether.

As a performer who’s recorded with Madonna, I’ve witnessed firsthand how streaming platforms impact artist revenues. When my first solo tracks hit Spotify, I obsessed over those streaming numbers like they were Olympic scores. The platform’s evolution feels personal to every musician.

Spotify’s Billion-Dollar Advertising Revolution

The streaming giant just dropped major news about its advertising strategy. According to their latest announcement, Spotify’s yearly payouts have skyrocketed from $1 billion to over $10 billion in just ten years.

Enter the Spotify Ad Exchange (SAX), their shiny new tool that lets advertisers tap into real-time user data. Free users spend about two hours daily on the platform, making them prime targets for these enhanced advertising capabilities. The platform’s ad revenue has more than doubled since 2020.

But here’s where it gets interesting: Spotify’s new Generative AI Ads tool can now create scripts and voiceovers automatically. They’re paying roughly two-thirds of every dollar to rights holders, whether from Spotify Premium subscriptions or advertising revenue.

Shape Tomorrow’s Music Streaming

The future of music streaming stands at a crossroads. Whether you’re a devoted Spotify Premium user or considering the switch, your choices shape the platform’s evolution. How do you feel about AI-generated ads? What features would make your premium subscription more valuable? Share your thoughts and let’s craft the future of music streaming together.


Quick FAQ Guide

Q: How much does Spotify pay artists per stream?
A: Spotify pays rights holders approximately two-thirds of every dollar earned, whether from premium subscriptions or advertising revenue.

Q: How long do free users listen to Spotify daily?
A: According to Spotify’s Global Head of Advertising, free users tune in for approximately two hours each day.

Q: How much has Spotify’s payout increased since 2014?
A: Spotify’s yearly payouts to the music industry have increased tenfold, from $1 billion in 2014 to over $10 billion in 2024.

Revolutionary VR music studio RipX Backstage transforms any space into a creative hub, bringing immersive production to VR enthusiasts

VR Studio Experience Transforms Music Creation

Virtual reality just revolutionized music production, and creators can’t stop talking about it.

The music production landscape is witnessing a seismic shift as virtual reality breaks down traditional studio barriers. This groundbreaking development follows the recent trend of AI’s transformation of music creation, pushing boundaries even further with immersive spatial audio capabilities.

During my time at Stanford’s CCRMA, I dreamed of walking through soundscapes I created. Now, watching producers interact with virtual instruments in three-dimensional space reminds me of those experimental days twisting knobs on microcontrollers, except this is far more intuitive!

Revolutionary VR Music Studio Breaks Reality’s Barriers

Hit’n’Mix has officially launched RipX Backstage, a groundbreaking VR music creation tool that transforms any space into a futuristic creative studio. This innovative platform works seamlessly with Apple Vision Pro, requiring zero additional hardware.

The system’s genius lies in its intuitive design – producers can manipulate sounds by simply looking and tapping, or even apply effects through voice commands. The vertical separation of notes by pitch ensures crystal-clear audio distinction, while the new Conductor feature recognizes song keys for real-time musical input.

April 2025 brings exciting updates including three-dimensional note editing on Player Decks, allowing users to push and pull notes through space. The platform also introduces a new Drum Pad instrument model and an expanded library of loops and sounds, making music creation more accessible than ever.

Shape Tomorrow’s Sound Today

The fusion of virtual reality and music production isn’t just changing how we create – it’s revolutionizing our entire relationship with sound. Whether you’re a seasoned producer or just starting your musical journey, this technology opens doors to unprecedented creative possibilities. Ready to step into the future of music making? Share your thoughts on how VR could transform your creative process!


Quick VR Music FAQ

What equipment do I need to use RipX Backstage?

RipX Backstage works with VR headsets and Apple Vision Pro, requiring no additional hardware. The platform is designed to be accessible for all skill levels.

Can I use real instruments with RipX Backstage?

Yes, you can connect portable Bluetooth MIDI keyboards for added convenience, while also using virtual instruments and voice input for sound creation.

How does spatial audio work in RipX Backstage?

The platform uses vertical separation of notes by pitch and allows users to walk up to individual instruments, hearing them more clearly and moving them around in 3D space.

Udio's new AI tool revolutionizes music creation, letting artists generate new music using existing songs as style references, amid legal debates.

AI Transforms How Artists Create New Music

Udio’s revolutionary AI tool is redefining how artists create new music forever.

The battle between AI and human creativity reaches a pivotal moment as Udio launches its groundbreaking Styles tool. While some artists resist the streaming revolution, others embrace AI’s potential to reshape music production entirely.

Last week in my studio, I experimented with AI-generated reference tracks for a composition. The uncanny accuracy of matching my desired style left me both amazed and slightly unnerved – like watching a digital doppelganger perform my musical thoughts.

AI-Powered Music Creation Gets Style-Smart

Udio’s latest innovation allows musicians to create new music using existing songs as style references. The groundbreaking tool lets users upload audio samples to generate fresh compositions with similar instrumentation and feel. Grammy-winning keyboardist Jordan Rudess calls it ‘incredibly intuitive,’ praising how it helps artists evolve.

This development comes amid serious legal challenges, as Udio faces lawsuits from major record labels including Sony, Universal, and Warner. The labels claim unauthorized use of copyrighted recordings for AI training, while Udio argues fair use protection.

To address rights concerns, Udio’s Styles feature will only work with user-owned content. However, questions remain about enforcement and preventing potential misuse, highlighting the complex balance between innovation and artistic rights in the AI era.

Shape Tomorrow’s Sound Today

Whether you’re excited or concerned about AI’s role in music creation, one thing’s clear: the landscape is changing rapidly. As artists, we have the opportunity to help shape how this technology develops. What’s your take on AI-assisted music creation? Have you experimented with tools like Udio’s Styles? Share your experiences and let’s explore this new frontier together.


Quick FAQ on AI Music Creation

How does Udio’s Styles tool work?

Users upload audio samples which the AI analyzes to create new compositions with similar instrumentation, tone, and feel. It’s designed for both independent artists and commercial creators needing consistent sound across projects.

Is AI-generated music legal?

The legality depends on how the AI is trained and used. Udio faces lawsuits from major labels but argues fair use. The platform now only allows users to upload content they own or control.

Can AI replace human musicians?

While AI can generate music, it currently serves as a tool rather than a replacement. It helps with reference tracks and style matching but lacks human creativity and emotional depth.

Behringer's 2-XM synth brings the legendary Oberheim Two Voice into modern studios with enhanced features and accessible pricing.

Behringer Revives Legendary Oberheim Two Voice

A classic synth legend returns, stirring the hearts of musicians worldwide.

As the synth world buzzes with anticipation, Behringer has finally unleashed their take on the iconic Oberheim Two Voice. Just like we saw with their DeepMind synth revolution, this faithful recreation promises to bring vintage sounds to modern studios.

While performing at CCRMA last month, I spotted an original Oberheim Two Voice gathering dust in the corner. That magnificent beast still had such incredible presence – even silent, it commanded attention. Now, seeing Behringer’s homage brings back those goosebumps.

The Two Voice Legacy Lives Again

Behringer’s 2-XM synthesizer has finally landed, bringing the iconic Oberheim Two Voice into the modern era. Originally announced in 2022, this Eurorack beauty comes with a $319 price tag – slightly higher than expected, but still incredibly accessible.

The original 1975 design featured two single-voice modules and a 37-note keyboard, pioneering polyphonic synthesis. Behringer’s recreation packs four VCOs, four envelope generators, two LFOs, and dual multi-mode VCFs into its compact frame.

Modern additions include MIDI and USB connectivity, stereo output, and extensive CV/gate routing options. The two voices can be combined, split, or stacked in unison for those thick, vintage analog textures that defined an era.

Your Gateway to Vintage Innovation

Ready to dive into the world of classic polysynth sounds? The 2-XM represents more than just another clone – it’s a bridge between past and future, making legendary sounds accessible to a new generation. What sonic adventures will you create with this reborn classic? Share your thoughts and dream patches below!


Quick FAQ Guide

Q: How does the 2-XM compare to the original Oberheim Two Voice?
A: The 2-XM maintains the core architecture while adding modern features like MIDI, USB, and stereo output, all at a fraction of the vintage price.

Q: What makes the 2-XM special in today’s synth market?
A: It offers four VCOs, four envelope generators, and dual multi-mode filters in a compact Eurorack format, making it uniquely powerful for its price point.

Q: Can the 2-XM integrate with modern studio setups?
A: Yes, it features comprehensive MIDI/USB connectivity and extensive CV/gate routing options for seamless integration.

Explore how digital innovation transforms guitar and music production, as Steven Wilson champions modern processing over traditional methods.

Guitar Evolution Sparks Digital Music Revolution

Traditional guitar amplifiers face extinction as digital plugins revolutionize music production forever.

The guitar world stands at a crossroads between vintage authenticity and digital innovation. While some cling to traditional amp setups, others, as recent industry developments suggest, are embracing a future where software transforms how we create and experience guitar music.

Just last week in my home studio, I switched from my beloved tube amp to a digital modeling setup for a recording session. The flexibility was mind-blowing – I could instantly switch between vintage tones I’d spent years trying to perfect through hardware. My inner purist cringed, but my producer side rejoiced.

Digital Revolution Reshapes Guitar’s Future

Steven Wilson, the mastermind behind Porcupine Tree, is boldly declaring that the guitar belongs to the 20th century. His latest album ‘The Overview’ showcases how modern processing and plugins are breathing new life into the instrument.

Wilson’s collaboration with Randy McStine demonstrates how digital tools are expanding guitar’s vocabulary. They’re creating epic solos that honor tradition while pushing boundaries, proving that innovation doesn’t mean abandoning soul.

The abundance of amp simulators and processing tools has transformed modern guitar production. Wilson embraces this digital realm, mixing vintage tones with fresh sounds to create something entirely new, challenging the notion that authentic guitar tone requires traditional hardware.

Shape Tomorrow’s Guitar Sound

The future of guitar lies at the intersection of tradition and innovation. Whether you’re a digital native or an analog purist, there’s never been a more exciting time to explore new sonic territories. What new sounds will you discover? Share your thoughts on the digital evolution of guitar tone – are you embracing the change or holding onto traditional setups?


Quick Guitar Tech FAQ

What are amp simulators?

Amp simulators are digital software that recreates the sound of traditional guitar amplifiers, offering hundreds of tone options without physical hardware.

Can digital plugins replace traditional amps?

Modern plugins can effectively replicate vintage amp tones while offering additional flexibility, though some guitarists prefer the feel and response of physical amps.

How much do professional guitar plugins cost?

Professional-grade guitar plugin suites typically range from $100-300, significantly less than equivalent physical amp setups that can cost thousands.

Behringer's DeepMind X Series redefines polyphonic synthesizer possibilities with 12-voice power and professional effects processing.

Behringer Unleashes DeepMind Polyphonic Synth Revolution

A legendary synthesizer bloodline evolves as Behringer unveils its ultimate polyphonic synthesizer masterpiece.

In a bold move that’s sending ripples through the music tech community, Behringer has launched its DeepMind X Series, marking a decade of polyphonic innovation. Like the recent groundbreaking developments in FM synthesis, this release promises to redefine what’s possible in hardware synthesis.

Back at CCRMA, I spent countless hours exploring polyphonic synthesis, creating ethereal soundscapes that would fill the lab. One late night, I accidentally triggered all twelve voices simultaneously, creating an overwhelming cascade of sound that had everyone rushing in to see what magical instrument I’d discovered.

DeepMind X Series: The Ultimate Polyphonic Evolution

The DeepMind X Series emerges with three powerful models: the 12XD, 6X, and 12X. The flagship 12X boasts a 49-key interface with impressive 12-voice polyphony, while its keyboard-less sibling, the 12XD, packs identical sonic power.

Each synthesizer features dual oscillators per voice, paired with switchable 2-pole/4-pole filters (12 or 24 dB per octave). The architecture includes three ADSR generators and an eight-channel Modulation Matrix, complete with legendary IR3109-style VCFs and stereo VCAs.

The cherry on top? Four simultaneous FX engines featuring over 33 world-class algorithms from TC Electronic, Midas, and Klark Teknik. From lush reverbs to mind-bending phasers, this polyphonic synthesizer delivers studio-quality effects processing.

Shape Tomorrow’s Sound Today

The DeepMind X Series represents more than just another synthesizer launch – it’s a gateway to sonic possibilities that blur the line between vintage charm and modern innovation. Whether you’re crafting celestial pads or punchy leads, this instrument adapts to your creative vision. Ready to explore its vast potential? Share your dream sound in the comments below!


Quick FAQ Guide

What makes the DeepMind X Series special?

The series offers up to 12-voice polyphony, dual oscillators per voice, and four simultaneous FX engines with 33+ professional algorithms, making it one of the most versatile polyphonic synthesizers in its class.

How does the DeepMind 12XD differ from the 12X?

The 12XD is a keyboard-less version of the 12X, offering identical synthesis capabilities in a more compact form factor, ideal for studio integration or rack mounting.

What kind of effects does the DeepMind X Series include?

It features 33+ professional effects from TC Electronic, Midas, and Klark Teknik, including reverb, chorus, flanger, phaser, delay, and multiband distortion.

Discover Lore: a revolutionary pocket-sized analog synth that delivers powerful FM synthesis in a compact, eco-friendly package from Crows Electromusic

Tiny Analog Synth Unleashes Mighty FM Power

Hold tight, synth enthusiasts – a pocket-sized analog powerhouse is about to revolutionize your sound.

In a world where bigger often means better, Crows Electromusic is proving that small can be mighty. Their new analog synth, Lore, is challenging everything we thought we knew about compact sound design. Much like recent innovations in music technology, this tiny marvel is pushing creative boundaries.

This reminds me of my first encounter with FM synthesis at CCRMA, Stanford’s Music Tech lab. I spent hours creating otherworldly sounds with a massive modular system, only to realize that the most interesting textures came from the simplest patches. Sometimes, less really is more!

Pocket-Sized Analog Synth Revolutionizes Sound Design

Get ready for a sonic revolution in a tiny package! Crows Electromusic’s new Lore synthesizer is redefining what’s possible in portable sound design. This pocket-sized powerhouse measures just 3×3 inches but packs two independent triangle-core oscillators and full FM synthesis capabilities.

Following their successful OVUM release, Crows Electromusic designed Lore with ambitious goals: pushing compact synth boundaries while enabling more aggressive soundscapes. The 100% analog circuitry delivers everything from pure tones to metallic, grinding textures, all without digital components or complicated menus.

Here’s what makes this analog synth special: a simple yet powerful FM synthesis implementation, crossfading oscillators, and immediate hands-on control. At just $59 USD, it’s housed in an eco-friendly bioplastic case and powered via USB. No battery needed – just plug in your headphones and start exploring!

Join the Compact Sound Revolution

The future of synthesis is evolving, and it’s getting delightfully smaller. Whether you’re a seasoned sound designer or just starting your sonic journey, Lore proves that massive sound can come in tiny packages. Ready to explore new sonic territories? What kinds of sounds would you create with this pocket-sized powerhouse? Share your ideas below – let’s get creative together!


Quick FAQ Guide

What makes Lore different from other compact synthesizers?

Lore stands out with its 100% analog circuitry, FM synthesis capabilities, and incredibly compact 3×3 inch size, all while maintaining hands-on control without digital menus or screens.

How is Lore powered and what connections does it have?

Lore is USB-powered (compatible with power banks) and features a 3.5mm headphone jack plus line-out for connecting to amplifiers, audio interfaces, or effects.

What kind of sounds can Lore produce?

Lore generates a wide spectrum of sounds from pure tones to complex FM synthesis textures, making it ideal for ambient, drone, and noise music creation.

Bandcamp challenges streaming giants with its artist-first approach, proving traditional business models still work in the age of AI music.

Bandcamp Defies Streaming Giants Industry-Wide

While streaming giants reshape music’s landscape, Bandcamp stands firm with its revolutionary approach.

In an industry obsessed with streaming numbers, Bandcamp’s bold stance against the tide is turning heads. Just as Jerry Garcia’s voice found new life through technology, Bandcamp is breathing fresh air into artist compensation, challenging the status quo of microscopic streaming payouts.

As a performer who’s witnessed both sides of the industry, I remember the day I received my first streaming statement – cents for thousands of plays. It felt like trading a symphony for spare change. That’s when I truly understood why alternative platforms matter.

Bandcamp’s Bold Stand Against Streaming Giants

In a groundbreaking interview with MusicRadar, Bandcamp’s General Manager Dan Melnick isn’t mincing words about streaming. The platform’s direct-to-artist approach sends the majority of revenue straight to creators, unlike streaming’s fractional penny payouts.

Melnick boldly positions Bandcamp’s model as timeless, while suggesting that streaming platforms represent an industry anomaly. Their business approach aligns success with artist earnings – when artists thrive, Bandcamp prospers.

Looking ahead to AI’s impact, Melnick sees Bandcamp’s human-centric model as a shield against a flood of AI-generated content. He emphasizes that chasing streaming numbers could pose a greater threat to a career than building genuine fan relationships.

Your Voice in the Music Revolution

The future of music stands at a crossroads, with artists and fans wielding unprecedented power to shape its direction. Whether you’re a creator seeking fair compensation or a fan wanting to directly support your favorite artists, your choices matter. How will you participate in this revolution? Share your thoughts on direct artist support versus streaming platforms below.


Quick FAQ Guide

How does Bandcamp’s payment model differ from streaming platforms?

Bandcamp gives artists 80-85% of each sale directly, compared to streaming platforms which typically pay $0.003-$0.005 per stream. This direct-to-artist model ensures significantly higher earnings per transaction.
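The gap is easy to quantify. Here is a minimal sketch using the figures above (the 80-85% artist share and the $0.003-$0.005 per-stream range); the $10 album price is a hypothetical assumption for illustration:

```python
# Rough comparison of artist earnings: one direct album sale vs. streaming.
# The 80-85% artist share and $0.003-$0.005 per-stream payouts come from
# the figures above; the $10 album price is an assumed example value.

ALBUM_PRICE = 10.00             # hypothetical list price (USD)
BANDCAMP_SHARE = (0.80, 0.85)   # artist's cut of a direct sale
PER_STREAM = (0.003, 0.005)     # typical per-stream payout range (USD)

def streams_to_match_sale(price: float, share: float, per_stream: float) -> int:
    """Streams needed to equal the artist's take from one direct sale."""
    return round(price * share / per_stream)

# Best case for streaming: lowest sale share, highest per-stream rate.
low = streams_to_match_sale(ALBUM_PRICE, min(BANDCAMP_SHARE), max(PER_STREAM))
# Worst case: highest sale share, lowest per-stream rate.
high = streams_to_match_sale(ALBUM_PRICE, max(BANDCAMP_SHARE), min(PER_STREAM))
print(f"One ${ALBUM_PRICE:.2f} album sale is worth roughly {low}-{high} streams")
```

Under these assumptions, a single album sale is worth well over a thousand streams, which is the "thousands of streams" comparison made below.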

Will AI-generated music affect streaming platforms?

Yes, streaming platforms face potential oversaturation from AI-generated content, which could further reduce artist earnings. Bandcamp’s human-verification system helps protect against this threat.

How can fans best support their favorite artists?

Direct purchases through platforms like Bandcamp provide the highest artist compensation, with up to 85% of the sale going to creators, compared to thousands of streams needed to equal one album purchase.