Features

Live From Marfa: In Conversation with DADABOTS

DADABOTS

DADABOTS is a music duo at the intersection of art, code, and machine learning. Known for fusing underground genres like death metal, drum and bass, and hardcore punk with neural synthesis, they have built tools that generate live music in real time, creating new genres on the spot and reframing what it means to perform. Their work spans viral livestreams, academic research, and global performances, all driven by a belief in human and machine collaboration as a way to push creativity forward.

This interview took place at the Hotel Saint George Hall during Art Blocks Marfa Weekend, where DADABOTS shared how generative tools are reshaping music, why they embrace lo-fi sounds and unpredictability, and what keeps them returning to the Art Blocks community.

Editor's note: This interview has been edited for length and clarity.

OpenSea: Let's start off with an introduction from you.

DADABOTS: Hi, I am DADABOTS. I'm one half of a duo of music hackers. We come from the world of music hackathons, with roots in metal, punk, and electronic music. We're very interested in extreme music and wanted to push it somewhere new. We got into making music with code, which led us to machine learning, deep learning, and generative art. It's been a wild ride.

OpenSea: You call DADABOTS a cross between a band, a hackathon team, and a research lab. In practice, what does your creative process look like?

DADABOTS: The creative process is whatever crazy ideas Zach and I have that make us laugh hysterically. Most of our viral works, the ones we're best known for, started as jokes. For example, we made a 24/7 infinite death metal generating livestream on YouTube, which ended up being the longest continuously running livestream. That was just a silly idea: what if we took the humans out of being a metal band? It seemed like the most metal thing possible, and people got the joke. On a high level, that's our process. On a low level, we do a lot of coding and research. We're also a research lab, publishing as independent researchers. Our first paper was called "Generating Black Metal and Math Rock," which we published at NeurIPS. From there, we got pulled into a legit research lab called Harmonai, part of Stability AI. Now, this is our full-time gig: developing neural synthesis.

Neural synthesis has a long history, evolving from classic synthesizers to samplers. Now, when you crank the statistical element of a synthesizer all the way up and train these distributions on existing music, you get the most flexible synthesizer possible. We build these and make them open source, releasing our code on GitHub and our models on Hugging Face. We want to see the world play with this and develop it into a new kind of instrument.

Meanwhile, we use it ourselves as an instrument. Our favorite thing is live music. We've played shows in America and Europe, from underground raves in Berlin to dive bar metal shows to cocktail parties at the UN. Our model is trained on data from all existing genres, so we can generate, mix, and match any combination of genres or invent new ones. It's basically a music genre invention device, and it's reliable enough for live improv. At our shows, we ask the audience to shout out genres (Afrobeats, techno, country, deathcore), and then we try to combine them on the spot. We coined the term "prompt jockeys" for this. It's like DJing, but the tracks don't exist yet. Our model is fast enough to generate a three- to four-minute song in three seconds, so we can take suggestions and get full tracks instantly. Normally, a DJ brings a USB stick full of music, but here the model fills up the collection as we go. The audience knows what we're doing and is in on it, so even if it's a train wreck, it's still entertaining.

OpenSea: I love that. I'm trying to picture you at all these different venues. How did you end up at the UN?

DADABOTS: Music hackers are well connected and help each other get gigs. One of my best friends and longtime collaborators, LJ Rich, is brilliant. She has a kind of synesthesia between taste and sound and can improvise on piano about anything. We collaborated on hackathon projects, and she ended up MCing an event at the UN called AI for Good. She brought us in as the opening DJs. They told us, "You can play the UN, but don't do any of the metal stuff. Peace music only." The most extreme we could get was drum and bass. Everyone at the UN loves Afrobeats. The fun thing about the UN is meeting people from all over the world and asking about the music from their countries. Someone from Colombia likes joropo music, someone from Nigeria likes Afrobeats, and we ask, "What happens if you mix these styles together?" The model can statistically find the middle ground and make something new.

My favorite recent genre fusion is what we're playing tomorrow night at Planet Marfa: Ghost Pepper Salsa. Salsa is already a blend of Latin, South American, Caribbean, and Afro-Cuban styles. We fused that with the music Zach and I grew up with: Northeast Mathcore, Braincore, drum and bass, and hardcore punk. Hardcore punk goes way harder with a hand percussion ensemble; it's more frenetic than a punk drummer. Adding bossa nova jazz harmonies makes it even more complex. Whenever you add Brazilian influence, the music becomes denser rhythmically and harmonically. That's the fun, coming up with totally new styles of music that musicians might eventually arrive at on their own, but here we can statistically hear what it sounds like and realize it's an incredible idea.

OpenSea: I love that you're doing this in real time, using algorithms you've built to make it work. You’ve said the goal is human augmentation, not total replacement. Do you still feel that way? What does good human and machine collaboration look like to you?

DADABOTS: When it comes to AI, we each have a choice: take the shortcut or use it to get stronger. There's a lazy path and a path that makes us better. Some resistance to AI comes from realizing that many people just want to take the shortcut and may become de-skilled. We want AI systems that make music harder. With prompt jockeying, we're making DJing harder because the tracks don't even exist yet. Now that we have code, we don't need to stop at single songs or albums. We can have a generative process that makes music forever. No band would play 24/7 for six years, but an algorithm can, and that's a whole new art form. We're interested to see where this technology can push us, somewhere totally new and challenging.

I've always been drawn to extreme metal because of how hard it is to play. The band Necrophagist, for example, is so fast, dense, and precise. The demands to play it well are alluring. That same impulse brought us into machine learning, which is one of the hardest parts of coding, and into deep learning, the pinnacle of computer science. It's a challenge to reclaim the narrative. Most people think of AI music as push-button apps where you instantly make a song. That's cool, but it leaves the impression that low-effort art is the only outcome. Our challenge is to demonstrate the high-effort side.

Image courtesy of DADABOTS

OpenSea: I couldn't agree more. I love your analogy of AI as either an easy way out or a tool to elevate your work. You often embrace artifacts and lower fidelity in your work. Why is grit part of your aesthetic?

DADABOTS: Great question. Out of necessity, in the early days of neural synthesis, the only things that sounded good were lo-fi. Black metal, for example, is meant to sound terrible, recorded with the cheapest, worst microphones, sounding like the band is in the distance. That gives it the atmosphere that makes it what it is. That was the ideal training data in the beginning. The higher the fidelity, the harder it is to learn; the lower the fidelity, the more reduced the expressive range and the easier it is to learn the patterns. With generative black metal, noise, chaos, and lo-fi are good. That ended up being one of the first examples of good-sounding generative music. When we put our generative black metal album on Bandcamp, people genuinely listened to it as music. That was 2017.

Now, in 2025, the models are good at everything: high-def electronic music and sound design. But there's still something appealing about lo-fi. It's about your brain filling in the gaps. There's a Gestalt idea that if you obscure part of a face, your brain fills in the rest. When you listen to lo-fi music, your brain fills in what it's not hearing, which is really cool compared to high-fidelity music, where everything is given to you. There's something beautiful about having your brain fill in the gaps.

Image courtesy of DADABOTS

OpenSea: That's a great answer. My last question: What does it mean for you to be here in Marfa, around these people, at this event?

DADABOTS: Here in Marfa with the Art Blocks community, people take generative art seriously. I don't know any other place where you have a meeting point of people whose life's work is generative art and music. These people work on this every day, and they have a certain level of crazy and a philosophical edge. I love probing people. Why is every artwork you've done for 15 years full of clouds? Why clouds? There's something inspiring about taking hold of your life and imbuing it with meaning. Here, it's the world of generative art. In the world of crypto, which is mostly bad vibes and scams, I feel like the 1% slice that's good is here in the Art Blocks community. That's why I've always come here and will always come back.

OpenSea: That's perfect. Thank you. This was great.

DADABOTS: Thank you.

Disclaimer: This content is for informational purposes only and should not be construed as financial or trading advice. References to specific projects, products, services, or tokens do not constitute an endorsement, sponsorship, or recommendation by OpenSea. OpenSea does not guarantee the accuracy or completeness of the information presented, and readers should independently verify any claims made herein before acting on them. Readers are solely responsible for conducting their own due diligence before making any decisions.
