Orkhan Mammadov’s work bridges heritage and technology, weaving centuries-old cultural motifs into generative systems that question how we create, remember, and reinterpret art. The Azerbaijani artist rose to prominence with Carpetdiem, a generative collection that reimagined traditional carpet designs through code.
His latest project, Visions, turns that exploration inward. Inspired by the ways misinformation and digital memory shape our perception of truth, the series uses AI, data, and interactive design to invite viewers into an ever-evolving experience that mirrors how collective memory itself is constantly rewritten.
We spoke with Orkhan about his creative process, his roots in cultural research, and how Visions closes a trilogy exploring art, memory, and machine perception.
Note: This transcript was edited for length and clarity.

OpenSea: Carpetdiem was a breakout project for you. For people just becoming familiar with your work, let’s start from the beginning: how did you get into web3 and creating NFTs?
Orkhan Mammadov: I’m educated in computer science and visual communication design. I’ve worked as a product designer and creative technologist, with a strong focus on design engineering.
In 2018 I was design director at one of the leading government banks in Azerbaijan, leading a team of around 100 designers and engineers. One day I read an article about art on blockchain. I knew blockchain as a technology, but the “art” part caught my attention. That led me to a Christie’s Art & Tech Summit video. After watching, I thought, “This is what I’ve been hoping for. This technology is amazing.”
I already had a lot of artworks and had participated in the 2019 Venice Biennale, one of the most prestigious exhibitions, representing my country in the national pavilion. As a digital artist, I showed internationally at fairs like Art Dubai, Art Basel, and the B3 Biennale of the Moving Image, and with galleries. Back then, we sold works via USB drives, DVDs, or email with certificates. It was very manual.
I was well paid at the bank, so I couldn’t decide whether to go full-time into art. In 2020, I left the bank. When the pandemic started, I moved to Miami. It felt like the right place amid the global chaos, and I had friends there. I had an O-1 U.S. visa for individuals with extraordinary ability, bought a one-way ticket, and landed in the middle of the crypto and NFT community.
Around February 2021, I finally decided to mint. I had a dilemma: what to mint, how to structure collections, how to price. It took months to decide; I had tons of work but couldn’t press the button.
I minted my first artwork on OpenSea: three works at 5 ETH each, and they sold within two or three minutes. The next day I minted on Foundation, and my work sold for 10 ETH. It was a turning point, and more opportunities opened day by day.
Even in the bear market, demand for my work stayed strong. I was working on my generative collection Carpetdiem. A friend at OpenSea told me about OpenSea Studio; it let me structure the drop, have a landing page, and avoid building a separate mint site. OpenSea helped me release the collection; it was very successful and grew my community significantly. Since then, I’ve continued creating.

OpenSea: With your artwork, you bring together technology and tradition, blending cultural influences in your digital work. Can you walk me through your creative process?
Orkhan Mammadov: In 2018, I didn’t have a particular style or medium. I tried everything, mostly commissions for digital and sound art festivals.
In mid-2018, Azerbaijan’s Ministry of Culture reached out. Before that, I’d been writing in my language about AI and generative art at a time when no one else was talking about it. Someone smart noticed and understood it as the future of art and a new medium. At the same time, the country was shifting toward a more European, democratic approach, reforming systems and institutions, with younger people in ministries like culture.
They asked why I didn’t research our own culture. That was an “aha” moment. Azerbaijan is famous for carpet-making and textiles, our tangible cultural heritage. We have an amazing museum, the Azerbaijan Carpet Museum, with an archive of carpets created in Azerbaijan, including pieces now outside the country.
The government helped me collect available Azerbaijani carpet references from around the world. We compiled more than 10,000 unique pieces. I trained my first carpet-inspired work, The Idea of Saving Aesthetics, which also sold on OpenSea. After that, I created a work for the Moscow Biennale. A few months later I was invited to participate in the Venice Biennale with a new idea.
Azerbaijan had a cultural strategy of connecting Turkic-speaking countries such as Azerbaijan, Turkey, Turkmenistan, and Kazakhstan. Their languages are related, with different dialects. I focused on shared patterns like architectural motifs, carpets, and paintings; the same patterns appear across these countries. I collected traditional patterns from Turkic states and created Circular Repetition, a circular LED installation running real-time software that analyzes and constantly generates new patterns. After these works, the style gained a lot of attention.
I kept thinking about what to do next and created a non-AI work, hand-painted by painters, cut from miniature paintings, and animated by hand. It’s a 10-minute short about fake news, fake reality, and misinformation. Each painting has different states: one is the real one we know from books and encyclopedias, and the rest are artificially created, like fake memories. The piece was later released on OpenSea in an edition of three.
For the last five years, I’ve researched cultural heritage — not only my own, but cultures we share with other countries. This year I decided to extend that research beyond Middle Eastern heritage to cultural heritage in general, exploring how it shapes our lives as part of collective memory.
The concept of memory, and how it’s altered, has always interested me. We now have tools that can create convincing fake images and deepfake videos. I saw articles about scammers using AI to create fake American war-veteran personas to scam older people, which affected me deeply. Using the image of those who sacrificed for independence felt beyond unethical. It made me think about how these same tools are used by governments for mass surveillance, data analysis, and profiling. From one side it’s effective; from another, it challenges basic human rights. These are the same technologies behind models like YOLO, DenseCap, CLIP, and diffusion models. The only difference is in how they’re used.
That reflection moved my work from cultural heritage toward collective memory and AI’s effects on society.
OpenSea: Your new collection, Visions, is inspired by the idea that misinformation spreads faster than truth on social media. What led you to create a project about this topic?
Orkhan Mammadov: Exactly, that was the inspiration, but I didn’t yet have the full idea.
The idea behind Visions started with a question: when we send something like Voyager’s Golden Record into space, are we sending the truth or just a version of it? History works the same way. What we remember or write down is never fully true; it’s shaped by who tells the story, what they include, and what they leave out. Even our memories change over time. Science shows memory isn’t storage; it’s something we rewrite.
This project builds on that idea. I look at classical paintings that play with distortion and hidden meaning — Holbein’s The Ambassadors, Bosch’s The Garden of Earthly Delights, Bruegel’s Netherlandish Proverbs, Velázquez’s Las Meninas — works that question power, truth, and belief.
In Visions, I bring that spirit into today’s world of misinformation, conspiracy theories, fake news, and algorithmic chaos. Images travel faster than truth; it’s easy to lose track of what’s real. I built Visions as an interactive experience made from thousands of images that shift and transform when you touch them, an infinite set of possibilities within one painting, each state affecting the next in a domino effect. AI here isn’t just a tool; it’s a mirror to our minds. It imagines, hallucinates, and rewrites, just like human memory. Visions doesn’t offer one clear answer. It lives inside the confusion. If everything can change, can we ever trust what we remember?
OpenSea: I find it fascinating how traditional your work can look while the underlying tech is so advanced. How do you hope people interact with this artwork?
Orkhan Mammadov: When you look at the painting, video, or interactive version, after one state change, you forget the original. It keeps changing. You click segments; double-click, and they change again. As the system regenerates, it constantly evaluates visual coherence and translates computational states into evolving motion, depth, and relations.
My goal is to make people question what they see in the news and on social media. Because the work keeps changing, there’s an “aha” moment: reality can be altered, manipulated, or influenced by foreign actors. As a user, you may not know what was real. I want people to question everything and recognize we’re entering a dangerous era.

OpenSea: Looking at how complex and layered this project is, how long did this take to create?
Orkhan Mammadov: The first iteration started about a year earlier, in March 2024, while I was on vacation in Thailand. I had the idea but didn’t know if it was possible in my workflow. I built custom software to do it.
The dataset behind Visions was built from over 10,000 public-domain artworks, later limited to 666. I collaborated with a curator from London, Farah Piriye Coene. We’ve collaborated before, and she has curated several of my exhibitions. She chose artworks one by one and explained the stories. Over a year, we refined the selection to fit the concept and grab attention.
Process-wise, my custom software is complex: it analyzes images in real time, compares them to the original style, and if a generated state doesn’t fit, it won’t publish. In simple terms, Visions operates as a self-evolving visual ecosystem where image data and code form a continuous feedback loop. It begins with a dataset prepared and expanded using diffusion models. Each frame is analyzed by computer-vision models that generate captions and segment objects, and regions are selectively reimagined through iterative in-painting using custom-trained Flux diffusion models, every new layer influencing the next. As the system regenerates, it evaluates visual coherence, filters new elements, and preserves state data for recursive transformation. In parallel, my TouchDesigner & Nodes.io setup visualizes the process in real time, translating computational state into evolving motion, depth, and relations. Through this interplay of analysis and regeneration, the work reveals how a machine interprets, remembers, and dreams as viewers click and explore.
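The regenerate-and-evaluate loop described above can be sketched in miniature. This is a toy illustration, not the artist’s actual pipeline: the real captioning, segmentation, and Flux in-painting models are stubbed out with simple label rewrites, and the coherence check is a basic set-overlap score. All function names, data structures, and thresholds here are hypothetical.

```python
# Toy sketch of the Visions-style feedback loop: each pass picks a region,
# "reimagines" it, and only publishes the new state if a coherence check
# against the original passes. Stand-ins, not the artist's real models.

def coherence(state, original):
    """Toy coherence score: fraction of region labels shared with the original."""
    shared = len(set(state["regions"]) & set(original["regions"]))
    return shared / len(original["regions"])

def reimagine(state, region, generation):
    """Stand-in for diffusion in-painting: rewrite one region's label."""
    new_regions = [f"{r}:g{generation}" if r == region else r
                   for r in state["regions"]]
    return {"regions": new_regions, "history": state["history"] + [region]}

def evolve(original, steps, threshold=0.5):
    """Run the regenerate/evaluate loop; reject states below the threshold."""
    state = dict(original)
    published = [state]
    for gen in range(1, steps + 1):
        region = state["regions"][gen % len(state["regions"])]
        candidate = reimagine(state, region, gen)
        if coherence(candidate, original) >= threshold:
            state = candidate
            published.append(state)
    return published
```

Because each accepted state feeds the next iteration, coherence with the original decays as more regions are rewritten, so the threshold naturally caps how far any chain of states can drift, mirroring the “won’t publish” filter described above.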
Visions is part of a trilogy about memory and AI. I already released two series on Verse. In 2015, Google published a model for understanding images, not creating them. I reverse-engineered the approach to ask: if a model can identify a dog or cat, can it create one? I fed the painting into my model; it learned the statistical style and texture and generated a new image from scratch — similar, but not a copy. The first was Reveries, based on famous paintings, such as The Scream. In 2016, I built one of my first custom AI models.
The second was Lost Memories. Using GANs (Generative Adversarial Networks), I trained on large datasets. The model stores images in a “latent space.” Common features like women’s faces and animals become most remembered. But there are “lost memories” the AI almost forgets. Instead of generating the most remembered things, I explored deeper to visualize those lost memories.
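The idea of surfacing what a model “almost forgets” can be illustrated with a small density sketch. The trained GAN itself is omitted: toy 2-D points stand in for latent codes, and “lost memories” are simply candidate latents whose neighborhoods the training data populates least. Every name, radius, and point set here is a hypothetical simplification.

```python
# Illustrative sketch of the "lost memories" idea: rank candidate latent
# points by how densely the training latents populate their neighborhood,
# then keep the sparsest ones instead of the most common ones.

def neighborhood_density(point, latents, radius=1.0):
    """Count training latents within `radius` of a candidate point."""
    px, py = point
    return sum(1 for (x, y) in latents
               if (x - px) ** 2 + (y - py) ** 2 <= radius ** 2)

def lost_memories(candidates, latents, k=2, radius=1.0):
    """Return the k candidate latents the model 'remembers' least."""
    return sorted(candidates,
                  key=lambda p: neighborhood_density(p, latents, radius))[:k]
```

A real version would measure density in the GAN’s high-dimensional latent space, but the inversion is the same: sample where the data is thin rather than where it is thick.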
The third part is Visions, one painting with all its possible states and memories. That completes the trilogy.
OpenSea: What’s cool about this is I’m a viewer and observing the artwork, but since I can also click and change it, I’m a co-creator in a way. What would you hope people take away from this project?
Orkhan Mammadov: Imagine you go to a museum and sit in front of a Rembrandt. You look deeply, and your memory starts to construct the composition and colors; it affects your mood and emotions. People often reimagine details, adding their own ideas. You might see a painting, then describe it to a friend later and unintentionally add your own elements; it merges with your memory of childhood, family, or something else.
That inspired me to give viewers access. In the beginning it was a video artwork; I didn’t yet know how to make it clickable. I reached out to my teacher and hero in generative art, Marcin Ignac, and asked for help. He helped me build the code.
When it was a video, you observed and imagined. With the interactive prototype, you click and change a state. You’re “just” pressing a button, but everyone says, “Look what I created.” It immediately gives a sense of participation. The painting is full of choices, like life. Each decision affects your future options and how you experience the work. You can return to the first iteration and explore different combinations. For many people, that experience was mind-blowing.
OpenSea: You’ve mentioned that Visions was a big undertaking. Who did you collaborate with to bring it to life, and how is it structured for collectors?
Orkhan Mammadov: I collaborated with amazing people: Slava Rybin (Machine Learning), Marcin Ignac (generative system), and Farah Piriye Coene (curatorial statement and dataset curation). This team helped make a very ambitious idea real. There will be 666 paintings in total, with phases that benefit collectors who supported the first two chapters of the trilogy.
OpenSea: Thank you so much for your time today!
Orkhan Mammadov: Thanks, Hannah.
Disclaimer: This content is for informational purposes only and should not be construed as financial or trading advice. References to specific projects, products, services, or tokens do not constitute an endorsement, sponsorship, or recommendation by OpenSea. OpenSea does not guarantee the accuracy or completeness of the information presented, and readers should independently verify any claims made herein before acting on them. Readers are solely responsible for conducting their own due diligence before making any decisions.

