Features

Artist interview: divergence_art

Image of PROOF OF {ART}WORK NFT

PROOF OF {ART}WORK, by generative artist divergence, uses NFT IDs as seeds for a mathematical equation to produce artworks. While the algorithm is simple, seeds that produce beautiful pieces are rare—and thanks to the butterfly effect, changing a seed by even a single bit results in a completely different image. As a result, not only are the tokens non-fungible but so too are the artworks.

The equation plots millions of individual points to produce emergent organic forms and smooth gradients, and the interactive explorer lets viewers magnify by more than 4,000,000x to see the exquisite microstructure—all within the OpenSea listing. The total number of dots is governed by a mathematical property called divergence, which is used to price the tokens at 0.1 ETH per million points, since larger pieces are rarer.
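As a rough illustration of how a token ID can deterministically drive both the point count and the price, here is a minimal Go sketch. The actual po{a}w equation isn't published here, so a simple escape-time quadratic map stands in for it, and the seeding scheme and `pointsForSeed` helper are hypothetical; only the 0.1 ETH-per-million-points rule comes from the collection itself.

```go
// Sketch only: a quadratic escape-time map (z -> z^2 + c) is used as a
// stand-in for the unpublished po{a}w equation, purely to show how a token ID
// can deterministically fix a point count and therefore a price.
package main

import (
	"fmt"
	"math"
)

// pointsForSeed deterministically maps a token ID to a number of plotted
// points: iterate a seeded map and count steps before the orbit diverges.
func pointsForSeed(tokenID uint64) int {
	// Derive a starting point in the plane from the token ID (assumption).
	cx := float64(tokenID&0xFFFFFFFF)/float64(math.MaxUint32)*2 - 1
	cy := float64(tokenID>>32)/float64(math.MaxUint32)*2 - 1

	const maxPoints = 50_000_000 // cap on the number of plotted points
	x, y := cx, cy
	n := 0
	for ; n < maxPoints && x*x+y*y < 4; n++ {
		x, y = x*x-y*y+cx, 2*x*y+cy // z -> z^2 + c
	}
	return n
}

// priceETH applies the stated pricing rule: 0.1 ETH per million points.
func priceETH(points int) float64 {
	return 0.1 * float64(points) / 1e6
}

func main() {
	id := uint64(123456789) // illustrative token ID
	n := pointsForSeed(id)
	fmt.Printf("token %d -> %d points -> %.4f ETH\n", id, n, priceETH(n))
}
```

Flipping a single bit of `id` sends the orbit somewhere entirely different, which is the butterfly-effect sensitivity described above.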

The latest collection has 10 series, each with 3 qualitatively similar editions, to be listed for direct sale at 16:00 UTC on July 14. An exciting addition is a further eight 1/1 editions of multi-seed composites, which will be put up for auction shortly after.

Eagle-eyed collectors will see that Composites I and VI are missing… they’ve been withheld for an on-chain treasure hunt.

View the collection: https://opensea.io/collection/proof-of-artwork


What’s an on-chain treasure hunt?

Persistence of NFT art is so important that I made sure to include my rendering software on the blockchain. Given a piece’s token ID, anyone can render a static image of the respective artwork at any magnification—to paraphrase DEAFBEEF, you need nothing but a Go compiler.

The treasure hunt is a fun way to demonstrate this: two of the multi-seed composites are withheld, and collectors compete to find and render them. The first person to render one of the unseen artworks and post it on Twitter wins a po{a}w single-seed artwork of their choice, up to 1 ETH in value. The runner-up wins one worth up to 0.5 ETH, provided they render the other unseen piece. Both withheld composites will then be minted and auctioned.

What’s unique about your process compared to what we’ve seen from the likes of Art Blocks and Autoglyphs?

I love both of these projects, and deeply respect how they’ve paved the way for new directions in generative art.

Autoglyphs are the epitome of on-chain art, literally inside the contract. If I were to do the same, the gas cost alone would make it impossible to mint a po{a}w NFT. I instead store the Go source on-chain and perform rendering elsewhere, as this allows me to push the boundaries of interactivity.

The biggest difference with Art Blocks is that their seeds are randomly chosen at the time of minting whereas most po{a}w candidate seeds are discarded. An exhaustive computational search for rare seeds is analogous to blockchain proof of work—finding attractive po{a}w tokens is like finding a valid block hash—so my art is in fact mined and has mathematically enforced scarcity. As with Ethereum mining, it’s hard to find a valid value but very easy to verify that it’s correct. The only difference is that po{a}w verification also uses the human eye.
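The mining analogy can be made concrete with a short Go sketch. The `rarityScore` function below is a hypothetical stand-in for whatever property makes a seed worth keeping (in reality the final filter is the artist's eye); the point is the asymmetry: the search is exhaustive and expensive, but re-verifying any published seed is trivial.

```go
// Sketch of the seed search only; the real acceptance test also involves
// human judgement, so rarityScore is a placeholder metric.
package main

import "fmt"

// rarityScore is a hypothetical stand-in for the property that makes a seed
// worth keeping (e.g. a point count or some visual statistic).
func rarityScore(tokenID uint64) float64 {
	return float64(tokenID % 1000) // placeholder
}

// mine scans a range of candidate token IDs and keeps those whose score
// clears a threshold -- hard to find, trivial for anyone else to re-verify.
func mine(start, end uint64, threshold float64) []uint64 {
	var keepers []uint64
	for id := start; id < end; id++ {
		if rarityScore(id) >= threshold {
			keepers = append(keepers, id)
		}
	}
	return keepers
}

func main() {
	fmt.Println(mine(0, 10_000, 999)) // only ~0.1% of candidates survive here
}
```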

One of the beautiful things about searching for my tokens is that since everything is based on fairly simple maths, the seeds and their resulting artworks have always been there. I’m merely finding them and giving people a new way to visualize them in all their complexity.

Isn’t proof of work the main driver of the environmental impact of blockchains?

Before I embarked on the project, I performed an analysis to ensure that my mining would have negligible environmental impact. I calculated that mining a collection of 10×3 editions would use approximately 31 kWh of energy, which is equivalent to a half-hour charge of an electric vehicle*.

The massive energy consumption of entire blockchains arises because miners are racing each other; everyone other than the winning miner has simply wasted their energy. I’m the only person mining po{a}w tokens, which is why the impact is negligible.

*Being the nerd that I am, I’ve cited my references in the artist notes of my Genesis collection.

What novel code did you write for this and what libraries did you rely on? Are you piecing together interesting bits of existing tech or building from scratch?

The frontend component of the viewer relies on an open-source project for interactive maps called Leaflet. The standard way of displaying online maps is to split them into a grid of “tiles”, but instead of pointing Leaflet at a geographical map, I wrote entirely bespoke backend software to render portions of the artworks based on the requested magnification and coordinates. Because pre-generating every tile would require an astronomical number of images, my software renders them on the fly.
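In outline, this kind of on-the-fly tile backend looks something like the Go sketch below. The URL pattern, the 256×256 tile size, and the `renderTile` stub are assumptions for illustration, not the actual po{a}w server; the idea is simply that each Leaflet tile request is parsed and rendered on demand rather than read from a pre-generated set of images.

```go
// Minimal on-the-fly tile server sketch (not the artist's actual backend).
package main

import (
	"fmt"
	"image"
	"image/png"
	"log"
	"net/http"
)

// renderTile is a placeholder for the real renderer, which would plot the
// portion of the artwork covered by tile (x, y) at zoom level z.
func renderTile(z, x, y int) image.Image {
	img := image.NewRGBA(image.Rect(0, 0, 256, 256))
	// ... plot the points that fall inside this tile ...
	return img
}

// tileHandler serves requests of the (assumed) form /tiles/{z}/{x}/{y}.png.
func tileHandler(w http.ResponseWriter, r *http.Request) {
	var z, x, y int
	if _, err := fmt.Sscanf(r.URL.Path, "/tiles/%d/%d/%d.png", &z, &x, &y); err != nil {
		http.NotFound(w, r)
		return
	}
	w.Header().Set("Content-Type", "image/png")
	png.Encode(w, renderTile(z, x, y))
}

func main() {
	http.HandleFunc("/tiles/", tileHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

On the frontend, Leaflet would then be pointed at the same template, with something like `L.tileLayer('/tiles/{z}/{x}/{y}.png')`.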

Beyond the Go standard library, the only dependency is on an implementation of Hilbert curves, but it would be trivial to remove this if I wanted to be a purist.

What does it take to get something of this size to function smoothly in the browser? It must be tricky to transition across so many different scales.

I’m a senior engineer at Google and this was one of the most difficult pieces of software I’ve ever written. To give an idea of size, these are exapixel images—that’s kilo, mega, giga, tera, peta, exa. If you were to print one at 250 dpi it would be larger than the area of Jamaica!

Rendering a static po{a}w image without time constraints is fairly easy, but catering for on-demand exploration meant that I had to take a more fundamental, theoretical approach. In computer science, we have the concept of algorithmic “complexity”, which is a measure of how the number of steps performed by an algorithm grows with respect to the size of its input. There’s a similar concept that describes how much memory is required, and I’m bound by constraints of both time and RAM.

Using a construct known as a Hilbert curve, I was able to reduce memory usage to the theoretical lower bound (only 64 bits per point, regardless of the number of zoom levels) and achieve logarithmic time complexity. But even once I’d designed the algorithm, I still lost many nights’ sleep over the implementation!
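To make the idea concrete, here is a small Go sketch of that representation: each plotted point is stored as a single 64-bit Hilbert index, so memory stays at 64 bits per point regardless of how many zoom levels exist, and lookups over the sorted indices are logarithmic via binary search. This is the textbook construction rather than the artist's actual code, and the helper names are illustrative.

```go
// Sketch: points stored as 64-bit Hilbert indices, queried by binary search.
package main

import (
	"fmt"
	"sort"
)

// xy2hilbert maps (x, y) on a 2^order x 2^order grid (order <= 32) to its
// Hilbert-curve index, using the classic iterative bit-twiddling algorithm.
func xy2hilbert(order uint, x, y uint64) uint64 {
	n := uint64(1) << order
	var d uint64
	for s := n / 2; s > 0; s /= 2 {
		var rx, ry uint64
		if x&s > 0 {
			rx = 1
		}
		if y&s > 0 {
			ry = 1
		}
		d += s * s * ((3 * rx) ^ ry)
		// Rotate/flip the quadrant so the next level sees the canonical orientation.
		if ry == 0 {
			if rx == 1 {
				x = n - 1 - x
				y = n - 1 - y
			}
			x, y = y, x
		}
	}
	return d
}

// contains reports whether idx is present in the sorted index slice, in
// O(log n) time via binary search.
func contains(sorted []uint64, idx uint64) bool {
	i := sort.Search(len(sorted), func(i int) bool { return sorted[i] >= idx })
	return i < len(sorted) && sorted[i] == idx
}

func main() {
	// Nearby points map to nearby Hilbert indices, which is what makes sorted
	// 64-bit indices such a compact, query-friendly representation.
	pts := [][2]uint64{{0, 0}, {1, 0}, {1, 1}, {0, 1}}
	idx := make([]uint64, len(pts))
	for i, p := range pts {
		idx[i] = xy2hilbert(32, p[0], p[1])
	}
	sort.Slice(idx, func(i, j int) bool { return idx[i] < idx[j] })
	fmt.Println(idx, contains(idx, idx[2]))
}
```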

Can you tell us more about your background and how it informed the project?

I’ve been writing code for almost 25 years now, since I was a kid. Throughout this time I’ve always considered myself to be creative but not in the artistic sense—I would literally create useful software from nothing, and this was a huge thrill. Coupled with my general curiosity and the fact that I become way too interested in subjects (proudly aspy), I’ve spent two and a half years exploring the emergent properties of the po{a}w equation. I’m by no means the first person to obsess over it, but I believe that my renderer provides a perspective never seen before.

I was first introduced to Hilbert curves by the S2 geometry library while working as an engineer at Google. The real-world applications are at play every time you use Google Maps.

Before becoming a full-time software engineer, I qualified as a medical doctor and worked for a short period in hospitals in Australia, before being lured back into the technical world. Although this didn’t influence the project, it gives you a glimpse of how deeply I go into rabbit holes if I think something is interesting!

Are there any good resources for creators looking to get into the world of generative art?

The p5.js Web Editor and reference are great places to start, especially if you’ve never written code before, as they even explain foundational concepts. Art is a wonderful way to learn about software because of the immediate and readily understood feedback—the first code I ever wrote was in Logo.

If you already know the basics but want to take your art to the next level, then Tyler Hobbs’ essays are a must for honing artistic expression, and The Coding Train is an excellent artistic-coding resource. Keep an eye on my Twitter as I’m currently working on a new platform, gener8.art, to help generative artists of all experience levels bring their work on-chain. I’ve got some very cool (but currently secret) features that I hope will help artists and thrill collectors at the same time! I should have something ready to share in a month or two… watch this space.
