
Project Text: In these speculative videos, I aim to delve into this emerging visual medium, drawing inspiration from past landmark moments in our technological and visual evolution. The titles of the pieces are the names the different software used to make them assigned to the videos, giving insight both into how they were produced and into my own curation of them.

Generative AI models are trained on our own technological reproductions of reality and, with that, on the artifacts those reproduction methods produce. These large generative models can analyze and recognize patterns across a huge batch of input data, showing us a certain "essence" better than we ourselves are capable of understanding and recreating it. The artifacts produced by early photographic processes, for example, immediately tell us something about the medium. They are physical surfaces, created with chemistry, that record incoming photons reflected off other surfaces, warped through a lens, and clustered to capture something that resembles our own visual experience. These "captures" become unique physical objects, imperfections and all, because the physical world is imperfect and chaotic at its base. The physical image then almost becomes a riddle that can be solved: we can decipher how it was made by looking at its imperfections. In collodion wet-plate photographs, we see where on the plate the light-capturing chemistry ends, creating edges that are unique and cannot be replicated. In other early light-sensitive processes, such as cyanotypes, we can even see the brushstrokes used to apply the chemistry to the surface.

I think this is also interesting to bring into AI-generated imagery and film: to make the image an almost solvable riddle, and to give the viewer insight into the technology used to create it. Not to make everything 4K, super smooth, and photorealistic, because that tells us nothing about the medium itself except that it is capable of it. What happens in the realm of the un-decoded latent image? Is the noise seed I used visible? Do I maybe need fewer diffusion steps?
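The questions above can be sketched numerically. This is a toy illustration, not any real diffusion model or the author's actual pipeline: it simply shows that with a fixed noise seed the starting noise is reproducible, and that stopping the denoising loop early leaves that seeded noise visible in the result, which is the kind of artifact the text is asking about.

```python
import numpy as np

def toy_denoise(clean, seed, steps, total=50):
    """Toy stand-in for diffusion sampling: interpolate from seeded
    noise toward a 'clean' image over a number of denoising steps."""
    rng = np.random.default_rng(seed)        # the noise seed: same seed, same starting noise
    x = rng.standard_normal(clean.shape)     # begin from pure seeded noise
    for t in range(steps):
        alpha = (t + 1) / total              # how far along the schedule we are
        x = (1 - alpha) * x + alpha * clean  # step a fraction of the way to the image
    return x

clean = np.zeros((8, 8))                           # stand-in for a fully "clean" image
full = toy_denoise(clean, seed=0, steps=50)        # run the whole schedule
early = toy_denoise(clean, seed=0, steps=10)       # stop early: noise stays visible
print(np.abs(early).mean() > np.abs(full).mean())  # prints True
```

Running fewer steps leaves a larger residue of the original seeded noise, so the seed's fingerprint survives into the final frame; the same idea applies, in far more elaborate form, to real diffusion samplers.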

This "new" visual medium has reached adulthood extremely rapidly: within four years it has gone from noisy, pixelated images to hyperrealistic video. As with all visual media, I think it is time to explore and reflect on what the true qualities of this specific medium are. It has shown us that it can replicate what we see (so did painting, then photography, then digital rendering), but what is this new medium good for once that goal is reached?

I think it may have something to do with the fluidity of the digital image and the way we humans (internally) experience images. It is exceptionally good at visually expanding on how we interact with the digital image: how it flows from meaning to meaning, the very idea of an original long lost, propelled forward by reproduction upon reproduction upon re-contextualisation upon reproduction, and so on. The moving image becomes a dreamy surface onto which we project meaning, and so it flows on again, without pause for breath, to be regurgitated and unfolded into a new surface in a place it has not known before, yet instantly knowing its way around this strange landscape.

To me, the fluid AI video image is also somewhat reminiscent of dreams: a landscape in which we are not surprised when one thing suddenly seems to be another, a place where one's own experience is also fluid. Maybe that is quite fitting for our day and age.

AI-generated sound by Stable Audio and Milo Poelman


Video Dailies are a visual timeline charting the evolution of AI art.

Every day, new artwork is minted by a curated mix of emerging and experienced voices, capturing a transformative period in art history in real time. Our vision is to spotlight and celebrate the defining moments in an artist's journey, framed within the wider narrative of an art form's development.

It’s a creative, cultural and technological exploration that, in the age of AI and the blockchain, unfolds not over years or months, but day by day.

Category: Art
Contract Address: 0xf6d6...1dd8
Token ID: 190040
Token Standard: ERC-721
Chain: Ethereum
Last Updated: 23 hours ago
Creator Earnings: 5%

Milo Poelman - AnimateDiff_00200_chr2_thf4

