Ziyang Wu & Mark Ramos
Event Modeling - #dump, 2023
Live simulation and online environment (documentation)
Infinite duration

#dump is a simulated "landfill," inspired by IRL mineral mining and fracking sites, jointly built by AI, artists, and social media users. The simulation is generated in real time by a PHP Twitter bot (we’re calling it “dump-bot”) and Unreal Engine 5 Blueprints. “Dump-bot” lives on a hidden server in the NYU Computer Science department, where it monitors and scrapes Twitter in real time. When it finds keywords associated with an AI-generated 3D model, that model is instantly dumped into the landfill scene. At every moment, social media users around the world trigger these keywords from their keyboards and phones, producing a constant stream of AI-generated models that fills the landscape. It’s a world-without-end.
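
The description above implies a simple loop: poll Twitter for new posts, match them against a keyword list tied to pre-generated 3D models, and hand the matched model off to the Unreal Engine 5 scene. The actual dump-bot is written in PHP and its hand-off to Unreal is not documented here; the Python sketch below only illustrates that loop under stated assumptions: the keyword-to-model mapping, the notify_unreal() UDP hand-off, and the use of the Twitter v2 recent-search endpoint are all hypothetical stand-ins.

```python
import os
import time
import json
import socket
import requests

# Hypothetical mapping from trigger keywords to pre-generated 3D model assets.
KEYWORD_TO_MODEL = {
    "fracking": "models/fracking_rig.glb",
    "landfill": "models/trash_heap.glb",
    "mining": "models/open_pit.glb",
}

# Twitter API v2 recent-search endpoint (assumes access via a bearer token).
SEARCH_URL = "https://api.twitter.com/2/tweets/search/recent"
BEARER = os.environ["TWITTER_BEARER_TOKEN"]

# Hypothetical hand-off channel to the Unreal Engine 5 scene: a local UDP socket.
SOCK = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
UNREAL_ADDR = ("127.0.0.1", 7777)


def notify_unreal(model_path: str) -> None:
    """Tell the UE5 side to dump a model into the landfill scene.
    The real #dump pipeline may use a file queue, HTTP, or a UE5 plugin instead."""
    SOCK.sendto(json.dumps({"dump": model_path}).encode("utf-8"), UNREAL_ADDR)


def poll_once() -> None:
    """Search recent tweets for any trigger keyword and dump matching models."""
    query = " OR ".join(KEYWORD_TO_MODEL)
    resp = requests.get(
        SEARCH_URL,
        headers={"Authorization": f"Bearer {BEARER}"},
        params={"query": query, "max_results": 10},
        timeout=10,
    )
    resp.raise_for_status()
    for tweet in resp.json().get("data", []):
        text = tweet["text"].lower()
        for keyword, model in KEYWORD_TO_MODEL.items():
            if keyword in text:
                notify_unreal(model)


if __name__ == "__main__":
    while True:  # run indefinitely: a world without end
        poll_once()
        time.sleep(30)  # stay well inside API rate limits
```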


Ziyang Wu
Event Modeling - AI Fossil (Data Center), 2023
AI-generated sculptures

The work began with collecting and collating news and social events, past and ongoing, surfaced by social media algorithms, and then used dreamfields3D to generate 3D models with the titles of those news items and events as seed words and sentences. In an era of explosive AI growth (yet still in the technology’s infancy), the work records all kinds of human information as “AI fossils” through text-to-3D generation. In a future “abandoned factory” scene that mixes the real and the virtual, humans and AI register the same events in very different ways. Human information appears as realistic fragments, historical fossils, a pile of metal carvings, or a heap of cheap plastic toys. By approaching each “fossil,” participants can find its original text, and they can also dig out stories that the AI generation process has made grand, obscene, meaningful, exciting, or merely moderately boring.
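
In practice this amounts to a batch job: each algorithmically surfaced news or event title becomes a text prompt for a text-to-3D generator, and the resulting mesh is stored alongside its seed text so viewers can later recover the original title. The sketch below is only an illustration: the dreamfields3D command line, its flag names, and the example titles are assumptions, not the project’s documented pipeline.

```python
import subprocess
from pathlib import Path

# Illustrative seed titles; the work draws these from news and social events
# surfaced by social media algorithms.
TITLES = [
    "Data center outage sparks global debate",
    "New mining permit approved despite protests",
]

OUT_DIR = Path("ai_fossils")

for i, title in enumerate(TITLES):
    workspace = OUT_DIR / f"fossil_{i:03d}"
    workspace.mkdir(parents=True, exist_ok=True)

    # Hypothetical dreamfields3D invocation: the repository's actual entry point,
    # flags, and output format may differ from what is assumed here.
    subprocess.run(
        ["python", "dreamfields3D/main.py", "--text", title, "--workspace", str(workspace)],
        check=True,
    )

    # Keep the seed text next to the generated mesh so participants can find the
    # original title by approaching each "fossil" in the scene.
    (workspace / "seed_text.txt").write_text(title, encoding="utf-8")
```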


Ziyang Wu is an artist based in New York and Hangzhou. He teaches at the School of Design and Innovation at the China Academy of Art and is a current member of NEW INC at the New Museum. He holds an MFA from the Rhode Island School of Design and a BFA from the Florence Academy of Fine Arts. His video, AR, AI simulation, and interactive video installation work has been exhibited internationally, including at the Institute of Contemporary Art (ICA) Philadelphia, Rhizome at the New Museum, the Walker Art Center, the Rochester Art Center, SXSW, Art Dubai, Annka Kultys Gallery in London, Eigenheim Gallery in Berlin, the Medici Palace, Milan Design Week, the Today Art Museum in Beijing, the UCCA Center for Contemporary Art, the Chengdu Biennale, the Song Art Museum in Beijing, and the Ming Contemporary Art Museum in Shanghai. His recent fellowships and residencies include the “Randall Chair” award at Alfred University; the “Kai Wu” Interdisciplinary Studio residency at the Media Art Lab, Times Museum; AACYF Top 30 Under 30; Residency Unlimited; a MacDowell Fellowship; an artist residency at the Institute for Electronic Arts (IEA); and the ROCI Road to Peace award from the Robert Rauschenberg Art Foundation.

Mark Ramos is a Brooklyn-based new media artist. He makes fragile post-colonial technology using web/software programming, physical computing (using computers to sense and react to the physical world), and digital sculpture/fabrication to create interactive works that facilitate encounters with our own uncertain digital futures. He is deeply committed to the ethos of open source: the free sharing of information and data + creative uses of technology. Mark has exhibited his work and lectured widely, both online and AFK, including as part of Rhizome's First Look: New Art Online with the New Museum of Contemporary Art in NYC, the Yerba Buena Center for the Arts in San Francisco, the Times Museum in Beijing, the Chengdu Biennial, Arebyte Gallery in London, and the Peter Weibel Institute for Digital Culture in Vienna. He teaches Art after the Internet in the MFA Fine Arts Department at the School of Visual Arts, Form and Code at Pratt Institute, and Web Programming and Computer Principles in the Computer Science Department at NYU. You can also find him playing drums for various bands in Brooklyn.


EPOCH presents XENOSPACE, a ground-breaking and experimental virtual exhibition that features seven artists exploring the collaborative boundaries between AI and machine learning within their creative processes.

The title “XENOSPACE” alludes to an unusual or unfamiliar environment. EPOCH has processed 360° equirectangular panoramas through Stable Diffusion, generating the AI-assisted environments that serve as the backdrop for the artworks.
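
The passage suggests an image-to-image pass: start from an equirectangular panorama and let Stable Diffusion restyle it into an unfamiliar environment. EPOCH's actual tooling, model checkpoint, and settings are not documented here; the sketch below uses the Hugging Face diffusers img2img pipeline with assumed file names, prompt, and parameters purely as an illustration.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Assumed checkpoint; the exhibition's real model and settings are not documented.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Hypothetical input: an equirectangular panorama (2:1 aspect ratio).
panorama = Image.open("panorama_equirect.png").convert("RGB").resize((1024, 512))

result = pipe(
    prompt="an unfamiliar alien environment, post-industrial ruins, hazy light",
    image=panorama,
    strength=0.55,       # how far the output may drift from the source panorama
    guidance_scale=7.5,  # adherence to the text prompt
).images[0]

result.save("panorama_xenospace.png")  # used as a skybox/backdrop for the artworks
```

Because an equirectangular panorama wraps horizontally, a production pipeline would also need to handle seam continuity at the left and right edges, which a plain img2img pass does not guarantee.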

XENOSPACE responds to a significant moment in the field, as it reflects on the growing relationship between humans and machines and the impact of AI on creative expression. The exhibition serves as a benchmark, showcasing the expansive collaborative potential of AI and machine learning in contemporary art practices and exhibition building. – ChatGPT


Category: Art
Contract Address: 0x3f67...ba3f
Token ID: 23
Token Standard: ERC-721
Chain: Ethereum
Last Updated: 1 year ago
Creator Earnings: 10%

Event Modeling #3/8
