Our Moderation Principles
Permanence and portability are among the most exciting features of NFT technology – they make it so that owners of a digital item can use it, sell it, and reflect their verified ownership of it across the web3-enabled internet. But those same features also mean that some offensive, distasteful, and otherwise problematic NFTs will inevitably be published to blockchains. And given the decentralized nature of blockchain technology, no person or platform – including OpenSea – has the power to remove offensive content from blockchains.
Platforms face the difficult task of deciding which blockchain content to surface while also creating an environment that users can trust. As leaders in this ecosystem, we’ve developed a set of content moderation policies and enforcement mechanisms that guide the kind of content that can be discovered and purchased using our services.
At a high level, our content moderation policies reflect our position that OpenSea is a blockchain explorer: we want people visiting our platform to experience the broadest possible view of the NFT ecosystem. Our goal is to make information and content on blockchains easily discoverable and accessible. Doing this well means erring on the side of neutrality, with the level of intervention and enforcement scaled to the potential for real-world harm or loss. Furthermore, we try to moderate with a scalpel, not a hammer: rather than making broad generalizations and taking sweeping action, we’ve invested in detection mechanisms and systems that allow us to take a range of specific enforcement actions while keeping in mind our role as a blockchain explorer.
Within that framework, our content moderation policies broadly address four general categories of behavior that could negatively impact the people using our service: physical and emotional harm, financial harm, poor user experience, and intellectual property infringement.
At OpenSea, we’re committed to providing a safe, trustworthy, and inclusive platform through which people can explore and own what moves them in web3. We are still early in our journey as a company and an industry, so we will continue to improve our policies to deliver the best blockchain experience possible.
While OpenSea doesn’t have the ability to delete or remove content posted to underlying blockchains, we employ a few different enforcement mechanisms that help uphold our policies and community guidelines across the OpenSea platform. We do our best to tailor the type of enforcement to the type of harm we perceive:
- We can opt to NOT FEATURE AND LIMIT SEARCH DISCOVERY of content that’s not specifically in violation of our policies or guidelines but that we believe may lead to a negative experience on our platform.
- For content that violates our policies but we believe people should still have the opportunity to view on our platform, we can DISABLE the buying, selling, and transferring of a given NFT or collection using OpenSea’s services.
- For content that violates our policies and we believe that even viewing the content may result in harm to users, we can DELIST an item or collection entirely, hiding it and rendering it inaccessible and undiscoverable on OpenSea.
Because we’re a blockchain explorer, we prefer to disable violative content and reserve delisting for certain limited situations, discussed below. We believe that we are more like a search engine or web browser than a social media platform, and if we want to be a trusted explorer, we can’t heavily censor or limit what's on the blockchain.
We delist content if there is a risk of real-world harm or loss or if the law requires us to do so. For example, we have a zero-tolerance policy for CSAM (child sexual abuse material), and delist that content from OpenSea as soon as possible after we’re made aware of it. Similarly, if we think content was created with the specific intent to deceive (for example, if it is a copyminted work), we take action and delist it from OpenSea.
We will disable, but not delist, some content that people may find deeply offensive or triggering but that does not rise to the level of causing real-world harm or loss. If you have feedback about our content moderation decisions, please let us know by reporting content that appears to violate these policies.