RoBERTa Dataset

RoBERTa is a transformers model pre-trained on a large corpus of English data in a self-supervised fashion. This means it was pre-trained on the raw texts only, with no humans labeling them in any way (which is why it can use lots of publicly available data), using an automatic process to generate inputs and labels from those texts.

More precisely, it was pre-trained with the masked language modeling (MLM) objective. Taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This differs from traditional recurrent neural networks (RNNs), which usually see the words one after the other, and from autoregressive models like GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.
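As a minimal sketch of the MLM objective in use, the snippet below loads a fill-mask pipeline from the transformers library (the roberta-base checkpoint is assumed here, since the listing does not name one) and predicts a masked token:

```python
from transformers import pipeline

# Load a fill-mask pipeline with a RoBERTa checkpoint (roberta-base is assumed here).
unmasker = pipeline("fill-mask", model="roberta-base")

# RoBERTa marks the position to predict with the <mask> token.
predictions = unmasker("The goal of life is <mask>.")

# Each prediction includes the filled-in token and its probability score.
for p in predictions:
    print(f"{p['token_str']!r}: {p['score']:.3f}")
```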

This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks.
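As a sketch of that feature-extraction use (again assuming the roberta-base checkpoint and PyTorch), the snippet below runs a sentence through the model and takes the last hidden states as token-level features:

```python
from transformers import RobertaTokenizer, RobertaModel

# Load tokenizer and model (roberta-base assumed; any RoBERTa checkpoint works the same way).
tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = RobertaModel.from_pretrained("roberta-base")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors="pt")

# The last hidden state provides one contextual vector per token for downstream tasks.
output = model(**encoded_input)
print(output.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```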

Collection image: DECONSTRUCT - BLACK BOX

Deconstruct Black Box Manifold Collection.

Category: Art
Contract Address: 0xf1e1...2dc6
Token ID: 26
Token Standard: ERC-1155
Chain: Ethereum
Last Updated: 1 year ago
Creator Earnings: 6.9%

