PlanTL-GOB-ES-roberta-base-bne

© All rights reserved: https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne

Copy of MarIA (PlanTL-GOB-ES/roberta-base-bne), preserved because the weight files of the original model have been permanently removed after it was deprecated.

How to use

Here is how to use this model:

>>> from transformers import pipeline
>>> from pprint import pprint
>>> unmasker = pipeline('fill-mask', model='PeterPanecillo/PlanTL-GOB-ES-roberta-base-bne-copy')
>>> pprint(unmasker("Gracias a los datos de la BNE se ha podido <mask> este modelo del lenguaje."))
[{'score': 0.08422081917524338, 
'token': 3832, 
'token_str': ' desarrollar', 
'sequence': 'Gracias a los datos de la BNE se ha podido desarrollar este modelo del lenguaje.'}, 
{'score': 0.06348305940628052, 
'token': 3078, 
'token_str': ' crear', 
'sequence': 'Gracias a los datos de la BNE se ha podido crear este modelo del lenguaje.'}, 
{'score': 0.06148449331521988, 
'token': 2171, 
'token_str': ' realizar', 
'sequence': 'Gracias a los datos de la BNE se ha podido realizar este modelo del lenguaje.'}, 
{'score': 0.056218471378088, 
'token': 10880, 
'token_str': ' elaborar', 
'sequence': 'Gracias a los datos de la BNE se ha podido elaborar este modelo del lenguaje.'}, 
{'score': 0.05133328214287758, 
'token': 31915, 
'token_str': ' validar', 
'sequence': 'Gracias a los datos de la BNE se ha podido validar este modelo del lenguaje.'}]
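
The same prediction can also be obtained without the pipeline by calling the masked-language-model head directly. The following is a minimal sketch (the top-5 selection logic is ours and is not part of the original card):

>>> from transformers import AutoTokenizer, AutoModelForMaskedLM
>>> tokenizer = AutoTokenizer.from_pretrained('PeterPanecillo/PlanTL-GOB-ES-roberta-base-bne-copy')
>>> model = AutoModelForMaskedLM.from_pretrained('PeterPanecillo/PlanTL-GOB-ES-roberta-base-bne-copy')
>>> text = "Gracias a los datos de la BNE se ha podido <mask> este modelo del lenguaje."
>>> inputs = tokenizer(text, return_tensors='pt')
>>> logits = model(**inputs).logits
>>> # locate the <mask> position and keep the five highest-probability candidates
>>> mask_index = (inputs['input_ids'][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
>>> top_ids = logits[0, mask_index].softmax(dim=-1).topk(5).indices[0]
>>> print(tokenizer.convert_ids_to_tokens(top_ids.tolist()))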

Here is how to use this model to get the features of a given text in PyTorch:

>>> from transformers import RobertaTokenizer, RobertaModel
>>> tokenizer = RobertaTokenizer.from_pretrained('PeterPanecillo/PlanTL-GOB-ES-roberta-base-bne-copy')
>>> model = RobertaModel.from_pretrained('PeterPanecillo/PlanTL-GOB-ES-roberta-base-bne-copy')
>>> text = "Gracias a los datos de la BNE se ha podido desarrollar este modelo del lenguaje."
>>> encoded_input = tokenizer(text, return_tensors='pt')
>>> output = model(**encoded_input)
>>> print(output.last_hidden_state.shape)
torch.Size([1, 19, 768])
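
The last hidden state is a per-token representation. One common way to collapse it into a single sentence vector is mean pooling over the non-padding tokens; this is only an illustrative option, not something prescribed by the original card:

>>> # mean-pool the token embeddings, masking out padding positions
>>> mask = encoded_input['attention_mask'].unsqueeze(-1)
>>> sentence_embedding = (output.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)
>>> print(sentence_embedding.shape)
torch.Size([1, 768])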

Limitations and bias

At the time of submission, no measures have been taken to estimate the bias and toxicity embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated. Nevertheless, here's an example of how the model can have biased predictions:

>>> from transformers import pipeline, set_seed
>>> from pprint import pprint
>>> unmasker = pipeline('fill-mask', model='PeterPanecillo/PlanTL-GOB-ES-roberta-base-bne-copy')
>>> set_seed(42)
>>> pprint(unmasker("Antonio está pensando en <mask>."))
[{'score': 0.07950365543365479,
  'sequence': 'Antonio está pensando en ti.',
  'token': 486,
  'token_str': ' ti'},
 {'score': 0.03375273942947388,
  'sequence': 'Antonio está pensando en irse.',
  'token': 13134,
  'token_str': ' irse'},
 {'score': 0.031026942655444145,
  'sequence': 'Antonio está pensando en casarse.',
  'token': 24852,
  'token_str': ' casarse'},
 {'score': 0.030703715980052948,
  'sequence': 'Antonio está pensando en todo.',
  'token': 665,
  'token_str': ' todo'},
 {'score': 0.02838558703660965,
  'sequence': 'Antonio está pensando en ello.',
  'token': 1577,
  'token_str': ' ello'}]

>>> set_seed(42)
>>> pprint(unmasker("Mohammed está pensando en <mask>."))
[{'score': 0.05433618649840355,
  'sequence': 'Mohammed está pensando en morir.',
  'token': 9459,
  'token_str': ' morir'},
 {'score': 0.0400255024433136,
  'sequence': 'Mohammed está pensando en irse.',
  'token': 13134,
  'token_str': ' irse'},
 {'score': 0.03705748915672302,
  'sequence': 'Mohammed está pensando en todo.',
  'token': 665,
  'token_str': ' todo'},
 {'score': 0.03658654913306236,
  'sequence': 'Mohammed está pensando en quedarse.',
  'token': 9331,
  'token_str': ' quedarse'},
 {'score': 0.03329474478960037,
  'sequence': 'Mohammed está pensando en ello.',
  'token': 1577,
  'token_str': ' ello'}]
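
A quick way to repeat this kind of comparison for other names is to query the pipeline in a loop, as in the short probe below (the name list and output formatting are ours and are only illustrative):

>>> for name in ['Antonio', 'Mohammed', 'María', 'Fátima']:
...     preds = unmasker(f"{name} está pensando en <mask>.", top_k=3)
...     print(name, [p['token_str'].strip() for p in preds])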

Additional information

Author

Text Mining Unit (TeMU) from Barcelona Supercomputing Center (bsc-temu@bsc.es).

Contact information

For further information, send an email to plantl-gob-es@bsc.es.

Copyright

Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA).

Licensing information

This work is licensed under the Apache License, Version 2.0.

Funding

This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.

Citation information

If you use this model, please cite the paper:

@article{maria2022,
   title = {MarIA: Spanish Language Models},
   author = {Asier Gutiérrez Fandiño and Jordi Armengol Estapé and Marc Pàmies and Joan Llop Palao and Joaquin Silveira Ocampo and Casimiro Pio Carrino and Carme Armentano Oller and Carlos Rodriguez Penagos and Aitor Gonzalez Agirre and Marta Villegas},
   doi = {10.26342/2022-68-3},
   issn = {1135-5948},
   journal = {Procesamiento del Lenguaje Natural},
   publisher = {Sociedad Española para el Procesamiento del Lenguaje Natural},
   url = {https://upcommons.upc.edu/handle/2117/367156#.YyMTB4X9A-0.mendeley},
   volume = {68},
   year = {2022},
}