
SOTA Entity Recognition English Foundation Model by NuMind 🔥

This model provides state-of-the-art embeddings for the entity recognition task in English.

Check out other models by NuMind:

About

RoBERTa-base fine-tuned on an artificially annotated subset of C4.

Metrics:

Read more about the evaluation protocol and datasets in our blog post.

Model            F1 macro
RoBERTa-base     0.7129
ours             0.7500
ours + two emb   0.7686

Usage

Embeddings can be used out of the box or fine-tuned on specific datasets.

Get embeddings:

import torch
import transformers


model = transformers.AutoModel.from_pretrained(
    'numind/entity-recognition-general-sota-v1',
    output_hidden_states=True
)
tokenizer = transformers.AutoTokenizer.from_pretrained(
    'numind/entity-recognition-general-sota-v1'
)

text = [
    "NuMind is an AI company based in Paris and USA.",
    "See other models from us on https://huggingface.co/numind"
]
encoded_input = tokenizer(
    text,
    return_tensors='pt',
    padding=True,
    truncation=True
)
with torch.no_grad():
    output = model(**encoded_input)

# For better quality: concatenate the last hidden layer with layer -7,
# doubling the embedding size per token (768 -> 1536 for RoBERTa-base)
emb = torch.cat(
    (output.hidden_states[-1], output.hidden_states[-7]),
    dim=2
)

# For better speed: use only the last hidden layer (768-dim embeddings)
# emb = output.hidden_states[-1]
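To fine-tune on a specific dataset, one common approach is to put a linear token-classification head on top of these embeddings. A minimal sketch, assuming the concatenated 1536-dim embeddings from above and a placeholder label count (the head, label count, and random input here are illustrative, not part of the model):

```python
import torch
import torch.nn as nn

# RoBERTa-base has hidden size 768; concatenating two layers gives 1536.
HIDDEN = 768 * 2
NUM_LABELS = 5  # placeholder: set to your dataset's number of entity labels

# Linear per-token classification head (train it on your labeled data).
head = nn.Linear(HIDDEN, NUM_LABELS)

# `emb` from the snippet above has shape (batch, seq_len, 1536);
# a random tensor stands in for it here.
emb = torch.randn(2, 12, HIDDEN)

logits = head(emb)            # (batch, seq_len, NUM_LABELS)
pred = logits.argmax(dim=-1)  # per-token label ids, shape (batch, seq_len)
print(pred.shape)             # torch.Size([2, 12])
```

For a full fine-tuning run, the head would typically be trained jointly with (or on top of frozen) model outputs using a token-level cross-entropy loss.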