
# RoBERTa-large-Detection-P2G

This model fine-tunes klue/roberta-large on 50,000 sentences from the National Institute of Korean Language (국립국어원) 2021 newspaper corpus, converted with g2pK, so that it can detect G2P-converted (phonetically respelled) text.

GitHub: https://github.com/taemin6697
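The released preprocessing script is not part of this card. The following is a minimal sketch, assuming the `g2pk` package and a hypothetical `make_pairs` helper, of how (sentence, label) training pairs could be generated: originals as class 0, phonetic conversions as class 1.

```python
from g2pk import G2p

g2p = G2p()

def make_pairs(sentences):
    """Yield (text, label) pairs: originals labeled 0, g2pK conversions labeled 1."""
    for sent in sentences:
        yield sent, 0        # original spelling
        yield g2p(sent), 1   # phonetic respelling produced by g2pK

corpus = ["오늘은 날씨가 맑다."]
for text, label in make_pairs(corpus):
    print(label, text)
```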

## Usage

```python
from transformers import AutoTokenizer, RobertaForSequenceClassification
import torch
import numpy as np

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model_dir = "kfkas/RoBERTa-large-Detection-G2P"
tokenizer = AutoTokenizer.from_pretrained("klue/roberta-large")
model = RobertaForSequenceClassification.from_pretrained(model_dir).to(device)

# A G2P-garbled (phonetically respelled) sentence.
text = "월드커 파나은행 대표티메 행우늬 이달러 이영영장 선물"

with torch.no_grad():
    x = tokenizer(text, padding="max_length", truncation=True,
                  max_length=128, return_tensors="pt").to(device)
    # Pass the attention mask as well so padding tokens are ignored.
    logits = model(input_ids=x["input_ids"],
                   attention_mask=x["attention_mask"]).logits
    y = int(np.argmax(logits.detach().cpu().numpy()))
    print(y)
    # 1 -> the input is detected as G2P-converted text
```
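For batched use it can help to return label names and probabilities rather than a bare class id. The sketch below is an assumption, not part of the released model: the `id2label` mapping (0 = original, 1 = g2p-converted) is inferred from the example above, and `detect` is a hypothetical helper built on the `tokenizer`, `model`, and `device` loaded in the usage snippet.

```python
import torch
import torch.nn.functional as F

# Assumed label mapping, inferred from the example above
# (the G2P-garbled input yields class 1).
id2label = {0: "original", 1: "g2p-converted"}

def detect(texts):
    """Classify a batch of sentences; return (label, probability) per input."""
    enc = tokenizer(texts, padding=True, truncation=True,
                    max_length=128, return_tensors="pt").to(device)
    with torch.no_grad():
        probs = F.softmax(model(**enc).logits, dim=-1)
    conf, pred = probs.max(dim=-1)
    return [(id2label[int(p)], round(float(c), 3)) for p, c in zip(pred, conf)]

print(detect(["월드커 파나은행 대표티메 행우늬 이달러 이영영장 선물",
              "오늘은 날씨가 맑다."]))
```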

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

### Training results

### Framework versions