Tags: SequenceClassification · Lepton · 古文 · 文言文 · ancient · classical · letter · 书信标题

<font color="IndianRed"> LEPTON (Classical Chinese Letter Prediction)</font>


Our model <font color="cornflowerblue">LEPTON (Classical Chinese Letter Prediction)</font> is a BertForSequenceClassification-based Classical Chinese model intended to predict whether a Classical Chinese sentence is <font color="IndianRed"> a letter title (书信标题) </font> or not. The model starts from the BERT base Chinese model (MLM), is fine-tuned on a large corpus of Classical Chinese (a 3GB textual dataset), and is then combined with the BertForSequenceClassification architecture to perform a binary classification task.

<font color="IndianRed"> Model description </font>

The BertForSequenceClassification architecture takes the BERT base model and adds a fully-connected linear layer on top of it to perform a binary classification task.
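As a quick sanity check, you can inspect this classification head after loading the checkpoint. The short sketch below is illustrative only; it assumes the cbdb/ClassicalChineseLetterClassification checkpoint used later in this card.

# Minimal sketch: inspect the classification head added on top of BERT
# (assumes the 'cbdb/ClassicalChineseLetterClassification' checkpoint shown below)
from transformers import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained('cbdb/ClassicalChineseLetterClassification')
print(model.config.num_labels)  # expected to be 2: letter vs. not-letter
print(model.classifier)         # the fully-connected linear layer used for classification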

<font color="IndianRed"> Intended uses & limitations </font>

Note that this model is primarily aimed at predicting whether a Classical Chinese sentence is a letter title (书信标题) or not.

<font color="IndianRed"> How to use </font>


Here is how to use this model to classify a given Classical Chinese sentence in PyTorch:

<font color="cornflowerblue"> 1. Import model and packages </font>

# Load the tokenizer and the fine-tuned classification model
from transformers import BertTokenizer, BertForSequenceClassification
import torch
import numpy as np
from numpy import exp

tokenizer = BertTokenizer.from_pretrained('bert-base-chinese')
model = BertForSequenceClassification.from_pretrained('cbdb/ClassicalChineseLetterClassification',
                                                      output_attentions=False,
                                                      output_hidden_states=False)

<font color="cornflowerblue"> 2. Make a prediction </font>

max_seq_len = 512  # maximum input length supported by the BERT architecture

def softmax(vector):
    # Convert raw logits into probabilities that sum to 1
    e = exp(vector)
    return e / e.sum()
 
# Mapping between class names and label indices
label2idx = {'not-letter': 0, 'letter': 1}
idx2label = {v: k for k, v in label2idx.items()}

def predict_class(test_sen):
    # Tokenize the sentence and truncate/pad it to at most max_seq_len tokens
    tokens_test = tokenizer.encode_plus(
        test_sen,
        add_special_tokens=True,
        return_attention_mask=True,
        padding=True,
        max_length=max_seq_len,
        return_tensors='pt',
        truncation=True
    )

    # encode_plus with return_tensors='pt' already returns PyTorch tensors
    test_seq = tokens_test['input_ids']
    test_mask = tokens_test['attention_mask']

    # Get predictions for the test sentence (no gradients needed at inference time)
    with torch.no_grad():
        outputs = model(test_seq, test_mask)
        logits = outputs.logits.cpu().numpy()

    # Convert logits to probabilities and pair them with their class names
    softmax_score = softmax(logits)
    pred_class_dict = {k: v for k, v in zip(label2idx.keys(), softmax_score[0])}
    return pred_class_dict

<font color="cornflowerblue"> 3. Change your sentence here </font>


test_sen = '上丞相康思公書'
pred_class_proba = predict_class(test_sen)
print(f'The predicted probability for the {list(pred_class_proba.keys())[0]} class: {list(pred_class_proba.values())[0]}')
print(f'The predicted probability for the {list(pred_class_proba.keys())[1]} class: {list(pred_class_proba.values())[1]}')

<font color="IndianRed"> Output: </font> The predicted probability for the not-letter class: 0.002029061783105135

<font color="IndianRed"> Output: </font> The predicted probability for the letter class: 0.9979709386825562

pred_class = idx2label[np.argmax(list(pred_class_proba.values()))]
print(f'The predicted class is: {pred_class}')

<font color="IndianRed"> Output: </font> The predicted class is: letter

<font color="IndianRed">Authors </font>

Queenie Luo (queenieluo[at]g.harvard.edu) <br> Katherine Enright <br> Hongsu Wang <br> Peter Bol <br> CBDB Group

<font color="IndianRed">License </font>

Copyright (c) 2023 CBDB

Except where otherwise noted, content on this repository is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0). To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-sa/4.0/ or send a letter to Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.