
German BERT large paraphrase euclidean

This is a sentence-transformers model: it maps sentences and paragraphs (text) into a 1024-dimensional dense vector space. The model is intended to be used together with SetFit to improve German few-shot text classification. It has a sibling model, deutsche-telekom/gbert-large-paraphrase-cosine.
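
The model can be loaded and used like any other sentence-transformers model. The following is a minimal sketch, assuming the model is published under the ID deutsche-telekom/gbert-large-paraphrase-euclidean (inferred from the sibling model's name) and that sentence-transformers is installed:

    from sentence_transformers import SentenceTransformer

    # Model ID assumed from the sibling model's naming scheme.
    model = SentenceTransformer("deutsche-telekom/gbert-large-paraphrase-euclidean")

    sentences = [
        "Das ist ein Beispielsatz.",
        "Dies ist ein weiterer Satz.",
    ]
    embeddings = model.encode(sentences)
    print(embeddings.shape)  # expected: (2, 1024)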

This model is based on deepset/gbert-large. Many thanks to deepset!

Training

Loss Function
We used BatchHardSoftMarginTripletLoss with Euclidean distance as the loss function:

    from sentence_transformers import losses
    from sentence_transformers.losses import BatchHardTripletLossDistanceFunction

    # Note: the enum member is spelled "eucledian_distance" in sentence-transformers.
    train_loss = losses.BatchHardSoftMarginTripletLoss(
        model=model,
        distance_metric=BatchHardTripletLossDistanceFunction.eucledian_distance,
    )
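
For context, here is a minimal sketch of how this loss plugs into a sentence-transformers training run. It assumes the pre-v3 model.fit API, the deepset/gbert-large base model mentioned above, and a few hypothetical paraphrase examples; sentences sharing a label are treated as paraphrases of each other. This is an illustration, not the exact training script.

    from torch.utils.data import DataLoader

    from sentence_transformers import InputExample, SentenceTransformer, losses
    from sentence_transformers.datasets import SentenceLabelDataset
    from sentence_transformers.losses import BatchHardTripletLossDistanceFunction

    model = SentenceTransformer("deepset/gbert-large")

    # Hypothetical paraphrase data: sentences with the same label are paraphrases.
    train_examples = [
        InputExample(texts=["Das Wetter ist heute schön."], label=0),
        InputExample(texts=["Heute ist schönes Wetter."], label=0),
        InputExample(texts=["Der Zug hat Verspätung."], label=1),
        InputExample(texts=["Die Bahn kommt später an."], label=1),
    ]

    # Batch-hard triplet losses need several examples per label in each batch.
    train_dataset = SentenceLabelDataset(train_examples, samples_per_label=2)
    train_dataloader = DataLoader(train_dataset, batch_size=4)

    train_loss = losses.BatchHardSoftMarginTripletLoss(
        model=model,
        distance_metric=BatchHardTripletLossDistanceFunction.eucledian_distance,
    )

    model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1)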

Training Data
The model is trained on a carefully filtered subset of the deutsche-telekom/ger-backtrans-paraphrase dataset. We deleted the following pairs of sentences:

Hyperparameters

Evaluation Results

We use the NLU Few-shot Benchmark - English and German dataset to evaluate this model in a German few-shot scenario.
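
To illustrate the intended few-shot setup, below is a minimal sketch of fine-tuning this model with SetFit. The model ID, labels, and example sentences are assumptions for illustration, and the (now deprecated) SetFitTrainer API is used; this is not the exact benchmark configuration.

    from datasets import Dataset
    from setfit import SetFitModel, SetFitTrainer

    # Model ID assumed from the sibling model's naming scheme.
    model = SetFitModel.from_pretrained("deutsche-telekom/gbert-large-paraphrase-euclidean")

    # Hypothetical few-shot training data: a handful of labeled German sentences.
    train_dataset = Dataset.from_dict(
        {
            "text": [
                "Ich möchte meinen Vertrag kündigen.",
                "Bitte beenden Sie mein Abonnement.",
                "Wie hoch ist meine letzte Rechnung?",
                "Was muss ich diesen Monat bezahlen?",
            ],
            "label": [0, 0, 1, 1],
        }
    )

    trainer = SetFitTrainer(model=model, train_dataset=train_dataset, num_iterations=20)
    trainer.train()

    predictions = model.predict(["Ich will kündigen."])
    print(predictions)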

Qualitative Results

Licensing

Copyright (c) 2023 Philip May, Deutsche Telekom AG
Copyright (c) 2022 deepset GmbH

Licensed under the MIT License (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License by reviewing the file LICENSE in the repository.