Tags: feature-extraction, sentence-similarity, transformers

CoT-MAE MS-Marco Passage Retriever

CoT-MAE is a transformer-based masked auto-encoder pretraining architecture designed for dense passage retrieval. CoT-MAE MS-Marco Passage Retriever is a retriever trained with the Tevatron toolkit on BM25 hard negatives and MS-Marco hard negatives mined by a CoT-MAE retriever. Specifically, we first trained a stage-one retriever using BM25 hard negatives, used that retriever to mine additional hard negatives, and then trained a stage-two retriever on both the BM25 hard negatives and the stage-one mined hard negatives. This release is the stage-two retriever.
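
To illustrate the mining step in that pipeline, the sketch below retrieves the top-k passages per query with stage-one embeddings and keeps the non-relevant ones as hard negatives. It is a minimal sketch, not our actual mining code: random vectors stand in for real CoT-MAE embeddings, and `qrels` is a hypothetical relevance mapping.

```python
# Hypothetical sketch of stage-one hard-negative mining: retrieve top-k
# passages for each query, then keep the non-relevant ones as negatives.
import numpy as np

rng = np.random.default_rng(0)
query_embs = rng.normal(size=(3, 768))      # stand-in stage-one query embeddings
passage_embs = rng.normal(size=(100, 768))  # stand-in stage-one passage embeddings
qrels = {0: {5}, 1: {42}, 2: {7, 8}}        # hypothetical relevant passage ids

k = 10
scores = query_embs @ passage_embs.T        # dot-product retrieval scores
topk = np.argsort(-scores, axis=1)[:, :k]   # top-k passage indices per query

# Top-ranked but non-relevant passages become hard negatives.
hard_negatives = {
    qid: [int(pid) for pid in ranked if int(pid) not in qrels[qid]]
    for qid, ranked in enumerate(topk)
}
# These mined negatives are combined with BM25 negatives to train stage two.
```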

Details can be found in our paper and code.
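
For quick experimentation, here is a minimal sketch of encoding queries and passages with the released checkpoint via Hugging Face transformers. The Hub model ID and the use of the [CLS] vector as the dense representation are assumptions, not specifications from this card.

```python
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_ID = "caskcsg/cotmae_base_msmarco_retriever"  # assumed Hub model ID

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)
model.eval()

def encode(texts):
    # Tokenize a batch of texts and take the [CLS] hidden state as the
    # dense representation (a common choice for dense retrievers).
    batch = tokenizer(texts, padding=True, truncation=True,
                      max_length=128, return_tensors="pt")
    with torch.no_grad():
        out = model(**batch)
    return out.last_hidden_state[:, 0]

query_emb = encode(["what is dense passage retrieval"])
passage_embs = encode([
    "Dense passage retrieval encodes queries and passages into vectors.",
    "The weather in Paris is mild in spring.",
])

# Dot-product similarity ranks passages for the query.
print(query_emb @ passage_embs.T)
```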

Paper: ConTextual Mask Auto-Encoder for Dense Passage Retrieval.

Code: caskcsg/ir/cotmae

Scores

MS-Marco Passage full-ranking

| MRR@10   | Recall@1 | Recall@50 | Recall@1k | Queries ranked |
|----------|----------|-----------|-----------|----------------|
| 0.394431 | 0.265903 | 0.870344  | 0.986676  | 6980           |
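
For reference, MRR@10 is the mean over all ranked queries of the reciprocal rank of the first relevant passage within the top 10 results. A minimal sketch of the computation, using hypothetical ranking and relevance data:

```python
# Hypothetical sketch of computing MRR@10 from per-query rankings.
# `rankings` maps a query ID to its ranked list of retrieved passage IDs;
# `qrels` maps a query ID to its set of relevant passage IDs.
def mrr_at_10(rankings, qrels):
    total = 0.0
    for qid, ranked in rankings.items():
        for rank, pid in enumerate(ranked[:10], start=1):
            if pid in qrels.get(qid, set()):
                total += 1.0 / rank
                break  # only the first relevant hit counts
    return total / len(rankings)

# Toy example: first relevant hit at rank 2 for q1 and rank 1 for q2.
rankings = {"q1": ["p9", "p3", "p7"], "q2": ["p4", "p1"]}
qrels = {"q1": {"p3"}, "q2": {"p4"}}
print(mrr_at_10(rankings, qrels))  # (1/2 + 1/1) / 2 = 0.75
```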

Citations

If you find our work useful, please cite our paper.

@misc{wu2022contextual,
  doi = {10.48550/ARXIV.2208.07670},
  url = {https://arxiv.org/abs/2208.07670},
  author = {Wu, Xing and Ma, Guangyuan and Lin, Meng and Lin, Zijia and Wang, Zhongyuan and Hu, Songlin},
  keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences},
  title = {ConTextual Mask Auto-Encoder for Dense Passage Retrieval},
  publisher = {arXiv},
  year = {2022},
  copyright = {arXiv.org perpetual, non-exclusive license}
}