Authors:

(1) Liang Wang, Microsoft Corporation, and correspondence to (wangliang@microsoft.com);
(2) Nan Yang, Microsoft Corporation, and correspondence to (nanya@microsoft.com);
(3) Xiaolong Huang, Microsoft Corporation;
(4) Linjun Yang, Microsoft Corporation;
(5) Rangan Majumder, Microsoft Corporation;
(6) Furu Wei, Microsoft Corporation, and correspondence to (fuwei@microsoft.com).

Table of Links

Abstract and 1 Introduction
2 Related Work
3 Method
3.1 Synthetic Data Generation
3.2 Training
4 Experiments
4.1 Statistics of the Synthetic Data
4.2 Model Fine-tuning and Evaluation
4.3 Main Results
4.4 Multilingual Retrieval
5 Analysis
5.1 Is Contrastive Pre-training Necessary?
5.2 Extending to Long Text Embeddings and 5.3 Analysis of Training Hyperparameters
6 Conclusion and References
A Implementation Details
B Test Set Contamination Analysis
C Prompts for Synthetic Data Generation
D Instructions for Training and Evaluation

B Test Set Contamination Analysis

To assess test set contamination across all datasets in the MTEB benchmark, we perform a string-match-based analysis between the test sets and our training set, disregarding differences in character case and spacing (a sketch of this check follows the list below). We categorize the train-test overlaps into three types:

• Low-entropy texts. These are texts such as "i need a coffee" and "what does that mean", which we do not consider contamination because they are common expressions that can occur in many different contexts.

• Question overlap. We identify 4 test set questions in the DBPedia dataset that also appear in the TriviaQA training set. Since they constitute a minor portion of the test set, their impact on the overall results is negligible.

• Retrieval corpus overlap. Several retrieval datasets share the same retrieval corpus. For instance, the DBPedia, NQ, and TriviaQA datasets all use Wikipedia passages, even though their query sets differ. This is standard evaluation practice in information retrieval, and we do not regard it as contamination.
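The check described above amounts to normalizing each text and looking for exact matches. The paper does not release its matching script, so the following Python sketch is only an illustrative reconstruction of a case- and spacing-insensitive string match; the function names are our own.

```python
import re

def normalize(text: str) -> str:
    # Lowercase and collapse runs of whitespace so that case and
    # spacing differences do not affect the comparison.
    return re.sub(r"\s+", " ", text.lower()).strip()

def find_overlaps(train_texts, test_texts):
    # Exact string match after normalization: return the test texts
    # that also occur in the training set.
    train_set = {normalize(t) for t in train_texts}
    return [t for t in test_texts if normalize(t) in train_set]

# Toy example: the low-entropy text matches despite case/spacing noise.
train = ["i need a coffee", "what does that mean"]
test = ["I  need a  coffee", "Who wrote War and Peace?"]
print(find_overlaps(train, test))  # ['I  need a  coffee']
```

Texts flagged by such a check would then be inspected and assigned to one of the three categories above.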
In summary, we did not detect substantial contamination risks that could alter the main findings of this paper.

Another aspect to consider is possible test set contamination in the training data of Mistral-7B and GPT-4. However, since the training data of these models is not publicly accessible, it is challenging to estimate the degree of such contamination. Given their widespread use in the research community, we believe the comparison remains valid as long as other works also employ these models.

This paper is available on arXiv under a CC0 1.0 DEED license.