Model: GPT-2

Model name: Prebid_Module_GPT2

Model description: This is a fine-tuned version of GPT-2 trained on the installed Prebid modules of 1,100+ publisher domains. The model provides insight into which Prebid modules other publishers install as part of their Prebid setups. Given a Prebid module, such as appnexusBidAdapter, the model generates a sample combination of installed Prebid modules based on the collected data, helping publishers understand how different publishers use Prebid modules.

Intended uses: This model is intended to assist publishers in understanding and exploring how other publishers use Prebid modules. It serves as a reference to gain insights into common configurations, best practices, and different approaches used by publishers across various domains.

Limitations: The generated module combinations are based on the training data and may not cover all possible combinations or reflect the specific requirements of a particular domain. Publishers should carefully review the generated combinations and adapt them to their own needs and business rules.

How to use: Provide a Prebid module name, such as gptPreAuction, as the input. The model will continue from that point, generating a sample combination of installed Prebid modules related to the input based on the collected data. To generate a combination from the beginning, use [ as the input.
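
A minimal usage sketch with the Hugging Face transformers library is given below. The repository id Prebid_Module_GPT2, the sampling parameters, and the decoding settings are assumptions for illustration; adjust them to the actual model location and your needs.

```python
# Minimal usage sketch (assumes the model is available under a repo id such as
# "Prebid_Module_GPT2"; adjust to the actual model location).
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model_id = "Prebid_Module_GPT2"  # assumed repo id
tokenizer = GPT2TokenizerFast.from_pretrained(model_id)
model = GPT2LMHeadModel.from_pretrained(model_id)

# Prompt with a single Prebid module name; use "[" instead to generate a
# combination from the beginning, as described above.
prompt = "gptPreAuction"
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,        # sample to get varied module combinations
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```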

Training data: This model was trained on a dataset of the installed Prebid modules of 1,100+ publisher domains. The dataset was collected from a variety of publishers and represents a wide range of Prebid setups used in the industry.

Training procedure: The model was fine-tuned using the GPT-2 base model with the aforementioned dataset.
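
For illustration, a minimal fine-tuning sketch using the Hugging Face Trainer is shown below. The file name prebid_modules.txt, the one-module-list-per-line format, and the hyperparameters are assumptions, not the actual training script or settings used for this model.

```python
# Minimal fine-tuning sketch, assuming one serialized module list per domain per
# line in "prebid_modules.txt" (hypothetical file); hyperparameters are illustrative.
from datasets import load_dataset
from transformers import (
    DataCollatorForLanguageModeling,
    GPT2LMHeadModel,
    GPT2TokenizerFast,
    Trainer,
    TrainingArguments,
)

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Assumed format: each line is a bracket-delimited list of installed modules,
# e.g. ["gptPreAuction", "appnexusBidAdapter", ...]
dataset = load_dataset("text", data_files={"train": "prebid_modules.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="prebid-module-gpt2",  # hypothetical output path
        num_train_epochs=3,               # illustrative hyperparameters
        per_device_train_batch_size=4,
        learning_rate=5e-5,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```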

Evaluation results: Evaluation focused on the model's ability to generate coherent and valid combinations of installed Prebid modules from a given input module. Human evaluators reviewed the generated combinations for relevance and accuracy.

Safety and bias considerations: The model is trained on data from actual Prebid config files and aims to provide accurate insight into publishers' configurations. Because the training data reflects real-world configurations, any biases present in that data may be reproduced in the model's output. Users should review and validate the generated configurations to ensure they align with their specific requirements and guidelines.

Users are encouraged to exercise caution and apply their own expertise when interpreting and adapting the generated Prebid module combinations. The model should be seen as a helpful tool for gaining inspiration and an understanding of common Prebid setups, not as a substitute for thorough testing and manual review of the final configuration.