Model Card for DistilGutenMystery

A fine-tuned version of DistilGPT2 trained on a corpus of 20 mystery/detective novels collected from Project Gutenberg.

Table of Contents

- Model Details
- Uses
- Bias, Risks, and Limitations
- Training Details
- Evaluation
- Model Card Authors
- Model Card Contact
- How to Get Started with the Model

Model Details

Model Description

DistilGutenMystery is DistilGPT2 fine-tuned on a corpus of 20 mystery/detective novels (1,048,519 tokens in total) collected from Project Gutenberg. It is intended as a text-generation aid for brainstorming and drafting mystery-style fiction.

Uses


Direct Use


Aiding story writing and brainstorming for mystery/detective novels. It can also be used to generate deliberately nonsensical or absurd text.

Downstream Use

More information needed

Out-of-Scope Use


This model does not distinguish fact from fiction; it is therefore not intended for use cases that require the generated text to be factually accurate.

Bias, Risks, and Limitations


Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. Because the training corpus consists of older public-domain novels, generated text may also contain outdated language that reflects the biases of its period. If the model is ever deployed, further bias-related fine-tuning and testing is strongly recommended.

Recommendations

Before any deployment, perform additional bias-focused fine-tuning and evaluation, and review generated text for outdated or harmful language.

Training Details

Training Data


The corpus was created from 20 mystery and detective books collected from Project Gutenberg (gutenberg.org/, retrieved 2/20/23) for the purpose of aiding story writing for mystery/detective novels. The corpus contains 1,048,519 tokens in total, drawn from the following 20 books:

- The Extraordinary Adventures of Arsène Lupin, Gentleman-Burglar by Maurice Leblanc: 55,726 tokens
- The Crimson Cryptogram: A Detective Story by Fergus Hume: 60,179 tokens
- The House of a Thousand Candles by Meredith Nicholson: 83,133 tokens
- Tracked by Wireless by William Le Queux: 76,236 tokens
- Behind the Green Door by Mildred A. Wirt: 43,705 tokens
- The House on the Cliff by Franklin W. Dixon: 41,721 tokens
- Tales of Secret Egypt by Sax Rohmer: 76,892 tokens
- The Haunted Bookshop by Christopher Morley: 63,269 tokens
- Whispering Walls by Mildred A. Wirt: 42,388 tokens
- The Clock Struck One by Fergus Hume: 61,614 tokens
- McAllister and His Double by Arthur Cheney Train: 65,583 tokens
- The Three Eyes by Maurice Leblanc: 62,887 tokens
- Ghost Beyond the Gate by Mildred A. Wirt: 41,172 tokens
- The Motor Rangers Through the Sierras by John Henry Goldfrap: 49,285 tokens
- Peggy Finds the Theatre by Virginia Hughes: 41,575 tokens
- The Puzzle in the Pond by Margaret Sutton: 36,485 tokens
- Jack the Runaway; or, On the Road with a Circus by Frank V. Webster: 42,814 tokens
- The Camp Fire Girls Solve a Mystery; Or, The Christmas Adventure at Carver House: 50,286 tokens
- Danger at the Drawbridge by Mildred A. Wirt: 42,075 tokens
- Voice from the Cave by Mildred A. Wirt: 39,064 tokens

Training Procedure


Preprocessing

Each story was downloaded from Project Gutenberg; the Gutenberg-specific boilerplate text and chapter headings were removed from each document. The stories were then combined into a single text file, loaded as a dataset, and sampled by paragraph.

Training hyperparameters: `num_train_epochs=30`, `per_device_train_batch_size=32`; all other trainer values were left at their defaults. Additionally, the tokenizer was set with `padding_side='left'`, the model's `pad_token_id` was set to `tokenizer.eos_token_id`, and `num_labels=0`.
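The exact preprocessing script is not published; a minimal sketch of the steps described above (stripping the Project Gutenberg boilerplate that surrounds each book, then splitting the remaining text into paragraph-sized samples) might look like this. The marker regexes and function names are illustrative assumptions, not the original code:

```python
import re

# Project Gutenberg texts wrap the actual book between START and END markers;
# everything outside them is licensing/header boilerplate.
START_RE = re.compile(r"\*\*\* START OF (?:THE|THIS) PROJECT GUTENBERG EBOOK.*\*\*\*")
END_RE = re.compile(r"\*\*\* END OF (?:THE|THIS) PROJECT GUTENBERG EBOOK.*\*\*\*")


def strip_gutenberg_boilerplate(text: str) -> str:
    """Keep only the text between the START and END markers."""
    start = START_RE.search(text)
    end = END_RE.search(text)
    begin = start.end() if start else 0
    stop = end.start() if end else len(text)
    return text[begin:stop].strip()


def split_paragraphs(text: str) -> list[str]:
    """Split on blank lines and drop whitespace-only chunks."""
    return [p.strip() for p in re.split(r"\n\s*\n", text) if p.strip()]
```

The paragraph chunks produced this way would then be concatenated across all 20 books and tokenized for fine-tuning.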

Evaluation


Testing Data, Factors & Metrics

Testing Data


More information needed

Factors


More information needed

Metrics


The fine-tuned model was evaluated using the sacrebleu metric.

Results

- score: 0.2458566059729917
- counts: [56008, 5821, 552, 181]
- totals: [1014368, 985984, 957908, 930569]
- precisions: [5.52146755418152, 0.5903746916785668, 0.057625575733786544, 0.019450465252979627]
- bp: 1.0
- sys_len: 1014368
- ref_len: 212162
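As a sanity check, the reported score is consistent with the standard BLEU combination: the brevity penalty times the geometric mean of the four n-gram precisions. This can be verified from the numbers above with a few lines of Python:

```python
import math

# n-gram precisions and brevity penalty as reported in Results above.
precisions = [5.52146755418152, 0.5903746916785668,
              0.057625575733786544, 0.019450465252979627]
bp = 1.0

# BLEU = bp * exp(mean(log(p_n))) over the four n-gram precisions.
score = bp * math.exp(sum(math.log(p) for p in precisions) / len(precisions))
print(score)  # ~0.24586, matching the reported score
```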

Model Card Authors [optional]


Hugging Face, Jack Quigley

Model Card Contact

More information needed

How to Get Started with the Model

Use the code below to get started with the model.

<details> <summary> Click to expand </summary>

```python
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("jquigl/DistilGutenMystery")
model = AutoModelForCausalLM.from_pretrained("jquigl/DistilGutenMystery")

# Generate three continuations of a mystery-style prompt.
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
gen = generator(
    "It was a strange ending to a",
    min_length=100,
    max_length=150,
    num_return_sequences=3,
)
```

</details>