# core-350
| Task | Version | Metric | Value |   | Stderr |
|---|---:|---|---:|---|---:|
| arc_challenge | 0 | acc | 0.2048 | ± | 0.0118 |
|  |  | acc_norm | 0.2509 | ± | 0.0127 |
| arc_easy | 0 | acc | 0.4247 | ± | 0.0101 |
|  |  | acc_norm | 0.3965 | ± | 0.0100 |
| boolq | 1 | acc | 0.5468 | ± | 0.0087 |
| hellaswag | 0 | acc | 0.2844 | ± | 0.0045 |
|  |  | acc_norm | 0.3031 | ± | 0.0046 |
| openbookqa | 0 | acc | 0.1560 | ± | 0.0162 |
|  |  | acc_norm | 0.2660 | ± | 0.0198 |
| piqa | 0 | acc | 0.5854 | ± | 0.0115 |
|  |  | acc_norm | 0.5762 | ± | 0.0115 |
| winogrande | 0 | acc | 0.4909 | ± | 0.0141 |
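
The table above follows the output format of EleutherAI's lm-evaluation-harness. A minimal reproduction sketch, assuming an older (pre-0.4) harness release where `hf-causal` is a registered model type (the exact harness version and invocation used for this card are not recorded):

```python
# Reproduction sketch only: assumes an older (pre-0.4) release of
# EleutherAI's lm-evaluation-harness; the exact version used for the
# table above is not recorded in this card.
import json

from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf-causal",
    model_args="pretrained=./core-350",  # local checkpoint path from this card
    tasks=[
        "arc_challenge", "arc_easy", "boolq", "hellaswag",
        "openbookqa", "piqa", "winogrande",
    ],
    num_fewshot=0,  # the table appears to report zero-shot metrics
)

# results["results"] holds acc / acc_norm values and stderrs per task
print(json.dumps(results["results"], indent=2))
```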
This model is a fine-tuned version of ./core-350 on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.8128
- Accuracy: 0.8237
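
If the reported loss is the usual mean token-level cross-entropy from the `transformers` `Trainer`, it corresponds to a perplexity of exp(0.8128) ≈ 2.25 (an inference from the number, not something stated in the training logs).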
 
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 10.0
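
These values map directly onto Hugging Face `TrainingArguments` (note that 2 per-device samples × 32 accumulation steps gives the total train batch size of 64). A minimal sketch, assuming the standard `transformers` `Trainer` API; the output directory is illustrative and not taken from the original run:

```python
# Sketch only: reconstructs the listed hyperparameters as TrainingArguments.
# output_dir is a hypothetical placeholder, not the original path.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./core-350-finetuned",  # hypothetical
    learning_rate=1e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=32,     # 2 * 32 = 64 effective batch size
    lr_scheduler_type="constant",
    num_train_epochs=10.0,
    adam_beta1=0.9,                     # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```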
 
### Training results

### Framework versions
- Transformers 4.35.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1