Regression Model for Attention Functioning Levels (ICF b140)

Description

A fine-tuned regression model that assigns a functioning level to Dutch sentences describing attention functions. The model is based on a pre-trained Dutch medical language model (link to be added): a RoBERTa model trained from scratch on clinical notes from the Amsterdam UMC. To detect sentences about attention functions in Dutch clinical text, use the icf-domains classification model first; a pipeline sketch follows.
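
A minimal two-step pipeline might first filter sentences with the icf-domains model and then score the attention sentences with this model. This is a sketch, not an official recipe: it assumes icf-domains loads as a Simple Transformers MultiLabelClassificationModel, and the position of the attention label (ATT_LABEL_INDEX below) is a hypothetical value that must be verified against that model's card.

import numpy as np
from simpletransformers.classification import (
    ClassificationModel,
    MultiLabelClassificationModel,
)

# Hypothetical index of the attention (b140) label; verify against the
# icf-domains model card before use.
ATT_LABEL_INDEX = 1

domains_model = MultiLabelClassificationModel('roberta', 'CLTL/icf-domains', use_cuda=False)
levels_model = ClassificationModel('roberta', 'CLTL/icf-levels-att', use_cuda=False)

# Illustrative input sentences, not taken from the training data
sentences = [
    'Snel afgeleid, moeite aandacht te behouden.',  # about attention
    'Patient loopt zelfstandig.',                   # not about attention
]

domain_preds, _ = domains_model.predict(sentences)

# Keep only the sentences flagged as describing attention functions
att_sentences = [s for s, p in zip(sentences, domain_preds) if p[ATT_LABEL_INDEX] == 1]

if att_sentences:
    _, raw_outputs = levels_model.predict(att_sentences)
    levels = np.squeeze(raw_outputs)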

Functioning levels

Level  Meaning
4      No problem with concentrating / directing / holding / dividing attention.
3      Slight problem with concentrating / directing / holding / dividing attention for a longer period of time or for complex tasks.
2      Can concentrate / direct / hold / divide attention only for a short time.
1      Can barely concentrate / direct / hold / divide attention.
0      Unable to concentrate / direct / hold / divide attention.

The predictions generated by the model may sometimes fall outside the 0-4 scale (e.g. 4.2); this is normal for a regression model.
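
If a discrete level is needed downstream, a simple post-processing step is to clip the prediction to the 0-4 range and round to the nearest level. A minimal sketch (the level descriptions follow the table above):

import numpy as np

LEVEL_DESCRIPTIONS = {
    4: 'No problem with concentrating / directing / holding / dividing attention.',
    3: 'Slight problem with concentrating / directing / holding / dividing attention for a longer period of time or for complex tasks.',
    2: 'Can concentrate / direct / hold / divide attention only for a short time.',
    1: 'Can barely concentrate / direct / hold / divide attention.',
    0: 'Unable to concentrate / direct / hold / divide attention.',
}

def to_level(prediction):
    # Clip out-of-scale outputs (e.g. 4.2) into [0, 4], then round
    # to the nearest integer level.
    return int(round(float(np.clip(prediction, 0, 4))))

print(to_level(4.2), LEVEL_DESCRIPTIONS[to_level(4.2)])    # 4 ...
print(to_level(2.89), LEVEL_DESCRIPTIONS[to_level(2.89)])  # 3 ...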

Intended uses and limitations

How to use

To generate predictions with the model, use the Simple Transformers library:

import numpy as np
from simpletransformers.classification import ClassificationModel

# Load the fine-tuned regression model from the Hugging Face Hub
model = ClassificationModel(
    'roberta',
    'CLTL/icf-levels-att',
    use_cuda=False,
)

# "Easily distracted, difficulty sustaining attention."
example = 'Snel afgeleid, moeite aandacht te behouden.'
_, raw_outputs = model.predict([example])

# raw_outputs has shape (1, 1); squeeze it to a scalar prediction
predictions = np.squeeze(raw_outputs)

The prediction on the example is:

2.89

The raw outputs look like this:

[[2.89226103]]

Training data

Training procedure

The default training parameters of Simple Transformers were used, including:

- optimizer: AdamW
- learning rate: 4e-5
- number of training epochs: 1
- training batch size: 8
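
For reference, a regression model of this kind can be set up and fine-tuned with Simple Transformers roughly as follows. This is a sketch, not the authors' training script: the base-model path and the one-row DataFrame are placeholders (the clinical training data is not public), and Simple Transformers expects 'text' and float 'labels' columns for regression.

import pandas as pd
from simpletransformers.classification import ClassificationModel

# Placeholder training data; the real training used annotated clinical sentences.
train_df = pd.DataFrame({
    'text': ['Snel afgeleid, moeite aandacht te behouden.'],
    'labels': [3.0],
})

model = ClassificationModel(
    'roberta',
    'path/to/dutch-medical-roberta',  # hypothetical path to the pre-trained base model
    num_labels=1,
    args={'regression': True},  # regression head: predict a continuous level
    use_cuda=False,
)

model.train_model(train_df)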

Evaluation results

The evaluation is done at sentence level (the classification unit) and at note level (the aggregated unit, which is meaningful for healthcare professionals).

                         Sentence-level  Note-level
mean absolute error      0.99            1.03
mean squared error       1.35            1.47
root mean squared error  1.16            1.21
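
These metrics can be computed with scikit-learn; a small sketch with illustrative values (not the actual evaluation data):

import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error

# Illustrative gold labels and predictions only
y_true = np.array([4.0, 2.0, 3.0, 1.0])
y_pred = np.array([3.4, 2.9, 2.1, 0.6])

mae = mean_absolute_error(y_true, y_pred)
mse = mean_squared_error(y_true, y_pred)
rmse = np.sqrt(mse)  # root mean squared error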

Authors and references

Authors

Jenia Kim, Piek Vossen

References

TBD