Adversarial Examples for Improving the Robustness of Eye-State Classification 👁 👁

First Aim:

This project aims to improve the robustness of a model by adding adversarial examples to the training dataset. We observed that the models' accuracy on clean test data always remained higher than their accuracy under attack, even after the perturbed data were added to the training set.
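A minimal sketch of this augmentation step (Python/Keras assumed; `fgsm_examples` is a hypothetical helper, sketched under Fast Gradient Sign Method below):

```python
import numpy as np

def adversarial_training_set(model, x_train, y_train, epsilon=0.03):
    # Generate one FGSM-perturbed copy of each training image and append
    # it to the clean data; the labels are reused unchanged.
    x_adv = fgsm_examples(model, x_train, y_train, epsilon)
    return (np.concatenate([x_train, x_adv]),
            np.concatenate([y_train, y_train]))
```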

Second Aim:

Using adversarial examples, the project aims to improve the robustness and accuracy of a machine learning model that detects eye states, making it resilient both to small perturbations of an image and to the misclassifications caused by natural transformations.

Methodologies

The approach for the first aim.

The approach for the second aim.

Neural Network Models

Wide Residual Network
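The building block of a wide residual network is a pre-activation residual block (BN-ReLU-Conv twice) whose channel counts are multiplied by a widening factor k. A minimal Keras sketch (illustrative only; the project's implementation is in models/wideresnet/wresnet.py):

```python
from tensorflow.keras import layers

def wide_residual_block(x, filters, stride=1):
    # `filters` already includes the widening factor k (e.g. 16 * k).
    shortcut = x
    x = layers.BatchNormalization()(x)
    x = layers.Activation("relu")(x)
    x = layers.Conv2D(filters, 3, strides=stride, padding="same")(x)
    x = layers.BatchNormalization()(x)
    x = layers.Activation("relu")(x)
    x = layers.Conv2D(filters, 3, padding="same")(x)
    if stride != 1 or shortcut.shape[-1] != filters:
        # 1x1 convolution so the shortcut matches the block output's shape.
        shortcut = layers.Conv2D(filters, 1, strides=stride, padding="same")(shortcut)
    return layers.add([x, shortcut])
```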

Parseval Network
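Parseval networks keep each layer's weight matrix close to a Parseval tight frame (rows approximately orthonormal), which bounds the layer's Lipschitz constant and thus limits how much a small input perturbation can be amplified. After every optimizer step, the weights are pulled back with the retraction from Cisse et al. [1]. A sketch (illustrative; the project's version lives in models/Parseval_Networks/constraint.py):

```python
import tensorflow as tf

def parseval_retraction(kernel, beta=0.0003):
    # One retraction step: W <- (1 + beta) * W - beta * W @ W^T @ W,
    # applied to the 2-D unfolding of a (possibly convolutional) kernel.
    shape = kernel.shape
    w = tf.reshape(kernel, (-1, shape[-1]))
    w = (1 + beta) * w - beta * tf.matmul(w, tf.matmul(w, w, transpose_a=True))
    return tf.reshape(w, shape)
```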

Convolutional Neural Network
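As a plain baseline, a small CNN for the binary eye-state task (open vs. closed) could look like the following sketch; the input size and layer widths here are assumptions, not the project's exact configuration:

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_cnn(input_shape=(24, 24, 1)):
    # Small illustrative CNN; 24x24 grayscale eye crops are an assumption.
    return keras.Sequential([
        keras.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(2, activation="softmax"),  # open / closed
    ])
```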

Adversarial Examples

Fast Gradient Sign Method
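FGSM (Goodfellow et al.) perturbs an input in the direction that maximally increases the loss: x_adv = x + epsilon * sign(grad_x J(theta, x, y)). A minimal TensorFlow sketch, assuming a Keras classifier with inputs scaled to [0, 1] and one-hot labels:

```python
import tensorflow as tf

def fgsm_examples(model, x, y, epsilon=0.03):
    # x_adv = x + epsilon * sign(grad_x J(theta, x, y))
    x = tf.convert_to_tensor(x, dtype=tf.float32)
    y = tf.convert_to_tensor(y, dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = tf.keras.losses.categorical_crossentropy(y, model(x, training=False))
    grad = tape.gradient(loss, x)
    return tf.clip_by_value(x + epsilon * tf.sign(grad), 0.0, 1.0).numpy()
```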

Examples

Evaluation
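Robustness is then read off as the gap between clean and adversarial test accuracy, e.g. (using the hypothetical `fgsm_examples` sketched above):

```python
clean_acc = model.evaluate(x_test, y_test, verbose=0)[1]
x_test_adv = fgsm_examples(model, x_test, y_test, epsilon=0.03)
adv_acc = model.evaluate(x_test_adv, y_test, verbose=0)[1]
print(f"clean: {clean_acc:.3f}  adversarial: {adv_acc:.3f}")
```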

Development

Models:


adversarial_examples_parseval_net/models
├── FullyConectedModels
│   ├── model.py
│   └── parseval.py
├── Parseval_Networks
│   ├── constraint.py
│   ├── convexity_constraint.py
│   └── parsevalnet.py
├── _utility.py
└── wideresnet
    └── wresnet.py


Final Results:

References

[1] Cisse, Bojanowski, Grave, Dauphin, and Usunier. Parseval Networks: Improving Robustness to Adversarial Examples. ICML, 2017.

[2] Zagoruyko and Komodakis. Wide Residual Networks. BMVC, 2016.


@misc{ParsevalNetworks,
  author = "Moustapha Cisse and Piotr Bojanowski and Edouard Grave and Yann Dauphin and Nicolas Usunier",
  title  = "Parseval Networks: Improving Robustness to Adversarial Examples",
  year   = "2017"
}

@misc{WideResidualNetworks,
  author = "Sergey Zagoruyko and Nikos Komodakis",
  title  = "Wide Residual Networks",
  year   = "2016"
}

Author

Sefika Efeoglu

Research Project, Data Science MSc, University of Potsdam