Educational resource: Safety Verification of Deep Neural Networks

Lecture / presentation - Creation date: 19-10-2017
Author(s): Marta Kwiatkowska

Presentation: Safety Verification of Deep Neural Networks

Practical information about this resource

Language: English
Educational type: lecture / presentation
Level: master's, doctorate
Running time: 58 minutes 26 seconds
Content: moving image
Document: video/mp4
Size: 261.46 MB
Rights: royalty-free, free of charge
Rights reserved to the publisher and authors.

Description of the educational resource

Description (abstract)

Deep neural networks have achieved impressive experimental results in image classification, but can, surprisingly, be unstable with respect to adversarial perturbations: minimal changes to the input image that cause the network to misclassify it. With potential applications including perception modules and end-to-end controllers for self-driving cars, this raises concerns about their safety. This lecture describes progress on developing a novel automated verification framework for deep neural networks that ensures the safety of their classification decisions with respect to image manipulations, for example scratches or changes to camera angle or lighting conditions, that should not affect the classification. The techniques work directly with the network code and, in contrast to existing methods, can offer guarantees that adversarial examples are found if they exist. We implement the techniques using Z3 and evaluate them on state-of-the-art networks, including regularised and deep learning networks. We also compare against existing techniques for searching for adversarial examples.

"Domaine(s)" et indice(s) Dewey

  • Artificial intelligence, neural networks, cellular automata, artificial life (006.3)
  • Computer pattern recognition (006.4)
  • Computer vision (006.37)


Speakers, publishing and distribution

Speakers

Content provider(s): INRIA (Institut national de recherche en informatique et automatique), CNRS - Centre National de la Recherche Scientifique, UNS

Distribution


AUTHOR(S)

  • Marta Kwiatkowska

LEARN MORE

  • Record identifier
    25215
  • Identifier
    oai:canal-u.fr:25215
  • Metadata schema
  • Source repository
    Canal-u.fr