Talk: Attacking Text
An Introduction to Adversarial Attacks in NLP
Adversarial examples have been making headlines in the computer vision community for a few years now, but seemed to have little impact on natural language processing until very recently. Small changes to an image, mostly invisible to the human eye, can fool a neural network into classifying a turtle as a rifle, or a stop sign as a green light. Of course, a single sentence has significantly fewer features to perturb than a 512x512 colour image; still, machines can be fooled by slight rephrasing and by exploiting real-world biases that have crept into the system.
This talk gives a brief introduction to the technology and dangers of adversarial attacks and delves into their possible implications for testing and deploying natural language processing systems.
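To make "slight rephrasing" concrete, here is a minimal sketch of a synonym-substitution attack, one common family of NLP adversarial attacks. The toy bag-of-words classifier, the word weights and the synonym table below are illustrative assumptions, not material from the talk itself.

    # A toy sentiment classifier: the sum of word weights decides the label.
    WEIGHTS = {"great": 2.0, "good": 1.5, "fine": 0.5, "decent": 0.0,
               "love": 2.0, "like": 0.0,
               "bad": -1.5, "awful": -2.0, "hate": -2.0}

    # Substitutions that preserve the meaning for a human reader.
    SYNONYMS = {"great": ["fine", "decent"], "love": ["like", "enjoy"]}

    def score(tokens):
        """Sum of word weights; a positive score means label 'positive'."""
        return sum(WEIGHTS.get(t, 0.0) for t in tokens)

    def attack(sentence):
        """Greedily swap in synonyms until the predicted label flips."""
        tokens = sentence.lower().split()
        original_label = score(tokens) > 0
        for i in range(len(tokens)):
            for syn in SYNONYMS.get(tokens[i], []):
                candidate = tokens[:i] + [syn] + tokens[i + 1:]
                if (score(candidate) > 0) != original_label:
                    return " ".join(candidate)   # label flipped: attack succeeded
                if abs(score(candidate)) < abs(score(tokens)):
                    tokens = candidate           # keep the swap closest to the decision boundary
        return None                              # no label-flipping rephrasing found

    # "i like this decent movie" still reads positive to a human,
    # but the classifier's score drops from 4.0 to 0.0 and the label flips.
    print(attack("I love this great movie"))

Published attacks such as TextFooler follow the same greedy pattern, but pick substitutes via word embeddings and query the victim model for its scores instead of reading a fixed weight table.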
Info
Date:
30.11.2019
Start time:
11:50
Duration:
00:30
Room:
Schellingstr. 3 R153
Track:
Computational Linguistics
Language:
en
Links:
Speakers
Victor Zimmermann