TY - GEN
T1 - The Importance of Image Interpretation
T2 - 29th International Conference on MultiMedia Modeling, MMM 2023
AU - Zhao, Zhengyu
AU - Dang, Nga
AU - Larson, Martha
N1 - Publisher Copyright:
© 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.
PY - 2023
Y1 - 2023
N2 - Adversarial images are created with the intention of causing an image classifier to produce a misclassification. In this paper, we propose that adversarial images should be evaluated based on semantic mismatch, rather than label mismatch, as used in current work. In other words, we propose that an image of a “mug” would be considered adversarial if classified as “turnip”, but not as “cup”, as current systems would assume. Our novel idea of taking semantic misclassification into account in the evaluation of adversarial images offers two benefits. First, it is a more realistic conceptualization of what makes an image adversarial, which is important in order to fully understand the implications of adversarial images for security and privacy. Second, it makes it possible to evaluate the transferability of adversarial images to a real-world classifier, without requiring the classifier’s label set to have been available during the creation of the images. The paper carries out an evaluation of a transfer attack on a real-world image classifier that is made possible by our semantic misclassification approach. The attack reveals patterns in the semantics of adversarial misclassifications that could not be investigated using conventional label mismatch.
AB - Adversarial images are created with the intention of causing an image classifier to produce a misclassification. In this paper, we propose that adversarial images should be evaluated based on semantic mismatch, rather than label mismatch, as used in current work. In other words, we propose that an image of a “mug” would be considered adversarial if classified as “turnip”, but not as “cup”, as current systems would assume. Our novel idea of taking semantic misclassification into account in the evaluation of adversarial images offers two benefits. First, it is a more realistic conceptualization of what makes an image adversarial, which is important in order to fully understand the implications of adversarial images for security and privacy. Second, it makes it possible to evaluate the transferability of adversarial images to a real-world classifier, without requiring the classifier’s label set to have been available during the creation of the images. The paper carries out an evaluation of a transfer attack on a real-world image classifier that is made possible by our semantic misclassification approach. The attack reveals patterns in the semantics of adversarial misclassifications that could not be investigated using conventional label mismatch.
KW - Adversarial images
KW - Image semantics
KW - Real-world systems
UR - https://www.scopus.com/pages/publications/85152517936
U2 - 10.1007/978-3-031-27818-1_59
DO - 10.1007/978-3-031-27818-1_59
M3 - Conference contribution
AN - SCOPUS:85152517936
SN - 9783031278174
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 718
EP - 725
BT - MultiMedia Modeling - 29th International Conference, MMM 2023, Proceedings
A2 - Dang-Nguyen, Duc-Tien
A2 - Gurrin, Cathal
A2 - Smeaton, Alan F.
A2 - Larson, Martha
A2 - Rudinac, Stevan
A2 - Dao, Minh-Son
A2 - Trattner, Christoph
A2 - Chen, Phoebe
PB - Springer Science and Business Media Deutschland GmbH
Y2 - 9 January 2023 through 12 January 2023
ER -