Non-Destructive Anonymization of Training Data for Object Detection

From: 2025-05-16 10:15 to 11:00 | Seminar

Kalle Josefsson and Sebastian Tufvesson present their master's thesis, Non-Destructive Anonymization of Training Data for Object Detection, on Friday 16 May at 10:15 in MH:309A.

Abstract:
The rapid advancement of computer vision, powered by large-scale visual datasets and deep learning, has raised pressing concerns about privacy, particularly when human faces are involved. This work explores how facial anonymization affects the performance of human detection models, aiming to balance identity protection with model utility. A range of anonymization techniques, including Gaussian blurring, black-boxing, and diffusion-based inpainting, is applied to both a COCO subset and a dataset tailored to surveillance-related use cases. An EfficientNet-based object detector is used to measure detection performance, serving as a benchmark for model utility. To evaluate the effectiveness of anonymization independently, a similarity-based machine learning method is used alongside human evaluation to assess how much identity remains visible after anonymization. This enables a quantified measure of the trade-off between privacy preservation and detection performance. By combining technical evaluation of model accuracy with both automated and human assessments of identity concealment, this work provides a comprehensive analysis of privacy-preserving strategies in computer vision, with implications for the development of ethical and responsible AI systems. The results show that classic anonymization techniques, such as black-boxing and Gaussian blurring, have minimal impact on human detection performance, achieving over 98% relative AP50, while significantly degrading face detection capabilities. This indicates that object detectors may rely largely on non-facial cues. Diffusion-based inpainting methods offer more nuanced trade-offs: while full-mask inpainting preserves strong detection performance and enhances privacy, partial-mask inpainting retains more facial detail, resulting in higher face detection scores but weaker anonymization. These findings highlight the importance of selecting an anonymization method according to the privacy-utility balance required by a given application, while showing that facial anonymization of training data is feasible without significant drawbacks.
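
For illustration only (this is not the authors' implementation), the sketch below shows how the two classic anonymization techniques named in the abstract, Gaussian blurring and black-boxing, could be applied to face bounding boxes in an image using OpenCV. The input file, the example face box, and the function names are hypothetical; the face boxes are assumed to come from a separate face detector.

    # Minimal sketch, assuming face boxes are already available as (x, y, w, h).
    import cv2
    import numpy as np

    def blur_faces(image: np.ndarray, face_boxes, ksize: int = 31) -> np.ndarray:
        """Replace each face region with a Gaussian-blurred version of itself."""
        out = image.copy()
        for (x, y, w, h) in face_boxes:
            roi = out[y:y + h, x:x + w]
            out[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (ksize, ksize), 0)
        return out

    def blackbox_faces(image: np.ndarray, face_boxes) -> np.ndarray:
        """Replace each face region with a solid black rectangle."""
        out = image.copy()
        for (x, y, w, h) in face_boxes:
            cv2.rectangle(out, (x, y), (x + w, y + h), color=(0, 0, 0), thickness=-1)
        return out

    if __name__ == "__main__":
        img = cv2.imread("example.jpg")      # hypothetical input image
        boxes = [(120, 80, 60, 60)]          # hypothetical face box (x, y, w, h)
        cv2.imwrite("blurred.jpg", blur_faces(img, boxes))
        cv2.imwrite("blackboxed.jpg", blackbox_faces(img, boxes))

Anonymized copies produced this way can then be used as training data for the downstream human detector, which is the setting the thesis evaluates.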


Examiner: 
Alexandros Sopasakis, Centre for Mathematical Sciences, Lund University

Supervisors:
Kalle Åström, Centre for Mathematical Sciences, Lund University
Amanda Nilsson, Axis Communications AB
Hanna Björgvinsdóttir, Axis Communications AB

 



About the event
From: 2025-05-16 10:15 to 11:00

Location
MH:309A

Contact
karl.astrom@math.lth.se
