The quest for reliable testing and verification in machine learning security

Speaker
Antonio Emanuele Cinà - Computer Science - Università di Genova

Date
Jul 18, 2025 - Time: 11:00 - Aula B, Ca' Vignal 1, Borgo Roma

Abstract:
Machine Learning (ML) has revolutionized numerous domains, becoming the de facto standard for complex decision-making and automation. At the same time, the widespread integration of ML into safety-critical applications is introducing significant security concerns that must be identified and patched before deployment. However, unlike traditional software, ML models do not come with formal guarantees of correctness or robustness. Instead, they rely on patterns learned from data, which makes them inherently difficult to reason about and verify. Consequently, current evaluation approaches rely heavily on empirical testing, often without consistent standards.
 
This talk begins by introducing the foundations of ML security, highlighting key issues such as adversarial attacks, which expose the fragility of current systems. We then turn to one of the field's central challenges: evaluating the robustness of ML models remains a largely empirical process, with no widely accepted standards or formal verification methods in place. The seminar concludes with an overview of ongoing efforts to develop frameworks and tools that bring structure, reproducibility, and rigor to ML security evaluation.
 
Publication date
Jul 11, 2025

Contact person
Alessandro Farinelli
Department
Computer Science

ATTACHMENTS

Antonio Emanuele Cinà CV