NeurIPS 25, Neural Information Processing Systems, San Diego, 2025
DAR Robin, K. Bakong, K. Scaman
A variant of SGD that computes the stability ratio (relative noise level) of its gradient estimates to automatically derive a step-size-shrinking schedule, with proofs of adaptivity in expected last-iterate loss values, nearly matching all the best rates of SGD with noise-tuned schedulers.
Full text : [ OpenReview ]
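For intuition only, here is a toy numpy sketch of the idea — not the paper's algorithm. All names are illustrative, and the stability ratio is taken here as the standard deviation of a small batch of gradient samples divided by the norm of their mean; the step size is halved whenever that ratio exceeds a threshold.

```python
import numpy as np

def stability_ratio(grads):
    """Relative noise level of a batch of gradient estimates:
    std of the samples divided by the norm of their mean
    (illustrative definition; the paper's exact ratio may differ)."""
    mean = grads.mean(axis=0)
    noise = np.linalg.norm(grads - mean) / np.sqrt(len(grads))
    return noise / (np.linalg.norm(mean) + 1e-12)

def sgd_with_ratio_scheduler(grad_fn, x0, lr=0.5, steps=200,
                             shrink=0.5, threshold=1.0, seed=0):
    """Toy SGD: shrink the step size whenever the stability ratio
    of the current gradient estimates exceeds the threshold."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        grads = np.stack([grad_fn(x, rng) for _ in range(8)])
        if stability_ratio(grads) > threshold:
            lr *= shrink  # noise dominates signal: decrease the step size
        x = x - lr * grads.mean(axis=0)
    return x

# Noisy quadratic: minimize 0.5 * ||x||^2 with additive gradient noise.
noisy_grad = lambda x, rng: x + 0.1 * rng.standard_normal(x.shape)
x_final = sgd_with_ratio_scheduler(noisy_grad, np.ones(5))
```

Far from the optimum the ratio is small and the full step size is kept; near the optimum the noise dominates, the ratio crosses the threshold, and the step size shrinks geometrically, which is the behavior a noise-tuned scheduler would prescribe.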
ICLR 24, International Conference on Learning Representations, Vienna, 2024
DAR Robin, K. Scaman, M. Lelarge
Proof of convergence of finite-width multi-layer networks (and transformer-like architectures) to arbitrarily low loss values by gradient flow, when the initialization is diverse and sparse enough. This shows that Probably-Approximately-Correct learning is a structural guarantee achievable for large neural networks of essentially any architecture.
Full text : [ OpenReview ]
NeurIPS 22, Neural Information Processing Systems, New Orleans, 2022
DAR Robin, K. Scaman, M. Lelarge
Proof of convergence of finite-width two-layer neural networks to arbitrarily low loss values under gradient flow. The result requires no over-parameterization assumptions, and is thus stronger than infinite-width simplifications; it is obtained by integrating Kurdyka-Łojasiewicz inequalities, a technique that yields optimal convergence rates even without convexity.
Full text & code : [ OpenReview ] [ Github ]
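For context, a standard Łojasiewicz-type form of the inequality used in such analyses (not the paper's exact hypothesis) and the convergence rate it yields under gradient flow:

```latex
% Kurdyka-Łojasiewicz (Łojasiewicz form) inequality with exponent
% \theta \in [1/2, 1), holding near the minimizers of the loss L:
\|\nabla L(w)\| \;\ge\; c\,\bigl(L(w) - L^\ast\bigr)^{\theta}.
% Under gradient flow \dot{w}(t) = -\nabla L(w(t)), this gives
\frac{d}{dt}\bigl(L(w(t)) - L^\ast\bigr)
  \;=\; -\,\|\nabla L(w(t))\|^2
  \;\le\; -\,c^2\,\bigl(L(w(t)) - L^\ast\bigr)^{2\theta},
% which integrates to exponential convergence when \theta = 1/2,
% and to a rate O\bigl(t^{-1/(2\theta - 1)}\bigr) when \theta \in (1/2, 1).
```

The point is that no convexity is needed: the inequality alone lower-bounds the gradient by the suboptimality gap, which is enough to integrate the loss decrease along the flow.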
NeurIPS 22, Neural Information Processing Systems, NeurReps Workshop, New Orleans, 2022
DAR Robin, K. Scaman, M. Lelarge
Neural networks fail to learn periodic functions of unknown frequency, even with sine-like activations, despite previously claimed fixes. The obstructions identified are the need for a more diverse (non-vanishing, high-variance) initialization and for a non-convex sparsity-promoting regularization; with both, the networks achieve perfect recovery far outside the training interval.
Full text & code : [ OpenReview ] [ Github ]
Euro S&P 25, IEEE European Symposium on Security and Privacy, Venice, 2025
M. Arapinis, V. Danos, M. Racouchot, DAR Robin, T. Zacharias
Android's Protected Confirmation (APC) protocol exhibits two vulnerabilities in its communication with the Trusted Execution Environment, enabling a possible bypass of user consent, demonstrated on Google Pixel devices. Patching both yields a provably correct protocol, with the intended APC user-consent guarantees, in the Universal Composability framework.
Full text : [ HAL ] [ CISPA Link ]
ASIA CCS 20, ACM Asia Conference on Computer and Communications Security, Taipei, 2020
GA Jaloyan, K. Markantonakis, RN Akram, DAR Robin, K. Mayes, D. Naccache
Prefix-code machine instructions allow hiding malicious instructions behind unaligned jumps: one can craft sequences of long (32-bit) instructions whose last 16 bits form either a valid instruction or a valid prefix, chained into overlapping sequences that fool ROP-gadget detectors. A tree-based detection method identifies these hidden gadgets correctly.
Full text : [ ACM Link ] [ ArXiv ]
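A minimal Python sketch of the overlapping-decoding phenomenon, using a made-up two-length ISA rather than a real instruction set: the same byte stream decodes to entirely different instruction sequences depending on the entry offset, which is what an unaligned jump exploits.

```python
def decode(stream, offset=0):
    """Decode a toy prefix-code ISA: a halfword (2 bytes) whose top nibble
    is 0x7 opens a 4-byte 'long' instruction; anything else is a 2-byte
    'short' instruction. Illustrative encoding, not a real ISA."""
    out, i = [], offset
    while i + 2 <= len(stream):
        hw = int.from_bytes(stream[i:i + 2], "little")
        if hw >> 12 == 0x7 and i + 4 <= len(stream):
            out.append(("long", stream[i:i + 4].hex()))
            i += 4
        else:
            out.append(("short", stream[i:i + 2].hex()))
            i += 2
    return out

# Two long instructions whose trailing halfwords are themselves valid
# (as a prefix or a short instruction): entering 2 bytes in reveals a
# hidden, overlapping instruction stream.
code = bytes.fromhex("00700170" "02700370")
aligned = decode(code, 0)  # the stream as the compiler laid it out
hidden = decode(code, 2)   # the stream seen from an unaligned jump target
```

A linear-sweep gadget detector that only decodes from aligned boundaries sees `aligned` and misses `hidden`; the tree-based method mentioned above explores all reachable decode offsets instead.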
July 2025 - present, Dauphine University, Paris. LAMSADE / MILES team
with Yann Chevaleyre (LAMSADE, Dauphine) and Rafaël Pinot (LPSM, Jussieu)
Oct 2021 - Jun 2025, INRIA - ENS, Paris. DYOGENE / ARGO Project-team
Advised by Marc Lelarge and Kevin Scaman
Construction and convergence of provably-correct neural networks.
Deep Learning (MAP583) course by Kevin Scaman (INRIA - ENS), École Polytechnique
Practical introduction to deep learning and all its implementation details, covering a wide range of data domains and network architectures.
Resources : [ Synapses page ] [ Practicals repository ] [ Custom python package ]
Deep Learning course by Marc Lelarge (INRIA - ENS), ENS Paris
Introduction to neural-network compression concepts and recent results, with a focus on activation reconstruction, including a practical session.
Resources : [ Lecture slides ] [ Practical Session ] [ Practical Session Solution ]