I am currently studying in the research-oriented IASD (Artificial Intelligence, Systems, Data) master's program at Université Paris Dauphine-PSL and in the computer science magisterium at the École Normale Supérieure (ENS) de Rennes. I am interested in Deep Learning and related fields such as Deep Reinforcement Learning, Cognitive Sciences, Computer Vision and Natural Language Processing. Some of my favorite topics are self-supervised learning, contrastive learning and multimodal models. RESUME GOOGLE SCHOLAR LINKEDIN
This internship led to a paper with shared first authorship (my internship supervisor, Masataka Sawayama, and me) that was accepted at ICLR 2022. The project developed and evaluated a benchmark test for language-biased vision models based on semantic representations. Applied to OpenAI's CLIP model, the benchmark showed that presenting word-embedded images distorts the model's image classification across different category levels, an effect that does not depend on the semantic relationship between the images and the embedded words. This suggests that the semantic word representation in CLIP's visual processing is not shared with the image representation, even though the word representation strongly dominates for word-embedded images. SKILLS : Python/PyTorch/Numpy/Scipy, data preparation, using a pre-trained deep learning vision model, performing statistical analysis, writing research reports & papers. ICLR 2022 PAPER
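The core of the probe described above can be sketched in a few lines of PyTorch. This is a minimal illustration, not the paper's code: the random tensors below stand in for CLIP's image and text features (in the real experiments these come from CLIP's encoders), and the comparison simply measures how much the zero-shot class probabilities shift once a word is pasted onto the image.

```python
import torch
import torch.nn.functional as F

def zero_shot_probs(image_feat, text_feats, temperature=0.01):
    """CLIP-style zero-shot classification: softmax over cosine similarities
    between an image feature and the text features of the candidate labels."""
    image_feat = F.normalize(image_feat, dim=-1)
    text_feats = F.normalize(text_feats, dim=-1)
    logits = image_feat @ text_feats.T / temperature
    return logits.softmax(dim=-1)

# Placeholder features standing in for CLIP encoder outputs.
torch.manual_seed(0)
img_plain = torch.randn(1, 512)                      # original image feature
img_worded = img_plain + 0.5 * torch.randn(1, 512)   # image with a word pasted on it
labels = torch.randn(10, 512)                        # text features of candidate labels

# Total variation of the predicted distribution caused by the embedded word.
shift = (zero_shot_probs(img_plain, labels)
         - zero_shot_probs(img_worded, labels)).abs().sum()
```

In the benchmark, this shift is measured across category levels and across semantically related vs. unrelated image/word pairs.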
This research project investigated defenses against adversarial attacks on CNNs. We focused in particular on detecting adversarial examples with K-Density on the latent representations of a ResNet-32 model and tried to find new ways of constraining these representations. This led to an accidental rediscovery, already reported in previous work, of the effect of logit squeezing and label smoothing on adversarial robustness: constraining the logits to have a low L2 norm, or constraining them to be almost equal, appears to be correlated with an increase in adversarial robustness. We further investigated the relationship between these constraints, the adversarial robustness of models, and the robustness of detection (how easily a detection method can be bypassed by well-crafted attacks): preliminary results suggest that these constraints do not increase the robustness of adversarial detection.
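The two constraints discussed above can be written as simple loss terms. This is a generic sketch of their standard forms from the literature, not the project's exact code; the coefficients `beta` and `eps` are illustrative values.

```python
import torch
import torch.nn.functional as F

def logit_squeezing_loss(logits, targets, beta=0.5):
    """Cross-entropy plus a penalty on the squared L2 norm of the logits,
    pushing all logits toward zero (and hence toward small magnitude)."""
    return F.cross_entropy(logits, targets) + beta * logits.norm(p=2, dim=1).pow(2).mean()

def label_smoothing_loss(logits, targets, eps=0.1):
    """Cross-entropy against smoothed targets ((1 - eps) on the true class,
    eps spread over the rest), pushing the logits to be almost equal."""
    return F.cross_entropy(logits, targets, label_smoothing=eps)

torch.manual_seed(0)
logits = torch.randn(8, 10)
targets = torch.randint(0, 10, (8,))
l_squeeze = logit_squeezing_loss(logits, targets)
l_smooth = label_smoothing_loss(logits, targets)
```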
SKILLS : Python/PyTorch/Numpy/Scipy, training CNNs (ResNets) on MNIST & CIFAR10, implementing new training constraints, implementing adversarial attacks (FGSM/BIM/PGD), writing a research report.
REPORT CODE
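FGSM, the simplest of the attacks listed in the skills above, can be sketched in a few lines of PyTorch (a generic sketch with a toy linear model standing in for the project's ResNets):

```python
import torch
import torch.nn as nn

def fgsm(model, x, y, eps):
    """Fast Gradient Sign Method: one signed-gradient ascent step of size eps
    on the loss, clamped back to the valid pixel range."""
    x = x.clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy stand-in for a CNN
x = torch.rand(4, 1, 28, 28)   # MNIST-shaped inputs in [0, 1]
y = torch.randint(0, 10, (4,))
x_adv = fgsm(model, x, y, eps=0.1)
```

BIM and PGD iterate this step with a smaller step size, projecting back into the eps-ball after each iteration (PGD additionally starts from a random point in the ball).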
We investigated the specificities of adversarial attacks on RNNs (distortion metrics, the non-linearity introduced by input pre-processing and output decoding steps, etc.) and implemented in PyTorch the attack on audio inputs from [Audio Adversarial Examples: Targeted Attacks on Speech-to-Text, Carlini & Wagner, 2018]. The implemented attack computes and adds an inaudible noise to any speech audio in order to fool DeepSpeech2 into outputting a target sentence transcription (or "target silence" by outputting no sentence) instead of the initial prediction. This internship was a great introduction to deep learning theory (NN training, RNNs, LSTMs, adversarial examples, etc.) and to PyTorch, as well as a pleasant opportunity to discover research.
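The structure of this attack can be sketched as an optimization over an additive perturbation: minimize the CTC loss toward the target transcription plus a penalty keeping the noise small. This is a minimal illustration with a toy linear model standing in for DeepSpeech2, not the internship's implementation:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
T, C = 50, 28                          # output frames, characters (blank + alphabet)
model = nn.Linear(100, T * C)          # toy stand-in for a speech-to-text model
audio = torch.randn(1, 100)            # stand-in for a raw waveform
target = torch.randint(1, C, (1, 5))   # target transcription as character indices

delta = torch.zeros_like(audio, requires_grad=True)  # the adversarial noise
opt = torch.optim.Adam([delta], lr=0.01)
ctc = nn.CTCLoss(blank=0)

for _ in range(100):
    logits = model(audio + delta).view(1, T, C).transpose(0, 1)  # (T, N, C)
    log_probs = logits.log_softmax(dim=-1)
    # CTC loss toward the target sentence + penalty on the noise magnitude.
    loss = ctc(log_probs, target, torch.tensor([T]), torch.tensor([5])) \
        + 0.1 * delta.norm() ** 2
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The real attack works on the model's actual pre-processing pipeline and weights the penalty so the noise stays below audibility.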
SKILLS : Python/PyTorch/Numpy/Scipy, implementation of a research paper, writing a research report, giving an oral presentation.
REPORT SLIDES CODE
Tense Reflection is an innovative puzzle/shooter where you solve puzzles to reload your ammo and change the color of your shots.
SKILLS : Unity, Javascript, C#, Graphic Design, Game Design.
WIN64 DEMO (850 MB)