About me
This is a page not in the main menu.
Published:
This post will show up by default. To disable scheduling of future posts, edit config.yml and set future: false.
Published:
This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.
Short description of portfolio item number 1
Short description of portfolio item number 2
Published in arXiv, 2023
In this paper, we propose an algorithmic framework to automatically generate efficient deep neural networks and optimize their associated hyperparameters. The framework is based on evolving directed acyclic graphs (DAGs), defining a more flexible search space than the existing ones in the literature. It allows mixtures of classical operations such as convolutions, recurrences and dense layers, as well as more recent ones such as self-attention. Based on this search space, we propose neighbourhood and evolution search operators to optimize both the architecture and the hyperparameters of our networks. These search operators can be used with any metaheuristic capable of handling mixed search spaces. We tested our algorithmic framework with an evolutionary algorithm on a time series prediction benchmark. The results demonstrate that our framework was able to find models outperforming the established baseline on numerous datasets.
Recommended citation: Julie Keisler, El-Ghazali Talbi, Sandra Claudel, Gilles Cabriel. An algorithmic framework for the optimization of deep neural networks architectures and hyperparameters. 2023. ⟨hal-03982852⟩. https://hal.science/hal-03982852v1/document
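The DAG-based search space is easy to illustrate. Below is a minimal, hypothetical Python sketch of the idea described in the abstract: an architecture is a DAG whose nodes carry an operation and its hyperparameters, and a neighbourhood operator returns a mutated neighbour for an evolutionary search. The names (OPS, random_dag, mutate) and the specific operations and moves are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch: architectures as DAGs whose nodes carry an
# operation plus its hyperparameters, with a simple neighbourhood
# (mutation) operator usable inside an evolutionary search.
import copy
import random

# Candidate operations and the hyperparameters they expose (illustrative).
OPS = {
    "conv1d":    {"kernel_size": [3, 5, 7], "filters": [16, 32, 64]},
    "dense":     {"units": [32, 64, 128]},
    "recurrent": {"units": [32, 64]},
    "attention": {"heads": [2, 4, 8]},
}

def random_node(rng):
    """Draw an operation and one value for each of its hyperparameters."""
    op = rng.choice(list(OPS))
    hps = {name: rng.choice(choices) for name, choices in OPS[op].items()}
    return {"op": op, "hps": hps}

def random_dag(n_nodes, rng):
    """Nodes 0..n-1 in topological order; each later node gets one
    incoming edge from an earlier node."""
    nodes = [random_node(rng) for _ in range(n_nodes)]
    edges = [(rng.randrange(j), j) for j in range(1, n_nodes)]
    return {"nodes": nodes, "edges": edges}

def mutate(dag, rng):
    """Neighbourhood operator: perturb one hyperparameter, swap one
    node's operation, or rewire one edge (topological order preserved)."""
    child = copy.deepcopy(dag)
    move = rng.choice(["hp", "op", "rewire"])
    if move == "hp":                       # local hyperparameter move
        node = rng.choice(child["nodes"])
        name = rng.choice(list(OPS[node["op"]]))
        node["hps"][name] = rng.choice(OPS[node["op"]][name])
    elif move == "op":                     # replace a node's operation
        i = rng.randrange(len(child["nodes"]))
        child["nodes"][i] = random_node(rng)
    else:                                  # rewire one incoming edge
        k = rng.randrange(len(child["edges"]))
        _, v = child["edges"][k]
        child["edges"][k] = (rng.randrange(v), v)
    return child

rng = random.Random(0)
parent = random_dag(4, rng)
child = mutate(parent, rng)
print([n["op"] for n in parent["nodes"]], parent["edges"])
print([n["op"] for n in child["nodes"]], child["edges"])
```

In a full search loop, candidates produced by mutate would be trained, scored on a validation set, and selected by the surrounding metaheuristic; the sketch only shows the search-space side.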
Published in 2023
Use of the KernelUCB algorithm for hyperparameter optimization.
Recommended citation: Julie Keisler, Margaux Brégère. Algorithme de bandits pour l'optimisation des hyperparamètres de réseaux de neurones [A bandit algorithm for neural network hyperparameter optimization]. 2023. https://drive.google.com/file/d/1SyiOC070UOXvOWrTD-TEiprZR7gVYXqW/view
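For intuition, here is a minimal KernelUCB sketch applied to hyperparameter optimization, assuming a finite grid of candidate configurations (here, learning rates) and a synthetic noisy validation score as the bandit reward; in practice the reward would come from actual training runs. The kernel, exploration weight, and objective are illustrative choices, not the paper's setup.

```python
# Minimal KernelUCB sketch for hyperparameter optimization over a finite
# grid of arms. The objective `val_score` is a synthetic stand-in for a
# real validation metric.
import numpy as np

rng = np.random.default_rng(0)

# Candidate arms: log10(learning rate) values.
arms = np.linspace(-5, -1, 41).reshape(-1, 1)

def val_score(x):
    """Toy noisy objective peaking near lr = 1e-3 (log10 lr = -3)."""
    return np.exp(-(x[0] + 3.0) ** 2) + 0.05 * rng.normal()

def rbf(A, B, ls=0.5):
    """RBF kernel matrix between two sets of configurations."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ls ** 2))

lam, beta = 0.1, 1.0          # ridge regularizer and exploration weight
X, y = [], []                 # observed configurations and rewards

for t in range(30):
    if not X:
        i = rng.integers(len(arms))          # first pull: random arm
    else:
        Xo = np.array(X)
        Kinv = np.linalg.inv(rbf(Xo, Xo) + lam * np.eye(len(Xo)))
        k = rbf(arms, Xo)                    # (n_arms, n_obs)
        mu = k @ Kinv @ np.array(y)          # kernel ridge mean estimate
        var = rbf(arms, arms).diagonal() - np.einsum(
            "ij,jk,ik->i", k, Kinv, k)       # predictive variance
        ucb = mu + beta * np.sqrt(np.maximum(var, 0.0))
        i = int(np.argmax(ucb))              # optimism in face of uncertainty
    X.append(arms[i]); y.append(val_score(arms[i]))

best = X[int(np.argmax(y))]
print("best log10(lr):", best[0])
```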
Practical Work: Introduction to Deep Learning, Faculté des Sciences d'Orsay - Université Paris-Saclay, 2024
Tutorial 1: Introduction to PyTorch (a minimal example is sketched after this list)
Tutorial Correction
Tutorial 2: Hyperparameters and Architecture Optimization
Tutorial Correction
Tutorial 3: Convolutional Neural Networks (CNNs)
Tutorial Correction
Tutorial 4: Generative Adversarial Networks (GANs)
Tutorial Correction
Tutorial 5: Recurrent Neural Networks (RNNs)
Tutorial Correction
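As a flavor of the first tutorial, here is a minimal PyTorch training loop on synthetic data. The model, data, and hyperparameter choices are illustrative assumptions, not the tutorial's actual material.

```python
# Minimal PyTorch training loop: fit a tiny MLP to noisy linear data.
import torch
from torch import nn

torch.manual_seed(0)

# Synthetic regression data: y = 3x + 1 with Gaussian noise.
x = torch.linspace(-1, 1, 200).unsqueeze(1)
y = 3 * x + 1 + 0.1 * torch.randn_like(x)

model = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for epoch in range(200):
    opt.zero_grad()                 # reset accumulated gradients
    loss = loss_fn(model(x), y)     # forward pass and loss
    loss.backward()                 # backpropagate
    opt.step()                      # update parameters

print(f"final MSE: {loss.item():.4f}")
```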
M2 Internships, EDF R&D, 2024
Roxane Goffinet, Global forecasting models for a large number of time series, March 2024 - October 2024, co-supervised with Bachir Hamrouche and Guillaume Lambert, EDF R&D
Alban Derepas, Future evolution of the wind resource and the potential of machine learning methods for statistical wind downscaling, May 2024 - November 2024, co-supervised with Boutheina Oueslati, Yannig Goude and Claire Monteleoni, EDF R&D and INRIA Paris
Keshav Das, Automated selection of adaptive additive models with application to load consumption forecasting, September 2024 - February 2025, co-supervised with Margaux Brégère and Amaury Durand, EDF R&D