Sitemap

A list of all the posts and pages found on the site. For you robots out there, an XML version is available for digesting as well.

Pages

Posts

Future Blog Post

less than 1 minute read

This post will show up by default. To disable scheduling of future posts, edit _config.yml and set future: false.
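
For reference, disabling future posts amounts to a one-line setting in Jekyll's _config.yml (a minimal excerpt; the rest of the file is omitted):

```yaml
# _config.yml (excerpt): when future is false, Jekyll skips
# any post whose date is later than the build time.
future: false
```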

Blog Post number 4

less than 1 minute read

This is a sample blog post. Lorem ipsum; I can't remember the rest of lorem ipsum and don't have an internet connection right now. Testing, testing, testing this blog post. Blog posts are cool.

Blog Post number 3

less than 1 minute read

This is a sample blog post. Lorem ipsum; I can't remember the rest of lorem ipsum and don't have an internet connection right now. Testing, testing, testing this blog post. Blog posts are cool.

Blog Post number 2

less than 1 minute read

This is a sample blog post. Lorem ipsum; I can't remember the rest of lorem ipsum and don't have an internet connection right now. Testing, testing, testing this blog post. Blog posts are cool.

Blog Post number 1

less than 1 minute read

This is a sample blog post. Lorem ipsum; I can't remember the rest of lorem ipsum and don't have an internet connection right now. Testing, testing, testing this blog post. Blog posts are cool.

Portfolio

Publications

An algorithmic framework for the optimization of deep neural networks architectures and hyperparameters.

Published on arXiv, 2023

In this paper, we propose an algorithmic framework to automatically generate efficient deep neural networks and optimize their associated hyperparameters. The framework is based on evolving directed acyclic graphs (DAGs), defining a more flexible search space than existing ones in the literature. It allows mixing classical operations such as convolutions, recurrences, and dense layers with newer operations such as self-attention. Based on this search space, we propose neighbourhood and evolution search operators to optimize both the architecture and the hyperparameters of our networks. These search operators can be used with any metaheuristic capable of handling mixed search spaces. We tested our algorithmic framework with an evolutionary algorithm on a time series prediction benchmark. The results demonstrate that our framework was able to find models that outperform the established baseline on numerous datasets.

Recommended citation: Julie Keisler, El-Ghazali Talbi, Sandra Claudel, Gilles Cabriel. An algorithmic framework for the optimization of deep neural networks architectures and hyperparameters. 2023. ⟨hal-03982852⟩. https://hal.science/hal-03982852v1/document
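
As a rough illustration of the search space described above, here is a minimal, hypothetical Python sketch (not the authors' code; all names and data structures are invented for illustration) of a DAG-encoded architecture together with a simple neighbourhood operator that perturbs operations, edges, or hyperparameters:

```python
import random
from dataclasses import dataclass, field

# Illustrative only: candidate operations mixing classical layers with
# self-attention, as mentioned in the abstract.
OPERATIONS = ["conv", "recurrence", "dense", "self_attention"]

@dataclass
class DAGArchitecture:
    """A network encoded as a directed acyclic graph.

    Node i may only receive edges from nodes j < i, which guarantees
    acyclicity by construction.
    """
    ops: list = field(default_factory=list)          # operation at each node
    edges: set = field(default_factory=set)          # (src, dst) pairs, src < dst
    hyperparams: dict = field(default_factory=dict)  # e.g. learning rate

def random_architecture(n_nodes=5):
    ops = [random.choice(OPERATIONS) for _ in range(n_nodes)]
    # Connect each node to one earlier node so every node is reachable.
    edges = {(random.randrange(i), i) for i in range(1, n_nodes)}
    hp = {"learning_rate": 10 ** random.uniform(-4, -1)}
    return DAGArchitecture(ops, edges, hp)

def mutate(arch: DAGArchitecture) -> DAGArchitecture:
    """Neighbourhood operator: apply one small random change."""
    child = DAGArchitecture(list(arch.ops), set(arch.edges), dict(arch.hyperparams))
    move = random.choice(["op", "edge", "hp"])
    if move == "op":      # swap one node's operation
        i = random.randrange(len(child.ops))
        child.ops[i] = random.choice(OPERATIONS)
    elif move == "edge":  # toggle one forward edge, preserving acyclicity
        dst = random.randrange(1, len(child.ops))
        src = random.randrange(dst)
        child.edges ^= {(src, dst)}
    else:                 # perturb a hyperparameter
        child.hyperparams["learning_rate"] *= random.choice([0.5, 2.0])
    return child
```

An evolutionary algorithm would then repeatedly select candidate graphs, apply such moves, train and score the resulting networks on the prediction task, and keep the best performers.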

Talks

Teaching

Master MIA

Practical Work: Introduction to Deep Learning, Faculté des Sciences d'Orsay - Université Paris-Saclay, 2024

Tutorial 1: Introduction to PyTorch (Tutorial / Correction)
Tutorial 2: Hyperparameters and Architecture Optimization (Tutorial / Correction)
Tutorial 3: Convolutional Neural Networks (CNNs) (Tutorial / Correction)
Tutorial 4: Generative Adversarial Networks (GANs) (Tutorial / Correction)
Tutorial 5: Recurrent Neural Networks (RNNs) (Tutorial / Correction)

Supervision

M2 Internships, EDF R&D, 2024

Roxane Goffinet, Global forecasting models for a large number of time series, March 2024 - October 2024, co-supervised with Bachir Hamrouche and Guillaume Lambert, EDF R&D
Alban Derepas, Future evolution of the wind resource and the potential of machine learning methods for statistical wind downscaling, May 2024 - November 2024, co-supervised with Boutheina Oueslati, Yannig Goude and Claire Monteleoni, EDF R&D and INRIA Paris
Keshav Das, Automated selection of adaptive additive models, application to load consumption forecasting, September 2024 - February 2025, co-supervised with Margaux Brégère and Amaury Durand, EDF R&D