Portfolio item number 1
Short description of portfolio item number 1
Portfolio item number 2
Short description of portfolio item number 2
Published on arXiv, 2023
In this paper, we propose an algorithmic framework to automatically generate efficient deep neural networks and optimize their associated hyperparameters. The framework is based on evolving directed acyclic graphs (DAGs), which define a more flexible search space than the existing ones in the literature. It allows mixtures of classical operations such as convolutions, recurrences and dense layers, as well as more recent operations such as self-attention. Based on this search space, we propose neighbourhood and evolution search operators to optimize both the architecture and the hyperparameters of our networks. These search operators can be used with any metaheuristic capable of handling mixed search spaces. We tested our algorithmic framework with an evolutionary algorithm on a time series prediction benchmark. The results demonstrate that our framework was able to find models outperforming the established baseline on numerous datasets.
Recommended citation: Julie Keisler, El-Ghazali Talbi, Sandra Claudel, Gilles Cabriel. An algorithmic framework for the optimization of deep neural networks architectures and hyperparameters. 2023. ⟨hal-03982852⟩. https://hal.science/hal-03982852v1/document
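To make the description above more concrete, here is a minimal, illustrative sketch of a DAG-based search space with a simple neighbourhood (mutation) operator. The class names, the candidate operations and the hyperparameter ranges are assumptions chosen for illustration; this is not the paper's actual implementation.

```python
# Illustrative sketch only: a candidate network is a DAG whose nodes carry an
# operation and its hyperparameters; a mutation perturbs either the graph or
# the hyperparameters. All names and value ranges here are assumptions.
import copy
import random
from dataclasses import dataclass, field

# Candidate operations and (toy) hyperparameter ranges -- assumed values.
OPERATIONS = {
    "conv1d":    {"kernel_size": [3, 5, 7], "filters": [16, 32, 64]},
    "recurrent": {"units": [16, 32, 64]},
    "dense":     {"units": [32, 64, 128]},
    "attention": {"heads": [1, 2, 4]},
}

@dataclass
class Node:
    op: str
    hparams: dict

@dataclass
class CandidateDAG:
    nodes: list                               # Node objects, in topological order
    edges: set = field(default_factory=set)   # (i, j) with i < j keeps the graph acyclic

def random_node():
    op = random.choice(list(OPERATIONS))
    return Node(op, {k: random.choice(v) for k, v in OPERATIONS[op].items()})

def random_dag(n_nodes=4):
    nodes = [random_node() for _ in range(n_nodes)]
    edges = {(i, i + 1) for i in range(n_nodes - 1)}   # a chain guarantees connectivity
    return CandidateDAG(nodes, edges)

def mutate(dag):
    """Neighbourhood operator: perturb either the architecture or the hyperparameters."""
    child = copy.deepcopy(dag)
    if random.random() < 0.5 and len(child.nodes) > 1:
        # Architecture move: add a random skip connection (i < j keeps the graph acyclic).
        i, j = sorted(random.sample(range(len(child.nodes)), 2))
        child.edges.add((i, j))
    else:
        # Hyperparameter move: resample one hyperparameter of a randomly chosen node.
        node = random.choice(child.nodes)
        key = random.choice(list(node.hparams))
        node.hparams[key] = random.choice(OPERATIONS[node.op][key])
    return child

if __name__ == "__main__":
    parent = random_dag()
    print("parent:", parent)
    print("child :", mutate(parent))
```

In the setting described above, such a move would be one of several neighbourhood and evolution operators driven by a metaheuristic, with each candidate evaluated by training the corresponding network.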
Published in 2023
Use of the KernelUCB bandit algorithm for the optimization of neural network hyperparameters.
Recommended citation: Julie Keisler, Margaux Brégère. Algorithme de bandits pour l'optimisation des hyperparamètres de réseaux de neurones [A bandit algorithm for neural network hyperparameter optimization]. 2023. https://drive.google.com/file/d/1SyiOC070UOXvOWrTD-TEiprZR7gVYXqW/view
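As a rough illustration of the approach, the sketch below applies a kernelized UCB acquisition rule to a discrete grid of candidate configurations with a noisy black-box score. The RBF kernel, the candidate grid, the toy objective and all parameter values are assumptions for illustration only, not the actual setup of this work.

```python
# Illustrative KernelUCB sketch: pick, at each round, the candidate with the
# highest kernel-ridge mean plus an exploration bonus, then observe its reward.
import numpy as np

def rbf_kernel(A, B, lengthscale=0.5):
    """Gaussian (RBF) kernel matrix between the row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * lengthscale ** 2))

def kernel_ucb(candidates, objective, n_rounds=30, lam=1.0, beta=2.0):
    """Sequentially select the candidate maximizing the kernelized UCB score."""
    X, y = [], []                      # observed configurations and rewards
    for _ in range(n_rounds):
        if not X:                      # first round: pick at random
            idx = np.random.randint(len(candidates))
        else:
            Xo = np.array(X)
            K_inv = np.linalg.inv(rbf_kernel(Xo, Xo) + lam * np.eye(len(Xo)))
            k_star = rbf_kernel(np.array(candidates), Xo)       # (n_candidates, t)
            mu = k_star @ K_inv @ np.array(y)                    # posterior mean
            var = 1.0 - np.einsum("ij,jk,ik->i", k_star, K_inv, k_star)
            ucb = mu + beta * np.sqrt(np.maximum(var, 0.0))      # exploration bonus
            idx = int(np.argmax(ucb))
        reward = objective(candidates[idx])
        X.append(candidates[idx])
        y.append(reward)
    return X[int(np.argmax(y))]        # best configuration observed so far

if __name__ == "__main__":
    # Toy example: choose log10(learning rate) in [-4, -1]; the score peaks at -2.
    grid = [np.array([v]) for v in np.linspace(-4, -1, 20)]
    toy_score = lambda x: -(x[0] + 2.0) ** 2 + 0.05 * np.random.randn()
    best = kernel_ucb(grid, toy_score)
    print(f"best log10(learning rate) found: {best[0]:.2f}")
```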
Practical Work: Introduction to Deep Learning, Faculté des Sciences d'Orsay - Université Paris-Saclay, 2025
Tutorial 1: Introduction to PyTorch
Tutorial Correction
Tutorial 2: Hyperparameters and Architecture Optimization
Tutorial Correction
Tutorial 3: Convolutional Neural Networks (CNNs)
Tutorial Correction
Tutorial 4: Generative Models (GANs)
Tutorial Correction
Tutorial 5: Generative Models (Diffusion)
Tutorial Correction
Tutorial 6: Recurrent Neural Networks (RNNs)
Tutorial Correction
Tutorial 7: Graph Neural Networks
Tutorial Correction
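For context, here is a minimal sketch of the kind of material the series starts from (Tutorial 1, introduction to PyTorch): defining a small model and running a single training step. The model, data and hyperparameters are placeholders, not the actual tutorial content.

```python
# Purely illustrative: a tiny fully connected classifier trained for one step
# on random data, using standard PyTorch APIs.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 3))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

x = torch.randn(32, 20)            # batch of 32 examples, 20 features each
y = torch.randint(0, 3, (32,))     # integer labels for 3 classes

optimizer.zero_grad()              # reset accumulated gradients
loss = criterion(model(x), y)      # forward pass and loss computation
loss.backward()                    # backpropagation
optimizer.step()                   # gradient update
print(f"training loss after one step: {loss.item():.4f}")
```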