WorldCat Identities

Chevallier, Sylvain (1980-....).

Overview
Works: 5 works in 6 publications in 2 languages and 6 library holdings
Roles: Author, Opponent, Thesis advisor, Other
Most widely held works by Sylvain Chevallier
Implémentation d'un système préattentionnel avec des neurones impulsionnels by Sylvain Chevallier (Book)

2 editions published in 2009 in French and held by 2 WorldCat member libraries worldwide

Spiking neurons capture a fundamental property of biological neurons: the ability to encode information as discrete events. We study the contribution of this class of models to computer vision, whose constraints led us to choose simple models suited to the required processing speed. We describe a network architecture for encoding and extracting saliencies that exploits the discretization induced by spiking neurons. The saliency map is obtained by combining, spatially and temporally, maps of different visual modalities (contrasts, orientations, and colors) at several spatial scales. We propose a neural filtering method to build these visual modality maps. Filtering is performed gradually: the more processing time allotted to the algorithm, the closer the result is to that obtained with a convolution filter. The proposed architecture outputs saliencies sorted temporally by decreasing importance. Downstream of this architecture, we place another spiking neural network, inspired by neural fields, which selects the most salient region and maintains constant activity over it. Experimental results show that the proposed architecture can extract saliencies from an image sequence, select the most important saliency, and keep focus on it, even in a noisy context or when the saliency moves.
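The temporally ordered output described in this abstract can be illustrated with a latency-coding toy example: a minimal sketch, assuming NumPy, in which leaky integrate-and-fire units spike earlier at more salient locations. It only illustrates the principle, not the thesis implementation; in particular, the combination of modality maps here is a naive average.

```python
# Illustrative sketch only: intensity-to-latency coding with leaky
# integrate-and-fire units, so the most salient pixels spike first.
import numpy as np

def saliency_spike_order(modality_maps, threshold=1.0, leak=0.95, n_steps=50):
    """Return, per pixel, the time step of the first spike
    (smaller = more salient); -1 means the unit never spiked."""
    drive = np.mean(modality_maps, axis=0)        # naive combination of modality maps
    potential = np.zeros_like(drive)
    first_spike = np.full(drive.shape, -1, dtype=int)
    for t in range(n_steps):
        potential = leak * potential + drive      # leaky integration of the input
        newly = (potential >= threshold) & (first_spike < 0)
        first_spike[newly] = t                    # spike latency encodes saliency rank
        potential[newly] = 0.0                    # reset membrane potential after a spike
    return first_spike

# Toy usage: three random "modality" maps standing in for contrast,
# orientation and colour maps at a single spatial scale.
maps = np.random.rand(3, 64, 64)
latencies = saliency_spike_order(maps)
```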
Maquette numérique 3D pour la construction : visualiser les connaissances métier et interagir avec des dispositifs immersifs by Hugo Martin

1 edition published in 2016 in French and held by 1 WorldCat member library worldwide

Compared to other industries, construction suffers from a lack of efficiency, largely explained by the insufficient computerization of building design. In response, the world of architecture has set up a new process called BIM (Building Information Modeling), based on a 3D virtual mock-up containing all the information needed for construction. During the implementation of this process, BIM users have reported interaction difficulties: BIM models are hard to inspect and manage because they contain a large amount of information. Moreover, the collaborative intent of BIM is not taken into account in current practice, and the BIM process applies the same scheme to every construction profile. This thesis proposes an interaction methodology adapted to the inspection of architectural projects, using artificial intelligence tools and, more particularly, virtual reality technologies. The purpose is to offer an environment adapted to the profile of each BIM user while keeping the current design method. The document first describes the creation of virtual reality rooms dedicated to construction. It then deals with the development of algorithms for classifying the components of a BIM model, an adaptive visualization system, and a process for manipulating the model. These developments are based on taking into account the user's profile and trade.
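The trade-based component classification mentioned above can be illustrated with a small sketch, assuming the open-source ifcopenshell library and a hypothetical trade-to-IFC-class mapping; it is not the algorithm developed in the thesis.

```python
import ifcopenshell

# Hypothetical mapping from a user's trade to the IFC classes they care about;
# which classes exist depends on the IFC schema version of the model.
TRADE_CLASSES = {
    "structural": ["IfcBeam", "IfcColumn", "IfcSlab", "IfcFooting"],
    "plumbing":   ["IfcPipeSegment", "IfcPipeFitting", "IfcSanitaryTerminal"],
    "electrical": ["IfcCableSegment", "IfcLightFixture", "IfcOutlet"],
}

def components_for_trade(ifc_path, trade):
    """Load an IFC model and keep only the elements relevant to one trade."""
    model = ifcopenshell.open(ifc_path)
    elements = []
    for ifc_class in TRADE_CLASSES.get(trade, []):
        try:
            elements.extend(model.by_type(ifc_class))
        except Exception:
            # Class not defined in this model's schema (e.g. IFC2x3 vs IFC4).
            continue
    return elements

# e.g. components_for_trade("project.ifc", "plumbing")
```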
Contribution des caractéristiques diagnostiques dans la reconnaissance des expressions faciales émotionnelles : une approche neurocognitive alliant oculométrie et électroencéphalographie by Yu-Fang Yang

1 edition published in 2018 in English and held by 1 WorldCat member library worldwide

Proficient recognition of facial expressions is crucial for social interaction. Behavior, event-related potentials (ERPs), and eye-tracking can be used to investigate the brain mechanisms underlying this seemingly effortless processing of facial expression. Facial expression recognition involves not only the extraction of expressive information from diagnostic facial features, known as part-based processing, but also the integration of featural information, known as configural processing. Despite the critical role of diagnostic features in emotion recognition and extensive research in this area, it is still not known how the brain decodes configural information during emotion recognition. The complexity of facial information integration becomes evident when comparing healthy subjects with individuals with schizophrenia, because these patients tend to process featural information in emotional faces. Such differences in how faces are examined may affect the social-cognitive ability to recognize emotions. This thesis therefore investigates the role of diagnostic features and face configuration in the recognition of facial expression. In addition to behavior, we examined both the spatiotemporal dynamics of fixations using eye-tracking and early neurocognitive sensitivity to faces as indexed by the P100 and N170 ERP components. To address these questions, we built a new set of sketch face stimuli by transforming photographed faces from the Radboud Faces Database: facial texture was removed and only the diagnostic features (e.g., eyes, nose, mouth) were retained, for a neutral expression and four facial expressions (anger, sadness, fear, happiness). Sketch faces are expected to impair configural processing compared with photographed faces, resulting in increased sensitivity to diagnostic features through part-based processing. A direct comparison of neurocognitive measures between sketch and photographed faces expressing basic emotions had never been carried out. In this thesis, we examined (i) eye fixations as a function of stimulus type, and (ii) neuroelectric responses to experimental manipulations such as face inversion and deconfiguration. These methods aimed to reveal which type of face processing drives emotion recognition and to establish neurocognitive markers of the processing of emotional sketch and photographed faces. Overall, the behavioral results showed that sketch faces convey expressive information (the content of diagnostic features) sufficient for emotion recognition, as photographed faces do. There was a clear recognition advantage for happy expressions compared with other emotions; in contrast, recognizing sad and angry faces was more difficult. Concomitantly, eye-tracking results showed that participants relied more on part-based processing of both sketch and photographed faces during the second fixation. Extracting information from the eyes is needed when the expression conveys more complex emotional information and when stimuli are impoverished (e.g., sketches). Using electroencephalography (EEG), the P100 and N170 components were used to study the effects of stimulus type (sketch, photographed), orientation (inverted, upright), and deconfiguration, as well as possible interactions. The results also suggest that sketch faces evoke more part-based processing. The cues conveyed by diagnostic features may be subject to early processing, likely driven by low-level information within the P100 time window, followed by a later decoding of facial structure and its emotional content in the N170 time window. In sum, this thesis helps elucidate the debate about configural versus part-based face processing in emotion recognition and extends our current understanding of the role of diagnostic features and configural information in the neurocognitive processing of facial expressions of emotion.
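As an illustration of the P100/N170 measurements mentioned above, here is a minimal sketch assuming MNE-Python, with illustrative event codes, time windows, and occipito-temporal channel names; it is not the analysis pipeline used in the thesis.

```python
import mne

def erp_peaks(raw, events, event_id={"sketch": 1, "photo": 2}):
    """Epoch continuous EEG per condition and measure P100/N170 peaks."""
    epochs = mne.Epochs(raw, events, event_id=event_id,
                        tmin=-0.2, tmax=0.5, baseline=(None, 0), preload=True)
    results = {}
    for cond in event_id:
        # Average trials and keep occipito-temporal sensors typically used for N170.
        evoked = epochs[cond].average().pick(["PO7", "PO8", "P7", "P8"])
        p100 = evoked.get_peak(tmin=0.08, tmax=0.14, mode="pos",
                               return_amplitude=True)   # (channel, latency, amplitude)
        n170 = evoked.get_peak(tmin=0.13, tmax=0.20, mode="neg",
                               return_amplitude=True)
        results[cond] = {"P100": p100, "N170": n170}
    return results
```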
Vers des interfaces cérébrales adaptées aux utilisateurs : interaction robuste et apprentissage statistique basé sur la géométrie riemannienne by Emmanuel K Kalunga

1 edition published in 2017 in French and held by 1 WorldCat member library worldwide

In the last two decades, interest in Brain-Computer Interfaces (BCI) has grown tremendously, with many research laboratories working on the topic. Since Vidal's Brain-Computer Interface Project of 1973, where BCI was introduced for rehabilitative and assistive purposes, the use of BCI has been extended to applications such as neurofeedback and entertainment. This progress is largely due to an improved understanding of electroencephalography (EEG), better measurement techniques, and increased computational power. Despite the opportunities and potential of BCI, the technology has yet to reach maturity and be used outside laboratories. Several challenges need to be addressed before BCI systems can reach their full potential. This work examines some of these challenges in depth, namely the specificity of BCI systems to users' physical abilities, the robustness of EEG representation and machine learning, and the adequacy of training data. The aim is to provide a BCI system that adapts to individual users in terms of their physical abilities or disabilities and of the variability in recorded brain signals. To this end, two main avenues are explored. The first, which can be regarded as a high-level adjustment, is a change in BCI paradigms: creating new paradigms that increase performance, ease the discomfort of using BCI systems, and adapt to the user's needs. The second, regarded as a low-level solution, is the refinement of signal processing and machine learning techniques to enhance EEG signal quality, pattern recognition, and classification. On the one hand, a new methodology is defined in the context of assistive robotics: a hybrid approach in which a physical interface is complemented by a Brain-Computer Interface for human-machine interaction. This hybrid system makes use of the user's residual motor abilities and offers BCI as an optional choice: the user can choose when to rely on BCI and can alternate between the muscular- and brain-mediated interfaces at the appropriate time. On the other hand, for the refinement of signal processing and machine learning, this work uses a Riemannian framework. A major limitation in this field is the poor spatial resolution of EEG. This limitation is due to the volume conduction effect: the skull bones act as a non-linear low-pass filter, mixing the brain source signals and thus reducing the signal-to-noise ratio. Consequently, spatial filtering methods have been developed or adapted. Most of them (e.g., Common Spatial Patterns, xDAWN, and Canonical Correlation Analysis) are based on covariance matrix estimation. Covariance matrices are key to representing the information contained in the EEG signal and constitute an important feature for its classification. In most existing machine learning algorithms, covariance matrices are treated as elements of a Euclidean space. However, being Symmetric Positive-Definite (SPD), covariance matrices lie on a curved space identified as a Riemannian manifold. Using covariance matrices as features for classifying EEG signals and handling them with the tools of Riemannian geometry provides a robust framework for EEG representation and learning.
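The Riemannian pipeline summarized above can be captured in a short sketch using the open-source pyriemann and scikit-learn packages (covariance estimation followed by Minimum Distance to Mean classification); the data shapes and hyperparameters below are illustrative, not those of the thesis experiments.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score
from pyriemann.estimation import Covariances
from pyriemann.classification import MDM

# Toy EEG trials with shape (n_trials, n_channels, n_samples) and binary labels.
X = np.random.randn(40, 8, 256)
y = np.repeat([0, 1], 20)

# Spatial covariance matrices are SPD and lie on a Riemannian manifold;
# MDM classifies each trial by its Riemannian distance to the class means.
clf = make_pipeline(Covariances(estimator="oas"), MDM(metric="riemann"))
print(cross_val_score(clf, X, y, cv=5).mean())
```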
Deep learning methods for motor imagery detection from raw EEG : applications to brain-computer interfaces by Oleksii Avilov

1 edition published in 2021 in English and held by 1 WorldCat member library worldwide

This thesis presents three contributions to improve the recognition of the motor imagery used by many brain-computer interfaces (BCI) as a means of interaction. First, we propose to estimate the quality of motor imagery trials by detecting outliers and removing them before training. Next, we study feature selection for seven imagined movements. Finally, we present a deep learning architecture, based on the principles of the EEGNet network, that can be applied directly to simply filtered electroencephalographic signals and adapts to the number of electrodes. In particular, we show its benefits for improving the detection of intraoperative awareness, among other applications.
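The EEGNet-inspired architecture mentioned above can be sketched as a compact convolutional network; the PyTorch code below is a simplified illustration with assumed layer sizes, not the exact architecture developed in the thesis.

```python
import torch
import torch.nn as nn

class EEGNetLike(nn.Module):
    """Compact EEGNet-style CNN operating on raw (band-pass filtered) EEG."""
    def __init__(self, n_channels=22, n_samples=256, n_classes=2,
                 f1=8, d=2, f2=16, dropout=0.5):
        super().__init__()
        self.features = nn.Sequential(
            # Temporal convolution over the raw signal (batch, 1, channels, samples)
            nn.Conv2d(1, f1, (1, 64), padding="same", bias=False),
            nn.BatchNorm2d(f1),
            # Depthwise spatial convolution across electrodes
            nn.Conv2d(f1, f1 * d, (n_channels, 1), groups=f1, bias=False),
            nn.BatchNorm2d(f1 * d), nn.ELU(),
            nn.AvgPool2d((1, 4)), nn.Dropout(dropout),
            # Separable convolution: depthwise then pointwise
            nn.Conv2d(f1 * d, f1 * d, (1, 16), groups=f1 * d,
                      padding="same", bias=False),
            nn.Conv2d(f1 * d, f2, 1, bias=False),
            nn.BatchNorm2d(f2), nn.ELU(),
            nn.AvgPool2d((1, 8)), nn.Dropout(dropout),
            nn.Flatten(),
        )
        self.classifier = nn.Linear(f2 * (n_samples // 32), n_classes)

    def forward(self, x):                       # x: (batch, 1, channels, samples)
        return self.classifier(self.features(x))

model = EEGNetLike()
logits = model(torch.randn(4, 1, 22, 256))      # toy batch of 4 trials
```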
 
Audience Level
Audience level: 0.92 (from 0.88 for Vers des i ... to 0.99 for Implément ...)

Alternative Names
Sylvain Chevallier (researcher)

Languages