École doctorale Sciences et technologies de l'information et de la communication (Orsay, Essonne / 2015-....).
Overview
Works: 431 works in 440 publications in 2 languages and 440 library holdings

Roles: Other
Publication Timeline
Most widely held works by École doctorale Sciences et technologies de l'information et de la communication (Orsay, Essonne)
Machine d'essai de prothèse pour Transtibial et Transfemoral by Khaled Fouda
2 editions published in 2017 in English and held by 2 WorldCat member libraries worldwide
The objective of this work is to build a testing machine for prostheses. The machine should be able to reproduce the dynamic and kinematic conditions applied to a prosthesis during normal use. Statistics on the number and causes of amputations were collected, and the different types of leg prosthesis were classified by amputation height and as passive or active, differentiated by the nature of their actuator. Most existing prosthesis testing machines were studied from a technological and capability perspective; their limitations were identified, establishing the need to develop a new machine that fulfills these requirements.

We then studied and analyzed the dynamics of human gait and running, writing the equations of motion while taking into account the masses and moments of inertia of the skeletal segments. Most gait parameters were extracted, yielding the kinematic requirements on the human center of gravity: the testing machine should generate 6-DOF motion to emulate normal human gait and running. Three designs were proposed to implement the testing machine: an articulated robot arm, a Cartesian manipulator, and a Stewart Platform (SP). After evaluating the three solutions, we found the most suitable one to be the SP with an artificial active hip attached to it. Hydraulic power was chosen as the most suitable actuation technique, given the required actuation forces.

To help control the SP motion, a novel closed-form solution of the direct geometric model for planar and 6-6 Stewart Platforms was developed, using rotary sensors instead of the linear sensors usually fitted to the hydraulic actuators. A sensitivity analysis was carried out for this solution, and an analytical method for computing the workspace was also developed. The conclusion is that the testing machine can reproduce all the dynamics of the human body, i.e. walking, running, or going up and down stairs. The developed solution can carry out testing procedures for either passive or active prostheses.
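The thesis develops a closed-form *direct* geometric model driven by rotary sensors; that derivation is specific to the thesis. The complementary *inverse* geometric model of a general 6-6 platform (the six actuator lengths for a given platform pose) is standard, and a minimal sketch of it conveys the kinematic bookkeeping involved. The joint layout and pose convention below are illustrative assumptions, not taken from the thesis:

```python
# Inverse geometric model of a 6-6 Stewart Platform (sketch).
# Joint layout and pose convention are illustrative assumptions.
import numpy as np

def rotation_rpy(roll, pitch, yaw):
    """Rotation matrix from roll-pitch-yaw angles (Z-Y-X convention)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def leg_lengths(base_pts, plat_pts, pose):
    """Six actuator lengths for a platform pose.

    base_pts, plat_pts: (6, 3) joint positions in the base and
    platform frames; pose = (x, y, z, roll, pitch, yaw)."""
    t = np.asarray(pose[:3], dtype=float)
    R = rotation_rpy(*pose[3:])
    # Leg vector = rotated, translated platform joint minus base joint.
    legs = t + plat_pts @ R.T - base_pts
    return np.linalg.norm(legs, axis=1)

# Example: coincident joint circles, platform lifted 1 m straight up.
theta = np.linspace(0, 2 * np.pi, 6, endpoint=False)
pts = np.stack([np.cos(theta), np.sin(theta), np.zeros(6)], axis=1)
print(leg_lengths(pts, pts, (0.0, 0.0, 1.0, 0.0, 0.0, 0.0)))  # six 1.0s
```

The closed-form direct problem (pose from the six sensor readings) is the hard inverse of this map, which is what motivates the rotary-sensor formulation developed in the thesis.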
Jambe Humanoïde Hydraulique pour HYDROïD by Ahmed Abdellatif Hamed Ibrahim
2 editions published in 2018 in English and held by 2 WorldCat member libraries worldwide
The human body has always been a source of inspiration for engineers and scientists in every field worldwide. One of the most interesting topics of the last decade has been humanoid robots, which represent the most complex robotic systems. They offer greater mobility on rough, unstructured terrain than ordinary wheeled vehicles. In the future, humanoid robots are expected to be employed for a variety of hazardous tasks in areas such as rescue operations, elderly care, education, and humanitarian demining. The work in this thesis is carried out on the hydraulic humanoid robot HYDROïD, a hydraulically actuated humanoid with 52 active degrees of freedom designed to perform highly dynamic tasks such as walking, running, and jumping. Hydraulic actuation suits such a robot since hydraulic actuators have an excellent power-to-weight ratio and naturally absorb impact force peaks during the various activities. The objective of this thesis is to contribute to the development of highly dynamic robotic ankle and knee mechanisms. A new ankle mechanism is developed to overcome the performance drawbacks of the original ankle mechanism: lower leakage and friction rates are obtained, in addition to a pressure optimization for the ankle joints. Furthermore, a new solution for optimizing the weight of the hydraulic actuators is applied to the robot's knee mechanism; it relies on lightweight composite-material technology to achieve an optimized weight and performance for the joint. In order to apply control methodologies to the ankle and knee mechanisms, an inverse geometric model of both mechanisms is presented. Position control is used to control the joint angles of the ankle and knee mechanisms. Finally, conclusions and future perspectives are presented in the last chapter.
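The abstract closes with position control of the joint angles. As a minimal sketch of that idea, a PD loop can drive a joint angle toward a setpoint; the velocity-commanded joint model and the gains below are illustrative assumptions, not parameters from the thesis:

```python
# Joint position-control sketch: a PD loop tracking an angle setpoint.
# The integrator joint model and gains are illustrative assumptions.

def simulate(target_deg, steps=2000, dt=0.001, kp=8.0, kd=0.1):
    """Track a joint-angle setpoint; return the final angle (deg)."""
    angle = 0.0
    prev_err = target_deg - angle
    for _ in range(steps):
        err = target_deg - angle
        u = kp * err + kd * (err - prev_err) / dt  # PD command
        prev_err = err
        angle += u * dt  # joint modeled as a velocity-driven integrator
    return angle

print(round(simulate(30.0), 2))  # prints 30.0
```

A real hydraulic joint adds actuator dynamics, leakage and friction between the command and the angle, which is precisely what the mechanism redesign in the thesis addresses.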
Data quality for the decision of the ambient systems by Madjid Kara
2 editions published in 2018 in English and held by 2 WorldCat member libraries worldwide
Data quality is a common requirement of all information technology projects, and it has become a complex research field with the multiplication and expansion of data sources. Researchers have addressed data modeling and evaluation, and several approaches have been proposed, but they were limited to a specific domain of use and did not provide a quality profile allowing a global data quality model to be evaluated. Evaluation based on ISO quality models then emerged; however, these models give no guidance for their use, and they must be adapted to each case without any precise method. Our work focuses on the data quality problems of an ambient system, where the time constraints on decision-making are stronger than in traditional applications. The main objective is to provide the decision system with a very specific view of the quality of the sensor data. We identify the quantifiable aspects of sensor data in order to link them to the appropriate metrics of our specific data quality model.

Our work makes the following contributions: (i) creation of a generic data quality model based on several existing quality standards; (ii) formalization of the quality model as an ontology, which allows us to integrate the models from (i) by specifying the links, called equivalence relations, that exist between the criteria composing them; (iii) an instantiation algorithm to extract the specific data quality model from the generic one; (iv) a global evaluation approach for the specific data quality model using two processes: the first executes the metrics linked to the sensor data, and the second retrieves the result of this execution and uses fuzzy logic to evaluate the quality factors of our specific data quality model. The expert then sets values representing the weight of each factor, based on the interdependence table, to take into account the interaction between the various data criteria, and an aggregation procedure is used to obtain a degree of confidence. Based on this final result, the decision component performs an analysis and then makes a decision.
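The final aggregation step described above can be sketched simply: expert-assigned weights combine the per-factor quality scores (already in [0, 1], e.g. from the fuzzy-logic evaluation) into a single degree of confidence. The factor names, scores and weights below are illustrative assumptions, not values from the thesis:

```python
# Weighted aggregation of quality-factor scores into a confidence
# degree. Factor names, scores and weights are illustrative assumptions.

def confidence(scores, weights):
    """Weighted-average aggregation of quality-factor scores."""
    if scores.keys() != weights.keys():
        raise ValueError("each factor needs exactly one weight")
    total = sum(weights.values())
    return sum(scores[f] * weights[f] for f in scores) / total

factors = {"accuracy": 0.9, "completeness": 0.7, "timeliness": 0.5}
weights = {"accuracy": 0.5, "completeness": 0.3, "timeliness": 0.2}
print(round(confidence(factors, weights), 3))  # prints 0.76
```

In the approach of the thesis, the weights come from the expert via the interdependence table rather than being fixed constants as here.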
Nouvelles brasures sans plomb : conception des dispositifs d'essai, fabrication des échantillons et caractérisation by Quang Bang Tao (Book)
2 editions published in 2016 in English and held by 2 WorldCat member libraries worldwide
Nowadays, one strategy to improve the reliability of lead-free solder joints is to add minor alloying elements to the solder. In this study, new lead-free solders, namely InnoLot and SAC387Bi, which have begun to come into use in electronic packaging, were considered in order to study the effect of Ni, Sb and Bi, as well as that of the testing conditions and isothermal aging, on the mechanical properties and microstructure evolution. A new micro-tensile machine, which can perform tensile, compressive and cyclic tests over a range of speeds and temperatures, was designed and fabricated for testing miniature joint and bulk specimens. Additionally, the procedures to fabricate suitable lap-shear joint and bulk specimens are described in this research. The tests, including shear, tensile, creep and fatigue tests, were conducted on the micro-tensile and Instron machines under different test conditions. The first goal is to characterize experimentally the mechanical behavior and lifetime of solder joints submitted to isothermal aging and mechanical tests. The second goal of the project is to perform thermomechanical simulations of an IGBT under thermal cycling. The experimental results indicate that, with the addition of Ni, Sb and Bi to the SAC solder, the stress levels (UTS, yield stress) are improved. Moreover, testing conditions such as temperature, strain rate, amplitude and aging time may have substantial effects on the mechanical behavior and the microstructural features of the solder alloys. The enhanced strength and lifetime of the solders are attributed to the solid-solution hardening effect of Sb in the Sn matrix and to the refinement of the microstructure with the addition of Ni and Bi. The nine Anand material parameters are identified using the data from the shear and tensile tests, and the obtained values were then utilized to analyze the stress-strain response of an IGBT under thermal cycling.

The simulation results show that the response to thermal cycling of the new solders is better than that of the reference solder, suggesting that additions of minor elements can enhance the fatigue life of the solder joints. Finally, SEM/EDS and EPMA analyses of as-cast, as-reflowed and fractured specimens were performed to observe the effects of the above factors on the microstructure of the solder alloys.
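The nine-parameter Anand viscoplastic model mentioned above is standard in solder-joint simulation; for reference, its usual form (taken from the general literature, not reproduced from the thesis) couples a flow equation with an evolution equation for the deformation resistance s:

```latex
% Flow equation: inelastic strain rate
\dot{\varepsilon}_p = A \exp\!\left(-\frac{Q}{RT}\right)
    \left[\sinh\!\left(\xi\,\frac{\sigma}{s}\right)\right]^{1/m}

% Evolution of the deformation resistance s
\dot{s} = h_0 \left|1 - \frac{s}{s^*}\right|^{a}
    \operatorname{sign}\!\left(1 - \frac{s}{s^*}\right)\dot{\varepsilon}_p,
\qquad
s^* = \hat{s}\left(\frac{\dot{\varepsilon}_p}{A}
    \exp\!\left(\frac{Q}{RT}\right)\right)^{n}
```

The nine material parameters identified from the shear and tensile data are A, Q/R, xi, m, h_0, s-hat, n, a and the initial deformation resistance s_0.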
Les algorithmes d'apprentissage pour l'aide au stationnement urbain by Asma Houissa
2 editions published in 2018 in French and held by 2 WorldCat member libraries worldwide
The objective of this thesis is to develop, integrate and test a new algorithmic approach to assist parking in urban centers. Given the different types of deployed infrastructure, from input-output detection of vehicles to the time variation of the number of available places within each street segment, we propose an efficient method to determine an itinerary that minimizes the expected time to find an available place, and also to predict the availability of parking places. We have chosen an urban area and considered it as a set of parking resources called street segments. More precisely, this urban area is modeled as a graph in which the vertices represent the crossroads and the arcs represent the street segments. The essential parameters of our urban area model are the parking capacity and the crossing time of each street segment.

The originality and innovation of our approach rest on two principles. The first is guidance by resource: the proposed itinerary is not the one that leads to a particular available parking place, but rather the one that minimizes the expected time to find one. To achieve this, we determine, in an area centered on a given destination, the itinerary the vehicle should follow in order to minimize its expected time to find an available parking place as quickly as possible. We have designed and implemented a reinforcement learning algorithm based on the LRI (Linear Reward-Inaction) method and a Monte Carlo method to minimize this expected time in the urban area, and we have compared this algorithm to a global approach based on tree evaluation with bounded depth. The second principle is the prediction of parking places per homogeneous time period: we are not interested in a single parking place in real time, but rather in the parking places per area.

In other terms, the system predicts the potentially available parking places per resource for the next time periods. We do not aim to predict the availability of each individual parking place; instead, each resource is considered as a stock area whose availability is assessed mainly as a function of the input-output flow of the street segment. For this principle, we have determined, using a learning algorithm, the probability that there is at least one available parking place in a street segment within a given time. The main data needed to compute this probability are the time series of the input-output of each vehicle at street intersections and the variation of the available parking places over time. We have evaluated the performance of this approach by simulation, based on randomly generated data and on real data from a district in Versailles.
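The LRI (Linear Reward-Inaction) scheme cited above has a compact update rule: on a rewarded trial the chosen action's probability is reinforced and the others are scaled down; on an unrewarded trial nothing changes. The segment names and learning rate below are illustrative assumptions, not values from the thesis:

```python
# Linear Reward-Inaction (LRI) update over candidate street segments.
# Segment names and the learning rate are illustrative assumptions.

def lri_update(probs, chosen, rewarded, lam=0.1):
    """Return updated action probabilities after one trial.

    On success the chosen action is reinforced (p <- p + lam*(1-p))
    and the others are scaled by (1 - lam); on failure the
    probabilities are left unchanged (the 'inaction' part)."""
    if not rewarded:
        return dict(probs)
    new = {a: (1 - lam) * p for a, p in probs.items()}
    new[chosen] += lam  # equals (1-lam)*p_chosen + lam
    return new

p = {"segment_A": 0.5, "segment_B": 0.3, "segment_C": 0.2}
p = lri_update(p, "segment_A", rewarded=True)
print(p)
```

The update keeps the probabilities summing to one, so the vector remains a valid distribution over the segments after every trial.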
Stratégie de navigation sûre dans un environnement industriel partiellement connu en présence d'activité humaine by Gabriel Louis Burtin
2 editions published in 2019 in French and held by 2 WorldCat member libraries worldwide
In this work, we propose a safe system for robot navigation in an indoor, structured environment. The main idea is the use of two combined sensors (a lidar and a monocular camera) to ensure fast computation and robustness. The choice of these sensors is based on the physical principles behind their measurements: they are unlikely to be blinded by the same disturbance. The localization algorithm is fast and efficient while allowing for a degraded mode in case one sensor fails. To reach this objective, we optimized the data processing at different levels: we applied a polygonal approximation to the 2D lidar data and a vertical contour detection to the color image. The fusion of these data in an extended Kalman filter provides a reliable localization system. In case of a lidar failure, the Kalman filter still works; in case of a camera failure, the robot can rely upon lidar scan matching. The data provided by these sensors can also serve other purposes: the lidar locates doors, which are potential locations for encounters with humans, and the camera can help detect and track humans. This work was carried out and validated using an advanced robotic simulator (4DVSim), then confirmed with real experiments. This methodology allowed us both to develop our ideas and to confirm the usefulness of this robotic tool.
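The graceful degradation described above falls out of the Kalman filter's structure: each sensor contributes its own measurement update, so dropping one sensor simply skips its update. A scalar sketch (a 1-D state standing in for the robot pose; measurement values and noise variances are illustrative assumptions, not taken from the thesis) shows the mechanism:

```python
# Sequential Kalman measurement updates, one per sensor; losing a
# sensor just skips its update. 1-D state and noise values are
# illustrative assumptions.

def kalman_update(x, P, z, R):
    """Scalar Kalman measurement update.

    x, P: prior estimate and variance; z, R: measurement and its
    noise variance. Returns the posterior estimate and variance."""
    K = P / (P + R)                    # Kalman gain
    return x + K * (z - x), (1 - K) * P

x, P = 0.0, 4.0                         # vague prior
for z, R in [(1.2, 0.5), (0.8, 1.0)]:   # lidar fix, then camera fix
    x, P = kalman_update(x, P, z, R)
print(round(x, 3), round(P, 3))         # prints 0.985 0.308
```

The real system is an *extended* Kalman filter over the robot pose, linearizing the polygonal-lidar and vertical-contour measurement models at each step; the update algebra, however, has the same shape.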
Développement mécatronique et contrôle de l'exosquelette des membres inférieurs SOL0.1 by Moustafa Fouz (Book)
2 editions published in 2019 in English and held by 2 WorldCat member libraries worldwide
This thesis concerns the development of the control architecture and the trajectory generation for a scalable exoskeleton called SOL. The outcomes of the biomedical study revealed that the progression of the disease could be countered by early and continuous rehabilitation throughout growth. The use of an exoskeleton therefore has a positive impact, since it provides locomotion and rehabilitation at the same time. However, current exoskeletons cannot be adapted to fit the continuous change in a teenager's biomechanics throughout growth, so developing a scalable exoskeleton that can cope with growth remains a challenging topic. In particular, this thesis tackles the control architecture of such a scalable device, in both hardware and software, to incorporate the scalability features. Initial steps have been taken towards the goal of a scalable exoskeleton, contributing hardware developments that allow further enhancements to be included as the project advances. The firmware developments address the scalability needs in terms of control by considering the three hierarchical levels of control (high, middle, and low). More specifically, a focus was placed on generating the gait reference trajectories for a growing population. Data were collected from healthy subjects wearing a passive exoskeleton to extract the proper joint trajectories; the data were then processed to build a gait library to be deployed on the exoskeleton controller. Finally, knowing the subject's biomechanics, the controller is able to fetch the proper trajectories and inject the reference trajectories into SOL's actuators. A first prototype of the exoskeleton is used to demonstrate the outcomes of the proposed Evolutionary Gait Generator (E.G.G.).

On this first prototype, free walking motion in the air is tested, validating the proposed hardware and control loops. Studying the exoskeleton's control responses to probable external disturbances and fail-safe scenarios remains mandatory future work before the first human-exoskeleton tests.
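The gait-library idea above ("knowing the subject's biomechanics, the controller fetches the proper trajectories") can be sketched as a keyed lookup with interpolation between the nearest recorded entries. The keying by height, the key values and the angle samples below are illustrative assumptions, not data from the thesis:

```python
# Gait-library sketch: reference joint trajectories stored per
# subject key (here, height in cm), linearly interpolated for a
# subject between two recorded entries. Keys and sample values
# are illustrative assumptions.

def fetch_trajectory(library, height_cm):
    """Return a joint-angle trajectory for the given subject height,
    interpolating between the two nearest library entries."""
    keys = sorted(library)
    if height_cm <= keys[0]:
        return list(library[keys[0]])
    if height_cm >= keys[-1]:
        return list(library[keys[-1]])
    lo = max(k for k in keys if k <= height_cm)
    hi = min(k for k in keys if k > height_cm)
    w = (height_cm - lo) / (hi - lo)
    return [(1 - w) * a + w * b
            for a, b in zip(library[lo], library[hi])]

# Knee-angle samples (deg) over one gait cycle, keyed by height.
gait_library = {150: [0, 20, 60, 30, 5], 170: [0, 24, 68, 34, 5]}
print(fetch_trajectory(gait_library, 160))  # [0.0, 22.0, 64.0, 32.0, 5.0]
```

On the controller, the fetched trajectory would feed the middle and low control levels as the reference for the joint actuators.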
Machine Learning for Network Resource Management by
Nesrine Ben Hassine(
)
2 editions published in 2017 in English and held by 2 WorldCat member libraries worldwide
An intelligent exploitation of the data carried on telecom networks could lead to a very significant improvement in the quality of experience (QoE) for users. Machine Learning techniques offer multiple approaches that can help optimize the utilization of network resources. In this thesis, two application contexts for learning techniques are studied: Wireless Sensor Networks (WSNs) and Content Delivery Networks (CDNs). In WSNs, the question is how to predict the quality of the wireless links in order to improve the quality of the routes and thus increase the packet delivery rate, which enhances the quality of service offered to the user. In CDNs, it is a matter of predicting the popularity of videos in order to cache the most popular ones as close as possible to the users who request them, thereby reducing the latency to fulfill user requests. In this work, we have drawn upon learning techniques from two different domains, namely statistics and Machine Learning. Each learning technique is represented by an expert whose parameters are tuned after an offline analysis. Each expert is responsible for predicting the next metric value (i.e. popularity for videos in CDNs, quality of the wireless link for WSNs). The accuracy of the prediction is evaluated by a loss function, which must be minimized. Given the variety of experts selected, and since none of them always takes precedence over all the others, a second level of expertise is needed to provide the best prediction (the one that is closest to the real value and thus minimizes the loss function). This second level is represented by a special expert, called a forecaster. The forecaster provides predictions based on the values predicted by a subset of the best experts. Several methods are studied to identify this subset of best experts. They are based on the loss functions used to evaluate the experts' predictions and on the value k, representing the k best experts. 
The learning and prediction tasks are performed online on real data sets, from a WSN deployed at Stanford and from YouTube for the CDN. The methodology adopted in this thesis is applied to predicting the next value in a series of values. More precisely, we show how the quality of the links can be evaluated by the Link Quality Indicator (LQI) in the WSN context and how the Single Exponential Smoothing (SES) and Average Moving Window (AMW) experts can predict the next LQI value. These experts react quickly to changes in LQI values, whether it be a sudden drop in the quality of the link or a sharp increase in quality. We propose two forecasters, Exponentially Weighted Average (EWA) and Best Expert (BE), as well as the Expert-Forecaster combination, to provide better predictions. In the context of CDNs, we evaluate the popularity of each video by the number of requests for this video per day. We use both statistical experts (ARMA) and experts from the Machine Learning domain (e.g. DES, polynomial regression). These experts are evaluated according to different loss functions. We also introduce forecasters that differ in terms of the observation horizon used for prediction, the loss function and the number of experts selected for predictions. These predictions help decide which videos will be placed in the caches close to the users. The efficiency of the caching technique based on popularity prediction is evaluated in terms of hit rate and update rate. We highlight the contributions of this caching technique compared to a classical caching algorithm, Least Frequently Used (LFU). This thesis ends with recommendations on the use of online and offline learning techniques for networks (WSN, CDN). As perspectives, we propose different applications where the use of these techniques would improve the quality of experience for mobile users (cellular networks) or users of IoT (Internet of Things) networks, based, for instance, on Time Slotted Channel Hopping (TSCH)
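The expert/forecaster scheme described above can be sketched in a few lines. This is a minimal illustration, not the thesis code: the SES and AMW experts are real textbook predictors, but the toy LQI series, the squared loss and the learning rate eta are simplified stand-ins, and the EWA forecaster simply weights each expert by exp(-eta * cumulative loss).

```python
import math

def ses(history, alpha=0.5):
    """Single Exponential Smoothing expert: recursive weighted blend."""
    s = history[0]
    for x in history[1:]:
        s = alpha * x + (1 - alpha) * s
    return s

def amw(history, window=3):
    """Average Moving Window expert: mean of the last `window` values."""
    tail = history[-window:]
    return sum(tail) / len(tail)

def ewa_forecast(history, experts, cum_losses, eta=0.001):
    """EWA forecaster: each expert is weighted by exp(-eta * cumulative
    loss), shifted by the best loss for numerical stability, so experts
    that predicted badly in the past fade out of the combination."""
    best = min(cum_losses)
    weights = [math.exp(-eta * (c - best)) for c in cum_losses]
    preds = [e(history) for e in experts]
    return sum(w * p for w, p in zip(weights, preds)) / sum(weights)

# toy LQI series with a sudden link-quality drop
lqi = [100, 98, 97, 60, 62, 61]
experts = [ses, amw]
cum_losses = [0.0, 0.0]
for t in range(3, len(lqi)):          # accumulate squared loss per expert
    for i, e in enumerate(experts):
        cum_losses[i] += (e(lqi[:t]) - lqi[t]) ** 2
print(round(ewa_forecast(lqi, experts, cum_losses), 2))
```

Because both experts track the recent drop, the combined forecast stays near the new low LQI level rather than the pre-drop values.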
A scalable search engine for the Personal Cloud by
Saliha Lallali(
Book
)
2 editions published in 2016 in English and held by 2 WorldCat member libraries worldwide
A new embedded search engine designed for smart objects. These devices are typically equipped with extremely small amounts of RAM and a large NAND Flash storage capacity. To cope with these conflicting hardware constraints, conventional search engines favor either insertion scalability or query scalability, and cannot meet both requirements at the same time. Moreover, very few solutions support document deletions and updates in this context. We introduce three design principles, namely Write-Once Partitioning, Linear Pipelining and Background Linear Merging, and show how they can be combined to produce an embedded search engine reconciling a high rate of insertions, deletions and updates with fast queries. We implemented our search engine on a development board with a hardware configuration representative of smart objects and conducted extensive experiments using two representative datasets. The experimental results demonstrate the scalability of the approach and its superiority over state-of-the-art methods
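To make the three design principles concrete, here is a hypothetical, heavily simplified sketch. The class name, the flush threshold and the in-RAM dictionaries are invented for illustration; the real engine operates on NAND Flash under strict RAM bounds, which this toy does not model.

```python
class TinySearchIndex:
    """Hypothetical toy illustrating Write-Once Partitioning, Linear
    Pipelining and Background Linear Merging for an inverted index."""

    def __init__(self, flush_threshold=4):
        self.ram = {}           # term -> set of doc ids (small RAM buffer)
        self.partitions = []    # flushed partitions, never rewritten
        self.flush_threshold = flush_threshold

    def add(self, doc_id, terms):
        for t in terms:
            self.ram.setdefault(t, set()).add(doc_id)
        if len(self.ram) >= self.flush_threshold:
            # Write-Once Partitioning: freeze the buffer as an immutable
            # partition instead of updating Flash structures in place
            self.partitions.append(self.ram)
            self.ram = {}

    def merge_background(self):
        # Background Linear Merging: sequentially fold all partitions
        # into a single new one, then swap it in
        merged = {}
        for part in self.partitions:
            for t, docs in part.items():
                merged.setdefault(t, set()).update(docs)
        self.partitions = [merged]

    def query(self, term):
        # Linear Pipelining: scan the RAM buffer, then each partition,
        # in strictly sequential passes
        result = set(self.ram.get(term, set()))
        for part in self.partitions:
            result |= part.get(term, set())
        return result
```

Queries stay correct whether or not the background merge has run, since merging only changes how many partitions the linear scan visits.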
Apprentissage d'atlas fonctionnel du cerveau modélisant la variabilité interindividuelle by
Alexandre Abraham(
)
1 edition published in 2015 in English and held by 1 WorldCat member library worldwide
Recent studies have shown that resting-state spontaneous brain activity unveils intrinsic cerebral functioning and complements the information brought by prototypical task studies. From these signals, we will set up a functional atlas of the brain, along with an across-subject variability model. The novelty of our approach lies in the integration of neuroscientific priors and inter-individual variability in a probabilistic description of the rest activity. These models will be applied to large datasets. This variability, ignored until now, may lead to the learning of fuzzy atlases, thus limited in terms of resolution. This program yields both numerical and algorithmic challenges, because of the data volume but also because of the complexity of the modeling
An approach to measuring software systems using new combined metrics of complex test by
Sarah Dahab(
)
1 edition published in 2019 in English and held by 1 WorldCat member library worldwide
Most of the measurable software quality metrics are currently based on low-level metrics, such as cyclomatic complexity, number of comment lines or number of duplicated blocks. Likewise, the quality of software engineering is more related to technical or management factors, and should provide useful metrics for quality requirements. Currently the assessment of these quality requirements is not automated, not empirically validated in real contexts, and defined without considering the principles of measurement theory. It is therefore difficult to understand where and how to improve the software following the obtained result. In this domain, the main challenges are to define adequate and useful metrics for quality requirements, software design documents and other software artifacts, including testing activities. The main scientific problems tackled in this thesis are the following: the first consists in defining metrics and supporting tools for measuring modern software engineering activities with respect to efficiency and quality. The second consists in analyzing measurement results to identify what to improve, and how, automatically. The last consists in automating the measurement process in order to reduce development time. Such a highly automated and easy-to-deploy solution would be a breakthrough, as current tools do not support it except in a very limited scope
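As a concrete instance of the low-level metrics the abstract mentions, a rough cyclomatic-complexity counter fits in a few lines. This is a generic sketch, not a tool from the thesis: it counts branching constructs in a Python AST, which approximates McCabe's metric for simple functions.

```python
import ast

def cyclomatic_complexity(source):
    """Rough cyclomatic complexity: 1 + number of branching constructs.
    A low-level metric of the kind argued to be insufficient on its own."""
    tree = ast.parse(source)
    branch_nodes = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp)
    return 1 + sum(isinstance(n, branch_nodes) for n in ast.walk(tree))

code = """
def classify(x):
    if x < 0:
        return "negative"
    for i in range(x):
        if i % 2 == 0:
            print(i)
    return "done"
"""
print(cyclomatic_complexity(code))  # two ifs + one for -> 4
```

The point of the thesis stands out here: the number says nothing by itself about *where* or *how* to improve the code, which is why higher-level, requirement-oriented metrics are needed.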
Exploitation des mesures électriques en vue de la surveillance et du diagnostic en temps réel des piles à combustible pour
application transport automobile by
Miassa Taleb(
)
1 edition published in 2015 in French and held by 1 WorldCat member library worldwide
In the current global energy context, proton exchange membrane fuel cells represent a promising solution for the future development of a new generation of electrified vehicles, allowing greater autonomy than electrified vehicles using batteries. Nevertheless, the large-scale deployment of fuel cells remains limited due to some technological barriers, such as water management. To enable mass production of fuel cells, such problems must be solved. Several working axes may be envisaged, both on the hardware aspects of the fuel cell structure and from the control point of view, by developing algorithmic tools for monitoring the operating state of the system in order to detect any failures or degradations that may occur. The work of this thesis falls within this second approach and focuses specifically on the identification of the drying and flooding phenomena that can appear in a fuel cell, in order to diagnose any moisture problems leading to a reduction in yield. The methods developed in this work are based on the monitoring of relevant parameters of the fuel cell model whose changes, compared to reference values, are characteristic of the hydration state of the fuel cell. The real-time monitoring of these parameters can highlight the drying and flooding phenomena. The models adopted for this work are based on a representation of the electrical impedance of the fuel cell. Following this approach, the adopted strategy is based on the development of two electrical models: an integer-order model and a fractional-order model. It appears that the second formulation is closer to the physical reality of the transport phenomena occurring in the fuel cell, and allows a better representation of the fuel cell behavior in the time and frequency domains. 
Indeed, analyses based on experimental results obtained with a single fuel cell (100 cm2 active area, designed by the UBzM company) have validated that the fractional-order model, in return for an increase in complexity, allows, on the one hand, a better reproduction of the fuel cell's time-series voltage response (voltage monitoring for a given current profile) and, on the other hand, a better approximation of the measured impedance. Conventional and fractional-order parametric identification methods are then used to extract the model parameters from time-series experimental data (voltage/current from the fuel cell) or frequency data (impedance spectroscopy). A sensitivity analysis then allows the definition of the parameters most indicative of the flooding and drying phenomena. The evolution of these parameters, together with the voltage and impedance spectrum of the fuel cell, is then used to build a water-management diagnosis strategy for the fuel cell
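The fractional-order idea can be illustrated with a constant-phase-element (CPE) impedance model, a common fractional-order representation in the fuel-cell literature. This is an illustrative stand-in, not the model identified in the thesis, and the parameter values below are invented; alpha = 1 recovers an integer-order RC behavior, while alpha < 1 captures distributed transport effects.

```python
import cmath
import math

def fractional_impedance(omega, r_mem, r_ct, q, alpha):
    """Illustrative fractional-order (CPE) impedance:
    Z(w) = R_mem + R_ct / (1 + R_ct * Q * (j*w)**alpha)."""
    jw_a = (1j * omega) ** alpha
    return r_mem + r_ct / (1 + r_ct * q * jw_a)

# sweep a few frequencies, as in impedance spectroscopy
for f in (0.1, 1, 10, 100):
    z = fractional_impedance(2 * math.pi * f,
                             r_mem=0.01, r_ct=0.05, q=2.0, alpha=0.8)
    print(f"{f:6.1f} Hz  |Z| = {abs(z):.4f} ohm  "
          f"phase = {math.degrees(cmath.phase(z)):.1f} deg")
```

At high frequency the impedance collapses to the membrane resistance R_mem, and at low frequency it approaches R_mem + R_ct, which is exactly the kind of shift a drying or flooding diagnosis would track in the measured spectrum.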
Détection de données aberrantes appliquée à la localisation GPS by
Salim Zair(
)
1 edition published in 2016 in French and held by 1 WorldCat member library worldwide
In this work, we focus on the problem of detecting erroneous GPS measurements. Indeed, in urban areas, acquisitions are highly degraded by multipath phenomena, i.e. multiple reflections of the signal before it reaches the receiver antenna. In forest areas, satellite occlusion reduces the redundancy of the measurements. While the algorithms embedded in GPS receivers detect at most one erroneous measurement per epoch, the hypothesis of a single error at a time is no longer realistic when combining data from different navigation systems. The detection and management of erroneous data (faulty, aberrant or outliers, depending on the terminology) has become a major issue in autonomous navigation and robust localization applications, and raises a new technological challenge. The main contribution of this work is an outlier detection algorithm for GNSS localization based on a contrario modeling. Two criteria based on the number of false alarms (NFA) are used to measure the consistency of a set of measurements under the noise model assumption. Our second contribution is the introduction of Doppler measurements in the localization process. We extend the outlier detection to both pseudorange and Doppler measurements, and we propose a coupling with either the SIR particle filter or the Rao-Blackwellized particle filter, which allows us to estimate the velocity analytically. Our third contribution is an evidential approach to the detection of outliers in the pseudoranges. Inspired by RANSAC, we choose, among the possible combinations of observations, the most compatible one according to a measure of consistency or inconsistency. An evidential filtering step is then performed that takes the previous solution into account. The proposed approaches achieve better performance than standard methods and demonstrate the interest of removing the outliers from the localization process
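The consensus idea behind the RANSAC-inspired contribution can be sketched as a search for the largest subset of measurements whose spread matches the assumed noise level. This hypothetical toy works on plain scalars rather than real pseudoranges, and exhaustively enumerates subsets, which the actual method avoids.

```python
import itertools
import statistics

def most_consistent_subset(measurements, sigma, min_size=4):
    """RANSAC-inspired sketch: return the largest subset of the
    measurements whose internal spread is consistent with a Gaussian
    noise level sigma, or None if no subset of min_size qualifies."""
    for size in range(len(measurements), min_size - 1, -1):
        for subset in itertools.combinations(measurements, size):
            if statistics.pstdev(subset) <= 2 * sigma:  # consistency test
                return list(subset)  # largest consistent subset wins
    return None

ranges = [100.2, 99.8, 100.1, 100.0, 137.5]  # last value: multipath outlier
inliers = most_consistent_subset(ranges, sigma=0.5)
print(inliers)
```

The full set fails the test because the multipath-corrupted value inflates the spread, so the search drops to size 4 and keeps only the mutually consistent measurements, mimicking the removal of outliers before the localization step.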
Knowledge Discovery for Avionics Maintenance : An Unsupervised Concept Learning Approach by
Luis Palacios Medinacelli(
)
1 edition published in 2019 in English and held by 1 WorldCat member library worldwide
In this thesis we explore the problem of signature analysis in avionics maintenance, in order to identify failures in faulty equipment and suggest corrective actions to resolve them. The thesis takes place in the context of a CIFRE convention between Thales R&T and the Université Paris-Sud, so it has both a theoretical and an industrial motivation. The signature of a failure provides all the information necessary to understand, identify and ultimately repair the failure; when identifying it, it is therefore important to make it explainable. We propose an ontology-based approach to model the domain, which provides a level of automatic interpretation of the highly technical tests performed on the equipment. Once the tests can be interpreted, corrective actions are associated to them. The approach is rooted in concept learning, used to approximate the description logic concepts that represent the failure signatures. Since these signatures are not known in advance, we require an unsupervised learning algorithm to compute the approximations. In our approach, the learned signatures are provided as description logic (DL) definitions, which in turn are associated to a minimal set of axioms in the ABox. These serve as explanations for the discovered signatures, thus providing a glass-box approach to trace how and why a signature was obtained. Current concept learning techniques are either designed for supervised learning problems, or rely on frequent patterns and large amounts of data. We take a different perspective and rely on a bottom-up construction of the ontology. Similarly to other approaches, the learning process is achieved through a refinement operator that traverses the space of concept expressions, but an important difference is that in our algorithms this search is guided by the information about the individuals in the ontology. 
To this end, the notions of justifications in ontologies, most specific concepts and concept refinements are revisited and adapted to our needs. The approach is then adapted to the specific avionics maintenance case at Thales Avionics, where a prototype has been implemented to test and evaluate the approach as a proof of concept
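The bottom-up direction can be illustrated with a toy "most specific concept" computed over sets of attributes. Real DL concept learning uses refinement operators over far richer concept expressions; this hypothetical sketch, with invented unit names and test attributes, only shows the idea of generalizing from the individuals themselves.

```python
def most_specific_concept(individuals, assertions):
    """Toy bottom-up generalization: approximate the most specific
    concept covering a set of individuals by the attributes they all
    share, standing in for a DL least common subsumer."""
    shared = None
    for ind in individuals:
        attrs = assertions[ind]
        shared = set(attrs) if shared is None else shared & attrs
    return shared

# hypothetical ABox-like observations for three faulty units
abox = {
    "unit1": {"PowerTestFailed", "HighTemperature", "BuiltInTestRun"},
    "unit2": {"PowerTestFailed", "HighTemperature", "VibrationDetected"},
    "unit3": {"MemoryTestFailed", "BuiltInTestRun"},
}
signature = most_specific_concept(["unit1", "unit2"], abox)
print(signature)  # attributes shared by both units -> candidate signature
```

The shared attributes play the role of a candidate failure signature, and each unit's remaining assertions are exactly the kind of minimal ABox evidence that would explain why the signature was learned.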
Classification of P-oligomorphic groups, conjectures of Cameron and Macpherson by
Justine Falque(
)
1 edition published in 2019 in English and held by 1 WorldCat member library worldwide
This PhD thesis falls within the fields of algebraic combinatorics and group theory. More precisely, it contributes to the domain that studies the profiles of oligomorphic permutation groups and their behavior. The first part of this manuscript introduces most of the tools that will be needed later on, starting with elements of combinatorics and algebraic combinatorics. We define counting functions through classical examples; with a view to studying them, we argue the relevance of endowing the counted objects with a graded algebra structure. We also bring up the notions of order and lattice. Then, we provide an overview of the basic definitions and properties related to permutation groups and invariant theory. We end this part with a description of the Pólya enumeration method, which allows one to count objects under a group action. The second part introduces the domain within whose scope this thesis falls. It dwells on profiles of relational structures, and more specifically orbital profiles. If G is an infinite permutation group, its profile is the counting function which maps any n > 0 to the number of orbits of n-subsets, for the induced action of G on the finite subsets of elements. Cameron conjectured that the profile of G is asymptotically equivalent to a polynomial whenever it is bounded by a polynomial. Another, stronger conjecture was later made by Macpherson: it involves a certain graded algebra structure on the orbits of subsets created by Cameron, the orbit algebra, and states that if the profile of G is bounded by a polynomial, then its orbit algebra is finitely generated. As a start in our study of this problem, we develop some examples and get our first hints towards a resolution by examining the block systems of groups with profile bounded by a polynomial (which we call P-oligomorphic), as well as the notion of subdirect product. The third part is the proof of a classification of P-oligomorphic groups, with Macpherson's conjecture as a corollary. First, we study the combinatorics of the lattice of block systems, which leads to identifying one special, generalized such system, consisting of blocks of blocks with good properties. We then tackle the elementary case in which there is only one such block of blocks, for which we establish a classification. The proof borrows from the subdirect product concept to handle synchronizations within the group, and relied on an experimental computer approach to first conjecture the classification. In the general case, we evidence the structure of a semidirect product involving the minimal normal subgroup of finite index and some finite group. This allows us to formalize a classification of all P-oligomorphic groups, the main result of this thesis, and to deduce the form of the orbit algebra: (little more than) an explicit algebra of invariants of a finite group. This implies the conjectures of Macpherson and Cameron, and a deep understanding of these groups. The appendix provides parts of the code that was used, and a glimpse of the code resulting from the classification afterwards, which allows one to manipulate P-oligomorphic groups through appropriate algorithmics. Finally, we include our earlier (weaker) proof of the conjectures
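The profile described in this abstract (the number of orbits of n-subsets under a group action) can be illustrated on a finite toy analogue, even though the thesis concerns infinite groups. The sketch below, which is illustrative and not taken from the thesis, counts orbits by brute force for the cyclic group C5:

```python
from itertools import combinations

def orbit_count(group, points, n):
    """Count orbits of n-subsets under `group`, a list of
    permutations given as tuples of images of `points`."""
    seen = set()
    for subset in combinations(points, n):
        # canonical representative: lexicographically smallest image
        canon = min(tuple(sorted(g[p] for p in subset)) for g in group)
        seen.add(canon)
    return len(seen)

# Cyclic group C5 acting on {0,...,4}: rotations i -> i + k mod 5
points = range(5)
C5 = [tuple((i + k) % 5 for i in points) for k in range(5)]

# The finite analogue of the profile: orbit counts for each subset size
profile = [orbit_count(C5, points, n) for n in range(6)]
```

For C5 the 2-subsets split into two orbits (adjacent pairs and pairs at distance two), so the profile is [1, 1, 2, 2, 1, 1]; the thesis studies the asymptotics of such counting functions when the acting group is infinite.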
Uncertainties in Optimization by
Marie-Liesse Cauwet(
)
1 edition published in 2016 in English and held by 1 WorldCat member library worldwide
This research is motivated by the need to develop new optimization methods for power systems. In this domain, the usual control and investment methods are now limited by problems involving a large share of randomness, which arise with the massive introduction of renewable energies. After presenting the different facets of power system optimization, we discuss the noisy continuous black-box optimization problem, and then noisy cases with additional characteristics. Regarding the contribution to noisy continuous black-box optimization, we study lower and upper bounds on the convergence rate of different families of algorithms. We study the convergence of comparison-based algorithms, in particular Evolution Strategies, under different noise levels (small, moderate and strong). We also extend the convergence results of evaluation-based algorithms in the small-noise setting. Finally, we propose a selection method for choosing the best algorithm, among a portfolio of noisy optimization algorithms, on a given problem. As for the contribution to noisy cases with additional constraints, the delicate cases, we introduce concepts from reinforcement learning, decision theory and statistics. The objective is to propose optimization methods that are closer to reality (in terms of modeling) and more robust. We also look for less conservative reliability criteria for power systems.
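The comparison-based algorithms mentioned in this abstract can be illustrated with a minimal (1+1) Evolution Strategy on a noisy sphere function. Resampling and averaging each candidate is one standard noise countermeasure; this is a hypothetical sketch, not the thesis's algorithms or convergence-rate analysis:

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_sphere(x):
    """Black-box objective: sphere function plus additive Gaussian noise."""
    return float(np.sum(x ** 2) + rng.normal(0.0, 0.1))

def one_plus_one_es(f, x0, sigma=0.5, iters=2000, resample=10):
    """Minimal (1+1) Evolution Strategy. Each point is re-evaluated
    `resample` times and averaged to tame the noise."""
    x = np.asarray(x0, dtype=float)
    fx = np.mean([f(x) for _ in range(resample)])
    for _ in range(iters):
        y = x + sigma * rng.standard_normal(x.size)
        fy = np.mean([f(y) for _ in range(resample)])
        if fy < fx:                    # comparison-based acceptance
            x, fx, sigma = y, fy, sigma * 1.1
        else:
            sigma *= 0.98              # step-size decay on rejection
    return x

x_star = one_plus_one_es(noisy_sphere, [2.0, -1.5])
```

Because acceptance depends only on comparisons of (averaged) noisy values, the residual noise sets a floor on how precisely the optimum can be located, which is exactly the regime the thesis's bounds characterize.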
Apprentissage de graphes structuré et parcimonieux dans des données de haute dimension avec applications à l'imagerie cérébrale by
Eugene Belilovsky(
)
1 edition published in 2018 in French and held by 1 WorldCat member library worldwide
This dissertation presents novel structured sparse learning methods on graphs that address commonly found problems in the analysis of neuroimaging data as well as other high-dimensional data with few samples. The first part of the thesis proposes convex relaxations of discrete and combinatorial penalties involving sparsity and bounded total variation on a graph, as well as a bounded ℓ2 norm. These are developed with the aim of learning an interpretable predictive linear model, and we demonstrate their effectiveness on neuroimaging data as well as on a sparse image recovery problem. The subsequent parts of the thesis consider structure discovery of undirected graphical models from few observational samples. In particular, we focus on invoking sparsity and other structured assumptions in Gaussian Graphical Models (GGMs). To this end we make two contributions. We show an approach to identify differences between Gaussian Graphical Models (GGMs) known to have similar structure. We derive the distribution of parameter differences under a joint penalty when the parameters are known to be sparse in the difference. We then show how this approach can be used to obtain confidence intervals on edge differences in GGMs. We then introduce a novel learning-based approach to the problem of structure discovery of undirected graphical models from observational data. We demonstrate how neural networks can be used to learn effective estimators for this problem. These are empirically shown to be flexible and efficient alternatives to existing techniques.
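The idea of a sparse difference between two GGMs can be illustrated naively: invert each empirical covariance to get precision matrices, then soft-threshold their entrywise difference so that only changed conditional dependencies (edges) survive. This is an illustrative sketch, not the thesis's joint-penalty estimator or its confidence intervals:

```python
import numpy as np

rng = np.random.default_rng(0)

def precision_difference(X1, X2, threshold=0.3):
    """Naive sketch: estimate each precision matrix by inverting the
    empirical covariance, then soft-threshold the entrywise difference
    to expose sparse edge changes between the two GGMs."""
    P1 = np.linalg.inv(np.cov(X1, rowvar=False))
    P2 = np.linalg.inv(np.cov(X2, rowvar=False))
    D = P1 - P2
    return np.sign(D) * np.maximum(np.abs(D) - threshold, 0.0)

# Two 3-variable Gaussians differing in a single conditional dependency
cov1 = np.eye(3)
cov2 = np.array([[1.0, 0.6, 0.0],
                 [0.6, 1.0, 0.0],
                 [0.0, 0.0, 1.0]])
X1 = rng.multivariate_normal(np.zeros(3), cov1, size=5000)
X2 = rng.multivariate_normal(np.zeros(3), cov2, size=5000)
D = precision_difference(X1, X2)   # only the (0,1)/(1,0) edge should stand out
```

With few samples this plug-in inverse is unstable, which is precisely why the thesis works with a joint penalty exploiting sparsity of the difference.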
Amélioration de la partie supérieure du robot HYDROïD pour les tâches bimanuelles et la manipulation by
Ahmad Tayba(
)
1 edition published in 2017 in English and held by 1 WorldCat member library worldwide
My thesis aims at contributing to the development and improvement of the upper body of the HYDROïD robot for bimanual tasks, building on a biomechanical study of this part of the human body. To reach our major goal, this work first adopts a novel hybrid structure with 4 degrees of freedom (DOF) for the trunk of the robot, distributed as three DOF at the lumbar level and one DOF at the thoracic level. This structure was identified through an analysis of the workspace of a multibody model mimicking the human vertebral column, together with an optimization study of that model allowing the synthesis of the envisaged structure. Secondly, an improvement of the kinematics of the robot arm was carried out by introducing the notion of the shoulder complex into the present structure. The choice of this new degree of freedom was the fruit of a systematic approach to bring the geometry of the arm closer to that of a human arm of the same size. The two proposed structures then went through the mechanical design phase, respecting all the geometric constraints and using hydraulic power as the actuation principle of these systems. Finally, the Inverse Geometric Model (IGM) for the generic solution of the trunk was established and its adaptation to our particular case was identified. An optimized solution for this mechanism, based on two different criteria, was then given.
Modeling the speed-accuracy tradeoff using the tools of information theory by
Julien Gori(
)
1 edition published in 2018 in English and held by 1 WorldCat member library worldwide
Fitts' law, which relates the movement time MT in a pointing task to the dimensions D and W of the targeted object, is usually expressed through an imitation of Shannon's capacity formula, MT = a + b log2(1 + D/W). However, the current analysis is unsatisfactory: it stems from a simple analogy between the pointing task and the transmission of a signal over a noisy channel, without any explicit communication model. I first develop a transmission model for pointing, in which the index of difficulty ID = log2(1 + D/W) can be expressed both as a source entropy and as a channel capacity, thereby reconciling Fitts' approach with Shannon's information theory. This model is then exploited to analyze pointing data collected in controlled experiments as well as in real-world use. I then develop a second model, focused on the strong variability characteristic of human movement, which accounts for the wide diversity of motor control mechanisms: with or without feedback, intermittent or continuous. From a chronometry of the positional variance, evaluated over a set of trajectories, one observes that the movement can be split into two phases: a first one, in which the variance increases and a large part of the distance to cover is traveled, followed by a second one, during which the variance decreases to satisfy the accuracy constraints required by the task. In the second phase, the pointing problem can be reduced to a Shannon-style communication problem, in which information is transmitted from a "source" (the variance at the end of the first phase) to a "destination" (the limb extremity) over a Gaussian channel with a feedback link. I show that the optimal solution to this transmission problem amounts to a scheme proposed by Elias. I show that the variance can decrease at best exponentially during the second phase, and that this result directly implies Fitts' law.
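In practice, fitting Fitts' law to pointing data amounts to an ordinary least-squares regression of movement time on the index of difficulty. A minimal sketch with synthetic, noise-free data (the values a = 0.2 s and b = 0.1 s/bit are arbitrary, chosen only to verify the fit):

```python
import numpy as np

# Synthetic pointing conditions: target distances D and widths W
D = np.array([4.0, 8.0, 16.0, 32.0, 8.0, 16.0])
W = np.array([1.0, 1.0, 1.0, 1.0, 2.0, 4.0])

ID = np.log2(1.0 + D / W)        # index of difficulty (bits)
MT = 0.2 + 0.1 * ID              # movement times generated from Fitts' law

# Fit MT = a + b * ID by ordinary least squares
A = np.column_stack([np.ones_like(ID), ID])
(a, b), *_ = np.linalg.lstsq(A, MT, rcond=None)
```

Here the recovered intercept a and slope b match the generating values exactly, since no noise was added; with real data, 1/b is commonly reported as a throughput in bits per second.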
Unsupervised and weakly supervised deep learning methods for computer vision and medical imaging by
Mihir Sahasrabudhe(
)
1 edition published in 2020 in English and held by 1 WorldCat member library worldwide
The first two contributions of this thesis (Chapters 2 and 3) are models for unsupervised 2D alignment and learning of 3D object surfaces, called Deforming Autoencoders (DAE) and Lifting Autoencoders (LAE). These models are capable of identifying a canonical space in order to represent different object properties: for example, appearance in a canonical space, the deformation associated with this appearance that maps it to the image space, and, for human faces, a 3D model of a face, its facial expression, and the angle of the camera. We further illustrate applications of the models to other domains: alignment of lung MRI images in medical image analysis, and alignment of satellite images for remote sensing imagery. In Chapter 4, we concentrate on a problem in medical image analysis: the diagnosis of lymphocytosis. We propose a convolutional network to encode images of blood smears obtained from a patient, followed by an aggregation operation that gathers information from all images into one feature vector, which is used to determine the diagnosis. Our results show that the performance of the proposed models is on par with biologists and can therefore augment their diagnosis.
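The aggregation step described in this abstract can be sketched as mean pooling of per-image feature vectors followed by a linear classifier. The feature values, weights and bias below are hypothetical placeholders; in the thesis the features come from a trained convolutional encoder and the classifier is learned end to end:

```python
import numpy as np

def diagnose(image_features, w, b):
    """Sketch of the aggregation: per-image feature vectors are
    mean-pooled into one patient-level vector, then scored by a
    linear classifier. Returns the positive-class probability."""
    patient_vec = image_features.mean(axis=0)   # aggregate over images
    return 1.0 / (1.0 + np.exp(-(patient_vec @ w + b)))

# Hypothetical 2-D features for three blood-smear images of one patient
feats = np.array([[0.2, 1.0],
                  [0.4, 0.8],
                  [0.0, 1.2]])
w = np.array([1.5, -0.5])        # placeholder "trained" weights
p = diagnose(feats, w, b=0.1)    # patient-level probability in (0, 1)
```

Mean pooling makes the prediction invariant to the order and number of images per patient, which is the point of aggregating before classification.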
Related Identities
 Université Paris-Saclay (2015-2019) Degree grantor
 Université Paris-Sud (1970-2019) Other
 Laboratoire des signaux et systèmes (Gif-sur-Yvette, Essonne / 1998-....) Other
 CentraleSupélec (2015-....) Other
 Université de Versailles-Saint-Quentin-en-Yvelines Other
 Laboratoire de recherche en informatique (Orsay, Essonne / 1998-2020) Other
 Laboratoire Traitement et communication de l'information (Paris / 2003-....) Other
 Laboratoire d'informatique pour la mécanique et les sciences de l'ingénieur (Orsay, Essonne / 1972-....) Other
 Télécom Paris (Palaiseau) Other
 Institut national des télécommunications (Evry) Other