WorldCat Identities

Ecole doctorale Sciences et Technologies de l'Information, des Télécommunications et des Systèmes (Orsay, Essonne / 2000-2015)

Overview
Works: 372 works in 436 publications in 2 languages and 384 library holdings
Roles: Other, Degree grantor
Most widely held works by Ecole doctorale Sciences et Technologies de l'Information, des Télécommunications et des Systèmes (Orsay, Essonne / 2000-2015)
Neural networks as cellular computing models for temporal sequence processing. by Bassem Khouzam( Book )

3 editions published in 2014 in English and held by 3 WorldCat member libraries worldwide

The thesis proposes a sequence-learning approach that uses the mechanism of fine-grain self-organization. The manuscript starts by situating this effort in the perspective of contributing to the promotion of the cellular computing paradigm in computer science. Computation within this paradigm is divided into a large number of elementary calculations carried out in parallel by computing cells, with information exchange between them. In addition to their fine-grain nature, the cellular nature of such architectures lies in the spatial topology of the connections between cells, which complies with the constraints of the future technological evolution of hardware. In the manuscript, most of the distributed architectures known in computer science are examined from this perspective, with the finding that very few of them fall within the cellular paradigm. We are interested in the learning capacity of these architectures because of the importance of this notion in the related domain of neural networks, for example, without forgetting that cellular systems are complex dynamical systems by construction. This inevitable dynamical component motivated our focus on the learning of temporal sequences, for which we reviewed the different models in the domains of neural networks and self-organizing maps. Finally, we proposed an architecture that contributes to the promotion of cellular computing in the sense that it exhibits self-organization properties employed to extract a representation of the states of the dynamical system that provides the architecture with its inputs, even when the latter are ambiguous in that they only partially reflect the system state. We took advantage of an existing supercomputer to simulate this complex architecture, which indeed exhibited a new emergent behavior. Based on these results we carried out a critical study that sets the perspective for future work
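As a point of reference for the self-organization mechanism the abstract builds on, the following is a minimal one-dimensional Kohonen self-organizing map, a classical sketch rather than the thesis's cellular architecture; the cell count, learning rate and neighborhood radius are illustrative choices.

```python
import numpy as np

# Minimal 1-D self-organizing (Kohonen) map. Each computing cell holds one
# weight; on every input the best-matching cell and its neighbors move
# toward that input, a local, cell-wise update in the cellular spirit.
rng = np.random.default_rng(1)
n_cells = 20
w = rng.random(n_cells)                       # one weight per cell

def train_step(x, w, lr=0.2, radius=2.0):
    bmu = np.argmin(np.abs(w - x))            # best-matching unit
    idx = np.arange(len(w))
    h = np.exp(-((idx - bmu) ** 2) / (2 * radius ** 2))   # neighborhood
    return w + lr * h * (x - w)               # cells move toward the input

for _ in range(2000):
    w = train_step(rng.random(), w)           # uniform inputs on [0, 1)
# After training, the cell weights spread over the input range, giving a
# topology-preserving representation of the input distribution.
```

The update is purely local (a winner and its neighborhood), which is what makes this family of models attractive for fine-grain cellular substrates.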
Exploitation de corrélations spatiales et temporelles en tomographie par émission de positrons by Florent Sureau( Book )

3 editions published between 2008 and 2010 in French and held by 3 WorldCat member libraries worldwide

In this thesis we propose, implement, and evaluate algorithms that improve spatial resolution in reconstructed images and reduce data noise in positron emission tomography. These algorithms were developed for a high-resolution tomograph (HRRT) and applied to brain imaging, but can be used for other tomographs or studies. We first developed an iterative reconstruction algorithm including a stationary and isotropic model of resolution in image space, measured experimentally. We evaluated the impact of this resolution model in Monte-Carlo simulations, physical phantom experiments and two clinical studies, by comparing our algorithm with a reference reconstruction algorithm. This study suggests that biases due to partial volume effects are reduced, in particular in the clinical studies. Better spatial and temporal correlations are also found at the voxel level. However, other methods should be developed to further reduce data noise. We then proposed a maximum a posteriori denoising algorithm that can be applied to dynamic data, to temporally denoise either raw data (sinograms) or reconstructed images. The prior models the coefficients, in a wavelet basis, of the noise-free signals (in an image or sinogram). We compared this technique with a reference denoising method on replicated simulations, which illustrates the potential benefits of our sinogram denoising approach
Contribution à la conception de systèmes mécatroniques automobiles : méthodologie de pré-dimensionnement multi-niveau multi-physique de convertisseurs statiques by Kamal Ejjabraoui( Book )

2 editions published in 2010 in French and held by 2 WorldCat member libraries worldwide

The work in this thesis was carried out within the O2M project (Outil de Modélisation Mécatronique), certified by the Mov'eo and System@tic competitiveness clusters, whose objective is to develop a new generation of tools dedicated to the different design phases of automotive mechatronic systems. We showed through this thesis the absence of a software platform allowing all the elements of a mechatronic actuation chain to be designed with the same level of detail, and of a global methodology capable of formalizing the choice of architecture while considering several multi-physics constraints, including 3D integration constraints. In this context, within the "Pre-sizing" sub-project on which this work mainly focuses, we developed a pre-sizing approach for mechatronic systems organized in three levels: choice of architecture and component technologies, optimization under multi-physics constraints, and optimization integrating 3D numerical simulation. An evaluation of the most widespread simulation and design tools against various criteria led to the conclusion that a mechatronic software platform can be an association of tools such as MATLAB-SIMULINK, DYMOLA and AMESim for pre-sizing levels 1 and 2, and COMSOL for level 3. The proposed approach was adapted to an essential element of the mechatronic chain, the DC-DC converter. Technological databases of active and passive components were built to feed the pre-sizing process, and the models required at each level of the approach were developed. At the first level, these models allow the architecture to be chosen, the volume of the components to be estimated quickly, and the component technologies to be selected according to a major constraint (volume in our case). At the second level, they support optimization under multi-physics constraints (volume, efficiency, temperature, electromagnetic spectrum and control). Finally, at the third level, two software packages were combined (COMSOL for finite-element simulation and MATLAB as the optimization environment) to optimize the placement of power components under thermal constraints, using a finer thermal model based on the finite-element method. The approach was applied to three specifications: a buck converter, a boost converter and a three-phase inverter. Single-objective (volume) and multi-objective (volume and efficiency, volume and response time) optimizations under multi-physics constraints were carried out. These optimizations showed the direct impact of control-related constraints (response time, stability), on a par with those classically used in the design of static converters. Moreover, we showed that 3D integration risks can be eliminated very early in the design phase of these converters. The proposed multi-level, multi-physics pre-sizing approach meets the needs expressed by the industrial partners of the O2M project in terms of design methodology for automotive mechatronic systems
Circuits de lecture innovants pour capteur infrarouge bolométrique by Benoît Dupont( Book )

2 editions published in 2008 in French and held by 2 WorldCat member libraries worldwide

This thesis concerns improving the image quality of bolometric infrared detectors by reducing fixed-pattern noise. It first addresses the problem of uncooled thermal infrared image acquisition and shows how fixed-pattern noise becomes the predominant issue when evaluating the image quality of state-of-the-art sensors. An analytical model of a detector is then developed to identify the technological parameter that dominates response dispersion. It is thus shown that the resistance prefactor is by far the main cause of signal dispersion in bolometer arrays. This work is supported by measurements on existing components. A digital correction algorithm for the dispersion induced by this predominant factor is developed; its correction performance is demonstrated and its limits are made explicit. To overcome these limitations, a new mixed digital/analog architecture is proposed and validated by simulation. Finally, two circuits for reducing secondary factors are presented and tested, and their functionality and limits are demonstrated in a final chapter. This thesis gave rise to five patent applications and six scientific communications. Five study circuits were produced during this work and their operation is described in the manuscript
Métamatériaux tout diélectrique micro-ondes by Thomas Lepetit( Book )

2 editions published in 2010 in French and held by 2 WorldCat member libraries worldwide

Metamaterials are periodic structures with a negative permeability and/or permittivity. The unprecedented control of electromagnetic properties afforded by these materials paves the way towards new applications. In this thesis, the study of dielectric metamaterials aims at reducing a major drawback, losses. A thorough study of dielectric resonators, the key components of dielectric metamaterials, was carried out. It led to experimental proof, in the microwave domain, of a negative permeability, permittivity and refractive index around the resonance frequencies of said resonators. Finally, an alternative to the two-resonator paradigm for obtaining a negative index, a bimodal resonator, was proposed and experimentally demonstrated
Analyse de la dynamique neuronale pour les Interfaces Cerveau-Machines : un retour aux sources by Michel Besserve( Book )

2 editions published in 2007 in French and held by 2 WorldCat member libraries worldwide

Brain-Computer Interfaces are devices that establish a communication channel between the human brain and the outside world without using the usual nerve and muscle pathways. The development of such systems lies at the interface of signal processing, statistical learning and neurophysiology. In this thesis, we built and studied a non-invasive asynchronous Brain-Computer Interface, i.e. one capable of identifying mental actions associated with imagined motor or cognitive tasks without synchronization to an event controlled by an external system. It is based on the real-time analysis of electroencephalographic (EEG) signals from electrodes placed on the surface of a human subject's head. Methodologically, we implemented several preprocessing methods for these signals and compared their influence on system performance. These methods include: 1) direct use of the signals from the EEG sensors, 2) source-separation methods that summarize the EEG signals with a small number of spatial components, and 3) reconstruction of the activity of cortical current sources by solving the EEG inverse problem. In addition, several measures quantifying brain activity are used and compared: spectral power, coherence and phase synchrony. Our results show that prior reconstruction of cortical activity via the inverse problem, together with long-range interaction measures, improves the performance of the system
Optimisation de fonctions coûteuses ; Modèles gaussiens pour une utilisation efficace du budget d'évaluations : théorie et pratique industrielle by Julien Villemonteix( Book )

2 editions published in 2008 in French and held by 2 WorldCat member libraries worldwide

This dissertation is driven by a question central to many industrial optimization problems: how to optimize a function when the budget of evaluations is severely limited by either time or cost? For example, when optimization relies on computationally expensive computer simulations taking several hours, the dimension and complexity of the optimization problem may seem irreconcilable with the budget of evaluations. This work discusses the use of optimization algorithms dedicated to this context, which is out of range for most classical methods. The common principle of the methods discussed is to use Gaussian processes and Kriging to build a cheap proxy for the function to be optimized; this approximation is then used to choose the evaluations iteratively. Most of the techniques proposed over the years sample where the optimum is most likely to appear. By contrast, we propose an algorithm, named IAGO for Informational Approach to Global Optimization, which samples where the information gain on the location of the optimizer is deemed to be highest. The organisation of this dissertation is a direct consequence of the industrial concerns that drove this work. We hope it can be of use to the optimization community, but most of all to practitioners confronted with expensive-to-evaluate functions. This is why we insist on industrial applications and on the practical use of IAGO for the optimization of a real function, also when other industrial concerns have to be considered. In particular, we discuss how to handle constraints, noisy evaluation results, multi-objective problems, derivative evaluation results, and significant manufacturing uncertainties
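The surrogate-based loop described above can be sketched in a few lines. This is a generic Kriging loop with a lower-confidence-bound sampling rule from the same family of criteria, not the IAGO information-gain criterion itself; the toy objective, kernel and length-scale are illustrative assumptions.

```python
import numpy as np

# Toy "expensive" objective -- stands in for an hours-long simulation.
def f(x):
    return np.sin(3 * x) + 0.5 * x

def rbf(A, B, ell=0.3):                       # Gaussian covariance (Kriging)
    return np.exp(-0.5 * ((A[:, None] - B[None, :]) / ell) ** 2)

def gp_posterior(X, y, Xs, jitter=1e-6):
    """Posterior mean and std of the Gaussian-process proxy at points Xs."""
    L = np.linalg.cholesky(rbf(X, X) + jitter * np.eye(len(X)))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = rbf(X, Xs)
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.clip(1.0 - np.sum(v * v, axis=0), 1e-12, None)
    return mu, np.sqrt(var)

# Sequential design on a 10-evaluation budget: each new evaluation goes
# where the lower confidence bound is smallest (minimization), trading off
# exploration (large posterior std) and exploitation (low predicted mean).
grid = np.linspace(0.0, 2.0, 201)
X = np.array([0.1, 1.9])                      # two initial evaluations
y = f(X)
for _ in range(8):
    mu, sd = gp_posterior(X, y, grid)
    x_next = grid[np.argmin(mu - 2.0 * sd)]
    X, y = np.append(X, x_next), np.append(y, f(x_next))
best = X[np.argmin(y)]
```

The structure is the same whichever sampling criterion is plugged in: fit the proxy to the evaluations made so far, optimize the cheap criterion over the proxy, evaluate the true function once, and repeat until the budget is spent.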
Contribution to quantitative microwave imaging techniques for biomedical applications by Tommy Henriksson( Book )

2 editions published in 2009 in English and held by 2 WorldCat member libraries worldwide

This dissertation presents a contribution to quantitative microwave imaging for breast tumor detection. The study, made in the framework of a jointly supervised Ph.D. thesis between University Paris-SUD 11 (France) and Mälardalen University (Sweden), was conducted on two experimental microwave imaging setups: the existing 2.45 GHz planar camera (France) and the multi-frequency flexible robotic system under development (Sweden). In this context, a flexible 2D scalar numerical tool based on a Newton-Kantorovich (NK) scheme was developed. Quantitative microwave imaging is a three-dimensional vectorial nonlinear inverse scattering problem, in which the complex permittivity of an object is reconstructed from the measured scattered field produced by the object. The NK scheme is used to deal with the nonlinearity and the ill-posed nature of this problem. A TM polarization and a two-dimensional medium configuration were considered in order to avoid its vectorial aspect. The solution is found iteratively by minimizing the square norm of the error with respect to the scattered-field data. Consequently, the convergence of such an iterative process requires at least two conditions. First, an efficient calibration of the experimental system has to be associated with the minimization of model errors. Second, the mean square difference in the scattered field introduced by the presence of the tumor has to be large enough with respect to the sensitivity of the imaging system. The existing planar camera, associated with the flexible 2D scalar NK code, is considered as an experimental platform for quantitative breast imaging. A preliminary numerical study shows that the multi-view planar system is quite efficient for realistic breast tumor phantoms, given its characteristics (frequency, planar geometry and water as a coupling medium), as long as realistic noisy data are considered. Furthermore, a multi-incidence planar system, more appropriate in terms of antenna-array arrangement, is proposed and its concept is numerically validated. In addition, experimental work is presented that includes a new fluid mixture for the realization of a narrow-band cylindrical breast phantom, together with a deep investigation of the calibration process and of model-error minimization. This leads to the first quantitative reconstruction of a realistic breast phantom using the planar camera. Next, both the qualitative and quantitative reconstructions of 3D inclusions in the cylindrical breast phantom, using data from the whole retina, are shown and discussed. Finally, the work extended towards the flexible robotic system is presented
Caractérisation aveugle de la courbe de charge électrique : détection, classification et estimation des usages dans les secteurs résidentiel et tertiaire by Mabrouka El Guedri( )

2 editions published between 2009 and 2010 in French and held by 2 WorldCat member libraries worldwide

The characterization of residential and tertiary appliances in real conditions, from the single measurement available at the utility service entry (the active and/or reactive power), has been little studied. This thesis investigates new methods and approaches for an entirely non-intrusive characterization of electric appliances. Our aim is to extract several descriptors of the targeted end-uses, given one source mixture of an unknown number of non-stationary signals. This study emphasizes four areas: appliance detection, classification, estimation (consumed energy, magnitude) and the electric load decomposition problem. The proposed techniques are demonstrated on real data from an experimental house and two "real" houses. One of our major contributions is a non-intrusive solution for segmenting the residential electric load and mapping the daily consumed energy onto its major components (space heating by convectors, water heater and refrigerators). Improvements to some of the algorithms and their evaluation on large real datasets are required in order to assess the robustness of the proposed methods. As future work, we detail a generic approach using a probabilistic model of electric load events that addresses the electric load decomposition problem (source separation) in the framework of Bayesian approaches
Détermination de lois de comportement couplé par des techniques d'homogénéisation : application aux matériaux du génie électrique by Romain Corcolle( Book )

2 editions published in 2009 in French and held by 2 WorldCat member libraries worldwide

This study focuses on the development of accurate homogenization models for coupled behavior (such as piezoelectricity or magnetostriction). The main development is the adaptation of classical uncoupled methods, based on a careful decomposition of the fields into different terms depending on their physical origin. Nonlinear behavior is taken into account through a linearization process, and an improvement is obtained by including the second-order moments of the fields in the models. The developed models were validated by comparing their results with those obtained from a Finite Element model. The results show good agreement, with a much lower computational cost for homogenization (a ratio of over 1000 when dealing with linear constitutive laws). The homogenization model was also able to capture extrinsic effects, such as the magnetoelectric effect. The trade-off between estimation quality and computation time shows the advantages of homogenization methods, which have been successfully adapted to coupled behavior
Algorithmes de différentiation numérique pour l'estimation de systèmes non linéaires by Mohamed Braci( Book )

2 editions published in 2006 in French and held by 2 WorldCat member libraries worldwide

The main motivation of this PhD dissertation is the study of numerical differentiation algorithms that are simple and efficient for signals which are available only through their samples and are corrupted by noise. Such algorithms are the building blocks of an observer structure combining observability conditions derived from the differential-algebraic approach to observability, a Kalman-like synthesis that incorporates a measurement error (between the true measurement and the predicted one) in a loop, and a prediction device that compensates for the delay created by the differentiation operators. The necessity for these algorithms to be simple (in terms of computational burden) results from the fact that they may be invoked many times in a single observer. After proposing a slight improvement of the observer structure mentioned above, we proceeded to a review of candidate simple differentiation algorithms. As is well known, numerical differentiation is an ill-posed inverse problem. As for all operators of this type, its practical implementation necessarily goes through regularization; a numerical differentiation scheme is precisely an operator that regularizes the differentiation. The first one we examined is the very popular linear filter consisting of an approximation of the Laplace transform of the differentiation operator by a proper transfer function, often of first order. We showed that we cannot content ourselves with saying that the filter bandwidth, which is the regularization parameter, should be kept small. We obtained optimal values of the filter bandwidth as a compromise between a narrow bandwidth, needed to filter out the noise efficiently, and a large bandwidth, needed to reproduce the differentiation operator precisely. Another equally popular method of numerical differentiation is the finite-difference method. Here, too, we showed how to choose the sampling period in an optimal way. The Savitzky-Golay differentiation scheme, much used in the experimental sciences, is also revisited: we showed how it can be regularized. The results are applied to two academic examples: the estimation of the substrate in a bioreactor, and the estimation of the lateral speed of a car
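The step-size compromise behind the optimal sampling period is easy to reproduce numerically: a central difference taken over too few samples amplifies the noise, while one taken over too many samples distorts the derivative. The signal, noise level and step sizes below are arbitrary choices made only to expose this trade-off.

```python
import numpy as np

# Differentiate a noisy sampled signal with central differences taken over
# an increasing number of samples s (effective step = s * sampling period).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1001)
h = t[1] - t[0]                               # sampling period (1 ms)
noisy = np.sin(2 * np.pi * t) + 1e-3 * rng.standard_normal(t.size)
d_true = 2 * np.pi * np.cos(2 * np.pi * t)    # exact derivative

def central_diff(y, s):
    """Central difference over 2*s samples, estimating y' at interior points."""
    return (y[2 * s:] - y[:-2 * s]) / (2 * s * h)

def rms_error(s):
    return np.sqrt(np.mean((central_diff(noisy, s) - d_true[s:-s]) ** 2))

errs = {s: rms_error(s) for s in (1, 20, 80)}
# s = 1 amplifies the noise, s = 80 truncates the derivative; an
# intermediate step is best, mirroring the optimal-bandwidth analysis.
```

The noise contribution to the error scales like 1/step while the truncation error grows with the step, so the total error has an interior minimum, which is the compromise the dissertation optimizes analytically.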
Algorithmes bayésiens variationnels accélérés et applications aux problèmes inverses de grande taille by Yuling Zheng( )

1 edition published in 2014 in French and held by 1 WorldCat member library worldwide

In this thesis, our main objective is to develop efficient unsupervised approaches for large-dimensional problems. To do this, we consider Bayesian approaches, which allow us to jointly estimate the regularization parameters and the object of interest. In this context, the main difficulty is that the posterior distribution is generally complex. To tackle this problem, we consider variational Bayesian (VB) approximation, which provides a separable approximation of the posterior distribution. Nevertheless, classical VB methods suffer from slow convergence. The first contribution of this thesis is to transpose subspace optimization methods to the functional space involved in the VB framework, which allows us to propose a new VB approximation method. We have shown the efficiency of the proposed method by comparison with state-of-the-art approaches. We then consider the application of our new methodology to large-dimensional problems in image processing, focusing on piecewise-smooth images. Accordingly, we have considered a Total Variation (TV) prior and a Gaussian location mixture-like hidden variable model. With these two priors, using our VB approximation method, we have developed two fast unsupervised approaches well adapted to piecewise-smooth images. In fact, the priors introduced above are correlated, which makes the estimation of regularization parameters very complicated: the partition function is often non-explicit. To sidestep this problem, we have considered working in the wavelet domain. As the wavelet coefficients of natural images are generally sparse, we considered prior distributions from the Gaussian scale mixture (GSM) family to enforce sparsity. Another contribution is therefore the development of an unsupervised approach for a prior distribution from the GSM family whose density is explicitly known, using the proposed VB approximation method
Analyse et commande sans modèle de quadrotors avec comparaisons by Jing Wang( )

1 edition published in 2013 in English and held by 1 WorldCat member library worldwide

Motivated by the limitations of traditional PID controllers and by the gap between performance in ideal and realistic conditions, existing quadrotors, their applications and their control methods are studied intensively in this thesis. Many challenges emerge: embedded systems have limited computational and energy resources; the dynamics are quite complex and often poorly known; the environment contains many disturbances and uncertainties; and many control methods in the literature have been proposed only for ideal scenarios, without comparison with other methods. This thesis therefore focuses on these key points in quadrotor control. First, kinematic and dynamic models are proposed, including all significant aerodynamic forces and torques. A simplified dynamic model is also proposed for certain applications. The quadrotor dynamics are then analyzed. Using normal form theory, the quadrotor model is reduced to a simpler form, the normal form, which exhibits all the possible dynamical properties of the original system. The bifurcations of this normal form are studied, and the system is simplified at its bifurcation point using center manifold theory. Based on a study of quadrotor applications, five realistic scenarios are proposed: an ideal case, and cases with wind disturbance, parameter uncertainties, sensor noise and motor faults. These realistic cases reveal the performance of control methods more comprehensively than ideal cases. An event-triggered scheme is also proposed alongside the time-triggered scheme. Model-free control is then presented, a simple yet effective technique for nonlinear, unknown or partially known dynamics. Backstepping control and sliding-mode control are also proposed for comparison. All the control methods are implemented under both time-triggered and event-triggered schemes in the five scenarios. Based on the study of quadrotor applications, ten criteria are chosen to evaluate the performance of the control methods, such as the maximum absolute tracking error, the error variance, the number of actuations, the energy consumption, etc
Commande prédictive des systèmes hybrides et application à la commande de systèmes en électronique de puissance. by Cristina Vlad( )

2 editions published in 2013 in French and held by 1 WorldCat member library worldwide

The need for power supply systems capable of stable operation over fairly wide operating ranges with good dynamic performance (fast response, limited output-voltage variations in response to load or supply-voltage disturbances) is becoming increasingly important. This thesis therefore focuses on the control of DC-DC power converters represented by hybrid models. Given the variable structure of these switched systems, a hybrid model describes the dynamic behavior of a converter over its operating range more precisely. With this in mind, a piecewise-affine (PWA) approximation is used to model the DC-DC converters. Based on the developed hybrid models, we studied the stabilization of converters using switched-gain controllers designed from piecewise-quadratic (PWQ) Lyapunov functions, and the implementation of an explicit model predictive controller with constraints on the control input. The proposed modeling method and control strategies were applied to two topologies: a buck converter, to better master the controller tuning, and a flyback converter with an input filter. The latter topology allowed us to address the control difficulties (non-minimum-phase behavior) encountered in most DC-DC converters. The performance of the designed controllers was validated in simulation on the considered topologies, and experimentally on a buck converter prototype
Fault-detection in Ambient Intelligence based on the modeling of physical effects. by Ahmed Mohamed( )

2 editions published in 2013 in English and held by 1 WorldCat member library worldwide

This thesis takes place in the field of Ambient Intelligence (AmI). AmI systems are interactive systems composed of many heterogeneous components. From a hardware perspective these components can be divided into two main classes: sensors, with which the system observes its surroundings, and actuators, through which the system acts upon its surroundings in order to execute specific tasks. From a functional point of view, the goal of AmI systems is to activate some actuators based on data provided by some sensors. However, sensors and actuators may suffer failures. Our motivation in this thesis is to equip ambient systems with self fault-detection capabilities. One of the particularities of AmI systems is that instances of physical resources (mainly sensors and actuators) are not necessarily known at design time; instead, they are dynamically discovered at run-time. Consequently, classical control theory cannot be applied to pre-determine closed control loops using the available sensors. We propose an approach in which fault detection and diagnosis in AmI systems is done dynamically at run-time, while decoupling actuators and sensors at design time. We introduce a Fault Detection and Diagnosis framework modeling the generic characteristics of actuators and sensors, and the physical effects that are expected on the physical environment when a given action is performed by the system's actuators. These effects are then used at run-time to link the actuators (that produce them) with the corresponding sensors (that detect them). Most importantly, the mathematical model describing each effect allows the expected sensor readings to be calculated. Comparing the predicted values with the actual values provided by the sensors allows us to achieve fault detection
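The predicted-versus-measured comparison at the heart of the framework can be sketched as follows; the heater effect model, its parameters and the tolerance are hypothetical, chosen only to illustrate the residual test, not taken from the thesis.

```python
import numpy as np

# Hypothetical effect model: a heater actuator is expected to raise the
# room temperature along a first-order exponential; the temperature sensor
# linked to it at run-time should track that prediction.
def predicted_temperature(t, t_on, T0=19.0, dT=4.0, tau=600.0):
    """Expected sensor reading (deg C) t seconds after the heater turns on."""
    dt = np.maximum(t - t_on, 0.0)
    return T0 + dT * (1.0 - np.exp(-dt / tau))

def detect_fault(times, readings, t_on, tol=1.0):
    """Flag a fault when measured and predicted readings diverge."""
    residual = np.abs(readings - predicted_temperature(times, t_on))
    return bool(np.any(residual > tol))

times = np.arange(0.0, 1800.0, 60.0)          # one reading per minute
healthy = predicted_temperature(times, 0.0) + 0.1 * np.sin(times / 120.0)
broken = np.full_like(times, 19.0)            # heater never heats: fault
```

Here `detect_fault(times, healthy, 0.0)` stays `False`, while `detect_fault(times, broken, 0.0)` trips once the predicted rise exceeds the tolerance; which of the actuator or sensor is at fault is then a matter of diagnosis across several effect/sensor pairs.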
Evaluations des doses dues aux neutrons secondaires reçues par des patients de différents âges traités par protonthérapie pour des tumeurs intracrâniennes by Rima Sayah( )

1 edition published in 2012 in French and held by 1 WorldCat member library worldwide

Proton therapy is an advanced radiation therapy technique that allows delivering high doses to the tumor while saving the healthy surrounding tissues due to the protons' ballistic properties. However, secondary particles, especially neutrons, are created during protons' nuclear reactions in the beam-line and the treatment room components, as well as inside the patient. Those secondary neutrons lead to unwanted dose deposition to the healthy tissues located at distance from the target, which may increase the secondary cancer risks to the patients, especially the pediatric ones. The aim of this work was to calculate the neutron secondary doses received by patients of different ages treated at the Institut Curie-centre de Protonthérapie d'Orsay (ICPO) for intracranial tumors, using a 178 MeV proton beam. The treatments are undertaken at the new ICPO room equipped with an IBA gantry. The treatment room and the beam-line components, as well as the proton source were modeled using the Monte Carlo code MCNPX. The obtained model was then validated by a series of comparisons between model calculations and experimental measurements. The comparisons concerned: a) depth and lateral proton dose distributions in a water phantom, b) neutron spectrometry at one position in the treatment room, c) ambient dose equivalents at different positions in the treatment room and d) secondary absorbed doses inside a physical anthropomorphic phantom. A general good agreement was found between calculations and measurements, thus our model was considered as validated. The University of Florida hybrid voxelized phantoms of different ages were introduced into the MCNPX validated model, and secondary neutron doses were calculated to many of these phantoms' organs. The calculated doses were found to decrease as the organ's distance to the treatment field increases and as the patient's age increases. 
The secondary doses received by a one-year-old patient may be two times higher than those received by an adult. A maximum dose of 16.5 mGy, for a whole treatment delivering 54 Gy to the tumor, was calculated for the salivary glands of the one-year-old phantom. The calculated doses for a lateral proton beam incidence (left or right) may be, for some organs, two times higher than doses for a superior incidence and four times higher than doses for an antero-superior incidence. Neutron equivalent doses were also calculated for some organs. The neutron weighting factors wR were found to vary between 4 and 10, and the equivalent doses for the considered organs reached at most 155 mSv over a whole treatment
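The equivalent dose figures above combine the absorbed dose with the neutron radiation weighting factor wR, i.e. H = wR · D. A minimal sketch of that conversion: the 16.5 mGy salivary-gland dose and the wR range 4-10 come from the abstract, while the other organ doses are hypothetical values added for illustration.

```python
# Equivalent dose H (mSv) from absorbed dose D (mGy) and a neutron
# weighting factor wR: H = wR * D. The wR range 4-10 is quoted in the
# abstract; the organ doses other than the salivary glands are hypothetical.
organ_doses_mGy = {"salivary glands": 16.5, "thyroid": 8.0, "lungs": 2.5}

def equivalent_dose_mSv(d_mGy, w_r):
    return w_r * d_mGy

for organ, d in organ_doses_mGy.items():
    h_lo, h_hi = equivalent_dose_mSv(d, 4), equivalent_dose_mSv(d, 10)
    print(f"{organ}: {h_lo:.1f}-{h_hi:.1f} mSv")
```

In practice wR depends on the neutron energy spectrum (per ICRP recommendations); treating it as a constant per organ is a simplification of this sketch.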
Stabilization of periodic orbits in discrete and continuous-time systems by Thiago Perreira Das Chagas( )

1 edition published in 2013 in English and held by 1 WorldCat member library worldwide

The main problem evaluated in this manuscript is the stabilization of periodic orbits of non-linear dynamical systems by use of feedback control. The goal of the control methods proposed in this work is to achieve a stable periodic oscillation. These control methods are applied to systems that present unstable periodic orbits in the state space, and the latter are the orbits to be stabilized. The methods proposed here are such that the resulting stable oscillation is obtained with low control effort, and the control signal is designed to converge to zero when the trajectory tends to the stabilized orbit. Local stability of the periodic orbits is analyzed by studying the stability of some linear time-periodic systems, using the Floquet stability theory. These linear systems are obtained by linearizing the trajectories in the vicinity of the periodic orbits. The control methods used for stabilization of periodic orbits here are the proportional feedback control, the delayed feedback control and the prediction-based feedback control. These methods are applied to discrete and continuous-time systems with the necessary modifications. The main contributions of the thesis are related to these methods, proposing an alternative control gain design, a new control law and related results
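Prediction-based feedback control, one of the three methods listed, can be illustrated on a discrete-time toy system. The example below (the logistic map and the gain value are illustrative assumptions, not taken from the thesis) stabilizes an unstable fixed point with u_k = K(f(x_k) - x_k); the control vanishes on the orbit since f(x*) = x*, and K is chosen so the closed-loop multiplier satisfies |(1+K)f'(x*) - K| < 1.

```python
# Prediction-based feedback u_k = K*(f(x_k) - x_k) applied to the logistic
# map f(x) = r*x*(1-x). The fixed point x* = 1 - 1/r is an unstable
# period-1 orbit for r = 3.8, since |f'(x*)| = |2 - r| = 1.8 > 1.
r, K = 3.8, -0.5        # K chosen so |(1+K)*f'(x*) - K| = 0.4 < 1

def f(x):
    return r * x * (1.0 - x)

x, u = 0.3, 0.0
for _ in range(200):
    u = K * (f(x) - x)  # control signal, zero on the stabilized orbit
    x = f(x) + u

x_star = 1.0 - 1.0 / r
print(x, x_star, u)     # x converges to x*, u converges to zero
```

The trajectory settles on the formerly unstable orbit while the control effort decays, which is exactly the low-effort property the abstract emphasizes.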
Robust control of uncertain nonlinear systems. by Safta De Hillerin( )

1 edition published in 2011 in French and held by 1 WorldCat member library worldwide

This thesis studies the LPV approach for the robust control of nonlinear systems. Its originality is to propose, for the first time, a rigorous framework for solving nonlinear synthesis problems efficiently. The LPV approach was proposed as an extension of the H-infinity approach in the context of LPV (Linear Parameter-Varying) systems and nonlinear systems. Although this approach seemed promising, it was not much used in practice. Indeed, beyond certain theoretical limitations, the very nature of the obtained solutions did not seem adequate. This open question constitutes the starting point of our work. We first prove that the observed weak variation of the controllers is in fact mostly due to the information structure traditionally used for LPV synthesis, and that, under reasonable assumptions, the LPV framework can encompass feedback linearization strategies. This point having been resolved, a second difficulty lies in the actual construction of nonlinear controllers yielding performance guarantees. We propose a rigorous framework for solving an incremental synthesis problem efficiently, through the resolution of an LPV problem associated with a specific information structure compatible with the one identified in the first part. This study and its corollary description of a formal framework and of a complete controller synthesis procedure, including complexity reduction methods, provide powerful arguments in favor of the LPV approach for the robust control of nonlinear systems
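The LPV synthesis problems described above are normally solved with LMI machinery; as a much simpler illustration of the underlying gain-scheduling idea only (the scalar system, the affine a(rho), and the schedule k(rho) below are invented for this sketch and are not the thesis's method), a parameter-dependent state feedback can render the closed loop identical and stable for every admissible parameter value:

```python
import random

# Toy scalar LPV system x+ = a(rho)*x + u with a(rho) = 1 + rho and
# rho in [0, 1]: the open loop is unstable whenever rho > 0. The scheduled
# feedback u = -k(rho)*x with k(rho) = a(rho) - 0.5 places the closed-loop
# pole at 0.5 for every rho. Illustrative sketch only; the thesis uses
# LMI-based LPV synthesis, not pole cancellation.
def a(rho):
    return 1.0 + rho

def k(rho):
    return a(rho) - 0.5

random.seed(0)
x = 1.0
for _ in range(50):
    rho = random.random()          # arbitrary parameter trajectory
    x = a(rho) * x - k(rho) * x    # closed loop: x+ = 0.5 * x
print(x)                           # contracts roughly like 0.5**50
```

The state decays geometrically regardless of how rho varies, which is the guarantee a scheduled LPV controller is meant to provide over the whole parameter range.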
Three-dimensional modeling of superconducting materials by Lotfi Alloui( )

1 edition published in 2012 in French and held by 1 WorldCat member library worldwide

We present a contribution to the three-dimensional modeling of coupled electromagnetic and thermal phenomena in high-temperature superconductors. The control volume method is used to solve the partial differential equations characterizing the physical phenomena treated. The electromagnetic and thermal coupling is ensured by an alternating algorithm. All the mathematical and numerical models thus developed and implemented in Matlab are used for the simulation. The magnetic and thermal results are presented in detail. The validity of the proposed work is established by comparing the obtained results with experimental data
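The control volume method mentioned above integrates the governing PDE over small cells and balances fluxes through the cell faces. A minimal one-dimensional sketch for the heat equation (illustrative only; the thesis works in 3-D, in Matlab, with coupled electromagnetic-thermal models, and all values below are invented):

```python
# Explicit control-volume (finite-volume) update for 1-D heat diffusion
# dT/dt = alpha * d2T/dx2 on a uniform grid with fixed-temperature
# boundaries. Each interior cell is updated by balancing the diffusive
# fluxes through its left and right faces.
alpha, dx, dt, n = 1.0e-4, 1.0e-2, 0.2, 21
assert alpha * dt / dx**2 <= 0.5          # explicit stability limit

T = [0.0] * n
T[0], T[-1] = 100.0, 100.0                # boundary cells held at 100

for _ in range(5000):
    Tn = T[:]
    for i in range(1, n - 1):
        flux_in = alpha * (T[i - 1] - T[i]) / dx    # through left face
        flux_out = alpha * (T[i] - T[i + 1]) / dx   # through right face
        Tn[i] = T[i] + dt * (flux_in - flux_out) / dx
    T = Tn
print(T[n // 2])   # mid-cell temperature approaches 100 at steady state
```

An alternating (staggered) coupling, as in the abstract, would interleave such a thermal step with an electromagnetic solve, each using the other's latest field as input.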
Constrained control for time-delay systems. by Warody Lombardi( )

1 edition published in 2011 in English and held by 1 WorldCat member library worldwide

The main interest of the present thesis is the constrained control of time-delay systems, more specifically taking into consideration the discretization problem (due, for example, to a communication network) and the presence of constraints on the system's trajectories and control inputs. The effects of data sampling and modeling are studied in detail, where an uncertainty is added to the system due to the combined effects of discretization and delay. The delay variation with respect to the sampling instants is characterized by a polytopic over-approximation of the discretization/delay-induced uncertainty. Some stabilizing techniques, based on Lyapunov's theory, are then derived for the unconstrained case. Lyapunov-Krasovskii candidates were also used to obtain LMI conditions for a state feedback in the "original" state space of the system. For constrained control purposes, set invariance theory is used extensively, in order to obtain a region where the system is "well-behaved" despite the presence of constraints and (time-varying) delay. Due to the high complexity of the maximal delayed state admissible set obtained in the augmented state-space approach, in the present manuscript we propose the concept of set invariance in the "original" state space of the system, called D-invariance. Finally, in the last part of the thesis, an MPC scheme is presented, in order to take into account the constraints and the optimality of the control solution
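In the scalar case the D-invariance concept introduced above admits a one-line test: for x_{k+1} = a·x_k + b·x_{k-d} and the set S = [-c, c], if x_k and x_{k-d} both lie in S then |x_{k+1}| <= (|a| + |b|)·c, so S is D-invariant exactly when |a| + |b| <= 1. A toy sketch under that assumption (the scalar system and the interval set are illustrative; the thesis treats polytopic sets and time-varying delays):

```python
# D-invariance of S = [-c, c] for the scalar delay system
# x_{k+1} = a*x_k + b*x_{k-d}: S maps into itself iff |a| + |b| <= 1.
def is_d_invariant(a, b):
    return abs(a) + abs(b) <= 1.0

def vertex_check(a, b, c=1.0):
    # brute-force confirmation: worst-case successor over the vertices
    # of S x S (the linear map attains its maximum at vertices)
    worst = max(abs(a * x + b * xd) for x in (-c, c) for xd in (-c, c))
    return worst <= c

print(is_d_invariant(0.6, 0.3))   # True:  0.9 <= 1
print(is_d_invariant(0.8, 0.4))   # False: 1.2 >  1
```

The appeal, as the abstract notes, is that this test lives in the original (non-augmented) state space, avoiding the complexity blow-up of the augmented-state invariant set.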
 
Audience level: 0.94 (from 0.91 for Neural net ... to 0.99 for Fault-dete ...)

Alternative Names
Ecole Doctorale STITS. Orsay, Essonne

Ecole supérieure d'électricité (Gif-sur-Yvette, Essonne). Ecole Doctorale Sciences et Technologies de l'Information des Télécommunications et des Systèmes

ED 422

ED STITS. Orsay, Essonne

ED422

STITS. Orsay, Essonne

Université Paris 11. Ecole Doctorale Sciences et Technologies de l'Information, des Télécommunications et des Systèmes

Université Paris-Sud 11. Ecole Doctorale Sciences et Technologies de l'Information, des Télécommunications et des Systèmes

Université Paris-Sud. Ecole Doctorale Sciences et Technologies de l'Information, des Télécommunications et des Systèmes

Languages
French (25)

English (10)