WorldCat Identities

École doctorale Sciences et Ingénierie (Evry / 2008-2015)

Works: 106 works in 106 publications in 2 languages and 109 library holdings
Roles: Other
Most widely held works by École doctorale Sciences et Ingénierie (Evry / 2008-2015)
Conception et réalisation d'une plateforme mécatronique dédiée à la simulation de conduite des véhicules deux-roues motorisés by Lamri Nehaoua

1 edition published in 2008 in French and held by 2 WorldCat member libraries worldwide

This thesis deals with the design and realization of a dynamic mechanical platform intended for motorcycle riding simulation. The dissertation is organized into several principal sections. First, a literature review identifies the problems of driving simulation in general, with a focus on simulator design; this part surveys the various mechanical architectures used previously, as well as their limitations. The choice of the simulator's mechanical architecture is driven by the need for sufficient perception during the simulated driving situation. Our goal is to reproduce the most relevant inertial effects (acceleration, torque, etc.) perceived in real-world driving. The second chapter presents an exhaustive comparison of automobile dynamics against two-wheeled vehicle dynamics. Existing motorcycle dynamic models are adjusted and adapted to meet our needs in terms of the privileged inertial cues. The third chapter presents the design, mechanical realization, characterization and identification of the motorcycle simulator developed within the framework of this thesis; it constitutes the main contribution of this research work. Finally, the last two chapters are dedicated to motion cueing/control algorithms and to open-loop experimentation on the simulator's platform. These tests were performed to characterize and validate the performance of the entire simulation loop.
Nouvelle approche d'identification dans les bases de données biométriques basée sur une classification non supervisée by Anis Chaari

1 edition published in 2009 in French and held by 2 WorldCat member libraries worldwide

The work done in the framework of this thesis deals with automatic face identification in databases of digital images. The goal is to simplify the biometric identification process, which seeks the query identity among all identities enrolled in the database, also called the gallery. Indeed, the classical identification scheme is complex and requires a large computational time, especially for large biometric databases. The process that we propose here aims to reduce this complexity and to improve both computing time and identification rate. In this biometric context, we propose an unsupervised classification, or clustering, of facial images in order to partition the enrolled database into several coherent and well-discriminated subsets. The clustering algorithm extracts, for each face, a specific set of descriptors, called a signature. Three facial representation techniques have been developed in order to extract different and complementary information describing the human face: two factorial methods of multidimensional analysis and data projection (namely "Eigenfaces" and "Fisherfaces") and a method extracting geometric Zernike moments. On the basis of the different signatures obtained for each face, several clustering methods are run in competition in order to achieve the optimal classification, the one which leads to the greatest reduction of the gallery: "mobile centers" methods such as the K-means algorithms of MacQueen and of Forgy, and the agglomerative BIRCH method. Based on the dependency of the generated partitions, these clustering strategies are then combined in a parallel architecture in order to reduce the search space to the smallest possible subset of the database. The clusters finally retained are those which contain the query identity with an almost certain probability.
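The gallery-reduction idea described above can be sketched with a minimal K-means partitioning followed by a search restricted to the query's cluster. The toy 2-D signatures, cluster count and seed below are purely illustrative; the thesis's actual descriptors (Eigenfaces, Fisherfaces, Zernike moments) and its parallel combination of partitions are not reproduced here.

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two signature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(pts):
    """Component-wise mean of a set of signature vectors."""
    n = len(pts)
    return tuple(sum(p[d] for p in pts) / n for d in range(len(pts[0])))

def kmeans(points, k, iters=20, seed=0):
    """Minimal K-means with Forgy-style initialization (seeded for determinism)."""
    centers = random.Random(seed).sample(points, k)
    for _ in range(iters):
        labels = [min(range(k), key=lambda c: dist2(p, centers[c])) for p in points]
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centers[c] = mean(members)
    return centers, labels

def candidate_set(query, points, centers, labels):
    """Search only the cluster whose center is nearest to the query signature."""
    c = min(range(len(centers)), key=lambda i: dist2(query, centers[i]))
    return [p for p, l in zip(points, labels) if l == c]
```

With two well-separated groups of signatures, a query near one group is compared against only that group's members, which is the reduction of the search space the abstract describes.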
Authentification d'individus par reconnaissance de caractéristiques biométriques liées aux visages 2D/3D by Souhila Guerfi Ababsa

1 edition published in 2008 in French and held by 2 WorldCat member libraries worldwide

This thesis deals with the face authentication problem, in particular within the framework of a national project, namely "TechnoVision". Although human beings can detect and recognise faces in a scene without much trouble, building a system which achieves such tasks is very challenging. The challenge is all the greater when the image acquisition conditions are variable. There are two kinds of variation associated with face images: inter-subject and intra-subject. Inter-subject variation is limited, owing to the fact that physical resemblance between individuals is rather rare. On the other hand, intra-subject variation is more common, caused by pose changes, lighting conditions, etc. In this thesis, we first developed an approach for the localization of the face and facial features in images containing a single face on a relatively uniform background under lighting variations. For that we proposed a robust colour segmentation approach in the TLS space which uses a modified watershed algorithm. To extract the facial features (such as the eyes and mouth), we combined a k-means clustering method with a geometrical approach and applied it to the segmented face region. We also proposed a 2D/3D multimodal approach which uses a weighted fusion of the scores obtained by the modular "EigenFace" method and our 3D anthropometric facial signature. We evaluated our 3D and 2D/3D face recognition approaches on a subset of the IV2 database, which contains stereoscopic images of several human faces. The results obtained are very promising compared to classical 2D face recognition techniques. Finally, we discuss how to improve the performance of the proposed approaches.
Cloning with gesture expressivity by Manoj Kumar Rajagopal

1 edition published in 2012 in English and held by 1 WorldCat member library worldwide

Virtual environments allow human beings to be represented by virtual humans, or avatars. Users can share a sense of virtual presence if the avatar looks like the real human it represents. This classically involves turning the avatar into a clone with the real human's appearance and voice. However, the possibility of cloning the gesture expressivity of a real person has received little attention so far. Gesture expressivity combines the style and mood of a person. Expressivity parameters have been defined in earlier works for animating embodied conversational agents. In this work, we focus on expressivity in wrist motion. First, we propose algorithms to estimate three expressivity parameters from captured 3D wrist trajectories: repetition, spatial extent and temporal extent. Then, we conducted a perceptual study, through a user survey, of the relevance of expressivity for recognizing an individual human. We animated a virtual agent using the expressivity estimated from individual humans, and users were asked whether they could recognize the individual human behind each animation. We found that, when gestures are repeated in the animation, this is perceived by users as a discriminative feature for recognizing humans, while the absence of repetition is matched with any human, regardless of whether they repeat gestures or not. More importantly, we found that 75% or more of users could recognize the real human (out of two proposed) from an animated virtual avatar based only on the spatial and temporal extents. Consequently, gesture expressivity is a relevant clue for cloning. It can be used as another element in the development of a virtual clone that represents a person.
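Two of the three expressivity parameters named above lend themselves to a simple sketch. The abstract does not give the thesis's actual estimators, so the definitions below (bounding-box diagonal for spatial extent, stroke duration for temporal extent) are one plausible reading, not the published formulas.

```python
def spatial_extent(traj):
    """Spatial extent as the diagonal of the bounding box swept by the wrist.
    traj is a list of (x, y, z) samples."""
    dims = range(len(traj[0]))
    spans = [max(p[d] for p in traj) - min(p[d] for p in traj) for d in dims]
    return sum(s * s for s in spans) ** 0.5

def temporal_extent(timestamps):
    """Temporal extent as the duration of the captured gesture stroke."""
    return timestamps[-1] - timestamps[0]
```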
Transition and optimal monetary policy : an econometric analysis for Central Europe countries by Tareq Sadeq

1 edition published in 2008 in English and held by 1 WorldCat member library worldwide

This thesis addresses two questions related to transition economies. The first is why some countries converge towards the Euro-area accession criteria while others remain far from these stability criteria. The second is how the structure of the economy and monetary policy changed during the transition. I answer these questions by analysing dynamic stochastic general equilibrium (DSGE) models using Bayesian econometric methods. The usual evaluation techniques were extended to account for structural changes in the economy. The first chapter presents the Bayesian estimation method for linear DSGE models. In the second chapter, we build a DSGE model incorporating some characteristics of transition economies and evaluate it using the Bayesian method. Finally, in the third chapter, we estimate a model integrating a structural break date in the parameters and heteroskedasticity of the shocks.
Capital-investissement et performance des introductions en bourse : application aux entreprises nouvellement introduites sur le nouveau marché et le second marché français (1991-2004) by Jihene Cherrak

1 edition published in 2009 in French and held by 1 WorldCat member library worldwide

In this dissertation, we try to understand the effects of Venture Capital Firms (VCFs) on the performance of VC-backed listed companies in France. In the first part, we develop the theoretical framework and define the research hypotheses. This part leads us to examine the characteristics of initial public offerings (IPOs) and the role of venture capitalists, particularly in conducting an IPO. We develop the argument around the role of VCFs in resolving the informational problems characteristic of the IPO market. VCFs, being specialists in drawing up contracts with entrepreneurs and possessing expertise and a knowledge network, could diminish conflicts of interest and certify IPOs. However, these firms could run into an adverse selection problem and/or adopt opportunistic behaviour to serve their own interests. The empirical validation of these questions is dealt with in the second part of the dissertation. It consists, in the first place, in comparing the performance of VC-backed IPOs to non-VC-backed IPOs. In the second place, we determine the relation between the performance of VC-backed IPOs and the institutional affiliation of the VCF. In the last part, we test the explanatory power of the VCF's reputation and of its intermediation mechanisms, particularly syndication, staged financing, and the distribution of cash-flow and control rights.
Approche d'assistance aux auteurs pour la réutilisation d'objets d'apprentissage by Ramzi Farhat

1 edition published in 2010 in French and held by 1 WorldCat member library worldwide

This thesis lies in the field of the creation of educational content through an approach based on a structure of learning objects and their reuse within more complex objects. Recent learning-object models such as SCORM or SIMBAD allow authors to build new objects by assembling existing ones. The difficulty for authors is to design such objects while mastering the complexity of the composition and guaranteeing a high level of quality, including pedagogical quality. In this work we propose an author-assistance approach based on a set of analysis tools that better qualify the composed object and verify its conformity. These analyses cover the content of the object, its metadata (notably those from the LOM standard), and the composition structure itself. The objective is to generate a detailed map of the object in question: to offer the author a variety of indicators that give a better view of the different facets of the learning object being designed, in particular an analysis of the system view and of the learner view. Once the analysis is satisfactory, complementary metadata are computed automatically by our environment, based on the educational metadata of the objects used in the composition. The composition of an object can be guided by conformity rules, which describe certain desired structural and semantic criteria. This approach thus offers a means of promoting the reuse of learning objects. It provides the theoretical support and the practical elements that place composition by reuse fully under the author's control, and consequently make it capable of producing learning objects that meet quality criteria.
Canaux symétriques à base de cyclodextrines amphiphiles : polymérisation divergente d'oxirane by Zahra Eskandani

1 edition published in 2011 in French and held by 1 WorldCat member library worldwide

In this work, we present the design of artificial permanent cyclodextrin-based channels obtained by divergent polymerization. Selective modifications of cyclodextrins have been developed to generate original initiators for the ring-opening polymerization of ethylene oxide. Under the experimental conditions used, controlled polymerization was demonstrated, leading to molecules with 14 PEO arms of various molar masses. Among various applications, we focused on the possibility of using this new class of star-polymer architectures as permanent ionic channels exhibiting long residence times (hour scale), paving the way to the translocation of molecules and macromolecules, for example.
Modélisation et test fonctionnel de l'orchestration de services Web by Mounir Lallali

1 edition published in 2009 in French and held by 1 WorldCat member library worldwide

Recent years have seen the emergence of service-oriented architectures (SOA), designed to facilitate the creation, exposure, interconnection and reuse of service-based applications. Web services are the most important realization of this SOA architecture. They are self-describing, modular applications providing a simple model for programming and deploying applications. Web service composition, and in particular orchestration, is at the heart of Service-Oriented Computing (SOC), since it supports the construction of new composite services from basic services. WS-BPEL (or BPEL) has established itself since 2005 as the standard language for Web service orchestration. This doctoral thesis deals with the functional testing of service orchestrations described in BPEL, which consists in establishing the conformance of a composite service's implementation with respect to its specification. Our research was motivated by the specific characteristics of service composition, especially as described in BPEL, and by the need for test automation. The objective of this thesis is twofold: on the one hand, to propose a formal model of service orchestration, and on the other hand, to propose a complete testing method for service orchestration, from the formal modelling of the orchestration to the execution of the tests, including automatic test-case generation. Our formal model (called WS-TEFSM) can describe a large part of BPEL and takes into account the temporal properties of the service composition. Formal modelling is the first phase of our testing approach; we therefore use the resulting formal model to generate test cases satisfying a set of test purposes.
Automatic test-case generation was implemented through an efficient partial state-space exploration strategy (Hit-Or-Jump) in the IF simulator. To focus only on the potential errors of the orchestrator (composite) service, we propose a grey-box testing approach that simulates the orchestrator's partner services. We addressed these problems from both a theoretical and a practical point of view. In addition to proposing a formal model of service orchestration and an algorithm for generating timed test cases, we implemented both concepts in two prototypes: BPEL2IF transforms a service orchestration described in BPEL into a formal specification based on timed automata (an IF specification), and TestGen-IF automatically derives timed test cases. Finally, to validate our approach, we applied it to real-size case studies.
Ré-identification de personnes à partir des séquences vidéo by Mohamed Ibn Khedher

1 edition published in 2014 in French and held by 1 WorldCat member library worldwide

This thesis focuses on the problem of human re-identification through a network of cameras with non-overlapping fields of view. Human re-identification is defined as the task of determining whether a person leaving the field of one camera reappears in another. It is particularly difficult because of a person's significant appearance changes across the fields of view of different cameras, due to various factors. In this work, we propose to exploit the complementarity of a person's appearance and style of movement, which leads to a description that is more robust with respect to the various complexity factors. This is a new approach to the re-identification problem, which is usually treated by appearance-based methods only. The major contributions of this work concern person description and feature matching. First we study the re-identification problem and classify it into two scenarios: simple and complex. In the simple scenario, we study the feasibility of two approaches: a biometric approach based on gait, and an appearance approach based on spatial Interest Points (IPs) and colour features. In the complex scenario, we propose a fusion strategy over two complementary features provided by appearance and motion descriptions: we describe motion using spatio-temporal IPs, and use spatial IPs to describe appearance. For feature matching, we use sparse representation as a local matching method between IPs. The fusion strategy is based on a weighted sum of the votes of matched IPs, followed by a majority-vote rule. Moreover, we carried out an error analysis to locate the sources of error in our proposed system and identify the most promising areas for improvement.
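The fusion step named above (weighted sum of matched-IP votes, then a majority rule) can be sketched as follows. The identity labels, vote counts and equal default weights are illustrative; the thesis's actual weighting is not given in this summary.

```python
def fuse_and_vote(appearance_votes, motion_votes, w_app=0.5, w_mot=0.5):
    """Weighted sum of per-identity matched-IP vote counts from the
    appearance and motion descriptions; the best-scoring identity wins."""
    scores = {}
    for votes, w in ((appearance_votes, w_app), (motion_votes, w_mot)):
        for identity, count in votes.items():
            scores[identity] = scores.get(identity, 0.0) + w * count
    return max(scores, key=scores.get)
```

With equal weights, strong motion evidence can overturn an appearance-only decision, which is the complementarity the abstract argues for.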
DRARS, a dynamic risk-aware recommender system by Djallel Bouneffouf

1 edition published in 2013 in English and held by 1 WorldCat member library worldwide

The vast amount of information generated and maintained every day by information systems and their users leads to the increasingly important concern of information overload. In this context, traditional recommender systems provide relevant information to users. Nevertheless, with the recent spread of mobile devices (smartphones and tablets), users are gradually migrating to pervasive computing environments. The problem with traditional recommendation approaches is that they do not use all the available information to produce recommendations; more contextual parameters could be used in the recommendation process to yield more accurate recommendations. Context-Aware Recommender Systems (CARS) combine characteristics of context-aware systems and recommender systems in order to provide personalized recommendations to users in ubiquitous environments. In this perspective, where everything about the user is dynamic (both content and environment), two main issues have to be addressed: i) how to take content evolution into account, and ii) how to avoid disturbing the user in risky situations. In response to these problems, we developed a dynamic risk-aware recommender system called DRARS (Dynamic Risk-Aware Recommender System), which models context-aware recommendation as a bandit problem. The system combines a content-based technique with a contextual bandit algorithm. We show that DRARS improves on the Upper Confidence Bound (UCB) policy, the best currently available algorithm, by computing an optimal exploration value that maintains a trade-off between exploration and exploitation based on the risk level of the current user's situation. We conducted experiments in an industrial context with real data and real users, and showed that taking into account the risk level of the user's situation significantly increases the performance of the recommender system.
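The risk-adjusted exploration idea can be illustrated with a UCB1-style index whose exploration term shrinks as the situation's risk grows. DRARS's actual exploration formula is not given in this summary; the scaling below, and the arm names and statistics, are a generic sketch of the trade-off, not the published algorithm.

```python
import math

def ucb_score(mean_reward, pulls, total_pulls, risk, alpha=1.0):
    """UCB1-style index: mean reward plus an exploration bonus that is
    damped by the risk level (in [0, 1]) of the current situation."""
    exploration = alpha * (1.0 - risk) * math.sqrt(2.0 * math.log(total_pulls) / pulls)
    return mean_reward + exploration

def select_arm(stats, total_pulls, risk):
    """stats maps each arm (candidate recommendation) to (mean_reward, pulls)."""
    return max(stats, key=lambda a: ucb_score(*stats[a], total_pulls, risk))
```

In a safe situation (risk near 0) the under-explored arm gets a large bonus and is tried; in a critical one (risk near 1) the system falls back to pure exploitation of the best-known arm.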
Comparer les morphogenèses urbaines en Europe et aux Etats-Unis par la simulation à base d'agents : approches multi-niveaux et environnements de simulation spatiale by Thomas Louail

1 edition published in 2010 in French and held by 1 WorldCat member library worldwide

The multilevel comparison of the spatial and hierarchical organisations of urban systems around the world highlights some universal properties (the rank-size law, the centre-periphery structure) but also a variety of more specific patterns (in the spatial distribution of populations, densities, prices, activities, etc.). Spatial economics and the evolutionary theory of urban systems both offer explanations for the emergence of such patterns, but the simulation models they support generally consider only one level of spatial organisation. Understanding and reconstructing the interdependencies between these levels is a crucial issue for long-term sustainable urban planning. This thesis presents a set of models and tools dedicated to the study of this question through agent-based simulation. They were developed in the context of the Simpop project, and focus particularly on comparing the morphogenesis of urban systems in Europe and in the United States over the period 1800-2000. These tools include the simpopNano agent-based model and experimentation modules integrated into an extensible, generic, GIS-based platform dedicated to the systematic, collective and intelligent exploration of spatial simulation models. Together, they reinforce the idea that differences in the topology of street networks could be sufficient to generate the more diluted spatial distributions observed in US cities compared to European ones. This intra-urban model is then articulated with an inter-urban one, Simpop2, in a multilevel model. The latter serves to compare a variety of agent-based simulation approaches for coupling models at various levels of abstraction.
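The rank-size law mentioned above states that the r-th largest city's population behaves roughly as P(r) ~ P(1) / r**alpha. A standard way to check it on a list of city populations (not a method claimed by the thesis, just the textbook log-log fit) is:

```python
import math

def zipf_exponent(populations):
    """Estimate the rank-size exponent alpha by ordinary least squares
    on log-log coordinates: log P(r) = log P(1) - alpha * log r."""
    pops = sorted(populations, reverse=True)
    xs = [math.log(rank) for rank in range(1, len(pops) + 1)]
    ys = [math.log(p) for p in pops]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope  # alpha; ~1 for a classic Zipf-like urban system
```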
La réception des discours de développement durable et d'actions de responsabilité sociale des entreprises dans les pays du Sud : le cas d'un don d'ordinateurs au Sénégal dans le cadre d'un projet tripartite de solidarité numérique by Géraldine Guérillot

1 edition published in 2012 in French and held by 1 WorldCat member library worldwide

Our study questions the reception of sustainable development and CSR discourses and practices in the countries of the Global South. Our starting hypothesis is that these discourses place those countries in a double-bind situation. After outlining the debates on sustainable development and CSR, our empirical research focuses on a tripartite Franco-Senegalese digital solidarity project. A quasi-ethnographic, sometimes auto-ethnographic approach, inspired by K. Stewart, allows us to look for signs of a double bind and to see how certain practices, discourses or situations reveal an unease in the reception. Confronting these observations with the framework of gift theory, we note that in the observed case the effects of the practices and discourses are the opposite of what research on the gift predicts: the gift of computers appears unilateral, creates no bond, and on the contrary seems to drive the protagonists apart. The theories of Bateson and the Palo Alto school bring a systemic perspective to this situation, showing that North and South are caught in paradoxical injunctions that push them ever further and threaten to break the relationship apart (schismogenesis). We conclude on the need, on the one hand, to let a multiplicity of voices express themselves and, on the other, for a critique that will finally set a learning process in motion. This exploratory research ultimately leads less to a radical critique of sustainable development and CSR actions than of the way they are put into practice in development aid. Several voices and several actors are needed, which together may enable a new North-South dialogue for a more responsible CSR, a more solidary digital solidarity, and a more sustainable development.
Continuity of user tasks execution in pervasive environments by Imen Ben Lahmar

1 edition published in 2012 in English and held by 1 WorldCat member library worldwide

The proliferation of small devices and advances in various technologies have introduced the concept of pervasive environments. In these environments, user tasks can be executed using deployed components provided by devices with different capabilities. One appropriate paradigm for building user tasks in pervasive environments is Service-Oriented Architecture (SOA). Using SOA, user tasks are represented as an assembly of abstract components (i.e., services) without specifying their implementations; they must therefore be resolved into concrete components. Task resolution involves automatic matching and selection of components across various devices. For this purpose, we present an approach that selects, for each service of a user task, the best device and component by considering user preferences, device capabilities, service requirements and component preferences. Because of the dynamicity of pervasive environments, we are also interested in the continuity of execution of user tasks. We therefore present an approach that allows components to monitor, locally or remotely, changes in the properties on which they depend. We also consider the adaptation of user tasks to cope with the dynamicity of pervasive environments: to overcome detected failures, adaptation is carried out by a partial reselection of devices and components. In case of a mismatch between the abstract user task and the concrete level, we propose a structural adaptation approach that injects predefined adaptation patterns exhibiting extra-functional behaviour. Finally, we propose the architectural design of a middleware enabling task resolution, monitoring of the environment and task adaptation, and provide implementation details of the middleware's components along with evaluation results.
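The per-service selection step can be sketched as a weighted scoring over candidate (device, component) pairs. The criteria names, weights and normalised scores below are hypothetical; the abstract names the four criteria but not how they are combined.

```python
def select_component(candidates, weights):
    """Rank candidate (device, component) pairs for one abstract service
    by a weighted sum over scoring criteria assumed normalised to [0, 1];
    each candidate is a dict holding a 'name' plus one score per criterion."""
    def utility(cand):
        return sum(weights[crit] * cand[crit] for crit in weights)
    return max(candidates, key=utility)
```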
Biomechanical online signature modeling applied to verification by Jânio Coutinho Canuto

1 edition published in 2014 in English and held by 1 WorldCat member library worldwide

This thesis deals with the modelling and verification of online signatures. The first part has as its main theme the biomechanical modelling of the hand movements associated with the signing gesture. A model based on the Minimum Jerk (MJ) criterion was chosen amongst the several available motor-control theories. Next, the problem of segmenting signature trajectories into strokes that better fit the chosen kinematic model is studied, leading to the development of an iterative segmentation method. Both the choice of the model and the segmentation method are strongly based on the trade-off between reconstruction quality and compression. In the second part, the polynomial model provided by the MJ criterion is intentionally degraded: the non-real zeros of the polynomials are discarded, and the effects of this degradation are studied from a biometric verification perspective. This degradation is equivalent to the signal processing technique known as Infinity Clipping, originally applied to speech signals. For signatures, as for speech, the preservation of essential information was observed in signature verification tasks. In fact, using only the Levenshtein distance over the infinitely clipped representation, verification error rates comparable to those of more elaborate methods were obtained. Furthermore, the symbolic representation yielded by the Infinity Clipping technique allows for a conceptual relationship between the number of polynomial segments obtained through the Minimum Jerk-based iterative segmentation and the Lempel-Ziv complexity. This relationship is potentially useful for the analysis of online signature signals and the improvement of recognition systems.
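The pipeline of a symbolic clipped representation compared with the Levenshtein distance can be sketched as follows. The thesis's exact symbolic alphabet is not stated in this summary; the run-collapsed sign string is one plausible reading of infinity clipping, while the Levenshtein implementation is the standard dynamic-programming one.

```python
def infinity_clip(signal):
    """Keep only the sign of each sample, then collapse runs so the
    string records the zero-crossing pattern, e.g. '+-+'."""
    signs = ['+' if x >= 0 else '-' for x in signal]
    clipped = [signs[0]]
    for s in signs[1:]:
        if s != clipped[-1]:
            clipped.append(s)
    return ''.join(clipped)

def levenshtein(a, b):
    """Classic edit distance between two symbol strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[-1] + 1,                 # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]
```

Verification then reduces to thresholding the edit distance between the clipped query signature and the clipped reference.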
La plate-forme RAMSES pour un triple écran interactif : application à la génération automatique de télévision interactive by Julien Royer

1 edition published in 2009 in French and held by 1 WorldCat member library worldwide

With the digital revolution, the use of video has evolved strongly over recent decades, moving from cinema to television and then to the web, from fictional narrative to documentary, and from editorial production to user-generated content. Media are vectors for exchanging information, knowledge, personal "reports" and emotions. The automatic enrichment of multimedia documents has been a research topic ever since the advent of these media. In this context, we first propose a model of the different concepts and actors involved in automatically analysing multimedia documents in order to dynamically deploy interactive services related to the media content. We thus define the concepts of analyser, interactive service and multimedia document description, and finally the functions needed to make them interact. The resulting analysis model stands out from the literature by proposing a modular, open and evolvable architecture. We then present the implementation of these concepts in a demonstration prototype, which highlights the contributions put forward in the description of the models. An implementation and recommendations are detailed for each model. To show the results of implementing the proposed solutions on the platform, such as the MPEG-7 standard for description, MPEG-4 BIFS for interactive scenes, and OSGi for the overall architecture, we present various examples of interactive services integrated into the platform. These make it possible to verify the platform's ability to adapt to the needs of one or more interactive services.
Communication abstraction for data synchronization in distributed virtual environments : application to multiplayer games on mobile phones by Abdul Malik Khan

1 edition published in 2010 in English and held by 1 WorldCat member library worldwide

The number of multiplayer game users has increased since the widespread adoption of the Internet. With the arrival of rich portable devices and faster cellular wireless networks, multiplayer games on mobile phones and PDAs are becoming a reality. For multiplayer games to be playable, they should be highly interactive and fair, and should present a consistent state to all players. Because of high wireless network latency and jitter, providing interactive games with a consistent state across the network is non-trivial. In this thesis, we propose different approaches for achieving consistency in mobile multiplayer games in the face of high latency and large, variable jitter. Although absolute consistency is impossible to achieve, because information takes time to travel from one place to another, we exploit the fact that strong consistency is not always required in the virtual world and can be relaxed in many cases. Our approach uses the underlying network latency and the positions of different objects in the virtual world to decide when to relax consistency and when to apply strong consistency mechanisms. We evaluate the approach by implementing these algorithms in J2ME-based games played on mobile phones. Consistency algorithms are complex and are often intermixed with the code of the game's core logic, which makes it hard to program a game and to change its code later. We therefore propose to separate the consistency mechanisms from the game logic and put them in a distributed component responsible for both consistency maintenance and communication over the network. We call this reusable component a Synchronization Medium.
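A decision rule of the kind described (latency and object positions choose between strong and relaxed consistency) might look like the sketch below. The thresholds, the latency-widened radius, and the two-mode policy are illustrative assumptions, not the thesis's published algorithm.

```python
def consistency_mode(latency_ms, object_distance,
                     interaction_radius=5.0, latency_budget_ms=150.0):
    """Illustrative policy: objects close enough to interact get strong
    consistency; distant ones get relaxed consistency. The interaction
    radius is widened as latency grows, since updates arrive late and
    remote objects may already be closer than they appear locally."""
    effective_radius = interaction_radius * (1.0 + latency_ms / latency_budget_ms)
    return 'strong' if object_distance <= effective_radius else 'relaxed'
```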
Classification multi-modèles des images dans les bases hétérogènes by Rostom Kachouri( )

1 edition published in 2010 in French and held by 1 WorldCat member library worldwide

Image recognition is a research field that has been widely studied by the scientific community. Work in this area mainly addresses the various applications of computer vision systems and the categorization of images from multiple sources. In this thesis, we focus on content-based image recognition systems for heterogeneous databases. Images in such databases belong to different concepts and represent heterogeneous content. A broad description ensuring a reliable representation is therefore often required. However, the extracted features are not necessarily all appropriate for discriminating between the different image classes present in a given database, hence the need to select relevant features according to the content of each database. In this work, an original adaptive selection method is proposed. It considers only those features judged best suited to the content of the image database in use. Moreover, the selected features generally do not perform equally well. Consequently, a classification algorithm that adapts to the discriminative power of the different selected features with respect to the content of the image database is strongly recommended. In this context, the multiple kernel learning approach is studied and an improvement of kernel weighting methods is presented. This approach proves unable to describe the non-linear relationships between the different description types, so we propose a new hierarchical multi-model classification method allowing a more flexible combination of multiple features.
According to the experiments carried out, this new classification method achieves very good recognition rates. Finally, the performance of the proposed method is demonstrated through a comparison with a set of approaches cited in the recent literature of the field
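The idea of adaptive feature selection (score each feature by how well it separates the classes present in a given database, then keep only the top-scoring ones) can be sketched as below. The Fisher score used here is a common stand-in for such a criterion; the thesis's actual selection measure may differ.

```python
def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def fisher_score(class_a, class_b):
    """(difference of class means)^2 / (sum of class variances)."""
    return (mean(class_a) - mean(class_b)) ** 2 / (
        variance(class_a) + variance(class_b) + 1e-12
    )

def select_features(features_a, features_b, top_k):
    """features_a[i] / features_b[i]: values of feature i over class A / B.
    Returns the indices of the top_k most discriminative features."""
    scores = [fisher_score(fa, fb) for fa, fb in zip(features_a, features_b)]
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return ranked[:top_k]

# Feature 0 separates the two classes well; feature 1 is pure noise.
class_a = [[1.0, 1.1, 0.9], [5.0, 4.9, 5.2]]
class_b = [[9.0, 9.2, 8.8], [5.1, 4.8, 5.0]]
print(select_features(class_a, class_b, top_k=1))  # [0]
```

Because the scores are recomputed per database, the retained feature subset adapts to whatever classes that particular heterogeneous database contains, which is the property the abstract argues for.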
A framework for distributed 3D graphics applications based on compression and streaming by Ivica Arsov( )

1 edition published in 2011 in English and held by 1 WorldCat member library worldwide

With the development of computer networks, chiefly the Internet, it has become increasingly easy to develop applications whose execution is split between a local computer, the client, and a remote computer at the other end of the transmission channel, the server. Hardware advances in recent years have made 3D rendering (games, map navigation, virtual worlds) possible on mobile devices. However, running such complex applications on the client terminal is impossible unless the quality of the displayed images or the computational requirements of the application are reduced. Various solutions have been proposed in the literature, but none of them satisfies all the requirements. The objective of this thesis is to propose an alternative, namely a new client-server architecture in which the interconnection of mobile devices is fully exploited. The main implementation requirements are addressed: minimizing network traffic, reducing the computational power required of the terminal, and preserving the user experience compared to local execution. First, a formal framework is developed to define and model distributed 3D graphics applications. Then, a new architecture that overcomes certain drawbacks found in state-of-the-art architectures is presented. The design of the new client-server architecture is validated by implementing a game and running simulations
Contribution à l'optimisation des systèmes de transmission optiques cohérents (Nx100 Gbit/s) utilisant le multiplexage en polarisation par des formats de modulation en phase et une conception de ligne limitant l'impact des effets non-linéaires by Aida Seck( )

1 edition published in 2014 in French and held by 1 WorldCat member library worldwide

The ever-increasing demand for capacity in very-high-bit-rate coherent optical transmission systems has paved the way for investigating several techniques, such as ultra-low-loss fibers, Erbium-doped fiber amplifiers, polarization and wavelength division multiplexing (WDM), coherent detection, multi-level modulation formats, and spatial division multiplexing. Among these, a clear trend has emerged towards advanced modulation formats that combine phase modulation with polarization division multiplexing. In this thesis, in order to increase the capacity-by-distance product of future coherent optical systems using wavelength and polarization division multiplexing, we first study spectral shaping of the transmitted signals to increase the information spectral density. For this purpose, we have numerically investigated the multi-channel transmission performance of Polarization-Switched QPSK (PS-QPSK) and compared it to that of Polarization-Division-Multiplexed QPSK (PDM-QPSK), using Root-Raised-Cosine (RRC) spectral shaping, in the context of a flexible channel grid. In addition, we have quantified the advantage of PS-QPSK over PDM-QPSK as a function of the system parameters, and discussed the benefit of RRC spectral shaping compared to tight filtering at the transmitter side with a 2nd-order super-Gaussian filter. Furthermore, we have focused on the nonlinear effects that limit the transmission distance by degrading the transmitted symbols during propagation. Analyzing and reducing the impact of nonlinear effects is essential when using technologies that increase the information spectral density, such as polarization division multiplexing, which introduces new nonlinear effects through additional interactions between symbols during propagation through the fiber. A reduction of the impact of nonlinear effects is therefore necessary for the development of future systems with higher bit rates of 400 Gbit/s and 1 Tbit/s per channel. In this thesis we have established design rules to reduce the impact of nonlinear effects in 100 Gbit/s-per-channel WDM optical transmission systems that use polarization multiplexing
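The RRC spectral shaping mentioned above uses the textbook root-raised-cosine impulse response. The sketch below implements that standard formula; the symbol period T and roll-off beta are free parameters here, not values taken from the thesis.

```python
import math

def rrc(t, T=1.0, beta=0.2):
    """Root-raised-cosine impulse response h(t) (textbook formula)."""
    if abs(t) < 1e-12:
        # Limit value at t = 0.
        return (1.0 / T) * (1.0 + beta * (4.0 / math.pi - 1.0))
    if abs(abs(t) - T / (4.0 * beta)) < 1e-12:
        # Limit value at the removable singularity t = ±T/(4*beta).
        return (beta / (T * math.sqrt(2.0))) * (
            (1.0 + 2.0 / math.pi) * math.sin(math.pi / (4.0 * beta))
            + (1.0 - 2.0 / math.pi) * math.cos(math.pi / (4.0 * beta))
        )
    x = t / T
    num = math.sin(math.pi * x * (1.0 - beta)) + 4.0 * beta * x * math.cos(
        math.pi * x * (1.0 + beta)
    )
    den = math.pi * x * (1.0 - (4.0 * beta * x) ** 2)
    return num / den / T

# The pulse is symmetric and peaks at t = 0.
samples = [rrc(t / 10.0) for t in range(-40, 41)]
assert all(abs(a - b) < 1e-12 for a, b in zip(samples, samples[::-1]))
assert max(samples) == rrc(0.0)
```

Applying this filter at both transmitter and receiver yields a raised-cosine end-to-end response, giving zero inter-symbol interference at the symbol instants while confining the spectrum, which is why RRC shaping suits the flexible-grid, high-spectral-density scenarios the abstract studies.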
Audience Level
Audience level: 0.89 (from 0.87 for Nouvelle a ... to 0.91 for Authentifi ...)

Related Identities
Alternative Names
École doctorale S&I (Evry)

ED 457

ED Sciences et Ingénierie (Evry)



French (13)

English (7)