WorldCat Identities

Kieffer, Michel

Works: 22 works in 29 publications in 2 languages and 171 library holdings
Roles: Opponent, Author, Thesis advisor
Classifications: QA297.75, 511.42
Most widely held works by Michel Kieffer
Joint source-channel decoding : a cross-layer perspective with application in video broadcasting over mobile and wireless networks by Pierre Duhamel( Book )

4 editions published in 2010 in English and held by 50 WorldCat member libraries worldwide

Treats joint source and channel decoding in an integrated way. Gives a clear description of the problems in the field, together with the mathematical tools for their solution. Contains many detailed examples useful for practical applications of the theory to video broadcasting over mobile and wireless networks. Traditionally, cross-layer and joint source-channel coding were seen as incompatible with classically structured networks, but recent advances in theory have changed this situation. Joint source-channel decoding is now seen as a viable alternative to the separate decoding of source and channel codes, provided the protocol layers are taken into account. A joint source/protocol/channel approach is thus addressed in this book: all levels of the protocol stack are considered, showing how the information in each layer influences the others. This book provides the tools to show how cross-layer and joint source-channel coding and decoding are now compatible with present-day mobile and wireless networks, with a particular application to the key area of video transmission to mobiles. Typical applications are broadcasting, or point-to-point delivery of multimedia content, which are very timely in the context of the current development of mobile services such as audio (MPEG-4 AAC) or video (H.263, H.264) transmission using recent wireless transmission standards (DVB-H, DVB-SH, WiMAX, LTE). This cross-disciplinary book is ideal for graduate students, researchers, and more generally professionals working in signal processing for communications or in networking applications, who are interested in reliable multimedia transmission. It is also of interest to people involved in cross-layer optimization of mobile networks: its content may provide them with other points of view on their optimization problem, enlarging the set of tools they can use.
Pierre Duhamel is director of research at CNRS/LSS and has previously held research positions at Thomson-CSF, CNET, and ENST, where he was head of the Signal and Image Processing Department. He has served as chairman of the DSP committee and associate editor of the IEEE Transactions on Signal Processing and Signal Processing Letters, as well as co-chair of the MMSP and ICASSP conferences. He was awarded the Grand Prix France Telecom by the French Academy of Sciences in 2000. He is co-author of more than 80 papers in international journals, 250 conference proceedings, and 28 patents. Michel Kieffer is an assistant professor in signal processing for communications at the Université Paris-Sud and a researcher at the Laboratoire des Signaux et Systèmes, Gif-sur-Yvette, France. His research interests are in joint source-channel coding and decoding techniques for the reliable transmission of multimedia content. He serves as associate editor of Signal Processing (Elsevier). He is co-author of more than 90 contributions to journals, conference proceedings, and book chapters.

2 editions published in 1999 in French and held by 3 WorldCat member libraries worldwide

Amélioration des services vidéo fournis à travers les réseaux radio mobiles by Khaled Bouchireb( Book )

in English and held by 2 WorldCat member libraries worldwide

In this thesis, video communication systems are studied for application to video services provided over wireless mobile networks. This work focuses on point-to-multipoint communications and proposes several enhancements to current systems. First, a scheme combining robust decoding with retransmissions is defined so that the number of retransmissions is reduced and the quality of the received video can be controlled. As opposed to current retransmission-less and retransmission-based schemes, this scheme also offers the possibility to trade throughput for quality and vice versa. Then, the transmission of a two-level scalable video sequence towards several clients is considered. Schemes using the basic Go-Back-N (GBN) and Selective Repeat (SR) Automatic Repeat reQuest (ARQ) techniques are studied. A new scheme is also proposed and studied; it reduces the buffering requirement at the receiver end while keeping the performance optimal (in terms of the amount of data successfully transmitted within a given period of time). The different schemes were shown to be applicable to 2G, 3G and WiMAX systems. Finally, we prove that retransmissions can be used in point-to-multipoint communications up to a given limit on the number of receivers (contrary to current wireless systems, where ARQ is only used in point-to-point communications). If retransmissions are introduced in the current Multicast/Broadcast services (supported by 3GPP and mobile WiMAX), the system will guarantee the nominal quality to a certain number of receivers, whereas the current Multicast/Broadcast services do not guarantee the nominal quality to any receiver.
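The trade-off between the two basic ARQ techniques mentioned above can be illustrated with a toy simulation (a sketch under simplified assumptions, not the thesis's model: independent packet losses, instantaneous error-free acknowledgements, and a fixed sender window):

```python
import random

def simulate_arq(num_packets, loss_prob, scheme, window=4, seed=0):
    """Count the total packet transmissions needed to deliver `num_packets`
    over a link that drops each packet independently with `loss_prob`.
    Go-Back-N resends from the first lost packet of the window onwards;
    Selective Repeat resends only the lost packet."""
    rng = random.Random(seed)
    transmissions = 0
    if scheme == "SR":
        for _ in range(num_packets):
            transmissions += 1
            while rng.random() < loss_prob:   # resend until received
                transmissions += 1
    elif scheme == "GBN":
        base = 0
        while base < num_packets:
            sent = min(window, num_packets - base)
            transmissions += sent             # the whole window is sent
            # index of the first lost packet in the window, if any
            lost = next((i for i in range(sent)
                         if rng.random() < loss_prob), None)
            if lost is None:
                base += sent                  # whole window acknowledged
            else:
                base += lost                  # everything from the loss is resent
    else:
        raise ValueError(scheme)
    return transmissions
```

With losses, SR typically needs far fewer transmissions than GBN, at the price of a reordering buffer at the receiver, which is the trade-off the proposed scheme targets.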
WIBOX - Une passerelle pour une réception robuste de vidéo diffusée via WIMAX et une rediffusion indoor via WIFI by Usman Ali( Book )

2 editions published in 2010 in English and held by 2 WorldCat member libraries worldwide

This PhD study investigates the tools necessary to implement a device (the WiBOX) that can robustly receive video broadcast over WiMAX and rebroadcast it over WiFi. The WiBOX should not only provide WiMAX service access to a WiFi user; it should also achieve reasonable video quality even with a very weak WiMAX signal, and, for the WiFi rebroadcast, it should use alternative recovery techniques and avoid the delays caused by conventional retransmissions. This helps improve the quality experienced by WiFi users while remaining consistent with the broadcast scenario. To achieve these objectives, one has to consider several robust tools, often deployed to address problems such as packet losses, synchronization failures, high delay, or limited throughput, encountered while receiving video through a WiMAX/WiFi link. These robust tools can be deployed at several protocol layers; notable examples are Joint Source-Channel Decoding (JSCD) techniques deployed at the application (APL) layer, iterative decoding techniques deployed at the physical (PHY) layer, and header recovery, estimation, or synchronization tools deployed at various layers. For these robust tools to perform efficiently, a cross-layer approach enabling the exchange of useful information between the protocol layers, and a complete analysis of the protocol stack, are required. Some of these tools have requirements that are not compliant with the Standard Protocol Stack (SPS) and require a Soft-Permeable Protocol Stack (S-PPS), which allows erroneous packets, containing soft information such as A Posteriori Probabilities (APP) or likelihood ratios, to flow to the higher layers. More importantly, for performance enhancement, these tools should mutually benefit and reinforce each other instead of undoing each other's advantage.
To increase throughput, both the WiMAX and WiFi communication standards use packet aggregation: several packets are aggregated at a given layer of the protocol stack into the same burst to be transmitted. One can deploy Frame Synchronization (FS) to synchronize and recover the aggregated packets; however, when transmission over a noisy channel is considered, FS can cause the loss of several error-free or partially error-free packets, which could otherwise be beneficial for other tools, e.g., JSCD and header recovery tools operating at higher layers of the S-PPS. Rebroadcasting video over WiFi can significantly increase the packet loss rate since retransmission is omitted; this can be overcome by packet-level Forward Error Correction (FEC) techniques. The FS and packet-level FEC decoders for the S-PPS should not only allow soft information to flow from the PHY layer but should also mutually benefit from the JSC decoders deployed at the APL layer. In this thesis, we propose several Joint Protocol-Channel Decoding (JPCD) techniques for FS and packet-level FEC decoders operating in the S-PPS. In the first part of this thesis, we propose several robust FS methods for the S-PPS based on the implicit redundancy present in the protocol and on the soft information from the soft decoders at the PHY layer. First, we propose a trellis-based algorithm that provides the APPs of packet boundaries. The possible successions of packets forming an aggregated packet are described by a trellis. The resulting algorithm is very efficient (optimal in some sense) but requires knowledge of the whole aggregated packet beforehand, which might not be possible in latency-constrained situations. Thus, in a second step, we propose a low-delay, reduced-complexity Sliding Trellis (ST)-based FS technique, where each burst is divided into overlapping windows in which FS is performed.
Finally, we propose an on-the-fly three-state (3S) automaton, where the packet length is estimated using implicit redundancy and Bayesian hypothesis testing is performed to retrieve the correct FS. These methods are illustrated on the WiMAX Medium Access Control (MAC) layer and do not need any supplementary framing information. In practice, these improvements increase the number of packets that reach the JSC decoders. In the second part, we propose a robust packet-level FEC decoder for the S-PPS which, in addition to using the redundant FEC packets, uses the soft information provided by the PHY layer (instead of hard bits, i.e., a bitstream of 1s and 0s) along with the protocol redundancy, in order to provide robustness against bit errors. Although it does not impede the flow of soft information required by the S-PPS, it needs support from the header recovery techniques at the lower layers to forward erroneous packets, and from the JSC decoders at the APL layer to detect and remove remaining errors. We have investigated the standard RTP-level FEC and compared the performance of the proposed FEC decoder with alternative approaches. The proposed FS and packet-level FEC techniques reduce the number of dropped packets, increase the number of packets relayed to the video decoder at the APL layer, and improve the received video quality.
Mémoire du Rhin : la garde du Rhin by Exposition. Strasbourg. 1993( Book )

1 edition published in 1993 in French and held by 1 WorldCat member library worldwide

Joint Source-Network Coding & Decoding by Lana Iwaza( )

1 edition published in 2013 in English and held by 1 WorldCat member library worldwide

While network data transmission was traditionally accomplished via routing, network coding (NC) broke this rule by allowing network nodes to perform linear combinations of the incoming data packets. Network operations are performed in a Galois field of fixed size q, and decoding only involves Gaussian elimination on the received network-coded packets. However, in practical wireless environments, NC may be susceptible to transmission errors caused by noise, fading, or interference. This drawback is quite problematic for real-time applications, such as multimedia content delivery, where timing constraints may lead to the reception of an insufficient number of packets and consequently to difficulties in decoding the transmitted sources. At best, some packets can be recovered, while in the worst case, the receiver is unable to recover any of the transmitted packets. In this thesis, we propose joint source-network coding and decoding schemes whose purpose is to provide an approximate reconstruction of the source in situations where perfect decoding is not possible. The main motivation is that source redundancy can be exploited at the decoder to estimate the transmitted packets, even when some of them are missing. The redundancy can be either natural, i.e., already existing, or artificial, i.e., externally introduced. Regarding artificial redundancy, we choose multiple description coding (MDC) as a way of introducing structured correlation among uncorrelated packets. By combining MDC and NC, we aim to ensure a reconstruction quality that improves gradually with the number of received network-coded packets. We consider two different approaches for generating descriptions. The first consists in generating multiple descriptions via a real-valued frame expansion applied at the source before quantization; data recovery is then achieved by solving a mixed integer linear problem.
The second technique uses a correlating transform in some Galois field to generate the descriptions, and decoding involves a simple Gaussian elimination. Such schemes are particularly interesting for multimedia content delivery, such as video streaming, where quality increases with the number of received descriptions. Another application of such schemes would be multicasting or broadcasting data towards mobile terminals experiencing different channel conditions. The channel is modeled as a binary symmetric channel (BSC) and we study its effect on the decoding quality of both proposed schemes. A performance comparison with a traditional NC scheme is also provided. Concerning natural redundancy, a typical scenario is a wireless sensor network in which geographically distributed sources capture spatially correlated measurements. We propose a scheme that exploits this spatial redundancy and provides an estimate of the transmitted measurement samples by solving an integer quadratic problem. The reconstruction quality obtained is compared with that of a classical NC scheme.
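The basic NC decoding step described above, Gaussian elimination over a Galois field, can be sketched for the simplest case q = 2, where packets are bit vectors and linear combinations reduce to XORs (an illustrative toy, not the implementation used in the thesis):

```python
def gf2_elimination(rows):
    """Decode network-coded packets over GF(2).

    `rows` is a list of (coeff_vector, payload_vector) pairs, both lists of
    0/1 ints: each received packet is the XOR of the source packets selected
    by its coefficient vector. Returns the source payloads if the coefficient
    matrix is full rank, else None (decoding impossible)."""
    n = len(rows[0][0])                        # number of source packets
    rows = [(c[:], p[:]) for c, p in rows]     # work on copies
    for col in range(n):
        # find a pivot row with a 1 in this column, below the settled rows
        piv = next((r for r in range(col, len(rows)) if rows[r][0][col]), None)
        if piv is None:
            return None                        # rank deficient
        rows[col], rows[piv] = rows[piv], rows[col]
        pc, pp = rows[col]
        # XOR the pivot row into every other row having a 1 in this column
        for r in range(len(rows)):
            if r != col and rows[r][0][col]:
                rows[r] = ([a ^ b for a, b in zip(rows[r][0], pc)],
                           [a ^ b for a, b in zip(rows[r][1], pp)])
    return [p for _, p in rows[:n]]
```

For example, three sources combined with coefficient vectors [1,1,0], [0,1,1] and [1,1,1] form a full-rank GF(2) system and are recovered exactly; replacing the last vector with [1,0,1] (the XOR of the first two) makes the system singular and decoding fails, which is the situation the joint source-network decoding schemes are designed to mitigate.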
Codage/décodage source-canal conjoint des contenus multimédia by Manel Abid( )

1 edition published in 2012 in English and held by 1 WorldCat member library worldwide

In this thesis, we study joint source-channel coding and decoding schemes for multimedia content. We show how the redundancy left by the video coder can be exploited to perform robust decoding of sequences transmitted over a noisy mobile radio link. Thanks to the proposed joint decoding scheme, the number of corrupted packets is significantly reduced at the cost of a very slight increase in rate. We then apply this robust decoding scheme to multiple-description transmission over a mixed Internet and mobile radio architecture. Joint source-channel decoding of the received packets corrects transmission errors and thus increases the number of packets the decoder can use to compensate for lost packets. The efficiency of this scheme is studied against a classical scheme based on hard channel decisions and an error-correcting code introducing the same level of redundancy. The second part of the thesis is devoted to joint source-channel coding schemes based on a redundant transform. Two estimation schemes are proposed. In the first, we exploit the structured redundancy introduced and the bounded nature of the quantization noise to build a consistent estimator that corrects transmission errors. In the second, we apply the belief propagation algorithm to evaluate the posterior distributions of the components of the input signal from the noisy channel outputs. We then apply both schemes to estimate the input of an oversampled filter bank.
Applied interval analysis by Luc Jaulin( Book )

2 editions published in 2001 in English and held by 1 WorldCat member library worldwide

This book is about guaranteed numerical methods based on interval analysis for approximating sets, and about the application of these methods to vast classes of engineering problems. Guaranteed means here that inner and outer approximations of the sets of interest are obtained, which can be made as precise as desired at the cost of increasing the computational effort. It thus becomes possible to achieve tasks still thought by many to be out of the reach of numerical methods, such as finding all solutions of sets of non-linear equations and inequalities, or all global optimizers of possibly multi-modal criteria. The basic methodology is explained as simply as possible, in a concrete and readily applicable way, with a large number of figures and illustrative examples. Some of the techniques reported appear in book format for the first time. The ability of the approach advocated here to solve non-trivial engineering problems is demonstrated through examples drawn from the fields of parameter and state estimation, robust control, and robotics. Enough detail is provided to allow readers with other applications in mind to grasp their significance. An in-depth treatment of implementation issues facilitates the understanding and use of freely available software that makes interval computation about as easy as computation with floating-point numbers. The reader is even given the basic information needed to build his or her own C++ interval library.
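The flavor of these guaranteed methods can be conveyed with a toy branch-and-prune solver (a minimal sketch, not the book's library; in particular, outward rounding of interval endpoints is ignored here): interval evaluation of f over a box either proves that the box contains no zero or triggers a bisection, yielding an outer approximation of the solution set.

```python
class Interval:
    """Minimal closed-interval arithmetic (addition and multiplication only)."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __mul__(self, o):
        ps = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(ps), max(ps))
    def contains(self, x):
        return self.lo <= x <= self.hi
    @property
    def width(self):
        return self.hi - self.lo

def solve(f, box, tol=1e-6):
    """Branch-and-prune: return boxes of width <= tol that may contain a
    zero of f. A box whose interval image excludes 0 is discarded -- this
    discard is guaranteed, since the image encloses the true range of f."""
    stack, out = [box], []
    while stack:
        x = stack.pop()
        if not f(x).contains(0.0):
            continue                          # provably no zero in this box
        if x.width <= tol:
            out.append(x)
        else:
            mid = 0.5 * (x.lo + x.hi)
            stack += [Interval(x.lo, mid), Interval(mid, x.hi)]
    return out

# enclose all solutions of x^2 - 2 = 0 over [-3, 3]
boxes = solve(lambda x: x * x + Interval(-2.0, -2.0), Interval(-3.0, 3.0))
```

The returned boxes enclose both roots ±√2 to the requested tolerance; no floating-point iteration from a starting guess could offer the same "no solution missed" guarantee.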
La synchronisation robuste en temps et en fréquence dans un système de communication sans fil de type 802.11a. by Cong Luong Nguyen( )

1 edition published in 2014 in English and held by 1 WorldCat member library worldwide

The time and frequency synchronization problem in the IEEE 802.11a OFDM (Orthogonal Frequency Division Multiplexing) wireless communication system is investigated. Although solutions to compensate for time and frequency offsets have already been proposed, we developed a new approach, conforming to the IEEE 802.11a standard, to enhance frame synchronization between mobile stations. This approach exploits not only the reference information usually specified by the standard, such as training sequences, but also additional sources of information available at the physical layer that are known to both the transmitter and the receiver and can therefore be exploited. Based on the protocol, we showed that parts of the SIGNAL field, considered as a reference sequence of the physical frame, are either known or predictable from the RTS (Request to Send) and CTS (Clear to Send) control frames when the CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance) mechanism is triggered jointly with channel-dependent bit-rate adaptation algorithms. Moreover, the received RTS control frame allows the receiver to estimate the channel before the synchronization stage. Using the knowledge of the SIGNAL field and the channel information, we developed multistage joint time/frequency synchronization and channel estimation algorithms that conform to the standard. Simulation results showed strongly improved performance in terms of synchronization failure probability in comparison with existing algorithms.
Codage de sources avec information adjacente et connaissance incertaine des corrélations by Elsa Dupraz( )

1 edition published in 2013 in French and held by 1 WorldCat member library worldwide

In this thesis, we consider the problem of source coding with side information available at the decoder only. More precisely, we consider the case where the joint distribution between the source and the side information is not perfectly known. In this context, we performed a performance analysis of the lossless source coding scheme using information-theoretic tools. We then proposed a practical coding scheme able to deal with the uncertainty on the joint probability distribution. This coding scheme is based on non-binary LDPC codes and on an Expectation-Maximization algorithm. For this problem, a key issue is the design of efficient LDPC codes; in particular, good code degree distributions have to be selected, and we proposed an optimization method for their selection. Finally, we considered a lossy coding scheme, assuming that the correlation channel between the source and the side information is described by a hidden Markov model with Gaussian emissions. For this model, we again performed a performance analysis and proposed a practical coding scheme based on non-binary LDPC codes and on MMSE reconstruction using an MCMC method. In our solution, these two components are able to exploit the memory induced by the hidden Markov model.
Codage de Wyner-Ziv en présence de qualité incertaine de l'information adjacente by Francesca Bassi( Book )

1 edition published in 2010 in English and held by 1 WorldCat member library worldwide

The main objective of this thesis is to propose a theoretical framework for describing the source coding problems with side information at the decoder that arise in practical applications. The theory of distributed source coding largely rests on the assumption that the sources considered are stationary and that the statistical characteristics of the signals are known a priori. These conditions are not met in practical applications, such as a distributed video coding scheme. We define a signal model that is an alternative to the quadratic Gaussian model usually taken as a reference. This model captures the characteristics of natural signals, whose correlation noise levels vary over time. We consider several coding problems, defined by different degrees of access to the side information at the encoder and at the decoder, and discuss their ability to capture the nature of the Wyner-Ziv coding problem as it appears in practical applications. The last part of this thesis focuses on issues in the construction of practical schemes. We address the coding problem for systems in which rate adaptation cannot be performed in the usual way because no feedback channel is available. Applying standard solutions, though possible, does not appear suitable. We therefore propose an alternative architecture based on components optimized for the quadratic Gaussian source model.
Vers une solution réaliste de décodage source-canal conjoint de contenus multimédia by Cédric Marin( Book )

1 edition published in 2009 in French and held by 1 WorldCat member library worldwide

Caractérisation analytique et optimisation de codes source-canal conjoints by Amadou Tidiane Diallo( )

1 edition published in 2012 in French and held by 1 WorldCat member library worldwide

Joint source-channel codes simultaneously compress data and protect the generated bitstream against transmission errors. Like most source codes, these codes are non-linear. Their potential interest is to offer good compression and error-correction performance at short code lengths. The performance of a source code is measured by the difference between the entropy of the source to be compressed and the average number of bits needed to encode a symbol of that source. The performance of a channel code is measured by the minimum distance between codewords or between sequences of codewords, and more generally by the distance spectrum. Classical codes come with tools for efficiently evaluating these performance criteria, and the synthesis of good source codes or good channel codes has been widely explored since Shannon's work. Analogous tools for joint source-channel codes, both for performance evaluation and for the synthesis of good codes, remained to be developed, even though some proposals had been made in the past. This thesis focuses on the family of joint source-channel codes that can be described by finite-state automata. Error-correcting quasi-arithmetic codes and error-correcting variable-length codes belong to this family. The way an automaton can be obtained for a given code is recalled. From an automaton, it is possible to build a product graph describing all pairs of paths diverging from a common state and converging to another state.
We showed that, thanks to Dijkstra's algorithm, it is then possible to evaluate the free distance of a joint code with polynomial complexity. For error-correcting variable-length codes, we proposed additional, easy-to-evaluate bounds. These bounds extend the Plotkin and Heller bounds to variable-length codes. Bounds can also be deduced from the product graph associated with a code in which only part of the codewords have been specified. These tools for bounding or exactly evaluating the free distance of a joint code make it possible to synthesize codes with good distance properties for a given redundancy, or minimizing the redundancy for a given free distance. Our approach organizes the search for good joint source-channel codes using trees. The root of the tree corresponds to a code with no bit specified, the leaves to codes with all bits specified, and the intermediate nodes to partially specified codes. Moving from the root towards the leaves, the upper bounds on the free distance decrease while the lower bounds increase. This allows a branch-and-prune algorithm to find the code with the largest free distance without exploring the entire tree of codes. The proposed approach allowed the construction of joint codes for the letters of the alphabet.
Compared with an equivalent tandem scheme (a source code followed by a convolutional code), the resulting codes have comparable performance (coding rate, free distance) while being less complex in terms of the number of decoder states. Several extensions of this work are in progress: 1) synthesis of error-correcting variable-length codes formalized as a mixed integer linear programming problem; 2) exploration of the space of error-correcting variable-length codes with an A*-type algorithm.
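For illustration, the free distance that these tools bound or evaluate can be brute-forced for a tiny variable-length code (a sketch only; this exhaustive search is exponential, whereas the thesis evaluates the same quantity in polynomial time via Dijkstra's algorithm on the product graph):

```python
from itertools import product

def free_distance(code, max_words=3):
    """Brute-force the free distance of a variable-length code: the minimum
    Hamming distance between the bitstreams of two distinct codeword
    sequences of equal total length (up to `max_words` codewords each).
    Codewords are given as strings of '0'/'1'."""
    streams = ["".join(combo)
               for n in range(1, max_words + 1)
               for combo in product(code, repeat=n)]
    best = None
    for i, a in enumerate(streams):
        for b in streams[i + 1:]:
            if len(a) == len(b):
                d = sum(x != y for x, y in zip(a, b))
                best = d if best is None else min(best, d)
    return best
```

For example, the code {"00", "11"} has free distance 2, while {"0", "10"} only achieves 1 (the sequences "0","0" and "10" differ in a single bit), showing why distance must be measured between codeword *sequences* rather than individual codewords.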
Régulation de la qualité lors de la transmission de contenus vidéo sur des canaux sans fils by Nesrine Changuel( )

1 edition published in 2011 in English and held by 1 WorldCat member library worldwide

Due to the emergence of new-generation mobiles and media streaming services, data traffic on mobile networks is continuously exploding. Despite the emergence of standards such as LTE, resources remain scarce and limited. Thus, efficiently sharing resources among broadcasters, or between unicast receivers connected to the same base station, is necessary. We target an efficient resource allocation achieving a fair received video quality across users and an equal transmission delay. To that end, the variety of the rate-distortion trade-offs of multimedia content is exploited. First, a centralized joint control of the encoding and transmission rates of multiple programs sharing the same channel is considered. The targets are a satisfactory and comparable video quality among the transmitted programs, with limited variations, as well as comparable transmission delays. The problem is solved using constrained optimization tools. Second, only the bandwidth allocation is controlled centrally; the encoding rate of each stream is controlled in a distributed manner. By modeling the problem as a feedback control system, the centralized bandwidth allocator only needs to feed the buffer level back to its associated remote content provider. Equilibrium and stability issues are addressed for buffers controlled both in bits and in seconds. In the case of a simple unicast connection, a cross-layer optimization of scalable video delivery over a wireless channel is performed. The optimization problem is cast in the context of dynamic programming. When low-complexity models are considered and the system characteristics are known, optimal solutions can be obtained. When the system is only partially known, for example when the state of the channel reaches the control process with delay, learning techniques are implemented.
Théorie des jeux et apprentissage pour les réseaux sans fil distribués by François Mériaux( )

1 edition published in 2013 in French and held by 1 WorldCat member library worldwide

In this thesis, we study wireless networks in which mobile terminals are free to choose their communication configuration. These configuration choices include the wireless access technology, access point association, coding-modulation scheme, occupied bandwidth, power allocation, etc. Typically, these choices are made to maximize performance metrics associated with each terminal. Under the assumption that mobile terminals take their decisions rationally, game theory can be applied to model the interactions between the terminals. Precisely, the main objective of this thesis is to study energy-efficient power control policies from which no terminal has an incentive to deviate. The framework of stochastic games is particularly suited to this problem and allows the achievable utility region of equilibrium power control strategies to be characterized. When the number of terminals in the network is large, we invoke mean field game theory to simplify the study of the system. Indeed, in a mean field game, the interactions between a player and all the other players are not considered individually; instead, one only studies the interactions between each player and a mean field, which is the distribution of the states of all the other players. Optimal power control strategies from the mean field formulation are studied. Another part of this thesis focuses on learning equilibria in distributed games. In particular, we show how best response dynamics and learning algorithms can converge to an equilibrium in a base station location game. For another scenario, namely a power control problem, we study the convergence of the best response dynamics. In this case, we propose a power control behavioral rule that converges to an equilibrium with very little information about the network.
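The best-response idea mentioned above can be illustrated with a classical target-SINR power control iteration (a hedged toy example in the spirit of Yates-type standard interference functions, not the exact energy-efficiency model studied in the thesis):

```python
def best_response_dynamics(gains, noise, beta, iters=200):
    """Illustrative best-response dynamics for uplink power control:
    each terminal repeatedly picks the minimum power reaching a target
    SINR `beta`, given the interference currently caused by the others.
    `gains[i][j]` is the (assumed) channel gain from terminal j to the
    receiver of terminal i; `noise` is the receiver noise power."""
    n = len(gains)
    p = [1.0] * n
    for _ in range(iters):
        # synchronous best responses: minimum power meeting the SINR target
        p = [beta * (noise + sum(gains[i][j] * p[j] for j in range(n) if j != i))
             / gains[i][i]
             for i in range(n)]
    return p
```

For a feasible two-terminal example (direct gains 1.0, cross gains 0.1, noise 0.1, target SINR 1.0), the iteration contracts to the fixed point p = 1/9 for both terminals: a power allocation from which neither terminal can reduce its power without falling below its target, i.e., an equilibrium of the game.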
Performance Analysis of Iterative Soft Interference Cancellation Algorithms and New Link Adaptation Strategies for Coded MIMO Systems. by Baozhu Ning( )

1 edition published in 2013 in English and held by 1 WorldCat member library worldwide

Current wireless communication systems are evolving towards more reactive radio resource management (RRM) and fast link adaptation (FLA) protocols so as to jointly optimize the MAC and PHY layers. In parallel, multiple-antenna technology and advanced turbo receivers have great potential for increasing the spectral efficiency of future wireless communication systems. These two trends, namely cross-layer optimization and turbo processing, call for new PHY-layer abstractions (also called performance prediction methods) that can capture the per-iteration performance of an iterative receiver, so that such advanced receivers can be smoothly introduced into FLA and RRM. This PhD thesis revisits in detail the architecture of the turbo receiver, in particular the class of iterative algorithms performing linear minimum mean square error detection with interference cancellation (LMMSE-IC). A semi-analytical performance prediction method is then proposed to analyze its evolution through stochastic modeling of each component. Intrinsically, the performance prediction method depends on the channel state information available at the receiver (CSIR), the type of channel coding (convolutional or turbo code), the number of codewords, and the type of probabilistic information on the coded bits fed back by the decoder for interference reconstruction and cancellation within the iterative LMMSE-IC algorithm. The second part addresses closed-loop link adaptation in coded MIMO systems based on the proposed PHY-layer abstractions for iterative LMMSE-IC receivers.
The proposed link adaptation scheme relies on low-rate feedback and exploits the selection of the spatial precoder (e.g., antenna selection) and of the modulation and coding scheme (MCS) so as to maximize the average rate subject to a block error rate constraint. Different coding schemes are tested, such as coding across all antennas or per-antenna coding. Simulations clearly show the significant gain obtained with turbo receivers compared to a conventional MMSE receiver
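The interference-cancellation step at the heart of an LMMSE-IC receiver can be illustrated in a minimal scalar form. The sketch below is purely illustrative (the thesis treats the full MIMO, multi-iteration case); all names and numbers are assumptions, not taken from the work itself.

```python
# Minimal scalar sketch of one LMMSE interference-cancellation step.
# Model: y = h*x + g*s + n, where x is the desired symbol, s is an
# interfering symbol, and n is noise. Real scalars only, for clarity.

def lmmse_ic_step(y, h, g, s_hat, var_x=1.0, var_res=0.0, var_n=0.1):
    """One iteration: cancel the reconstructed interference g*s_hat,
    then apply a scalar LMMSE filter to estimate x.

    var_res is the residual interference variance after cancellation;
    it shrinks as the decoder feedback becomes more reliable."""
    y_clean = y - g * s_hat                                    # cancellation
    w = h * var_x / (h * h * var_x + g * g * var_res + var_n)  # LMMSE filter
    return w * y_clean

# Toy usage: with perfect feedback (var_res = 0), x is recovered up to noise.
x, s, h, g, n = 1.0, -1.0, 0.9, 0.5, 0.05
y = h * x + g * s + n
x_hat = lmmse_ic_step(y, h, g, s_hat=s, var_res=0.0)
```

As decoder feedback improves across iterations, `var_res` decreases and the filter converges toward a matched filter on the cleaned observation, which is the mechanism a per-iteration performance prediction method has to track.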
Traffic-Aware Resource Allocation and Feedback Design in Wireless Networks by Apostolos Destounis( )

1 edition published in 2014 in English and held by 1 WorldCat member library worldwide

Wireless networks face an ever-increasing demand for data, which is expected to keep growing in the coming years. The main driver of this growth is the demand for video and data services. The most important approaches proposed to address this problem, notably the use of multiple antennas, OFDMA (already part of the 3GPP LTE standards), and the deployment of small-cell networks, have mostly been examined from a physical-layer point of view, focusing on performance metrics such as total system throughput. However, the characteristics of video and data traffic, as well as the individual demands of users, must be taken into account when designing radio resource allocation algorithms. The goal of this thesis is to study the impact of radio resource allocation algorithms (power control, precoding, scheduling) and of channel state information on the behavior of the users' queues. In particular, we study the problem of precoding and power control in the interference channel, with the aim of regulating the behavior of the users' queues, as well as the joint problem of channel feedback/estimation and user selection and scheduling, so as to guarantee queue stability for as large a set of traffic demands as possible in MISO-OFDMA broadcast systems. To this end, we use mathematical tools from heavy-traffic asymptotic models and stochastic stability theory
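A classical tool for the kind of queue stability studied here is max-weight scheduling, which serves the user maximizing the product of queue length and achievable rate. The sketch below illustrates that general principle only; it is an assumption for illustration, not necessarily the algorithm developed in the thesis.

```python
# Illustrative max-weight scheduler: at each slot, pick the user that
# maximizes queue_length * achievable_rate. Under mild assumptions this
# rule stabilizes the queues for any traffic inside the capacity region.

def max_weight_schedule(queues, rates):
    """Return the index of the user to serve in the current slot."""
    weights = [q * r for q, r in zip(queues, rates)]
    return max(range(len(weights)), key=weights.__getitem__)

# Toy usage: user 1 has the longest queue and a decent rate, so it wins.
chosen = max_weight_schedule(queues=[2, 10, 4], rates=[1.0, 0.8, 1.2])
```

The interplay examined in the thesis is that the `rates` fed to such a scheduler depend on channel feedback and estimation quality, which is itself a resource to be allocated.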
TCP and network coding : equilibrium and dynamic properties by Hamlet Medina Ruiz( )

1 edition published in 2014 in English and held by 1 WorldCat member library worldwide

Communication networks today share the same fundamental principle of operation: information is delivered to its destination by intermediate nodes in a store-and-forward manner. Network coding (NC) is a technique that allows intermediate nodes to send out packets that are linear combinations of previously received information. The main benefits of NC are potential throughput improvements and a high degree of robustness, which translates into loss resilience. These benefits have motivated deployment efforts for practical applications of NC, e.g., incorporating NC into congestion control schemes such as TCP-Reno to obtain a TCP-NC congestion protocol. In TCP-NC, TCP-Reno throughput is improved by sending a fixed amount of redundant packets, which mask part of the losses due, e.g., to channel transmission errors. In this thesis, we first analyze the dynamics of TCP-NC with random early detection (RED) as active queue management (AQM), using tools from convex optimization and feedback control. We study the network equilibrium point and the stability properties of TCP-Reno when NC is incorporated into the TCP/IP protocol stack. The existence and uniqueness of an equilibrium point is proved and characterized in terms of average throughput, loss rate, and queue length. Our study also shows that TCP-NC/RED becomes unstable when delays or link capacities increase, and also when the amount of redundant packets added by NC increases. Using a continuous-time model and neglecting feedback delays, we prove that TCP-NC is globally stable. We provide a sufficient condition for local stability when feedback delays are present. The fairness of TCP-NC with respect to TCP-Reno-like protocols is also studied. Second, we propose an algorithm to dynamically adjust the amount of redundant linear combinations of packets transmitted by NC. 
In TCP-NC with adaptive redundancy (TCP-NCAR), the redundancy is adjusted using a loss differentiation scheme, which estimates the amount of losses due to channel transmission errors and the amount due to congestion. Simulation results show that TCP-NCAR outperforms TCP-NC in terms of throughput. Finally, we analyze the equilibrium and stability properties of TCP-NCAR/RED. The existence and uniqueness of an equilibrium point is characterized experimentally. The TCP-NCAR/RED dynamics are modeled using a continuous-time model. Theoretical and simulation results show that TCP-NCAR tracks the optimal value of the redundancy for small packet loss rates. Moreover, simulations of the linearized model around equilibrium show that TCP-NCAR enlarges the stability region of TCP-Reno. We show that this is due to the compensating effect of the redundancy adaptation dynamics on TCP-Reno. These characteristics allow the congestion window adaptation mechanism of TCP-Reno to react smoothly to channel losses, avoiding some unnecessary rate reductions and increasing the local stability of TCP-Reno
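The core idea of adapting redundancy to an estimate of channel-induced losses can be sketched very simply. The functions below are a hedged illustration of the general principle (over-provision coded packets by roughly 1/(1-p) for an estimated channel erasure rate p), not the actual TCP-NCAR algorithm; all names and the margin parameter are assumptions.

```python
import math

# Illustrative redundancy adaptation: compensate an estimated channel
# erasure rate p_channel (losses NOT due to congestion) by sending
# extra linear combinations per data packet.

def redundancy_factor(p_channel, margin=0.01):
    """Return R so that R coded packets per data packet compensate, on
    average, a channel erasure rate p_channel; a small margin keeps the
    decoder slightly over-provisioned."""
    p = min(max(p_channel, 0.0), 0.99)  # clamp to a sane range
    return 1.0 / (1.0 - p) + margin

def packets_to_send(window, p_channel):
    """Linear combinations to transmit for a congestion window of
    `window` data packets."""
    return math.ceil(window * redundancy_factor(p_channel))
```

For example, with a 20% estimated channel loss rate a window of 10 data packets is covered by `packets_to_send(10, 0.2) == 13` transmissions; with no channel losses the overhead drops to a single extra packet.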
Secure Communication and Cooperation in Interference-Limited Wireless Networks by German Bassi( )

1 edition published in 2015 in French and held by 1 WorldCat member library worldwide

In this thesis, we conduct an information-theoretic study on two important aspects of wireless communications: the improvement of data throughput in interference-limited networks by means of cooperation between users, and the strengthening of the security of transmissions with the help of feedback. In the first part of the thesis, we focus on the simplest model that encompasses interference and cooperation, the Interference Relay Channel (IRC). Our goal is to characterize within a fixed number of bits the capacity region of the Gaussian IRC, independent of any channel conditions. To do so, we derive a novel outer bound and two inner bounds. Specifically, the outer bound is obtained thanks to a nontrivial extension we propose of the injective semideterministic class of channels, originally derived by Telatar and Tse for the Interference Channel (IC). In the second part of the thesis, we investigate the Wiretap Channel with Generalized Feedback (WCGF), and our goal is to provide a general transmission strategy that encompasses the existing results for different feedback models found in the literature. To this end, we propose two different inner bounds on the capacity of the memoryless WCGF. We first derive an inner bound that is based on the use of joint source-channel coding, which introduces time dependencies between the feedback outputs and the channel inputs through different time blocks. We then introduce a second inner bound where the feedback link is used to generate a key that encrypts the message partially or completely
Optimized broadcasting in wireless ad-hoc networks using network coding by Nour Kadi( Book )

1 edition published in 2010 in English and held by 1 WorldCat member library worldwide

Network coding is a technique which has attracted research interest since its emergence in 2000. It was shown that network coding, combined with wireless broadcasting, can potentially improve performance in terms of throughput, energy efficiency, and bandwidth utilization. Our study begins with integrating network coding with the multipoint relay (MPR) technique. MPR is an efficient broadcast mechanism which has been used in many wireless protocols. We show how combining the two techniques can reduce the number of transmitted packets and increase the throughput. We further reduce the complexity by proposing an opportunistic coding scheme which performs coding operations over the binary field. Instead of linearly combining packets over a large field, we sum packets modulo 2, which simply corresponds to XORing the corresponding bits of each packet. These operations are computationally cheap. Using the state information of its neighbors, a node in our scheme chooses which packets to encode and transmit at each transmission, trying to deliver a maximum number of packets. This requires an exchange of reception information between neighbors. To reduce the overhead of the required feedback, we propose a new coding scheme. It uses LT codes (a type of fountain code) to eliminate the need for perfect feedback among neighbors. This scheme performs encoding and decoding with logarithmic complexity. We optimize the LT code to speed up the decoding process. The optimization is achieved by proposing a new degree distribution to be used during the encoding process. This distribution allows intermediate nodes to decode more symbols even when few encoded packets are received
Audience Level
Audience level: 0.65 (from 0.52 for Joint sour ... to 0.99 for Régulatio ...)

English (19)

French (8)

Applied interval analysis