Kieffer, Michel (19......; professor of automatic control)
Overview
Works:  9 works in 11 publications in 2 languages and 11 library holdings 

Publication Timeline
Most widely held works by Michel Kieffer
Improving video services delivered over mobile radio networks
by Khaled Bouchireb (Book)
in English and held by 2 WorldCat member libraries worldwide
This thesis is devoted to the study of communication systems for video services delivered over mobile radio networks. The work focuses on point-to-multipoint systems and proposes several improvements. First, a system is defined that combines robust decoding with ARQ retransmissions so as to reduce the number of retransmissions while maintaining the same quality level. Unlike current systems (with or without retransmissions), this system also offers the possibility of choosing the rate/quality trade-off via a system parameter. Next, systems transmitting a scalable video to several terminals are considered. Extensions of the Go-Back-N (GBN) Automatic Repeat reQuest (ARQ) and Selective Repeat (SR) ARQ schemes are studied and compared with a new scheme. The new scheme is shown to limit the buffering required at the receiving terminal while achieving optimal performance (in terms of the amount of data successfully transmitted over a given period). Finally, it is shown that even under a rate constraint, retransmissions can be used in point-to-multipoint communications, provided the number of users stays below a certain limit. If ARQ retransmissions are introduced into the 3GPP and/or WiMAX Multicast/Broadcast systems, the system can guarantee a nominal quality to a certain number of users, which current Multicast/Broadcast systems cannot
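The baseline schemes the abstract extends have textbook throughput efficiencies over a memoryless packet-erasure channel; a minimal sketch of that comparison (not the thesis's proposed system, and with an assumed loss probability `p` and window size `W`) is:

```python
# Textbook throughput efficiency of Go-Back-N vs Selective Repeat ARQ
# over a memoryless packet-erasure channel with loss probability p and
# window of W packets in flight.

def selective_repeat_efficiency(p: float) -> float:
    """Only lost packets are resent, so a fraction 1-p of slots carry new data."""
    return 1.0 - p

def go_back_n_efficiency(p: float, window: int) -> float:
    """Each loss forces retransmission of the whole window of W packets."""
    return (1.0 - p) / (1.0 - p + window * p)

p, W = 0.1, 8
print(f"SR : {selective_repeat_efficiency(p):.3f}")   # 0.900
print(f"GBN: {go_back_n_efficiency(p, W):.3f}")       # 0.529
```

The gap between the two motivates SR-like schemes, at the cost of receiver buffering, which is exactly the trade-off the new scheme in the thesis addresses.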
WiBOX: a gateway for robust reception of video broadcast over WiMAX and indoor rebroadcast over WiFi
by Usman Ali (Book)
2 editions published in 2010 in English and held by 2 WorldCat member libraries worldwide
This PhD study investigates the tools necessary to implement a device (the WiBOX) that can robustly receive video broadcast over WiMAX and then rebroadcast it over WiFi. The WiBOX should not only give a WiFi user access to WiMAX services, but should also achieve reasonable video quality even with a very weak WiMAX signal; at the same time, for the WiFi rebroadcast, it should rely on alternative recovery techniques and avoid the delays caused by conventional retransmissions. This helps improve WiFi user quality while remaining consistent with the broadcast scenario. To achieve these objectives, one has to consider several robust tools, which are often deployed to solve problems such as packet loss, synchronization failures, high delay, and low throughput encountered while receiving video through a WiMAX/WiFi link. These robust tools can be deployed at several protocol layers; notable among them are Joint Source-Channel Decoding (JSCD) techniques deployed at the application (APL) layer, iterative decoding techniques deployed at the physical (PHY) layer, and header recovery, estimation, or synchronization tools deployed at various layers. For these robust tools to perform efficiently, a cross-layer approach enabling the exchange of useful information between the protocol layers, together with a complete analysis of the protocol stack, is required. Some of these tools have requirements that are not compliant with the Standard Protocol Stack (SPS) and require a Soft-Permeable Protocol Stack (SPPS), which allows erroneous packets carrying soft information, e.g., A Posteriori Probabilities (APPs) or likelihood ratios, to flow to the higher layers. More importantly, for performance enhancement these tools should mutually benefit and reinforce each other instead of undoing each other's advantage.
To increase throughput, both the WiMAX and WiFi communication standards use packet aggregation: several packets are aggregated at a given layer of the protocol stack into the same burst to be transmitted. One can deploy Frame Synchronization (FS) to synchronize and recover the aggregated packets; however, when transmission over a noisy channel is considered, FS can cause the loss of several error-free or partially error-free packets, which could otherwise benefit other tools, e.g., the JSCD and header recovery tools operating at higher layers of the SPPS. Rebroadcasting video over WiFi can significantly increase the packet loss rate, since retransmission is omitted; this can be overcome by packet-level Forward Error Correction (FEC) techniques. The FS and packet-level FEC decoders for the SPPS should not only allow the flow of soft information from the PHY layer but should also mutually benefit from the JSC decoders deployed at the APL layer. In this thesis, we propose several Joint Protocol-Channel Decoding (JPCD) techniques for FS and packet-level FEC decoders operating on the SPPS. In the first part of this thesis, we propose several robust FS methods for the SPPS based on the implicit redundancy present in the protocol and on the soft information from the soft decoders at the PHY layer. First, we propose a trellis-based algorithm that provides the APPs of packet boundaries: the possible successions of packets forming an aggregated packet are described by a trellis. The resulting algorithm is very efficient (optimal in some sense) but requires knowledge of the whole aggregated packet beforehand, which might not be possible in latency-constrained situations. Thus, in a second step, we propose a low-delay and reduced-complexity Sliding Trellis (ST)-based FS technique, in which each burst is divided into overlapping windows within which FS is performed.
Finally, we propose an on-the-fly three-state (3S) automaton, in which the packet length is estimated using implicit redundancy and Bayesian hypothesis testing is performed to retrieve the correct FS. These methods are illustrated for the WiMAX Medium Access Control (MAC) layer and need no supplementary framing information. In practice, these improvements increase the number of packets that reach the JSC decoders. In the second part, we propose a robust packet-level FEC decoder for the SPPS which, in addition to exploiting the introduced redundant FEC packets, uses the soft information (instead of hard bits, i.e., a bitstream of '1's and '0's) provided by the PHY layer along with the protocol redundancy, in order to provide robustness against bit errors. Although it does not impede the flow of soft information, as required for the SPPS, it needs support from the header recovery techniques at the lower layers to forward erroneous packets, and from the JSC decoders at the APL layer to detect and remove remaining errors. We have investigated standard RTP-level FEC and compared the performance of the proposed FEC decoder with alternative approaches. The proposed FS and packet-level FEC techniques reduce the number of packets dropped, increase the number of packets relayed to the video decoder operating at the APL layer, and improve the received video quality
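The trellis idea behind these FS methods can be sketched on a toy format: each packet in a burst starts with a one-byte length field, and a forward pass scores every candidate boundary by the probability that a valid succession of packets ends there. Per-byte reliabilities stand in for the PHY-layer soft information; the actual WiMAX MAC algorithms in the thesis are considerably richer (APPs, CRCs, sliding windows), so this is only an illustrative assumption-laden sketch.

```python
def boundary_scores(burst, reliability):
    """burst: list of byte values, each packet = 1 length byte + payload.
    reliability[i]: probability that byte i was received correctly.
    Returns alpha[n], proportional to the weight of valid packet
    successions placing a boundary just before position n."""
    N = len(burst)
    alpha = [0.0] * (N + 1)
    alpha[0] = 1.0                      # the burst starts on a boundary
    for n in range(N):
        if alpha[n] == 0.0:
            continue
        length = burst[n]               # declared payload length
        end = n + 1 + length            # header byte + payload
        if 0 < length and end <= N:
            # weight the transition by the header byte's reliability
            alpha[end] += alpha[n] * reliability[n]
    total = sum(alpha) or 1.0
    return [a / total for a in alpha]

# Burst = [len=2, x, x, len=1, x]; boundaries fall at positions 0, 3, 5.
burst = [2, 0x41, 0x42, 1, 0x43]
scores = boundary_scores(burst, [0.95] * len(burst))
best = [i for i, s in enumerate(scores) if s > 0]
print(best)   # [0, 3, 5]
```

Normalizing the forward weights gives boundary APPs in the spirit of the trellis-based algorithm; the sliding-trellis variant would run this recursion over overlapping windows instead of the whole burst.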
Wyner-Ziv coding in the presence of uncertain side information quality
by Francesca Bassi (Book)
1 edition published in 2010 in English and held by 1 WorldCat member library worldwide
The main objective of the work presented in this thesis is to delineate a theoretical framework for source coding with side information at the decoder that accounts for the characteristics inherent to the coding problem encountered in practical applications. Distributed source coding theory works under the assumptions that the sources are stationary and that the statistical characteristics of the signals involved are known a priori. These conditions are seldom verified in systems inspired by the distributed coding principle, like distributed video coding applications. We define a new signal model, alternative to the quadratic Gaussian setup generally assumed as the reference, to better capture the features of natural signals, characterized by correlation noise levels that vary over the transmission time. We define several coding problems, determined by different degrees of access to the side information at the encoder and at the decoder, and we discuss their ability to capture the Wyner-Ziv coding problem arising in practical applications. Finally, we focus on the practical design of Wyner-Ziv coding solutions. In particular, we define the Wyner-Ziv coding problem arising in systems where rate adaptation cannot be performed in the conventional way because no physical feedback channel is available. Standard design solutions can still be applied but are highly impractical. We therefore propose an alternative coding architecture, based on components optimized for the quadratic Gaussian model of the sources
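The quadratic Gaussian reference setup the abstract contrasts itself with has a closed-form rate-distortion function. A small sketch, assuming the usual model X = Y + N with N Gaussian of variance sigma_n^2 independent of Y (in which case Wyner-Ziv coding famously suffers no rate loss):

```python
import math

# Quadratic Gaussian Wyner-Ziv rate-distortion function: with side
# information Y at the decoder and conditional variance sigma_n^2 of X
# given Y, R_WZ(D) = max(0, 1/2 * log2(sigma_n^2 / D)) bits per sample.

def wyner_ziv_rate(cond_var: float, distortion: float) -> float:
    """Bits per sample needed to reach MSE `distortion` given side info."""
    if distortion >= cond_var:
        return 0.0
    return 0.5 * math.log2(cond_var / distortion)

sigma_n2 = 1.0
for D in (1.0, 0.25, 0.0625):
    print(f"D = {D:7.4f}  ->  R_WZ = {wyner_ziv_rate(sigma_n2, D):.2f} bit/sample")
# D = 1 -> 0.00, D = 0.25 -> 1.00, D = 0.0625 -> 2.00
```

When the correlation noise level varies over time, as in the signal model the thesis proposes, sigma_n^2 is itself uncertain, and this clean formula no longer directly applies.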
Game theory and learning for distributed wireless networks
by François Mériaux
1 edition published in 2013 in French and held by 1 WorldCat member library worldwide
In this thesis, we study wireless networks in which the mobile terminals autonomously choose their communication configurations. This decision autonomy may concern the choice of network access technology, the access point, the signal modulation, the occupied frequency bands, the transmit power, etc. Typically, these configuration choices are made to maximize performance metrics specific to each terminal. Under the assumption that terminals make their decisions rationally in order to maximize their performance, game theory naturally applies to model the interactions between the terminals' decisions. More precisely, the main objective of this thesis is to study equilibrium transmit-power control strategies that satisfy energy-efficiency considerations. The framework of stochastic games is particularly well suited to this problem and allows us, in particular, to characterize the achievable performance region over all power control strategies that lead to an equilibrium state. When the number of terminals is large, mean-field game theory is used to simplify the study of the system. This theory lets us study not the individual interactions between terminals, but the interaction of each terminal with a mean field representing the aggregate state of the other terminals. Optimal power control strategies of the mean-field game are studied. Another part of the thesis is devoted to learning equilibrium points in distributed networks. In particular, after characterizing the equilibrium positions of an access-point placement game, we show how best-response and learning dynamics converge to an equilibrium.
Finally, for a power control game, the convergence of best-response dynamics to equilibrium points is studied. In particular, a power adaptation algorithm is proposed that converges to an equilibrium with little knowledge of the network
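As a concrete illustration of best-response power dynamics converging to an equilibrium with little network knowledge, here is the classic Foschini-Miljanic SINR-target update (a standard example, not the thesis's energy-efficiency game; gains, noise levels, and targets below are made up):

```python
# Two-user best-response power control: each user adjusts its transmit
# power so that its own SINR meets a target, reacting only to the
# interference-plus-noise it measures. For feasible targets this
# iteration converges to the unique Nash equilibrium.

G = [[1.0, 0.1],     # G[i][j]: channel gain from transmitter j to receiver i
     [0.2, 1.0]]
noise = [0.01, 0.01]
target = [2.0, 2.0]  # SINR targets

def sinr(p, i):
    interference = sum(G[i][j] * p[j] for j in range(len(p)) if j != i)
    return G[i][i] * p[i] / (noise[i] + interference)

p = [1.0, 1.0]
for _ in range(50):  # simultaneous best-response sweeps
    p = [target[i]
         * (noise[i] + sum(G[i][j] * p[j] for j in range(len(p)) if j != i))
         / G[i][i]
         for i in range(len(p))]

print([round(x, 4) for x in p])
print([round(sinr(p, i), 3) for i in range(2)])   # both close to 2.0
```

Each user's update needs only its own gain, target, and measured interference, which is the "little knowledge of the network" property the abstract emphasizes.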
Joint source-channel coding/decoding of multimedia contents
by Manel Abid
1 edition published in 2012 in English and held by 1 WorldCat member library worldwide
This thesis aims at proposing and implementing efficient joint source-channel coding and decoding schemes in order to enhance the robustness of multimedia contents transmitted over unreliable networks. First, we propose to identify and exploit the residual redundancy left by wavelet video coders in the compressed bit streams. An efficient joint source-channel decoding scheme is proposed to detect and correct some of the transmission errors occurring during a noisy transmission. This technique is further applied to multiple-description video streams transmitted over a mixed architecture consisting of a lossy wired part and a noisy wireless part. Second, we propose to use the structured redundancy deliberately introduced by multirate coding systems, such as oversampled filter banks, to perform a robust estimation of input signals transmitted over noisy channels. Two efficient estimation approaches are proposed and compared. The first exploits the linear dependencies between the output variables, together with the bounded quantization noise, to perform a consistent estimation of the source outcome. The second uses the belief propagation algorithm to estimate the input signal via a message-passing procedure along the graph representing the linear dependencies between the variables. These schemes are then applied to estimate the input of an oversampled filter bank, and their performances are compared
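The idea behind the first, consistency-based approach can be shown on a scalar toy (the thesis works with oversampled filter banks, i.e. vector-valued frames; the gains and step size below are illustrative assumptions):

```python
# Consistent estimation under bounded quantization noise: a scalar x is
# observed through redundant linear measurements y_i = a_i * x, each
# uniformly quantized with step DELTA. Every quantized value confines
# a_i * x to a cell of width DELTA, so x must lie in the intersection of
# the back-projected intervals; any point of that set is "consistent".

DELTA = 0.5

def quantize(v):
    return DELTA * round(v / DELTA)

def consistent_estimate(a, q):
    lo, hi = -float("inf"), float("inf")
    for ai, qi in zip(a, q):
        xlo = (qi - DELTA / 2) / ai
        xhi = (qi + DELTA / 2) / ai
        if xlo > xhi:                    # negative gain flips the interval
            xlo, xhi = xhi, xlo
        lo, hi = max(lo, xlo), min(hi, xhi)
    return (lo + hi) / 2                 # midpoint of the consistency set

x_true = 1.23
a = [1.0, 1.7, -2.3]                     # redundant "frame" coefficients
q = [quantize(ai * x_true) for ai in a]
x_hat = consistent_estimate(a, q)
print(round(x_hat, 4), "error:", round(abs(x_hat - x_true), 4))
```

The redundancy shrinks the consistency set well below a single quantization cell, which is why the estimate beats decoding any one measurement alone.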
Toward a realistic joint source-channel decoding solution for multimedia contents
by Cédric Marin (Book)
1 edition published in 2009 in French and held by 1 WorldCat member library worldwide
Analytical characterization and optimization of joint source-channel codes
by Amadou Tidiane Diallo
1 edition published in 2012 in French and held by 1 WorldCat member library worldwide
Joint source-channel codes are codes that simultaneously provide data compression and protection of the generated bitstream against transmission errors. Like most source codes, these codes are nonlinear. Their potential is to offer good compression and error-correction performance at reduced code lengths. The performance of a source code is measured by the difference between the entropy of the source to be compressed and the average number of bits needed to encode a symbol of this source. The performance of a channel code is measured by the minimum distance between codewords or sequences of codewords, and more generally by the distance spectrum. Classical codes come with tools to evaluate these performance criteria effectively, and the design of good source codes or good channel codes has been largely explored since the work of Shannon. Similar tools for joint source-channel codes, whether for evaluating performance or for designing good codes, remained to be developed, although some proposals had been made in the past. This thesis focuses on the family of joint source-channel codes that can be described by automata with a finite number of states. Error-correcting quasi-arithmetic codes and error-correcting variable-length codes belong to this family. The way to construct an automaton for a given code is recalled. From an automaton, one can construct a product graph describing all pairs of paths diverging from some state and converging to the same or another state. We have shown that, using Dijkstra's algorithm, the free distance of a joint code can be evaluated with polynomial complexity. For error-correcting variable-length codes, we proposed additional bounds that are easy to evaluate; these bounds extend the Plotkin and Heller bounds to variable-length codes.
Bounds can also be deduced from the product graph associated with a code in which only part of the codeword bits are specified. These tools for accurately assessing or bounding the free distance of a joint code allow the design of codes with good distance properties for a given redundancy, or with minimal redundancy for a given free distance. Our approach organizes the search for good joint source-channel codes with trees. The root of the tree corresponds to a code in which no bit is specified, the leaves to codes in which all bits are specified, and the intermediate nodes to partially specified codes. When moving from the root to the leaves of the tree, the upper bound on the free distance decreases while the lower bound grows. This allows the application of a branch-and-prune algorithm to find the code with the largest free distance without exploring the whole tree of codes. The proposed approach has allowed the construction of joint codes for the letters of the alphabet. Compared to an equivalent tandem scheme (a source code followed by a convolutional code), the codes obtained have comparable performance (coding rate, free distance) while being less complex in terms of the number of decoder states. Several extensions of this work are in progress: 1) synthesizing error-correcting variable-length codes formalized as a mixed-integer linear programming problem; 2) exploring the search space of error-correcting variable-length codes using an algorithm such as A*
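A much-simplified sketch of the product-graph/Dijkstra idea for variable-length codes: a state records the bits by which one of two parses "overhangs" the other; appending a codeword to the lagging parse consumes the overlap and costs its Hamming distance; reaching an empty overhang again means two distinct codeword sequences of equal length have reconverged. (The thesis works on general finite-state automata; this toy handles plain variable-length codes and may not terminate on pathological, non-uniquely-decodable codes.)

```python
import heapq

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def free_distance(code):
    """Minimum Hamming distance between two distinct codeword sequences
    of equal total bit length, found by Dijkstra over overhang states."""
    heap = []
    # Diverge: start the two parses with two different codewords.
    for i, u in enumerate(code):
        for j, v in enumerate(code):
            if i == j:
                continue
            k = min(len(u), len(v))
            heapq.heappush(heap, (hamming(u[:k], v[:k]), u[k:] or v[k:]))
    best = {}
    while heap:
        cost, over = heapq.heappop(heap)
        if over == "":
            return cost          # parses reconverged at minimal cost
        if best.get(over, float("inf")) <= cost:
            continue
        best[over] = cost
        for w in code:           # extend the lagging parse by one codeword
            k = min(len(over), len(w))
            heapq.heappush(
                heap, (cost + hamming(over[:k], w[:k]), over[k:] or w[k:]))
    return float("inf")

print(free_distance(["00", "11"]))        # fixed-length code: 2
print(free_distance(["0", "10", "11"]))   # "10" vs "11" differ in 1 bit: 1
```

Because the min-heap always pops the cheapest open state first, the first empty overhang popped gives the exact free distance, mirroring the polynomial-complexity claim in the abstract.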
Codage de sources avec information adjacente et connaissance incertaine des corrélations
by
Elsa Dupraz(
)
1 edition published in 2013 in French and held by 1 WorldCat member library worldwide
In this thesis, we considered the problem of source coding with side information available at the decoder only, and more precisely the case where the joint distribution between the source and the side information is not well known. In this context, for a lossless coding problem, we first carried out a performance analysis using information-theoretic tools. We then proposed a practical coding scheme that remains efficient despite the lack of knowledge of the joint probability distribution. This scheme relies on non-binary LDPC codes and on an Expectation-Maximization algorithm. Its difficulty is that the non-binary LDPC codes it uses must perform well, that is, they must be built from degree distributions allowing a rate close to the theoretical limits to be reached. We therefore proposed a method for optimizing the degree distributions of the LDPC codes. Finally, we considered a lossy coding case, assuming that the correlation model between the source and the side information is described by a hidden Markov model with Gaussian emissions. For this model, we again carried out a performance analysis and then proposed a practical coding scheme, based on non-binary LDPC codes and on an MMSE reconstruction; both components exploit the memory structure of the hidden Markov model
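The decoder-only side-information setup described above can be sketched with a toy syndrome-based (Slepian-Wolf) example: the encoder sends only the syndrome of the source word, and the decoder picks, among all words with that syndrome, the one closest to its side information. The thesis uses non-binary LDPC codes and Expectation-Maximization; here a tiny binary parity-check matrix and a brute-force decoder stand in for them, and `H`, `x`, and `y` are made-up illustrative values.

```python
from itertools import product

# Hypothetical 2x3 binary parity-check matrix (illustrative only).
H = [[1, 1, 0], [0, 1, 1]]

def syndrome(H, v):
    """Syndrome H·v over GF(2)."""
    return tuple(sum(h * b for h, b in zip(row, v)) % 2 for row in H)

def sw_decode(H, s, y):
    """Among all words whose syndrome is s, return the one closest
    (in Hamming distance) to the side information y."""
    best = None
    for v in product([0, 1], repeat=len(y)):
        if syndrome(H, v) != s:
            continue
        d = sum(a != b for a, b in zip(v, y))
        if best is None or d < best[0]:
            best = (d, list(v))
    return best[1]

x = [1, 0, 1]      # source word (never transmitted in full)
y = [1, 1, 1]      # side information: x corrupted in one position
assert sw_decode(H, syndrome(H, x), y) == x
```

The encoder thus transmits only 2 syndrome bits instead of 3 source bits; the correlation with y supplies the missing information, which is the essence of the lossless setting analyzed in the thesis.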
Joint Source-Network Coding & Decoding
by
Lana Iwaza(
)
1 edition published in 2013 in English and held by 1 WorldCat member library worldwide
While network data transmission was traditionally accomplished via routing, network coding (NC) broke this rule by allowing network nodes to perform linear combinations of the incoming data packets. Network operations are performed in a given Galois field of fixed size q, and decoding only involves Gaussian elimination on the received network-coded packets. However, in practical wireless environments, NC can suffer from transmission errors caused by noise, fading, or interference. This drawback is particularly problematic for real-time applications, such as multimedia content delivery, where timing constraints may lead to the reception of an insufficient number of packets and consequently to difficulties in decoding the transmitted sources. At best, some packets can be recovered, while in the worst case the receiver is unable to recover any of the transmitted packets. In this thesis, we propose joint source-network coding and decoding schemes whose purpose is to provide an approximate reconstruction of the source in situations where perfect decoding is not possible. The main motivation is that source redundancy can be exploited at the decoder to estimate the transmitted packets, even when some of them are missing. The redundancy can be either natural, i.e., already present, or artificial, i.e., externally introduced. Regarding artificial redundancy, we choose multiple description coding (MDC) as a way of introducing structured correlation among uncorrelated packets. By combining MDC and NC, we aim to ensure a reconstruction quality that improves gradually with the number of received network-coded packets. We consider two different approaches for generating descriptions. The first generates multiple descriptions via a real-valued frame expansion applied at the source before quantization; data recovery is then achieved by solving a mixed-integer linear problem.
The second uses a correlating transform in some Galois field to generate the descriptions, and decoding involves a simple Gaussian elimination. Such schemes are particularly interesting for multimedia content delivery, such as video streaming, where quality increases with the number of received descriptions. Another application of such schemes is multicasting or broadcasting data towards mobile terminals experiencing different channel conditions. The channel is modeled as a binary symmetric channel (BSC), and we study its effect on the decoding quality of both proposed schemes. A performance comparison with a traditional NC scheme is also provided. Concerning natural redundancy, a typical scenario is a wireless sensor network in which geographically distributed sources capture spatially correlated measurements. We propose a scheme that exploits this spatial redundancy and provides an estimate of the transmitted measurement samples by solving an integer quadratic problem. The obtained reconstruction quality is compared with that of a classical NC scheme
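The Gaussian-elimination decoding step mentioned above can be sketched over GF(2), where linear combination is bitwise XOR (the thesis works in a general Galois field GF(q); the field choice, packet contents, and coefficient vectors below are illustrative assumptions).

```python
def nc_decode(coeffs, coded, n):
    """Recover n source packets from network-coded packets by Gaussian
    elimination over GF(2). coeffs[i] is the combination vector of the
    i-th coded packet; coded[i] is its payload (a list of bits)."""
    # Augmented matrix: [coefficients | payload] for each coded packet.
    rows = [c[:] + p[:] for c, p in zip(coeffs, coded)]
    pivot_row = 0
    for col in range(n):
        # Find a row with a 1 in this column and swap it into place.
        for r in range(pivot_row, len(rows)):
            if rows[r][col]:
                rows[pivot_row], rows[r] = rows[r], rows[pivot_row]
                break
        else:
            return None  # rank deficient: too few innovative packets
        # XOR the pivot row into every other row with a 1 in this column.
        for r in range(len(rows)):
            if r != pivot_row and rows[r][col]:
                rows[r] = [a ^ b for a, b in zip(rows[r], rows[pivot_row])]
        pivot_row += 1
    return [rows[i][n:] for i in range(n)]

# Two hypothetical source packets and two coded packets:
# the first carries p1 alone, the second the XOR p1 + p2.
p1, p2 = [1, 0, 1, 1], [0, 1, 1, 0]
coeffs = [[1, 0], [1, 1]]
coded = [p1, [a ^ b for a, b in zip(p1, p2)]]
assert nc_decode(coeffs, coded, 2) == [p1, p2]
```

When fewer than n innovative (linearly independent) packets arrive, the elimination fails and the function returns None; the joint schemes proposed in the thesis are aimed precisely at this regime, where a classical NC decoder recovers nothing.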
Related Identities
 Télécom ParisTech
 Laboratoire des signaux et systèmes (L2S) (Gif-sur-Yvette, Essonne)
 Ecole doctorale Sciences et Technologies de l'Information, des Télécommunications et des Systèmes (Orsay, Essonne)
 Université de Paris-Sud
 Duhamel, Pierre (1953....; professeur de physique)
 Bouchireb, Khaled (1982....). Author
 Université de Paris-Sud. Faculté des Sciences d'Orsay (Essonne)
 Ali, Usman (1982....). Author
 Marin, Cédric (1980-....). Author
 Laboratoire traitement et communication de l'information (Paris)