Kieffer, Michel
Overview
Works:  22 works in 29 publications in 2 languages and 171 library holdings 

Roles: Opponent, Author, Thesis advisor
Classifications:  QA297.75, 511.42 
Publication Timeline
Most widely held works by Michel Kieffer
Joint source-channel decoding : a cross-layer perspective with application in video broadcasting over mobile and wireless networks by Pierre Duhamel (Book)
4 editions published in 2010 in English and held by 50 WorldCat member libraries worldwide
Treats joint source and channel decoding in an integrated way. Gives a clear description of the problems in the field together with the mathematical tools for their solution. Contains many detailed examples useful for practical applications of the theory to video broadcasting over mobile and wireless networks. Traditionally, cross-layer and joint source-channel coding were seen as incompatible with classically structured networks, but recent advances in theory have changed this situation. Joint source-channel decoding is now seen as a viable alternative to separate decoding of source and channel codes, provided the protocol layers are taken into account. A joint source/protocol/channel approach is thus addressed in this book: all levels of the protocol stack are considered, showing how the information in each layer influences the others. This book provides the tools to show how cross-layer and joint source-channel coding and decoding are now compatible with present-day mobile and wireless networks, with a particular application to the key area of video transmission to mobiles. Typical applications are broadcasting, or point-to-point delivery of multimedia contents, which are very timely in the context of the current development of mobile services such as audio (MPEG-4 AAC) or video (H.263, H.264) transmission using recent wireless transmission standards (DVB-H, DVB-SH, WiMAX, LTE). This cross-disciplinary book is ideal for graduate students, researchers, and more generally professionals working either in signal processing for communications or in networking applications who are interested in reliable multimedia transmission. It is also of interest to people involved in cross-layer optimization of mobile networks: its content may provide them with other points of view on their optimization problem, enlarging the set of tools they could use.
Pierre Duhamel is director of research at CNRS/LSS and has previously held research positions at Thomson-CSF, CNET, and ENST, where he was head of the Signal and Image Processing Department. He has served as chairman of the DSP committee and associate editor of the IEEE Transactions on Signal Processing and IEEE Signal Processing Letters, as well as acting as co-chair of the MMSP and ICASSP conferences. He was awarded the Grand Prix France Telecom by the French Academy of Sciences in 2000. He is co-author of more than 80 papers in international journals, 250 conference proceedings, and 28 patents. Michel Kieffer is an assistant professor in signal processing for communications at the Université Paris-Sud and a researcher at the Laboratoire des Signaux et Systèmes, Gif-sur-Yvette, France. His research interests are in joint source-channel coding and decoding techniques for the reliable transmission of multimedia contents. He serves as associate editor of Signal Processing (Elsevier). He is co-author of more than 90 contributions to journals, conference proceedings, and book chapters.
Estimation ensembliste par analyse par intervalles. Application à la localisation d'un véhicule [Set-membership estimation by interval analysis. Application to the localization of a vehicle] by Michel Kieffer (Book)
2 editions published in 1999 in French and held by 3 WorldCat member libraries worldwide
In this work, we develop interval-analysis tools for automatic control. We focus in particular on parameter identification and state estimation for nonlinear models. For identification, Hansen's global optimization algorithm provides an enclosure of all parameter vectors minimizing a cost function involving the quantities measured on a real device to be modeled and their counterparts predicted by its model. We show that this can reveal possible identifiability problems without any prior study. In the bounded-error approach, even when outliers are present, inner and outer enclosures of the sets of admissible parameter vectors are provided by set-inversion algorithms based on interval analysis. When the error bounds are not known, an original method is proposed that evaluates the smallest error bound yielding a non-empty set of admissible parameter vectors. A new recursive guaranteed state-estimation algorithm is presented. With a structure analogous to that of the Kalman filter, but in a bounded-error context, it provides at every instant a set containing the state values consistent with the available information. This algorithm is built from a set-inversion algorithm and an original direct-image evaluation algorithm. Both exploit the notion of subpavings described by binary trees, which allows an approximate description of compact sets. These techniques are applied to the localization and then the tracking of a robot inside a mapped room. Outliers, as well as the ambiguities caused by the symmetries of the room in which the robot is located, are handled without difficulty. Disjoint sets of possible configurations can be considered, and their processing raises no problem. Moreover, on the examples treated, tracking is performed in real time even in the presence of outliers.
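The set-inversion idea described above can be illustrated with a minimal bisection sketch in the spirit of SIVIA (Set Inversion Via Interval Analysis): bisect a parameter box and keep the parts whose interval image fits inside the measurement bounds. This is an illustration of the general technique, not the thesis code; the model f(p) = p² and the bounds are invented for the example.

```python
# Minimal set inversion by interval bisection: find {p : f(p) in [y_lo, y_hi]}.

def f_range(lo: float, hi: float):
    """Interval image of f(p) = p**2 on [lo, hi], assuming 0 <= lo."""
    return lo * lo, hi * hi


def sivia(lo, hi, y_lo, y_hi, eps=1e-3, inside=None):
    """Return sub-intervals proven to lie inside the admissible set."""
    if inside is None:
        inside = []
    img_lo, img_hi = f_range(lo, hi)
    if img_lo >= y_lo and img_hi <= y_hi:      # box fully admissible
        inside.append((lo, hi))
    elif img_hi < y_lo or img_lo > y_hi:       # box fully rejected
        pass
    elif hi - lo > eps:                        # undecided: bisect
        mid = (lo + hi) / 2.0
        sivia(lo, mid, y_lo, y_hi, eps, inside)
        sivia(mid, hi, y_lo, y_hi, eps, inside)
    return inside


# p**2 in [1, 4] for p in [0, 4]  ==>  admissible set is [1, 2].
boxes = sivia(0.0, 4.0, 1.0, 4.0)
print(min(b[0] for b in boxes), max(b[1] for b in boxes))  # -> 1.0 2.0
```

Undecided boxes smaller than `eps` are simply dropped here; keeping them as an outer enclosure, as the abstract's inner/outer enclosures do, is the natural extension.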
Amélioration des services vidéo fournis à travers les réseaux radio mobiles [Improvement of video services delivered over mobile radio networks] by Khaled Bouchireb (Book)
in English and held by 2 WorldCat member libraries worldwide
In this thesis, video communication systems are studied for application to video services provided over wireless mobile networks. This work focuses on point-to-multipoint communications and proposes several enhancements to current systems. First, a scheme combining robust decoding with retransmissions is defined so that the number of retransmissions is reduced and the quality of the received video can be controlled. As opposed to current retransmission-less and retransmission-based schemes, this scheme also offers the possibility to trade throughput for quality and vice versa. Then, the transmission of a two-level scalable video sequence towards several clients is considered. Schemes using the basic Go-back-N (GBN) and Selective Repeat (SR) Automatic Repeat reQuest (ARQ) techniques are studied. A new scheme is also proposed and studied; it reduces the buffering requirement at the receiver end while keeping the performance optimal (in terms of the amount of data successfully transmitted within a given period of time). The different schemes are shown to be applicable to 2G, 3G, and WiMAX systems. Finally, we prove that retransmissions can be used in point-to-multipoint communications up to a given limit on the number of receivers (contrary to current wireless systems, where ARQ is used only in point-to-point communications). If retransmissions are introduced in the current Multicast/Broadcast services (supported by 3GPP and mobile WiMAX), the system will guarantee the nominal quality to a certain number of receivers, whereas the current Multicast/Broadcast services do not guarantee the nominal quality to any receiver.
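The difference between the two ARQ variants studied above can be seen in a rough toy model: with Go-back-N a single loss forces retransmission of the whole in-flight window tail, while Selective Repeat resends only the lost packet. This is not the thesis's schemes; the loss rate, window size, and the assumption that every retransmission succeeds are invented for illustration.

```python
# Toy transmission-count comparison: Go-back-N vs Selective Repeat ARQ.
import random

random.seed(1)
N_PKT, WINDOW, LOSS = 100, 4, 0.1
lost = [random.random() < LOSS for _ in range(N_PKT)]  # first-try losses

# Selective Repeat: one extra transmission per lost packet
# (assuming every retransmission succeeds).
sr_tx = N_PKT + sum(lost)

# Go-back-N: a loss at position i also forces retransmission of the
# in-flight window behind it.
gbn_tx = N_PKT
for i, was_lost in enumerate(lost):
    if was_lost:
        gbn_tx += min(WINDOW, N_PKT - i)

print("SR transmissions:", sr_tx, "GBN transmissions:", gbn_tx)
```

SR's lower transmission count comes at the price of per-packet receiver buffering, which is exactly the trade-off the proposed scheme in the abstract targets.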
WiBOX : une passerelle pour une réception robuste de vidéo diffusée via WiMAX et une rediffusion indoor via WiFi [WiBOX: a gateway for robust reception of video broadcast over WiMAX and indoor rebroadcast over WiFi] by Usman Ali (Book)
2 editions published in 2010 in English and held by 2 WorldCat member libraries worldwide
This PhD study investigates the tools necessary to implement a device (the WiBOX) that can robustly receive video broadcast over WiMAX and then rebroadcast it over WiFi. The WiBOX should not only provide WiMAX service access to a WiFi user; it should also achieve reasonable video quality even with a very weak WiMAX signal, and at the same time, for the WiFi rebroadcast, it should use alternative recovery techniques and avoid the delays caused by conventional retransmissions. This helps to improve WiFi user quality and to remain consistent with the broadcast scenario. To achieve these objectives, one has to consider several robust tools, often deployed to solve problems such as packet loss, synchronization failures, high delay, and limited throughput encountered while receiving video over a WiMAX/WiFi link. These robust tools can be deployed at several protocol layers; notable among them are Joint Source-Channel Decoding (JSCD) techniques deployed at the application (APL) layer, iterative decoding techniques deployed at the physical (PHY) layer, and header recovery, estimation, or synchronization tools deployed at various layers. For these robust tools to perform efficiently, a cross-layer approach enabling the exchange of useful information between the protocol layers, together with a complete analysis of the protocol stack, is required. Some of these tools have requirements that are not compliant with the Standard Protocol Stack (SPS) and require a Soft-Permeable Protocol Stack (SPPS), which allows erroneous packets, containing soft information such as a posteriori probabilities (APPs) or likelihood ratios, to flow to the higher layers. More importantly, for performance enhancement these tools should mutually benefit and reinforce each other instead of undoing each other's advantage.
To increase throughput, both the WiMAX and WiFi communication standards use packet aggregation: several packets are aggregated at a given layer of the protocol stack into the same burst to be transmitted. One can deploy Frame Synchronization (FS) to synchronize and recover the aggregated packets; however, when transmission over a noisy channel is considered, FS can cause the loss of several error-free or partially error-free packets, which could otherwise be beneficial for other tools, e.g., JSCD and header recovery tools, functioning at higher layers of the SPPS. Rebroadcasting video over WiFi can significantly increase the packet loss rate because retransmission is omitted; this can be overcome by packet-level Forward Error Correction (FEC) techniques. The FS and packet-level FEC decoders for the SPPS should not only allow the flow of soft information from the PHY layer but also mutually benefit from the JSC decoders deployed at the APL layer. In this thesis, we propose several Joint Protocol-Channel Decoding (JPCD) techniques for FS and packet-level FEC decoders operating on the SPPS. In the first part of this thesis, we propose several robust FS methods for the SPPS based on the implicit redundancy present in the protocol and the soft information from the soft decoders at the PHY layer. First, we propose a trellis-based algorithm that provides the APPs of packet boundaries: the possible successions of packets forming an aggregated packet are described by a trellis. The resulting algorithm is very efficient (optimal in some sense) but requires knowledge of the whole aggregated packet beforehand, which might not be possible in latency-constrained situations. Thus, in a second step, we propose a low-delay, reduced-complexity Sliding Trellis (ST)-based FS technique, in which each burst is divided into overlapping windows within which FS is performed.
Finally, we propose an on-the-fly three-state (3S) automaton, in which packet length is estimated using implicit redundancy and Bayesian hypothesis testing is performed to retrieve the correct FS. These methods are illustrated for the WiMAX Medium Access Control (MAC) layer and do not need any supplementary framing information. Practically, these improvements increase the number of packets that can reach the JSC decoders. In the second part, we propose a robust packet-level FEC decoder for the SPPS which, in addition to using the introduced redundant FEC packets, uses the soft information (instead of hard bits, i.e., a bitstream of '1's and '0's) provided by the PHY layer along with the protocol redundancy, in order to provide robustness against bit errors. Although it does not impede the flow of soft information as required for the SPPS, it needs support from the header recovery techniques at the lower layers to forward erroneous packets, and from the JSC decoders at the APL layer to detect and remove remaining errors. We have investigated the standard RTP-level FEC and compared the performance of the proposed FEC decoder with alternative approaches. The proposed FS and packet-level FEC techniques reduce the number of packets dropped, increase the number of packets relayed to the video decoder at the APL layer, and improve the received video quality.
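Packet-level FEC of the kind discussed above can be illustrated with the simplest possible code: one XOR parity packet per block, which lets the receiver rebuild any single lost packet without a retransmission. This is a hedged sketch of the general idea, not the RTP-level FEC scheme the thesis evaluates; packet contents are invented.

```python
# Toy packet-level FEC: an XOR parity packet recovers one lost packet.

def xor_packets(pkts):
    """Bytewise XOR of equal-length packets."""
    out = bytearray(len(pkts[0]))
    for p in pkts:
        for i, b in enumerate(p):
            out[i] ^= b
    return bytes(out)


block = [b"pkt0", b"pkt1", b"pkt2"]
parity = xor_packets(block)           # sent alongside the data packets

# Simulate losing packet 1, then rebuild it from the survivors + parity:
# pkt0 ^ pkt2 ^ (pkt0 ^ pkt1 ^ pkt2) == pkt1.
received = [block[0], None, block[2]]
recovered = xor_packets([p for p in received if p is not None] + [parity])
print(recovered)  # -> b"pkt1"
```

Real deployments use stronger erasure codes (e.g., Reed-Solomon) to survive multiple losses per block, but the recovery principle is the same.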
Mémoire du Rhin : la garde du Rhin [Memory of the Rhine: the guard of the Rhine] by Exposition, Strasbourg, 1993 (Book)
1 edition published in 1993 in French and held by 1 WorldCat member library worldwide
Joint Source-Network Coding & Decoding by Lana Iwaza
1 edition published in 2013 in English and held by 1 WorldCat member library worldwide
While network data transmission was traditionally accomplished via routing, network coding (NC) broke this rule by allowing network nodes to perform linear combinations of the incoming data packets. Network operations are performed in a specific Galois field of fixed size q, and decoding only involves Gaussian elimination on the received network-coded packets. However, in practical wireless environments, NC may be susceptible to transmission errors caused by noise, fading, or interference. This drawback is quite problematic for real-time applications, such as multimedia-content delivery, where timing constraints may lead to the reception of an insufficient number of packets and consequently to difficulties in decoding the transmitted sources. At best, some packets can be recovered, while in the worst case, the receiver is unable to recover any of the transmitted packets. In this thesis, we propose joint source-network coding and decoding schemes with the aim of providing an approximate reconstruction of the source in situations where perfect decoding is not possible. The main motivation comes from the fact that source redundancy can be exploited at the decoder in order to estimate the transmitted packets, even when some of them are missing. The redundancy can be either natural, i.e., already existing, or artificial, i.e., externally introduced. Regarding artificial redundancy, we choose multiple description coding (MDC) as a way of introducing structured correlation among uncorrelated packets. By combining MDC and NC, we aim to ensure a reconstruction quality that improves gradually with the number of received network-coded packets. We consider two different approaches for generating descriptions. The first technique generates multiple descriptions via a real-valued frame expansion applied at the source before quantization. Data recovery is then achieved by solving a mixed integer-linear program.
The second technique uses a correlating transform in some Galois field to generate the descriptions, and decoding involves a simple Gaussian elimination. Such schemes are particularly interesting for multimedia-content delivery, such as video streaming, where quality increases with the number of received descriptions. Another application of such schemes would be multicasting or broadcasting data towards mobile terminals experiencing different channel conditions. The channel is modeled as a binary symmetric channel (BSC), and we study the effect on the decoding quality for both proposed schemes. A performance comparison with a traditional NC scheme is also provided. Concerning natural redundancy, a typical scenario would be a wireless sensor network in which geographically distributed sources capture spatially correlated measurements. We propose a scheme that exploits this spatial redundancy and provides an estimate of the transmitted measurement samples by solving an integer quadratic program. The obtained reconstruction quality is compared with that provided by a classical NC scheme.
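The decoding operation the abstract refers to, Gaussian elimination on network-coded packets, can be sketched over GF(2), where a linear combination of packets is a bytewise XOR. This illustrates classical NC decoding only, not the thesis's joint source-network schemes; the packets and coefficient vectors are invented.

```python
# Toy network-coding decoder over GF(2): Gauss-Jordan elimination on
# the coding coefficients, mirroring every row operation on the payloads.

def gauss_decode(coeffs, payloads):
    """Solve C * X = P over GF(2); coeffs are lists of 0/1 ints."""
    n = len(coeffs[0])
    rows = [(c[:], p) for c, p in zip(coeffs, payloads)]
    for col in range(n):
        # find a pivot row with a 1 in this column and swap it up
        piv = next(i for i in range(col, len(rows)) if rows[i][0][col])
        rows[col], rows[piv] = rows[piv], rows[col]
        # cancel this column in every other row (XOR = add in GF(2))
        for i in range(len(rows)):
            if i != col and rows[i][0][col]:
                c_i, p_i = rows[i]
                c_p, p_p = rows[col]
                rows[i] = ([a ^ b for a, b in zip(c_i, c_p)],
                           bytes(x ^ y for x, y in zip(p_i, p_p)))
    return [p for _, p in rows[:n]]


# Three source packets; the network delivers three independent combinations.
x = [b"aa", b"bb", b"cc"]

def mix(*idx):
    out = bytearray(2)
    for i in idx:
        for j, b in enumerate(x[i]):
            out[j] ^= b
    return bytes(out)

coeffs = [[1, 1, 0], [0, 1, 1], [1, 1, 1]]
payloads = [mix(0, 1), mix(1, 2), mix(0, 1, 2)]
print(gauss_decode(coeffs, payloads))  # -> [b'aa', b'bb', b'cc']
```

With fewer than n independent combinations this elimination stalls, which is precisely the situation where the thesis's joint schemes exploit source redundancy to estimate the missing packets.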
Codage/décodage source-canal conjoint des contenus multimédia by
Manel Abid(
)
1 edition published in 2012 in English and held by 1 WorldCat member library worldwide
In this thesis, we focus on joint source-channel coding and decoding schemes for multimedia content. We show how the redundancy left by the video encoder can be exploited to achieve robust decoding of sequences transmitted over a noisy mobile radio link. Thanks to the proposed joint decoding scheme, the number of corrupted packets is significantly reduced at the cost of a very slight increase in bit rate. We then apply this robust decoding scheme to multiple-description transmission over a mixed Internet and mobile-radio architecture. Joint source-channel decoding of the received packets corrects transmission errors and thus increases the number of packets the decoder can use to compensate for lost ones. The efficiency of this scheme is studied against a classical scheme based on hard channel decisions and on an error-correcting code introducing the same level of redundancy. A second part of the thesis studies joint source-channel coding schemes based on a redundant transform. Two estimation schemes are proposed. In the first, we exploit the introduced structured redundancy and the bounded nature of the quantization noise to build a consistent estimator correcting transmission errors. In the second, we apply the belief-propagation algorithm to evaluate the a posteriori distributions of the input-signal components from the noisy channel outputs. Both schemes are then applied to estimate the input of an oversampled filter bank
Applied interval analysis by
Luc Jaulin(
Book
)
2 editions published in 2001 in English and held by 1 WorldCat member library worldwide
This book is about guaranteed numerical methods based on interval analysis for approximating sets, and about the application of these methods to vast classes of engineering problems. Guaranteed means here that inner and outer approximations of the sets of interest are obtained, which can be made as precise as desired at the cost of increased computational effort. It thus becomes possible to achieve tasks still thought by many to be out of the reach of numerical methods, such as finding all solutions of sets of nonlinear equations and inequalities, or all global optimizers of possibly multimodal criteria. The basic methodology is explained as simply as possible, in a concrete and readily applicable way, with a large number of figures and illustrative examples. Some of the techniques reported appear in book form for the first time. The ability of the approach advocated here to solve non-trivial engineering problems is demonstrated through examples drawn from the fields of parameter and state estimation, robust control and robotics. Enough detail is provided to allow readers with other applications in mind to grasp their significance. An in-depth treatment of implementation issues facilitates the understanding and use of freely available software that makes interval computation about as easy as computation with floating-point numbers. The reader is even given the basic information needed to build his or her own C++ interval library
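The guaranteed-enclosure idea at the heart of interval analysis can be illustrated with a toy interval type. This is a minimal sketch only: a real library, such as the C++ one discussed in the book, would additionally control floating-point rounding modes so that outward rounding keeps the enclosure rigorous.

```python
# A toy interval type: every operation returns an interval that is
# certain to contain the exact result, whatever the true values inside
# the operand intervals are.

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # Sum of intervals: add the endpoints.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # The product is bounded by the extreme endpoint products.
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x = Interval(-1, 2)
y = Interval(3, 4)
print(x + y)   # → [2, 6]
print(x * y)   # → [-4, 8]
```

Composing such operations over a whole function yields an outer approximation of its range, the basic building block of the set-approximation algorithms the book develops.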
La synchronisation robuste en temps et en fréquence dans un système de communication sans fil de type 802.11a. by
Cong Luong Nguyen(
)
1 edition published in 2014 in English and held by 1 WorldCat member library worldwide
The time and frequency synchronization problem in the IEEE 802.11a OFDM (Orthogonal Frequency Division Multiplexing) wireless communication system is investigated. Although solutions to compensate for time and frequency offsets had already been proposed, we developed a new approach, conforming to the IEEE 802.11a standard, to enhance frame synchronization between mobile stations. This approach exploits not only the reference information specified by the standard, such as training sequences, but also additional sources of information available at the physical layer that are known to both the transmitter and the receiver. We showed that parts of the SIGNAL field, considered as a reference sequence of the physical frame, are either known or predictable from the RtS (Request to Send) and CtS (Clear to Send) control frames when the CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance) mechanism is triggered jointly with channel-dependent bit-rate adaptation algorithms. Moreover, the received RtS control frame allows the receiver to estimate the channel before the synchronization stage. Exploiting the knowledge of the SIGNAL field and of the channel information, we developed multi-stage joint time/frequency synchronization and channel estimation algorithms conforming to the standard. Simulation results showed strongly improved performance in terms of synchronization failure probability in comparison with existing algorithms
Codage de sources avec information adjacente et connaissance incertaine des corrélations by
Elsa Dupraz(
)
1 edition published in 2013 in French and held by 1 WorldCat member library worldwide
In this thesis, we considered the problem of source coding with side information available at the decoder only. More specifically, we considered the case where the joint distribution between the source and the side information is not perfectly known. In this context, we carried out a performance analysis of the lossless source coding scheme using information-theoretic tools. We then proposed a practical coding scheme able to deal with the uncertainty on the joint probability distribution. This coding scheme is based on non-binary LDPC codes and on an Expectation-Maximization algorithm. For this problem, a key issue is the design of efficient LDPC codes; in particular, good code degree distributions have to be selected. Consequently, we proposed an optimization method for the selection of good degree distributions. Finally, we considered a lossy coding scheme. In this case, we assumed that the correlation channel between the source and the side information is described by a Hidden Markov Model with Gaussian emissions. For this model, we again performed a performance analysis and proposed a practical coding scheme. The proposed scheme is based on non-binary LDPC codes and on MMSE reconstruction using an MCMC method. In our solution, these two components are able to exploit the memory induced by the Hidden Markov Model
Codage de Wyner-Ziv en présence de qualité incertaine de l'information adjacente by
Francesca Bassi(
Book
)
1 edition published in 2010 in English and held by 1 WorldCat member library worldwide
The main objective of this thesis is to propose a theoretical framework for describing the source coding problems with side information at the decoder that arise in practical applications. Distributed source coding theory largely rests on the assumption that the sources are stationary and that the statistical characteristics of the signals are known a priori. These conditions, however, do not hold in practical applications such as a distributed video coding scheme. We define a signal model that is an alternative to the quadratic Gaussian model usually taken as a reference. This model captures the characteristics of natural signals, whose correlation-noise levels vary over time. We consider several coding problems, defined by different degrees of access to the side information at the encoder and the decoder, and discuss their ability to capture the nature of the Wyner-Ziv coding problem as encountered in practical applications. The last part of the thesis focuses on issues in building practical schemes. We define the coding problem specific to systems where rate adaptation cannot be performed in the usual way because no feedback channel is available. Applying standard solutions, though possible, is not suitable. We therefore propose an alternative architecture based on components optimized for the quadratic Gaussian source model
Vers une solution réaliste de décodage source-canal conjoint de contenus multimédia by
Cédric Marin(
Book
)
1 edition published in 2009 in French and held by 1 WorldCat member library worldwide
Caractérisation analytique et optimisation de codes source-canal conjoints by
Amadou Tidiane Diallo(
)
1 edition published in 2012 in French and held by 1 WorldCat member library worldwide
Joint source-channel codes perform data compression and, simultaneously, protection of the generated bitstream against possible transmission errors. Like most source codes, these codes are non-linear. Their potential appeal is to offer good compression and error-correction performance for short code lengths. The performance of a source code is measured by the difference between the entropy of the source to be compressed and the average number of bits needed to encode a symbol of that source. The performance of a channel code is measured by the minimum distance between codewords, or between sequences of codewords, and more generally via the distance spectrum. Classical codes come with tools for efficiently evaluating these performance criteria. Moreover, the design of good source codes and good channel codes has been widely explored since Shannon's work. By contrast, analogous tools for joint source-channel codes, both for performance evaluation and for the design of good codes, remained to be developed, even though some proposals had been made in the past. This thesis focuses on the family of joint source-channel codes that can be described by finite-state automata. Error-correcting quasi-arithmetic codes and error-correcting variable-length codes belong to this family. The way an automaton can be obtained for a given code is recalled. From an automaton, it is possible to build a product graph describing all pairs of paths diverging from one state and converging to another. 
We showed that, thanks to Dijkstra's algorithm, the free distance of a joint code can then be evaluated with polynomial complexity. For error-correcting variable-length codes, we proposed additional, easy-to-evaluate bounds. These bounds extend the Plotkin and Heller bounds to variable-length codes. Bounds can also be deduced from the product graph associated with a code for which only some of the codewords have been specified. These tools for bounding or exactly evaluating the free distance of a joint code make it possible to design codes with good distance properties for a given redundancy, or codes minimizing redundancy for a given free distance. Our approach organizes the search for good joint source-channel codes with trees. The root of the tree corresponds to a code in which no bit is specified, the leaves to codes in which all bits are specified, and the intermediate nodes to partially specified codes. Moving from the root towards the leaves, the upper bounds on the free distance decrease while the lower bounds increase. This allows a branch-and-prune algorithm to find the code with the largest free distance without having to explore the whole tree of codes. The proposed approach enabled the construction of joint codes for the letters of the alphabet. 
Compared with an equivalent tandem scheme (a source code followed by a convolutional code), the resulting codes offer comparable performance (coding rate, free distance) while being less complex in terms of the number of decoder states. Several extensions of this work are in progress: 1) design of error-correcting variable-length codes formalized as a mixed-integer linear programming problem; 2) exploration of the space of error-correcting variable-length codes with an A*-type algorithm
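The free-distance evaluation via Dijkstra's algorithm on the product graph can be sketched as follows. The graph below is an illustrative, made-up example rather than one derived from an actual joint code: its nodes stand for pairs of automaton states, and each edge weight stands for the Hamming distance between the outputs of the two paths.

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest-path weight from start to goal.
    graph maps each node to a list of (neighbor, weight) pairs."""
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return float("inf")  # goal unreachable

# Toy product graph: paths diverge at "diverge", reconverge at "converge";
# the free distance is the minimum accumulated Hamming weight in between.
product_graph = {
    "diverge": [("a", 2), ("b", 1)],
    "a": [("converge", 1)],
    "b": [("a", 1), ("converge", 3)],
}
print(dijkstra(product_graph, "diverge", "converge"))  # → 3
```

Since the product graph has at most the square of the automaton's number of states, running Dijkstra on it gives the polynomial complexity mentioned in the abstract.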
Régulation de la qualité lors de la transmission de contenus vidéo sur des canaux sans fils by
Nesrine Changuel(
)
1 edition published in 2011 in English and held by 1 WorldCat member library worldwide
Due to the emergence of new-generation mobiles and media streaming services, data traffic on mobile networks is continuously exploding. Despite the emergence of standards such as LTE, resources remain scarce and limited. Efficiently sharing resources among broadcasters, or between unicast receivers connected to the same base station, is therefore necessary. The target is an efficient resource allocation achieving a fair received video quality among users and an equal transmission delay. To that end, the variability of the rate-distortion tradeoff of multimedia content is exploited. First, a centralized joint control of the encoding and transmission rates of multiple programs sharing the same channel is considered. A satisfactory and comparable video quality among the transmitted programs, with limited variations, as well as a comparable transmission delay, are targeted. The problem is solved using constrained optimization tools. Second, only the bandwidth allocation control is centralized; the encoding rate of each stream is controlled in a distributed manner. By modeling the problem as a feedback control system, the centralized bandwidth allocator only needs to feed the buffer level back to its associated remote content provider. Equilibrium and stability issues are addressed for buffer control both in bits and in seconds. In the case of a simple unicast connection, a cross-layer optimization of scalable video delivery over a wireless channel is performed. The optimization problem is cast in the framework of dynamic programming. When low-complexity models are considered and the system characteristics are known, optimal solutions can be obtained. When the system is only partially known, for example when the channel state reaches the control process with delay, learning techniques are implemented
Théorie des jeux et apprentissage pour les réseaux sans fil distribués by
François Mériaux(
)
1 edition published in 2013 in French and held by 1 WorldCat member library worldwide
In this thesis, we study wireless networks in which mobile terminals are free to choose their communication configuration. These configuration choices include the wireless access technology, access-point association, coding-modulation scheme, occupied bandwidth, power allocation, etc. Typically, these choices are made to maximize performance metrics associated with each terminal. Under the assumption that mobile terminals take their decisions in a rational manner, game theory can be applied to model the interactions between the terminals. Precisely, the main objective of this thesis is to study energy-efficient power control policies from which no terminal has an interest in deviating. The framework of stochastic games is particularly suited to this problem and allows the achievable utility region for equilibrium power control strategies to be characterized. When the number of terminals in the network is large, we invoke mean-field game theory to simplify the study of the system. Indeed, in a mean-field game, the interactions between a player and all the other players are not considered individually. Instead, one only studies the interactions between each player and a mean field, which is the distribution of the states of all the other players. Optimal power control strategies from the mean-field formulation are studied. Another part of this thesis focuses on learning equilibria in distributed games. In particular, we show how best-response dynamics and learning algorithms can converge to an equilibrium in a base-station location game. For another scenario, namely a power control problem, we study the convergence of the best-response dynamics. In this case, we propose a power control behavioral rule that converges to an equilibrium with very little information about the network
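Best-response dynamics of the kind studied here can be illustrated with a classical target-SINR power-control iteration: each terminal repeatedly sets its power to the minimum level meeting a target SINR given the others' current powers, and the iteration converges to an equilibrium when one exists. The gains and parameters below are illustrative, not taken from the thesis.

```python
# Best-response power control: terminal i's best response to the others'
# powers is the smallest power achieving its target SINR. Iterating these
# responses converges to the fixed point (the equilibrium) when feasible.

def best_response_dynamics(gains, noise, target_sinr, iters=50):
    """gains[i][j]: channel gain from transmitter j to receiver i."""
    n = len(gains)
    powers = [1.0] * n  # arbitrary starting powers
    for _ in range(iters):
        for i in range(n):
            interference = noise + sum(gains[i][j] * powers[j]
                                       for j in range(n) if j != i)
            # Minimum power meeting SINR_i = target_sinr.
            powers[i] = target_sinr * interference / gains[i][i]
    return powers

gains = [[1.0, 0.1], [0.2, 1.0]]
powers = best_response_dynamics(gains, noise=0.1, target_sinr=2.0)
# At the resulting equilibrium each terminal attains exactly the target SINR.
```

No terminal can lower its power without dropping below the target, so no unilateral deviation is profitable, which is the equilibrium property discussed in the abstract.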
Performance Analysis of Iterative Soft Interference Cancellation Algorithms and New Link Adaptation Strategies for Coded MIMO Systems by
Baozhu Ning(
)
1 edition published in 2013 in English and held by 1 WorldCat member library worldwide
Current wireless communication systems are evolving toward more reactive radio resource management (RRM) and fast link adaptation (FLA) protocols, in order to jointly optimize the MAC and PHY layers. In parallel, multiple-antenna technology and advanced turbo receivers have great potential to increase spectral efficiency in future wireless communication systems. These two trends, namely cross-layer optimization and turbo processing, call for new PHY-layer abstractions (also called performance prediction methods) that can capture the performance of the iterative receiver at each iteration, so as to allow the smooth introduction of these advanced receivers into FLA and RRM. This doctoral thesis revisits in detail the architecture of the turbo receiver, in particular the class of iterative algorithms performing linear minimum mean-square error detection with interference cancellation (LMMSE-IC). A semi-analytical performance prediction method is then proposed to analyze its evolution through stochastic modeling of each of its components. Intrinsically, the performance prediction method depends on the channel state information available at the receiver (CSIR), the type of channel coding (convolutional or turbo code), the number of codewords, and the type of probabilistic information on the coded bits fed back by the decoder for interference reconstruction and cancellation inside the iterative LMMSE-IC algorithm. The second part addresses closed-loop link adaptation in coded MIMO systems based on the proposed PHY-layer abstractions for iterative LMMSE-IC receivers.
The proposed link adaptation scheme relies on a low feedback rate and exploits the selection of the spatial precoder (e.g., antenna selection) and of the modulation and coding scheme (MCS) so as to maximize the average rate subject to a block error rate constraint. Different coding schemes are tested, such as coding across all antennas or per-antenna coding. Simulations clearly show the significant gain obtained with turbo receivers compared with a conventional MMSE receiver.
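One detection pass of an LMMSE-IC stage can be sketched for a toy real-valued 2x2 MIMO system. The channel, noise, and the soft symbol estimates "fed back by the decoder" below are all made-up numbers; this only illustrates the structure of the algorithm class named in the abstract, not the thesis implementation:

```python
import numpy as np

# Hypothetical real-valued 2x2 MIMO system with BPSK symbols.
H = np.array([[1.0, 0.3],
              [0.2, 0.8]])        # columns: per-stream channel signatures
x = np.array([1.0, -1.0])         # transmitted symbols
noise = np.array([0.05, -0.03])   # fixed noise realization (illustrative)
sigma2 = 0.01                     # noise variance assumed by the filter
y = H @ x + noise

xbar = np.array([0.9, -0.8])      # soft estimates from the decoder (assumed)
v = 1 - xbar**2                   # residual variances of the soft symbols

x_hat = np.zeros(2)
for k in range(2):
    # Soft interference cancellation: subtract the soft estimate of every
    # stream, then add back the stream of interest.
    y_k = y - H @ xbar + H[:, k] * xbar[k]
    # Covariance seen by the filter: desired stream (unit variance),
    # residual interference (variance v[j]), and noise.
    others = [j for j in range(2) if j != k]
    R = (np.outer(H[:, k], H[:, k])
         + sum(v[j] * np.outer(H[:, j], H[:, j]) for j in others)
         + sigma2 * np.eye(2))
    f = np.linalg.solve(R, H[:, k])   # LMMSE filter for stream k
    x_hat[k] = f @ y_k

print(np.sign(x_hat))   # hard decisions recover [ 1. -1.]
```

In a full turbo receiver this step alternates with channel decoding, the decoder refining `xbar` and `v` at each iteration; the semi-analytical prediction method mentioned above tracks exactly that per-iteration evolution.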
Traffic-Aware Resource Allocation and Feedback Design in Wireless Networks by
Apostolos Destounis(
)
1 edition published in 2014 in English and held by 1 WorldCat member library worldwide
Wireless networks face ever-increasing data demand, which is expected to keep growing in the coming years, driven mainly by video and data services. The main approaches proposed to address this problem, notably the use of multiple antennas, OFDMA (both already part of the 3GPP and LTE standards), and the deployment of small-cell networks, have mostly been examined from a physical-layer point of view, focusing on performance measures such as total system throughput. However, the characteristics of video and data traffic, as well as individual user demands, must be taken into account when designing radio resource allocation algorithms. The objective of this thesis is to study the impact of radio resource allocation algorithms (power control, precoding, scheduling) and of channel state information on the behavior of the users' queues. In particular, we study the precoding and power control problem in the interference channel with the aim of regulating the users' queue behavior, and the joint design of channel feedback/estimation and user selection and scheduling, so as to guarantee queue stability for a large set of traffic demands in MISO-OFDMA broadcast systems. To this end, we use mathematical tools from heavy-traffic asymptotic models and stochastic stability theory.
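The queue-stability viewpoint of this abstract can be illustrated with a minimal max-weight scheduler, a standard throughput-optimal rule for this kind of problem. The arrival rates and per-slot channel rates below are illustrative assumptions, not values from the thesis:

```python
import random

random.seed(1)

# Two users; each slot the scheduler may serve one user at that user's current
# channel rate. Max-weight serves the user maximizing queue_length * rate,
# which keeps queues stable whenever arrivals lie inside the capacity region.
arrival_rate = [0.3, 0.4]   # Bernoulli packet arrivals per slot (assumed)
queues = [0, 0]
history = []

for t in range(10000):
    # Random per-slot channel rates: packets servable this slot.
    rates = [random.choice([0, 1, 2]), random.choice([0, 1, 2])]
    # Max-weight decision: weight each user by backlog times current rate.
    i = max(range(2), key=lambda u: queues[u] * rates[u])
    queues[i] = max(0, queues[i] - rates[i])
    # New arrivals.
    for u in range(2):
        queues[u] += 1 if random.random() < arrival_rate[u] else 0
    history.append(sum(queues))

avg_backlog = sum(history) / len(history)
print(avg_backlog)
```

The thesis goes well beyond this sketch (imperfect channel state information, feedback design, heavy-traffic asymptotics), but the same backlog-times-rate trade-off is what ties resource allocation to queue behavior.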
TCP and network coding : equilibrium and dynamic properties by
Hamlet Medina Ruiz(
)
1 edition published in 2014 in English and held by 1 WorldCat member library worldwide
Communication networks today share the same fundamental principle of operation: information is delivered to its destination by intermediate nodes in a store-and-forward manner. Network coding (NC) is a technique that allows intermediate nodes to send out packets that are linear combinations of previously received information. The main benefits of NC are potential throughput improvements and a high degree of robustness, which translates into loss resilience. These benefits have motivated deployment efforts for practical applications of NC, e.g., incorporating NC into congestion control schemes such as TCP-Reno to obtain a TCP/NC congestion protocol. In TCP/NC, TCP-Reno throughput is improved by sending a fixed amount of redundant packets, which mask part of the losses due, e.g., to channel transmission errors. In this thesis, we first analyze the dynamics of TCP/NC with random early detection (RED) as active queue management (AQM), using tools from convex optimization and feedback control. We study the network equilibrium point and the stability properties of TCP-Reno when NC is incorporated into the TCP/IP protocol stack. The existence and uniqueness of an equilibrium point is proved and characterized in terms of average throughput, loss rate, and queue length. Our study also shows that TCP/NC/RED becomes unstable when delays or link capacities increase, but also when the amount of redundant packets added by NC increases. Using a continuous-time model and neglecting feedback delays, we prove that TCP/NC is globally stable, and we provide a sufficient condition for local stability when feedback delays are present. The fairness of TCP/NC with respect to TCP-Reno-like protocols is also studied. Second, we propose an algorithm to dynamically adjust the amount of redundant linear combinations of packets transmitted by NC.
In TCP/NC with adaptive redundancy (TCP/NC-AR), the redundancy is adjusted using a loss differentiation scheme, which estimates the amount of losses due to channel transmission errors and those due to congestion. Simulation results show that TCP/NC-AR outperforms TCP/NC in terms of throughput. Finally, we analyze the equilibrium and stability properties of TCP/NC-AR/RED. The existence and uniqueness of an equilibrium point is characterized experimentally, and the TCP/NC-AR/RED dynamics are modeled using a continuous-time model. Theoretical and simulation results show that TCP/NC-AR tracks the optimal value of the redundancy for small packet loss rates. Moreover, simulations of the linearized model around equilibrium show that TCP/NC-AR enlarges the TCP-Reno stability region; we show that this is due to the compensating effect of the redundancy adaptation dynamics on TCP-Reno. These characteristics allow the congestion window adaptation mechanism of TCP-Reno to react smoothly to channel losses, avoiding unnecessary rate reductions and increasing the local stability of TCP-Reno.
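The redundancy-sizing idea behind such schemes can be sketched in a few lines. The function and the loss estimate below are hypothetical illustrations of the principle (mask channel erasures by over-sending coded combinations), not the thesis' algorithm:

```python
import math

def coded_batch_size(k, p_channel):
    """Coded packets to send so that ~k combinations survive loss rate p_channel.

    If a fraction p_channel of packets is erased by channel errors, sending
    n = ceil(k / (1 - p_channel)) random linear combinations of k source
    packets lets the receiver still collect about k independent combinations,
    hiding the erasures from TCP's congestion control.
    """
    if not 0 <= p_channel < 1:
        raise ValueError("loss rate must be in [0, 1)")
    return math.ceil(k / (1 - p_channel))

# Adaptive redundancy: a loss differentiation module (placeholder value here)
# re-estimates the channel loss rate, and the sender resizes its batches.
estimated_channel_loss = 0.1
print(coded_batch_size(32, estimated_channel_loss))  # → 36
```

The hard part, which the thesis addresses, is keeping this adaptation loop stable: over-estimating the channel loss wastes capacity and, as the equilibrium analysis above shows, too much redundancy can itself destabilize the system.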
Secure Communication and Cooperation in Interference-Limited Wireless Networks by
German Bassi(
)
1 edition published in 2015 in French and held by 1 WorldCat member library worldwide
In this thesis, we conduct an information-theoretic study of two important aspects of wireless communications: the improvement of data throughput in interference-limited networks by means of cooperation between users, and the strengthening of transmission security with the help of feedback. In the first part of the thesis, we focus on the simplest model that encompasses interference and cooperation, the Interference Relay Channel (IRC). Our goal is to characterize the capacity region of the Gaussian IRC to within a fixed number of bits, independently of the channel conditions. To do so, we derive a novel outer bound and two inner bounds. Specifically, the outer bound is obtained thanks to a non-trivial extension we propose of the injective semi-deterministic class of channels, originally derived by Telatar and Tse for the Interference Channel (IC). In the second part of the thesis, we investigate the Wiretap Channel with Generalized Feedback (WCGF); our goal is to provide a general transmission strategy that encompasses the existing results for the different feedback models found in the literature. To this end, we propose two different inner bounds on the capacity of the memoryless WCGF. We first derive an inner bound based on joint source-channel coding, which introduces time dependencies between the feedback outputs and the channel inputs across different time blocks. We then introduce a second inner bound in which the feedback link is used to generate a key that encrypts the message partially or completely.
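The second strategy, using feedback-derived randomness as a key, boils down to a one-time-pad operation at the coding level. A toy version (the key source and the partial-pad convention are illustrative assumptions, not the thesis scheme) can be written as:

```python
import secrets

def encrypt_with_feedback_key(message: bytes, key: bytes) -> bytes:
    """XOR the first len(key) bytes of the message with the key (partial pad).

    Bytes beyond the key length are sent unpadded; in the partial-encryption
    setting only the padded prefix is information-theoretically hidden from
    the wiretapper, the rest must be protected by the channel code itself.
    """
    padded = bytes(m ^ k for m, k in zip(message, key))
    return padded + message[len(key):]

message = b"secret channel input"
key = secrets.token_bytes(8)   # stands in for bits distilled from feedback
cipher = encrypt_with_feedback_key(message, key)

# Decryption is the same XOR with the same key.
assert encrypt_with_feedback_key(cipher, key) == message
```

When the feedback link yields enough shared randomness to cover the whole message, this reduces to a full one-time pad, which is why the key-generation inner bound can encrypt "partially or completely" depending on the key rate.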
Optimized broadcasting in wireless ad hoc networks using network coding by
Nour Kadi(
Book
)
1 edition published in 2010 in English and held by 1 WorldCat member library worldwide
Network coding is a novel technique that has attracted research interest since its emergence in 2000. It was shown that network coding, combined with wireless broadcasting, can potentially improve performance in terms of throughput, energy efficiency, and bandwidth utilization. Our study begins by integrating network coding with the multipoint relay (MPR) technique. MPR is an efficient broadcast mechanism that has been used in many wireless protocols. We show how combining the two techniques can reduce the number of transmitted packets and increase the throughput. We further reduce the complexity by proposing an opportunistic coding scheme that performs coding operations on the binary field. Instead of linearly combining packets over a larger field, we sum packets modulo 2, which simply corresponds to XORing the corresponding bits of each packet. These operations are computationally cheap. Using neighbor state information, a node in our scheme chooses, at each transmission, the packets to encode and transmit so as to deliver a maximum number of packets. This requires an exchange of reception information between neighbors. To reduce the overhead of the required feedback, we propose a new coding scheme. It uses an LT code (a type of fountain code) to eliminate the need for perfect feedback among neighbors. This scheme performs encoding and decoding with logarithmic complexity. We optimize the LT code to speed up the decoding process. The optimization is achieved by proposing a new degree distribution to be used during the encoding process. This distribution allows intermediate nodes to decode more symbols even when few encoded packets have been received.
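The degree distribution is the lever the thesis optimizes. The thesis' own distribution is not reproduced here; for reference, the standard robust soliton distribution that classical LT codes draw packet degrees from can be computed as follows (parameter values `c` and `delta` are conventional defaults, chosen for illustration):

```python
import math

def robust_soliton(k, c=0.1, delta=0.5):
    """Robust soliton degree distribution over degrees 0..k (index 0 unused)."""
    s = c * math.log(k / delta) * math.sqrt(k)
    # Ideal soliton component: rho(1) = 1/k, rho(d) = 1/(d(d-1)) for d >= 2.
    rho = [0.0] + [1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]
    # Robustness spike around degree k/s, boosting low degrees and one pivot.
    tau = [0.0] * (k + 1)
    pivot = int(round(k / s))
    for d in range(1, pivot):
        tau[d] = s / (k * d)
    if 1 <= pivot <= k:
        tau[pivot] = s * math.log(s / delta) / k
    beta = sum(rho) + sum(tau)   # normalization constant
    return [(rho[d] + tau[d]) / beta for d in range(k + 1)]

mu = robust_soliton(100)
assert abs(sum(mu) - 1.0) < 1e-9   # valid probability distribution
```

Encoding then XORs `d` randomly chosen source packets, with `d` sampled from this distribution; shifting probability mass toward low degrees, as the thesis' modified distribution does, is what lets intermediate nodes decode some symbols before the full set of coded packets arrives.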
Related Identities
Duhamel, Pierre, Ph. D. (Opponent, Thesis advisor, Author)
Walter, Éric
Jaulin, Luc (Author)
Didrit, Olivier
Ecole doctorale Sciences et Technologies de l'Information, des Télécommunications et des Systèmes (Orsay, Essonne)
Laboratoire des signaux et systèmes (L2S) (Gif-sur-Yvette, Essonne)
Université de Paris-Sud (Degree grantor)
Télécom ParisTech
Supélec (Gif-sur-Yvette, Essonne) (Degree grantor)
Université de Paris-Sud, Faculté des Sciences d'Orsay (Essonne)