Lecturers

Paolo Banelli

Website: https://www.unipg.it/personale/paolo.banelli/en/
http://dante.diei.unipg.it/~banelli/
University of Perugia, Italy

GOAL-ORIENTED WIRELESS COMMUNICATIONS FOR EDGE-ASSISTED MACHINE LEARNING

ABSTRACT: New generation (5G/6G) mobile networks are evolving from classical communication infrastructures into networks that support and enable advanced services based on (deep) machine learning, such as augmented reality, autonomous driving, and the Internet of Things. Edge Machine Learning (EML) is a relatively recent paradigm that allows User Equipments (UEs), connected to a mobile network, to opportunistically offload their computational and learning tasks to Edge Servers (ESs) that are physically close to the Radio Access Network (RAN). The goal of EML is to efficiently manage the system resources, such as transmission power, bandwidth, and computational energy, both at the UEs and the ESs, while performing learning tasks with guaranteed accuracy and within a prescribed latency for each specific application. This can be achieved through proper resource management strategies and the solution of dynamic optimization problems. From a (limited-)resource optimization perspective, it is intuitive that the UEs should offload to the ESs the minimum amount of information that suffices to complete the learning task with the desired accuracy, e.g., to meet the requirements of a user-specific application. This intuitive idea has recently been formalized under the name of Goal-Oriented (or task-oriented) communications. This lecture provides an introduction to goal-oriented wireless communications for edge-assisted machine learning services, and shows some practical examples where this framework is exploited to define and dynamically solve optimal resource allocation strategies, whose objective is to flexibly trade energy consumption for learning accuracy under guaranteed service latency.

Paolo Banelli received the Laurea degree and the Ph.D. degree in telecommunications from the University of Perugia, Perugia, Italy, in 1993 and 1998, respectively. Since 2019, he has been a Full Professor with the Department of Engineering, University of Perugia, where he previously served as an Assistant Professor (from 1998) and an Associate Professor (from 2005). He was a visiting researcher with the University of Minnesota, Minneapolis, MN, USA, in 2001, and a visiting professor with Stony Brook University, Stony Brook, NY, USA, from 2019 to 2020. His research interests include signal processing and optimization for wireless communications, graph signal processing and learning, and signal processing for biomedical applications. He was a member of the IEEE Signal Processing for Communications and Networking Technical Committee from 2011 to 2013. In 2009, he was General Co-Chair of the IEEE International Symposium on Signal Processing Advances for Wireless Communications. He served as an Associate Editor for the IEEE Transactions on Signal Processing (from 2013 to 2016), the EURASIP Journal on Advances in Signal Processing (from 2013 to 2020), and the IEEE Open Journal of Signal Processing (since 2020).


Sergio Barbarossa

Website: https://sites.google.com/a/uniroma1.it/sergiobarbarossa/home

Sapienza University of Rome, Italy

TOPOLOGICAL SIGNAL PROCESSING AND LEARNING

ABSTRACT: The goal of this lecture is to introduce the basic tools for processing signals defined over a topological space. Nowadays, processing signals defined over a graph has become a well-established technology. Graphs are just one example of a topological space, incorporating only pairwise relations. In this lecture, after reviewing the fundamentals of signal processing over graphs, we will introduce simplicial and cell complexes as spaces able to incorporate higher-order relations in the representation space, while still possessing a rich algebraic structure that facilitates the extraction of information from the data. We will motivate the introduction of a simplicial (or cell) Fourier Transform, and we will show how to design filters over a cell complex and how to derive sampling strategies. Then we will move to the design of topological neural networks operating over signals defined on topological spaces of different order, and we will present methods to infer the structure of the space from data and to design neural networks operating on data living on a topological space. Finally, we will show a number of applications.
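The algebraic structure mentioned above can be made concrete with a tiny numerical sketch (not taken from the lecture; the complex, orientations, and library choice are illustrative). It builds the incidence matrices of a toy simplicial complex and the first-order Hodge Laplacian, whose kernel counts the holes in the complex:

```python
import numpy as np

# Toy simplicial complex: 4 nodes, 5 oriented edges, 1 filled triangle.
# Edges (oriented tail -> head): e0=(0,1), e1=(0,2), e2=(1,2), e3=(1,3), e4=(2,3)
# Node-to-edge incidence matrix B1 (rows: nodes, columns: edges).
B1 = np.array([
    [-1, -1,  0,  0,  0],
    [ 1,  0, -1, -1,  0],
    [ 0,  1,  1,  0, -1],
    [ 0,  0,  0,  1,  1],
], dtype=float)

# Edge-to-triangle incidence matrix B2 for the single triangle [0,1,2]:
# its boundary is e0 - e1 + e2.
B2 = np.array([[1, -1, 1, 0, 0]], dtype=float).T

# Fundamental property: the boundary of a boundary vanishes (B1 @ B2 = 0).
assert np.allclose(B1 @ B2, 0)

# First-order (Hodge) Laplacian acting on edge signals.
L1 = B1.T @ B1 + B2 @ B2.T

# Its kernel (the harmonic space) has dimension equal to the number of
# 1-dimensional holes; here the unfilled cycle (1,2,3) gives dimension 1.
eigvals = np.linalg.eigvalsh(L1)
n_holes = int(np.sum(np.isclose(eigvals, 0)))
print(n_holes)  # 1
```

The eigenvectors of L1 play the role of the simplicial Fourier basis for edge signals, in the same way the graph Laplacian eigenvectors do for node signals.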

Sergio Barbarossa is a Full Professor at Sapienza University of Rome and a Senior Research Fellow of the Sapienza School for Advanced Studies (SSAS). He is an IEEE Fellow and a EURASIP Fellow. He received Best Paper Awards from the IEEE Signal Processing Society in 2000, 2014, and 2020, and the Technical Achievement Award from the European Association for Signal Processing (EURASIP) in 2010. He served as an IEEE Distinguished Lecturer in 2013-2014. He has been the scientific coordinator of several European projects. His main current research interests include topological signal processing and learning, semantic and goal-oriented communications, 6G networks, and distributed edge machine learning.


Mérouane Debbah

Website: https://www.tii.ae/team/prof-merouane-debbah
Technology Innovation Institute (TII), Abu Dhabi

RECENT ADVANCES IN MACHINE LEARNING FOR WIRELESS

ABSTRACT: This talk focuses on the use of emerging deep learning techniques in future wireless communication networks. It will be shown that data-driven approaches should not replace, but rather complement traditional design techniques based on mathematical models. Extensive motivation is given for why deep learning based on artificial neural networks will be an indispensable tool for the design and operation of future wireless communication networks, and our vision of how artificial neural networks should be integrated into the architecture of future wireless communication networks is presented. A thorough description of deep learning methodologies is provided, starting with the general machine learning paradigm, followed by a more in-depth discussion about deep learning and artificial neural networks, covering the most widely-used artificial neural network architectures and their training methods.

Mérouane Debbah is Chief Researcher at the Technology Innovation Institute in Abu Dhabi. He is an Adjunct Professor with the Department of Machine Learning at the Mohamed Bin Zayed University of Artificial Intelligence. He received the M.Sc. and Ph.D. degrees from the Ecole Normale Supérieure Paris-Saclay, France. He was with Motorola Labs, Saclay, France, from 1999 to 2002, and also with the Vienna Research Center for Telecommunications, Vienna, Austria, until 2003. From 2003 to 2007, he was an Assistant Professor with the Mobile Communications Department, Institut Eurecom, Sophia Antipolis, France. In 2007, he was appointed Full Professor at CentraleSupelec, Gif-sur-Yvette, France. From 2007 to 2014, he was the Director of the Alcatel-Lucent Chair on Flexible Radio. From 2014 to 2021, he was Vice-President of the Huawei France Research Center. He was jointly the director of the Mathematical and Algorithmic Sciences Lab and the director of the Lagrange Mathematical and Computing Research Center. Since 2021, he has been leading the AI & Digital Science Research centers at the Technology Innovation Institute. He has managed 8 EU projects and more than 24 national and international projects. His research interests lie in fundamental mathematics, algorithms, statistics, information, and communication sciences. He is an IEEE Fellow, a WWRF Fellow, a EURASIP Fellow, an AAIA Fellow, an Institut Louis Bachelier Fellow, and a Membre émérite SEE. He was a recipient of the ERC Grant MORE (Advanced Mathematical Tools for Complex Network Engineering) from 2012 to 2017. He was a recipient of the Mario Boella Award in 2005, the IEEE Glavieux Prize Award in 2011, the Qualcomm Innovation Prize Award in 2012, the 2019 IEEE Radio Communications Committee Technical Recognition Award, and the 2020 SEE Blondel Medal.
He received more than 20 best paper awards, among which the 2007 IEEE GLOBECOM Best Paper Award, the Wi-Opt 2009 Best Paper Award, the 2010 Newcom++ Best Paper Award, the WUN CogCom Best Paper 2012 and 2013 Award, the 2014 WCNC Best Paper Award, the 2015 ICC Best Paper Award, the 2015 IEEE Communications Society Leonard G. Abraham Prize, the 2015 IEEE Communications Society Fred W. Ellersick Prize, the 2016 IEEE Communications Society Best Tutorial Paper Award, the 2016 European Wireless Best Paper Award, the 2017 Eurasip Best Paper Award, the 2018 IEEE Marconi Prize Paper Award, the 2019 IEEE Communications Society Young Author Best Paper Award, the 2021 Eurasip Best Paper Award, the 2021 IEEE Marconi Prize Paper Award, the 2022 IEEE Communications Society Outstanding Paper Award, and the 2022 ICC Best Paper Award, as well as the Valuetools 2007, Valuetools 2008, CrownCom 2009, Valuetools 2012, SAM 2014, and 2017 IEEE Sweden VT-COM-IT Joint Chapter best student paper awards. He is an Associate Editor-in-Chief of the journal Random Matrix: Theory and Applications. He was an Associate Area Editor and Senior Area Editor of the IEEE TRANSACTIONS ON SIGNAL PROCESSING from 2011 to 2013 and from 2013 to 2014, respectively. From 2021 to 2022, he served as an IEEE Signal Processing Society Distinguished Industry Speaker.


Georgios B. Giannakis

Website: https://spincom.umn.edu
University of Minnesota, USA

ONLINE LEARNING FOR IOT MONITORING AND MANAGEMENT

ABSTRACT: The Internet-of-Things (IoT) provides an intelligent infrastructure of networked smart devices offering task-specific monitoring and control services. IoT’s unique features include heterogeneity, ubiquitous low-power devices, and unpredictable dynamics, also due to human participation. The need naturally arises for foundational innovations in network monitoring and management that allow efficient adaptation to changing environments and low-cost service provisioning, subject to stringent latency constraints. To this end, the overarching theme of this tutorial is a unifying framework for online monitoring and management of IoT through contemporary communication, networking, learning, and optimization advances. From the network architecture vantage point, the unified framework leverages a promising architecture termed fog, which enables smart devices to have proximity access to cloud functionalities at the network edge, along the cloud-to-things continuum. From the algorithmic perspective, key innovations include online approaches based on ensemble Gaussian processes that can offer adaptivity to different degrees of nonstationarity in IoT dynamics. Scalability of implementation further motivates bandit operation along with local information exchanges that enable distributed approaches. The outlined framework can serve as a stepping stone toward systematic designs and rigorous analysis of task-specific learning and management schemes for IoT.

GEORGIOS B. GIANNAKIS received his Diploma in Electrical Engr. (EE) from the Ntl. Tech. U. of Athens, Greece, 1981. From 1982 to 1986 he was with the U. of Southern California (USC), where he received his MSc. in EE, 1983, MSc. in Mathematics, 1986, and Ph.D. in EE, 1986. He was with the U. of Virginia from 1987 to 1998, and since 1999 he has been with the U. of Minnesota (UMN), where he held an Endowed Chair of Telecommunications, served as director of the Digital Technology Center from 2008 to 2021, and since 2016 has been a UMN Presidential Chair in ECE. His interests span the areas of statistical learning, communications, and networking - subjects on which he has published more than 485 journal papers, 785 conference papers, 25 book chapters, two edited books, and two research monographs. His current research focuses on Data Science with applications to IoT and power networks with renewables. He is the (co-)inventor of 34 issued patents and the (co-)recipient of 10 best journal paper awards from the IEEE Signal Processing (SP) and Communications Societies, including the G. Marconi Prize. He also received the IEEE-SPS `Norbert Wiener’ Society Award (2019); EURASIP's `A. Papoulis’ Society Award (2020); Tech. Achievement Awards from the IEEE-SPS (2000) and from EURASIP (2005); the IEEE ComSoc Education Award (2019); and the IEEE Fourier Technical Field Award (2015). He is a member of the Academia Europaea, the Academy of Athens, Greece, and a Fellow of the US Ntl. Academy of Inventors, the European Academy of Sciences, the IEEE, and EURASIP. He has served the IEEE in a number of posts, including that of a Distinguished Lecturer for the IEEE-SPS.


Geert Leus

Website: https://cas.tudelft.nl/People/bio.php?id=3
Delft University of Technology, The Netherlands

SIGNAL PROCESSING AND MACHINE LEARNING ON GRAPHS

ABSTRACT: The field of graph signal processing extends classical signal processing tools to signals (data) with an irregular structure that can be characterized by means of a graph (e.g., network data). Among the building blocks of this field are graph filters, direct analogues of time-domain filters, but intended for signals defined on graphs. In this tutorial, we introduce the field of graph signal processing and specifically give an overview of the graph filtering problem. We look at the families of finite impulse response (FIR) and infinite impulse response (IIR) graph filters and show how they can be implemented in a distributed manner. To further limit the communication and computational complexity of such a distributed implementation, we also generalize the state-of-the-art distributed graph filters to filters whose weights depend on the nodes sharing information. These so-called edge-variant graph filters yield significant benefits in terms of filter order reduction and can be used for solving specific distributed optimization problems with extremely fast convergence. Finally, we will give an overview of how graph filters can be used in deep learning applications involving data sets with an irregular structure. Different types of graph filters can be used in the convolution step of graph convolutional networks, leading to different trade-offs between performance and complexity. The numerical results presented in this talk illustrate the potential of graph filters in signal processing and machine learning.
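The FIR graph filter and its distributed implementation can be sketched in a few lines of numpy (a minimal illustration, not from the tutorial; the graph, taps, and normalization are invented for the example). The key point is that applying the shift operator S once only requires each node to exchange values with its neighbors, so a K-tap filter costs K communication rounds:

```python
import numpy as np

rng = np.random.default_rng(0)

# Undirected 5-node ring graph; its adjacency matrix serves as shift operator.
A = np.array([
    [0, 1, 0, 0, 1],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 0],
    [0, 0, 1, 0, 1],
    [1, 0, 0, 1, 0],
], dtype=float)
S = A / np.abs(np.linalg.eigvalsh(A)).max()  # normalized shift operator

x = rng.standard_normal(5)       # graph signal: one value per node
h = np.array([1.0, 0.5, 0.25])   # FIR filter taps h_0, h_1, h_2

# Centralized view: y = (h_0 I + h_1 S + h_2 S^2) x.
H = h[0] * np.eye(5) + h[1] * S + h[2] * (S @ S)
y_centralized = H @ x

# Distributed view: each application of S needs only neighbor values,
# so K taps cost K rounds of local exchanges.
z, y_distributed = x.copy(), h[0] * x
for k in range(1, len(h)):
    z = S @ z                    # one round of neighbor communication
    y_distributed += h[k] * z

assert np.allclose(y_centralized, y_distributed)
```

The edge-variant filters mentioned in the abstract generalize this by letting each entry of S carry its own learnable weight at every tap, rather than a single scalar h_k per shift.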

Geert Leus received the M.Sc. and Ph.D. degrees in Electrical Engineering from KU Leuven, Belgium, in June 1996 and May 2000, respectively. He is now a Full Professor at the Faculty of Electrical Engineering, Mathematics and Computer Science of the Delft University of Technology, The Netherlands. His research interests are in the broad area of signal processing, with a specific focus on wireless communications, array processing, sensor networks, and graph signal processing. He received the 2021 EURASIP Individual Technical Achievement Award, a 2005 IEEE Signal Processing Society Best Paper Award, and a 2002 IEEE Signal Processing Society Young Author Best Paper Award. He is a Fellow of the IEEE and a Fellow of EURASIP. He was a Member-at-Large of the Board of Governors of the IEEE Signal Processing Society, the Chair of the IEEE Signal Processing for Communications and Networking Technical Committee, the Chair of the EURASIP Technical Area Committee on Signal Processing for Multisensor Systems, a Member of the IEEE Sensor Array and Multichannel Technical Committee, a Member of the IEEE Big Data Special Interest Group, a Member of the EURASIP Signal Processing for Communications and Networking Special Area Team, and the Editor in Chief of the EURASIP Journal on Advances in Signal Processing. He was also on the Editorial Boards of the IEEE Transactions on Signal Processing, the IEEE Transactions on Wireless Communications, the IEEE Signal Processing Letters, and the EURASIP Journal on Advances in Signal Processing. Currently, he is a Member of the IEEE Signal Processing Theory and Methods Technical Committee, an Associate Editor of Foundations and Trends in Signal Processing, and the Editor in Chief of EURASIP Signal Processing.


Geoffrey Ye Li

Website: https://www.imperial.ac.uk/intelligent-transmission-and-processing-laboratory
Imperial College London, United Kingdom

FROM CONVENTIONAL TO SEMANTIC COMMUNICATIONS BASED ON DEEP LEARNING

ABSTRACT: To transmit text messages, speech, or pictures, we usually convert them into a symbol sequence and transmit the symbols in a conventional communication system, which is designed based on a block structure with coding, decoding, modulation, demodulation, etc. It has been demonstrated recently that deep learning (DL) has great potential to break the bottleneck of the block-based communication system. In this talk, we first present our recent endeavors in developing end-to-end (E2E) communications, which combine all blocks at the transmitter into one neural network and those at the receiver into another neural network. Even though DL-based E2E communication systems have the potential to outperform conventional block-based communication systems in terms of performance and complexity, their spectral efficiency is still limited by the Shannon capacity, since they essentially transmit bits or symbols. Semantic communication systems transmit and recover the desired meaning of the transmitted content (for example, a text message or a picture) directly, and can significantly improve transmission efficiency. We will present our initial results on semantic communications.

Geoffrey Ye Li has been a Chair Professor at Imperial College London since 2020. Before moving to Imperial, he was with the Georgia Institute of Technology as a Professor for 20 years and with AT&T Labs - Research in New Jersey, USA, as a Principal Technical Staff Member for five years. His general research interests include statistical signal processing and machine learning for wireless communications. In these areas, he has published over 600 refereed journal and conference papers in addition to over 40 granted patents. His publications have been cited over 50,000 times, with an H-index of over 100, and he has been listed as a World’s Most Influential Scientific Mind, also known as a Highly Cited Researcher, by Thomson Reuters almost every year since 2001. He has been an IEEE Fellow since 2006 and an IET Fellow since 2021. He has received several prestigious awards from the IEEE ComSoc, IEEE VTS, and IEEE SPS, including the 2019 IEEE ComSoc Edwin Howard Armstrong Achievement Award.


Gonzalo Mateos

Website: http://www.ece.rochester.edu/~gmateosb/
University of Rochester, USA

CONNECTING THE DOTS: LEARNING GRAPHS FROM DATA

ABSTRACT: Under the assumption that network data are related to the topology of the graph on which they are supported, the goal of graph signal processing (GSP) is to develop algorithms that fruitfully leverage this relational structure and can make inferences about these relationships, even when they are only partially observed. Many GSP efforts to date assume that the underlying network is known, and then analyze how the graph’s algebraic and spectral characteristics impact the properties of the graph signals of interest. However, such an assumption is often untenable in practice, and arguably most graph construction schemes are largely informal, distinctly lacking an element of validation. In this lecture, we offer an overview of network topology inference methods developed to bridge the aforementioned gap, using information available from graph signals along with innovative GSP tools and models to infer the underlying graph structure. The lecture will also introduce attendees to challenges and opportunities for SP research in emerging topic areas at the crossroads of modeling, prediction, and control of complex behavior arising in large-scale networked systems that evolve over time. Accordingly, this lecture stretches all the way from (nowadays rather mature) statistical approaches, including correlation analyses, to recent GSP advances in a comprehensive and unifying manner. Through rigorous problem formulations and intuitive reasoning, concepts are made accessible to SP researchers not well versed in network-analytic issues. A diverse gamut of network inference challenges and application domains will be selectively covered, based on importance and relevance to SP expertise, as well as the instructor's own experience and contributions.
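The "largely informal" graph construction the abstract alludes to can be illustrated with the simplest correlation-based scheme (a toy sketch, not from the lecture; the data, group structure, and threshold are invented). Two groups of variables are driven by independent latent factors, and thresholding the sample correlation matrix recovers the two-block topology:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: 6 variables (nodes) in two strongly correlated groups,
# {0,1,2} and {3,4,5}; the groups are mutually independent.
n = 2000
f1, f2 = rng.standard_normal((2, n))          # one latent factor per group
X = np.empty((n, 6))
for i in range(3):
    X[:, i] = f1 + 0.4 * rng.standard_normal(n)
    X[:, 3 + i] = f2 + 0.4 * rng.standard_normal(n)

# "Informal" topology inference: threshold the sample correlation matrix.
R = np.corrcoef(X, rowvar=False)
A = (np.abs(R) > 0.5).astype(int)
np.fill_diagonal(A, 0)                         # no self-loops

print(A)  # block-diagonal adjacency: edges only within each group
```

The lecture's point is precisely that the threshold here is arbitrary and unvalidated; principled topology inference replaces it with statistical models (e.g., graphical lasso or smoothness-based GSP criteria) tying the signals to the graph.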

Gonzalo Mateos earned the B.Sc. degree from Universidad de la Republica, Uruguay, in 2005, and the M.Sc. and Ph.D. degrees from the University of Minnesota, Twin Cities, in 2009 and 2011, respectively, all in electrical engineering. He joined the University of Rochester, Rochester, NY, in 2014, where he is currently an Associate Professor with the Department of Electrical and Computer Engineering, as well as an Asaro Biggar Family Fellow in Data Science. During the 2013 academic year, he was a visiting scholar with the Computer Science Department at Carnegie Mellon University. From 2004 to 2006, he worked as a Systems Engineer at Asea Brown Boveri (ABB), Uruguay. His research interests lie in the areas of statistical learning from complex data, network science, decentralized optimization, and graph signal processing, with applications in brain connectivity, dynamic network health monitoring, social networks, the power grid, and Big Data analytics.


Petar Popovski

Website: http://petarpopovski.es.aau.dk
Aalborg University, Denmark

STATISTICAL LEARNING FOR ULTRA-RELIABLE LOW LATENCY COMMUNICATIONS

ABSTRACT: The stringent, and often extreme, reliability guarantees posed for ultra-reliable low-latency communications (URLLC) can be attained and claimed in a credible way only by relying on a proper statistical characterization and statistical learning. This characterization should justify the claim that a user will be guaranteed a reliability of, say, 99.999%, given the observed data on the wireless channel as well as any other prior knowledge. This lecture presents the statistical aspects of URLLC, detailing both frequentist and Bayesian approaches. URLLC has different definitions for reliability, availability, latency, etc., depending on which communication layers are included. The lecture focuses on the physical layer, since it is fundamental to the understanding of the overall performance. It presents the statistical features and guarantees for outage probability in a narrowband wireless channel. Outage can occur when the instantaneous channel quality cannot support the data rate selected by the transmitter. The frequentist approach requires an excessive number of observations to characterize ultra-rare events. The required number of observations can be decreased by using prior knowledge, which brings in the Bayesian method. As a motivating example, we treat the practical case in which a base station (BS) collects spatial channel statistics for users at different locations and attempts to predict the performance of a user at a new location. The lecture will discuss how this case can be addressed by using a statistical characterization of radio maps. An example will be presented based on a synthetic dataset generated by ray tracing, where the BS can obtain high-quality predictions of the reliability performance even for locations that are not in proximity.
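The frequentist difficulty with ultra-rare events can be made concrete with a small Monte Carlo sketch (illustrative only, not from the lecture; a Rayleigh-fading channel is assumed so that the outage probability has a known closed form to compare against):

```python
import numpy as np

rng = np.random.default_rng(0)

# Rayleigh fading: the channel gain g = |h|^2 is exponentially distributed
# (unit mean), so the outage probability below a threshold t is known in
# closed form: P_out = P(g < t) = 1 - exp(-t).
p_target = 1e-3
t = -np.log1p(-p_target)          # threshold giving exactly P_out = 1e-3

# Frequentist (Monte Carlo) estimate from raw channel observations.
n = 1_000_000
g = rng.exponential(scale=1.0, size=n)
p_hat = np.mean(g < t)

# Cost of rare events: estimating p with relative standard deviation eps
# needs roughly n = (1 - p) / (p * eps^2) i.i.d. observations.
eps = 0.1
n_needed = (1 - 1e-5) / (1e-5 * eps**2)
print(f"p_hat = {p_hat:.2e}, samples for five-nines at 10% error: {n_needed:.1e}")
```

For the five-nines regime (p = 1e-5) this gives on the order of 10^7 observations just for 10% relative accuracy, which is exactly the gap the Bayesian use of prior knowledge (e.g., radio maps) is meant to close.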

Petar Popovski is a Professor at Aalborg University, where he heads the section on Connectivity, and holds a Visiting Excellence Chair at the University of Bremen. He received his Dipl.-Ing. and M.Sc. degrees in communication engineering from the University of Sts. Cyril and Methodius in Skopje and the Ph.D. degree from Aalborg University in 2005. He is a Fellow of the IEEE. He received an ERC Consolidator Grant (2015), the Danish Elite Researcher Award (2016), the IEEE Fred W. Ellersick Prize (2016), the IEEE Stephen O. Rice Prize (2018), the Technical Achievement Award from the IEEE Technical Committee on Smart Grid Communications (2019), the Danish Telecommunication Prize (2020), and a Villum Investigator Grant (2021). He is currently the Editor-in-Chief of the IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS. Prof. Popovski was the General Chair of IEEE SmartGridComm 2018 and the IEEE Communication Theory Workshop 2019. His research interests are in communication theory and wireless connectivity. He authored the book "Wireless Connectivity: An Intuitive and Fundamental Guide", published by Wiley in 2020.


Ljubiša Stanković

Website: http://www.tfsa.ac.me/
University of Montenegro

MATCHED FILTERING APPROACH TO UNDERSTANDING THE BASIS OF GRAPH CONVOLUTIONAL NEURAL NETWORKS

ABSTRACT: The operation of CNNs will be explained from the matched-filter perspective, and it will be shown that their very backbone – the convolution operation – represents a matched filter which examines the input for the presence of characteristic patterns in the data. This fact serves as a vehicle for a unifying account of the overall functionality of CNNs, whereby the convolution-activation-pooling chain and learning strategies are also shown to admit a compact and elegant interpretation under the umbrella of matched filtering. Then, a review of graphs as a basis for signals and signal processing on irregular domains will be given. Graph Convolutional Neural Networks (GCNNs) are becoming a model of choice for learning on irregular domains; yet, due to the black-box nature of neural networks (NNs), their underlying principles are rarely examined in depth. To this end, we revisit the operation of GCNNs and show that the convolutional layer effectively performs, as in the standard CNN, graph matched filtering of its inputs with a set of predefined patterns (features). We then show how this serves as a basis for an analogous framework for understanding GCNNs which maintains physical relevance throughout the information flow, including nonlinear activations and max-pooling. Such an approach is shown to be quite general, and yields both standard CNNs and fully connected NNs as special cases. For enhanced intuition, a step-by-step numerical example is provided, through which the information propagation through GCNNs is visualised, illuminating all stages of GCNN operation and learning.
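The convolution-as-matched-filter view can be demonstrated in one dimension with a few lines of numpy (an illustrative sketch, not the lecture's own example; the template, signal length, and pool size are invented). A "convolutional layer" is cross-correlation with a template, and its output peaks exactly where the template is present:

```python
import numpy as np

# The pattern (feature) the "convolutional layer" searches for.
w = np.array([1.0, -1.0, 1.0])

# Input signal: weak noise with the pattern planted at position 10.
rng = np.random.default_rng(0)
x = 0.1 * rng.standard_normal(32)
x[10:13] += w

# Convolution in CNNs is in fact cross-correlation, i.e. a matched
# filter: slide the template over the input and take inner products.
y = np.array([x[i:i + len(w)] @ w for i in range(len(x) - len(w) + 1)])

# The matched-filter output peaks where the pattern is present...
assert y.argmax() == 10

# ...and the usual ReLU + max-pooling chain keeps only strong matches.
relu = np.maximum(y, 0.0)
pooled = relu.reshape(-1, 5).max(axis=1)  # pool size 5: 30 outputs -> 6
print(pooled)
```

In the graph case discussed above, the sliding inner product is replaced by a polynomial in the graph shift operator, but the matched-filtering interpretation carries over unchanged.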

Ljubiša Stanković was born in Montenegro on June 1, 1960. He received a B.S. degree in electrical engineering from the University of Montenegro in 1982, receiving the award for the best student at the University. As a student, he won two competitions in mathematics for electrical engineering students of Yugoslavia, in 1980 and 1982. He received an M.S. degree in communications from the University of Belgrade, and a Ph.D. in the theory of electromagnetic waves from the University of Montenegro in 1988. As a Fulbright grantee, he spent the 1984/85 academic year at the Worcester Polytechnic Institute, Worcester, MA. Since 1982, he has been on the faculty at the University of Montenegro, where he has been a Full Professor since 1995. From 1997 to 1999, he was on leave at the Ruhr University Bochum, Germany, supported by the Alexander von Humboldt Foundation. At the beginning of 2001, he was at the Technische Universiteit Eindhoven, The Netherlands, as a Visiting Professor. From 2003 to 2008, he was the Rector of the University of Montenegro. He was the Ambassador of Montenegro to the United Kingdom, Ireland, and Iceland from 2011 to 2015, and a visiting academic at Imperial College London from 2012 to 2013. His current interests are in signal processing. He has published about 450 technical papers, almost 200 of them in the leading journals, mainly the IEEE editions. Stanković received the highest state award of Montenegro in 1997 for scientific achievements. He was a member of the IEEE Signal Processing Society's Technical Committee on Theory and Methods, an Associate Editor of the IEEE Transactions on Image Processing, an Associate Editor of the IEEE Signal Processing Letters, an Associate Editor of the IEEE Transactions on Signal Processing, and a Senior Area Editor of the IEEE Transactions on Image Processing. He is the Deputy Editor-in-Chief of IET Signal Processing, and a member of the Editorial Board of Signal Processing.
He is a member of the National Academy of Science and Arts of Montenegro (CANU) since 1996, its vice-president since 2016, and a member of the Academia Europaea and the European Academy of Sciences and Arts. Stanković (with coauthors) won the Best paper award from the European Association for Signal Processing (EURASIP) in 2017 for a paper published in the Signal Processing journal and the IEEE Signal Processing Magazine Best Column Award for 2020 and the Outstanding Paper Award at the IEEE ICASSP 2021. Stanković is a Fellow of the IEEE for contributions to time-frequency signal analysis.


Andrea Tonello

Website: https://www.andreatonello.com/
University of Klagenfurt, Austria

STATISTICAL LEARNING WITH GENERATIVE MODELS FOR COMMUNICATIONS

ABSTRACT: Learning the statistics of physical phenomena has been a long-time research objective. The advent of machine learning methods has offered effective tools to tackle such an objective in several data science domains. Some of those tools can be used in the domain of communication systems and networks. We should, however, distinguish between data learning and signal learning. The former paradigm is typically applied to the higher protocol layers, while the latter to the physical layer. Historically, stochastic models derived from the laws of physics have been exploited to describe the physical layer. From these models, transmission technology has been developed and performance analysis has been carried out. Nevertheless, this approach has shown some shortcomings in complex and uncertain environments. Based on these preliminary considerations, we will review basic concepts about the statistical description of random processes and conventional random signal generation methods. Then, recent generative models capable of first learning the hidden/implicit distribution and then generating synthetic signals will be discussed. We will review the concept of copula and motivate the use of recently introduced segmented neural network architectures that operate in the uniform probability space. The application of such generative models to classic (but still open) problems in communications will be illustrated, including: synthetic channel and noise modeling, channel capacity estimation, and coding/decoding design in unknown channels. A key enabling component is the ability to estimate mutual information. This will lead us to the description of novel estimators and their application to learning the channel capacity, an ambitious goal, since capacity is mostly unknown even for analytically described channels. More constructively, we will show that autoencoders can be used to design capacity-approaching coding/decoding schemes.
Finally, focusing on the receiver side, we will present optimal decoding strategies based on deep learning neural architectures. The lecture will substantiate the theoretical aspects with several application examples, not only in the wireless communication context but also in the less-known power line communication domain, which is perhaps more challenging given the extremely complex nature of the channel and noise.
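The copula idea reviewed above, working in the uniform probability space, can be sketched with a Gaussian copula (an illustrative example, not from the lecture; the correlation value and sample size are invented). Correlated Gaussians are mapped coordinate-wise through the standard normal CDF, yielding uniform marginals whose dependence structure survives:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

# A copula separates a joint distribution into its marginals and its
# dependence structure. Gaussian-copula sampling: draw correlated
# Gaussians, then push each coordinate through its own CDF.
rho = 0.8
C = np.array([[1.0, rho], [rho, 1.0]])       # correlation of the latent pair
L = np.linalg.cholesky(C)
z = (L @ rng.standard_normal((2, 5000))).T   # correlated N(0, C) samples

phi = np.vectorize(lambda v: 0.5 * (1.0 + erf(v / sqrt(2.0))))
u = phi(z)                                   # uniform marginals, coupled

# Each marginal is (approximately) uniform on [0, 1]...
assert 0.45 < u[:, 0].mean() < 0.55

# ...but the dependence of the latent Gaussian pair survives the mapping.
print(np.corrcoef(u, rowvar=False)[0, 1])
```

Transforming u back through arbitrary inverse marginal CDFs produces dependent samples with any desired marginals, which is the mechanism behind generating synthetic channel and noise realizations in the uniform probability space.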

Andrea Tonello is a professor of embedded communication systems at the University of Klagenfurt, Austria. He was previously an associate professor at the University of Udine, Italy, a technical manager at Bell Labs-Lucent Technologies, USA, and managing director of Bell Labs Italy, where he was responsible for research activities on cellular technology. He is co-founder of the spin-off company WiTiKee and holds a part-time associate professor post at the University of Udine, Italy. Dr. Tonello received the PhD from the University of Padova, Italy (2002). He was the recipient of several awards, including: the Lucent Bell Labs Recognition of Excellence Award (1999), the RAENG (UK) Distinguished Visiting Fellowship (2010), the IEEE Vehicular Technology Society Distinguished Lecturer Award (2011-15), the IEEE Communications Society (ComSoc) Distinguished Lecturer Award (2018-19), the IEEE ComSoc TC-PLC Interdisciplinary and Research Award (2019), the IEEE ComSoc TC-PLC Outstanding Service Award (2019), and the Chair of Excellence from UC3M (2019-20). He also received 10 best paper awards. He was the chair of the IEEE ComSoc Technical Committee on PLC (2014-18) and the director for industry outreach in the IEEE ComSoc board of governors (2020-21). He has served as an associate editor of IEEE TVT, IEEE TCOM, IEEE ACCESS, IET Smart Grid, and the Elsevier Journal of Energy and Artificial Intelligence. He serves as the chair of the IEEE ComSoc Technical Committee on Smart Grid Communications (since 2020).


Dejan Vukobratović

Dejan Vukobratović

Website: https://sites.google.com/view/vukobratovic
University of Novi Sad, Serbia

GAUSSIAN BELIEF PROPAGATION: FROM THEORY TO APPLICATIONS

Abstract: In this lecture, we present a tutorial overview of a powerful class of probabilistic graphical models called factor graphs, and specialize them to the setting of large-scale multivariate Gaussian distributions. Our focus is on Gaussian Belief Propagation (GBP), a simple, efficient and widely applicable algorithm for probabilistic inference on factor graphs. We discuss the key issues of GBP applied in real-world probabilistic systems, such as correctness, convergence and complexity, through the prism of GBP applications in the fundamental problem of state estimation in power systems. Future extensions of these concepts using modern data-based (deep) machine learning tools are also presented as our ongoing work.
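To make the flavor of GBP concrete, the following is a minimal sketch of scalar Gaussian belief propagation on a pairwise model in information form, p(x) ∝ exp(-½ xᵀJx + hᵀx). The 3-node chain and all numbers are illustrative assumptions, not examples from the lecture; on a tree-structured graph like this one, the GBP marginals coincide with the exact solution:

```python
# Minimal Gaussian Belief Propagation (GBP) sketch on a pairwise model
# in information form. The 3-node chain (J, h) is an illustrative toy.
import numpy as np

J = np.array([[2., -1., 0.],
              [-1., 2., -1.],
              [0., -1., 2.]])   # precision (information) matrix
h = np.array([1., 0., 1.])      # potential vector

n = len(h)
edges = [(i, j) for i in range(n) for j in range(n)
         if i != j and J[i, j] != 0]
Jmsg = {e: 0.0 for e in edges}  # scalar precision messages
hmsg = {e: 0.0 for e in edges}  # scalar potential messages

for _ in range(20):             # synchronous message passing
    newJ, newh = {}, {}
    for (i, j) in edges:
        # aggregate messages into i from all neighbours except j
        Ji = J[i, i] + sum(Jmsg[(k, i)] for (k, l) in edges
                           if l == i and k != j)
        hi = h[i] + sum(hmsg[(k, i)] for (k, l) in edges
                        if l == i and k != j)
        newJ[(i, j)] = -J[i, j] ** 2 / Ji
        newh[(i, j)] = -J[i, j] * hi / Ji
    Jmsg, hmsg = newJ, newh

# marginal mean and variance at each node from the converged messages
mean, var = np.empty(n), np.empty(n)
for i in range(n):
    Ji = J[i, i] + sum(Jmsg[e] for e in edges if e[1] == i)
    hi = h[i] + sum(hmsg[e] for e in edges if e[1] == i)
    mean[i], var[i] = hi / Ji, 1.0 / Ji

print(mean)                     # matches the exact solve J x = h on a tree
print(np.linalg.solve(J, h))
```

Each message costs only scalar arithmetic per edge, which is what makes GBP attractive for large, sparse problems such as power-system state estimation; on loopy graphs, however, convergence and correctness of the variances are exactly the issues the lecture examines.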

Dejan Vukobratović received a PhD degree in electrical engineering from the University of Novi Sad, Serbia, in 2008, where he has been a Full Professor since 2019. During 2009 and 2010, he was a Marie Curie Intra-European Fellow at the University of Strathclyde, Glasgow, UK. He has published more than 40 journal and 90 conference papers in top-tier IEEE journals and conference venues. He received the best paper award at IEEE MMSP 2010, and his PhD student received the best student paper award at IEEE SmartGridComm 2017. He served as TPC Chair at IEEE Vehicular Technology Conference - Spring 2020 (Antwerp, Belgium) and Symposium Chair at IEEE SmartGridComm 2021 (Aachen, Germany), and will serve as TPC Chair at IEEE SmartGridComm 2022 (Singapore) and Symposium Chair at IEEE ICC 2023 (Rome, Italy). His research interests are in the broad area of information and coding theory, wireless communications, distributed signal and information processing in Smart Grids, and massive machine-type communications in mobile cellular networks.


Thomas Watteyne

Thomas Watteyne

Website: www.thomaswatteyne.com
https://team.inria.fr/eva/
Inria, Paris, France

DUST ACADEMY: GETTING YOUR HANDS DIRTY!

Abstract: This is a tremendously fun hands-on course on low-power wireless, where you will be designing, assembling, programming and deploying your very own low-power wireless network. I will be bringing cases full of low-power wireless boards, programming cables, sensors and batteries. Low-power wireless is deeply embedded and touches upon many technologies, including low-power electronics and wireless. There is no better way of learning about low-power wireless than trying it for real. In lockstep with the "theory" seen through slideware, you will all be playing with state-of-the-art technology. Specifically, we will be using the market-leading IoT technology used in industrial applications, called SmartMesh, by Analog Devices. A SmartMesh network offers over 99.999% end-to-end reliability, with each device consuming less than 50 μA. Each of you will receive a set of SmartMesh devices to apply to a real-world use case. Throughout this hands-on course, you will be able to build a low-power wireless network, attach sensors to it, deploy it, and connect it to the Internet. The module will end with a project in which we will all be building one large sensor-to-cloud solution. This course will be super fun: the instructor is a super-enthusiastic IoT professional who is 200% driven to teach you the ropes and build something together. This course will be fulfilling because it allows you to put many different things you have learned to good use: embedded systems, electronics, networking, web programming, etc. It will be nicely challenging in that the entire class will work together on building a complete solution in which every element has to interoperate. This course provides you with a very detailed introduction to low-power wireless: what it is, what it is not, what the challenges are, and what solutions exist today.
We will put particular emphasis on what low-power wireless "really" is, building up all the material presented from real-world use cases, both exploratory academic experiments and real-world commercial (and critical) applications. All of these examples will be taken from my personal experience.

Thomas Watteyne is an insatiable enthusiast of low-power wireless technologies. He holds a Research Director position at Inria in Paris, in the EVA research team, where he leads a team that designs, models and builds networking solutions based on a variety of Internet-of-Things (IoT) standards. He is Wireless System Architect at Analog Devices, the undisputed leader in supplying low-power wireless mesh networking solutions for critical industrial applications and beyond. Since 2013, he has co-chaired the IETF 6TiSCH working group, which standardizes how to use IEEE802.15.4e TSCH in IPv6-enabled mesh networks, and is a member of the IETF Internet-of-Things Directorate. In 2019, he co-founded Wattson Elements, the company that develops the award-winning Falco marina management solution. Thomas was a postdoctoral research lead in Prof. Kristofer Pister's team at the University of California, Berkeley. He founded and co-leads Berkeley's OpenWSN project, an open-source initiative to promote the use of fully standards-based protocol stacks for the IoT. Between 2005 and 2008, he was a research engineer at France Telecom, Orange Labs. He holds a PhD in Computer Science (2008), an MSc in Networking (2005) and an MEng in Telecommunications (2005) from INSA Lyon, France. He is a Senior Member of IEEE. He is fluent in 4 languages.

Contact Information

2022 IEEE-SPS/EURASIP SUMMER SCHOOL ON NETWORK- AND DATA-DRIVEN LEARNING

Business Centre “Integra”, Banja Luka