Electronic & Electrical Eng (Theses and Dissertations)
http://hdl.handle.net/2262/205 (updated 2020-07-05T13:22:09Z)

Massive MIMO technology for next generation of wireless networks
SABETI, PARNA
http://hdl.handle.net/2262/92713 (updated 2020-06-02T17:01:29Z; issued 2020-01-01)
Large-scale antenna, or massive multiple-input multiple-output (MIMO), systems are one of the key enabling technologies for the fifth generation (5G) of wireless communication networks and beyond. This technology offers huge advantages in terms of energy efficiency, spectral efficiency, robustness, and reliability. However, some challenges prevent the realization of the full potential of massive MIMO technology. For instance, the performance of massive MIMO systems relies heavily on accurate synchronization. Orthogonal frequency division multiplexing (OFDM), the multicarrier modulation technique commonly used in massive MIMO systems, is very sensitive to frequency synchronization errors. In contrast, waveforms based on filter-bank multicarrier (FBMC) modulation are more robust against frequency offset. Thus, the application of FBMC with pulse amplitude modulation (FBMC-PAM) to massive MIMO is proposed in this thesis as an alternative to OFDM. It is also demonstrated that, due to the absence of a cyclic prefix (CP), FBMC-PAM can provide better bit error rate (BER) performance than OFDM. In addition, it is observed that as the number of base station (BS) antennas increases, the performance of the massive MIMO system with FBMC-PAM saturates. In fact, in the asymptotic regime, the noise effect and multi-user interference (MUI) are averaged out, but some residual effects of the multipath channel remain. This saturation level, which is the upper bound on the system performance, is calculated mathematically and confirmed by simulations. Moreover, it is shown that a higher upper bound can be achieved by increasing the number of subcarriers. In OFDM systems, on the other hand, the CP effectively mitigates the effects of the multipath channel, and there is no saturation in the asymptotic regime. Following these results, we focus on OFDM-based massive MIMO systems for the rest of this study.
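The asymptotic averaging argument above, that noise washes out as the array grows, can be illustrated with a minimal maximum-ratio-combining (MRC) sketch. This is a flat-fading toy model with assumed parameters (real-valued symbol alphabet, SNR, trial count), not the thesis's FBMC-PAM multipath analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

def mrc_symbol_error(num_antennas, snr_db=0.0, trials=2000):
    """Average squared error of an MRC estimate of one symbol.

    With maximum-ratio combining, the noise contribution to the estimate
    shrinks roughly as 1/M with the number of BS antennas M.
    """
    noise_var = 10 ** (-snr_db / 10)
    err = 0.0
    for _ in range(trials):
        s = rng.choice([-1.0, 1.0])                      # real-valued (PAM-like) symbol
        h = (rng.standard_normal(num_antennas)
             + 1j * rng.standard_normal(num_antennas)) / np.sqrt(2)
        n = np.sqrt(noise_var / 2) * (rng.standard_normal(num_antennas)
                                      + 1j * rng.standard_normal(num_antennas))
        y = h * s + n                                    # received vector across the array
        s_hat = np.real(np.vdot(h, y) / np.vdot(h, h))   # MRC combining
        err += (s_hat - s) ** 2
    return err / trials
```

The returned error shrinks as the array grows, mirroring the averaging-out of noise described above; the multipath residual that causes the FBMC-PAM saturation is outside this toy model.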
Hence, to meet the requirement of accurate synchronization in these systems, a low-complexity frequency synchronization technique is proposed. It is shown that the phase of the covariance of the signals received at the BS antennas is a function of the carrier frequency offset (CFO), and that if real-valued pilots are utilized, the CFO can be calculated directly from this phase information. It should be noted that, due to spatial multiplexing, all the users in a massive MIMO system can share the entire available bandwidth simultaneously, and the channel state information (CSI) of the users is used to distinguish their signals. At the CFO estimation stage, however, the CSI is not yet available. Thus, a set of rectangular-shaped real-valued pilots is designed to preserve the orthogonality of the users, and a closed-form formula for CFO estimation is derived. Since this technique places strict limitations on pilot design, another CFO estimation technique is proposed that is more general and can work with any pilots. In this technique, the desired user's signal is separated from the received signal by using a matrix orthogonal to the space spanned by that user's pilot. It is proved that the objective function of the proposed optimization problem is unimodal and can be solved simply by golden-section search. Furthermore, it is noted that a massive MIMO system with time-domain CFO compensation requires a separate receiver for each user, which imposes a huge computational burden on the system. Therefore, a frequency-domain CFO compensation technique is proposed that takes place after combining the received signals at the BS. Thus, one receiver is sufficient for all the users in the network, and the complexity of the receiver is considerably reduced. In addition, it is proved that by applying this CFO compensation technique, even in the presence of CFO estimation error, the scattering effect of the CFO is removed and only a phase shift remains.
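The golden-section step can be sketched as follows. The cost function below is a stand-in for the thesis's objective: the negative magnitude of a pilot correlation, which is unimodal in the CFO over the chosen search range. The pilot, the true CFO value, and the search interval are illustrative assumptions:

```python
import numpy as np

def golden_section_min(f, a, b, tol=1e-6):
    """Golden-section search for the minimiser of a unimodal f on [a, b]."""
    gr = (np.sqrt(5) - 1) / 2            # inverse golden ratio, ~0.618
    c, d = b - gr * (b - a), a + gr * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c                  # keep [a, d]; reuse c as new d
            c = b - gr * (b - a)
        else:
            a, c = c, d                  # keep [c, b]; reuse d as new c
            d = a + gr * (b - a)
    return (a + b) / 2

# Toy CFO estimation: a real-valued pilot received with a normalised CFO.
rng = np.random.default_rng(1)
N = 64
n = np.arange(N)
pilot = rng.choice([-1.0, 1.0], size=N)      # real-valued pilot (assumption)
eps_true = 0.137                             # normalised CFO (assumed value)
rx = pilot * np.exp(2j * np.pi * eps_true * n / N)

def cost(eps):
    # Negative correlation magnitude; unimodal in eps over [-0.5, 0.5].
    return -abs(np.sum(np.conj(pilot) * rx * np.exp(-2j * np.pi * eps * n / N)))

eps_hat = golden_section_min(cost, -0.5, 0.5)
```

In this noise-free sketch the minimiser recovers the assumed CFO essentially exactly; the thesis's objective is built from the actual multi-user signal model rather than a single correlation.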
Hence, two iterative error correction algorithms are proposed to improve the synchronization accuracy. Simulation results demonstrate that the BER of the system with this synchronization technique matches that of a perfectly synchronized system.
We then move on to the challenge of CSI acquisition in massive MIMO systems. This is important because CSI estimation becomes a bottleneck as the scale of the antenna array increases or the number of users in the network grows large. To avoid the pilot overhead, a deep learning (DL) aided blind channel estimation technique is proposed. First, the MUI is canceled by calculating the orthogonal complement of the MUI subspace. Then, based on the asymptotic orthogonality of massive MIMO channels, the first OFDM symbol of each user is extracted as a virtual pilot. In practice, however, noise and MUI are not completely removed, and the residual interference is amplified by this process. Therefore, a denoising convolutional neural network (DnCNN) is deployed to deal with the remaining interference. Another DL-based denoiser, built on the U-Net architecture, is also proposed; it can outperform the DnCNN when the noise level is high. Moreover, a ResNet architecture followed by a feedforward neural network is proposed to force the network to converge to the expected values, which further enhances the performance of the virtual pilot detection. Finally, a maximum likelihood (ML) estimator is employed to estimate the CSI.
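The MUI-cancellation step, projecting onto the orthogonal complement of the interference subspace, can be sketched in a few lines of linear algebra. The pilot matrix and dimensions below are made up for illustration; the thesis operates on the actual multi-user massive MIMO signal model:

```python
import numpy as np

rng = np.random.default_rng(2)

def mui_orthogonal_complement(interfering_pilots):
    """Projector onto the orthogonal complement of the MUI pilot subspace.

    interfering_pilots: (L, K) matrix whose columns span the interference.
    The returned P satisfies P @ c = 0 for any c in that subspace, and
    leaves components orthogonal to it untouched (full column rank assumed).
    """
    A = interfering_pilots
    return np.eye(A.shape[0]) - A @ np.linalg.pinv(A)

# Toy check: a mixture of a desired component and interference. After the
# projection, the interference is annihilated and only the part of the
# desired signal lying outside the interference subspace survives.
L, K = 16, 3
A = rng.standard_normal((L, K))              # interfering users' pilots (assumed)
P = mui_orthogonal_complement(A)
interference = A @ rng.standard_normal(K)
desired = rng.standard_normal(L)
cleaned = P @ (desired + interference)
```

Note the trade-off the abstract alludes to: the projection also removes the part of the desired signal inside the interference subspace, which is why the residual must then be handled by the DL denoisers.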
APPROVED
Enabling Adaptable Future Networks: Trade-Offs and Resource Allocation Problems
SEXTON, CONOR
http://hdl.handle.net/2262/92689 (updated 2020-05-29T17:03:42Z; issued 2020-01-01)
In this thesis, we illuminate the various trade-offs arising from the trend towards customisable networks, and propose resource allocation procedures to balance these trade-offs and facilitate the coexistence of Radio Access Technologies (RATs) required to enable adaptable future networks. The problems we address in this thesis can be stated in the form of the following research questions:
1) What choices do the range of proposed RATs and system-level techniques afford operators in the context of enabling an adaptable network?
2) How can the diverse use cases in future networks be satisfied through the coexistence of multiple waveforms, with each service employing a waveform that is best suited for it?
3) What are the implications of a system comprising tailored virtual networks on radio resource allocation and admission control?
4) How can the twin goals of slice-tailored performance and increased resource utilisation in network slicing be simultaneously realised in future networks?
To address these questions, we apply a range of analytical tools, including optimisation, matching theory, and stochastic analysis. We verify our results using large-scale system-level simulations where possible.
APPROVED
Taking advantage of correlated information for energy-aware scheduling in the IoT: A deep reinforcement learning approach
HRIBAR, JERNEJ
http://hdl.handle.net/2262/92455 (updated 2020-05-07T17:01:49Z; issued 2020-01-01)
Millions of battery-powered sensors deployed for monitoring purposes in a multitude of scenarios, e.g., agriculture, smart cities, and industry, require energy-efficient solutions to prolong their lifetime. When these sensors observe a phenomenon distributed in space and evolving in time, the collected observations can be expected to be correlated in time and space.
In this thesis, we first outline how data gathered from correlated sensors can be used to improve the timeliness of updates from another sensor in the network. We consider a system of two correlated information sources, i.e., sensors, which periodically send updates to a gateway regarding an observed physical phenomenon distributed in space and evolving in time. The optimal use of updates in such a system depends greatly on the correlation between the two sources, and to explore this effect we investigate three different models of the covariance between independently obtained observations of the phenomenon of interest. We extract values for the parameters of the covariance models from data collected by a real sensor network, to give the reader a realistic feel for the parameter values and for the applicability of our analysis in a real scenario. We demonstrate that using correlated information results in a significant increase in device lifetime, and we compare our approach to others proposed in the literature.
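As one illustration of such a covariance model: the exponential form below is a common choice for spatially decaying correlation, but both the form and the parameter values are assumptions for this sketch, not the three fitted models from the thesis:

```python
import numpy as np

def exp_covariance(distance, variance=1.0, length_scale=10.0):
    """Exponential spatial covariance model: Cov(d) = variance * exp(-d / length_scale).

    In practice, variance and length_scale would be fitted to real sensor
    data; the defaults here are purely illustrative.
    """
    return variance * np.exp(-np.asarray(distance, dtype=float) / length_scale)

# Correlation between two sensors decays with their separation:
near = exp_covariance(1.0)     # nearby sensors: strongly correlated
far = exp_covariance(100.0)    # distant sensors: nearly independent
```

Under such a model, a gateway that has just received an update from a nearby sensor gains little from an immediate update by its neighbour, which is exactly the redundancy the scheduling mechanism exploits.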
In the second part, we build on the insight gained and propose a Deep Reinforcement Learning (DRL) based scheduling mechanism capable of taking advantage of correlated information. We design our solution using two DRL algorithms, namely Deep Q-Network and Deep Deterministic Policy Gradient. The proposed mechanism determines the frequency with which sensors should transmit their updates, ensuring that observations are collected accurately while simultaneously taking the available energy into account. To evaluate our scheduling mechanism, we use multiple datasets containing environmental observations obtained in a real deployment. We show that the mechanism is capable of significantly extending the sensors' lifetime, and we compare it to an idealized, all-knowing scheduler to demonstrate that its performance is near-optimal. Additionally, we illustrate the unique feature of our design, energy-awareness, by showing the impact of the sensors' energy levels on the chosen update frequency.
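A deliberately tiny stand-in conveys the idea behind the scheduler: an agent learns how often to transmit by trading observation accuracy against battery drain. The tabular Q-learning below replaces the thesis's Deep Q-Network/DDPG agents, and the states, actions, and reward numbers are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# States: battery-level buckets; actions: update rates (0 = sleep, 2 = fast).
n_states, n_actions = 5, 3
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1        # learning rate, discount, exploration

def step(state, action):
    """Toy environment: faster updates give more accuracy reward but are
    penalised when the battery is empty; battery randomly drains or recharges."""
    reward = action - 0.5 * action * (state == 0)
    if rng.random() < 0.5:
        next_state = max(state - (action > 0), 0)        # transmitting drains battery
    else:
        next_state = min(state + 1, n_states - 1)        # harvesting / recharging
    return reward, next_state

state = n_states - 1
for _ in range(5000):
    # Epsilon-greedy action selection.
    action = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[state]))
    reward, nxt = step(state, action)
    # Standard Q-learning temporal-difference update.
    Q[state, action] += alpha * (reward + gamma * Q[nxt].max() - Q[state, action])
    state = nxt
```

In this toy reward, updating fast is always best at full battery, and the learned Q-values reflect that; the thesis's agents instead learn from real observation traces where the accuracy benefit of an update depends on the correlated data already collected.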
APPROVED
Investigating cortical encoding of auditory space & motion in humans using EEG
BEDNAR, ADAM
http://hdl.handle.net/2262/92326 (updated 2020-04-22T17:01:17Z; issued 2020-01-01)
This work uses a novel linear regression-based framework together with scalp-recorded electroencephalography (EEG) to study various aspects of spatial hearing in humans.
In our first study, we showed that in an acoustic scene with a single sound source, the auditory cortex tracks the time-varying location of a continuously moving sound. Specifically, we identified two distinct EEG components that track the sound location: the delta band (0-2 Hz) and alpha power (8-12 Hz). The delta-band and alpha-power encodings had different spatio-temporal characteristics, suggesting that they potentially reflect different aspects of auditory motion processing. Importantly, we also showed that the trajectory tracking is not specific to a particular type of spatial acoustic cue and is independent of the cortex's well-known tracking of the sound envelope.
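The regression framework can be sketched as a ridge-regression backward model that maps multichannel EEG to the sound trajectory. The synthetic data, channel count, and regularisation strength below are assumptions, and the single-lag design simplifies the lagged (TRF-style) models typically used in such studies:

```python
import numpy as np

rng = np.random.default_rng(4)

def ridge_decoder(eeg, stimulus, lam=1.0):
    """Fit a linear backward model mapping EEG channels to a stimulus
    feature (here, sound azimuth) by ridge regression:
    w = (X^T X + lam * I)^{-1} X^T y."""
    X = eeg                                  # shape: (samples, channels)
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ stimulus)

# Synthetic check: EEG channels are noisy mixtures of a moving-source
# trajectory; the decoder trained on the first part of the recording
# should reconstruct the trajectory on held-out data.
T, C = 2000, 32
trajectory = np.sin(2 * np.pi * np.arange(T) / 400)      # azimuth over time
mixing = rng.standard_normal(C)                          # per-channel projection
eeg = np.outer(trajectory, mixing) + 0.5 * rng.standard_normal((T, C))

w = ridge_decoder(eeg[:1500], trajectory[:1500])
reconstruction = eeg[1500:] @ w
r = np.corrcoef(reconstruction, trajectory[1500:])[0, 1]
```

The correlation `r` between the reconstructed and true trajectories on held-out samples is the same kind of figure of merit used to assess trajectory tracking, and later to decode attention, in the studies below.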
In our second study, we created an experiment in which subjects listened to two concurrent sound stimuli moving independently within the horizontal plane and were tasked with paying attention to one of them. We showed that the attended sound source's trajectory can be reliably reconstructed from EEG, even in the presence of a competing source, and demonstrated that the trajectory tracking works for noise as well as for more complex speech stimuli. We also observed weak tracking of the unattended source's location for the speech stimuli; however, this applied only to the delta-band and not to the alpha-power EEG component, further suggesting that location tracking by delta-band and alpha-power EEG may represent different neural mechanisms. Finally, with more practical applications in mind, we demonstrated that the trajectory reconstruction approach can be used to decode selective attention.
In our third study, we investigated cortical sensitivity to sound position, velocity, speed, and acceleration. We found that sound speed, but not velocity, can be reconstructed from EEG independently of sound position. Surprisingly, our results also indicated that sound acceleration might be independently represented at the cortical level, which has not been reported before.
In the last study, we deployed our reconstruction method in a naturalistic scenario in which subjects were allowed to move their heads and received visual input through a virtual reality headset. We were primarily interested in whether sound location is cortically encoded in cranio-centric or allo-centric coordinates. Although our initial analysis indicated a cranio-centric representation of sound location, we were not able to reconstruct the trajectory from EEG after removing the head motion-related artefacts. Therefore, we could not find strong evidence for cortical encoding in either frame of reference. Our secondary goal was to test the feasibility of using the Oculus Rift headset together with EEG recording. Although we found this setup workable, we encountered several practical issues that need to be addressed, such as subject discomfort during longer recordings.
APPROVED