dc.contributor.advisor | Helberg, A.S.J. | en_US |
dc.contributor.advisor | Ferreira, M. | en_US |
dc.contributor.author | Van Dyk, T.G. | en_US |
dc.date.accessioned | 2021-01-28T06:23:30Z | |
dc.date.available | 2021-01-28T06:23:30Z | |
dc.date.issued | 2020 | en_US |
dc.identifier.uri | https://orcid.org/0000-0002-8042-1934 | en_US |
dc.identifier.uri | http://hdl.handle.net/10394/36529 | |
dc.description | MEng (Computer and Electronic Engineering), North-West University, Potchefstroom Campus | |
dc.description.abstract | To reach the maximum flow capacity of a network, network coding can be used. There are various implementations of network coding, the most widely used being Random Linear Network Coding (RLNC). RLNC, however, is quite susceptible to network errors. Due to this drawback, Matrix Network Coding (MNC) was developed by Kim et al. in 2011. MNC improves on RLNC in that it has better error-correcting capabilities. However, since MNC uses matrices for encoding rather than scalar coefficients, it adds complexity to the system. Decoding is also more complex for MNC, since the G-matrix is larger and not always invertible, and invertibility is necessary for decoding. We focused on methods to improve the decodability of MNC by improving the invertibility of the G-matrix for each decoding problem. This was done by selectively choosing encoding matrices at source and intermediate nodes and determining the effect on the resulting G-matrix. For the first method, we used only invertible encoding matrices at the source nodes. For the second method, we chose only encoding matrices that would, when used to encode data at intermediate nodes, result in invertible output encoding matrices. Finally, we chose only upper triangular matrices as encoding matrices. From these experiments, we found that the first two methods both improved the invertibility of the G-matrix. The first method showed considerable improvement for RLNC and a slight improvement for MNC, while the second showed a substantial improvement for MNC. Most network cases showed little improvement; however, approximately 15% of network cases showed an improvement of 50% or more using Method 2 on MNC. These network cases were comparable to RLNC with Method 1 applied. The final method had not been tested before and produced a surprising result: no invertible G-matrices were found.
MNC has inherent error-correction abilities, and limiting the encoding factors might influence this capability. We therefore tested the influence of network depth and burst errors on networks that use only invertible matrices as encoding matrices. The effect of burst errors has not been tested on MNC in previous studies. Among other things, we found that larger networks show a smaller increase in network error propagation than smaller networks. Further research on the effect of burst errors could be done using different encoding factors. | |
dc.language.iso | en | en_US |
dc.publisher | North-West University (South Africa) | en_US |
dc.subject | Random Linear Network Coding | |
dc.subject | Matrix Network Coding | |
dc.subject | Invertibility | |
dc.subject | Error Correction Coding | |
dc.title | Improving the decoding efficiency of Matrix Network Coding | en_US |
dc.type | Thesis | en_US |
dc.description.thesistype | Masters | en_US |
dc.contributor.researchID | 12363626 - Helberg, Albertus Stephanus Jacobus (Supervisor) | en_US |
dc.contributor.researchID | 13041274 - Ferreira, Melvin (Supervisor) | en_US |