
dc.contributor.advisor: Helberg, A.S.J. [en_US]
dc.contributor.advisor: Ferreira, M. [en_US]
dc.contributor.author: Van Dyk, T.G. [en_US]
dc.date.accessioned: 2021-01-28T06:23:30Z
dc.date.available: 2021-01-28T06:23:30Z
dc.date.issued: 2020 [en_US]
dc.identifier.uri: https://orcid.org/0000-0002-8042-1934 [en_US]
dc.identifier.uri: http://hdl.handle.net/10394/36529
dc.description: MEng (Computer and Electronic Engineering), North-West University, Potchefstroom Campus
dc.description.abstract: To reach the maximum flow capacity of a network, network coding can be used. There are various implementations of network coding; the most widely used is Random Linear Network Coding (RLNC). RLNC, however, is quite susceptible to network errors. Due to this drawback, Matrix Network Coding (MNC) was developed by Kim et al. in 2011. MNC improves on RLNC in that it has better error-correcting capabilities. However, since MNC uses matrices rather than scalar coefficients for encoding, it adds complexity to the system. Decoding is also more complex for MNC, since the G-matrix is larger and not always invertible, and invertibility is necessary for decoding. We focused on methods to improve the decodability of MNC by improving the invertibility of the G-matrix for each decoding problem. This was done by selectively choosing encoding matrices at source and intermediate nodes and determining the effect on the resulting G-matrix. In the first experiment, we used only invertible encoding matrices at the source nodes. In the second, we attempted to choose only encoding matrices that, when used to encode data at intermediate nodes, would result in invertible output encoding matrices. Finally, we attempted to choose only upper triangular matrices as encoding matrices. From these experiments, we found that the first two methods both improved the invertibility of the G-matrix: the first method showed considerable improvement for RLNC and a slight improvement for MNC, while the second showed a substantial improvement for MNC. Most network cases showed little improvement; however, approximately 15% of network cases showed an improvement of 50% or more when Method 2 was applied to MNC. These network cases were comparable to RLNC with Method 1 applied. The final method had not been tested before and produced quite surprising results, since no invertible G-matrices were found. MNC has inherent error-correction abilities, and limiting the encoding factors might influence this capability. We therefore tested the influence of network depth and burst errors on networks that use only invertible matrices as encoding matrices. The effect of burst errors had not been tested on MNC in previous studies. Among other things, we found that larger networks show a lower increase in network error propagation than smaller networks. Further research on the effect of burst errors can still be done using different encoding factors. (A toy sketch of the G-matrix invertibility check that decoding depends on is given after this record.)
dc.language.iso: en [en_US]
dc.publisher: North-West University (South Africa) [en_US]
dc.subject: Random Linear Network Coding
dc.subject: Matrix Network Coding
dc.subject: Invertibility
dc.subject: Error Correction Coding
dc.title: Improving the decoding efficiency of Matrix Network Coding [en_US]
dc.type: Thesis [en_US]
dc.description.thesistype: Masters [en_US]
dc.contributor.researchID: 12363626 - Helberg, Albertus Stephanus Jacobus (Supervisor) [en_US]
dc.contributor.researchID: 13041274 - Ferreira, Melvin (Supervisor) [en_US]
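
As the abstract notes, decoding both RLNC and MNC hinges on whether the global G-matrix is invertible over a finite field, and Method 1 amounts to rejecting non-invertible encoding matrices before they are used. The sketch below is a minimal toy illustration of that idea, not the implementation used in the thesis: it assumes the field GF(2) (network coding typically uses larger fields such as GF(2^8)), it assembles a block-structured G-matrix from independently drawn random blocks rather than from a real network topology, and the names `is_invertible_gf`, `toy_g_matrix`, and `decodable_fraction` are hypothetical.

```python
import numpy as np

P = 2  # toy prime field GF(2); an assumption -- the thesis works over larger fields

def is_invertible_gf(M, p=P):
    """Gaussian elimination over GF(p): True iff M is square and invertible."""
    A = np.array(M, dtype=np.int64) % p
    if A.shape[0] != A.shape[1]:
        return False
    n = A.shape[0]
    for col in range(n):
        pivot = next((r for r in range(col, n) if A[r, col] != 0), None)
        if pivot is None:
            return False                      # no pivot in this column -> singular
        A[[col, pivot]] = A[[pivot, col]]     # swap pivot row into place
        inv = pow(int(A[col, col]), -1, p)    # modular inverse (Python 3.8+)
        A[col] = (A[col] * inv) % p
        for r in range(n):
            if r != col and A[r, col] != 0:
                A[r] = (A[r] - A[r, col] * A[col]) % p
    return True

def random_block(k, invertible_only=False, p=P):
    """Draw a k x k encoding matrix; optionally reject singular draws (Method 1 analog)."""
    while True:
        M = np.random.randint(0, p, size=(k, k))
        if not invertible_only or is_invertible_gf(M, p):
            return M

def toy_g_matrix(n_sources, k, invertible_only=False, p=P):
    """Assemble a block G-matrix from n_sources x n_sources encoding blocks (no real topology)."""
    return np.block([[random_block(k, invertible_only, p) for _ in range(n_sources)]
                     for _ in range(n_sources)])

def decodable_fraction(trials=2000, n_sources=3, k=2, invertible_only=False):
    """Fraction of trials in which the toy G-matrix is invertible, i.e. decodable."""
    return sum(is_invertible_gf(toy_g_matrix(n_sources, k, invertible_only))
               for _ in range(trials)) / trials

if __name__ == "__main__":
    print("unconstrained blocks  :", decodable_fraction(invertible_only=False))
    print("invertible-only blocks:", decodable_fraction(invertible_only=True))
```

The two printed fractions only illustrate that constraining the encoding blocks changes how often the assembled G-matrix is invertible; the percentages quoted in the abstract come from the thesis's own network simulations and are not reproduced by this sketch.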

