Coding theory, sometimes called algebraic coding theory, deals with the design of error-correcting codes for the reliable transmission of information across noisy channels. It makes use of classical and modern algebraic techniques involving finite fields, group theory, and polynomial algebra. It has connections with other areas of discrete mathematics, especially number theory and the theory of experimental designs. Three areas commonly associated with coding theory are data compression, cryptology, and error-correcting codes.

Data Compression

Data compression is the process of efficiently encoding source information so that it uses the smallest amount of space possible. This is accomplished ...
A visual representation of the errors looks like the following:

Because noise most often lasts longer than the duration of a single bit, burst errors are much more common than single-bit or multiple-bit errors. Noise causes the data being transmitted to change, or become corrupt; the number of bits corrupted during transmission is directly related to the length of time the data is exposed to the noise and to the rate at which it is being transmitted.

Error correction is a more difficult process than error detection. To perform error correction, one or more errors must first be detected, the error(s) within the data located, and a correction process applied. With error detection, the process is complete once it is known whether or not an error has occurred.

A high-level overview of the error correction process is as follows:
1. Errors within a transmission are detected.
2. The corrupt bits are located within the transmission, including:
   - the number of errors
   - the location of the errors
3. An error correction code is applied.

Important factors in error correction are the number of errors within a transmission and the size of the message being transmitted. There are eight possible error locations for a single error within one 8-bit data unit, and twenty-eight possible pairs of error locations for two errors in the same unit; the number of possible error locations grows rapidly as the number of errors increases.

Redundancy

To detect or correct errors, extra bits must be sent along with the data. For example, to correct a single error in an 8-bit data unit, we need to consider eight possible error locations, which is already difficult. To correct two errors in a data unit of the same size, we need to consider 28 possibilities, the number of ways to choose 2 positions out of 8. It is therefore not easy for a receiver to find 10 errors in 1000 bits of data.

Hamming Distance

The Hamming distance, named after Richard Hamming, is used to count the number of flipped bits in a fixed-length binary word during telecommunication.
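The combinatorial growth described above can be checked directly. As a minimal sketch (the function name is my own), the number of distinct sets of error positions for k errors in an n-bit unit is the binomial coefficient C(n, k):

```python
from math import comb

def error_location_count(n_bits, n_errors):
    """Number of distinct sets of error positions:
    choose n_errors positions out of n_bits."""
    return comb(n_bits, n_errors)

# One error in an 8-bit data unit: 8 possible locations.
print(error_location_count(8, 1))    # 8
# Two errors in the same unit: C(8, 2) = 28 possibilities.
print(error_location_count(8, 2))    # 28
# Ten errors in 1000 bits: a search space far too large to scan naively.
print(error_location_count(1000, 10))
```

This is why a receiver cannot simply try every possibility: the last case alone exceeds 10^23 candidate position sets.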
It is sometimes referred to as the signal distance. The Hamming distance can be defined as the number of differences between corresponding bits, and it can be quickly calculated by applying the XOR operation to two words and counting the number of 1s in the result. "The Hamming distance d(x, y) between the bit strings x = x1x2 . . . xn and y = y1y2 . . . yn is the number of positions in which these strings differ, that is, the number of i (i = 1, 2, . . . , n) for which xi ≠ yi" (Rosen, 1999).

The smallest Hamming distance between any pair of distinct code words in a set is called the minimum Hamming distance. In a coding scheme, the minimum Hamming distance is used to define the error-detecting and error-correcting capability of the code.

Linear Codes

Linear codes have more efficient encoding and decoding algorithms than other codes. Linear codes are special sets of words of length n over an alphabet {0, .., q - 1}, where q is a power of a prime. Sets of words Fqn will be considered as vector spaces V(n, q) of vectors of length n with elements from the set {0, ....
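The XOR shortcut described above, together with the minimum-distance definition, can be sketched as follows (function names are my own, not from the source):

```python
def hamming_distance(x, y):
    """Hamming distance between two equal-length bit strings:
    XOR the words, then count the 1s in the result."""
    assert len(x) == len(y), "Hamming distance requires equal-length words"
    xor = int(x, 2) ^ int(y, 2)        # differing positions become 1 bits
    return bin(xor).count("1")

def minimum_distance(code):
    """Smallest Hamming distance over all pairs of distinct code words."""
    return min(hamming_distance(a, b)
               for i, a in enumerate(code)
               for b in code[i + 1:])

print(hamming_distance("10101", "11110"))              # 3
print(minimum_distance(["000", "011", "101", "110"]))  # 2
```

In the second call, every pair of words in the even-weight code {000, 011, 101, 110} differs in exactly two positions, so the minimum distance is 2.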