Tags: todo, math, mth2002, lecture-notes

QUOTE

If you state some information, you must either rigorously prove the claim or reference where someone else does so. - Yuri

Coding Theory Practical 1

What is Coding Theory?

Coding theory is, broadly, the study of encoding and decoding information, where the step in between is a transformation — such as translating between languages, or encoding files (e.g. into the .mp3 format).

Mathematical Definition

Let A be a finite set with q elements, A = {a_1, ..., a_q}. Let us call the elements of A symbols, and A itself will be called an alphabet. A message (or data, or information) over the alphabet A is just a finite sequence of symbols out of A. Traditionally, a message consisting of n symbols is called a word (or vector or string) of length n. For example, the word 0110 over the alphabet {0, 1} has length 4.


Building Blocks of a Code

EXAMPLE

The modern Latin alphabet {A, ..., Z}, the Greek alphabet {α, ..., ω}, Egyptian hieroglyphs, the decimal digits {0, ..., 9}, and even the set of pheromones of ants are alphabets (as these are the chemical signals ants use for communication).

Essentially, any set of symbols used for communication can be labelled an alphabet A. Any finite sequence of these symbols is called a word, and a collection of one or more words is then a set of messages over A (the alphabet set).


Encoding Process

Let A and B be alphabets, and let M be some set of messages over A. An encoding-decoding rule (or process) for M with target B is just a transformation

c : M → C

that can be reversed (i.e. c is injective). Here, A is called the original alphabet, and C (the target set) has elements named codewords (or coded messages, or coded words). B is just the alphabet for C, as previously defined.

EXAMPLE

The original alphabet is thus A = {a, b, ..., z}, the set of messages is again just M = A, the alphabet of the code will be the binary set B = {0, 1}, the code is a subset C ⊆ B^5 (five binary digits suffice, since 2^5 = 32 ≥ 26), and the encoding-decoding function is the injective function c : M → C, where each element of A maps to a distinct element of B^5, e.g. a ↦ 00000.
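An encoding-decoding rule of this kind can be sketched in code. The 26-letter alphabet and the choice of 5-bit codewords below are illustrative assumptions, not fixed by the notes; the key property is that the mapping is injective, so it can be reversed.

```python
# Sketch of an encoding-decoding rule c : M -> C as a reversible mapping.
# Assumed alphabet: the lowercase Latin letters; codewords: 5-bit binary strings.
ENCODE = {ch: format(i, "05b") for i, ch in enumerate("abcdefghijklmnopqrstuvwxyz")}
DECODE = {code: ch for ch, code in ENCODE.items()}  # reversal relies on injectivity

def encode(message: str) -> str:
    """Concatenate the codeword of each symbol."""
    return "".join(ENCODE[ch] for ch in message)

def decode(coded: str) -> str:
    """Split into 5-bit blocks and invert the map."""
    return "".join(DECODE[coded[i:i + 5]] for i in range(0, len(coded), 5))
```

For instance, `encode("a")` gives `"00000"`, and `decode(encode(w))` recovers any word `w` because no two letters share a codeword.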


Assumptions during encoding
Assumption one

For practical purposes, it is common for data (words, messages, information) to be transmitted in blocks through a rigid transmission scheme, i.e. the transmission channel and means do not change during transmission. Essentially, we assume that once an alphabet is set, transmissions send messages of a fixed length n.


Assumption two

Another natural assumption is that as few errors as possible have occurred; otherwise the problem likely lies in the transmission channel, not with the code itself. In particular, if we receive a sequence of symbols that is not a valid codeword, we will look for a valid codeword that is as close as possible to the word received. A lot like choosing the multiple-choice answer closest to the result you got on a test.

EXAMPLE

Suppose our code is the set C = {0000, 1101, 1110} and that in two transmissions we receive the words 0001 and 1111. In the first transmission, it is most likely that the intended word was 0000, with an error in the last digit. In the second transmission, it could have been 1101 or 1110, as each differs from 1111 in exactly one position. However, extending the error pattern from the first word, we can assume the error is within the last digit again, so the second word would logically be assumed to be 1110.


Assumption three

We suppose that each symbol being transmitted has the same probability p of being received in error. We call the number p the symbol error probability of the channel. This is essentially an assumption that most symbols are transmitted successfully (p is small), as mentioned in assumption two.

If a symbol in a given position is received in error, each of the remaining q − 1 symbols (where q is the size of the alphabet A) is equally likely: p / (q − 1) is the probability that a particular wrong symbol appears in a given position instead of the correct one. This formula isn't often used, but is useful for general knowledge about coding theory.

EXAMPLE

Consider the code C = {00, 01, 10, 11}. Suppose the channel has symbol error probability p and that a word has been transmitted. The chance that the word is received with no errors is equal to the probability that no error occurs in the first digit and that no error occurs in the second digit. Thus this probability is (1 − p)² — e.g. 0.9801 for p = 0.01, a very high chance.
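The two probabilities from this assumption are simple to compute; a minimal sketch (the function names are my own):

```python
def prob_no_error(p: float, n: int) -> float:
    """Probability that an n-symbol word arrives with no errors,
    assuming each symbol is independently corrupted with probability p."""
    return (1 - p) ** n

def prob_specific_wrong_symbol(p: float, q: int) -> float:
    """Probability that a given position is received as one particular
    wrong symbol, each of the q - 1 wrong symbols being equally likely."""
    return p / (q - 1)
```

With p = 0.01 and a word of length 2, `prob_no_error` reproduces the (1 − p)² = 0.9801 figure from the example above.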


Why do we study Coding Theory?

  • Do we want a fast transmission channel? (studied within Data Compression, which tries to shrink data as much as possible, sometimes adjusting the transmission channel)
  • Do we want to exchange information secretly? (studied within Cryptography, which transforms data so that only the intended receiver understands it, and thus focuses on the encoding-decoding process)
  • Do we want to transmit data efficiently and accurately? (studied within Coding Theory)

Coding Theory is the subarea of information theory that studies codes that detect errors and/or correct errors. Thus, this discipline provides codes designed in such a way that if errors occur in transmission, then the receiver can detect and/or correct errors based on the remaining symbols received and so recover the intended codeword.

NOTE

I studied this when generating QR codes, when I implemented Reed-Solomon error correction, and when I studied error correction algorithms using parity.

How do we formalise Coding Theory?

Given an alphabet A with q symbols, a code C (over A) of length n is a subset C ⊆ A^n. That is, C is some set of words of length n taken from an alphabet A consisting of q symbols. A (valid) codeword is an element of C.

We say that C detects t errors if, whenever up to t errors occur in the transmission of any codeword, one can recognise this from the code C. Similarly, C corrects t errors if the intended codeword can be recovered. For example, a code containing every word of a given length (C = A^n) does not detect or correct any errors, since every received word is already a valid codeword. On the contrary, we can design a code to do just that:

The simplest error correction, parity checking

A useful and relevant video.

This is the code C = {000, 011, 101, 110}: two data digits, with an additional digit sent representing the parity of the other two (the sum of the first two digits under the additive rule for the binary set {0, 1} — modular arithmetic).

Hence, if the parity does not line up across the digits, then we have detected an error. This falls apart if any of our three assumptions made while explaining encoding are violated — for example, if two digits are transmitted incorrectly, the parity check still passes — but under our assumptions this error detection works perfectly, despite its simplicity.
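The parity scheme above fits in a few lines; a minimal sketch for binary words of any length (the two-digit code C is the length-2 case):

```python
def add_parity(bits: str) -> str:
    """Append a parity digit so the total number of 1s is even (sum mod 2)."""
    return bits + str(sum(map(int, bits)) % 2)

def parity_ok(word: str) -> bool:
    """An even number of 1s means no error was detected;
    any single flipped digit makes the count odd."""
    return sum(map(int, word)) % 2 == 0
```

For example, `add_parity("01")` gives `"011"`, a member of C; flipping any one digit of a valid word makes `parity_ok` return `False`, while two flips go undetected — exactly the limitation described above.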

This code also does not correct any errors, as we don't know where the error occurred — only that one did occur. This is fixed with Hamming codes, where parity digits cover overlapping groups of positions (plus, in the extended version, one digit tracking the parity of the entire block), so that the pattern of failing checks pinpoints the error. The video linked at the beginning of this section explains this very well: watch it! There are numerous other ways of fixing this flaw, both more and less efficient.

Many error correction algorithms simply extend this logic — Reed-Solomon, for instance, instead checks the behaviour of a polynomial whose coefficients are taken from the data.

ISBN Example
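A classic everyday check-digit code is ISBN-10: the digits d1, ..., d10 must satisfy 10·d1 + 9·d2 + ... + 1·d10 ≡ 0 (mod 11), with the symbol X standing for the check value 10. A small checker (the sample ISBN below is the standard textbook example):

```python
def isbn10_valid(isbn: str) -> bool:
    """Check an ISBN-10: the weighted sum 10*d1 + 9*d2 + ... + 1*d10
    must be divisible by 11 ('X' stands for the value 10)."""
    digits = [10 if c == "X" else int(c) for c in isbn.replace("-", "")]
    if len(digits) != 10:
        return False
    return sum(w * d for w, d in zip(range(10, 0, -1), digits)) % 11 == 0
```

Because 11 is prime, this scheme detects any single wrong digit and any transposition of two adjacent digits — it detects errors but, like the parity code, cannot correct them.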

Hamming Distance

Computes the number of positions across two words of the same length where their symbols differ, e.g. 10110 and 10011 have d(10110, 10011) = 2, as the two words differ in exactly two positions (the third and the fifth).

Thus, it’s a useful measure of how many errors might have occurred in transmission: decoding a received word to the codeword at minimal Hamming distance is known as the nearest neighbour decoding strategy.
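Both the distance and the decoding strategy are short to sketch (tie-breaking between equidistant codewords is left arbitrary here, as in the second-transmission example earlier):

```python
def hamming_distance(u: str, v: str) -> int:
    """Number of positions at which two equal-length words differ."""
    assert len(u) == len(v), "Hamming distance needs words of equal length"
    return sum(a != b for a, b in zip(u, v))

def nearest_codeword(received: str, code: list[str]) -> str:
    """Nearest neighbour decoding: return a codeword at minimum
    Hamming distance from the received word (ties broken arbitrarily)."""
    return min(code, key=lambda c: hamming_distance(received, c))
```

For example, `hamming_distance("10110", "10011")` is 2, and decoding `"001"` against the repetition code `["000", "111"]` returns `"000"`.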

Hamming’s Original code

is the subset of {0, 1}^7 consisting of all elements (x1, x2, ..., x7) satisfying x5 = x1 + x2 + x3, x6 = x1 + x2 + x4, and x7 = x1 + x3 + x4 (addition mod 2; the exact equations vary between texts). Thus the first four digits are known as the source word and the last three as parity bits.

Given a codeword x ∈ C, we could receive a word y containing some errors. The code can always correct a single error; if two or more errors occur, nearest neighbour decoding may return the wrong codeword.
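Single-error correction can be sketched via the syndrome: recompute the three parity checks, and the pattern of failures identifies the one bit position that could have caused them. This assumes one common choice of parity equations (x5 = x1+x2+x3, x6 = x1+x2+x4, x7 = x1+x3+x4; conventions differ between texts).

```python
def hamming74_encode(data: list[int]) -> list[int]:
    """Append 3 parity bits to 4 data bits (one common convention)."""
    x1, x2, x3, x4 = data
    return [x1, x2, x3, x4,
            (x1 + x2 + x3) % 2,
            (x1 + x2 + x4) % 2,
            (x1 + x3 + x4) % 2]

def hamming74_correct(word: list[int]) -> list[int]:
    """Recompute the three parity checks; the pattern of failures
    (the syndrome) implicates a unique single-error position."""
    x1, x2, x3, x4, x5, x6, x7 = word
    s1 = (x1 + x2 + x3 + x5) % 2
    s2 = (x1 + x2 + x4 + x6) % 2
    s3 = (x1 + x3 + x4 + x7) % 2
    # Each nonzero syndrome corresponds to exactly one bit position,
    # because every position participates in a distinct set of checks.
    syndrome_to_pos = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 2, (0, 1, 1): 3,
                       (1, 0, 0): 4, (0, 1, 0): 5, (0, 0, 1): 6}
    pos = syndrome_to_pos.get((s1, s2, s3))
    if pos is not None:
        word = word.copy()
        word[pos] ^= 1  # flip the implicated bit
    return word
```

Flipping any single bit of an encoded word and running `hamming74_correct` recovers the original codeword; with two flipped bits the syndrome points at the wrong position, matching the limitation noted above.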

NEXT TIME: Distance theorem, chances of detection/correction, parameters of codes, main problem.