
What is a code, and what could "error-correcting" mean?

Explain what Hamming codes are, how they try to correct errors, and why you would want error correction in the first place.

  • What is a code?

The most relevant definition of a code in this context, given by Merriam-Webster, is the following: "a system of signals or symbols for communication". Codes are very commonly obfuscated during communication so that only the sender and the receiver can understand their contents, but this is not a quality shared by all codes.

In essence, a code is just an agreed-upon language that two people can use to communicate. The English language itself could be thought of as a code, especially when viewed from the perspective of someone who doesn't speak it. Another example is Morse code, which is used to communicate over analog radio signals. Finally, a common code used by computers for information exchange is ASCII/Unicode, which maps integer numbers to symbols in many languages. For instance, the character "1" in ASCII is not stored as the number 1; it is stored as 49.
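
As a quick illustration of this mapping, here is a minimal Python sketch using the built-in ord and chr functions (the message chosen here is just for illustration):

    # The character "1" is stored as the integer 49 under ASCII/Unicode.
    print(ord("1"))   # 49
    print(chr(49))    # '1'

    # A short message is really exchanged as a sequence of these integers.
    message = "code"
    print([ord(ch) for ch in message])   # [99, 111, 100, 101]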

  • What could "error-correcting" mean?
  • What are Hamming Codes?

Hamming codes were invented by Richard Hamming in 1950. In general, a Hamming code is one of a family of error-correcting codes that can be used to detect and correct bit errors that occur when computer data is moved or stored.[1] Hamming codes were an important invention: simple parity codes cannot correct errors and can only detect an odd number of bit errors. Compared with these earlier parity codes, Hamming codes can detect up to two-bit errors or correct single-bit errors. Hence, Hamming codes are more effective.
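
To make the single-bit correction concrete, below is a minimal Python sketch of the classic Hamming(7,4) construction (the function names hamming74_encode and hamming74_decode are made up for this example): three parity bits each cover an overlapping subset of the four data bits, and re-checking those parities at the receiver produces a "syndrome" that points directly at the position of a single flipped bit.

    def hamming74_encode(d):
        """Encode 4 data bits (0/1 values) into a 7-bit Hamming(7,4) codeword."""
        d1, d2, d3, d4 = d
        p1 = d1 ^ d2 ^ d4      # checks positions 1, 3, 5, 7
        p2 = d1 ^ d3 ^ d4      # checks positions 2, 3, 6, 7
        p3 = d2 ^ d3 ^ d4      # checks positions 4, 5, 6, 7
        # Codeword layout (1-indexed positions): p1 p2 d1 p3 d2 d3 d4
        return [p1, p2, d1, p3, d2, d3, d4]

    def hamming74_decode(c):
        """Decode a 7-bit codeword, correcting one flipped bit if present."""
        c = list(c)
        # Each syndrome bit re-checks one parity group; together they spell
        # out the 1-indexed position of a single-bit error (0 = no error).
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # positions 1, 3, 5, 7
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # positions 2, 3, 6, 7
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # positions 4, 5, 6, 7
        syndrome = s1 + 2 * s2 + 4 * s3
        if syndrome != 0:
            c[syndrome - 1] ^= 1         # flip the bit the syndrome points at
        return [c[2], c[4], c[5], c[6]]  # recovered data bits d1 d2 d3 d4

    data = [1, 0, 1, 1]
    codeword = hamming74_encode(data)
    codeword[5] ^= 1                     # simulate a one-bit error in transit
    assert hamming74_decode(codeword) == data

In this sketch, the received word with a flipped sixth bit produces syndrome 6, so the decoder flips that bit back and recovers the original four data bits.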

  • How do Hamming Codes attempt to correct errors?
  • Why would we want error correction in the first place?



References: [1]


Back to MA375 Spring 2014
