Normal | If the bit with value 2^n is set, the polynomial contains x^n. x^N is implicit. | |
Reverse | If the bit with value 2^n is set, the polynomial contains x^(N - n - 1). x^N is implicit. | |
Koopman | If the bit with value 2^n is set, the polynomial contains x^(n + 1). x^0 is implicit. | |
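As an illustration, the three notations can be converted into one another with a few bit operations. The following sketch (function names are mine, not from this page) converts CRC-32's normal form 0x04c11db7 into the other two forms:

```python
def normal_to_reverse(poly, n):
    """Bit-reverse an n-bit polynomial value (normal -> reverse form)."""
    r = 0
    for i in range(n):
        if (poly >> i) & 1:
            r |= 1 << (n - 1 - i)
    return r

def normal_to_koopman(poly, n):
    """Make the implicit x^n term explicit, drop the x^0 term instead."""
    assert poly & 1, "normal form must contain x^0 for Koopman notation"
    return ((1 << n) | poly) >> 1

print(hex(normal_to_reverse(0x04C11DB7, 32)))  # 0xedb88320
print(hex(normal_to_koopman(0x04C11DB7, 32)))  # 0x82608edb
```

The two printed values are the familiar reversed and Koopman representations of CRC-32.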
Width (N) | ||
Polynomial |
This degree-N polynomial will generate an N-bit checksum.
The MSB of the polynomial must be 1; otherwise the computation doesn't work.
Trailing (LSB) zeros in the polynomial lead to an equal number of trailing unused bits in the CRC.
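To illustrate the trailing-zeros point, here is a minimal bitwise CRC sketch (the function name and the example polynomial 0x8004, which has two trailing zero bits, are my own choices): the two least significant checksum bits always come out as zero.

```python
def crc_bitwise(data, poly, n):
    """Plain MSB-first bitwise CRC, register initialized to zero."""
    mask = (1 << n) - 1
    reg = 0
    for byte in data:
        reg ^= byte << (n - 8)
        for _ in range(8):
            reg = ((reg << 1) ^ poly if reg >> (n - 1) & 1 else reg << 1) & mask
    return reg

# 0x8004 ends in two zero bits, so the two low CRC bits are always 0:
for msg in (b"hello", b"\x00", b"\xff" * 8):
    assert crc_bitwise(msg, 0x8004, 16) & 0b11 == 0
```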
Click an entry to show it in the view above!
Parity | 1 bit | 0x1 | |
CAN-Bus | 15 bits | 0x4599 | |
Modbus | 16 bits | 0x8005 | |
CAN FD (messages ≤ 16 bytes) | 17 bits | 0x1685b | |
CAN FD (messages > 16 bytes) | 21 bits | 0x102899 | |
CRC-32 | 32 bits | 0x04c11db7 | Ethernet, etc. |
CRC-32C | 32 bits | 0x1edc6f41 | Implemented in hardware on x64 processors. |
I hope this will help me remember how CRC computations work in the future. The diagram shows how one would implement a CRC with a shift register.
Keep in mind: CRCs operate within an arithmetic where both plus and minus are replaced by xor. The system shown in the diagram computes the remainder of an input bitstream divided, using this xor-arithmetic, by the polynomial.
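As a sketch (my own function name, not code from this page), the shift-register construction can be written out bit by bit; appending N zero bits to the input makes the register end up holding exactly the polynomial-division remainder:

```python
def crc_shift_register(bits, poly, n):
    """Feed bits MSB-first through an n-bit shift register.
    'poly' is in normal notation (the x^n term is implicit)."""
    mask = (1 << n) - 1
    reg = 0
    for bit in bits:
        out = reg >> (n - 1) & 1        # bit about to be shifted out
        reg = (reg << 1 | bit) & mask   # shift the input bit in
        if out:                         # subtract (= xor) the polynomial
            reg ^= poly
    return reg

# Remainder of x^6 + x^5 + x^3 (bits 1101, augmented with three zeros)
# divided by x^3 + x + 1 (normal notation 0b011, width 3):
print(crc_shift_register([1, 1, 0, 1, 0, 0, 0], 0b011, 3))  # 1
```

With width 1 and polynomial 0x1 (i.e. x + 1), this degenerates into a plain parity bit.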
The final contents of the shift register are used as the checksum. In all implementations I have seen, the leftmost bit in the shift register is the most significant bit of the checksum.
If the shift-register definition given here is to match the division definition of a CRC, the bitstream should start with the most significant bit. The concept of a most significant bit doesn't really make sense when considering large chunks of data. Therefore, this property is probably more useful in the other direction: if you want to analyze a given CRC computation mathematically, you must define the first bit of the bitstream as the most significant bit.
Given a message including a valid CRC checksum, the Hamming distance describes the number of bits that must be flipped to obtain another message with a valid checksum. A Hamming distance of n means that all (n - 1)-bit errors lead to invalid messages (i.e. such errors can be detected). All messages with a CRC checksum have a Hamming distance of at least 2, meaning that a single bit flip will always be detected (as long as you don't use a degenerate polynomial, e.g. one with all bits set to 0).
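Because CRCs are linear in the xor-arithmetic, an error pattern goes undetected exactly when the error polynomial itself is divisible by the CRC polynomial. That makes it possible to brute-force the Hamming distance for small cases. A sketch with my own names; the example uses x^3 + x + 1 (full polynomial 0b1011) over 7-bit codewords, which yields the classic (7,4) Hamming code:

```python
def poly_mod(value, divisor):
    """Remainder of value divided by divisor in xor-arithmetic."""
    dlen = divisor.bit_length()
    while value.bit_length() >= dlen:
        value ^= divisor << (value.bit_length() - dlen)
    return value

def hamming_distance(full_poly, codeword_bits):
    """Smallest weight of a nonzero undetectable error pattern."""
    return min(
        bin(e).count("1")
        for e in range(1, 1 << codeword_bits)
        if poly_mod(e, full_poly) == 0
    )

print(hamming_distance(0b1011, 7))  # 3
```

For the parity polynomial x + 1 (full polynomial 0b11), the same brute force yields 2, matching the claim above that parity catches every single-bit error.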
Philip Koopman hosts a list of best polynomials, which includes information about the maximum message lengths of these polynomials at various Hamming distances. The notation used for polynomials on Koopman's page matches the "Koopman" field further up on this page. Koopman also provides a tool to compute this data.
On SSE4.2 processors (98.92% in the Steam hardware survey, July 2022), there is a crc32 instruction which implements the CRC-32C polynomial. Variants of the instruction exist that can process 1, 2, 4 or 8 bytes in a single step. With regard to the table generator, this instruction matches having the Least-significant bit first checkbox checked. To match "normal" CRC-32C implementations, the CRC register must be set to 0xffffffff initially, and bit-flipped after processing all data.
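For reference, here is a bitwise model of what such a "normal" CRC-32C computation produces (the function name is mine; 0x82f63b78 is the bit-reversed form of the CRC-32C polynomial 0x1edc6f41): bits are taken least significant first, the register starts as all ones, and the result is flipped at the end.

```python
def crc32c(data, crc=0):
    """Reflected (least-significant-bit-first) bitwise CRC-32C."""
    reg = crc ^ 0xFFFFFFFF              # start with all ones
    for byte in data:
        reg ^= byte
        for _ in range(8):
            reg = (reg >> 1) ^ (0x82F63B78 if reg & 1 else 0)
    return reg ^ 0xFFFFFFFF             # flip at the end

print(hex(crc32c(b"123456789")))  # 0xe3069283
```

The printed value is the standard CRC-32C check value for the string "123456789".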
On x64, there are also SIMD algorithms that are faster than table lookups. For example, see "Fast CRC Computation for Generic Polynomials Using PCLMULQDQ Instruction" by V. Gopal, E. Ozturk, et al. (Intel).
On some STM32 Cortex-M processors, there is a CRC peripheral which allows for fast CRC computation without a lookup table.
This code implements an optimized variant of what was described in the section on shift registers.
Ross N. Williams' guide provides a solid introduction to this idea.
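The idea, in sketch form (my own names; the example uses the CRC-16 polynomial 0x8005, most-significant bit first, with a zero initial value): precompute what eight shift-register steps do to every possible top byte, then process input a byte at a time.

```python
def make_table(poly, n=16):
    """table[i] = effect of 8 shift-register steps on top byte i."""
    mask = (1 << n) - 1
    table = []
    for i in range(256):
        reg = i << (n - 8)
        for _ in range(8):
            reg = ((reg << 1) ^ poly if reg >> (n - 1) & 1 else reg << 1) & mask
        table.append(reg)
    return table

def crc16(data, table, n=16):
    """Byte-at-a-time CRC using the precomputed table."""
    mask = (1 << n) - 1
    reg = 0
    for byte in data:
        reg = ((reg << 8) & mask) ^ table[(reg >> (n - 8)) ^ byte]
    return reg
```

For example, `crc16(b"123456789", make_table(0x8005))` produces the same result as feeding the message through the bitwise shift register, only eight bits per table lookup instead of one per step.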
In what order are bits put into the shift register?
My understanding is that, since the properties of CRCs are defined in terms of errors on sequentially transmitted bit streams, it is also important that bits are put into the shift register in the order they are transmitted.
In all implementations I've seen, data is put into the registers starting with the byte at the lowest address, which matches how most data-transmission functions in OSes work.
However, some implementations will, within a given byte, start by feeding the least-significant bit into the shift register.
Other implementations start by feeding the most-significant bit into the shift register.
There is a checkbox to choose which variant you want to generate code for.
In all implementations I've encountered, the following is true, and the generated code matches this: when, within a given byte, the bits are fed into the shift register most-significant first, the shift register can be drawn as in the section on shift registers, with the bit that is about to be shifted out as the most significant bit of the output. When, on the other hand, the bits are fed in least-significant first, the bit that is about to be shifted out is the least significant bit of the output. This essentially means that, when the Least-significant bit first checkbox is checked, the values in the table may look bitwise reversed compared to what you would expect.
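This reversal can be checked directly. In the sketch below (my own names, using CRC-16 0x8005 and its bit-reversed form 0xa001 as the example), each entry of the least-significant-bit-first table is the bit-reversal of the normal table entry at the bit-reversed index:

```python
def rev(x, n):
    """Reverse the n low bits of x."""
    r = 0
    for _ in range(n):
        r = (r << 1) | (x & 1)
        x >>= 1
    return r

def table_msb_first(poly, n=16):
    mask = (1 << n) - 1
    out = []
    for i in range(256):
        reg = i << (n - 8)
        for _ in range(8):
            reg = ((reg << 1) ^ poly if reg >> (n - 1) & 1 else reg << 1) & mask
        out.append(reg)
    return out

def table_lsb_first(poly_reflected):
    out = []
    for i in range(256):
        reg = i
        for _ in range(8):
            reg = (reg >> 1) ^ (poly_reflected if reg & 1 else 0)
        out.append(reg)
    return out

msb = table_msb_first(0x8005)           # normal polynomial
lsb = table_lsb_first(rev(0x8005, 16))  # 0xa001, its bitwise reversal
assert all(lsb[i] == rev(msb[rev(i, 8)], 16) for i in range(256))
```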
Some implementations start out with the shift register set to all ones. This is provided for by the Start with all one?-checkbox. Some implementations flip all the bits of the shift register at the end. This is provided for by the Flip at end?-checkbox. In his guide, Williams has a section titled Initial and Final Values which gives some justification for this practice.
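One concrete justification: with an all-zero initial value, the shift register cannot see leading zero bits at all, so prepending zeros to a message does not change its checksum. A quick sketch (my own names; CRC-16 with polynomial 0x8005, most-significant bit first):

```python
def crc16_bitwise(data, init=0):
    """Bitwise CRC-16 (polynomial 0x8005) with a configurable start value."""
    reg = init
    for byte in data:
        reg ^= byte << 8
        for _ in range(8):
            reg = ((reg << 1) ^ 0x8005 if reg & 0x8000 else reg << 1) & 0xFFFF
    return reg

# With a zero start value, leading zero bytes are invisible...
assert crc16_bitwise(b"\x00\x00" + b"abc") == crc16_bitwise(b"abc")
# ...but starting with all ones makes them change the checksum.
assert crc16_bitwise(b"\x00\x00" + b"abc", 0xFFFF) != crc16_bitwise(b"abc", 0xFFFF)
```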