BigNum

BigNum, also commonly referred to as BigInt or BigInteger, allows the use of very large numbers, greater than what the primitive types of a programming language allow.

Implementations of BigNum are widely available, such as GMP for C, and Java ships with its own BigInteger class.

Implementation
There are many similarities between working with BigNums and polynomials. This is because any integer $$a_na_{n-1}\ldots a_1a_0$$ in base $$B$$ can be viewed as the polynomial $$a_nx^n+a_{n-1}x^{n-1}+\ldots+a_1x+a_0$$, evaluated at $$B$$. The following will use base 10 for simplicity.

Addition
A "grade school" algorithm, addition is an operation that can be broken up into two cases:

Addition of like signs
Use the "grade school" algorithm of lining the digits up and adding them one by one, with a carry. For example, 1234 + 4568:

      1 1
    1 2 3 4
  + 4 5 6 8
  ---------
    5 8 0 2

The final sign will simply be what the like sign was to begin with.
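The carry loop above can be sketched as follows. This is a minimal illustration rather than a full BigNum type; it assumes non-negative numbers stored as lists of base-10 digits, least-significant digit first (a common but here assumed representation):

```python
def add_digits(a, b):
    """Add two non-negative BigNums stored as lists of base-10 digits,
    least-significant digit first (e.g. 1234 -> [4, 3, 2, 1])."""
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        da = a[i] if i < len(a) else 0
        db = b[i] if i < len(b) else 0
        carry, digit = divmod(da + db + carry, 10)
        result.append(digit)
    if carry:
        result.append(carry)  # final carry becomes a new leading digit
    return result

# 1234 + 4568 = 5802
print(add_digits([4, 3, 2, 1], [8, 6, 5, 4]))  # [2, 0, 8, 5]
```

Storing the least-significant digit first keeps index $$i$$ aligned with the coefficient of $$10^i$$, which simplifies all of the algorithms below.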

Addition of opposite signs
Use the subtraction algorithm for like signs, since $$a + (-b) = a - b$$.

Subtraction
Another straightforward implementation, with two cases:

Subtraction of equal signs
Another "grade school" algorithm: line the digits up and subtract them one by one, borrowing when needed. For example, 6375 - 1726 (after borrowing, the top digits become 5, 13, 6, 15):

    5 13  6 15
  - 1  7  2  6
  ------------
    4  6  4  9

The final sign will depend on which number is bigger.
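The borrow loop can be sketched in the same assumed digit-list representation (least-significant digit first). This minimal version assumes the first argument is at least as large as the second; the sign handling described above is done separately:

```python
def sub_digits(a, b):
    """Subtract b from a (both non-negative, a >= b), with digits stored
    least-significant first."""
    result, borrow = [], 0
    for i in range(len(a)):
        db = b[i] if i < len(b) else 0
        d = a[i] - db - borrow
        if d < 0:
            d += 10      # borrow from the next digit
            borrow = 1
        else:
            borrow = 0
        result.append(d)
    while len(result) > 1 and result[-1] == 0:
        result.pop()     # strip leading zeros
    return result

# 6375 - 1726 = 4649
print(sub_digits([5, 7, 3, 6], [6, 2, 7, 1]))  # [9, 4, 6, 4]
```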

Subtraction of opposite signs
Can be converted into addition of like signs, since $$a - (-b) = a + b$$.

Multiplication
The simple algorithm for multiplication is to shift, multiply, and add. This takes $$O(n^2)$$ time. For example, 194 x 526:

        1 9 4
      x 5 2 6
      -------
      1 1 6 4        (194 x 6)
      3 8 8          (194 x 2, shifted one place)
    9 7 0            (194 x 5, shifted two places)
  -----------
  1 0 2 0 4 4
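A sketch of the shift-multiply-add loop, again assuming digit lists stored least-significant first. Rather than building each shifted partial product separately, this version accumulates directly into the result array, which is equivalent:

```python
def mul_digits(a, b):
    """Grade-school O(n^2) multiplication of digit lists (LSD first)."""
    result = [0] * (len(a) + len(b))
    for i, da in enumerate(a):
        carry = 0
        for j, db in enumerate(b):
            # position i+j corresponds to the 10^(i+j) coefficient,
            # which is where the "shift" happens
            total = result[i + j] + da * db + carry
            carry, result[i + j] = divmod(total, 10)
        result[i + len(b)] += carry
    while len(result) > 1 and result[-1] == 0:
        result.pop()     # strip leading zeros
    return result

# 194 * 526 = 102044
print(mul_digits([4, 9, 1], [6, 2, 5]))  # [4, 4, 0, 2, 0, 1]
```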

Karatsuba Multiplication
A faster algorithm that is still fairly simple to implement is Karatsuba multiplication. Its runtime complexity is $$O(n^{\lg 3}) \approx O(n^{1.58})$$, although it involves more additions and subtractions than the naive algorithm. It is based on the clever observation that to find $$(ax+b)(cx+d)$$ (which is $$acx^2+(ad+bc)x+bd$$), we need only three multiplications, while the naive method of multiplying them out would take four. The three multiplications are:
 * 1) $$ac$$
 * 2) $$bd$$
 * 3) $$(a+b)(c+d) = ac+bd + ad+bc$$

The coefficient of $$x$$, namely $$ad+bc$$, can therefore be found by subtracting the results of the first two multiplications from that of the third, thereby saving us one multiplication.

What this means for BigNums is that given two integers $$m$$ and $$n$$ of about $$2k$$ digits each, we can write them as $$m = a10^k+b$$ and $$n = c10^k+d$$ (by taking $$a$$ to be the integer formed by the first half of the digits of $$m$$, etc). Now, we can use the above observation to compute $$mn = (ac)10^{2k} + ((a+b)(c+d)-ac-bd)10^k + bd$$. (Note that multiplying by a power of 10 (or in general, the base we are working with) simply consists of shifting the digits a certain number of places, as we are doing in the naive multiplication algorithm, too.) Thus, the time $$T(k)$$ taken by this algorithm to multiply two integers of length $$k$$ is $$T(k) = 3T(k/2) + O(k)$$, which gives a runtime of $$O(k^{\lg 3})$$.
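The recursion above can be sketched directly. For readability this version operates on Python integers and splits them with division by $$10^k$$ rather than slicing digit lists; a real BigNum library would split the digit arrays in place instead:

```python
def karatsuba(m, n):
    """Karatsuba multiplication of non-negative integers, splitting each
    number around a power of 10.  Small inputs fall back to a direct
    single-digit product (the base case of the recursion)."""
    if m < 10 or n < 10:
        return m * n
    k = max(len(str(m)), len(str(n))) // 2
    shift = 10 ** k
    a, b = divmod(m, shift)   # m = a*10^k + b
    c, d = divmod(n, shift)   # n = c*10^k + d
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    cross = karatsuba(a + b, c + d) - ac - bd   # = ad + bc
    # multiplying by shift is just a digit shift in a digit-array version
    return ac * shift * shift + cross * shift + bd

print(karatsuba(1234, 5678))  # 7006652
```

Only three recursive calls are made per level, which is exactly where the $$T(k) = 3T(k/2) + O(k)$$ recurrence comes from.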

Extensions of Karatsuba
In Karatsuba multiplication, we break the multiplicands into two parts. What happens if, say, we break them into three parts? This leads to the Toom-Cook 3-way multiplication algorithm, which has an asymptotic runtime of $$O(n^{\log_3 5}) \approx O(n^{1.465})$$. This is asymptotically faster than Karatsuba, but the algorithm requires more work to be done in post-processing after the recursive multiplications are done.

Obviously the idea can be extended to break the multiplicands into even more pieces, but at some point the amount of work needed to recover the actual digits outweighs the work saved in the clever formulation of the problem.

Multiplication of BigNums can also be done in $$O(n (\log n)^{2+\epsilon})$$ time using the Fast Fourier Transform.
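As a sketch of the FFT approach: treat the digit lists as polynomial coefficients, evaluate both polynomials with an FFT, multiply pointwise, interpolate back with an inverse FFT, and then propagate carries. The version below uses a minimal recursive radix-2 FFT over complex floats; this is an illustration only, since serious implementations use a number-theoretic transform or careful error analysis to control floating-point rounding:

```python
import cmath

def fft(a, invert=False):
    """Recursive radix-2 FFT; len(a) must be a power of two."""
    n = len(a)
    if n == 1:
        return list(a)
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    sign = 1 if invert else -1
    result = [0] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n)
        result[k] = even[k] + w * odd[k]
        result[k + n // 2] = even[k] - w * odd[k]
    return result

def fft_multiply(a, b):
    """Multiply two digit lists (LSD first) by convolving via FFT."""
    n = 1
    while n < len(a) + len(b):
        n *= 2                      # pad to a power of two
    fa = fft(list(a) + [0] * (n - len(a)))
    fb = fft(list(b) + [0] * (n - len(b)))
    prod = [x * y for x, y in zip(fa, fb)]
    # inverse transform recovers the convolution; round off float error
    conv = [round((x / n).real) for x in fft(prod, invert=True)]
    result, carry = [], 0
    for c in conv:
        carry, digit = divmod(c + carry, 10)
        result.append(digit)
    while len(result) > 1 and result[-1] == 0:
        result.pop()
    return result

# 194 * 526 = 102044
print(fft_multiply([4, 9, 1], [6, 2, 5]))  # [4, 4, 0, 2, 0, 1]
```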