The symbol used by mathematicians to represent the ratio of a circle's circumference to its diameter is the lowercase Greek letter π, sometimes spelled out as pi, and derived from the first letter of the Greek word perimetros, meaning circumference. In English, π is pronounced as "pie" (/paɪ/). In mathematical use, the lowercase letter π is distinguished from its capitalized and enlarged counterpart Π, which denotes a product of a sequence, analogous to how Σ denotes summation.
The choice of the symbol π is discussed in the section Adoption of the symbol π.

Any complex number, say z, can be expressed using a pair of real numbers. In the polar coordinate system, one number (radius or r) is used to represent z's distance from the origin of the complex plane and the other (angle or φ) to represent a counter-clockwise rotation from the positive real line as follows:
{\displaystyle z=r\cdot (\cos \varphi +i\sin \varphi ),}
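The polar decomposition above can be checked directly with Python's standard `cmath` module, which provides `polar` for extracting r and φ; the sample value z = 3 + 4i is an illustrative choice, not taken from the text:

```python
import cmath
import math

z = 3 + 4j
r, phi = cmath.polar(z)  # r = |z|, phi = counter-clockwise angle from the positive real line

# Reconstruct z as r*(cos phi + i*sin phi), matching the formula above
z_back = r * (math.cos(phi) + 1j * math.sin(phi))
assert abs(z - z_back) < 1e-12
```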
where i is the imaginary unit satisfying i² = −1. The frequent appearance of π in complex analysis can be related to the behavior of the exponential function of a complex variable, described by Euler's formula:
{\displaystyle e^{i\varphi }=\cos \varphi +i\sin \varphi ,}
where the constant e is the base of the natural logarithm. This formula establishes a correspondence between imaginary powers of e and points on the unit circle centered at the origin of the complex plane. Setting φ = π in Euler's formula results in Euler's identity, celebrated by mathematicians because it contains the five most important mathematical constants:
{\displaystyle e^{i\pi }+1=0.}
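Both Euler's formula and Euler's identity can be verified numerically in a few lines; the angle 0.75 is an arbitrary test value, and the comparisons hold only up to floating-point rounding:

```python
import cmath
import math

# Euler's formula: e^{i*phi} = cos(phi) + i*sin(phi), for an arbitrary angle
phi = 0.75
lhs = cmath.exp(1j * phi)
rhs = math.cos(phi) + 1j * math.sin(phi)
assert abs(lhs - rhs) < 1e-15

# Euler's identity: e^{i*pi} + 1 = 0 (up to rounding error)
assert abs(cmath.exp(1j * math.pi) + 1) < 1e-15
```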
There are n different complex numbers z satisfying zn = 1, and these are called the "n-th roots of unity". They are given by this formula:
{\displaystyle e^{2\pi ik/n}\qquad (k=0,1,2,\dots ,n-1).}
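The formula for the n-th roots of unity translates directly into code; a quick sketch generating them and confirming each one satisfies zⁿ = 1 (numerically, to within rounding error):

```python
import cmath
import math

def roots_of_unity(n):
    """Return the n distinct complex solutions of z**n == 1,
    as e^{2*pi*i*k/n} for k = 0, 1, ..., n-1."""
    return [cmath.exp(2j * math.pi * k / n) for k in range(n)]

roots = roots_of_unity(4)
# Raising each root to the n-th power returns (numerically) to 1
assert all(abs(z**4 - 1) < 1e-12 for z in roots)
```

For n = 4 the roots are 1, i, −1, and −i, the four points where the unit circle meets the axes.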

The development of computers in the mid-20th century again revolutionized the hunt for digits of π. American mathematicians John Wrench and Levi Smith reached 1,120 digits in 1949 using a desk calculator. Using an inverse tangent (arctan) infinite series, a team led by George Reitwiesner and John von Neumann that same year achieved 2,037 digits with a calculation that took 70 hours of computer time on the ENIAC computer. The record, always relying on an arctan series, was broken repeatedly (7,480 digits in 1957; 10,000 digits in 1958; 100,000 digits in 1961) until 1 million digits were reached in 1973.
Two additional developments around 1980 once again accelerated the ability to compute π. First, the discovery of new iterative algorithms for computing π, which were much faster than the infinite series; and second, the invention of fast multiplication algorithms that could multiply large numbers very rapidly. Such algorithms are particularly important in modern computations, because most of the computer's time is devoted to multiplication. They include the Karatsuba algorithm, Toom–Cook multiplication, and Fourier transform-based methods.
The iterative algorithms were independently published in 1975–1976 by American physicist Eugene Salamin and Australian scientist Richard Brent. These avoid reliance on infinite series. An iterative algorithm repeats a specific calculation, each iteration using the outputs from prior steps as its inputs, and produces a result in each step that converges to the desired value. The approach was actually invented over 160 years earlier by Carl Friedrich Gauss, in what is now termed the arithmetic–geometric mean method (AGM method) or Gauss–Legendre algorithm. As modified by Salamin and Brent, it is also referred to as the Brent–Salamin algorithm.
The iterative algorithms were widely used after 1980 because they are faster than infinite series algorithms: whereas infinite series typically increase the number of correct digits additively in successive terms, iterative algorithms generally multiply the number of correct digits at each step. For example, the Brent–Salamin algorithm doubles the number of correct digits in each iteration. In 1984, the Canadian brothers John and Peter Borwein produced an iterative algorithm that quadruples the number of digits in each step; and in 1987, one that increases the number of digits five times in each step. Iterative methods were used by Japanese mathematician Yasumasa Kanada to set several records for computing π between 1995 and 2002. This rapid convergence comes at a price: the iterative algorithms require significantly more memory than infinite series.
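The quadratic convergence described above can be sketched with the standard Gauss–Legendre (Brent–Salamin) iteration, here implemented with Python's `decimal` module for arbitrary precision; the iteration count and precision are illustrative choices:

```python
from decimal import Decimal, getcontext

def gauss_legendre_pi(iterations=5, digits=50):
    """Approximate pi with the Gauss-Legendre (Brent-Salamin) iteration.

    Each step updates the arithmetic mean a, the geometric mean b,
    and correction terms t and p; the number of correct digits
    roughly doubles with every iteration.
    """
    getcontext().prec = digits + 10  # extra guard digits against rounding
    a = Decimal(1)
    b = Decimal(1) / Decimal(2).sqrt()
    t = Decimal(1) / 4
    p = Decimal(1)
    for _ in range(iterations):
        a_next = (a + b) / 2          # arithmetic mean
        b = (a * b).sqrt()            # geometric mean
        t -= p * (a - a_next) ** 2
        a = a_next
        p *= 2
    return (a + b) ** 2 / (4 * t)

print(gauss_legendre_pi())  # agrees with pi to dozens of digits after 5 steps
```

Only five iterations suffice for tens of correct digits, which illustrates why this family of algorithms displaced arctan series after 1980.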