Introduction
Sine, one of the fundamental trigonometric functions, plays a crucial role in various fields, including mathematics, physics, engineering, and computer science. Its calculation is not trivial, especially when it comes to implementing it in electronic calculators, where efficiency and accuracy are paramount.
In previous entries of the series, we looked into how calculators solve equations and how they calculate square roots. In this blog post, we’ll delve into the intricate process of calculating the sine function, starting from simple approximations to more sophisticated methods.
How sine is calculated
To begin, let’s inspect the plot of the sine function:
It is immediately obvious that the function is periodic, and that it is highly symmetric: its values on the interval between 0 and $\pi/2$ determine it everywhere:
In other words, it is enough to calculate the function on the interval $[0;\pi/2]$. Then, we can just use flipping and negation to get the final value. To calculate $\sin(x)$ within the reduced interval, one option is to use the well-known Taylor series approximation:
$$\sin(x) = x-\frac{x^3}{3!}+\frac{x^5}{5!}-\ldots$$
Visualized:
While this method is simple, it requires computing fairly high powers of $x$, and the approximation error grows quite large near $\pi/2$. For example, with a degree-nine approximation, the result at $\pi/2$ would be 1.00000354258428. That is an error of about $3.5\cdot10^{-6}$, which is quite bad, since most calculations are carried out to 15 digits of precision. In other words, that's a loss of around 10 digits of precision!
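To make this concrete, here is a short Python sketch (my own illustration, not code from the original post or any real calculator) that reduces the argument to $[0;\pi/2]$ using the symmetries described above and then applies the degree-nine Taylor polynomial; the function names are mine:

```python
import math

def reduce_to_first_quadrant(x):
    """Return (t, sign) with t in [0, pi/2] such that sin(x) == sign * sin(t)."""
    t = math.fmod(x, 2 * math.pi)        # periodicity: sin(x) = sin(x mod 2*pi)
    if t < 0:
        t += 2 * math.pi
    sign = 1.0
    if t > math.pi:                      # sin(t) = -sin(t - pi)
        t -= math.pi
        sign = -1.0
    if t > math.pi / 2:                  # sin(t) = sin(pi - t)
        t = math.pi - t
    return t, sign

def sin_taylor9(t):
    """Degree-nine Taylor polynomial of sin around 0."""
    return (t - t**3 / math.factorial(3) + t**5 / math.factorial(5)
              - t**7 / math.factorial(7) + t**9 / math.factorial(9))

def sin_naive(x):
    t, sign = reduce_to_first_quadrant(x)
    return sign * sin_taylor9(t)

print(sin_naive(math.pi / 2))   # roughly 1.0000035 -- an error of about 3.5e-6
```

The worst case sits right at the edge of the reduced interval, $\pi/2$, which is where the printed error comes from.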
How sine is really calculated
While the method presented in the previous paragraph is quite bad, it does serve as a blueprint for better methods. Essentially, every implementation of sine uses the following three steps:
- Reduction: Using some algebraic tricks, reduce $x$ to a small number $r$.
- Approximation: Calculate the value of $\sin(r)$ using an approximation method, such as the Taylor series.
- Reconstruction: Calculate the final value of $\sin(x)$ based on $\sin(r)$.
There are many ways to approach this problem. In the following, I present the approach Intel uses in their processors, based on their paper. They start with the formula
$$r = x-N\frac{\pi}{16}.$$ Here, $N$ is the integer chosen such that $|r|$ is minimized. In other words, we approximate $x$ by $N\cdot\frac{\pi}{16}$, and $r$ is the approximation error. How can we use this? By applying the angle-addition identities:
$$\displaylines{\sin(x)=\sin\left(r+N\frac{\pi}{16}\right)=\sin\left(N\frac{\pi}{16}\right)\cos(r)+\cos\left(N\frac{\pi}{16}\right)\sin(r)=\\=\sin\left(\frac{N}{32}2\pi\right)\cos(r)+\cos\left(\frac{N}{32}2\pi\right)\sin(r)}$$
We have to calculate $\sin(r)$ and $\cos(r)$ – this is the approximation step; more on that later. For now, assume we know both $\sin(r)$ and $\cos(r)$. We still have to find the sine and cosine of $\frac{N}{32}2\pi$. Notice that both $\sin$ and $\cos$ have period $2\pi$, so we only need these values for $N=0,1,2,\ldots,31$. That is just 32 values in total, which we can easily precompute. During the calculation of the final value, we just look them up from a table, which is quite efficient.
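Here is a minimal Python sketch of the reduction and reconstruction steps (my own illustration, not Intel's actual code), with `math.sin` and `math.cos` temporarily standing in for the not-yet-discussed approximation of $\sin(r)$ and $\cos(r)$:

```python
import math

# Precomputed table of (sin, cos) at N*pi/16 for N = 0, 1, ..., 31.
TABLE = [(math.sin(n * math.pi / 16), math.cos(n * math.pi / 16)) for n in range(32)]

def sin_reduced(x):
    # Reduction: pick the integer N that minimizes |r|.
    N = round(x * 16 / math.pi)
    r = x - N * math.pi / 16                 # now |r| <= pi/32

    # Approximation: placeholder -- a real implementation uses a small polynomial here.
    sin_r, cos_r = math.sin(r), math.cos(r)

    # Reconstruction: sin(x) = sin(N*pi/16)*cos(r) + cos(N*pi/16)*sin(r);
    # N can be taken modulo 32 because sin and cos have period 2*pi.
    sin_n, cos_n = TABLE[N % 32]
    return sin_n * cos_r + cos_n * sin_r

print(sin_reduced(1.2345), math.sin(1.2345))   # the two values should agree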
Only one piece of the puzzle remains: how to calculate $\sin(r)$? In the paper, Intel does not publish the exact polynomial, but they do mention that they use a minimax approximation. This approximation finds a polynomial that minimizes the maximum error over an interval:
$$\max_{0\le r<\frac{\pi}{16}}|p(r)-\sin(r)|,$$ where $p$ is the approximating polynomial. One way to calculate $p$ is Remez's algorithm. The result might look like this: $$x -0.166667x^3 + 0.00833x^5 -0.00019x^7 + 2.6019\cdot10^{-6}x^9.$$ The maximum error of this polynomial is $4.1\cdot10^{-9}$, which is a thousand times better than the original Taylor approximation!
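As a rough check (again my own sketch; the coefficients are the rounded ones quoted above, not Intel's actual values), we can evaluate this polynomial with Horner's scheme and scan its error over the interval:

```python
import math

# Rounded minimax-style coefficients quoted above (illustrative only).
C3, C5, C7, C9 = -0.166667, 0.00833, -0.00019, 2.6019e-6

def sin_poly(r):
    """Evaluate r + C3*r^3 + C5*r^5 + C7*r^7 + C9*r^9 using Horner's scheme."""
    r2 = r * r
    return r * (1.0 + r2 * (C3 + r2 * (C5 + r2 * (C7 + r2 * C9))))

# Scan the approximation error on a grid over [0, pi/16).
worst = max(abs(sin_poly(r) - math.sin(r))
            for r in (i * (math.pi / 16) / 2000 for i in range(2000)))
print(worst)   # a few times 1e-9 with these rounded coefficients
```

A full implementation would plug a polynomial like this into the approximation step of the `sin_reduced` sketch above, together with a similar even polynomial for $\cos(r)$.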
Conclusion
In conclusion, calculating sine in computers involves a combination of reduction, approximation, and reconstruction steps. From simple reduction and Taylor series to more precise methods like minimax approximation, computers employ various techniques to compute sine efficiently while maintaining acceptable levels of accuracy. Understanding these methods sheds light on the underlying mathematics that power computational tools and simulations in numerous fields.