# Introduction

In the previous posts [1,2] about $\sin(2\pi \cdot x)$ approximations, I quickly discarded the Taylor series, believing its approximation-error characteristics were not ideal. However, that judgment was based on using the Taylor-derived polynomial for the entire domain $x \in [0 \ldots 1)$. In the 'better sin(x)' post, the domain was divided into four parts and the symmetry of sin(x) was exploited to reduce the approximation error. This trick is, of course, also possible with a Taylor-based polynomial.

I tried it, with the following results.

# Taylor series of $\sin(2\pi \cdot x)$

The Taylor series of $\sin (2\pi \cdot x)$ developed around $x=0$ is:

$P(x) = 2\pi\cdot x - \frac{8\pi^3}{3!} \cdot x^3 + \frac{32\pi^5}{5!} \cdot x^5 - \ldots,$

which, given $N$ terms, can be compactly written as:

$P(x) = \sum_{i=1}^{N} (-1)^{(i+1)} \cdot {(2\pi)^{(2i - 1)} \over {(2i-1)!}} x^{(2i-1)}$.
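For reference, the truncated series can be evaluated directly from this formula. A minimal Python sketch (the function name `taylor_sin2pi` is mine, not from the post):

```python
import math

def taylor_sin2pi(x, n_terms):
    """Evaluate the truncated Taylor series of sin(2*pi*x) around x = 0.

    Sums n_terms odd-power terms:
    (-1)^(i+1) * (2*pi)^(2i-1) / (2i-1)! * x^(2i-1), for i = 1..n_terms.
    """
    total = 0.0
    for i in range(1, n_terms + 1):
        k = 2 * i - 1  # odd power of this term
        total += (-1) ** (i + 1) * (2 * math.pi) ** k / math.factorial(k) * x ** k
    return total
```

For small $x$ the truncated series converges quickly: `taylor_sin2pi(0.1, 5)` already agrees with `math.sin(2 * math.pi * 0.1)` to well below 1e-8.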

A very useful property of this particular series is that the sign of each term is the opposite of the one that comes before it. In addition, for the $x$ values used here, each term is smaller in magnitude than the previous one. For such an alternating series, truncating at a certain term means the total approximation error is at most the magnitude of the first term we leave out. So, we can choose the number of terms to include based on the desired approximation error!

The approximation error of our Taylor series is largest at the largest value of $x$ we're going to use. By dividing the domain into four, the largest value of $x$ we'll encounter is $\frac{1}{4}$. Knowing this, we can make a table of the approximation error versus the number of terms included in our polynomial:

| Terms $N$ | $P(x)$ order | Max. error |
|-----------|--------------|------------|
| 1         | 1            | 0.64596    |
| 2         | 3            | 0.07969    |
| 3         | 5            | 0.00468    |
| 4         | 7            | 0.00016    |
| 5         | 9            | 3.60e-6    |
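The error column follows directly from the alternating-series bound: it is the magnitude of the first omitted term, evaluated at $x = \frac{1}{4}$. A short Python sketch that reproduces it (function name is mine):

```python
import math

def taylor_error_bound(n_terms, x=0.25):
    """Upper bound on the truncation error at x: the magnitude of the
    first omitted term, (2*pi*x)^k / k! with k = 2*(n_terms+1) - 1."""
    k = 2 * (n_terms + 1) - 1  # order of the first term left out
    return (2 * math.pi * x) ** k / math.factorial(k)

for n in range(1, 6):
    print(f"N={n}  order={2 * n - 1}  max. error={taylor_error_bound(n):.5g}")
```

Running this prints the same maximum-error values as the table above.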

Clearly, the maximum approximation error drops rapidly as the order of the polynomial is increased.

# Performance evaluation

## The third-order Taylor polynomial versus the 'better sin(x)' version

Which approximation is better, the one from the previous post or a 3rd order Taylor approximation?

The following graph shows the spectrum of the approximated sinusoidal wave. It was generated using a 65536-point FFT and 2129 sine wave periods.

The third harmonic can be found at -35.0 dBc, which is 11.9 dB worse than the 'better sin(x)' version. The signal-to-noise ratio (SNR) of the Taylor-based version is 33.2 dB, which is 11.7 dB worse. In summary, you're better off with the 'better sin(x)' approximation.

## Ninth-order Taylor performance

If you want a high-purity sine oscillator and have cycles to spare, but don’t want the memory footprint of a table-lookup oscillator, you can use a 9th-order Taylor approximation.
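To make the idea concrete, here is a sketch of how the quadrant folding and the 9th-order (5-term) Taylor polynomial could be combined. This is my own Python reconstruction, not the implementation from the post; the folding uses the same sin(x) symmetries as the 'better sin(x)' approach, and the polynomial is evaluated in Horner form in $x^2$:

```python
import math

# Coefficients of the 9th-order Taylor polynomial for sin(2*pi*x):
# c_i = (-1)^(i+1) * (2*pi)^(2i-1) / (2i-1)!  for i = 1..5
_COEFFS = [(-1) ** (i + 1) * (2 * math.pi) ** (2 * i - 1) / math.factorial(2 * i - 1)
           for i in range(1, 6)]

def sin2pi(x):
    """Approximate sin(2*pi*x) for any x, evaluating the Taylor
    polynomial only on [0, 1/4] and folding via symmetry."""
    x = x - math.floor(x)   # wrap into [0, 1)
    sign = 1.0
    if x >= 0.5:            # second half: sin(2*pi*(x-1/2)) = -sin(2*pi*x)
        x -= 0.5
        sign = -1.0
    if x > 0.25:            # mirror around x = 1/4: sin is symmetric there
        x = 0.5 - x
    # Horner evaluation in x^2: x * (c1 + x^2*(c2 + x^2*(c3 + ...)))
    x2 = x * x
    p = _COEFFS[4]
    for c in reversed(_COEFFS[:4]):
        p = p * x2 + c
    return sign * x * p
```

With the argument folded into $[0, \frac{1}{4}]$, the error bound from the table (3.60e-6) applies to the whole period.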

Here are the performance graphs for the approximation:

The following graph shows the spectrum of the approximated sinusoidal wave. It was generated using a 65536-point FFT and 2129 sine wave periods.

The SNR is around 121.2 dB when all calculations are implemented using floating-point arithmetic.
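A figure in this ballpark can be sanity-checked without an FFT by comparing signal power to error power in the time domain, over the same 65536 samples and 2129 periods. This is a sketch of my own, not the measurement from the post (the FFT-based figure may differ slightly):

```python
import math

def snr_db(n_samples=65536, periods=2129):
    """Time-domain SNR estimate of the folded 9th-order Taylor
    approximation of sin(2*pi*x), in dB."""
    coeffs = [(-1) ** (i + 1) * (2 * math.pi) ** (2 * i - 1) / math.factorial(2 * i - 1)
              for i in range(1, 6)]

    def approx(x):
        # Fold into [0, 1/4] using sin(x) symmetry, then Horner in x^2.
        x -= math.floor(x)
        sign = 1.0
        if x >= 0.5:
            x -= 0.5
            sign = -1.0
        if x > 0.25:
            x = 0.5 - x
        x2 = x * x
        p = coeffs[4]
        for c in reversed(coeffs[:4]):
            p = p * x2 + c
        return sign * x * p

    sig_pow = err_pow = 0.0
    for n in range(n_samples):
        phase = (periods * n / n_samples) % 1.0
        exact = math.sin(2 * math.pi * phase)
        err = approx(phase) - exact
        sig_pow += exact * exact
        err_pow += err * err
    return 10.0 * math.log10(sig_pow / err_pow)
```

With double-precision floating point this lands above 100 dB, consistent with the spectrum shown above.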

# Conclusions

The Taylor series is well suited for approximating $\sin(2\pi \cdot x)$, especially when the domain is kept small. The precision of the approximation is easily determined and can be increased simply by adding additional terms. A 9th-order polynomial is capable of reaching professional audio quality in terms of SNR.

## 5 thoughts on “Sin(x) using Taylor series: better than expected”

1. I was writing my own post on Taylor series and I also happen to look at the error of the first few approximations (not up to 9, though). I even have the same picture (almost) for the 3rd order Taylor series error.

I was wondering – what software do you use to produce your graphs? They sort of look like matplotlib – is that right?

• Hi David,

The plots were made using MATLAB.

• Oh, that’s a bit embarrassing on my part. I haven’t used MATLAB in a long time, but I do like those graphs. Thank you!

2. Your 9th order polynomial requires 6 multiplications, and has a discontinuous derivative at 1/4. You may consider instead the approximation
cos(π/2 x) ≈ (1 − x²)(1 + x²(−0.2335216 + 0.0190963 x²))
It requires only 4 multiplications and ensures continuity of the first derivative. The max error, however, is somewhat worse: 9.20e-6.
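The commenter's claim is easy to check numerically. A quick grid evaluation of my own (the function name is mine; the coefficients are taken verbatim from the comment):

```python
import math

def cos_poly(x):
    """The commenter's 4-multiply approximation of cos(pi/2 * x) on [0, 1]."""
    x2 = x * x  # multiply 1; the remaining three are below
    return (1.0 - x2) * (1.0 + x2 * (-0.2335216 + 0.0190963 * x2))

# Maximum absolute error over a dense grid on [0, 1].
max_err = max(abs(cos_poly(i / 10000) - math.cos(math.pi / 2 * i / 10000))
              for i in range(10001))
```

The measured maximum error is indeed on the order of 1e-5 or below, in line with the 9.20e-6 figure quoted in the comment.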