February 28, 2012
A couple of months ago (already!) 0xjfdube produced an excellent piece on table-based trigonometric function computations. The basic idea was that you can look up the value of a trigonometric function rather than actually computing it, on the premise that computing such functions directly is inherently slow. Turns out that's actually the case, even on fancy CPUs. He discusses the precision of the estimate as a function of the size of the table and shows that you don't need a terribly large number of entries to get decent precision. He muses over the possibility of using interpolation to improve precision, speculating that it might be too slow anyway.

I started thinking about how to interpolate efficiently between table entries, but then I realized that it's not fundamentally more complicated to compute a polynomial approximation of the function than to search the table and then interpolate between entries.

Read the rest of this entry »

1 Comment | algorithms, assembly language, C, C-plus-plus, C99, embedded programming, Mathematics, programming | Tagged: cos, cosine, MacLaurin Series, Polynomial, Polynomial Approximation, SIN, Sine, Taylor Series | Permalink

Posted by Steven Pigeon

February 21, 2012
In a previous post I discussed lossless audio encoding and presented a Bash script using `flac` to rip and process CD audio files. I also commented on how the psycho-acoustic model used in an MP3 encoder will dominate encoding as the bit-rate increases, without much quantitative evidence. In this post, I present some.

Read the rest of this entry »

Leave a Comment » | algorithms, Bash (Shell), data compression | Tagged: FLAC, Lossless Compression, Lossy Compression, MP3, psychoacoustic model, psychoacoustics | Permalink

Posted by Steven Pigeon

February 14, 2012
We do not know closed-form solutions for all optimization problems, even when they are somewhat innocent-looking. One of the many possible methods in such a case is to use (stochastic) gradient descent to iteratively refine the solution to the problem. This involves the computation of… yes, the gradient.

In its simplest form, the gradient descent algorithm computes the gradient of an objective function relative to the parameters, evaluated on all training examples, and uses that gradient to adjust the model’s parameters. The gradient of a (not necessarily objective) function has the general form

Read the rest of this entry »

1 Comment | algorithms, machine learning, Mathematics | Tagged: Calculus, Derivative, gradient descent, Integral, Leibniz, objective function, optimization, stochastic gradient descent | Permalink

Posted by Steven Pigeon

February 7, 2012
In a previous post, I told you about a short script to rip and encode CDs using FLAC, and I discussed a bit how LPC works. In this post, let us have a look at how efficient FLAC is.

Let us take a quantitative approach. Since I have a large number of songs, statistics will give us a good idea of what kind of compression we can expect.

Read the rest of this entry »

Leave a Comment » | data compression, Data Visualization | Tagged: box plot, box-and-whiskers, boxplot, Compression Ratio, FLAC | Permalink

Posted by Steven Pigeon