You could do that. They have a Tensor data type now, too, which might substitute for the numpy matrices I used in my Python implementation (although I have not checked Mojo in a long time). The thing to keep in mind is the width of the data type. The values in the Fibonacci sequence grow very quickly, so you need to handle large numbers. Python has arbitrary-precision integers, which work nicely; in numpy you can use wider types such as 128-bit floats (where the platform supports them) to have room for computing large values.
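For example, here is a minimal sketch (my own, not the article's code) of the matrix method using plain Python ints, which sidesteps the width problem entirely because Python integers are arbitrary precision:

```python
def mat_mult(a, b):
    # Multiply two 2x2 matrices stored as tuples (a, b, c, d) = [[a, b], [c, d]].
    return (a[0]*b[0] + a[1]*b[2], a[0]*b[1] + a[1]*b[3],
            a[2]*b[0] + a[3]*b[2], a[2]*b[1] + a[3]*b[3])

def fib(n):
    # Compute [[1,1],[1,0]]^n by repeated squaring; F(n) is the top-right entry.
    result = (1, 0, 0, 1)  # identity matrix
    base = (1, 1, 1, 0)
    while n:
        if n & 1:
            result = mat_mult(result, base)
        base = mat_mult(base, base)
        n >>= 1
    return result[1]

print(fib(10))   # 55
print(fib(100))  # 354224848179261915075
```

This does O(log n) matrix multiplications and never overflows, at the cost of each multiplication getting slower as the numbers grow.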

That was very interesting. Does the “alternate proof of this form using linear algebra” involve diagonalizing the 2 by 2 matrix? If so, that’s a nice elementary application of eigenvectors and eigenvalues.

I see what you meant. Someone pointed out the proof using diagonalization on Hacker News; it's very elegant, as you said. Check it out: https://news.ycombinator.com/item?id=38166540

Thanks for the link to the thread. I mentioned the Fibonacci diagonalization in my linear algebra class today, partly in reply to a student who recently asked me the “when are we going to use this stuff?” question about eigenvectors and eigenvalues.

Thank you, Richard. The linear-algebra-based proof does not explicitly mention eigenvectors. Let me try to reproduce it; it's pretty short. But I should say that the book is very terse and spends no time trying to be helpful in explaining what is going on :)

It defines a vector space of all infinite sequences, with coordinate-wise addition and scalar multiplication. Then it defines the subspace W of all sequences that satisfy the equation u(n+2) = u(n+1) + u(n), where u(n) is the nth term of the sequence.

Then it finds a basis of W: two sequences whose terms are given by the formula u(n) = τ^n. Substituting this into the recurrence gives the quadratic equation τ^2 = τ + 1, whose solutions are τ_1 = (1 + sqrt(5)) / 2 and τ_2 = (1 - sqrt(5)) / 2.

This gives a basis of W: u = (τ_1^0, τ_1^1, τ_1^2, ...) and v = (τ_2^0, τ_2^1, τ_2^2, ...).
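As a quick sanity check (my own sketch, not from the book), we can verify numerically that both geometric sequences satisfy the recurrence defining W:

```python
import math

# Check that u(n) = τ^n satisfies u(n+2) = u(n+1) + u(n) for both
# roots of τ^2 = τ + 1 (the golden ratio and its conjugate).
tau1 = (1 + math.sqrt(5)) / 2
tau2 = (1 - math.sqrt(5)) / 2

for tau in (tau1, tau2):
    for n in range(10):
        assert math.isclose(tau**(n + 2), tau**(n + 1) + tau**n)
print("both geometric sequences satisfy the Fibonacci recurrence")
```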

Based on this, the Fibonacci sequence F = (F_0, F_1, ...) can be written using the basis,

F = αu + βv.

The values of α and β can be calculated from the first two known values of the Fibonacci sequence. Solving for those gives us the closed-form formula for the nth Fibonacci number.
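Concretely, solving α + β = F_0 = 0 and ατ_1 + βτ_2 = F_1 = 1 gives α = 1/sqrt(5) and β = -1/sqrt(5), which is Binet's formula. A small sketch (my own, not from the book) to check it:

```python
import math

# From F_0 = α + β = 0 and F_1 = α·τ1 + β·τ2 = 1 we get
# α = 1/√5, β = -1/√5, i.e. Binet's formula F_n = (τ1^n - τ2^n)/√5.
tau1 = (1 + math.sqrt(5)) / 2
tau2 = (1 - math.sqrt(5)) / 2
alpha = 1 / math.sqrt(5)
beta = -1 / math.sqrt(5)

def fib_closed(n):
    # Round to the nearest integer to absorb floating-point error.
    return round(alpha * tau1**n + beta * tau2**n)

print([fib_closed(n) for n in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

Note that in floating point this only stays exact up to moderate n; for large n you need wider types or exact arithmetic, as discussed above.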

Thank you, Anna. It's a pretty interesting book. Each chapter, or miniature, is short: 1-4 pages (most are 1-2). And, at least to me, every miniature teaches something new and interesting.

There is a free/open edition of the book available online (the cover describes it as a preliminary edition; the print edition is probably the final version).

What really bugs me is when people apply relatively fancy techniques like memoization and recursion without any need to do so. It's a telltale sign of an intern straight off LeetCode with very little coding experience. I mean, just do a simple loop: it's less code, runs faster, and is easier to understand.
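For what it's worth, the simple loop in question is just a few lines (a generic sketch, not code from the article):

```python
def fib_loop(n):
    # Plain iterative Fibonacci: two running values, no recursion, no cache.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib_loop(10))  # 55
```

It runs in O(n) time and O(1) space; the matrix method discussed elsewhere in the thread only wins when n is large enough for its O(log n) multiplication count to matter.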

Representing recurrences as matrix multiplications is one of the most mind-blowing applications of algebra. Surely one of my favourites. Nice one!

Thanks, Alberto. This was my first encounter with this technique and I was blown away when I understood it.

This was really interesting and led me down a rabbit hole, thank you!

On the way down the hole, I noticed that the TAOCP reference for the closed form should probably be page 79, not 99.

I'm glad you liked it.

Thank you for pointing out the error, I updated the article with the right reference.

Has anyone tried the Mojo compiler on this? I've seen the demo/notebook here: https://github.com/modularml/mojo/blob/main/examples/notebooks/Matmul.ipynb

It uses:

https://mlir.llvm.org/


That was a fun read, thank you! I'm intrigued now to possibly buy the book - this post served as an excellent advertisement for it.


Great article. Looking forward to your next post.

Thank you, Robert :)

You have managed to skillfully craft yet another addictive confession :) Great job, Abhi! Keep them coming 🎊

Thanks, Nat! :)

Neat!
