# Introduction

A Padé approximant is, as defined in George A. Baker, Jr.'s book "Essentials of Padé Approximants", the ratio of two polynomials constructed from the coefficients of the Taylor series of a function. The technique was developed by Henri Padé around 1890 (hence the name), according to the Wikipedia page. Padé approximants not only provide real and complex approximations, but also the ability to approximate unknown underlying functions.

## Taylor Series

A Taylor series is the representation of a function as an infinite sum of polynomial terms whose coefficients are computed from the derivatives of the function at a point. Its definition in sigma notation is as follows $$f(x)=\sum_{i=0}^\infty \frac{f^{(i)}(a)}{i!}(x-a)^i$$ For example, take the function $f(x)=e^x$ and center the series at $a = 0$, making it a Maclaurin series; then $f(0)=1, f'(0)=1, \cdots,f^{(n)}(0)=1$, and the summation becomes $$f(x)=e^x=\sum_{i=0}^\infty \frac{x^i}{i!}$$ If we limit the number of terms, we can break the calculation of a complicated function down into elementary operations. Let's limit the summation to between 2 and 10 terms and set $x=1$.
\begin{align}
f(1) &\approx A_2 = 1 + x =& 2\\
f(1) &\approx A_3 = 1 + x + \frac{x^2}{2} =& 2.5\\
f(1) &\approx A_4 = 1 + x + \frac{x^2}{2} + \frac{x^3}{6} =& 2.66667\\
f(1) &\approx A_5 =& 2.70833\\
f(1) &\approx A_6 =& 2.71667\\
f(1) &\approx A_7 =& 2.71806\\
f(1) &\approx A_8 =& 2.71825\\
f(1) &\approx A_9 = & 2.718278769\\
f(1) &\approx A_{10} =& 2.718281525\\
f(1) &= e = & 2.718281828
\end{align}
As you can see above, as we increase the number of terms we get closer to the value of the function.
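The table above is easy to reproduce in a few lines of code. Here is a minimal sketch (in Python for brevity; the implementation later in this post is in F#):

```python
import math

def taylor_exp(x, terms):
    """Partial sum of the Maclaurin series of e^x with the given number of terms."""
    return sum(x ** i / math.factorial(i) for i in range(terms))

# Each extra term moves the estimate closer to e = 2.71828...
for terms in range(2, 11):
    print(terms, taylor_exp(1.0, terms))
```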

As said above, Padé approximants are a ratio of polynomials which approximates a function. We write the Taylor series in its formal form, where the coefficients of the series are denoted by $a_i$ and the series as a whole by $A(x)$. Padé approximants are denoted by the following notation $$[L/M]=\frac{P_L (x)}{Q_M (x)}$$ where $P_L(x)$ is the polynomial of degree $L$ in the numerator of the fraction and $Q_M(x)$ is the polynomial of degree $M$ in the denominator. The ratio of polynomials takes the place of the Taylor series, up to an error term, giving an equation we can use to solve for the coefficients of both $P_L(x)$ and $Q_M(x)$
\begin{align*}
A(x)&=\frac{P_L(x)}{Q_M(x)}+O(x^{L+M+1})\\
A(x)-\frac{P_L(x)}{Q_M(x)}&=O(x^{L+M+1})\\
A_{L+M}(x)-\frac{P_L(x)}{Q_M(x)}&=0
\end{align*}
Here $A_{L+M}(x)$ denotes the series truncated after order $x^{L+M}$. To be able to solve for the coefficients we set $Q_M(0)=1$, making the constant term of the polynomial $Q_M(x)$ equal to $1$. Let's write the coefficients of each polynomial as the lower-case version of its letter.
\begin{align*}
P_L(x)=p_0+p_1 x + \cdots + p_Lx^L \\
Q_M(x)=1 + q_1 x + \cdots + q_M x^M
\end{align*}
Let's multiply the above equation by $Q_M(x)$
\begin{align*}
\left( A_{L+M}(x)-\frac{P_L(x)}{Q_M(x)} \right) Q_M(x)&=0 \\
A_{L+M}(x)\, Q_M(x)-P_L(x)&=0
\end{align*}
According to the literature on Padé approximants (the book by Baker cited above), the equation above can be expanded, by matching powers of $x$, into a system of linear equations
\begin{align*}
&a_0 &= p_0\\
&a_1 + a_0q_1 &= p_1\\
&a_2 + a_1q_1 + a_0q_2 &= p_2\\
&\vdots &\vdots\\
&a_L + a_{L-1}q_1 + \cdots + a_0q_L &= p_L\\
&a_{L+1} + a_Lq_1 + \cdots + a_{L-M+1}q_M &= 0\\
&\vdots &\vdots\\
&a_{L+M} + a_{L+M-1}q_1+\cdots+a_Lq_M &= 0
\end{align*}
The system of linear equations is frequently close to singular, which rules out the use of Levinson recursion (which would otherwise exploit the structure of the coefficient matrix). In the book "Numerical Recipes in Fortran 77: The Art of Scientific Computing" by William H. Press, et al., section 5.12, the authors use LU decomposition, followed by iterative improvement, to solve the equations.
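As a small worked example of the system above, take $e^x$ with $L=M=1$, so $a_0=a_1=1$ and $a_2=\tfrac{1}{2}$. The equations become
\begin{align*}
a_0 &= p_0 &&\Rightarrow p_0=1\\
a_1 + a_0 q_1 &= p_1\\
a_2 + a_1 q_1 &= 0 &&\Rightarrow q_1=-\tfrac{1}{2},\quad p_1=\tfrac{1}{2}
\end{align*}
so $[1/1]=\frac{1+x/2}{1-x/2}$, which matches the Maclaurin series of $e^x$ through order $x^2$.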

# LU Decomposition

LU decomposition gets its name from the fact that it decomposes a matrix into lower and upper triangular matrices. For example, suppose there is a matrix $A \in R^{2\times 2}$. If we multiply the decomposed $L$ and $U$ matrices we get back the $A$ matrix. $$A=LU$$ where $$A=\begin{bmatrix}a_{11} & a_{12} \\ a_{21} & a_{22}\end{bmatrix},L=\begin{bmatrix}l_{11} & 0 \\ l_{21} & l_{22} \end{bmatrix},U=\begin{bmatrix}u_{11} & u_{12} \\ 0 & u_{22}\end{bmatrix}$$ LU decomposition is useful for solving systems of linear equations because, in triangular form, we can use forward and backward substitution.

\begin{align*}
Ax&=b\\
(LU)x&=b\\
L(Ux)&=b
\end{align*}

We set $y=Ux$

\begin{cases}
Ly=b\\
Ux=y
\end{cases}
\begin{align*}
&\begin{bmatrix}
l_{11} & 0 \\
l_{21} & l_{22}
\end{bmatrix}
\begin{bmatrix}
y_1 \\ y_2
\end{bmatrix}=\begin{bmatrix}
b_1 \\ b_2
\end{bmatrix}\\
&\begin{cases}
l_{11}y_1 = b_1 \\
l_{21}y_1 + l_{22}y_2 = b_2
\end{cases}\\
&y_1=\frac{b_1}{l_{11}}\\
&y_2=\frac{b_2-l_{21}y_1}{l_{22}}
\end{align*}
It can be generalized to a matrix $A\in R^{N\times N}$
\begin{align*}
y_1&=\frac{b_1}{l_{11}}\\
y_i&=\frac{1}{l_{ii}}\bigg[ b_i - \sum_{j=1}^{i-1} l_{ij}y_j \bigg] &i=2,3,\cdots,N
\end{align*}
Finally, to solve for $x$
\begin{align*}
x_2&=\frac{y_2}{u_{22}}\\
x_1&=\frac{y_1 - u_{12}x_2}{u_{11}}
\end{align*}
And more generally
\begin{align*}
x_N&=\frac{y_N}{u_{NN}}\\
x_i&=\frac{1}{u_{ii}}\bigg[y_i-\sum_{j=i+1}^N u_{ij}x_j\bigg] &i=N-1,N-2,\cdots,1
\end{align*}
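The two substitution formulas translate directly into loops. A small illustrative sketch (Python here, with a hand-picked triangular example; the real implementation later in this post is in F#):

```python
def forward_sub(L, b):
    # Solve L y = b for lower-triangular L, starting from the top row.
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        s = sum(L[i][j] * y[j] for j in range(i))
        y[i] = (b[i] - s) / L[i][i]
    return y

def backward_sub(U, y):
    # Solve U x = y for upper-triangular U, starting from the bottom row.
    n = len(y)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (y[i] - s) / U[i][i]
    return x

y = forward_sub([[2.0, 0.0], [1.0, 1.0]], [4.0, 3.0])  # y = [2.0, 1.0]
x = backward_sub([[1.0, 0.5], [0.0, 1.0]], y)          # x = [1.5, 1.0]
```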

## Decomposing Matrices

A question remains: how do we decompose a matrix into lower and upper triangular matrices? Well, let's set up the product of the matrices, $LU=A$, as a system of linear equations.

\begin{align*}
a_{11}&=l_{11}u_{11}\\
a_{12}&=l_{11}u_{12}\\
a_{21}&=l_{21}u_{11}\\
a_{22}&=l_{21}u_{12}+l_{22}u_{22}
\end{align*}

In Crout's algorithm, we set the diagonal of $U$ to ones, $u_{ii}=1$

\begin{align*}
a_{11}&=l_{11}\\
a_{12}&=l_{11}u_{12}\\
a_{21}&=l_{21}\\
a_{22}&=l_{21}u_{12}+l_{22}
\end{align*}

So,

\begin{align*}
L&=\begin{bmatrix}
a_{11} & 0\\
a_{21} & a_{22}-\frac{a_{12}a_{21}}{a_{11}}
\end{bmatrix}\\
U&=\begin{bmatrix}
1 & \frac{a_{12}}{a_{11}}\\
0 & 1
\end{bmatrix}
\end{align*}

This then can be used to solve for the $x$ in $Ax=b$

\begin{align*}
y_1&=\frac{b_1}{a_{11}}\\
y_2&=\frac{a_{21}b_1-a_{11}b_2}{a_{12}a_{21}-a_{11}a_{22}}
\end{align*}
then
\begin{align*}
x_1&=y_1 - u_{12}x_2=\frac{a_{22}b_1-a_{12}b_2}{a_{11}a_{22}-a_{12}a_{21}}\\
x_2&=y_2=\frac{a_{21}b_1-a_{11}b_2}{a_{12}a_{21}-a_{11}a_{22}}
\end{align*}
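The closed-form $2\times 2$ factorization above is easy to sanity-check numerically. A quick sketch (Python, with an arbitrary example matrix):

```python
def crout_2x2(A):
    # Crout's convention for a 2x2 matrix: unit diagonal on U, so A = L U.
    (a11, a12), (a21, a22) = A
    L = [[a11, 0.0],
         [a21, a22 - a12 * a21 / a11]]
    U = [[1.0, a12 / a11],
         [0.0, 1.0]]
    return L, U

A = [[4.0, 3.0], [6.0, 3.0]]
L, U = crout_2x2(A)
# Multiplying L and U back together should reproduce A.
LU = [[sum(L[i][k] * U[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
```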

Generalizing this directly is a bit awkward, since the closed-form expressions change with the size of the matrix. Fortunately, Crout's algorithm gives a general procedure for LU decomposition, usually combined with partial pivoting; partial pivoting produces a $P$ matrix which shuffles the rows of the matrix and numerically stabilizes the algorithm.
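To make the row shuffling concrete, here is a small sketch of building the permutation matrix $P$ (Python, mirroring the pivoting strategy used by the F# code later in this post):

```python
def pivot_matrix(A):
    # Build a permutation matrix P so that, column by column, the row with
    # the largest absolute value is moved onto the diagonal (partial pivoting).
    n = len(A)
    P = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for i in range(n):
        max_j = max(range(i, n), key=lambda j: abs(A[j][i]))
        if max_j != i:
            P[i], P[max_j] = P[max_j], P[i]
    return P

A = [[1.0, 3.0], [5.0, 2.0]]
P = pivot_matrix(A)  # swaps the two rows, since |5.0| > |1.0| in column 0
```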

# Code

I decided to use F# as the programming language for our program, which will both perform LU decomposition and calculate Padé approximants. I read on some forums that F# is very good for numerical work, and having now worked with it, I tend to agree. I used the Visual Studio IDE with .NET Core 2.1. F# is pretty interesting: it uses indentation to indicate code blocks, like Python, and it has an extremely strong functional programming paradigm while supporting multiple paradigms, including Object Oriented Programming (OOP). F# uses .fs as the suffix for its source files, so let's first create, in a directory of your choice, a text file dubbed Program.fs and include the following code

open System

[<EntryPoint>]
let main argv =
    0


Now, let's create a file in the same directory called Utils.fs, where we are going to add functions that print matrices, multiply matrices, multiply a matrix with a vector, create matrices of random numbers, and create vectors of random numbers. Encapsulate them in a module called Utils.

module Utils


The first function, which prints a matrix to the terminal, takes a 2-D array of doubles representing the matrix. To get the dimensions of the matrix we grab the first row and the first column as vectors and take their lengths. F# makes this easy

let printm (a : double[,]) =
    let n = a.[*,0].Length
    let m = a.[0,*].Length


Now, we iterate through the matrix using the printf function, which acts like C's printf, and the printfn function, which appends a newline. Like C#, array indices start at 0 and end at n-1, where n is the size of the array.

    for i = 0 to n-1 do
        for j = 0 to m-1 do
            printf "%f " a.[i,j]
        printfn ""


We are done with the printm function. As you can see, functions and variables are declared in almost the same way, which is interesting. To multiply two matrices, we take two matrix inputs and output a matrix, following the standard matrix multiplication algorithm (link). A lambda function with Array2D.init works well here (link).

let matmul (a: double[,]) (b: double[,]) : double[,] =
    let n = a.[*,0].Length
    let m = a.[0,*].Length
    let m2 = b.[0,*].Length
    Array2D.init n m2 (fun x y ->
        let mutable s = 0.0
        for i = 0 to m-1 do
            s <- s + a.[x, i] * b.[i, y]
        s
    )


Variables bound with let are immutable, but with the mutable modifier we can change them, using <- instead of =. Matrix-vector multiplication is similar, but with one side of fixed length $1$.

let matvecmul (a: double[,]) (b: double[]) : double[] =
    let n = a.[*,0].Length
    let m = a.[0,*].Length
    // The result has one entry per row of the matrix.
    Array.init n (fun x ->
        let mutable s = 0.0
        for i = 0 to m-1 do
            s <- s + a.[x,i] * b.[i]
        s
    )


For the random-number functions we are going to use the System.Random class from .NET and build arrays around it.

let randmat (n : int) : double [,] =
    let rnd = System.Random()
    Array2D.init<double> n n (fun _ _ -> rnd.NextDouble () * 10.0)

let randvec (n : int) : double [] =
    let rnd = System.Random()
    Array.init<double> n (fun _ -> rnd.NextDouble () * 100.0)


I multiplied rnd.NextDouble() by a constant to get some variety in our output numbers. We are now ready to implement our LU decomposition functions. Start by creating a file called LUDecomposition.fs to hold our functions in a module called LUDecomposition. Also, import our Utils module with open.

module LUDecomposition
open Utils


The type keyword wears a couple of hats: it is the way to create classes, records, structs, and the like. We are going to use it to create a record that will contain the pivot matrix, the input A matrix, and the resulting L and U matrices.

type PALU = { P : double[,]; A : double[,]; L : double[,]; U : double[,] }


To start, let's create a function which builds an identity matrix of size n.

let eye (n : int) : double[,] =
    Array2D.init<double> n n (fun x y -> if x = y then 1.0 else 0.0)


Next, we are going to create a function which builds the pivot matrix. The pivot matrix, when multiplied with the input A matrix, rearranges its rows so that the largest element of each column lies on the diagonal. To do this I followed an algorithm from the book "Numerical Recipes..." mentioned above.

let pivotize (a : double[,]) : double[,] =
    let n = a.[*,0].Length
    let mutable p = eye n
    for i = 0 to n-1 do
        let mutable max_j = i
        for j = i to n-1 do
            if abs a.[j,i] > abs a.[max_j,i] then max_j <- j
        if max_j <> i then
            let tmp = p.[i,*]
            p.[i,*] <- p.[max_j,*]
            p.[max_j,*] <- tmp
    p


We can now decompose our matrix! Let's start the function by getting the dimension of the square matrix, computing its pivot matrix, and applying the pivot matrix to $A$.

let LUDecompose (a : double[,]) =
    let n = a.[*,0].Length
    let p = pivotize a
    let aprime = matmul p a
    // Future code


Now, create two mutable matrices named l and u. l will be an identity matrix, created with the eye function, and u will be a matrix full of 0s.

    let mutable l = eye n
    let mutable u = Array2D.zeroCreate n n


Then come the loops implementing Crout's algorithm, followed by the construction of our record

    for j = 0 to n-1 do
        // Column j of L, using the already-computed columns of U.
        for i = j to n-1 do
            let mutable sum = 0.0
            for k = 0 to j-1 do
                sum <- sum + l.[i,k] * u.[k,j]
            l.[i,j] <- aprime.[i,j] - sum
        // Row j of U, normalized so that its diagonal entry is 1.
        for i = j to n-1 do
            let mutable sum = 0.0
            for k = 0 to j-1 do
                sum <- sum + l.[j,k] * u.[k,i]
            u.[j,i] <- (aprime.[j,i] - sum) / l.[j,j]
    { P=p; A=a; L=l; U=u }


Next, create the function which solves the equations using forward and backward substitution. First, get the dimension from the $A$ input matrix, pivot $b$, and create the two zeroed mutable vectors $y$ and $x$.

let LUSolve (tosolve : PALU) (b : double[]) : double[] =
    let n = tosolve.A.[0,*].Length
    let bprime = matvecmul tosolve.P b
    let mutable y = Array.zeroCreate n
    let mutable x = Array.zeroCreate n


Forward substitution with
\begin{align*}
y_1&=\frac{b_1}{l_{11}}\\
y_i&=\frac{1}{l_{ii}}\bigg[ b_i - \sum_{j=1}^{i-1} l_{ij}y_j \bigg] &i=2,3,\cdots,N
\end{align*}

    y.[0] <- bprime.[0] / tosolve.L.[0,0]
    for i = 1 to n-1 do
        let mutable sum = 0.0
        for j = 0 to i - 1 do
            sum <- sum + tosolve.L.[i,j] * y.[j]
        y.[i] <- (1.0 / tosolve.L.[i,i]) * (bprime.[i] - sum)


and backward substitution
\begin{align*}
x_N&=\frac{y_N}{u_{NN}}\\
x_i&=\frac{1}{u_{ii}}\bigg[y_i-\sum_{j=i+1}^N u_{ij}x_j\bigg] &i=N-1,N-2,\cdots,1
\end{align*}
to implement the above summation and output the x vector

    x.[n-1] <- y.[n-1] / tosolve.U.[n-1,n-1]
    for i = n-2 downto 0 do
        let mutable sum = 0.0
        for j = i + 1 to n - 1 do
            sum <- sum + tosolve.U.[i,j] * x.[j]
        x.[i] <- (1.0 / tosolve.U.[i,i]) * (y.[i] - sum)
    x


In the book "Numerical Recipes: The Art of Scientific Computing", the authors explain a way to improve a solution obtained from LU decomposition: use forward and backward substitution again to solve for the error term $\delta\vec{x}$ in the equation $A\cdot \delta\vec{x}=A\cdot (\vec{x}+\delta\vec{x})-\vec{b}$, then subtract it from the approximate solution.

let LUimprove (tosolve : PALU) (b : double[]) (x : double[]) : double[] =
    let n = tosolve.A.[0,*].Length
    let mutable r = Array.zeroCreate n
    // Residual r = A x - b of the approximate solution x.
    for i = 0 to n-1 do
        let mutable sdp = -b.[i]
        for j = 0 to n-1 do
            sdp <- sdp + tosolve.A.[i,j] * x.[j]
        r.[i] <- sdp
    r <- LUSolve tosolve r
    Array.map2 (-) x r


For the Padé approximant, we need to create a new module: make a file called Pade.fs and declare module Pade inside it. Open both LUDecomposition and Utils

module Pade
open LUDecomposition
open Utils


We need to create a new record type housing two vectors (double arrays) which will contain the numerator and denominator coefficients of the two polynomials.

type fractionpoly = { Num : double[]; Den : double[] }


To calculate the Padé polynomials, first get the system size, which is half the number of input coefficients, and create some temporary vectors and matrices.

let pade (a : double[]) : fractionpoly =
    let n = (a.Length) / 2
    let mutable q = Array2D.zeroCreate n n
    let mutable y = Array.zeroCreate<double> n


Set up the matrices representing the system of linear equations shown in the Padé section of the post

    for j = 0 to n-1 do
        y.[j] <- a.[n + j + 1]
        for k = 0 to n-1 do
            q.[j,k] <- a.[j-k+n]


Use LU decomposition to solve the equations, and iterate a few times with the LUimprove function to refine the solution

    let decomp = LUDecompose q
    let mutable x = LUSolve decomp y
    for j = 0 to 5 do
        x <- LUimprove decomp y x


Then substitute the solved denominator terms back in to compute the remaining (numerator) coefficients

    for k = 0 to n-1 do
        let mutable sum = a.[k+1]
        for j = 0 to k do
            sum <- sum - x.[j] * a.[k-j]
        y.[k] <- sum


Finally, set up the two vectors which will be the outputs

    let numer = Array.init<double> (n+1) (fun j ->
        if j = 0 then
            a.[0]
        else
            y.[j-1]
    )
    let denom = Array.init<double> (n+1) (fun j ->
        if j = 0 then
            1.0
        else
            -x.[j-1]
    )
    { Num=numer; Den=denom }


Let's calculate some Padé polynomials! In the Program.fs file, in the main function, we are going to calculate the Padé approximant for the function $f(x)=e^{x}$. The Maclaurin series up to $x^8$ for the function is
\begin{align*}
0.0000248016 x^8+0.000198413 x^7+0.00138889 x^6+\\
0.00833333 x^5+0.0416667 x^4+0.166667 x^3+0.5 x^2+x+1
\end{align*}
Create an array in the main function containing the coefficients as a vector

[<EntryPoint>]
let main argv =
    let taylor = [|1.0;1.0;0.5;0.166667;0.0416667;0.00833333;0.00138889;0.000198413;0.0000248016|]


Then use the pade function to calculate the Padé approximant from the coefficients

    let pad = pade taylor

The numerator coefficients, pad.Num, come out as

[|1.0; 0.5000556785; 0.1071711941; 0.01191084626; 0.0005955617021|]
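These numbers are close to the exact $[4/4]$ numerator coefficients of $e^x$, namely $1, \frac{1}{2}, \frac{3}{28}, \frac{1}{84}, \frac{1}{1680}$ (the denominator has the same magnitudes with alternating signs). A quick check, sketched in Python, of how accurate the exact $[4/4]$ approximant is at $x=1$:

```python
import math

# Exact [4/4] Padé coefficients of e^x; the denominator mirrors the
# numerator with alternating signs.
num = [1.0, 1 / 2, 3 / 28, 1 / 84, 1 / 1680]
den = [1.0, -1 / 2, 3 / 28, -1 / 84, 1 / 1680]

def poly(coeffs, x):
    return sum(c * x ** i for i, c in enumerate(coeffs))

approx = poly(num, 1.0) / poly(den, 1.0)  # = 2721/1001
print(abs(approx - math.e))  # error on the order of 1e-7
```

That is a noticeably smaller error than the nine-term Taylor sum built from the same coefficients.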

Here is a plot of the calculated Padé approximant, the Taylor polynomial to $x^4$, and the actual function.