When calculating square roots on computers, we can employ various techniques.

Introduction

I remember in high school, I had a great math teacher for algebra. She taught that we can simplify a square root by writing out the factors of the number. For instance, for $\sqrt{125}$, we can divide $125$ by $5$ to get $25$, then divide $25$ by $5$ to obtain $5$. Then, for each pair of identical factors, one copy can be factored out of the radical while the remaining factors stay inside, giving $5\sqrt{5}$.
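As a small illustration of that factoring idea, here is a minimal C sketch (the function name simplifysqrt is mine, purely for illustration) that pulls every squared factor out of the radical, so $\sqrt{125}$ becomes $5\sqrt{5}$:

#include <stdio.h>

/* Write sqrt(n) as a*sqrt(b) by pulling out every squared factor.
   A trial-division sketch for illustration only. */
void simplifysqrt(unsigned long n, unsigned long *a, unsigned long *b)
{
  *a = 1;
  for (unsigned long f = 2; f * f <= n; f++)
  {
    while (n % (f * f) == 0)
    {
      *a *= f;        /* a pair of factors f leaves the radical as a single f */
      n /= f * f;
    }
  }
  *b = n;             /* whatever is left stays under the square root */
}

int main(void)
{
  unsigned long a, b;
  simplifysqrt(125, &a, &b);
  printf("sqrt(125) = %lu*sqrt(%lu)\n", a, b); /* prints 5*sqrt(5) */
  return 0;
}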

Now, I wondered: how do computers calculate square roots?

Babylonian Method

Let $\sqrt{y}$ be the number we are trying to compute, and write $\sqrt{y}=x$. If we guess a value for $x$, there will be a resulting error $e$, so $\sqrt{y}=x+e\rightarrow y=(x+e)^2$. We expand the binomial to $y=e^2+2ex+x^2$ and solve for $e$: $$e=\frac{y-x^2}{2x+e}$$ The next step requires us to assume that the error is much smaller than $x$. When $e$ is much smaller than $x$, $\frac{y-x^2}{2x+e}$ is close to $\frac{y-x^2}{2x}$, so $\frac{y-x^2}{2x+e}\approx \frac{y-x^2}{2x}$ (by the way, I remember doing this exact approximation in an analytical chemistry class). We can use this to get closer to the square root from the initial guess. So $x\approx x_0$, where $x_0$ is the initial guess, is farther from the true root than $$x\approx x_0+\frac{y-x_0^2}{2x_0}=\frac{\frac{y}{x_0}+x_0}{2}=x_1$$ We can then repeat this until it has converged sufficiently.
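To see how quickly this converges, take $y=10$ with the (deliberately poor) initial guess $x_0=5$:
\begin{align}
x_1 &= \frac{\frac{10}{5}+5}{2}=3.5\\
x_2 &= \frac{\frac{10}{3.5}+3.5}{2}\approx 3.1786\\
x_3 &= \frac{\frac{10}{3.1786}+3.1786}{2}\approx 3.1623
\end{align}
which is already close to $\sqrt{10}\approx 3.16228$ after three steps.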
Here is a quick implementation in C

#include <math.h>

double babyloniansqrt(double y)
{
  /* Initial Guess */
  double x = y / 2;
  
  /* Accuracy level */
  double e = 0.0000001;
  
  /* Keep refining until x*x is within e of y (the initial guess may
     undershoot for y < 4, so compare the magnitude of the difference) */
  while (fabs(x*x - y) > e)
    x = (y/x + x) / 2;
    
  return x;
}

Bakhshali Method

The Bakhshali method is similar to the Babylonian method in that one iteration of the Bakhshali method is equivalent to two iterations of the Babylonian method. So, \begin{align}
a_n &= \frac{y-x_n^2}{2x_n}\\
b_n &= x_n + a_n\\
x_{n+1} &= b_n - \frac{a_n^2}{2b_n}
\end{align}
Expanded in terms of $x_n$, the update becomes $$x_{n+1}=\frac{x_n^2(x_n^2+6y)+y^2}{4x_n(x_n^2+y)}$$
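To see where the expanded form comes from, substitute $a_n=\frac{y-x_n^2}{2x_n}$ and $b_n=x_n+a_n=\frac{x_n^2+y}{2x_n}$ and combine the fractions:
\begin{align}
\frac{a_n^2}{2b_n} &= \frac{(y-x_n^2)^2}{4x_n^2}\cdot\frac{x_n}{x_n^2+y}=\frac{(y-x_n^2)^2}{4x_n(x_n^2+y)}\\
x_{n+1} &= \frac{x_n^2+y}{2x_n}-\frac{(y-x_n^2)^2}{4x_n(x_n^2+y)}=\frac{2(x_n^2+y)^2-(y-x_n^2)^2}{4x_n(x_n^2+y)}
\end{align}
and the numerator simplifies to $x_n^4+6x_n^2y+y^2=x_n^2(x_n^2+6y)+y^2$.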
The C code then is

#include <math.h>

double bakhshalisqrt(double y)
{
  /* Initial Guess */
  double x = y / 2;
  
  /* Accuracy level */
  double e = 0.0000001;
  
  /* Temporary variable that will store x squared */
  double x2 = x * x;
  
  while (fabs(x2 - y) > e)
  {
    x = (x2 * (x2 + 6 * y) + y * y) / (4 * x * (x2 + y));
    x2 = x * x;
  }
  
  return x;
}

Two-Variable Iterative Method

Developed for use in one of the first computers, this method only works for numbers in the range $0 \le y < 3$. The algorithm uses two looping variables, which makes it easy to code. The first variable is assigned the number whose square root we want, and the second the same number minus 1, so \begin{align}
a &= y \\
c &= y - 1
\end{align} After which, we loop through the following \begin{align}
a:&=a-\frac{ac}{2}\\
c:&=\frac{c^2(c-3)}{4}
\end{align} It will then converge, with $a$ close to the square root and $c$ close to $0$. Here is a C function to do just that

#include <math.h>

double twovaritsqrt(double y)
{
  double a = y;
  double c = y - 1;
  
  /* Accuracy level */
  double e = 0.0000001;
  
  /* c approaches 0 (from below when y < 1), so compare its magnitude */
  while (fabs(c) > e)
  {
    a = a - a * c / 2;
    c = c * c * (c - 3) / 4;
  }
  return a;
}
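For example, with $y=2$ the pair $(a, c)$ moves through roughly $(2, 1)\rightarrow(1, -0.5)\rightarrow(1.25, -0.219)\rightarrow(1.387, -0.0385)\rightarrow(1.4134, -0.00113)\rightarrow(1.41421, \ldots)$, so $a$ approaches $\sqrt{2}\approx 1.41421$ while $c$ shrinks toward $0$.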

Taylor Series

Another way to calculate the square root of a number is to calculate the Taylor series of the square root function. Since the derivatives of the square root blow up at zero, we cannot center the series there, so we will center it at 1. The definition of a Taylor series is $$f(x)=\sum_{n=0}^\infty \frac{f^{(n)}(x_0)}{n!}(x-x_0)^n$$ where we set $x_0=1$. The $n$-th derivative of the square root at $x_0=1$, starting from $n=0$, is $1$, and continues with $n=1\rightarrow\frac{1}{2}$, $n=2\rightarrow-\frac{1}{4}$, $n=3\rightarrow \frac{3}{8}$; we can see a pattern if the Gamma function is used: $n\rightarrow \frac{\Gamma(\frac{3}{2})}{\Gamma(\frac{3}{2}-n)}=\frac{\sqrt{\pi}}{2\Gamma(\frac{3}{2}-n)}$. So, the series becomes $$f(x)=\sum_{n=0}^\infty \frac{\sqrt{\pi}}{2\Gamma(\frac{3}{2}-n)n!}(x-1)^n$$ The Taylor series converges more slowly than the Babylonian method, and the Gamma function makes the calculation complicated, but we can precalculate the coefficients of the polynomial. Here is the C code.

double taylorsqrt(double y)
{
  /* Degree-5 Taylor polynomial of sqrt centered at y = 1, evaluated in Horner form */
  return 0.246094 + y * (1.23047 + y * (-0.820313 + y * (0.492188 + (-0.175781 + 0.0273438 * y) * y)));
}
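For reference, those hard-coded constants are the degree-5 truncation of the series above, expanded in powers of $y$:
\begin{align}
\sqrt{y}&\approx 1+\tfrac{1}{2}(y-1)-\tfrac{1}{8}(y-1)^2+\tfrac{1}{16}(y-1)^3-\tfrac{5}{128}(y-1)^4+\tfrac{7}{256}(y-1)^5\\
&=\tfrac{63}{256}+\tfrac{315}{256}y-\tfrac{210}{256}y^2+\tfrac{126}{256}y^3-\tfrac{45}{256}y^4+\tfrac{7}{256}y^5
\end{align}
with $\tfrac{63}{256}\approx 0.246094$, $\tfrac{315}{256}\approx 1.23047$, and so on. Since the series is centered at $1$, the polynomial is only accurate for $y$ reasonably close to $1$.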

Using Floating-Point Standards

This one is a bit different. In computers, floating point numbers are stored in memory such that the first bit represents whether the number is positive or negative and is called the sign bit. The next 8 bits in single precision, or 11 in double precision, store the exponent of the number, and the remaining bits, 23 in single precision and 52 in double precision, store the fraction (the mantissa). An integer, by contrast, uses the first bit to store the sign and the rest to store the actual number, all in base 2. So, if we were to take the double precision bit pattern of the number 1 and read it as an integer in base 10, we would get $4607182418800017408$, which is equal to $1023\times 2^{52}$. We can then convert a floating point number to an integer, in this case a double precision number to a 64-bit integer, and use the above to roughly calculate the square root. By dividing the number by $2^{52}$ and subtracting $1023$ we can approximate the base-2 logarithm, so $x_{\text{int}}\cdot 2^{-52}-1023\approx \log_2 (x)$.
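As a quick sanity check of that approximation, here is a small sketch (the function name approxlog2 and the test values are mine, purely for illustration) that copies a double's bits into a 64-bit integer and applies the formula:

#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Approximate log2(x) from the raw bit pattern of a double:
   x_int * 2^-52 - 1023 is roughly log2(x). Illustration only. */
double approxlog2(double x)
{
  int64_t xint;
  memcpy(&xint, &x, sizeof xint);             /* reinterpret the bits */
  return xint / 4503599627370496.0 - 1023.0;  /* 2^52 = 4503599627370496 */
}

int main(void)
{
  printf("%f\n", approxlog2(1024.0)); /* exactly 10, since 1024 is a power of two */
  printf("%f\n", approxlog2(100.0));  /* about 6.56; the true log2(100) is about 6.64 */
  return 0;
}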
If we take the base-2 logarithm of a number of the form $m\times 2^p$ we get $p+\log_2(m)$, and if we take the square root of the same number we get $\sqrt{m}\times 2^{\frac{p}{2}}$. For very large numbers the exponent dominates, so we can make the rough approximation $\sqrt{m}\approx m$: \begin{align}
\sqrt{m\times 2^p}&=\sqrt{m}\times 2^{p/2}\\
&\approx m\times 2^{p/2}\\
\log_2(m\times 2^{p/2})&=\frac{p}{2}+\log_2(m)
\end{align}
In other words, halving the exponent gets us most of the way to the square root.
So, if we converted the floating point number to integer form, divided it by $2^{52}$, subtracted $1023$, divided that by $2$, added $1023$, multiplied by $2^{52}$, and converted it back to floating point form, we would roughly square root the number. Let the number converted to integer form be $x$: \begin{align}
\bigg(\frac{\frac{x}{2^{52}}-1023}{2}+1023\bigg)\cdot 2^{52}&=\frac{x-2^{52}}{2}+\frac{1023+1}{2}\cdot 2^{52}\\
&=\frac{x-2^{52}}{2}+2^{61}
\end{align}
By bitshifting we can easily do these operations

#include <stdint.h>
#include <string.h>

double bitsqrt(double y)
{
  int64_t yint;
  const int64_t one = 1, fif2 = 52, six1 = 61;
  
  /* Copy the double's bit pattern into a 64-bit integer
     (memcpy avoids the strict-aliasing issues of casting pointers) */
  memcpy(&yint, &y, sizeof yint);
  
  /* yint = yint - 2^52 */
  yint -= one << fif2;
  /* yint = yint / 2 */
  yint >>= one;
  /* yint = yint + 2^61 */
  yint += one << six1;
  
  /* Copy the bits back into a double */
  memcpy(&y, &yint, sizeof y);
  return y;
}
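The result is only a rough estimate, so in practice one would refine it. Here is a small sketch (my own combination, not a standard library routine) that uses bitsqrt as the initial guess and polishes it with two of the Babylonian steps from earlier:

/* Refine the rough bit-trick estimate with two Babylonian steps.
   This pairing is just an illustration of how the methods combine. */
double refinedsqrt(double y)
{
  double x = bitsqrt(y);  /* rough starting point from the exponent trick */
  x = (y / x + x) / 2;    /* first Babylonian step */
  x = (y / x + x) / 2;    /* second step; accurate to several significant digits */
  return x;
}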

Of course, you can also use the built-in sqrt function, which on x86-64 typically compiles down to the sqrtsd instruction, but this post is more for systems without such an instruction and for the curious.