Mandelbrot set

The Mandelbrot set and its complex beauty.
# Formula

At its simplest, the Mandelbrot set is defined iteratively by the following formula:

`z_(n+1) = (z_n)^2 + c`

# Generation

The rules for generating the Mandelbrot set are surprisingly simple; we begin by defining the following three constants:

- `W` = the width of the plane
- `H` = the height of the plane
- `Z` = the zoom factor

Then for each value of `x={1,2,...,W}` and `y={1,2,...,H}`, we create a complex number `c` as follows:
`c = (2x - W) / (W*Z) + ((2y - H) / (H*Z))i`
The value of `c` is then assigned to a new variable, called `z_1`, as:
`z_1 = c`
Then we define a new constant, called `I`, which represents the total number of iterations that we want to perform (usually, in the range `[30,150]`).
`I = 100`
Then we choose a limit `L`, which will stop the iteration early if a certain value `|z_n|` (where `1 <= n <= I`) exceeds the value of `L`.
`L = 2`
Now we can begin the iteration, with `n={1,2,...,I}`.
`z_(n+1) = (z_n)^2 + c`
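The whole procedure can be sketched in Python as below. The values `W=78`, `H=24` and the `'#'`/space output characters are illustrative choices only; the formulas for `c` and `z` follow the text above.

```python
# A minimal sketch of the generation rules described above,
# printing an ASCII rendering of the set.

W, H, Z = 78, 24, 1    # width, height and zoom factor of the plane
I = 100                # total number of iterations
L = 2                  # limit for |z_n| that stops the iteration early

def mandelbrot(c, max_iter=I, limit=L):
    """Return how many iterations pass before |z| exceeds the limit,
    or max_iter if it never escapes."""
    z = c              # z_1 = c
    for n in range(max_iter):
        if abs(z) > limit:
            return n
        z = z*z + c    # z_(n+1) = (z_n)^2 + c
    return max_iter

for y in range(1, H + 1):
    row = []
    for x in range(1, W + 1):
        c = complex((2*x - W) / (W*Z), (2*y - H) / (H*Z))
        row.append('#' if mandelbrot(c) == I else ' ')
    print(''.join(row))
```

Points that never escape (here, those reaching the full `I` iterations) are considered part of the set and printed as `#`.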

RSA algorithm

RSA is a practical public-key cryptographic algorithm, which is widely used on modern computers to communicate securely over large distances.

The acronym stands for its inventors, Ron Rivest, Adi Shamir and Leonard Adleman, who first published the algorithm in 1978.

# Algorithm overview

1. Choose `p` and `q` as distinct prime numbers
2. Compute `n` as `n = p*q`
3. Compute `\phi(n)` as `\phi(n) = (p-1) * (q-1)`
4. Choose `e` such that `1 < e < \phi(n)` and `e` and `\phi(n)` are coprime
5. Compute the value of `d` as `d ≡ e^(-1) mod \phi(n)`
6. The public key is `(e, n)`
7. The private key is `(d, n)`
8. The encryption of `m` as `c` is `c ≡ m^e mod n`
9. The decryption of `c` as `m` is `m ≡ c^d mod n`
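The steps above can be traced with a toy example. The primes `p=61`, `q=53` and the exponent `e=17` are the classic textbook values, chosen here purely for illustration; real keys use primes that are hundreds of digits long.

```python
# A toy walk-through of the RSA key-generation, encryption and
# decryption steps, using deliberately tiny primes.

p, q = 61, 53
n = p * q                      # n = p*q = 3233
phi = (p - 1) * (q - 1)        # phi(n) = (p-1)*(q-1) = 3120
e = 17                         # 1 < e < phi(n), coprime with phi(n)
d = pow(e, -1, phi)            # d = e^(-1) mod phi(n)

m = 65                         # message, with 0 <= m < n
c = pow(m, e, n)               # encryption: c = m^e mod n
m2 = pow(c, d, n)              # decryption: m = c^d mod n

print(f"public key:  ({e}, {n})")
print(f"private key: ({d}, {n})")
print(f"m = {m} -> c = {c} -> decrypted = {m2}")
```

The three-argument `pow` performs modular exponentiation, and `pow(e, -1, phi)` (Python 3.8+) computes the modular inverse required for `d`.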
# Generating `p` and `q`

In order to generate a public and a private key, the algorithm requires two distinct prime numbers `p` and `q`, which are randomly chosen and should have roughly the same number of bits. By today's standards, it is recommended that each prime have at least 2048 bits.

In Perl, there is a mo…


In this post we're going to take a look at what infinitesimals are and why they are important.
Infinitesimals are an abstract concept of very small values that are impossible to represent quantitatively in a finite system.

# Definition

We define one infinitesimal as:

`ε = \lim_{n \to \infty}\frac{1}{n}`

with the inequality: `ε > 0`.

In general, the following inequalities hold true:

`\frac{0}{n} < \frac{1}{n} < \frac{2}{n} < ... < \frac{n}{n}`

as `n \to \infty`.

# Appearance

Infinitesimals appear in some fundamental limits, one of which is the limit for the natural exponential function:

`\lim_{n \to \infty}(1 + \frac{x}{n})^n = \exp(x)`

Using our infinitesimal notation, we can rewrite the limit as:

`\lim_{n \to \infty}(1 + ε*x)^n = \exp(x)`

where, for `x=1`, we have:

`\lim_{n \to \infty}(1 + ε)^n = e`.
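This limit is easy to check numerically; a quick Python sketch (the sample values of `n` are arbitrary):

```python
# Numeric illustration: as n grows, (1 + 1/n)^n -- which is
# (1 + ε)^n in the infinitesimal notation above -- approaches e.
import math

for n in (10, 1000, 100000):
    print(n, (1 + 1/n)**n)

print("e =", math.e)
```

The error shrinks roughly like `e/(2n)`, so the convergence is slow but visible even at these modest values of `n`.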

# Debate

There was (and, probably, still is) a debate in mathematics over whether the following limit:

`\lim_{n \to \infty}\frac{1}{n}`

is `0` or greater than `0`.


Is the Riemann hypothesis true?

In 2001, Jeffrey Lagarias proved that the Riemann hypothesis is equivalent to the following statement (proof here):

`\sigma(n) <= H_n + \ln(H_n)*\exp(H_n)`

with strict inequality for `n > 1`, where `\sigma(n)` is the sum of the positive divisors of `n` and `H_n` is the `n`th harmonic number.
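A numeric spot-check of the Lagarias inequality is straightforward (a sanity check for small `n`, of course, not a proof; the bound `1000` is an arbitrary choice):

```python
# Check sigma(n) <= H_n + ln(H_n)*exp(H_n) for n up to 1000,
# with strict inequality for n > 1.
import math

def sigma(n):
    """Sum of the positive divisors of n (naive trial division)."""
    return sum(d for d in range(1, n + 1) if n % d == 0)

H = 0.0
for n in range(1, 1001):
    H += 1.0 / n                         # H_n, the n-th harmonic number
    rhs = H + math.log(H) * math.exp(H)
    assert sigma(n) <= rhs
    assert n == 1 or sigma(n) < rhs      # equality only at n = 1
```

At `n = 1` both sides equal `1`, which is why the inequality is strict only for `n > 1`.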

In 1913, Grönwall showed that the asymptotic growth rate of the sigma function can be expressed by:

`\lim_{n \to \infty}\frac{\sigma(n)}{n \ln\ln n} = \exp(\gamma)`

where lim is the limit superior.

Relying on these two theorems, we can show that:

`\lim_{n \to \infty}\frac{\exp(\gamma) * n \ln \ln n}{H_n + \ln(H_n) * \exp(H_n)} = 1`

with strict inequality for each `1 < n < \infty` (see Wolfram|Alpha):

`\exp(\gamma) * n \ln \ln n < H_n + \ln(H_n) * \exp(H_n)`

If the Riemann hypothesis is true, then for each `n ≥ 5041`:

`\sigma(n) <= \exp(\gamma) * n \ln \ln n`

By using the usual definition of the `\gamma` constant:

`\gamma = \lim_{n \to \infty}(H_n - \ln n)`

we can reformulate the result as (see Wolfram|Alpha):

Euler-Mascheroni constant

In this post we're going to take a look at a mysterious mathematical constant, called the Euler–Mascheroni constant, and its fascinating role in harmonic and prime numbers.

# Is `\gamma` transcendental?

Although this constant has a fairly simple definition, it is currently not known whether it is rational or irrational; it is widely believed by mathematicians to be transcendental, which would also imply that it is irrational.
It is usually defined as:

`\gamma = \lim_{n \to \infty}(H_n - \ln n)`

where `H_n` is the `n`th harmonic number, which is defined as:

`H_n = \sum_{k=1}^{n}\frac{1}{k}`
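The definition translates directly into a numeric estimate; a short Python sketch (the sample values of `n` are arbitrary):

```python
# Estimate gamma from the definition H_n - ln(n).  The convergence
# is slow: the error behaves roughly like 1/(2n).
import math

def gamma_estimate(n):
    H = sum(1.0 / k for k in range(1, n + 1))   # H_n
    return H - math.log(n)

for n in (10, 1000, 100000):
    print(n, gamma_estimate(n))
# gamma = 0.57721566...
```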
# "Proving" that `\gamma` is transcendental There exists a proof that `\gamma` is transcendental, but the proof is very subtle:

- By the Lindemann–Weierstrass theorem, the natural logarithm of any positive algebraic number other than 1 is a transcendental number.
- The `n`th harmonic number is rational.

As all rational numbers are algebraic numbers, the definition of `\gamma` reduces to:

Why doesn't 0÷0 have a value?

In this short post we're going to take a look at why `0/0` is really undefined.

# Overview

It's an old mathematical issue, which has been debated many times over the centuries, mostly with the same result: `0/0` does not have a defined value. Now, let's take a look at why this is true.

# Illustration

To illustrate this, let's consider the following sum:

`\sum_{k=0}^{n}b^k = b^0 + b^1 + b^2 + ... + b^n`

Deriving a closed form for this sum, we get:

`\sum_{k=0}^{n}b^k = (b^(n+1) - 1) / (b-1)`

For example, when `b=3` and `n=4`, we have:

`3^0 + 3^1 + 3^2 + 3^3 + 3^4 = (3^(4+1) - 1) / (3-1)`

All good so far. However, if we set `b=1`, we have a special case:

`(1^(n+1) - 1) / (1-1) = 0/0`

We know that `1^k=1` for any `k>=0`, therefore:

`\sum_{k=0}^{n}1^k = n+1`

but when `b=1`, our closed form evaluates to `0/0` for any value of `n`. From this we can conclude that `0/0` does not have a definite value.
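The two sides of the closed form can be compared numerically; a short Python sketch:

```python
# Evaluate the sum directly and via the closed form: they agree for
# every b != 1, while b = 1 turns the closed form into 0/0.

def direct_sum(b, n):
    return sum(b**k for k in range(n + 1))

def closed_form(b, n):
    return (b**(n + 1) - 1) / (b - 1)   # undefined (0/0) when b = 1

print(direct_sum(3, 4), closed_form(3, 4))   # the b=3, n=4 example
print(direct_sum(1, 4))                      # b=1 gives n+1 = 5
try:
    closed_form(1, 4)
except ZeroDivisionError:
    print("closed form at b=1: 0/0 is undefined")
```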

Taking this example a little bit further, we can also show that…

Symbolic mathematical evaluations

Math is really fun, especially when it is done symbolically.

# Overview

This time we're taking a look at some interesting relations and identities for fractions, which will give us an insight into what is really going on in, for example, an infinite sum, and how we can analyze it by evaluating it symbolically.

There is a useful and interesting identity for summing two fractions:
$$\frac{a}{b} + \frac{c}{d} = \frac{ad + cb}{bd}$$
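As a quick sanity check, the identity transcribes directly into code (the function name is just for illustration):

```python
# Add a/b + c/d using the identity (ad + cb) / (bd), with exact
# integer arithmetic; Fraction is used only to verify the result.
from fractions import Fraction

def add_fractions(a, b, c, d):
    """Return (numerator, denominator) of a/b + c/d, unreduced."""
    return a*d + c*b, b*d

num, den = add_fractions(1, 2, 1, 3)    # 1/2 + 1/3
print(num, den)                          # 5, 6
assert Fraction(num, den) == Fraction(1, 2) + Fraction(1, 3)
```

Note that the identity does not reduce the result: `2/4 + 2/4` yields `16/16`, not `1/1`, which is exactly the kind of structure that becomes visible when the sum is evaluated symbolically.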

The question is: can we extend it to three fractions? What about ten? What about an infinite number of them?

Well, yes, this is possible, and it's actually quite easy to find a general formula for this. To give you a taste of how this can be analyzed, let's sum four fractions (using `a` for each numerator and `b` for each denominator, just for illustration; in general they can have different values):

$$\frac{a}{b} + \frac{a}{b} + \frac{a}{b} + \frac{a}{b} = \frac{b(b(b(a) + ab) + abb) + abbb}{bbbb}$$

Do you see the pattern? There is a beautiful recursive relation w…

Image edge detection

Edge detection is a fundamental tool in image processing, machine vision and computer vision.

In this post we're going to take a look at a very basic edge detection algorithm, which takes into account a predefined tolerance value that can be adjusted to detect arbitrary fine details in a given image.

# Algorithm

The algorithm is extremely simple; it begins by iterating over each pixel in the image, then, for each pixel, it checks its neighbors.

|   A   |   B   |   C   |
|   D   |   *   |   E   |
|   F   |   G   |   H   |

where `*` marks the current pixel.

If any two opposite neighbors differ in intensity by more than a predefined tolerance, then it marks the current pixel as part of an edge.
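A minimal sketch of this idea in Python, assuming a grayscale image stored as a 2D list of intensities; the opposite-neighbor pairs in the diagram's labeling are `(A,H)`, `(B,G)`, `(C,F)` and `(D,E)`, and the default tolerance value is an arbitrary choice:

```python
# Mark a pixel as an edge if any two opposite neighbors differ in
# intensity by more than the tolerance.

def detect_edges(img, tolerance=20):
    h, w = len(img), len(img[0])
    # one offset from each opposite pair; the opposite is (-dy, -dx)
    pairs = [(-1, -1), (-1, 0), (-1, 1), (0, -1)]
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):          # border pixels are skipped
        for x in range(1, w - 1):
            for dy, dx in pairs:
                diff = abs(img[y + dy][x + dx] - img[y - dy][x - dx])
                if diff > tolerance:
                    edges[y][x] = 1
                    break
    return edges

# usage: a flat region with a vertical intensity step down the middle
img = [[0]*4 + [255]*4 for _ in range(8)]
edges = detect_edges(img)
for row in edges:
    print(row)
```

Lowering the tolerance makes the detector pick up finer details; raising it keeps only the strongest edges.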

The comparison is done by taking the absolute difference of each two opposite neighbors and mapping it t…