# Once Again, about Greatest Common Divisor, the Euclidean Algorithm and a Little About the History of Algorithms in General. Of Course, with Swift Examples

Algorithms are one of the main topics of programming; they are everywhere (especially in job interviews, haha).

One of the most famous is the so-called Euclidean algorithm, perhaps the most common way to find the greatest common divisor (GCD) of two non-negative integers. It is also a favorite starting point for learning (and teaching) the corresponding sections of mathematics and computer science.

Donald E. Knuth, the renowned author of the treatise The Art of Computer Programming (but not only of it), considers it the first algorithm worthy of the name. Although the procedure had been invented and used before Euclid (who lived in the 4th–3rd centuries B.C.), and was mentioned by Aristotle, who lived a century earlier, it was Euclid who described it in an iterative manner consistent with the contemporary meaning of the word.

The word “algorithm” itself dates back to the name of the Persian mathematician al-Khwarizmi, who lived in approximately the 8th–9th centuries A.D. However, it is believed to have acquired its contemporary meaning only in the 20th century, more precisely in its first decades, with the rise of information technology.

# The Euclidean Algorithm

Just out of curiosity, here is Knuth’s edition of Euclid’s specification of the algorithm:

Proposition. Given two positive integers, find their GCD. Let A and C be the two given positive integers; it is required to find their GCD. If C divides A, then C is a common divisor of C and A, since it also divides itself. And it clearly is in fact the greatest, since no greater number than C will divide C.

But if C does not divide A, then continually subtract the lesser of the numbers A, C from the greater, until some number is left that divides the previous one. This will eventually happen, for if unity is left, it will divide the previous number.

Now let E be the positive remainder of A divided by C; let F be the positive remainder of C divided by E; and suppose that F is a divisor of E. Since F divides E and E divides C − F, F also divides C − F; but it also divides itself, so it divides C. And C divides A − E; therefore F also divides A − E. But it also divides E; therefore it divides A. Hence it is a common divisor of A and C.

I now claim that it is also the greatest. For if F is not the greatest common divisor of A and C, some larger number will divide them both. Let such a number be G.

Now since G divides C while C divides A−E, G divides A−E. G also divides the whole of A, so it divides the remainder E. But E divides C − F; therefore G also divides C − F. And G also divides the whole of C, so it divides the remainder F; that is, a greater number divides a smaller one. This is impossible.

Therefore no number greater than F will divide A and C, so F is their GCD.

Corollary. This argument makes it evident that any number dividing two numbers divides their GCD. Q.E.D.

This specification provides two ways of finding a GCD: by subtraction and by division. In fact, these are exactly the two ways that are widely known nowadays.

Here’s an example Swift implementation of the subtraction one:

```swift
func subtractionGCD(_ firstNumber: Int,
                    _ secondNumber: Int) -> Int {
    if let simpleGCD = simpleCasesGCD(firstNumber, secondNumber) {
        return simpleGCD
    }

    var firstNumber = firstNumber
    var secondNumber = secondNumber

    while firstNumber != 0, secondNumber != 0 {
        if firstNumber > secondNumber {
            firstNumber = firstNumber - secondNumber
        } else {
            secondNumber = secondNumber - firstNumber
        }
    }

    return firstNumber + secondNumber // One of them is 0.
}
```

For the sake of reusability, I extracted the basic cases in which a GCD can be deduced immediately:

```swift
func simpleCasesGCD(_ firstNumber: Int,
                    _ secondNumber: Int) -> Int? {
    if firstNumber == secondNumber {
        return firstNumber // Any of the two.
    }

    if firstNumber == 0 {
        return secondNumber
    }

    if secondNumber == 0 {
        return firstNumber
    }

    return nil
}
```

(That is, if the two numbers are equal, their GCD equals them, too. If one of the numbers is zero, the GCD is the other number, because zero is divisible by any number, obviously with the result of zero.)

Only non-negative numbers can be used as input data. The same methods can be applied to negative numbers, but their absolute values must be used instead. (Of course, a common divisor can be negative, but its absolute value will be a common divisor too, and positive numbers are obviously greater than negative ones. Thus, negative numbers can’t be GCDs.)
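As a sketch of that remark, negative inputs can be handled by taking absolute values before applying any non-negative GCD routine. The wrapper name here is hypothetical, not from any standard library:

```swift
// Sketch: extending GCD to negative integers by taking absolute values.
// `gcdOfAnyIntegers` is a name invented for illustration.
func gcdOfAnyIntegers(_ firstNumber: Int, _ secondNumber: Int) -> Int {
    // A minimal remainder-based GCD for non-negative inputs.
    func nonNegativeGCD(_ a: Int, _ b: Int) -> Int {
        var (a, b) = (a, b)
        while b != 0 {
            (a, b) = (b, a % b)
        }
        return a
    }
    return nonNegativeGCD(abs(firstNumber), abs(secondNumber))
}

print(gcdOfAnyIntegers(-12, 18)) // 6
```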

And this is an example implementation of the division version of the Euclidean algorithm:

```swift
func divisionGCD(_ firstNumber: Int, _ secondNumber: Int) -> Int {
    if let simpleGCD = simpleCasesGCD(firstNumber, secondNumber) {
        return simpleGCD
    }

    var firstNumber = firstNumber
    var secondNumber = secondNumber

    while firstNumber != 0, secondNumber != 0 {
        if firstNumber > secondNumber {
            firstNumber = firstNumber % secondNumber
        } else {
            secondNumber = secondNumber % firstNumber
        }
    }

    return firstNumber + secondNumber // One of them is 0.
}
```

This second version is considered preferable nowadays because it takes noticeably fewer iterations. However, when computers were big and slow, the division operation itself could be a complex procedure, so the first option could turn out to be more efficient.
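To get a feel for the gap in iteration counts, here is a small sketch (the helper names are mine, not from the post) that counts loop iterations of both variants on one deliberately unbalanced pair:

```swift
// Counts loop iterations of the subtraction variant.
func subtractionSteps(_ firstNumber: Int, _ secondNumber: Int) -> Int {
    var (a, b) = (firstNumber, secondNumber)
    var steps = 0
    while a != 0, b != 0 {
        if a > b { a -= b } else { b -= a }
        steps += 1
    }
    return steps
}

// Counts loop iterations of the division (remainder) variant.
func divisionSteps(_ firstNumber: Int, _ secondNumber: Int) -> Int {
    var (a, b) = (firstNumber, secondNumber)
    var steps = 0
    while a != 0, b != 0 {
        if a > b { a %= b } else { b %= a }
        steps += 1
    }
    return steps
}

print(subtractionSteps(1_000_000, 3)) // 333336
print(divisionSteps(1_000_000, 3))    // 2
```

Unbalanced pairs like this are the extreme case, but the direction of the difference is the same for typical random inputs.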

To compare those two algorithm versions, I made several measurements using the beloved `measure(_:)` method of the `XCTestCase` class from Xcode’s native testing framework `XCTest`.

As input data, I generated an array of random number pairs; of course, both methods were tested with the same set of numbers. The numbers ranged from 0 to 9999. I measured the running time for 1, 10, 100, 1,000, 10,000, 100,000, 1,000,000, and 10,000,000 pairs of numbers in a row. The latter made me wait a couple of minutes for a result, so I decided not to go further.

Here’s the simple pair-generating code:

```swift
// Generates 100 pairs.
let pairs = (0..<100).map { _ in
    (Int.random(in: 0..<10000), Int.random(in: 0..<10000))
}
```

Here’s my measurement code:

```swift
func testSubtractionGCDPerformance() {
    measure {
        _ = pairs.map { subtractionGCD($0.0, $0.1) }
    }
}
```

And these are my results:

All in all, it’s highly observable how badly the subtraction method loses to the division one.

## An “Improved” Version of the Euclidean Algorithm

It’s possible to come across a version of the Euclidean algorithm in which, at every step, one of the numbers is replaced not by the division remainder itself but by the difference between that remainder and the second number (whenever the remainder is larger than half of the second number). This version can be implemented like this:

```swift
func improvedDivisionGCD(_ firstNumber: Int,
                         _ secondNumber: Int) -> Int {
    if let simpleGCD = simpleCasesGCD(firstNumber, secondNumber) {
        return simpleGCD
    }

    var firstNumber = firstNumber
    var secondNumber = secondNumber

    while firstNumber != 0, secondNumber != 0 {
        if firstNumber > secondNumber {
            let firstNumberClaim = firstNumber % secondNumber
            if firstNumberClaim > secondNumber / 2 {
                firstNumber = abs(firstNumberClaim - secondNumber)
            } else {
                firstNumber = firstNumberClaim
            }
        } else {
            let secondNumberClaim = secondNumber % firstNumber
            if secondNumberClaim > firstNumber / 2 {
                secondNumber = abs(secondNumberClaim - firstNumber)
            } else {
                secondNumber = secondNumberClaim
            }
        }
    }

    return firstNumber + secondNumber // One of them is 0.
}
```

This modification reduces the number of iterations, but, judging by measurements on my computer, the additional computations and checks neutralize this advantage completely:

## A Little Bit More about Significance of the Euclidean Algorithm

The algorithm also has a geometric version, for finding the greatest common measure of two line segments.

Of course, the algorithm has been generalized to find the GCD of any quantity of numbers, not only two. In a nutshell, if gcd(a, b) is the function that finds the GCD of two numbers a and b, then the GCD of three numbers a, b, and c is gcd(gcd(a, b), c). And so on: the GCD of any count of numbers can be found sequentially. Of course, this holds for any GCD computation method, not only the Euclidean algorithm.
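As a sketch (the function names are mine, for illustration), this sequential folding can be expressed with `reduce`:

```swift
// A pairwise, remainder-based GCD used as the folding step.
func pairwiseGCD(_ a: Int, _ b: Int) -> Int {
    var (a, b) = (a, b)
    while b != 0 {
        (a, b) = (b, a % b)
    }
    return a
}

// gcd(0, x) == x, so 0 works as the neutral starting value.
func gcd(of numbers: [Int]) -> Int {
    numbers.reduce(0, pairwiseGCD)
}

print(gcd(of: [12, 18, 24])) // 6
```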

Another generalization of the algorithm is the one for finding the GCD of polynomials. But that topic is out of the scope of this post.

## Complexity

The algorithm’s time complexity was researched quite some time ago, not too quickly, and by people considerably more learned than your humble servant. Nonetheless, the problem has been solved and the question answered: by Gabriel Lamé.

Long story short, the answer is given by Lamé’s theorem. The number of steps is at most the index of the smallest Fibonacci number exceeding the lesser of the two inputs, minus two. In other words, if u > v > 1 and Fn is the smallest Fibonacci number greater than v, then the algorithm takes at most n − 2 steps.

Fibonacci numbers grow exponentially; thus the time complexity is logarithmic (in the lesser of the two numbers).

That implicitly shows us that the worst input data for the Euclidean algorithm is two consecutive Fibonacci numbers.
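This can be checked directly with a small sketch (the step-counting helper is mine, not from the post): count the division steps for the consecutive Fibonacci pair F(30) = 832040, F(29) = 514229, and compare with a nearby non-Fibonacci pair:

```swift
// Counts the division steps of the classic Euclidean algorithm.
func euclideanSteps(_ firstNumber: Int, _ secondNumber: Int) -> Int {
    var (a, b) = (firstNumber, secondNumber)
    var steps = 0
    while b != 0 {
        (a, b) = (b, a % b)
        steps += 1
    }
    return steps
}

print(euclideanSteps(832_040, 514_229)) // F(30), F(29): 28 steps, i.e. 30 − 2
print(euclideanSteps(832_040, 514_228)) // a nearby pair: only 13 steps
```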

# Binary Algorithm

Talking about GCD computation, it’s worth mentioning another algorithm, proposed in the 1960s by Joseph Stein. It relies on binary arithmetic and doesn’t use division operations at all.

It’s based on four facts:

- If both u and v are even, then gcd(u, v) = 2 × gcd(u ÷ 2, v ÷ 2);
- If u is even and v is odd, then gcd(u, v) = gcd(u ÷ 2, v);
- gcd(u, v) = gcd(u − v, v) (follows from the Euclidean algorithm);
- If both u and v are odd, then u − v is even and |u − v| < max(u, v).

The recursive version of the algorithm can be seen on Wikipedia (in contemporary programming languages it takes only a few lines of code), so I didn’t rewrite it in Swift. But the iterative one can look like this:

```swift
func binaryGCD(_ firstNumber: Int, _ secondNumber: Int) -> Int {
    if let simpleGCD = simpleCasesGCD(firstNumber, secondNumber) {
        return simpleGCD
    }

    var firstNumber = firstNumber
    var secondNumber = secondNumber
    var shift = 0

    // Factor out the common powers of two.
    while (firstNumber | secondNumber) & 1 == 0 {
        shift += 1
        firstNumber >>= 1
        secondNumber >>= 1
    }

    // Make firstNumber odd.
    while firstNumber & 1 == 0 {
        firstNumber >>= 1
    }

    repeat {
        // Make secondNumber odd.
        while secondNumber & 1 == 0 {
            secondNumber >>= 1
        }

        // Keep firstNumber the smaller of the two.
        if firstNumber > secondNumber {
            swap(&firstNumber, &secondNumber)
        }

        secondNumber -= firstNumber
    } while secondNumber != 0

    // Restore the common powers of two.
    return firstNumber << shift
}
```

After similar measurements it turned out that on my computer this tricky algorithm doesn’t meet expectations. Although it works twice as fast as the subtraction version of the Euclidean algorithm, it noticeably loses to the classic division one. Here’s the full table of results:

(It may well be that the algorithm could be rewritten more efficiently, in a way that changes the outcome. But then again, what is the compiler for?!)

By the way, despite the fact that this algorithm got its 15 minutes of fame during the IT era, it was already known in ancient China: its description was found in works dated to the first century A.D., although in terms of “halving” and in the context of fraction reduction.

# Conclusion

To be frank, I didn’t want to prove anything or draw any revolutionary conclusions from this simple “research”. I just wanted to satisfy my curiosity, check the results of different approaches, and warm up my fingers. However, I hope it was interesting to you, too!