Introduction to the Basics of Algorithms in Computer Science
An overview of the definition, history, and practical applications of algorithms in computer science with reference to the Python programming language.
The subject of algorithms was, until recently, known only to software engineers, mathematicians, and scientists. Today, however, the term is universally known thanks to its frequent use by behemoths of the tech world like Google, Facebook, and Amazon.
Just about everyone has heard of ‘the algorithm’ when it comes to internet searches, suggestions for content on YouTube or TikTok, and of course the controversial notion of ‘censorship’ on some large platforms.
So what is an algorithm? Where did the idea come from? Where are they used today? And is Python a suitable language for their construction?
What is an algorithm?
There is a strict definition and a broader one. The more precise one goes something like this:
‘a procedure for solving a mathematical problem (as of finding the greatest common divisor) in a finite number of steps that frequently involves repetition of an operation’
The more common meaning of the word is any step-by-step process that solves a problem or achieves a specific goal. In the world of computing, algorithms are the very lifeblood of everything useful. Computers are essentially machines that carry out instructions via defined processes, even when those instructions involve learning to improve the process itself.
Like any machine, a computer follows your instructions to the letter, so if you want a computer to do something, you need to think through all of the logical steps and map these out as clear instructions for your computer.
Over the millennia various algorithms have been created to solve specific problems, meaning that you don’t need to reinvent the wheel for every task you encounter. A solid knowledge of algorithms will save you time, sweat, and tears by giving you convenient shortcuts to a variety of common problems.
How does an algorithm work?
Every algorithm is designed with a specific goal in mind, and each is as unique as its defined outcome. At the level of writing code to implement algorithms, the smallest building blocks tend to be if-then statements that bring you to the solution in the most efficient and elegant manner possible.
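To make that concrete, here is a minimal sketch of our own (not taken from any library) showing how a few if-then building blocks combine into a complete algorithm that finds the largest value in a list:

```python
def find_max(values):
    """Return the largest item in a non-empty list, step by step."""
    largest = values[0]           # step 1: start with the first item
    for value in values[1:]:      # step 2: examine every remaining item
        if value > largest:       # the if-then building block
            largest = value       # remember the new front-runner
    return largest                # final step: report the answer
```

Each pass through the loop is one small, unambiguous instruction, which is exactly how an algorithm maps human reasoning onto a machine.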
Where there isn’t a ready-made algorithm, software engineers plot out the steps in pseudocode at a granular level, covering every possible eventuality, including exception handling for dead ends and erroneous inputs.
An algorithm is essentially a representation of human thought, laid out as a series of steps for a computer to copy. The advantage of a computer is that it executes these steps far more quickly than the human brain. And once the algorithm is set, it always executes in exactly the same way: unlike us, it never misses a step.
A brief history of algorithms
Although the word entered popular consciousness only in the last 20 years or so, its roots are ancient. The modern idea of algorithms as something related exclusively to computer science dates back only to the 1950s with the arrival of the first commercially useful computers.
However, the earliest examples date as far back as around 1600 BC, when Babylonian mathematicians worked out a method for calculating square roots from converging estimates. Some 1,300 years later, around 300 BC, Euclid developed an algorithm to calculate the greatest common divisor of pairs of numbers.
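Both of these ancient methods are short enough to sketch in modern Python (a rough illustration in today's notation, of course, not how they were originally recorded):

```python
def babylonian_sqrt(n, tolerance=1e-10):
    """Approximate the square root of a positive n by averaging converging estimates."""
    estimate = n / 2 if n > 1 else 1.0        # any positive starting guess works
    while abs(estimate * estimate - n) > tolerance:
        # average the guess with n / guess; the two converge on the root
        estimate = (estimate + n / estimate) / 2
    return estimate

def euclid_gcd(a, b):
    """Euclid's algorithm: the GCD is unchanged by replacing (a, b) with (b, a mod b)."""
    while b:
        a, b = b, a % b
    return a
```

For example, `euclid_gcd(48, 18)` repeatedly swaps and takes remainders (48, 18 → 18, 12 → 12, 6 → 6, 0) and returns 6.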
As you can see, this is an old, old, old idea.
In the 9th century AD, the Persian mathematician Muhammad ibn Musa al-Khwarizmi wrote the foundational treatise on algebra, and a 12th-century Latin rendering of his name as ‘Algorithmi’ gave us the name we use today for this method of solving problems by process.
The first algorithm for a computing machine
Ada Lovelace (née Byron), daughter of the infamous literary man of scandal Lord Byron, friend of Charles Babbage (inventor of the Analytical Engine), and inspiration for the Cardano cryptocurrency ADA, published the first algorithm intended to be carried out by a machine in 1843. It is here that algorithms and computer science meet for the first time.
In 1847 George Boole brought logic and calculation together in Boolean algebra, marrying algorithmic thinking to what would become computer science for all time.
In 1936 Alan Turing, the famous British code breaker, developed the Turing machine, the true forerunner of modern computers. His attempt to capture the essence of how humans calculate, rather than devising a new process for each task, led to modern general-purpose computing.
Since the 1960s the use of algorithms in computing has become ubiquitous. Notable contributions include the RSA encryption algorithm for secure data transmission in 1977, the PageRank algorithm for internet search in 1998, and the distributed hash table in 2001, which brought fault tolerance, scalability, and decentralization to lookup services.
What are the main types of computer algorithm?
Since the 1950s several families of algorithms have emerged as the most useful and commonly used.
- Sorting and Searching algorithms order items in a list or search for values within an array.
- Recursive algorithms call themselves on ever smaller versions of the problem until they reach a solution.
- Divide and conquer algorithms break a problem into smaller parts first, solve all the subproblems, and then combine them back together for the final answer.
- Dynamic Programming algorithms remember the solutions to prior problems and use them when solving more complex problems in future.
- Greedy algorithms prioritize immediate benefit before moving on to the next stage of a solution.
- Brute force algorithms attempt all possible solutions in sequence until an answer is found.
- Backtracking algorithms are designed to recursively retreat to an earlier step in the process if a solution or part of a solution fails. Only the most recent step will be undone before moving on to the next alternative.
- Graph algorithms operate on data modelled as nodes connected by edges, for example to find paths between them or to search for data.
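Two of these families can be sketched side by side in Python: a plain recursive solution to the Fibonacci sequence, and a dynamic-programming version that remembers the solutions to prior subproblems. This is our own illustration; the function names are ours:

```python
def fib_recursive(n):
    """Recursive: the function calls itself until it reaches a base case."""
    if n < 2:                     # base case stops the recursion
        return n
    return fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_dynamic(n, memo=None):
    """Dynamic programming: cache each subproblem so it is solved only once."""
    if memo is None:
        memo = {}
    if n < 2:
        return n
    if n not in memo:             # solve each subproblem only the first time
        memo[n] = fib_dynamic(n - 1, memo) + fib_dynamic(n - 2, memo)
    return memo[n]
```

The recursive version re-solves the same subproblems over and over and becomes impractically slow around n = 40, while the dynamic-programming version handles n = 50 instantly, which is exactly the point of remembering prior solutions.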
The most popular algorithms in computer science
Many problems and tasks are so universal that any good software engineer will instantly reach for a known algorithm to move a project forward.
Here are some of the most commonly used in real-world programming:
- Quick Sort is a divide-and-conquer method
- Heap Sort is a comparison-based sorting technique
- Bucket Sort is a scatter-order-gather approach to sorting
- Counting Sort orders items by counting the occurrences of each distinct key value
- Binary Search is a method that divides an array in half with each guess, limiting the number of tries to find a correct answer
- Breadth-First Search (BFS) traverses a tree or graph level by level, visiting all of a node’s neighbours before moving deeper.
- Depth-First Search (DFS) traverses a tree or graph by following each branch as far as it goes before backtracking.
- Hashing looks up data directly by computing an index from a key. It is the most widely used technique today.
- KMP, the Knuth-Morris-Pratt algorithm, is used to match short patterns within longer strings. A good example is using Ctrl+F to find a word or phrase within a document.
- Regular Expressions are used to validate and match strings, for example when parsing URLs.
- The Sieve of Eratosthenes, one of the oldest algorithms, dating back to ancient Greece, finds all prime numbers up to a given limit.
- The Fermat primality test quickly checks whether a number is probably prime.
- The Miller-Rabin primality test is a more reliable probabilistic test of primality.
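Two entries from this list are compact enough to sketch in Python. These are illustrative implementations of our own, not library code:

```python
def binary_search(sorted_items, target):
    """Halve the search range with every comparison; return an index or -1."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1          # target can only be in the upper half
        else:
            high = mid - 1         # target can only be in the lower half
    return -1

def sieve_of_eratosthenes(limit):
    """Return all primes up to limit by crossing out multiples."""
    is_prime = [True] * (limit + 1)
    is_prime[0:2] = [False, False]           # 0 and 1 are not prime
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            # every multiple of a prime, starting at p*p, is composite
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
    return [n for n, prime in enumerate(is_prime) if prime]
```

For example, `sieve_of_eratosthenes(20)` yields the primes 2, 3, 5, 7, 11, 13, 17, and 19.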
Examples of real uses in programming
We’ve already seen some examples of the real-world uses of algorithmic programming, but the truth is that almost all applications of computer science from the most advanced to the most commonplace involve algorithms.
The primality algorithms above may seem abstract, but they are crucial to online security and cryptography. Encryption and decryption depend on processes like these. The hash tables referred to above are used to store IP addresses in routers, making hashing indispensable to the functioning of the internet.
All of the search methods mentioned above are used by search engines, in AI, and in the GPS in your car or on Google Maps.
Every time you sort items in a list by price, number of reviews, popularity, etc., sorting algorithms come into play.
Once you get into the weeds with algorithms, you will need to measure their efficiency. When doing so, bear in mind that the most thorough answer to a question is not always the best solution to a problem.
Route-finding algorithms illustrate this well. If a piece of software examines every possible route in a country, the calculation may take so long that the app hangs for hours. If, however, the algorithm is limited to highways and excludes smaller roads, a solution can be delivered in seconds.
The resulting route will be perfectly good, although possibly not the absolute best one.
Other factors to take into account are the speed of the computer running the code and the speed of the code itself. A bad algorithm might run faster on a good computer than a good algorithm on a slow machine.
Asymptotic analysis is the method used to compare the efficiency of algorithms independently of platforms, languages, compilers, and so on. We use asymptotic notation, such as Big O notation, to describe how run time grows as a function of the amount of data searched, sorted, or calculated, rather than the clock time in which a solution is delivered.
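As a rough illustration of growth rates, the sketch below (our own, with our own function names) counts comparisons rather than measuring clock time: a linear scan of n sorted items may need up to n comparisons, while binary search needs only about log2(n):

```python
def count_linear_steps(sorted_items, target):
    """Count the comparisons a linear scan makes before finding target."""
    steps = 0
    for item in sorted_items:
        steps += 1
        if item == target:
            break
    return steps

def count_binary_steps(sorted_items, target):
    """Count the comparisons binary search makes before finding target."""
    low, high, steps = 0, len(sorted_items) - 1, 0
    while low <= high:
        steps += 1
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return steps
        elif sorted_items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return steps

items = list(range(1_000_000))
# Worst case for the linear scan: a million comparisons.
print(count_linear_steps(items, 999_999))
# Binary search on the same input: about 20 comparisons.
print(count_binary_steps(items, 999_999))
```

This is why we say the linear scan grows as O(n) and binary search as O(log n): doubling the data doubles the scan’s worst case but adds only one comparison to the binary search.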
Algorithms and Python
Every aspiring computer scientist needs to get to grips with algorithms in a practical way with a programming language. In our view, Python is hands down the best choice for many reasons.
Python is famously easy to read and learn. The syntax is simple and intuitive without any of the braces or semicolons that can throw the beginner off in other languages. Python code actually resembles the pseudocode that you might use in a computer science class to design programs.
This simplicity does not mean lack of functionality. Algorithmic methods such as recursion, divide and conquer, backtracking, graph, sort and search are all easily handled in Python without any of the headaches typically encountered by beginners starting with languages like C++ or even Java.
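As one small example of that pseudocode-like readability, a divide-and-conquer quick sort can be written in a handful of lines. This is a simple teaching version of ours, not the in-place variant used in production libraries:

```python
def quick_sort(items):
    """Divide around a pivot, conquer each half, combine the results."""
    if len(items) <= 1:                                 # base case: nothing to sort
        return items
    pivot = items[0]
    smaller = [x for x in items[1:] if x <= pivot]      # divide: left of pivot
    larger = [x for x in items[1:] if x > pivot]        # divide: right of pivot
    return quick_sort(smaller) + [pivot] + quick_sort(larger)  # combine
```

Reading the function aloud almost recites the algorithm itself, which is exactly what makes Python such a friendly vehicle for studying these ideas.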
Algorithms go to the very heart of computer science and cannot be ignored. Without a thorough knowledge of these, a developer is working blind and liable to find themselves reinventing wheels, badly, over and over.
If you are just starting out with this fascinating and ancient subject choosing Python as your study language is an excellent choice that will save you time and effort further on down the road.
If you have any questions about becoming a Python developer or how to implement algorithms in your Python code, don’t hesitate to reach out. Make sure to check out our other posts on Python and computer science if you are interested in learning more.