Algorithms
An algorithm is a finite sequence of precise instructions for performing a computation or for solving a problem. A pseudocode description of the algorithm for finding the maximum element in a finite sequence follows.
- Algorithm: Finding the maximum element in a finite sequence.

procedure max(a_1, a_2, ..., a_n: integers)
max := a_1
for i := 2 to n
    if max < a_i then max := a_i
return max {max is the largest element}
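The pseudocode above can be sketched in Python as follows (the function name and sample list are mine, not from the text):

```python
def find_max(a):
    """Return the largest element of a non-empty sequence,
    scanning left to right as in the pseudocode above."""
    largest = a[0]
    for x in a[1:]:
        if x > largest:      # current element beats the running maximum
            largest = x
    return largest

print(find_max([3, 8, 1, 9, 4]))  # prints 9
```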
Searching Algorithms
The general searching problem can be described as follows: locate an element x in a list of distinct elements a_1, a_2, ..., a_n, or determine that it is not in the list. The solution is the location of the term in the list that equals x (that is, i is the solution if x = a_i) and is 0 if x is not in the list.
The Linear Search
The first searching algorithm that we will present is called the linear search algorithm.
- Algorithm: The Linear Search Algorithm.

procedure linear search(x: integer, a_1, a_2, ..., a_n: distinct integers)
i := 1
while (i ≤ n and x ≠ a_i)
    i := i + 1
if i ≤ n then location := i
else location := 0
return location {location is the subscript of the term that equals x, or is 0 if x is not found}
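A minimal Python sketch of linear search, keeping the 1-based "location, or 0 if absent" convention of the pseudocode (the function name is mine):

```python
def linear_search(x, a):
    """Return the 1-based position of x in list a, or 0 if x is absent."""
    i = 0
    while i < len(a) and a[i] != x:   # stop at a match or at the end of the list
        i += 1
    return i + 1 if i < len(a) else 0

print(linear_search(5, [2, 5, 7]))  # prints 2
print(linear_search(4, [2, 5, 7]))  # prints 0
```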
The Binary Search
We will now consider another searching algorithm. This algorithm can be used when the list has terms occurring in order of increasing size. This second searching algorithm is called the binary search algorithm.
- Algorithm: The Binary Search Algorithm.

procedure binary search(x: integer, a_1, a_2, ..., a_n: increasing integers)
i := 1 {i is the left endpoint of the search interval}
j := n {j is the right endpoint of the search interval}
while (i < j)
    m := ⌊(i + j)/2⌋
    if x > a_m then i := m + 1
    else j := m
if x = a_i then location := i
else location := 0
return location {location is the subscript of the term equal to x, or is 0 if x is not found}
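The same interval-halving scheme in Python, assuming the input list is sorted in increasing order (function name is mine; the return convention matches the pseudocode):

```python
def binary_search(x, a):
    """a must be sorted in increasing order.
    Return the 1-based position of x in a, or 0 if x is not present."""
    i, j = 0, len(a) - 1          # left and right endpoints of the search interval
    while i < j:
        m = (i + j) // 2          # midpoint, rounded down
        if x > a[m]:
            i = m + 1             # x can only lie in the right half
        else:
            j = m                 # x lies at or before the midpoint
    if a and a[i] == x:
        return i + 1
    return 0

print(binary_search(7, [1, 3, 5, 7, 9]))  # prints 4
```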
Sorting
Suppose that we have a way to order elements of the set. Sorting is putting these elements into a list in which the elements are in increasing order. For instance, sorting the list 7, 2, 1, 4, 5, 9 produces the list 1, 2, 4, 5, 7, 9. Sorting the list d, h, c, a, f (using alphabetical order) produces the list a, c, d, f, h.
The Bubble Sort
The bubble sort is one of the simplest sorting algorithms. It puts a list into increasing order by successively comparing adjacent elements, interchanging them if they are in the wrong order.
- Algorithm: The Bubble Sort.

procedure bubble sort(a_1, ..., a_n: real numbers with n ≥ 2)
for i := 1 to n - 1
    for j := 1 to n - i
        if a_j > a_{j+1} then interchange a_j and a_{j+1}
{a_1, ..., a_n is in increasing order}
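A Python sketch of the bubble sort above; the i-th pass bubbles the i-th largest element into its final position, so each pass can stop one element earlier:

```python
def bubble_sort(a):
    """Sort list a in place by repeatedly swapping adjacent
    out-of-order pairs, as in the pseudocode above."""
    n = len(a)
    for i in range(n - 1):
        for j in range(n - 1 - i):     # elements past n-1-i are already placed
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

print(bubble_sort([7, 2, 1, 4, 5, 9]))  # prints [1, 2, 4, 5, 7, 9]
```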
The Insertion Sort
In general, in the jth step of the insertion sort, the jth element of the list is inserted into its correct position among the first j - 1 elements, which have already been put into increasing order.
- Algorithm: The Insertion Sort.

procedure insertion sort(a_1, a_2, ..., a_n: real numbers with n ≥ 2)
for j := 2 to n
    i := 1
    while a_j > a_i
        i := i + 1
    m := a_j
    for k := 0 to j - i - 1
        a_{j-k} := a_{j-k-1}
    a_i := m
{a_1, ..., a_n is in increasing order}
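In Python the shifting step is usually written by walking backward through the sorted prefix; this sketch (names are mine) behaves the same as the pseudocode:

```python
def insertion_sort(a):
    """Sort a in place: on the j-th step, insert a[j] into its
    correct position among the already-sorted prefix a[0..j-1]."""
    for j in range(1, len(a)):
        key = a[j]
        i = j - 1
        while i >= 0 and a[i] > key:   # shift larger elements one slot right
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key
    return a

print(insertion_sort([7, 2, 1, 4, 5, 9]))  # prints [1, 2, 4, 5, 7, 9]
```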
The Growth of Functions
Big-O notation is used to estimate the number of operations an algorithm uses as its input grows.

Big-O Notation

- Let f and g be functions from the set of integers or the set of real numbers to the set of real numbers. We say that f(x) is O(g(x)) if there are constants C and k such that |f(x)| ≤ C|g(x)| whenever x > k.
Big-O Estimates for Some Important Functions
Polynomials can often be used to estimate the growth of functions. Instead of analyzing the growth of polynomials each time they occur, we would like a result that can always be used to estimate the growth of a polynomial.
- Theorem: Let f(x) = a_n x^n + a_{n-1} x^{n-1} + ... + a_1 x + a_0, where a_0, a_1, ..., a_{n-1}, a_n are real numbers. Then f(x) is O(x^n).
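A quick numeric sanity check of the theorem, using a hypothetical polynomial and witness constants of my own choosing (for x ≥ 1, each lower-order term is at most x^2, which gives C = 9):

```python
# Hypothetical polynomial f(x) = 3x^2 + 5x + 1; the theorem says f(x) is O(x^2).
def f(x):
    return 3 * x**2 + 5 * x + 1

C, k = 9, 1           # witnesses: 3x^2 + 5x + 1 <= 9x^2 whenever x >= 1
assert all(f(x) <= C * x**2 for x in range(k, 1000))
```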
We now give some useful facts that help us determine whether big-O relationships hold between pairs of important functions.

- If f(x) is a polynomial of degree d or less, then f(x) is O(x^d), but the reverse of this relationship does not hold. If d > c > 1, then n^c is O(n^d), but n^d is not O(n^c).
- Every positive power of the logarithm of n to the base b, where b > 1, is big-O of every positive power of n, but the reverse relationship never holds. Whenever b > 1 and c and d are positive, we have (log_b n)^c is O(n^d), but n^d is not O((log_b n)^c).
- Every power of n is big-O of every exponential function of n with a base that is greater than one, but the reverse relationship never holds. Whenever b > 1 and d is positive, we have n^d is O(b^n), but b^n is not O(n^d).
- If we have two exponential functions with different bases greater than one, one of these functions is big-O of the other if and only if its base is smaller or equal. When 1 < b < c, we have b^n is O(c^n), but c^n is not O(b^n).
- If b > 1, we have b^n is O(n!), but n! is not O(b^n).
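One of these facts can be checked numerically. The witnesses below (b = 2, d = 3, C = 1, k = 9) are my own choice of constants for the power-versus-exponential comparison:

```python
# Concrete instance of "every power of n is big-O of b^n when b > 1":
# with b = 2 and d = 3, n^3 <= 2^n holds for every n >= 10.
assert all(n**3 <= 2**n for n in range(10, 200))

# The reverse fails: the ratio 2^n / n^3 grows without bound,
# so no constant C can satisfy 2^n <= C * n^3 for all large n.
ratios = [2**n / n**3 for n in (10, 20, 40)]
assert ratios[0] < ratios[1] < ratios[2]
```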
The Growth of Combinations of Functions
Many algorithms are made up of two or more separate subprocedures. The number of steps used by a computer to solve a problem with input of a specified size using such an algorithm is the sum of the number of steps used by these subprocedures. To give a big-O estimate for the total, it is necessary to find a big-O estimate for each subprocedure and then combine these estimates. Two results make this possible:

- Suppose that f_1(x) is O(g_1(x)) and that f_2(x) is O(g_2(x)). Then (f_1 + f_2)(x) is O(g(x)), where g(x) = max(|g_1(x)|, |g_2(x)|) for all x.
- Suppose that f_1(x) is O(g_1(x)) and f_2(x) is O(g_2(x)). Then (f_1 f_2)(x) is O(g_1(x) g_2(x)).
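As a small illustration of the sum rule, take two hypothetical subprocedure costs (the functions and witness constants below are mine, chosen for the check):

```python
# Hypothetical subprocedure costs: f1(n) = 3n^2 steps and f2(n) = 20n steps.
# f1 is O(n^2) and f2 is O(n), so the total is O(max(n^2, n)) = O(n^2).
def total_steps(n):
    return 3 * n**2 + 20 * n

C, k = 23, 1          # witnesses: 3n^2 + 20n <= 23n^2 whenever n >= 1
assert all(total_steps(n) <= C * n**2 for n in range(k, 1000))
```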
Big-Omega and Big-Theta Notation
Big-O notation gives an upper bound on the size of a function for large inputs, but it does not provide a lower bound. For lower bounds we use big-Omega notation, and for bounds that are tight in both directions we use big-Theta notation.

- Let f and g be functions from the set of integers or the set of real numbers to the set of real numbers. We say that f(x) is Ω(g(x)) if there are constants C and k with C positive such that |f(x)| ≥ C|g(x)| whenever x > k.
- We say that f(x) is Θ(g(x)) if f(x) is O(g(x)) and f(x) is Ω(g(x)).
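A numeric sketch of a big-Omega witness, using a hypothetical function and constants of my choosing:

```python
# Hypothetical f(x) = 8x^3 + 5x^2 + 7, which is Omega(x^3).
def f(x):
    return 8 * x**3 + 5 * x**2 + 7

C, k = 8, 0
# For x > k the lower-order terms are non-negative, so f(x) >= 8x^3.
assert all(f(x) >= C * x**3 for x in range(k + 1, 1000))
```

Since f(x) is also O(x^3) (take C = 20, for instance), f(x) is Θ(x^3).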
Complexity of Algorithms
How much time does an algorithm use to solve a problem? How much computer memory does it need? Questions such as these involve the computational complexity of the algorithm. An analysis of the time required to solve a problem of a particular size involves the time complexity of the algorithm. An analysis of the computer memory required involves the space complexity of the algorithm. Considerations of the time and space complexity of an algorithm are essential when algorithms are implemented. It is important to know whether an algorithm will produce an answer in a microsecond, a minute, or a billion years.
Time Complexity
The time complexity of an algorithm can be expressed in terms of the number of operations used by the algorithm when the input has a particular size. The operations used to measure time complexity can be the comparison of integers, the addition of integers, the multiplication of integers, the division of integers, or any other basic operation.
Worst-Case Complexity
By the worst-case performance of an algorithm, we mean the largest number of operations needed to solve the given problem using this algorithm on input of specified size. Worst-case analysis tells us how many operations an algorithm requires to guarantee that it will produce a solution.
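For example, for linear search on a list of n elements, the worst case is a value that is absent, which forces every element to be examined. The instrumented sketch below (names are mine) counts element comparisons to make this concrete:

```python
def linear_search_count(x, a):
    """Linear search that also counts element comparisons (x vs a[i]).
    Returns a (1-based location or 0, comparison count) pair."""
    comparisons = 0
    for i, v in enumerate(a):
        comparisons += 1
        if v == x:
            return i + 1, comparisons
    return 0, comparisons

# Worst case: x is absent, so all n = 4 elements are examined.
loc, cost = linear_search_count(99, [2, 5, 7, 11])
print(loc, cost)  # prints 0 4
```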
Average-Case Complexity
Another important type of complexity analysis, besides worst-case analysis, is called average-case analysis. In this type of analysis, we find the average number of operations used to solve the problem over all possible inputs of a given size. Average-case time complexity analysis is usually much more complicated than worst-case analysis.