Algorithms

An algorithm is a finite sequence of precise instructions for performing a computation or for solving a problem. A pseudocode description of the algorithm for finding the maximum element in a finite sequence follows.

  • Algorithm: Finding the maximum element in a finite sequence.
    procedure max(a_1, a_2, ..., a_n: integers)
    max := a_1
    for i := 2 to n
        if max < a_i then max := a_i
    return max {max is the largest element}
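The pseudocode above can be rendered as a short Python sketch (the function name `find_max` is my own; Python's built-in `max` already does this):

```python
def find_max(a):
    """Return the largest element of a nonempty sequence of integers."""
    largest = a[0]            # tentative maximum: the first term
    for x in a[1:]:           # compare against each remaining term
        if largest < x:       # update when a larger term is found
            largest = x
    return largest
```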

Searching Algorithms

The general searching problem can be described as follows: locate an element x in a list of distinct elements a_1, a_2, ..., a_n, or determine that it is not in the list. The solution to this search problem is the location of the term in the list that equals x (that is, i is the solution if x = a_i) and is 0 if x is not in the list.

The first searching algorithm that we will present is called the linear search algorithm.

  • Algorithm: The Linear Search Algorithm.
    procedure linear search(x: integer, a_1, a_2, ..., a_n: distinct integers)
    i := 1
    while (i ≤ n and x ≠ a_i)
        i := i + 1
    if i ≤ n then location := i
    else location := 0
    return location {location is the subscript of the term that equals x, or 0 if x is not found}
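A minimal Python sketch of this procedure, using 0-based indexing internally but returning the 1-based location used in the pseudocode:

```python
def linear_search(x, a):
    """Return the 1-based position of x in the list a, or 0 if x is absent."""
    i = 0
    while i < len(a) and a[i] != x:   # scan until x is found or the list ends
        i += 1
    return i + 1 if i < len(a) else 0
```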

We will now consider another searching algorithm. This algorithm can be used when the list has terms occurring in order of increasing size. This second searching algorithm is called the binary search algorithm.

  • Algorithm: The Binary Search Algorithm.
    procedure binary search(x: integer, a_1, a_2, ..., a_n: increasing integers)
    i := 1 {i is the left endpoint of the search interval}
    j := n {j is the right endpoint of the search interval}
    while i < j
        m := ⌊(i + j)/2⌋
        if x > a_m then i := m + 1
        else j := m
    if x = a_i then location := i
    else location := 0
    return location
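A Python sketch of this version of binary search, which narrows the interval [i, j] until one candidate remains (Python's standard `bisect` module offers a production alternative):

```python
def binary_search(x, a):
    """Return the 1-based position of x in the sorted list a, or 0 if absent."""
    if not a:
        return 0
    i, j = 0, len(a) - 1        # left and right endpoints (0-based here)
    while i < j:
        m = (i + j) // 2        # midpoint, rounding down
        if x > a[m]:
            i = m + 1           # x, if present, lies in the right half
        else:
            j = m               # x, if present, lies in the left half
    return i + 1 if a[i] == x else 0
```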

Sorting

Suppose that we have a way to order elements of the set. Sorting is putting these elements into a list in which the elements are in increasing order. For instance, sorting the list 7, 2, 1, 4, 5, 9 produces the list 1, 2, 4, 5, 7, 9. Sorting the list d, h, c, a, f (using alphabetical order) produces the list a, c, d, f, h.

The Bubble Sort

The bubble sort is one of the simplest sorting algorithms. It puts a list into increasing order by successively comparing adjacent elements, interchanging them if they are in the wrong order.

  • Algorithm: The Bubble Sort.
    procedure bubble sort(a_1, ..., a_n: real numbers with n ≥ 2)
    for i := 1 to n − 1
        for j := 1 to n − i
            if a_j > a_{j+1} then interchange a_j and a_{j+1}
    {a_1, ..., a_n is in increasing order}
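The same procedure as a Python sketch; after pass i, the i largest elements have "bubbled" to the end of the list:

```python
def bubble_sort(a):
    """Sort the list a in place into increasing order by bubble sort."""
    n = len(a)
    for i in range(n - 1):            # pass i fixes the (i+1)-th largest element
        for j in range(n - 1 - i):    # compare each remaining adjacent pair
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]   # interchange the out-of-order pair
    return a
```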
The Insertion Sort

In general, in the jth step of the insertion sort, the jth element of the list is inserted into the correct position in the list of the previously sorted j − 1 elements. To insert the jth element in the list, a linear search technique is used; the jth element is successively compared with the already sorted j − 1 elements at the start of the list until the first element that is not less than this element is found, or until it has been compared with all j − 1 elements; the jth element is inserted in the correct position so that the first j elements are sorted. The algorithm continues until the last element is placed in the correct position relative to the already sorted list of the first n − 1 elements.

  • Algorithm: The Insertion Sort.
    procedure insertion sort(a_1, ..., a_n: real numbers with n ≥ 2)
    for j := 2 to n
        i := 1
        while a_j > a_i
            i := i + 1
        m := a_j
        for k := 0 to j − i − 1
            a_{j−k} := a_{j−k−1}
        a_i := m
    {a_1, ..., a_n is in increasing order}
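A Python sketch of insertion sort using this linear-search insertion strategy; `pop`/`insert` stand in for the pseudocode's element-shifting loop:

```python
def insertion_sort(a):
    """Sort the list a in place into increasing order by insertion sort."""
    for j in range(1, len(a)):        # invariant: a[:j] is already sorted
        i = 0
        while a[j] > a[i]:            # linear search for the insertion point
            i += 1
        a.insert(i, a.pop(j))         # move a[j] to position i, shifting the rest right
    return a
```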

The Growth of Functions

Big-O notation is used extensively to estimate the number of operations an algorithm uses as its input grows. With the help of this notation, we can determine whether it is practical to use a particular algorithm to solve a problem as the size of the input increases. Furthermore, using big-O notation, we can compare two algorithms to determine which is more efficient as the size of the input grows.

Big-O Notation

Let f and g be functions from the set of integers or the set of real numbers to the set of real numbers. We say that f(x) is O(g(x)) if there are constants C and k such that |f(x)| ≤ C|g(x)| whenever x > k.
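The definition can be checked numerically for a particular pair of witnesses. The choice f(x) = x² + 2x + 1 with witnesses C = 4, k = 1 is an illustrative example of mine, not one given above:

```python
# Check the big-O witnesses C = 4, k = 1 for f(x) = x**2 + 2*x + 1, g(x) = x**2:
# the definition requires |f(x)| <= C * |g(x)| whenever x > k.
C, k = 4, 1
f = lambda x: x**2 + 2 * x + 1
g = lambda x: x**2

# Spot-check the inequality on a range of x values beyond k.
assert all(abs(f(x)) <= C * abs(g(x)) for x in range(k + 1, 1000))
```

One pair of witnesses suffices; many others work (e.g. larger C with smaller k).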

Big-O Estimates for Some Important Functions

Polynomials can often be used to estimate the growth of functions. Instead of analyzing the growth of polynomials each time they occur, we would like a result that can always be used to estimate the growth of a polynomial.

  • Theorem: Let f(x) = a_n x^n + a_{n−1} x^{n−1} + ... + a_1 x + a_0, where a_0, a_1, ..., a_n are real numbers. Then f(x) is O(x^n).

We now give some useful facts that help us determine whether big-O relationships hold between pairs of functions when each of the functions is a power of a logarithm, a power, or an exponential function of the form b^x, where b > 1.

  • If f(x) is a polynomial of degree n or less, then f(x) is O(x^n), but the reverse of this relationship does not hold. If m > n, then x^n is O(x^m), but x^m is not O(x^n).
  • Every positive power of the logarithm of x to the base b, where b > 1, is big-O of every positive power of x, but the reverse relationship never holds. Whenever b > 1 and c and d are positive, we have (log_b x)^c is O(x^d), but x^d is not O((log_b x)^c).
  • Every power of x is big-O of every exponential function of x with a base that is greater than one, but the reverse relationship never holds. Whenever b > 1 and d is positive, we have x^d is O(b^x), but b^x is not O(x^d).
  • If we have two exponential functions with different bases greater than one, one of these functions is big-O of the other if and only if its base is smaller or equal. When 1 < b < c, we have b^x is O(c^x), but c^x is not O(b^x).
  • If c > 1, we have c^n is O(n!), but n! is not O(c^n).
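This hierarchy (logarithm powers below powers, powers below exponentials, exponentials below the factorial) can be observed numerically. A small sketch with ratios that shrink toward 0 as n grows (the particular exponents 3 and 5 are arbitrary choices of mine):

```python
import math

# Each ratio compares a slower-growing function to a faster-growing one,
# so each should shrink as n increases.
for n in (10, 100, 1000):
    print(n,
          math.log(n) ** 3 / n,              # log power vs. power
          n ** 5 / 2 ** n,                   # power vs. exponential
          2 ** n / math.factorial(n))        # exponential vs. factorial
```

The ratios are not necessarily small for small n (n^5 / 2^n is about 97.7 at n = 10); big-O statements only constrain behavior for sufficiently large n.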

The Growth of Combinations of Functions

Many algorithms are made up of two or more separate subprocedures. The number of steps used by a computer to solve a problem with input of a specified size using such an algorithm is the sum of the number of steps used by these subprocedures. To give a big- estimate for the number of steps needed, it is necessary to find big- estimates for the number of steps used by each subprocedure and then combine these estimates.

  • Suppose that f_1(x) is O(g_1(x)) and that f_2(x) is O(g_2(x)). Then (f_1 + f_2)(x) is O(g(x)), where g(x) = max(|g_1(x)|, |g_2(x)|) for all x.
  • Suppose that f_1(x) is O(g_1(x)) and f_2(x) is O(g_2(x)). Then (f_1 f_2)(x) is O(g_1(x) g_2(x)).

Big-Omega and Big-Theta Notation

Big-O notation is used extensively to describe the growth of functions, but it has limitations. In particular, when f(x) is O(g(x)), we have an upper bound, in terms of g(x), for the size of f(x) for large values of x. However, big-O notation does not provide a lower bound for the size of f(x) for large x. For this, we use big-Omega (big-Ω) notation. When we want to give both an upper and a lower bound on the size of a function f(x), relative to a reference function g(x), we use big-Theta (big-Θ) notation.

  • Let f and g be functions from the set of integers or the set of real numbers to the set of real numbers. We say that f(x) is Ω(g(x)) if there are constants C and k, with C positive, such that |f(x)| ≥ C|g(x)| whenever x > k.

Complexity of Algorithms

Questions about how much time and how much memory an algorithm requires involve the computational complexity of the algorithm. An analysis of the time required to solve a problem of a particular size involves the time complexity of the algorithm. An analysis of the computer memory required involves the space complexity of the algorithm. Considerations of the time and space complexity of an algorithm are essential when algorithms are implemented. It is important to know whether an algorithm will produce an answer in a microsecond, a minute, or a billion years.

Time Complexity

The time complexity of an algorithm can be expressed in terms of the number of operations used by the algorithm when the input has a particular size. The operations used to measure time complexity can be the comparison of integers, the addition of integers, the multiplication of integers, the division of integers, or any other basic operation.

Worst-Case Complexity

By the worst-case performance of an algorithm, we mean the largest number of operations needed to solve the given problem using this algorithm on input of specified size. Worst-case analysis tells us how many operations an algorithm requires to guarantee that it will produce a solution.
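Worst-case counting can be made concrete by instrumenting an algorithm. A sketch that counts the comparisons of x with list elements in linear search (the function name and the choice to count only element comparisons are mine): the worst case, when x is absent, uses all n such comparisons.

```python
def linear_search_count(x, a):
    """Linear search that also counts comparisons of x with list elements.

    Returns (location, comparisons), where location is 1-based or 0 if absent.
    """
    comparisons = 0
    for i, term in enumerate(a, start=1):
        comparisons += 1          # one comparison of x with a list element
        if term == x:
            return i, comparisons
    # Worst case: x is not in the list, so x was compared with all n elements.
    return 0, comparisons
```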

Average-Case Complexity

Another important type of complexity analysis, besides worst-case analysis, is called average-case analysis. The average number of operations used to solve the problem over all possible inputs of a given size is found in this type of analysis. Average-case time complexity analysis is usually much more complicated than worst-case analysis.