# ArjunRayGA/cs-algorithms forked from ga-wdi-boston/cs-algorithms


# Computer Science: Algorithms

We've already discussed abstract data types and how to implement real-world things using the tools available to us in a particular programming language. Next we'll discuss a few algorithms that will give us a foundation for more advanced computer science topics, and can also help us make better choices when writing code.

## Objectives

By the end of this, developers should be able to:

- Define "algorithm"
- Identify what Big-O time-complexity measures
- Give an example of a divide-and-conquer algorithm
- Predict the time-complexity of a given algorithm

## Preparation

1. Fork and clone this repository.

## Algorithms

algorithm (n.) - a process or set of rules to be followed to attain a goal

Algorithm is a fancy word for recipe. When we have a problem, we take a series of steps to solve that problem. Say I want a peanut butter and jelly sandwich, and Robin has agreed to make it for me. The problem is, he doesn't know how. Assuming an otherwise-adult set of knowledge, how might we tell Robin to make me a sandwich?

1. Go to the 12th floor
2. Go to the kitchen
3. Find the bread, toaster, utensils, peanut butter, and jelly
4. Toast two slices of bread
5. Using a knife or spoon, spread one slice of toast with peanut butter
6. Spread the other slice of toast with jelly
7. Place the two pieces of toast together

If Robin needed to make sandwiches for all of us, how would he do that? What's the "easy" or naïve way to obtain many sandwiches? What is a more efficient way?

### Lab: Outline an Algorithm

Take a few minutes to describe your morning algorithm... uh, routine. Share it with a neighbor. How many steps are there? How do you save time if you're in a rush?

I have a deck of unsorted playing cards. Describe in English an algorithm for sorting them. How would this algorithm change if my goal were not only to sort the deck, but to kill time while doing it?

## Sorting

A well-known problem in computer science is sorting an array. There are many strategies (read: algorithms) for accomplishing this. Here is an incomplete list:

- Bubble sort
- Quick sort
- Merge sort
- Insertion sort
- Selection sort
- Heap sort

This illustrates something important about algorithms: you nearly always have a choice. There is no "one way" to solve a problem, no "right" way. Different algorithms are better in different contexts, or with different constraints. It's up to you to consider the options and pick the one that best meets your needs.
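As a concrete example of one strategy from the list, here is a minimal bubble sort sketch in Ruby (the method name is illustrative):

```ruby
# Bubble sort: repeatedly sweep through the array, swapping adjacent
# out-of-order elements, until a full pass makes no swaps.
def bubble_sort(array)
  sorted = array.dup
  loop do
    swapped = false
    (sorted.length - 1).times do |i|
      if sorted[i] > sorted[i + 1]
        sorted[i], sorted[i + 1] = sorted[i + 1], sorted[i]
        swapped = true
      end
    end
    break unless swapped
  end
  sorted
end
```

Bubble sort is rarely the best choice in practice, but it is one of the easiest sorting algorithms to reason about, which is exactly the kind of trade-off the paragraph above describes.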

### Lab: Research Quick Sort

Take 10 minutes to work with a partner to read the pseudocode and Ruby implementations of quick sort. One of you will be asked to explain quick sort in your own words.
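If you'd like something to compare your reading against, here is one minimal (not in-place) quick sort sketch in Ruby:

```ruby
# Quick sort: pick a pivot, partition the remaining elements into those
# smaller than the pivot and those not, then recursively sort each side.
def quick_sort(array)
  return array if array.length <= 1

  pivot, *rest = array
  smaller, larger = rest.partition { |n| n < pivot }
  quick_sort(smaller) + [pivot] + quick_sort(larger)
end
```

This version trades memory for clarity: production implementations usually partition in place, but the divide-and-conquer structure is easier to see here.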

### Lab: Visualize an Algorithm or two

Let's visualize both selection sort and insertion sort with six volunteers holding cards.
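The card-holding exercise above can also be followed along in code. Here is an insertion sort sketch in Ruby, which mirrors how most people sort cards in their hand:

```ruby
# Insertion sort: grow a sorted prefix by sliding each new element
# leftward until it sits in its correct position.
def insertion_sort(array)
  sorted = array.dup
  (1...sorted.length).each do |i|
    j = i
    while j > 0 && sorted[j - 1] > sorted[j]
      sorted[j - 1], sorted[j] = sorted[j], sorted[j - 1]
      j -= 1
    end
  end
  sorted
end
```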

## Big-O (Asymptotic Analysis)

We use "Big-O" notation to describe an algorithm's complexity. Complexity measures how much time is needed and how much space is required to scale with input. A common optimization technique is to trade increased space complexity for reduced time complexity (and less commonly vice versa).

Question: Why do we focus on time-complexity?

Take a simple algorithm for demonstration purposes: printing an element of an array to the screen.

```ruby
[1, 2, 3].each do |n|
  puts n
end
```

This procedure iterates through an array and prints each element to the screen. This example takes three steps. If the array had five elements, five steps would be required to print them all. We say that this procedure has linear time-complexity: the time-to-complete grows in lock-step with the number of elements. This is denoted `O(n)` and pronounced "Big O of n", "O of n", or simply "linear".

To identify an algorithm's complexity, we focus on the "family" of scaling functions the algorithm belongs to. We don't care about exact values. For example:

```ruby
[1, 2, 3].each do |n|
  puts "hello!"
  puts n
end
```

This procedure prints two lines to the screen on each iteration, for a total of six lines. If we had a five-item array, it would print ten items to the screen. You might be tempted to say this algorithm has complexity `O(2n)`. But, `y = 2n` is still a linear function, so it is more appropriate to say that this procedure is linear (`O(n)`).

Another way of saying we care about the "family" of scaling functions is to say we care about the shape of the scaling function for an algorithm. In both of the previous examples, graphing elements (`n`) against completion time yields a straight line, hence we say that the procedures scale linearly.

### Lab: Study Big-O Families

Study the Big-O table to become familiar with these scaling functions, and compare the table to the running time graph.

## Predicting Complexity

How can you predict the complexity of a given algorithm? We can look for certain features to help us characterize it.

- A loop over the input takes linear time to complete (`O(n)`).
- Nested loops take quadratic time to complete (`O(n^2)`), or worse, cubic time (`O(n^3)`).
- For consecutive statements, add the times-to-complete.
- For branching statements (`if/else`), take the complexity of the worst branch.
- Generating all the possible permutations of a set, e.g. `abc`, `acb`, `cba`, etc., takes factorial time (`O(n!)`).
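The "consecutive statements add" and "nested loops multiply" rules can be seen in a short Ruby sketch (method names are illustrative):

```ruby
# Consecutive statements add: two O(n) passes make O(n) + O(n) = O(2n),
# which is still linear, so the whole method is O(n).
def sum_then_max(array)
  total = array.sum    # one O(n) pass
  biggest = array.max  # another O(n) pass
  [total, biggest]
end

# Nested loops multiply: for each of n elements we loop over all n
# elements again, so building every pair takes O(n * n) = O(n^2) time.
def all_pairs(array)
  pairs = []
  array.each do |a|
    array.each do |b|
      pairs << [a, b]
    end
  end
  pairs
end
```

Note that `all_pairs` on a 2-element array produces 4 pairs, and on a 10-element array would produce 100: the output itself grows quadratically, so no implementation could do better than `O(n^2)` here.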

Here's a party trick that is sure to make you popular. Bet someone you can state with certainty a number they've chosen, between one and one hundred, after seven questions. The questions will all be "Is `<number>` less than the number you've chosen?" Before I show you how, let's look at a simplistic solution that takes, at worst, 100 guesses:

```
1. Guess "1". If no,
2. Guess "2". If no,
3. Guess "3". If no,
...
100. Guess "100". You win!
```

Question: What is the complexity of this algorithm?

The algorithm that gets you there in the minimum number of guesses is called a "binary search". It goes like this:

1. Calculate the mid-point of the array.
2. If it is the target value, return it and you're done.
3. If not, ask if the mid-point is less than their chosen number.
4. If not, the answer is in the upper half-range. If so, it's in the lower half-range.
5. Repeat the above steps until there are only two numbers left.
6. The final "less than" question gives you the answer.

It's relatively easy to see in a tree diagram. Can you see the binary structure of the tree?

Would checking if the middle number is the correct answer speed things up?

A binary search works with any direct-access, weakly ordered collection (an ordered array with elements that may compare equal). See here for a more rigorous algorithm.
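The guessing strategy above can be sketched as a binary search over a sorted array in Ruby:

```ruby
# Binary search: halve the search range on every comparison, so a
# sorted array of n elements needs at most about log2(n) steps.
# Seven questions cover 1..100 because 2**7 = 128 >= 100.
# Returns the index of the target, or nil if it is absent.
def binary_search(sorted, target)
  low = 0
  high = sorted.length - 1
  while low <= high
    mid = (low + high) / 2
    return mid if sorted[mid] == target
    if sorted[mid] < target
      low = mid + 1  # target is in the upper half-range
    else
      high = mid - 1 # target is in the lower half-range
    end
  end
  nil
end
```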

Question: Explain why this algorithm is an example of divide-and-conquer in your own words.