Personal Brand: Week 3

Eduardo Ahumada
5 min read · Aug 3, 2021


Hi! In today's blog, I will share more about what I've been studying. The purpose of my study sessions has been preparing for coding interviews. Most interviews involve solving a problem that requires a good background in data structures and algorithms. Why? Because it is not enough to produce a solution without thinking about the cost of each decision, both in the time it takes to execute our code and in the memory it needs. We have to consider whether our problem is better solved with an array or with a linked list, and whether we should use linear search or binary search. It all depends on how the data is given and what operations we are going to perform on it.

I would like to talk about some important concepts in data structures and algorithms that come in handy when solving problems that require retrieving data or mapping values as key-value pairs: hash maps. In addition, I am going to share another tool that helps us solve problems that require iterative processes, with the main purpose of simplifying code; it does not necessarily make our code more efficient, but it does make it more readable: recursion.

Hash Maps


Hash maps are data structures built on top of another data structure, an array, using an indexing system. They are commonly used to store key-value pairs with fast assignment and lookup. We retrieve a value stored in the hash map by using the key under which it was stored; a key is an identifier given to a value for later retrieval.

Each index of the array (the underlying data structure) can store one key-value pair (depending on how we handle collisions; I'm going to talk about collisions soon). How do we decide at which index to store a specific key-value pair? Hash tables use a hash function to turn a key into an index within the array. The same hash function is then used to find the index when inserting a value or retrieving a value from the hash map.
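As a tiny illustration (the array size and the hashing choice here are my own, not from the post), turning a key into an index can look like this:

```python
# A tiny sketch of turning a key into an array index.
# The array size of 10 and summing the encoded bytes of the key
# are illustrative choices, not part of the original post.
array_size = 10

def index_for(key):
    hash_code = sum(key.encode())   # hash function: turn the key into a number
    return hash_code % array_size   # compression: fit that number into the array

print(index_for("color"))  # prints an index between 0 and 9
```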

An important thing to remember is that each key in a hash map can be paired with only one value; that is, we cannot repeat keys. Nevertheless, different keys can be paired with the same value.

When the hash function is at work, we can have hash collisions; this happens when the hash function returns the same index for two different keys.

Collisions can be handled with different strategies. Two of them are:

  • Separate chaining: Each array index stores another data structure, such as a linked list, which holds all the key-value pairs whose keys hash to that same index.
  • Open addressing: We stick to the array as our underlying data structure, but when a collision happens, we keep looking for a new index where we can save the data. Commonly, a method called probing is used, which means continuing to try new array indices in a fixed sequence until an empty index is found.

Python has a built-in implementation of the hash table data structure: the dictionary. However, we can also implement one ourselves as a class, using a list instead of an array and limiting its size. Implementing a hash table helps us better understand the inner components of the data structure.
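For example, the built-in dictionary already gives us the key-value behavior described above (the sample data here is just for illustration):

```python
# Python's dict is a hash table under the hood: keys map to values.
ages = {"Ana": 25, "Luis": 30}
ages["Eva"] = 28          # fast assignment under a new key
print(ages["Luis"])       # fast lookup by key -> 30
print("Ana" in ages)      # fast membership check -> True
```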

Some things to understand if we want to implement the hash table class: the class needs a process to create the hash value, that is, a hash method that encodes the key and returns a number from it, and then a compressor method to fit that hash value within the bounds of our array.

Here is the code for the implementation of a hash table using open addressing to resolve collisions.
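Since the original gist is not embedded here, below is a minimal sketch of what such an implementation can look like. The class and method names (HashMap, hash, compressor, assign, retrieve) are my own choices, and the sketch assumes the table never fills up completely and does not support deletions.

```python
# A minimal sketch of a hash map that resolves collisions with open
# addressing (linear probing). Not the original gist.
class HashMap:
    def __init__(self, array_size):
        self.array_size = array_size
        self.array = [None] * array_size

    def hash(self, key, count_collisions=0):
        # Encode the key and sum its bytes to get a hash code.
        # Adding count_collisions shifts the index on each probe.
        return sum(key.encode()) + count_collisions

    def compressor(self, hash_code):
        # Fit the hash code within the bounds of the underlying array.
        return hash_code % self.array_size

    def assign(self, key, value):
        index = self.compressor(self.hash(key))
        current = self.array[index]
        if current is None or current[0] == key:
            self.array[index] = [key, value]
            return
        # Collision: probe the next indices in a fixed sequence until we
        # find an empty slot or the slot that already holds this key.
        collisions = 1
        while True:
            new_index = self.compressor(self.hash(key, collisions))
            current = self.array[new_index]
            if current is None or current[0] == key:
                self.array[new_index] = [key, value]
                return
            collisions += 1

    def retrieve(self, key):
        index = self.compressor(self.hash(key))
        possible = self.array[index]
        if possible is None:
            return None
        if possible[0] == key:
            return possible[1]
        # Follow the same probing sequence used when assigning.
        collisions = 1
        while True:
            new_index = self.compressor(self.hash(key, collisions))
            possible = self.array[new_index]
            if possible is None:
                return None
            if possible[0] == key:
                return possible[1]
            collisions += 1

hash_map = HashMap(15)
hash_map.assign("gabbro", "igneous")
hash_map.assign("sandstone", "sedimentary")
print(hash_map.retrieve("gabbro"))  # igneous
```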

For the implementation of separate chaining, two more pieces are needed: the linked list used to chain the values at each index, and a small list of sample data for the test case.
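Since the original gists are not embedded here, this is a minimal sketch of a linked list that the chained hash table below can use; the class and method names are my own choices.

```python
# A minimal sketch of the linked list used for separate chaining.
# Each node holds one value (here, a [key, value] pair) and a pointer
# to the next node. Not the original gist.
class Node:
    def __init__(self, value):
        self.value = value
        self.next_node = None

    def get_value(self):
        return self.value

    def get_next_node(self):
        return self.next_node

    def set_next_node(self, next_node):
        self.next_node = next_node


class LinkedList:
    def __init__(self, value=None):
        # The initial head holds None; iteration may yield that None,
        # so consumers should skip it.
        self.head_node = Node(value)

    def insert(self, new_value):
        # Insert at the head so insertion stays O(1).
        new_node = Node(new_value)
        new_node.set_next_node(self.head_node)
        self.head_node = new_node

    def __iter__(self):
        current_node = self.head_node
        while current_node:
            yield current_node.get_value()
            current_node = current_node.get_next_node()
```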

And finally, here is the implementation of the hash table that uses separate chaining with a linked list to handle collisions.
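Again, as a sketch of what this could look like (the class name, the hash choice, and the sample test data are my own), using the LinkedList above:

```python
# A minimal sketch of a hash map that handles collisions with separate
# chaining, storing a LinkedList at every array index. Not the original gist.
class ChainedHashMap:
    def __init__(self, size):
        self.array_size = size
        self.array = [LinkedList() for _ in range(size)]

    def hash(self, key):
        return sum(key.encode())

    def compressor(self, hash_code):
        return hash_code % self.array_size

    def assign(self, key, value):
        index = self.compressor(self.hash(key))
        chain = self.array[index]
        for item in chain:
            # If the key is already in the chain, overwrite its value.
            if item is not None and item[0] == key:
                item[1] = value
                return
        chain.insert([key, value])

    def retrieve(self, key):
        index = self.compressor(self.hash(key))
        for item in self.array[index]:
            if item is not None and item[0] == key:
                return item[1]
        return None


# A small test case: three keys in a tiny array, so some keys collide.
test_data = [("daisy", "white"), ("rose", "red"), ("tulip", "yellow")]
chained_map = ChainedHashMap(3)
for flower, color in test_data:
    chained_map.assign(flower, color)
print(chained_map.retrieve("rose"))  # red, even though "rose" and "tulip" collide
```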

Recursion


Recursion is a technique for solving problems that could be solved iteratively, but by defining the problem in terms of itself. The main parts of a recursive function to remember are the base case and the recursive step.

The base case is a condition that a recursive function must have in order to stop it from recursing indefinitely. For example, we can wait for a value to reach zero, wait for a list to become empty, and so on.

What part of the recursive function allows us to get to the base case and stop recursing? This is where the recursive step comes in. The recursive step calls the recursive function with some input that brings it closer to its base case. For example, if the base case is waiting for a value to be zero, and the input is initially N, the recursive step will call the function with N-1 until the base condition is met and the function returns a value.
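A minimal sketch of exactly that pattern (the function name is mine): the base case is the value reaching zero, and each recursive step calls the function with N - 1.

```python
def sum_to_n(n):
    # Base case: stop recursing once the value reaches zero.
    if n == 0:
        return 0
    # Recursive step: call the function with n - 1, moving toward the base case.
    return n + sum_to_n(n - 1)

print(sum_to_n(4))  # 4 + 3 + 2 + 1 + 0 = 10
```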

How does the computer keep track of each recursive call so that, call by call, it can finally produce the expected value? Programming languages use a tool called a call stack to manage the invocation of recursive functions. Just like a stack, once the base case is met the calls are resolved starting from the last function pushed onto the stack. Call stacks can suffer a "stack overflow", which means trying to fit more calls into a stack that is already full. In Python, when we reach the limit of recursive calls we get a RecursionError.
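For instance, a function with no reachable base case eventually exhausts the call stack and Python raises a RecursionError:

```python
import sys

print(sys.getrecursionlimit())  # the default limit is usually 1000

def no_base_case(n):
    return no_base_case(n + 1)  # never reaches a base case

try:
    no_base_case(0)
except RecursionError as error:
    print(error)  # "maximum recursion depth exceeded"
```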

Programming languages also use execution contexts to manage recursive functions; an execution context is the set of arguments to a particular recursive function call.

The big-O runtime of a recursive function is proportional to the number of recursive function calls it makes, which varies depending on the algorithm. For example, a recursive function with input N that is called N times will have a runtime of O(N). A recursive function with input N that calls itself twice per invocation may have a runtime of O(2^N). Nevertheless, there are many scenarios, so each case needs to be analyzed to calculate the runtime.
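A rough illustration of those two cases (these examples are mine, not from the post's gists):

```python
def countdown(n):
    # One recursive call per invocation: about N calls in total, so O(N).
    if n <= 0:
        return
    countdown(n - 1)

def fibonacci(n):
    # Two recursive calls per invocation: the call count grows roughly as O(2^N).
    if n <= 1:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

countdown(5)
print(fibonacci(10))  # 55
```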

Recursive functions tend to be at least a little less efficient than comparable iterative solutions because of the call stack. The strength of recursion is how it can reduce complex problems to an elegant solution of only a few lines of code. Recursion forces us to break a task into its smallest piece, the base case, and the smallest step toward it, the recursive step.

Here is the code for different examples of solving problems using recursion.
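The original gist is not embedded here, so these are a few representative sketches in the same spirit; the problems and function names are my own choices.

```python
def factorial(n):
    # Base case: 0! is 1. Recursive step: n * (n - 1)!
    if n == 0:
        return 1
    return n * factorial(n - 1)

def sum_digits(n):
    # Base case: a single digit. Recursive step: last digit + sum of the rest.
    if n < 10:
        return n
    return n % 10 + sum_digits(n // 10)

def flatten(nested_list):
    # Base case: an empty list falls through the loop and returns [].
    # Recursive step: flatten each inner list before appending its items.
    result = []
    for item in nested_list:
        if isinstance(item, list):
            result += flatten(item)
        else:
            result.append(item)
    return result

print(factorial(5))                  # 120
print(sum_digits(1337))              # 14
print(flatten([1, [2, [3, 4]], 5]))  # [1, 2, 3, 4, 5]
```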

That would be everything for this week, I hope you learned a lot!



Eduardo Ahumada

Engineer looking to help and contribute. Learning about Software development and Computer Science.