In one sense, recursion is simple.

In fact, most of what you need to know about recursion can be summed up in this gif:

(Animated gif: n4xUtpn, hosted on Imgur)

But the deeper you go, the harder it is to fully grasp. It doesn't have to be that way.

We just published a full course on the freeCodeCamp.org YouTube channel that will help you to grasp recursion at a conceptual level.

The Simple Engineer developed this course. He has created many courses and is great at helping to explain tricky concepts in ways that are easy to understand.

Recursion is a powerful technique that helps bridge the gap between complex problems and elegant code. This course breaks down what recursion is, why you would and wouldn't want to use it, and shows a variety of examples of how it can be used.

The course explains recursion with all sorts of data structures, animations, debugging, and call-stack analysis to give you a deeper understanding of these principles.

The code is written in Java, but the principles apply to any language.

Here are all the topics covered in this course:

  • What is Recursion?
  • Explaining Recursion via ATM Analogy
  • Explaining Recursion via Essay Revision Analogy
  • Summarizing What Recursion Is
  • Why & Why Not Recursion
  • Understanding The Call Stack
  • Call Stack Analogy
  • Recursion With Strings Introduction
  • String Reversal Explanation
  • String Reversal Call Stack Animation
  • Palindrome Explanation
  • Palindrome Call Stack Animation
  • Recursion With Numbers
  • Decimal To Binary Explanation
  • Decimal To Binary Code & Debug
  • Sum of Natural Numbers Explanation
  • Sum of Natural Numbers Code & Debug
  • Divide & Conquer Algorithms
  • Binary Search Animation & Explanation
  • Fibonacci Explanation
  • Fibonacci Animation
  • Merge Sort Explanation & Animation
  • Merge Sort Code & Debug
  • Linked Lists
  • Linked List Reversal Animation
  • Linked List Code & Debug
  • Merge Two Sorted Linked Lists Animation
  • Merge Two Sorted Linked Lists Code & Debug
  • Trees
  • Insert Value Into Binary Search Tree Animation
  • Insert Value Into Binary Search Tree Code Walkthrough
  • Insert Value Into Binary Search Tree Call Stack Animation
  • Print All Leaf Nodes Explanation
  • Print All Leaf Nodes Code & Debug
  • Graphs
  • Depth-First Search Animation
  • Depth-First Search Code Walkthrough
  • Recursion Optimizations
  • Memoization & Caching
  • Tail-Call Recursion
  • Conclusion

Watch the full course below or on the freeCodeCamp.org YouTube channel (2-hour watch).

Transcript

(autogenerated)

When learning about recursion, it can seem like you're always going back to the beginning.

In this course, The Simple Engineer will help you understand recursion using animations, thought processes, and more.

Hey, guys, and welcome to another video brought to you by The Simple Engineer.

In today's video, we are going to delve deep into the depths of recursion and strengthen your algorithmic mental model around this programming paradigm. We'll do so by looking at a variety of different examples and animations.

So let's get right into it.

The first question that we have to answer about recursion is what even is recursion? And I think the best way to talk about it is through this analogy, and let's imagine that you're sitting in line waiting to withdraw money out of an ATM.

And for the sake of this example, let's assume that this right here is you, and you have this underlying question.

And you want to know how many people are standing in front of you in this line.

And this kind of brute force, iterative version of you says, Well, what I can do is I could step out of line, I could run to the front, and I can count one by one.

And maybe I store these counts in some auxiliary notebook that I have, some external variable that you can imagine, and then you run down this line, and you get back and you say, Okay, I got my answer.

But I did a lot of work.

And we want to kind of switch this paradigm, we want to think, how can I be lazy? What's the least amount of work that I can do, and how can I break this problem down into some sort of substructure? And so what I do is I tap on the shoulder of the girl in front of me, and I ask her a very simple question, I say, hey, what number are you? And she turns around and looks at me and says, you know, I'm sorry, I don't really know.

But what I'll do is I'll ask the woman in front of me.

And what happens is, this process kind of continues, and it continues up until we hit some stopping condition.

And this condition that we get stopped at is when we hit this woman, and she taps on this guy's shoulder.

And she says, hey, what number are you in line? And he gives his response.

And he's number one.

And this is interesting, right? Because this is kind of the stopping condition to this sort of problem that we were solving.

And what she does is she takes that number, and she says, Well, if he was number one, then I'm basically one plus one, because I count myself.

And this guy says, Well, if she was number two, then I'm one plus whatever she was, and this idea unravels backwards to the original asker.

And this woman basically says to the guy, hey, there were, you know, 10 people in front of me, I count myself.

And now I'm at 11.

And it turns out that we can model this problem with this simple blueprint. We ask the first question: what's the least amount of work that I can do? How do I break this problem down into some subproblem? And the second question that I have is: when would the process complete? Like, what's my stopping condition?

And in this case, it's when we hit the first person in line, which is the gentleman withdrawing money from the ATM. It just so happens that we can actually write this problem in a very simplified, minimal, beautiful piece of code.

And the function that we have is just this get my position in line.

And we have this kind of abstraction of a person that we have.

And notice the return value is an integer.

And, again, if we think back to those two questions that we have for this blueprint, we say if the next person in line is null, then we must be the first person in line.

Right? That's kind of this base case, imagine nobody is in line, well, then you're probably the first person in line.

Now, if we don't satisfy this conditional, then what do we do, we do just a little bit of work, we say, I'm going to count myself as somebody that contributes to the number of people in line.

And I'm going to add this kind of recursive call, and I'm calling myself.

And this is the interesting property of recursion, we're calling our self.

But the parameter that we condition on actually further progresses us to the problem that we're trying to solve.

And so I'm not passing the same person object into the function again, I'm actually passing a person that progresses me a little bit closer to the question that I'm asking, which is, how many people are in front of me.
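
The exact code shown on screen isn't captured in this transcript, but the idea described here might look roughly like the following Java sketch (the Person class and its nextInLine field are assumptions used purely for illustration):

```java
// A rough reconstruction of the idea described above; the Person class and
// its nextInLine field are assumptions, not necessarily the video's exact code.
public class WaitingLine {

    static class Person {
        Person nextInLine; // the person standing directly in front of me
    }

    public static int getMyPositionInLine(Person person) {
        // Base case: nobody is in front of this person, so they are first.
        if (person.nextInLine == null) {
            return 1;
        }
        // Count this person, then ask the person in front the same question.
        return 1 + getMyPositionInLine(person.nextInLine);
    }
}
```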

And this is really all recursion is about is how can I take some large problem and break it down into a bunch of subproblems such that each invocation of my method gets me a little bit closer to the problem that I'm trying to solve? Let's think about another example.

Let's go back to the school days where you're writing a bunch of essays, and typically this process, at least for me, is, you know, you write the essay, you submit it to your professor, and he says, Hey, you know, that essay was terrible, I want you to go make revisions.

And the essay gets passed back to the student.

And this process can actually continue over and over and over again, you write an essay, you make revisions, you submit it, it gets denied, and you rinse and repeat.

And you do this until the professor says, Okay, that was good, I'm going to put it into my briefcase, and start grading it.

And it just so happens, this, again, is this kind of recursive strategy that can be developed.

If we look at a simple piece of code.

And what are we doing? We look at an essay, we revise the essay: we read it, we get feedback on it, we apply changes to it.

And then notice, we just call ourselves again, right.

And there's a single base case kind of hidden in here.

And we do it until the essay is complete.

And notice that each invocation is the same essay object.

But what we've done is we've inched our way a little bit closer to that goal, that goal of no longer needing revisions on the essay.
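
A minimal sketch of that essay-revision idea could look like this; the Essay class and its helper methods are assumptions for illustration, not the exact code from the video:

```java
// A minimal sketch of the essay-revision idea (Essay and its methods assumed).
class Essay {
    int revisionsNeeded;
    boolean isComplete() { return revisionsNeeded == 0; }
    void applyChanges() { revisionsNeeded--; } // one round of edits
}

class EssayRevision {
    static void reviseEssay(Essay essay) {
        // Base case: the professor accepts the essay, so we stop recursing.
        if (essay.isComplete()) {
            return;
        }
        // The small unit of work: one round of reading, feedback, and edits.
        essay.applyChanges();
        // Same essay object, but each call inches closer to "complete."
        reviseEssay(essay);
    }
}
```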

So the whole process again, just to re-emphasize, is that we do a little bit of work on each invocation of our method call until we hit some base case, some stopping condition that says, hey, you no longer need to continue.

So again, like what is recursion, recursion is nothing more than a method that calls itself and to be more precise, it's usually a method that maybe returns a value, maybe it doesn't.

But it's conditioned on some parameter, such that when you hit some conditional at some point in time, you can actually stop recursing, some base case, right.

And we consider this piece the base case, this is the stopping condition, such that we no longer grow the number of recursive calls that we're storing in memory.

And this piece down here, this is the recursive call.

This is where I, you know, I do some unit of work, some small sub problem that inches me or progresses me closer to the goal or question I'm trying to answer.

Before we dive into the technical intricacies of recursion, it's important that we discuss some of the trade offs.

Why would we want to use recursion? And why would we want to avoid recursion.

And I think there are valid examples for both.

And it really comes down to the situation.

The first pro that I'll give is that it really bridges the gap between elegance and complexity, we will look at a variety of different problems where we're traversing complex data structures like trees and graphs.

And it really boils down to three or four lines of code.

And that is vastly different from doing the same thing with a kind of imperative approach, where we're looking at things with a lot of loops and a lot of variables.

And that can get really messy really quickly, with data structures that are inherently recursive.

Now, on the downside, you know, adding a bunch of method calls on the call stack incurs some CPU overhead, right, there is some slowness, with calling methods in your code compared to iterating through a loop.

And that, again, is another trade-off you need to make; it can be a time or space trade-off that you need to consider.

Now, I touched on this briefly, but again, like recursion can reduce the need for complex loops, and auxiliary data structures.

Recursion has this sort of implicit stack, which is a data structure commonly used in a lot of algorithms.

And so having that sort of implicit stack and kind of self manage looping construct, it's given to you as a part of recursive calls, you can exploit that property to really simplify your code and focus on the problem you're solving.

Now, the con for that is as you're growing the amount of method invocations in your computer memory, you can actually run out of memory.

And we'll look at a lot of examples as to why this is, and we'll, we'll start to explore this, this idea of a Stack Overflow exception.

And this is where we start to run out of the pre allocated buffer of memory that we have for our program, which can actually cause your program to crash.

There are a lot of pros when it comes to recursion when talking about optimization, reducing time complexity.

And that's where this idea of memoization comes in, and we'll look at some examples of memoization and caching to help speed up redundant calls.

And that's a beautiful property of recursion.

Now, the con, again, just like any piece of code, is that if recursion is overused, you can get into this sort of habit where you start to develop really complex code if it's not well structured. And we'll get a lot better at this as we go through this tutorial: when you look at problems, you want to kind of ask yourself, is this a good use case for recursion? Can I really break it down into subproblems that make sense for recursion? If you cannot, then you may get into this con where you have unnecessarily complex code.

Now, the final thing, as I had mentioned before, is recursion works really well for recursively defined data structures, JSON objects, trees, graphs, things that allow you to basically focus on one tiny unit of the data structure at a time.

And it just so happens we'll look at a bunch of different examples as to why this holds true.

But this is just a small set of pros and cons, things to consider when thinking about using recursion.

This brings us to a topic that I think is often overlooked when teaching recursion, which I think is one of the fundamental concepts that you really need to grasp to understand what people call quote unquote, magical about recursion.

It's as if it's just magic in how it works.

And after we look at this example, we'll start to realize how logical recursion is and why it's not necessarily such a magical thing.

Let's imagine that you go to work one day, and the first thing that you want to do is check your email.

And whilst checking your email, you get interrupted by your boss, and your boss says, Hey, I need you to go attend some meeting.

And you have to actually deviate from your task.

So you don't finish checking your email, you get interrupted.

And before you can check your email, now you need to attend this meeting.

Now, let's say, while walking into this meeting, your boss interrupts you again.

And he says, Hey, our investors are visiting, I would love for you to go to this board meeting and impress them with all your knowledge.

And so again, while walking into this meeting, you get interrupted, and you have to go impress the investors.

Right, that's your next task.

And so before you can attend the meeting, before you can check your email, you need to impress these investors.

And finally, your boss, he interrupts you again.

And he says, Hey, you know, I'm sorry.

But I need you to go help Jake with his code, his code is failing.

And we need to push to production.

So now you have to basically avoid the investor meeting, avoid the initial meeting, and you can't check your email until you help Jake with his code.

Once you help him, this thing kind of gets popped off your to do list, you know, you go to the board meeting, you attend this original meeting.

And then finally, you can check your email.

And you may be asking, like, why is this ever at all relevant to recursion.

And it just so happens that this sort of idea is exactly how the call stack works when we're talking about invoking methods within our programs.

Let's take a look at this simple program here.

Notice we have three method calls.

And they're kind of chained together.

The first one returns a string, but it has a dependency on B, B returns a string, but it has a dependency on C.

And C just returns a string.

So nothing fancy at all.
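
The three chained methods being described might look roughly like this sketch; "Hello" and "friends" come from the narration, while the middle word is an assumption:

```java
// A sketch of the three chained methods described above.
public class CallChain {
    public static String a() {
        return "Hello " + b(); // a cannot return until b finishes
    }
    public static String b() {
        return "my " + c();    // b cannot return until c finishes
    }
    public static String c() {
        return "friends";      // no dependency, returns immediately
    }
    public static void main(String[] args) {
        System.out.println(a()); // "Hello my friends"
    }
}
```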

Now, the call stack is going to be this sort of abstraction that our operating system leverages to store method invocations within our program.

It allows us to understand what memory addresses we return data to, and it stores local variable information, like what are the parameters that were passed into me.

So if we execute a, the first thing that goes on to the call stack is this sort of idea known as a stack frame.

And it basically says, Hey, I want to return Hello, and I want to concatenate the result of B.

But in order for me to concatenate the result of B, I need to actually now call B.

So that pushes B onto the call stack.

Now I'm in the same scenario, where I want to now return from B, but I have a dependency on C.

So now I call C and put that on the call stack.

And notice this is the same sort of process that we looked at in the previous analogy.

I cannot return or pop these things off the stack until I kind of go in the order in which these things were called.

So I returned friends, right.

And now friends replaces that C invocation.

And so now this has been fully evaluated.

And so now since B is fully evaluated, I can return that value to the B method invocation.

And now that B stack frame gets popped off the stack.

And it's only at this point that I have now evaluated that entire chain of method calls.

So now I can return this string value.

And get my expected output.

Now you may still be asking, why is this relevant to recursive thinking?

And let's look at an example.

Let's look at the call stack when we make these recursive calls, right? This is just a program that calls itself, and it'll execute forever, right? So the first time I invoke a, well, now I need to invoke a, but it calls a again.

And it just keeps happening, right? There's no stopping condition.
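
As a tiny sketch, the runaway program being described is essentially this (in Java the error shows up as a StackOverflowError):

```java
// No base case: every call to a() adds another stack frame until the JVM
// throws java.lang.StackOverflowError.
public class NoBaseCase {
    public static void a() {
        a(); // calls itself forever
    }
    public static void main(String[] args) {
        a();
    }
}
```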

And finally, there's going to be this point in time where I try and invoke a again, and I get an error, and that error is a stack overflow.

And this exception happens when we exceed the pre allocated buffer of memory that our program has.

Right, we basically have run out of memory, we've exceeded the stack, and our invocations have overflowed, and we can no longer handle it.

And this is the whole thing with recursion, why we need a base case, we need to return a value.

So just like we saw previously, for methods that aren't recursive, we noticed that frames still grow and grow and grow.

And the only way for those frames to shrink in size is for them to return some value for them to stop invoking methods.

And the same holds true with recursion.

The only difference is, we have some sort of base case something that says, hey, this is the one thing that I want to condition on to avoid us from further recursing.

I want to start off the first technical component of this presentation looking at recursion with strings.

And this is going to give us a really good idea of manipulating input parameters on the call stack recursively.

And the first problem I want to look at is this idea of string reversal.

And so what we do is we have an input string like "the simple engineer."

And the idea is the output would be the input in reverse order, right? And so the question is, how do we build a really concise recursive function that gives us something like this?

And as we look at the skeleton structure for this code, we of course are going to have some sort of input, right? So the input is going to be some string, and the output is going to be the reversed version of that.

And so we always ask kind of these two questions.

The first is, what is the base case? And this is really asking, when can I no longer continue within my algorithm? And the next line of code is going to be all about what's the smallest amount of work I can contribute? So in this case, it's going to be basically between each invocation, what's the small unit that I can actually modify or manipulate to progress a little bit closer to the goal? So let's look at the first question.

The first question is saying, When can I no longer continue? And I think when you think about this sort of scenario, there are two schools of thought.

And when I like to construct base cases, I typically think, if I were to just pass in a very small input, like, what is the smallest input that I could just pass into this function, where I would need to basically immediately return?

And there could be two schools of thought with this approach, right? And the two schools of thought could be, well, one letter reversed is itself, right.

And that could be a really, really good base case for string reversal.

But we want to be even lazier.

Like what is the laziest, like the least amount of work that I could even consider thinking about.

And that would be the empty string, the empty string reversed, is again, just the empty string.

And so if I passed in the empty string to this function, and it would only make sense to get the empty string back.

And so if we modify the code and we look at this, we just have a simple base case where we evaluate: if the input string is the empty string, then let's just return the empty string.

But now we need to consider how do I even get to that point? What's the smallest amount of work that I can contribute? Right? And that's the question that we're asking here.

I need to do something that whittles down the decision space within each recursive call.

And so we kind of ask this question, what's the smallest unit that I can deal with in a string? A string is just a bunch of characters, right? And so maybe I can modify a single character.

And this is where we get to this question.

Let me pick a single character out of this string.

And maybe where I position it will allow it to be concatenated from the call stack in the reverse order.

And it turns out when we write this code, we get a recursive call where we take that first character from the input string and we concatenate it after the recursive call. And you may be looking at this and you say, Okay, well, the input parameter has changed.

And it's changed because we've actually shrunk down the decision space, we've shrunk down that input string, because our entire goal is to get closer to that base case, right.

And so we've taken everything directly after the first character to the end.

And the idea is that if we do this enough times, we can shrink our search space on each invocation and get the goal, which is the reverse string.

So this first piece is all focused on shrinking the problem space.

And the second piece is reflecting really the work that we're doing to contribute closer to the goal.
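
A minimal sketch of the string-reversal function being described, with the method name and exact signature assumed, looks like this:

```java
// A sketch of the reverseString function described above.
public static String reverseString(String input) {
    // Base case: the empty string reversed is just the empty string.
    if (input.equals("")) {
        return "";
    }
    // Recurse on everything after the first character, then append that
    // first character so it ends up at the back of the result.
    return reverseString(input.substring(1)) + input.charAt(0);
}
```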

And I think it's worthwhile to analyze what is happening on the call stack.

So let's say that we pass in Hello.

And we don't hit the base case, on line two, we immediately go to line six, which is basically the recursive invocation with the shrunken down substring.

And then we concatenate the H.

And just as we had discussed with call stacks, we can't pop this off the call stack until that recursive call has also completed, which adds an additional stack frame to the call stack.

And you notice, again, we don't hit the base case, and we shrink down that input string, but we concatenate the first character.

And we keep doing this.

And it's nice, because I don't need to keep track of the H, I don't need to keep track of the E.

It's all self managed for me in the stack frame on the call stack.

And so I invoke reverse string again.

And now I get "o".

And now watch, this gets fairly interesting, because as I pass in "o", I get to a point where my input string has been whittled down to just the empty string.

And it just so happens that for this use case, this is the base case.

So I return the empty string.

So that reverse string gets evaluated to the empty string from this base case, and I end up just returning the empty string plus "o".

And now this recursive call gets evaluated.

And now I add "l".

And this just becomes "ol", right.

And so now I can return this.

And this gets popped off the call stack.

And now you can see that I'm basically returning these values to the stack frame that preceded me.

And popping things off the call stack.

This function gets evaluated to "oll".

So I return it and pop this off the call stack.

This function gets evaluated to "olle".

So I return it, it gets evaluated.

And then this gets popped off the call stack.

And as you can see, this is the goal, right? This is the reverse string.

And it's the power of the call stack the power of these recursive calls that allow us to return values back down the stack.

And this is a very good property that we can exploit when using recursion.

Let's look at palindromes, palindromes are these unique words, where we can basically spell the same word forward and backward.

Let's look at this word.

The mechanical way that we kind of analyze if a word is a palindrome or not, is we look at both ends.

And we basically say okay, do these letters match? Yes, they do.

So we shrink the word, and we say, do these letters match? Yes, they do.

So we shrink the word.

And now this is just one character.

And it proves that we've matched successfully.

So this indeed would be a palindrome.

Now let's look at a snippet of code to see how we could think about something like this recursively.

Now, obviously, we're going to have a Boolean function, because it's either Yes, it's a palindrome or No, it's not.

And we're going to evaluate some input string.

And of course, the first thing we always consider is what is this base case, the thing that stops us from recursing? Now, think back to what I had said in the previous example, I always like to consider my base cases, what is the smallest input that I could just pass into this function? And it turns out very often not very often, for string recursion, you can typically whittle down your search space and evaluate the input length.

So if you passed in a string of size zero, then that's a palindrome, right? Because there's nothing that proves it's not a palindrome.

There are no characters to compare against.

And in the same vein, if we pass in just a single character: a single character, both forward and backward, is the same character.

So that's also a palindrome.

So these are really good use cases, or base cases to evaluate this conditional, which is whether or not a string is a palindrome.

But let's continue; we need to consider the small amount of work that we can do.

And in the animation that we just looked at, you notice that we had two pointers, one on the left and one on the right, and we were comparing those strings, those characters, we're saying they need to be the same.

If they're not the same at any point in time, then we've violated the property of a palindrome.

And if we violate the property of palindrome, then we just go to this false.

This is kind of this fallback base case.

So at any point in the algorithm, if the characters don't match up at their respective opposite indices, then we can terminate, we return false and the false gets propagated through the call stack.

And the initial caller function would return the false.

But if that's not the case, then we get to this interesting thing, where we make this recursive call, and it whittles down our substring.
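
A minimal sketch of the palindrome check being described (the method name is an assumption) looks like this:

```java
// A sketch of the isPalindrome function described above.
public static boolean isPalindrome(String input) {
    // Base cases: an empty string or a single character is a palindrome.
    if (input.length() == 0 || input.length() == 1) {
        return true;
    }
    // If the outer characters match, recurse on the substring between them.
    if (input.charAt(0) == input.charAt(input.length() - 1)) {
        return isPalindrome(input.substring(1, input.length() - 1));
    }
    // Fallback base case: a mismatch means this is not a palindrome.
    return false;
}
```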

So let's look at the call stack to understand what's happening.

We pass in race car as an input string.

And this, of course, is a palindrome; you can see race car forward and backward, it's just race car.

And the first thing we do is we evaluate this, this conditional this base case, is the length zero or one? Well, it's definitely not.

So we continue.

And now we compare the first character and the last character for this input string.

And if they're equal, then we just call ourselves again, but we shrink that input string.

And since R and R are equal, we add another stack frame to the call stack.

And we've shrunken down our input string, we again, look at the base case, we don't satisfy it.

So we compare the first and the last character.

And we notice in this case, they're also equal.

So we call ourselves again, right, and we don't pop the stack frame off the call stack, because we still have more work to do.

And so we shrink down our input string, we still haven't satisfied the base case.

And our characters at the first and last position are still the same.

So we do just a little bit more work.

And now we're at the last character.

And we've already come to this conclusion that if the input length is zero or one, then we just return true.

So for this one particular subproblem, this one particular element, this is indeed a palindrome.

So what do we do, we say, yes, this was a palindrome and we pop it off the call stack.

And since we return true, and all of these other conditionals have been satisfied, we can just propagate the true Boolean down the call stack.

So we return true here and pop this off.

We go to the next stack frame and return true here and pop this off.

And now we get to the final stack frame, which evaluates to true.

And this gets popped off the stack.

Now I want to look at recursion with numbers.

And the thing that you'll start to realize as we go through these sorts of problems, is that the same blueprint holds true for all of them.

And the first problem I want to look at is this idea of converting decimal base 10 values to binary, which is a base two number format, ones and zeros.

So the question is, how can we convert a number like 25 into its binary counterpart.

And it turns out, there's a very mechanical way of doing this.

So let's take the number 233.

And the formula here, as we'll come to find is we can do a division by two and this is basically doing the floor operator.

So it gets us an integer instead of a floating point value.

And we get an output, which is 116.

And we have a remainder, and that remainder is one.

And the mechanical process that we can follow to convert decimal to binary is we take the results of this operation, which is 116.

And we divide that by two.

And we just keep evaluating, and this is 58, remainder zero, and then we take 58.

And we divide that by two, and this is 29, remainder zero.

And we basically just keep taking the result in the output and dividing it by two.

And notice we're keeping track of the remainders here.

And the interesting property by doing these operations gets us to a point where we can take all of these remainders.

So notice, we've gotten to this base case, which is zero.

And that kind of halts this progression.

So now we're done.

And if we take all of these remainders, in the order that they were basically pushed onto the stack, we can evaluate them, and this is the resulting binary string for 233.

And the question is, okay, this mechanical process is, you know, very formulaic; how can we convert it into some recursive operation? Let's take a look at some of this code.

Now, notice, we said the first thing is we took 233 divided by two, we got some output, and the remainder was one.

And so there are actually many ways of doing this problem.

And we're going to keep it really simple.

So notice we're dealing with strings.

As the result outputs, we're going to be concatenating strings basically.

And so what we consider here is we think about that base case, right? If I hit zero, then there's no longer any reason for me to divide further, right, I return my result.

Now, if I have not hit zero yet, it means that I still have more division to do.

And so the first thing that we do is we get that remainder.

So when we look at 233, divided by two, we want to store that remainder because that represents one of the binary digits that we care about, basically tells us if the value is even or odd.

And this is the binary digit, this is the remainder that will contribute to the result.

And so that's why we store this in the result.

And notice, we just prepend this to the result.

And now what I do is I again, just call find binary and I shrink my problem space by half.

And I just propagate the result through.

Now let's actually look at the call stack by coding this and seeing what it looks like. I want to take the code that we were just looking at, but look at it from the perspective of the call stack and understand how these stack frames are building up over time.

Now, there are many ways to code this find binary function.

And a lot of people do it returning an integer.

And that's completely fine.

And if you want to do it that way, you can actually follow along with any modification of this and analyze the call stack with me.

So as we run in debug, the call stack is going to show up down here.

So these are all the stack frames that we can watch grow.

And the variables on the current stack frame will show up under here in local.

And so as we dive into this function, you notice that this is the first stack frame for find binary.

And in order for this to complete, we need to return some value.

We haven't hit our base case yet.

So we jump into our work.

And we just make another recursive call.

So the first stack frame can't pop off yet, because we're going to invoke ourselves again.

And once we do that, you notice that we add another stack frame.

And also notice that the input parameters have changed.

We've shrunken down our decision space, and we've appended the first digit to our binary result.

We haven't hit our base case yet.

And we continue.

And again, we do a recursive operation.

And our decision space keeps shrinking.

And this is the same mechanical process that we saw in the animation, we divide by two and log the remainder, we divide by two and log the remainder.

And we do this until we hit some sort of base condition.

Right.

And as we shrink down our problem space, notice we get to this point where the decimal is one.

And that still is not our base case.

So we go through, we get the result.

And this binary string is looking pretty good.

And on the final recursive call, you notice the decimal is now zero.

And this is a really good case now, because now we can just return.

So we come in here and we return this value.

And this value comes in and says okay, find binary for that invocation returned this result.

And so now we're just saying, okay, continue.

And notice as I step through, the stack frames have grown.

But now they should shrink off the call stack.

So I step through.

And notice they've all shrunken down, so all of them have returned, they've all returned a value.

And now once I step over here, we notice that the binary value has been evaluated; you can come into this debug console and look at it.

And this is the resulting binary string.

And so it's good to look at how the call stack is working to understand.

Okay, how many stack frames do we build up to get to our result? And how do they unravel once we hit our base case? Here, we've just straight up propagated the result down through all of the stack frames, and all of them just return the same thing.

And we'll look at a lot of different examples where all the stack frames kind of work together.

And they wait for the result to do a little bit more work.

And so there's a lot of different versions of this.

The next problem that I want to look at with numbers is the sum of natural numbers.

And the idea of this problem is you take an input number like 10, and then you sum all the values up to 10.

And what we do here is we actually just add them all up, and we get some output and the output in this case would be 55.

And the question is how can we build some succinct function that does this recursively? Now let's look at a little snippet of code and try to understand what's happening behind the scenes.

So recursive summation, again, we take in an input value.

And the first question we need to ask is, what is the smallest input value that I could pass in.

So if I pass in one, for example, the sum of one to one is just itself.

One, right.

And so that's a good place for the base case.

And again, keep in mind, we are simplifying these functions.

So you know, edge cases, we're not really focused on that.

Right, now we're focused on the core goal, which is building a succinct function.

So if I pass in one, I would return one.

Now, if I pass in a larger number, like 55, I still have a lot of work to do; I need to add up all the preceding numbers from one up to 55.

And so what I can do is I can take whatever value I am currently.

And I can add it to myself again, but subtracting one from that number, right.

So it shrinks me closer to this base case.
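
A sketch of the recursive summation being described (the name is assumed, and, as the video notes, edge cases like zero aren't handled) might look like this:

```java
// A sketch of the recursiveSummation function described above.
public static int recursiveSummation(int inputNumber) {
    // Base case: the sum from 1 to 1 is just 1.
    if (inputNumber == 1) {
        return 1;
    }
    // Count the current number, then recurse on everything below it.
    return inputNumber + recursiveSummation(inputNumber - 1);
}
```

Here recursiveSummation(5) evaluates to 15 and recursiveSummation(10) to 55, matching the values printed later in the debug session.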

Let's again, look at some code and understand how the call stack is working with this sort of code.

So I've taken the same code from the slide.

And now we want to look at the call stack and understand what's happening as we make these recursive calls.

The first thing that we'll look at is the number five, so we want to add all values from one to five.

Let's put a breakpoint here and debug into the call stack.

So as I dive into this operation, we first evaluate the base case, which hasn't been hit yet.

And so now what I want to do is I want to take five, and I want to add it to another recursive call, but that recursion is shrinking down the input space again.

And so as I dive in, notice this input number change.

So I make the recursive call, and the input number has shrunken down to four, and another stack frame has been added to the call stack.

And so we keep evaluating has the conditional been hit? No, it has not.

So I come in here and it shrinks down to three.

And I keep doing this, and I come in here.

I come in here.

And now the input number is one. Has the base case been hit yet? Yes, it has.

And so if I dive into this, you notice that we return one here.

And this one returns this value to the stack frame right below it.

And so notice that as I continue, that stack frame gets popped off.

And you can see that the invocation of this method returned one here.

And so now what I'm doing is I'm taking two plus one, and I'm returning this value and this value is going to continuously unravel through these stack frames.

So notice, all of these get popped off.

And here's the final invocation, the input number is 10 here.

And as we continue, this is evaluating for recursive summation of 10.

And so now again, for 10, we get down to a base case.

And that base case is one.

And if we look, so I'll put a breakpoint here.

And we continue, we can see that we have the results of 15 and 55.

And these get printed out.

And so again, this is just an example to show as we're looking at the stack frames, how a stack frame can return a value to its previous stack frame to finish the operation.

And that's the key here for recursion.

And we'll keep doing this for a lot of the problems that we look at to get familiarized with how the call stack is working, and how the stack frames grow.

I want to talk about divide and conquer algorithms, which are really great demonstrations of recursion.

And the blueprint that we think about for divide and conquer holds true, so we, you know, still divide a large problem into several small problems.

But divide and conquer is all about how we divide them into subproblems.

We independently solve them.

And then we actually merge the results to solve some holistic problem, right? We merge them together and say, Hey, this is the solution.

And they are typically recursive.

And let's look at the first one, which is binary search.

And the entire purpose of binary search is we look at a sorted list of numbers.

And the key point here is they're sorted.

And let's call this array.

And the first thing that we say is, alright, I'm going to start with the left and right index, so the far left index is zero and the far right index is the length of the array minus one.

And the whole purpose of binary search is to find a value we want to find a value in this array.

So we know that zero is less than the length of the array, so we have not hit our base case, and we calculate the midpoint.

So the midpoint is the left and right index added together and divided by two.

And so we check, we say, okay, is that midpoint the answer that we're looking for? So that's another sort of base case: have we hit or found the number 10 here? And the answer is no.

So we ask one of two questions, the first question that we ask is, is the number that we're looking for on the left half of the array, or is the number that we're looking for on the right half of the array.

And remember, we just found the midpoint, so we're considering everything to the left of the midpoint and everything to the right of the midpoint, because the input data is sorted.

Now, if it's on the left half, which is what we're looking at here, then we completely discard the right half.

And notice, the way that we do that is we change the bounds of the problem.

So the left half stays the same.

So our starting point on the left stays the same, but we only consider up to the midpoint minus one.

And so that is our ability to shrink our decision space and completely discard the right half in each recursion, or each recursive invocation.

Now, if we get to this other evaluation, this is basically saying the value we're looking for is on the right half.

And what this allows us to do is we only consider our starting point to be the midpoint plus one all the way to the end.

So we completely discard the left half for the rest of this problem.

And this indeed, is also a recursive call.
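
A sketch of the recursive binary search walked through below; returning -1 when the value is absent is an assumption, since the walkthrough only covers the successful case:

```java
// A sketch of the recursive binarySearch described above; returns the index
// of target, or -1 (an assumed convention) if the bounds cross without a hit.
public static int binarySearch(int[] arr, int left, int right, int target) {
    // Base case: the bounds have crossed, so the value isn't in the array.
    if (left > right) {
        return -1;
    }
    int mid = (left + right) / 2;
    // Base case: the midpoint holds the value we're searching for.
    if (arr[mid] == target) {
        return mid;
    }
    if (target < arr[mid]) {
        // Discard the right half: search from left up to mid - 1.
        return binarySearch(arr, left, mid - 1, target);
    }
    // Discard the left half: search from mid + 1 up to right.
    return binarySearch(arr, mid + 1, right, target);
}
```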

So look at the first stack frame in the call stack to the right, we start with zero, our upper bound index for the right variable is nine.

And the number we're searching for is 10.

So here's our midpoint, this is three.

And we ask this question: Is this the value that we're looking for? Is this 10? And the answer is no, it's not.

So what we can do is we can discard the entire left half of the search space because 10 is greater than the midpoint.

And it turns out, we add another stack frame to the call stack and notice how the parameters have changed.

Now for the left, the starting bound is five, which is the index where four is, and the upper bound is nine, which is the end of the original array that we're working with.

And again, we calculate the midpoint; the midpoint on this sub array is nine, and we say, is 10 greater than nine or less than nine?

And of course, we know 10 is greater than nine.

And so what we can do is completely discard the left half of that sub array and continue.

And we add another stack frame.

And in that stack frame, the start bound is eight and the upper bound is nine.

And we again, recalculate the midpoint.

And with the midpoint at this point, it actually turns out that on line seven we've hit this base case; the base case here is that we've found the number that we're looking for.

And this is the solution.

And so what we can do is just return this value.

But we're not done yet, because we need to still consider the call stack.

And so this value is 10.

And so what we do from this stack frame is we return 10.

And in this case, we're returning the index that 10 is at.

So 10 is at the eighth index of the array, and this gets popped off the stack.

And now this binary search invocation has been completed, and it gets eight.

So it just returns eight.

And this gets popped off the stack.

And now finally, we return eight.

And we're complete.

And the final stack frame gets popped off the stack.

And that's how binary search works.

Let's look at Fibonacci, a classical, mathematical and computer science problem that often demonstrates the power of recursion.

And we're going to be looking at the non optimized version here.

And we'll add some optimizations at the end.

And let's look at the mathematical expression.

First, we're basically saying that for some input n, the value at index n is going to be made up of the sum of the two values at the two indices that precede it.

Let's look at the Wikipedia page for Fibonacci.

And this is the sequence the Fibonacci sequence.

And if we take one of these numbers like 55, it's the sum of the two numbers that precede it, so 34 and 21.

And if we look at 34, it's the sum of 21 and 13.

And if we look at 13, it's the sum of eight and five, etc.

And this formula holds true for all values from one to infinity.

And so this is the Fibonacci sequence.

So if we look at this expression again, now we have these base cases in pink, which basically say that for the values at index zero and one, the base cases are zero and one respectively, and we can just return at that point.

So that would be where we stopped that recursion.

And the piece in yellow is just saying that this holds true for all values from one to infinity.

So let's kind of look at this and understand why it differs from the previous examples.

Now we're dividing and conquering, right, we're dividing this problem, we have fib of n minus one that needs to happen, we add it to fib of n minus two.

And these are two recursive calls.

And as we think back to how the call stack works, we know that the recursion on the left needs to complete before we even consider starting the recursive call on the right.

So fib of n minus one could have a ton of different calls that need to complete before we even do the plus operator to fib of n minus two.
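
The non-optimized Fibonacci being described can be sketched like this:

```java
// The naive recursive Fibonacci described above.
public static int fib(int n) {
    // Base cases: fib(0) is 0 and fib(1) is 1.
    if (n == 0 || n == 1) {
        return n;
    }
    // The left call must fully unwind before the right call even starts.
    return fib(n - 1) + fib(n - 2);
}
```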

So let's look at how this would animate.

So we want to find the Fibonacci of five.

And like I had said, we do not evaluate the right hand side of the expression yet, because we need to satisfy that first recursive call first.

So this gets us fib of five minus one, which is four.

And this gets us four minus one, which is three.

And this process continues to f of two.

And remember our base case, f of one is just one, so we would return from here.

And now f of two can evaluate the right hand side of its recursive operation.

Remember, we're calling the same function over and over again.

So now we've hit a base case, we've evaluated the left hand side, which is fib of n minus one.

And now we can do fib of n minus two for this value, which is f of zero.

This again is a base case.

So we can return from here.

Now, the sum of these two values returns and gets passed up to f of three.

And now F of three can evaluate its recursive call on the right hand side of that plus operator.

And again, we do f of one, that's a base case, so we just return it.

Right? Now these, f of two and f of one, get added together to return from f of three, and this gets passed to f of four.

And now F of four can evaluate the right hand side of its expression, which is f of n minus two.

So here we get F of two.

And again, this recursive property holds true, this is a base case, we return it, we evaluate the right hand side, this is a base case, we return it.

And now this gets propagated back up to F of four.

And these values get propagated back up to F of five.

And it's from here.

Now, this is the very first, you know call of the function, we can now evaluate the right hand side of that plus operator.

And again, we get down to f of three, we get down to f of two, and then this is f of one.

And this is a base case.

So we return it and this is f of zero, this is a base case, we return it.

And notice like we're doing the same thing over and over again, this gets returned.

Now we go to the right hand side, this gets returned, this gets passed up.

And now we can add these two values.

And that's how we find the Fibonacci.

Now, we'll look at optimizations for this later.

But I just want to point out one thing like as we evaluate this, notice, we have f of three here, we have f of three here.

This is redundant, right? All of the nodes below F of three are the exact same.

And so it seems extremely wasteful, that we're recalculating these values.

And so as we'll come to realize, there are optimizations that we can consider to avoid recalculating something that we've already done the work for.

So if you look, we have f of two, f of two, and f of two.

And this is three nodes.

And you can imagine for really large numbers, it turns out, you know, most modern day computers cannot run this Fibonacci function for a very high input.

And that's because the recursive calls are so intensive.

So we have to look at some optimization techniques known as memoization.
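
As a preview of that idea, a memoized version of fib might look like the following sketch; the HashMap-based cache is one common way to do it and is an assumption here, not necessarily the exact code shown later in the course:

```java
import java.util.HashMap;
import java.util.Map;

public class MemoizedFibonacci {
    // Cache of already-computed Fibonacci values, keyed by n.
    private static final Map<Integer, Integer> memo = new HashMap<>();

    public static int fib(int n) {
        if (n == 0 || n == 1) {
            return n;
        }
        // If we've already done this work, reuse it instead of recursing again.
        if (memo.containsKey(n)) {
            return memo.get(n);
        }
        int result = fib(n - 1) + fib(n - 2);
        memo.put(n, result);
        return result;
    }
}
```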

Merge sort is the poster child of divide and conquer when this gets explained in a lot of computer science classes.

And the idea is that we take in a bunch of unsorted values, like the following.

And the idea is that we divide this array in such a way that we keep dividing.

And then we merge up the sorted results to sort this array in ascending order.

And it kind of looks like this.

So we split the array in half and we say, Alright, I'm going to focus on the left hand side.

And remember, before I even do the right hand side, I need to finish the left hand side first, right? That's the order in which recursion is going to operate here.

And this process just holds through.

So this is the left hand array and what do I do I split this in half.

And now I decompose this into two parts.

But again, I first have to focus on this half.

And then I can focus on this half.

So I look at, you know, I split these, and I look at four and one.

And the base case here is that you can't really sort just one value.

And so I can stop splitting when I hit just one value.

And I compare these two, and I merge and sort them together just by a simple comparison operator.

Now I can look at the right hand side, and I basically split these.

So I have one integer here, and I take two and zero, and two and zero gets split even further.

And again, I'm down to this case where I just have digits, and so zero and two get compared, and they get merged up.

Now three has a linear time comparison against zero and two, and these get merged together.

And now we take one and four, and zero, two, and three, and these get merged together.

And now I've solved the left half of this array.

But remember, now I need to do the right half, we're evaluating in the order that recursion would consider this.

So when we split this, I get the right half of the array.

And I do the same thing on this side.

So I take this array, split it, I get negative one and seven.

This gets split, I compare them.

And it just so happens, they were already in sorted order.

So these get put back in the same spot.

Now I take the right half of the array, right, and I split this even further.

And so when this gets split, I have 10, and nine and 20 get split; nine and 20 are individual digits.

And when I compare them, they are already in sorted order.

And then I compare all the digits here, and they get merged back in sorted order.

And then again, finally I compare these digits, and they get merged back in sorted order.

And this brings us to the final merge, remember divide and conquer is all about dividing your problem into subproblems.

And then merging the results together.

So what we've done is we've just recursively merged all the results together from the recursive sub calls.

And this is the final merge.

And just to emphasize how this comparison works to ensure that these things get put back in sorted order, we do a linear comparison.

And what that means is we basically have two pointers, starting with the left hand side and we say alright, which number is smaller, and we know negative one is smaller, so we put this in its spot and increment the pointer.

And then we compare these two values; well, zero is smaller here.

So we put that in spot and increment the pointer.

And we keep doing this as we compare values.

And notice here, we've actually run out of values in the left hand array.

And since we already know the right hand array is sorted, we just placed these back in the positions they belong.

And as we look at the resultant array, we noticed that Merge Sort has sorted the input completely.

And so this right here is the sorted solution.

And now the question is, how do we build something like this? How do we devise a recursive algorithm that could construct a sorting solution, you know, to a problem like this? We're gonna look at some code to do that.

So we have a blank slate here.

And we want to consider how Merge Sort would work to satisfy the properties that we looked at.

Now, the first thing I want to do is build the recursive call.

And so remember, Merge Sort takes in an array, and it sorts it.

And we're going to do this in place.

So I'm going to build a function that's public, and static.

And it's just going to be void.

So we're going to modify the original input array.

And what I'm going to take in this is going to be called merge sort.

And it's going to take in just a one dimensional integer array, and we'll call it data.

And it will have a start, and an end, which will represent the indices that we're working with.

Now, for merge sort, this is going to be a recursive call, we need to consider the base cases.

Remember, we work from the start to the end.

And if those values overlap, then we've hit our base case.

So we ask this question: if start is less than end, then we can continue doing work.

But if the start passes, whatever the end pointer is, then there's no more work to continue.

We've already sorted the data.

And so remember, we're all about taking the array and splitting it in two halves, we want to divide the problem into two halves and solve them independently.

And so the first thing that we can do is we can calculate the midpoint.

And so what we do is we say int mid is going to be basically whatever the start index is of the array plus the end index, and we divide that by two, and that's going to be the midpoint that we're working with.

And what do we want to do here? Well, what we want to do is we want to divide the array into two parts.

And so it would make sense for us to just call merge sort.

We're dealing with the same data array.

But our bounds are what has changed.

And so the start is again, for the first half going to start at the same position, but the end is going to end at the midpoint.

And that's going to be the left sub half.

Now, if we think about the right sub half, the start is going to change, right? The start is going to be whatever the midpoint is plus one, all the way to the end.

And this is how we can consider splitting this array into two different sub halves.

But the question is, how do we how do we merge this data, right, what we're doing is we're just continuously dividing and dividing and dividing, but I want to be able to merge the data in sorted format.

So let's build a function called merge.

And what merge is going to do is take the original data array, and it's going to take the start, the midpoint, and the ending index.

And remember, that last animation that we just looked at does this kind of linear time comparison to place the values in the correct spot when it's looking at the two sorted sub arrays. So I build a function here; it's going to be public, static, void.

And I'm just going to call this merge.

And again, it's going to take in an integer array called data, a starting point, a midpoint and an end point.

And now remember, we basically need to merge these values, but I don't want to modify the input array yet.

So I'm going to build a temporary array.

So it would make sense to build a temporary array to avoid modifying the original contents.

And so in order for me to do that, I can just build a simple temporary array here.

And it will be a new int array.

And the size is just going to be dependent on the indices, right, so I have end minus start, plus one.

So this is going to build me a pre allocated buffer of memory to hold all the data for the sub arrays that I'm dealing with, from the start and end index.

And now what I'm going to do is I'm just going to copy the values.

So I don't want to lose a reference to the start or midpoint.

So I'm going to say int i equals start, we'll say j is equal to mid plus one, and k is equal to zero.

and k is going to be this kind of tracker variable that we use to keep track of the values that we put into this temporary.

So let's continue.

Now recall back when we were merging the data together in that final sub call, that's a good way to kind of realize how this is working.

So we're basically doing a linear time comparison of the values at the pointer in the left array and the pointer in the right array, and we're saying which one is smaller; whichever one is smaller is going to be placed first in this temporary array.

And so I'm basically saying, while i is less than or equal to mid, which is going to be the left sub array.

And we also want to say j is less than or equal to the end; then we can continue.

And what is this saying? What this is saying is while both of the sub arrays have values, then try and merge them in sorted order.

Right, and that's what we want to do.

So we're starting with I, which is the left sub array up to the midpoint, and then j, which starts from the midpoint to the end, is going to be the right sub array.

And we just want to compare these values.

And so we ask a question, we say, Alright, well, if data sub i is less than or equal to data sub j, then we know that the value in the left sub array is less than or equal to whatever the value is in the right sub array.

So what we can do is we can say, Alright, in this temporary buffer, I'm going to put in at the index k data sub i.

So the smaller value gets put next in the temporary.

And now what I want to do is I want to increment i, since I've already placed it in the array, and I want to increment k, so I don't overwrite this value.

Now if this condition doesn't hold true, then what I can say is well, okay, so that would imply the opposite.

So I would just say temp of K is going to equal data sub J.

And then what we would do is we would say k plus plus, and j plus plus.

And you can also get fancy, you can come in here and do something like k plus plus here.

And you can do something like j plus plus here, it's both, it's going to be the same thing.

So if you want to simplify your code, that way, you can do the post increment operator to reduce the amount of code.

And this handles basically the comparison between the sub arrays.

But remember the conditional here; the conditional says we only do this while both sub arrays have values to compare against.

And the example that we looked at, we actually ran out of data in the left sub array.

And we just had to just blindly place all the data in the right sub array into the original array.

And so we need to satisfy that case here.

And in order to do that, we can just say, while i is less than or equal to the midpoint, so while there's basically data to traverse, still, we're just going to place it into the position of the temporary array where it belongs.

This would be data sub i, and again, we would do k plus plus and i plus plus.

And that handles exhausting the left sub array, if the right sub array has run out of values, so we say, add the rest of the values from the left sub array into the result.

Now on the opposite side, if the left sub array has run out of values, but we still have data in the right sub array, then we just need to have the reverse conditional and we say, while j is less than or equal to end, then we're just gonna say temp sub k is equal to data sub j.

And then again, we just increment K.

And we increment J.

And this is the same thing, we're adding the rest of the values from the right sub array into the result.

You may ask, Are we done yet? And we're almost done.

But we need to remember we built this temporary array, and this is void.

And so we haven't really done any work yet.

And so the question we need to ask is, how do we modify the original data array in memory.

Now, we don't want to modify the entire array, we only want to modify the subsection that we're dealing with right now, in whatever recursive call that we're in.

And so this brings us to the copy phase, how do we copy this data over from temp into the right positions of the data, which is the original data.

And it turns out, we can just have a simple for loop where we say i is equal to start.

And while i is less than or equal to end, right, so we're only taking the subsection that we're dealing with.

So from start to end, right, which could be any, any bounds at any point of the recursion.

So i starts at start, and while i is less than or equal to end, we're just going to increment i.

And in each iteration, we're going to load up the original data array at the index i to be equal to whatever's in the temporary array at the value of i minus start.

So if the data array that we're dealing with, if the start is 12, then what we would do is we would say, all right, well, I equals 12.

So 12 minus start is zero.

So we're going to load data sub 12 with whatever's at temp sub zero.

And this is the copy phase.

This is how we actually overwrite values in each sub array.
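
Putting those pieces together, here is a minimal Java sketch of the mergeSort and merge functions being described; the exact method and variable names are my own, so treat it as an illustration rather than the literal code from the video.

```java
public class MergeSortDemo {

    // Recursively sorts data[start..end] in place.
    public static void mergeSort(int[] data, int start, int end) {
        if (start < end) {                      // more than one element left to sort
            int mid = (start + end) / 2;        // midpoint of the current bounds
            mergeSort(data, start, mid);        // sort the left sub half
            mergeSort(data, mid + 1, end);      // sort the right sub half
            merge(data, start, mid, end);       // merge the two sorted halves
        }
    }

    // Merges the sorted ranges data[start..mid] and data[mid+1..end].
    public static void merge(int[] data, int start, int mid, int end) {
        int[] temp = new int[end - start + 1];  // buffer so we don't clobber the input yet
        int i = start;                          // pointer into the left sub array
        int j = mid + 1;                        // pointer into the right sub array
        int k = 0;                              // tracker for where we are in temp

        // While both sub arrays still have values, place the smaller value next.
        while (i <= mid && j <= end) {
            if (data[i] <= data[j]) {
                temp[k++] = data[i++];
            } else {
                temp[k++] = data[j++];
            }
        }
        // The right side ran out: copy the rest of the left sub array.
        while (i <= mid) {
            temp[k++] = data[i++];
        }
        // The left side ran out: copy the rest of the right sub array.
        while (j <= end) {
            temp[k++] = data[j++];
        }
        // Copy phase: write the merged values back into the original range.
        for (i = start; i <= end; i++) {
            data[i] = temp[i - start];
        }
    }
}
```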

Let's actually run some input here, I'm going to have a new input array, we'll say, you know, data equals new int.

And we'll just load it with some values; we'll say negative five, 20, 10, 3, 2, 0.

And what I want to do is I want to sort this.

So I'm going to call merge sort.

And I'm going to give it the data array, the start is going to be the zeroeth index.

And the end is going to be the length minus one.

And so this should sort the array in place.
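
As a small sketch, the driver being described might look like this inside the same class (the literal array values here are my best reading of the transcript):

```java
public static void main(String[] args) {
    // Assumed input; after sorting we expect -5, 0, 2, 3, 10, 20.
    int[] data = new int[] { -5, 20, 10, 3, 2, 0 };
    mergeSort(data, 0, data.length - 1);   // sorts the array in place
    System.out.println(java.util.Arrays.toString(data));
}
```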

And so what I can do is I can put a breakpoint to look at the array once it's been sorted.

And then we'll look at the call stack to see what actually is happening behind the scenes.

So I'm going to debug and we'll notice I get here and merge sort has completed.

And what I can do is I can look at this stack frame and look at the data.

And you notice that we have, starting at the zeroth index, negative 5, 0, 2, 3, 10, and 20.

And so this is the output in sorted order.

But this doesn't really do us any good unless we understand what's happening at the level of the call stack.

So let's take a look at that.

Let me put a breakpoint in the merge sort function.

Let's kind of look and see what's happening.

If I dive in here, the first stack frame gets added to the call stack for merge sort.

And I basically ask: is start less than end? And the answer is yes, so we still have work to do.

And so when I go in here, I calculate the midpoint.

And I say I want to divide this array into two sub halves.

And the midpoint here is two.

And so what I do is I say, Alright, I'm going to divide everything from zero to two into its own set of sub arrays, and then I'll handle the right half.

And remember, I can't even go to the right half until the left half has completed.

So I dive into this.

And now I'm looking at everything from zero to two.

And I have to break this up even further.

And so now I'm at this position where the midpoint is one.

And again, I grow the call stack, and I'm still only on the left hand side.

And I calculate the midpoint again, and I go again.

And now we don't satisfy this, right, because zero and zero are equal.

And so we actually pop this off the call stack.

And now we return to the next recursive call.

And the next recursive call says, Okay, now I want to handle the right hand side of this, right.

And when I dive into this, I grow the call stack again.

And again, we pop off, right, we don't satisfy this base case, and we kind of just continue this operation.

And so now I've reached the basically the base case on the far left hand side of the first left sub half of this array.

And what it's asking me to do is it's asking me to now merge these values in sorted order.

So if I dive in here, and we look at the merge, let's analyze the start, the mid, and the end.

Remember, I'm only dealing with basically two values here.

And so if I go in here, we have a temporary array, and it's only of size two.

And that's because my base case here for merge sort is only really comparing two values.

And so we go in here, and we say, all right, which one's smaller, the left value, or the right value.

And I come in here, and I say, Okay, well, here, the left value is smaller.

So I load up K, I load up temp, and I put in negative five.

And then I have another while loop.

And remember, because I ran out of values to compare in the left sub array, I need to basically take all the values in the right sub array and put them in the positions they belong.

So I come down here and I load up the values, increment the counters.

And now we're done, because we've hit this base case.

So now, I have this sorted sub array, negative five and 20.

And what do I need to do I need to put these in the original spot, right? If we look at these values, I have, you know, I, J and K.

What does that mean? Well, it means that from the position that I'm dealing with in the original array, I need to overwrite those values.

So that's what this replacement is going to do.

So coming in here, I replace value one, I replace value two.

And now I'm done.

And that is the process that we go through for a single iteration of merge sort.

And this process will basically continue as we are dealing with bigger and bigger sub arrays as we recurse back from the bottom up, right, and now we're merging again.

And notice, like, since we did the left sub half, which was just two values, now we're dealing with the right sub half, right.

And now we have three values in the right sub half of another sub problem.

And so we just continue comparing and going up and going up.

Now, we won't go through every stack frame in this call.

There are quite a few, as we saw in the animation.

But I encourage you to rewatch that animation because that is the order that the call stack will be evaluated.

And that's the order in which things will recurse back up from the subproblems to get merged into some overall solution.

And that's one of the key components of divide and conquer.

Linked lists are really common data structures used to store data that is not necessarily stored contiguously in memory.

And it turns out, you can do a lot of cool recursion on these sorts of things.

We're going to look at linked list reversal.

And we'll look at an animation to walk through this code.

Now the idea is that we have some sort of linked list.

Let's say we have values in sorted order like this.

And we'll just have a bunch of pointers.

And the idea is that right now the head node, one, points to two, which points to three, four, five, and six.

Instead, we want six to point toward five, five to point toward four, and so on.

And when we look at this code, this reversal code, we ask this question: all right, if the head node is null, right, if we pass in a null value, or the next value from the head is null, then we just return the head.

And this kind of brings us back to that idea of considering base cases.

What's the smallest thing I could pass in? If I passed in null, then I can just return null.

And that would be a reversed list.

Right? And in the same vein, if I passed in just a single node, where there was no next node, then I could just return head again.

And those are good base cases for this solution.

But the question is, how do I even start the reversal? What's the unit of work I need to do and that's what we're going to look at.
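
For reference, here is a rough Java sketch of the reversal code on the slide (a Node type with a val and a next pointer is assumed, and the names are mine):

```java
// Reverses a singly linked list and returns the new head node.
public static Node reverseList(Node head) {
    // Base cases: an empty list or a single node is already reversed.
    if (head == null || head.next == null) {
        return head;
    }
    // Recursively reverse everything after the current node; p ends up being
    // the last node of the original list, which becomes the new head.
    Node p = reverseList(head.next);
    // Unit of work: make the next node point back at the current node...
    head.next.next = head;
    // ...and drop the old forward pointer so we don't create a cycle.
    head.next = null;
    // Propagate the new head back down the call stack to the original caller.
    return p;
}
```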

So we start with one, the head node that we pass in on the first iteration is one.

And we don't hit the base case.

And so on line three, we see that we call reverse list head dot next.

And that immediately pushes us to the next node.

And we have one stack frame on the call stack.

Now we're looking at two, and this is not null either, and it also doesn't have a next pointer pointing to null.

And so we execute line three again and add another stack frame to the call stack.

And that gets us to three.

And this process continues all the way until we get to six.

And when six gets passed in as a parameter, we say is it null? And the answer's no, but the next value is null.

And so what do we return, what we do is we return head, which is just six, and six gets returned as the node to the previous value.

So this is the base case.

And we return this to five.

And so now P has been evaluated to the number six.

And what we're saying, since the number five is the current head node, is five dot next dot next equals five.

And it turns out that what that means is we're basically just saying: okay, well, what I want you to do is take six and have it point back to five, because head dot next is six, and head dot next dot next, which is six's next, is going to be pointing to five.

And if that doesn't make sense, I encourage you to really slow down and read this code.

And think five is the head node.

And we want to say the next pointer's next position just points back to ourselves.

Now the problem here is that five is also pointing to six.

And if we were to trace this, this would be a cyclic dependency.

And this would never end right.

And so what we can do is we can just say five's next can point to nothing, right? This is null, and so that gets dropped off.

And what we return is we return six, because we want six to be the new head.

And so six will just get propagated down the call stack to the original caller to be the new head node.

And that's why we return p here.

And so now we return and we get to four.

And when we look at four, we know that P is six.

And what we're asking here is we're saying four's next node is five, and I want five to point to four.

And so again, we do the same thing: we take five and have it point to four.

And to avoid the cyclic dependency, we drop this pointer here, this connection.

And what do we return? We return six, because again, six gets propagated through, and six is always going to be p in this case.

And so now I return.

And I know six is P, but head dot next is four.

And so I want four to point to three.

So I say head dot next dot next equals three.

So I draw a pointer.

And again, I dropped the cyclic dependency and I get rid of this connection.

And now I return, P is still six, I return to two and I say two's next value, which is three, I want three dot next to point to two.

So we do this, I dropped the pointer and I return.

And then finally we do this again.

I dropped the pointer and now we're done.

So let's take the opportunity to look at the call stack and really understand how this is working in a little bit more detail.

In order to save some time I've written some code.

The first piece of code I want to look at is this idea of a node.

Now this node is just an abstraction that holds a value and a pointer to the next node.

And so this is the easiest way to build a singly linked list.

Now, I also made a method called print linked list.

And this will just print all the values.

And this is a verification that we can use to ensure that we've reversed the linked list correctly.

And the final thing I want to look at is the actual code.

So this is the same code that was on the slide.

And I want to look at the call stack as we debug through this.

So I have a singly linked list here, 1, 2, 3, 4, 5, just as we looked at, and they're all linked together.
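
As a sketch, the node abstraction, print helper, and test list being described might look something like this (the names are assumptions on my part):

```java
public class ReverseListDemo {

    // A node holds a value and a pointer to the next node in the list.
    static class Node {
        int val;
        Node next;
        Node(int val) { this.val = val; }
    }

    // Walks the list from the head and prints every value, as a verification step.
    public static void printLinkedList(Node head) {
        for (Node curr = head; curr != null; curr = curr.next) {
            System.out.print(curr.val + " ");
        }
        System.out.println();
    }

    // Same reversal from the slide, repeated so this sketch is self-contained.
    public static Node reverseList(Node head) {
        if (head == null || head.next == null) return head;
        Node p = reverseList(head.next);
        head.next.next = head;
        head.next = null;
        return p;
    }

    public static void main(String[] args) {
        // Build the singly linked list 1 -> 2 -> 3 -> 4 -> 5.
        Node head = new Node(1);
        head.next = new Node(2);
        head.next.next = new Node(3);
        head.next.next.next = new Node(4);
        head.next.next.next.next = new Node(5);

        Node reversed = reverseList(head);  // reverse it recursively
        printLinkedList(reversed);          // expect: 5 4 3 2 1
    }
}
```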

And what we want to do is we want to actually see how the reversal is working.

So if I run and debug this, we're going to look at the call stack.

So the first thing I hit is this call.

And you'll notice in the stack frame, as I dive into this function, this is the initial stack frame that gets put onto the call stack.

And I ask myself: is the node that I'm looking at, which is one, is it null? And the answer is no.

Is the next value null? The answer's no, it's two, right.

And we can see that here.

So I continue through.

And I just call reverse list again, which is a recursive call.

And as I dive into this, I add another stack frame to the call stack.

And now you see my value is two.

And my next node is still not null.

So I continue through.

And I add another recursive call, which adds another stack frame to the call stack.

And I continue.

And this process is going to keep continuing.

Now notice here, the value is five, but my next value is actually null.

And that satisfies this base case here.

So if we continue, you notice I just return the head here.

And this stops the recursion.

So notice this stack frame should get popped off.

So as I continue through, that stack frame gets removed.

And now I'm looking at four.

And it actually shows me what the result of this recursive call was for P.

So as I step over, we can look at p and see that P is just five.

And so I'm basically saying the node dot next, so let's evaluate that.

So node dot next has a value of five.

And I'm basically saying I want five to point back.

So I want five dot next to point to whatever my current value is, which is four, right? So we know node dot val is four.

So I'm changing the pointers here.

So as I step over, now I have four pointing to null as I execute this, so we step over one more.

So as I look at node dot val, node dot next should be null, which it is.

And now the value p, which was five, should be pointing to four now.

So p dot next dot val is four.

Right.

And so the whole idea here is that now we return the very last node again, and we pop this off the call stack.

And the node that we're looking at at this point is three.

And right now three is pointing to four, right? So if we look at node dot val, which is three, we can say node dot next dot val, and this is four.

But this is wrong, right? Because we want four to be pointing to three, not three pointing to four.

And so that's what this line is doing.

I'm saying I want four to point back to three.

And so I do that, and then I drop the connection.

And so now if I evaluate this, and we look at p, we can actually see the progress that we've made five points to four, and four points to three.

Okay, so we're not done yet.

But we're making progress.

And as we return from here, we pop off the call stack.

And we just keep doing this process.

We return, we pop off the call stack, we come up here, we return, and we pop off the call stack.

And now we're back to the original call.

And remember how we propagated the very last value throughout all these calls.

So as we look at the reverse value, reverse dot Val, it's five and this is our new head node, which was the goal of this problem.

And so if we come in and we look at reverse, we say okay, we have five, the next value is four, the next value is three, the next value is two, the next value is one, and then we're complete.

And so this is how the call stack works for a linked list reversal.

A really fun problem to consider with linked lists is how do you merge two sorted linked lists recursively.

We're going to look at how the call stack works for this. Let's analyze this small snippet of code and see if we can conceptualize what's happening on the call stack.

We take in two head nodes.

One is a singly linked list of values, and the other is a singly linked list of values as well.

And we ask this question, the first question is: what's the smallest input I could pass in? And that's a good consideration for our base case, the same formula that we've been applying.

If I pass in a head node for a that's null, then I can just return b, because b would be merged with null, which would just be itself.

And in the same vein, if b is null, and I merge that with a, then I can just return a.

And if they're both null, then null merged with null is also just the null value.

So this holds true.

And that's the base case.

But let's consider the other side of things.

And this is kind of a similar comparison that we saw with merge sort.

Right now we're considering, Okay, first of all, base case.

But second of all, what's the unit of work that we need to do for the recursion?

And I think it'll help if we look at two lists.

So we have two sorted lists of values.

And we want to go through this recursive call.

And the first thing that we do is we add a stack frame to the call stack.

And it basically says, the head nodes that I have are one and four, so neither of them are null.

And what I want to do is I want to say: all right, which value is smaller, the value in the first node or the value in the second node, a or b? A's data is one and b's data is four.

So that first conditional is the one that we'll recurse upon.

And I basically say one dot next is going to be pointing to sorted merge.

And node a is going to be advanced by one value, so I add another stack frame onto the call stack.

And so now I pass in eight.

And now eight is getting compared with four.

And we still haven't hit our base case, right.

And so I go to the else statement.

And I compare eight and four and four is obviously smaller.

And so this is going to be the next node that I consider.

And I just continue this process.

So now eight and 11 get compared, and since eight is smaller, I pass in 22.

And I add another stack frame.

And now 22, and 11, get compared.

And since 11 is smaller, I pass in the node right after 11.

And these get compared.

And notice in the calls, the recursion needs to complete, but I'm actually setting these results as the next value.

And so as we compare 22 and 16, 16 is smaller, so I pass in 20.

And now 20 is smaller.

But it's interesting because there is no node after 20.

And so now I'm at this point where I have null.

And now we consider the base case: if we have a null value, which is b in this case, then I just return a.

And if I return a then I'm just returning the reference to the node of 22.

So here's the base case.

And what I can do is I can just simply return 22 here, and this gets popped off the call stack.

And this is where the unravelling starts to begin.

So when I return 22 here, I actually change 20's next pointer, right, because I'm saying 20 dot next is equal to the result of sorted merge.

And since sorted, merge returned 22, I can now modify this pointer.

And from here, I return a.

And when I return a I'm just returning 20.

And this gets popped off the call stack.

And now, you know we're looking at 16.

And we're saying, what should 16's next pointer be? Well, 16 should point to 20.

But it already is pointing to 20.

So we don't really need to do anything.

And so I return 16 from here.

And this gets popped off the call stack.

And again, in the same case, 11 is already pointing to 16.

So we don't need to modify anything.

But we return 11 from here and this gets popped off the call stack.

And now we have this question eight was pointing to 22.

But since we returned 11, we can now say that eight is going to point to 11.

And that means we modify the pointer, which we've done.

Right, and now we return eight from this stack frame, a smaller value, and this gets popped off the call stack.

Now we compare eight and four; four was pointing to 11.

But since eight got returned from the recursive call, we take four and we point it to eight, and now we return the smaller value from the stack frame.

And what do we do here? Well, what happens here is we pop this off the call stack, and four is the smaller value.

And so now what we want is we want one to point to the four node.

And this is our final stack frame.

And from here, we just return one, which is the head node, which is the smallest value from the two sorted lists.

And from here, this gets popped off the stack frame.

And if we follow these edges, or these pointers, we notice one goes to four, goes to eight, goes to 11, to 16, to 20, to 22, to 40.

And this ends up being our sorted list.

And even for this, we can take a second to look at the code and understand the call stack.
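
Before stepping through the debugger, here is a minimal Java sketch of the sortedMerge code on the slide, reusing the same kind of Node abstraction (the exact names are assumptions):

```java
// Merges two sorted singly linked lists and returns the head of the merged list.
public static Node sortedMerge(Node a, Node b) {
    // Base cases: merging with an empty list just gives back the other list.
    if (a == null) return b;
    if (b == null) return a;

    if (a.val <= b.val) {
        // a's head is smaller, so it stays in front; its next pointer becomes
        // the merge of the rest of a with all of b.
        a.next = sortedMerge(a.next, b);
        return a;
    } else {
        // b's head is smaller: the mirror image of the case above.
        b.next = sortedMerge(a, b.next);
        return b;
    }
}
```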

Alright, so I've gone ahead and I've actually built a couple of linked lists.

So we have one here.

So 1, 5, 13, 14, and 550.

And we have another, completely independent linked list: 2, 15, 130, 203, 350.

And the idea here is how do we take these and merge them in their sorted order.

And so we take the same code that we were just looking at, but now we want to analyze it from the call stack's perspective.

And so I'm going to come over here, and I'm going to put a breakpoint on the initial call.

And we're going to debug into this function.

So as I dive into this function, you'll notice the call stack grows.

And now we have two lists, this would be a and b from the example.

So we have one and two.

And this corresponds to the two initial nodes that we have here, two and one.

And what we're doing is we're checking if you know the first list is null, then we return the second list.

And if the second list is null, we return the first list.

This, of course, does not hold true yet.

So we continue and compare the values.

Since one is less than two, we jump in here and we do another recursive call.

And notice that we pass in the next value that one was pointing to, but two stays the same.

And we continue this comparison.

Now since five is greater than two, we go to the second conditional, and we do a recursive call.

And notice the stack frames are just growing.

They're just growing.

And this stack frame is dependent on the result of this stack frame.

And as I continue through and do these comparisons, I have another stack frame.

And each subsequent stack frame has a dependency on the results of whichever one is above it.

And so we keep doing this.

And we do this until we hit one of these base cases.

So I continue through.

And now I've hit a base case, the value that we have is list one, which is 550.

And I'm just going to return this.

And when I return it, what happens? Well, it's the result of this function.

So I say list two dot next is 550.

And when that gets evaluated, I just return here.

And you notice that the stack frames will start shrinking now because we've hit a base case.

And so as I continue through, and I'm returning these values, you'll notice that our call stack is shrinking in size.

And we're basically reassigning the pointers to the next node.

And it shrinks and shrinks and shrinks.

And now finally, we get to a point where the sorted operation is complete.

And so if we look at sorted merge, if we look at this value, we notice that the first value is one, the next value is two, then five, then 13.

And it's now in sorted order.

So we've taken two sorted lists and merge them in ascending order, which is the expected output.

And this is just an example of taking two linked lists and showing how the merge operation works on the call stack.

This brings me to one of my favorite topics, which is trees.

And trees are one of the really fantastic data structures that work really well with recursion.

And the first question we're going to look at is inserting a value into a binary search tree.

Now a tree, or especially a binary search tree, is going to be a series of nodes, and we're going to basically start from the top down; we're going to have a bunch of connections, and the most children that we can have from a single node is two, and the least you can have is zero.

And so what we're going to do is we're going to add a bunch of numbers here and analyze the properties of this tree.

Notice that everything to the left of the root node 100 is less than 100.

And everything to the right is greater than 100.

Now this property it turns out holds true recursively.

So if we look at 80, everything to the left is less than 80.

Everything to the right is greater than 80. But it's interesting, because everything to the right is greater than 80, but also still less than 100.

And so if you look, the largest value on the subtree of 80, on the right subtree is 95.

And 95 is still less than 100.

And so if you're not familiar with the properties of binary search trees, I encourage you to just do a quick look at the rules for that.

But this is pretty much the entire rule set for a binary search tree.

And we're faced with this task and the task is to add this node, and the node is 108.

And if we look at this value, we basically say, Okay, I'm going to start at the top of this tree, and I'm going to figure out where can I place 108.

And I compare, and I say: is 108 greater than 100, less than 100, or equal to 100? And I say it's greater than, so I go to the right.

And I ask: is 108 greater than 120, or less than or equal to 120? And we see it's less than, so I go to the left sub half.

And I ask the same question: is 108 greater than 110 or less than 110?

And I see it's less than, and so now I go down here.

And I draw a connection.

And this is the point where 108 belongs.

And here you can see it still satisfies the rule set, right? 108 is greater than 100.

So it satisfies that property, it's less than 120.

It's less than 110.

So this is the valid position for 108.

And so let's look at the code.

Turns out the code for this is extremely simple.

And like I had said before, we consider the base case: what's the smallest thing I can pass in? Well, what if there is no root? Right? What if this is the first value that we're inserting into the tree? Well, if that's the case, then what I do is I just create a new node, I set the data, and I return that value.

Right? And it turns out, that's a fairly good base case.

Now, if we consider the other case, now we need to start thinking about, okay, well, what if I need to do some comparisons? And this is where I start comparing the data.

So let's look at what's happening.

The first question says: if I hit null, I've recursed to my end goal, according to the search properties of the binary search tree.

Now remember, binary search tree node, insertions will always happen at the leaf level.

So they always happen at the end.

So if you've done all your comparisons until the very end, then you've hit a valid position to insert the data.

Right? And that valid position would be when you have no more comparisons to do, which is implied by the head node being null.

Now the next conditional I look at is basically saying, Okay, I'm at a node, and I want to compare it to my current node.

So remember, when we were comparing 108 to 100, I ask this question, I say: is the node I want to insert greater or less than this value?

If it's greater, then I say head dot right equals, and then I just make another recursive call.

And I progress to either the left or right half of the subtree.

The other side is just the opposite.

So if I'm less than the current node, then I want to go down the left hand side of the sub tree.

Now, this final piece right here, is, once I've done all this work, I've done all the insertions, I just want to return the original root node.

And that original root node just says basically, here's the tree that you started with.
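
Here is a rough Java sketch of the insertion code being walked through (the TreeNode type and the exact names are assumptions):

```java
public class BstInsertDemo {

    // A binary search tree node with a value and left/right children.
    static class TreeNode {
        int data;
        TreeNode left, right;
        TreeNode(int data) { this.data = data; }
    }

    // Inserts a value into the tree rooted at head and returns the (possibly new) root.
    public static TreeNode insertNode(TreeNode head, int value) {
        // Base case: we've recursed to a null spot, so this is where the value belongs.
        if (head == null) {
            return new TreeNode(value);
        }
        if (value > head.data) {
            // Greater than the current node: recurse down the right subtree.
            head.right = insertNode(head.right, value);
        } else {
            // Otherwise: recurse down the left subtree.
            head.left = insertNode(head.left, value);
        }
        // Return the original root so the caller keeps a handle on the whole tree.
        return head;
    }

    public static void main(String[] args) {
        // A few of the values from the walkthrough; 108 ends up as a leaf under 110.
        int[] values = { 100, 80, 120, 110, 95, 108 };
        TreeNode root = null;
        for (int v : values) {
            root = insertNode(root, v);
        }
    }
}
```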

Let's look at the call stack.

During this insertion process.

I call insert node.

And I'm comparing 100 and 108.

And I know 108 is greater than 100.

So I recurse down the right hand side.

Now when I get to this point, I add another stack frame.

And I'm comparing 120 and 108.

And I know 108 is now less than this.

So I recurse down the left hand side.

Now here, I'm comparing 108 and 110.

Add another stack frame, I compare these values.

And because it's less than I go to the left hand side.

And this is interesting, because now the value that I compare against is null.

And remember, when we look at the code here, null is just the base case.

This is what we consider when we insert a node.

And so I just create that 108 node and I return it right.

And so here is where I would say: okay, new node, node dot data equals 108.

And I would return this node.

And so from this stack frame, I would actually return the 108 node and pop this off the stack.

And what I would do here because I was saying 110 dot left, equals whatever that value was.

Now I can draw a connection to 108 and just start returning those values up the stack.

And that's the process of inserting a value into a binary search tree.

Let's look at another fun problem, which is also kind of a fun depth first tree traversal problem.

And the purpose here is to print all the leaf nodes.

So we're looking at the same tree.

But we want to build a function that prints out all the nodes that we see here in the order from left to right.

So, from left to right: 30, 60, and so on, all the way through 150.

Now if we think about the recursion for how this works, mechanically, we start at the root node, and we go all the way down.

And we hit a leaf node here, so we would print it out.

And what I would do is I would basically pop off the call stack and go to this node.

But I would immediately go to this nodes, right subtree.

And this would be another leaf node because it has no children.

And I would print this and recurse up the stack.

And then I would go back up to 80.

And now go down, it's right half.

And here, I'll go down its left sub half, find a leaf node return, go down the right sub half, find a leaf node return.

And this process just continues.

And the order of execution is really important to understand here.

Remember, if I'm going down the left sub half, I can't even consider the right sub half until the left sub half has been fully explored.

And that's part of the property that depth first search follows.

And we'll look at another DFS example with a graph in just a second.

So now I go down, I recurse down the left sub half, I find the leaf node I print it, I recurse back up and do this all the way until I'm basically out of leaf nodes to find which brings us here.

And now we're done.

So here's the code for this.

And we'll look at this to understand how things get added on the call stack as well in the code.

But again, our base cases, what happens if we pass in a null value, that's a pretty good base case, right? Because then we would just return there's nothing to print if you just pass a null.

But we also have to think about the goal, the goal is to print the leaf node.

And so we want to keep recursing until we hit a leaf node.

And the properties of a node being a leaf node is that there are no children.

And so this is where we start to ask, okay, if there are no children to the left, there are no children to the right, then I can actually evaluate the underlying value of that node and just return.

Now, if we're not a leaf node, then we need to recurse down the left sub half of the tree, which is what we're doing here.

And if we have a right sub half to recurse, then we go down the right sub half after the left sub half has been explored.

So this allows us again, this is why we evaluate the left hand side first.

And then we go down the right hand side recursively.

And we do that for all recursive sub trees up until we're done.
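
A minimal Java sketch of that leaf-printing code might look like this, reusing the TreeNode type from the insertion sketch (the names are assumed):

```java
// Prints every leaf node of the tree from left to right (a depth first traversal).
public static void printLeaves(TreeNode root) {
    // Base case: an empty subtree has nothing to print.
    if (root == null) {
        return;
    }
    // A leaf has no children, so print its value and return.
    if (root.left == null && root.right == null) {
        System.out.println(root.data);
        return;
    }
    // Otherwise fully explore the left sub half first...
    printLeaves(root.left);
    // ...and only then explore the right sub half.
    printLeaves(root.right);
}
```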

So let's look at the call stack and see how this is executing. What we're looking at here is the same program.

So we have some code here, print leaves, and it's the same code that we looked at before.

So if the input root node is null, we just return, we evaluate the left and right.

And this basically checks if a given node is a leaf.

And we print it, and then we pop ourselves off the call stack.

Now if we don't satisfy that, then we go down the left and right halves of the subtree.

So using the same code that we looked at for insertion, I've actually built up a tree to use.

So I have some input with a bunch of numbers.

These are all the numbers that we were looking at before I believe.

And it's just a random set of numbers.

So nothing special.

And we just insert them.

So I have a function here from the previous example called insert node.

And it just builds us a tree for us.

And so now what I want to do is I want to print all the leaves here.

So what I'm going to do is I'm going to add a breakpoint here.

And we're going to debug straight into it.

And the first question I ask is: is the root node null?

And the answer is no, it's not null.

Right? So this first value.

And so what do I do? Well, I recurse down the entire left half of the subtree.

So remember, it starts at 100.

And now I go down to the left.

And since it's not null, I skip over this.

And now I just recurse down the left half of the tree.

And for that node, again, it's not null.

And for that specific node, I again recurse down to the left sub half of the tree.

And I just keep going down the left half; I don't even explore the right half until this left half has been fully explored.

And you notice we just printed out the first leaf node, which is 30.

And it's only once I've explored that entire sub tree's left half that I can now go to this other sub tree's right half.

So I dive in here.

And now I go down the right half.

It turns out that the right half again, is just another leaf node.

And so I can just print this value.

And if I put a breakpoint here, you'll notice that we just keep printing.

You know, we hit nulls, we hit nulls, and we print the leaves.

And you can see down here, this is the leaf node evaluation that we're doing.

And the reason I wanted to walk through this animation is to just further emphasize that when we're thinking about the call stack, I cannot even consider evaluating this expression until every recursive call, and every sub call, has been fully evaluated.

Once that holds true, then I can start unraveling the right sub trees and I work from the bottom up.

And that's the idea for these kind of DFS traversals that we're dealing with.

This brings us to our last section of algorithms that we'll be looking at.

And we'll look at a simple example with graphs.

And we'll just see how similar working with graphs is to something like trees.

And we're going to look at a very popular algorithm known as depth first search.

And the idea is that we start at a node like a, and we basically say we're searching for a node, let's say the node we're searching for is H.

And so I start from a and I say, Okay, let me get all the neighbors and see is this my value, so a is not my value.

So I go to my neighbor, and I say B is not my value, go to C, C is not my value, go to D, D is not my value, but I get all of D's neighbors.

And this is an interesting point, because I'm searching for H, right, which is in the far half of the graph.

But when I get my neighbors, let's say the first neighbor I get is E.

And so I actually have to explore down the depths of E before I even consider going to H.

And so I go down, and I see E.

And now I go to F.

And now I have to look at all of F's neighbors.

So the first neighbor I go to is K.

And if k had neighbors, I would explore all of those neighbors.

But it doesn't, so I can just pop off.

And now I go to J and back to F.

Now I go to I and back to F, and I kind of recurse back up the call stack.

So now that I've explored all the neighbors of F, right, I've explored one neighbor of D, and I only have one unvisited neighbor left for me to explore.

And it's G, so I go to G and I say: is this the value that I'm looking for? And it's not.

So I look at the neighbors of G and I say is this the value I'm looking for? And it is.

And this is the idea of depth first search.

And so we're going to look at a piece of code and walk through it line by line and see how something like this would work recursively.

We look at a piece of code like this, and we have the input node.

And to avoid cycles, which can happen in graphs, which is different than trees, we'll keep a set of visited values.

So we never want to visit the same node more than once.

And so that's what the visited set is going to do.

And the goal integer here is just the node that we're looking for.

So I know the graph we were just looking at was letters, but we're going to assume that we're working with a graph with integer values.

And we ask our base case questions again.

So think about the smallest thing you could pass in: if you pass in an empty node, a null node, then there's no way you could ever find the goal, because the goal is an integer value.

And so here, we would just return false.

So this is a base case.

The other base case is if we've actually found the node.

So let's assume the node you pass in is the goal node; then here we've found the node.

And so we would just return true here, and this would imply that we found the value that we're looking for.

But let's assume that we haven't found the value, just like the example.

The first thing that we need to do is we need to aggregate all of the neighbors from a given node.

And so we're going to assume the node API has a function call that allows you to aggregate all the neighbors.

And the first question we ask is: have we visited this node before? Remember, we want to avoid cycles.

And so we never want to visit a node more than once.

And so if it's been visited, we just ignore it and we continue.

But if it hasn't been visited, then we add it to the visited set.

And we start the exploration.

And we ask this question: have we found the node that we're looking for via this particular neighbor?

If we've found the solution, then we can just return right away.

Now if we haven't found the solution from this particular neighbor, then we'll continue recursing through the rest of the neighbors. And it's at that point, if we found it, we would return true.

And this brings us to our last base case, let's assume that we've traversed all of the neighbors.

And we never found the solution.

That would mean that the goal value that we're looking for was just never in the graph to begin with.

And this brings us to our final base case of just returning false.

And this is a way to say, hey, I've searched for this node, but it didn't exist in the graph.

It's false, it did not show up in the search.

And that's where we can terminate the algorithm.
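
Here's a rough Java sketch of the depth first search code being described, assuming a node type that stores an integer value and can hand back its neighbors (all names here are my own):

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class GraphDfsDemo {

    // A graph node holding an integer value and its adjacent nodes.
    static class GraphNode {
        int value;
        List<GraphNode> neighbors = new ArrayList<>();
        GraphNode(int value) { this.value = value; }
        List<GraphNode> getNeighbors() { return neighbors; }
    }

    // Returns true if a node with the goal value is reachable from 'node'.
    public static boolean dfs(GraphNode node, Set<GraphNode> visited, int goal) {
        // Base case: a null node can never contain the goal.
        if (node == null) {
            return false;
        }
        // Base case: this node holds the value we're searching for.
        if (node.value == goal) {
            return true;
        }
        for (GraphNode neighbor : node.getNeighbors()) {
            // Skip anything we've already seen, to avoid cycles.
            if (!visited.contains(neighbor)) {
                visited.add(neighbor);
                // Fully explore this neighbor's depths before moving to the next one.
                if (dfs(neighbor, visited, goal)) {
                    return true;
                }
            }
        }
        // Every reachable neighbor was explored and the goal never showed up.
        return false;
    }

    public static void main(String[] args) {
        // A tiny graph with a cycle: 1 -> 2 -> 3, and 2 -> 1 back again.
        GraphNode a = new GraphNode(1);
        GraphNode b = new GraphNode(2);
        GraphNode c = new GraphNode(3);
        a.neighbors.add(b);
        b.neighbors.add(c);
        b.neighbors.add(a);

        Set<GraphNode> visited = new HashSet<>();
        visited.add(a);                                 // mark the start node as visited
        System.out.println(dfs(a, visited, 3));         // true: 3 is reachable from 1
        System.out.println(dfs(a, new HashSet<>(), 7)); // false: 7 is not in the graph
    }
}
```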

I think it's important to spend just a little bit of time talking about optimizations that you can make when using recursion.

And the first one that we're going to look at is memoization and caching, which we talked about briefly when writing Fibonacci.

And the question that we're trying to answer here is how can we speed up our program by pretty much storing things that we've already re computed or computed for the first time.

So if I've computed some very expensive operation, and I have to re compute it again, in a subsequent recursive call, I want to check to see if I've done that work already.

And if I have, I'm just going to return the result instead of re computing that operation again.

And the best way to kind of conceptualize this is by looking at the Fibonacci recursion tree.

Here, we see that I'm computing f of three twice, and I'm recomputing f of two three times.

And the question is, why can't I just compute f of two once, and then just reuse that value, so I don't grow these sub trees.

And it turns out, you can do that really easily.

Let's look at the modified Fibonacci code.

Notice I have a hashmap here and hashmaps have this property, such that we can retrieve values from the map in constant time access, so big O of one.

And the reason is because the data that we put into the hashmap is hashed.

And so we have constant time access to the memory addresses in which that data is stored.

And so when I call the Fibonacci function, the first thing I always do is I say, Hey, does the cache have this value.

And you can also notice that I've pre populated the cache with the base cases as well.

Now in the instance, that the cache does not have the value, I still go about the recursive call, as I had done previously.

But I ensure that once that recursive call has been evaluated, I put the result into the cache.

And then I return my result.

And this is a really key component because the cache is global, right.

And so all the subsequent recursive calls are putting their results in the cache.

And as things get added, and popped off the stack, I can just keep cross referencing the cache to see if I'm doing redundant work or not.

And that saves me a lot of computational power.
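
Here's a rough Java sketch of that memoized Fibonacci (the map is a static field so every recursive call shares the same cache; the exact base case values are an assumption):

```java
import java.util.HashMap;
import java.util.Map;

public class FibMemoDemo {

    // Shared cache, pre-populated with the base cases fib(0) = 0 and fib(1) = 1.
    private static final Map<Integer, Long> cache = new HashMap<>();
    static {
        cache.put(0, 0L);
        cache.put(1, 1L);
    }

    public static long fib(int n) {
        // If we've already computed this value, return it in constant time.
        if (cache.containsKey(n)) {
            return cache.get(n);
        }
        // Otherwise do the recursive work once...
        long result = fib(n - 1) + fib(n - 2);
        // ...and store the result so no later call has to repeat it.
        cache.put(n, result);
        return result;
    }

    public static void main(String[] args) {
        System.out.println(fib(50)); // finishes immediately thanks to the cache
    }
}
```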

So that's the idea of memoization and caching.

And you can do a lot of this in a lot of different situations.

The next optimization I want to just discuss is this idea of tail call recursion optimization.

And people usually refer to this as you know, tail call optimization, or tail recursion.

And the idea here is that basically, the compiler in certain languages, especially functional programming languages, will optimize the number of stack frames that get added to the call stack to basically remove this idea of stack overflows in a lot of scenarios.

Now, the way that the compiler does this analysis is it really looks at the last function call.

And it has to be a recursive call.

And we'll look at an idea of comparing these two things.

And there was actually a response on the computer science Stack Exchange by this user, and he gave two examples.

The first is this simple recursive factorial function.

And notice that the final return value is a value multiplied by a recursive call.

Now, this is not tail recursive, because the return value is not just the recursive call.

And so in order for this to be tail recursive optimized, we need to modify it to look something like this.

Now, if you spend a second to look at this code, you'll realize that functionally it's doing the exact same thing.

But we've had to modify the parameters to get to a point where we can build a function that's tail call optimized.

And the reason it's optimized with this tail call recursive property is because the final return value is in itself just a recursive call.

And it turns out in certain languages, in their compilers, they've implemented some optimization techniques to exploit this property and reduce stack overflows.
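
As a sketch, the two versions being contrasted look roughly like this in Java syntax (keeping in mind that the JVM itself does not perform this optimization, so this only illustrates the shape of the code):

```java
// Not tail recursive: after the recursive call returns, we still have to
// multiply by n, so the current stack frame has to stick around.
public static long factorial(long n) {
    if (n <= 1) {
        return 1;
    }
    return n * factorial(n - 1);
}

// Tail recursive: the recursive call is the entire return value, so a compiler
// that supports the optimization can reuse the current stack frame.
// The extra accumulator parameter carries the partial product along.
public static long factorialTail(long n, long accumulator) {
    if (n <= 1) {
        return accumulator;
    }
    return factorialTail(n - 1, n * accumulator);
}

// Usage: factorialTail(5, 1) == factorial(5) == 120.
```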

Now, I just want to make a couple notes on this.

The rule of thumb is to always make your recursive call be the last instruction. Nothing gets added to it, just the recursive call.

And the other thing to consider is that this is supported mostly by functional languages.

But it's not inherent in languages like Python and Java and even JavaScript.

There are certain browsers, for ES2015 and newer, that do support tail call optimization for things like JavaScript, but it's not supported across the board.

And so as you're considering this optimization technique, think about what language you're using.

And think about, you know, the compiler that you're using and where your code is being executed to see if this sort of optimization is supported or not.

So this brings us to our final slide.

What's next? And when we ask this question, it's really to consider like how far do you want to go in your journey of recursion.

And that brings us to another video that we'll have coming out, known as algorithmic mental models for backtracking.

And this is the entire notion that we basically solve even more complex problems by looking at a decision space and recursively going through and seeing if our decision was good or bad, and backtracking to see if we can solve the problem in a different way.

And that's the next phase that I think is important for understanding recursion.

So thank you guys so much for watching.

Be sure to check out my channel, youtube.com slash the simple engineer, and connect with me on YouTube or Twitter or LinkedIn.

And I'd be happy to talk with you and answer any questions you may have.

So thanks, guys for watching and have a good day.