Instead of setting up your cloud infrastructure manually, it can be easier and safer to use code. The AWS Cloud Development Kit (CDK) is an open-source software development framework to define your cloud application resources using familiar programming languages.

We just published a crash course on the freeCodeCamp.org YouTube channel that will teach you all about the AWS CDK and how to use it.

Matt Martz created this course. He is an engineer, AWS Community Builder, and self-described CDK fanboi.

After learning the basics of the AWS CDK, Matt will take you on a speedrun of the official CDK workshop. Then you will learn about advanced topics such as testing and best practices.

Here are the sections in this course:

  • CDK Crash Course Intro
  • What we'll cover
  • Resources
  • CDK Basics
  • What are CDK Constructs?
  • Level 3 Construct Examples
  • Synthesis, Assets, Bootstrapping and Deploy
  • CDK Workshop Speedrun - Cloud9 Prep
  • CDK Workshop Speedrun - New Project
  • CDK Workshop Speedrun - Hello, CDK
  • CDK Workshop Speedrun - Writing Constructs
  • CDK Workshop Speedrun - Using Construct Libraries
  • CDK Workshop Speedrun - Testing Constructs
  • Advanced CDK
  • More Resources and Thanks!

Watch the full course below or on the freeCodeCamp.org YouTube channel (1-hour watch).

Transcript

(autogenerated)

The AWS Cloud Development Kit is an open source tool that allows developers to use their favorite programming languages to write infrastructure as code for AWS. Matt Martz is an AWS Community Builder, and he will teach you how to use the AWS Cloud Development Kit.

Hey, I'm Matt Martz.

I'm an AWS community builder.

And I'm here to give you a CDK Crash Course for freeCodeCamp.

You might be here because you're tired of manually provisioning your resources via the console.

Or maybe, like me, you hate writing YAML.

But CDK has also been in the news a lot lately.

In December, version two became generally available.

Version two was a massive improvement over version one, since there is only one stable NPM library to install, instead of one for each module.

Among other improvements, CDK was also featured fairly prominently in Dr. Vogels' re:Invent keynote.

And the CDK book was released.

The CDK Book was written by a bunch of the AWS Heroes, and it goes into greater depth than what will be covered in this crash course. There will be overlap, but I still strongly recommend the book.

So what will we cover in this course? I'm going to start by going over the basics of CDK: definitions of the app, stacks, and constructs, and how it all relates to CloudFormation.

Then I'm going to do a 30-minute speedrun of the CDK workshop, which is a workshop provided by AWS, available at cdkworkshop.com.

From there, we'll go into some advanced topics like testing and best practices.

This course is split up into chapters on YouTube.

So if there are particular areas of interest, feel free to jump around.

I'll also provide timestamp links in the description with supplemental information.

This will include useful documentation, blogs, links, or anything else.

I'm going to go through the CDK workshop very quickly, but you should do it yourself.

The best way to learn about CDK is by using it.

And since this is a YouTube video, feel free to speed me up, slow me down, or rewatch sections if you need to.

If you think I've missed something in this crash course, or you didn't understand something, let me know in the comments or follow me on Twitter and reach out there.

I'll be happy to help.

Here are a few other resources I'd like to specifically call out.

The CDK community on Slack is very active.

It's full of friendly, knowledgeable people that love to help out.

The authors of the CDK book are also very active there.

Speaking of which, did you hear there's a CDK book out now?

Aside from that, there's also CDK Day, which is an annual conference related to CDK.

Last year, I gave a 15-minute lightning talk on creating verifiable JWTs with CDK.

And I'll add a link to that below.

And with that out of the way, I hope you're in the right place.

Let's move on to the basics.

CDK stands for cloud development kit.

It's open source software provided by AWS that enables you to write imperative code to generate declarative CloudFormation templates.

It enables you to define the how so that you can get the what, which means you can focus more on the architecture of your app instead of the nitty-gritty details of IAM roles and policy statements.

CDK is available in JavaScript, TypeScript, Python, Java, and C#, and it's in developer preview for Go. CDK itself is Node.js based.

Even if you use one of the other languages, the CDK code itself will be executed via Node.js.

So Node.js is a requirement.

Okay, so CDK generates CloudFormation.

But what's that? CloudFormation is an AWS service that provisions AWS resources in stacks.

AWS resources are things like Lambda functions, S3 buckets, and DynamoDB tables: pretty much anything that AWS offers.

A CloudFormation stack is just a collection of AWS resources.

CloudFormation uses JSON or YAML files as a template that describes the intended state of all the resources you need to deploy your application.

The stack implements and manages the group of resources outlined in your template and it allows the state and dependencies of those resources to be managed together.

When you update a CloudFormation template, it creates a change set.

A change set is a preview of the changes that will be executed by stack operations to create, update, or remove resources so that the stack comes in sync with the template.

So CDK generates CloudFormation, and CloudFormation provisions AWS resources.

What does the CDK app look like and how does it relate? The root of CDK is the CDK app.

It's a construct that coordinates the lifecycle of the stacks within it.

There is no CloudFormation equivalent for an app.

Within the app, you can have multiple CDK stacks, and CDK stacks can even have nested stacks.

CDK stacks are one-to-one equivalent with CloudFormation stacks, and nested stacks are also one-to-one equivalent with CloudFormation stacks.

When you run cdk synth or cdk deploy, the output is the CloudFormation template for this app structure.

The CDK app is a special root construct that orchestrates the lifecycle of the stacks and the resources within it.

The app lifecycle constructs the tree of the app in a top down approach, and then it executes the methods for the constructs within it.

You typically don't need to directly interface with any of the prepare, validate or synthesize methods of the constructs, but they can be overridden.

The final stage of the app's lifecycle is the deploy phase, where the CloudFormation artifact is uploaded to CloudFormation.

The unit of deployment in CDK is called a stack.

All resources defined within the scope of a stack, either directly or indirectly, are provisioned as a single unit. CDK stacks have the same limitations as CloudFormation stacks.

Stacks are more powerful in CDK than in raw CloudFormation.

In CloudFormation, in order to deploy a template to multiple environments, you need to use CloudFormation parameters.

But these are only resolved during deployment.

In CloudFormation, if you want to conditionally include a resource based on a parameter, you have to use a CloudFormation Condition.

But in CDK, you don't have to use either: you can simply use an if statement to decide whether the resource should be defined or not. It ends up being much simpler.

You can use CloudFormation parameters and conditions in CDK.

But they're discouraged since they only resolve during deployment, which is well after CDK synthesis.
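
For instance, here's a minimal sketch of that pattern; the enableAlarmTopic flag is a hypothetical prop, not something from the workshop:

```ts
import * as cdk from 'aws-cdk-lib';
import * as sns from 'aws-cdk-lib/aws-sns';
import { Construct } from 'constructs';

interface MyStackProps extends cdk.StackProps {
  enableAlarmTopic: boolean; // hypothetical flag, known at synth time
}

class MyStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props: MyStackProps) {
    super(scope, id, props);

    // A plain if statement replaces a CloudFormation Condition: the
    // resource is simply present or absent in the synthesized template.
    if (props.enableAlarmTopic) {
      new sns.Topic(this, 'AlarmTopic');
    }
  }
}
```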

We've heard a lot about constructs.

But what are they? There are four levels of constructs.

Level zero constructs are just basic resources.

All of the higher levels inherit from level zero; there's no specific type tied to it.

Level one constructs are one to one representations of CloudFormation resources.

They are all prefixed in the CDK API with the letters Cfn, which is a short form of CloudFormation.

The way I remember this is: level one is one-to-one with CloudFormation.

Level two constructs are improved or extended level one constructs.

They're provided by the CDK team and offer helper methods and sensible defaults.

Level three constructs are combinations of constructs, which could be an intermingling of level one, two and three constructs together.

More often than not, you'll be interacting with level two or three constructs.

Some modules in CDK don't have level two constructs yet.

But pretty much all of the level one constructs exist for the corresponding CloudFormation API.

The CDK team is very fast at implementing changes to level one constructs and more frequently used level two constructs.

For a level one construct, let's look at the Access Analyzer module.

In this case, the module doesn't have any level two constructs available, so there's only the level one CfnAnalyzer construct.

As you can see, this is a one-to-one representation of the CloudFormation API.

The CfnAnalyzer construct has the same properties defined as the CloudFormation API for the same resource.

There are also helper utility methods on the CfnAnalyzer construct.
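
As a rough sketch, using the L1 construct mirrors the raw CloudFormation properties of AWS::AccessAnalyzer::Analyzer:

```ts
import * as accessanalyzer from 'aws-cdk-lib/aws-accessanalyzer';
import { Construct } from 'constructs';

declare const scope: Construct; // e.g. a Stack

// L1: the props map one-to-one onto the CloudFormation resource
new accessanalyzer.CfnAnalyzer(scope, 'Analyzer', {
  type: 'ACCOUNT',
});
```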

For a level two construct, let's look at the DynamoDB Table construct.

It doesn't have a Cfn prefix, and it includes many sensible defaults.

For example, the billing mode defaults to provisioned, with the read and write capacities defaulting to five, unless you specify replication regions; if you specify replication regions, the billing mode defaults to pay-per-request instead.

The default removal policy of the table is to retain it.

And the table has helper methods to create global or local secondary indexes, and to grant varying levels of access to the table and its streams.
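
A small sketch of those defaults and helpers in action:

```ts
import * as dynamodb from 'aws-cdk-lib/aws-dynamodb';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import { Construct } from 'constructs';

declare const scope: Construct;
declare const fn: lambda.Function;

// L2: everything not specified here falls back to a sensible default
const table = new dynamodb.Table(scope, 'Hits', {
  partitionKey: { name: 'path', type: dynamodb.AttributeType.STRING },
});

// helper methods instead of hand-written CloudFormation and IAM
table.addGlobalSecondaryIndex({
  indexName: 'by-hits',
  partitionKey: { name: 'hits', type: dynamodb.AttributeType.NUMBER },
});
table.grantReadData(fn); // generates the IAM policy statements for you
```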

For level three constructs, the CDK doesn't offer anything specific out of the box.

These tend to be created at the individual organization or community level, and provided as libraries.

An example level three construct would be a NotifyingBucket.

This construct creates an S3 bucket along with an SNS topic.

It then adds an object-created notification to the S3 bucket, targeting the SNS topic.

All you have to do to use this is add a new NotifyingBucket to your app.

And it will provision all of these resources automatically.
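
A sketch of what such a NotifyingBucket could look like (the class itself is illustrative, not something shipped with CDK):

```ts
import * as s3 from 'aws-cdk-lib/aws-s3';
import * as s3n from 'aws-cdk-lib/aws-s3-notifications';
import * as sns from 'aws-cdk-lib/aws-sns';
import { Construct } from 'constructs';

// L3: a composition of L2 constructs behind one reusable unit
export class NotifyingBucket extends Construct {
  public readonly bucket: s3.Bucket;
  public readonly topic: sns.Topic;

  constructor(scope: Construct, id: string) {
    super(scope, id);
    this.bucket = new s3.Bucket(this, 'Bucket');
    this.topic = new sns.Topic(this, 'Topic');
    // notify the topic whenever an object lands in the bucket
    this.bucket.addObjectCreatedNotification(new s3n.SnsDestination(this.topic));
  }
}

// usage: one line in your stack provisions all of the above
// new NotifyingBucket(this, 'Uploads');
```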

There are several great resources in the community that I'd also like to call out.

cdkpatterns.com is a resource that lets you search community-provided examples, cross-referenced by the different components used within them.

AWS also offers another open source extension of the CDK with their AWS Solutions Constructs.

These provide multi-service, well-architected patterns for quickly defining solutions in code for frequently used patterns.

This is available as an NPM library that you can install and use right out of the box.

There's also the construct hub.

The Construct Hub is a central destination for discovering and sharing cloud application design patterns and reference architectures built for the CDK, CDK for Kubernetes, CDK for Terraform, and other construct-based tools.

Construct Hub pulls from the npm registry all CDK constructs that support multiple languages and are correctly annotated.

Great.

So I think we have a handle on constructs.

Now, let's go a little deeper.

How do we generate the CloudFormation template?

In order to do that, the app needs to be synthesized.

To do this, we can run cdk synth.

This traverses the app tree and invokes synthesize on all of the constructs, which ends up generating unique IDs for the CloudFormation resources and generates the respective YAML, along with any assets that are needed.

Okay, so what are assets? Assets are the files bundled into CDK apps.

They include things like Lambda handler code, Docker images, layer versions, files going into an S3 bucket, etc.

They can represent any artifact that the app needs.

When you run cdk synth or deploy, these get output to a cdk.out folder on your local machine.

Okay, so what does CDK do with these assets? How do they get put into CloudFormation? Well, that's where bootstrapping comes into play.

Bootstrapping deploys a CDKToolkit CloudFormation stack to your account.

This stack includes an S3 bucket and various permissions that are needed for CDK to do the deployments and upload the assets.

Bootstrapping is required when your app has assets or your CloudFormation templates become larger than 50 kilobytes.

So it's pretty much always required; you're almost always going to have some sort of asset in your stack.

For CDK version 2, administrator access is needed to create the roles that the CDKToolkit stack needs in order to do deployments.

You won't need administrator access after CDK is bootstrapped.

With the bootstrapping done, we can move on to deploying.

When you run CDK deploy, the app is initialized or constructed into an app tree.

Once the construction is done, the app goes through the prepare, validate, and synthesize steps.

So each construct calls its prepare method, then each construct calls its validate method.

And then each construct calls its synthesize method.

Up to this point, this is what cdk synth does.

From there, the app uploads the template and any other artifacts to CloudFormation, and the CloudFormation deployment begins.

Once this handoff is done, it's all in the hands of CloudFormation.

Now that we've defined some of the basic pieces of CDK, let's move on to the workshop.

The workshop is at cdkworkshop.com.

You should do this first and then come back.

Perfect.

Once you come back, we'll do the 30-minute speedrun through the workshop.

We're going to get started with this workshop by creating a Cloud9 environment.

To do that, I'm going to steal some instructions from the CDK Pipelines workshop, which set up Cloud9 along with an IAM role for Cloud9 that has full access.

This is going to be used in bootstrapping the account for CDK version 2.

Links for the CDK Pipelines workshop will be in the description below.

It's a great workshop and you should work through it if you have the time.

First, we're going to go to the cloud nine console and create an environment.

I'm just going to name it CDK workshop, and we don't need a description.

Then we select an instance type; I did this workshop with a t3.small instance.

But the m5.large would probably be better.

All the other options can use the defaults.

While Cloud9 spins up, I want to note that both the t3.small and m5.large instances are not free tier eligible.

So it might cost you some money.

I was getting some memory errors towards the end with the t3.small instance.

But it still worked.

To create the IAM role, the CDK Pipelines workshop has a deep link that will take you to the role creation page.

From there, we confirm that the trusted entity is an AWS service and that it's for EC2, and then go to permissions.

Make sure that administrator access is selected.

We don't need new tags, so we can skip that.

And then we're going to give it a name: we're going to name this cdk-admin and create the role.

With the role created, we can go back to the CDK Pipelines workshop and see that we need to actually go in and attach the role to the EC2 instance.

There is a deep link.

But since we named the EC2 instance something else, we actually need to go to the EC2 console and go to instances.

And then select the running instance that backs our Cloud9 environment.

Go to Actions, then Security, and Modify IAM role.

From here we're going to select the IAM role that we just created, cdk-admin, and save it.

Now if we go back to the CDK workshop Cloud9 environment and refresh, it will automatically have the role that we need.

Next, we need to do some Cloud9 housekeeping and setup: we're going to remove the AWS managed temporary credentials.

These prevent us from using the role that we attached to the Cloud9 EC2 instance. In the upper right corner of Cloud9, go to settings, scroll down to AWS Settings, and turn off AWS managed temporary credentials.

To make doubly sure that the credentials are gone, we're going to remove the credentials folder.

Next, we're going to do some environment config: we need to install jq using yum.

And then we're going to export some environment variables with AWS account information.

To check that the AWS region was set correctly, we're going to run this test command.

And finally, we're going to export the account ID and region and make sure AWS is configured correctly.

Now we need to make sure that Cloud9's EC2 instance is using the right IAM role.

To do that, we're going to use this sts get-caller-identity command.

It's going to fail here because we actually named the role cdk-admin.

So if we update the command to check for cdk-admin, we'll see that the IAM role is valid, like this.

At this point, we can switch back to the actual CDK intro workshop.

We're going to go to the prerequisites tab and start checking things to make sure that everything is as it should be.

The AWS CLI is already installed and we've already set up the AWS account and user.

Next we want to make sure that we have a good Node version installed in Cloud9; Cloud9 comes with Node installed.

So let's just check the version.

And here version 16 is fine.

We're also gonna want to check CDK.

Now, this is a CDK v2 workshop.

So we want to make sure that we actually bumped up the version of CDK.

So if we do npm i (for install) aws-cdk globally, we're going to get this error, because the Cloud9 EC2 instance is a bit weird, so you actually have to force it.

In this case, the CDK Pipelines tutorial also goes through this as well.

Now that we've force-installed it, we can see that version 2.3 is installed.

We're already using AWS Cloud9, and we just checked the CDK version.

And that concludes the prerequisites.

Next, we can begin the actual workshop.

So let's get started: we'll go on to the next step in the workshop.

The first thing we need to do is create a directory to work in.

So we're going to make the cdk-workshop directory and change into it.

Now we need to actually init the project.

So we're going to run cdk init sample-app --language typescript.

This creates a Git repo and npm installs all the dependencies for a basic CDK v2 project. CDK v2 is a massive improvement over CDK version 1 because it bakes all of the stable modules into one npm dependency; CDK version 1 had individual npm libraries for each module, like DynamoDB, Lambda, SNS, etc.

In version two, you just have one, and that avoids a lot of the dependency management hell.

Now that the project is initialized, the CDK workshop is going to want us to run the watch command in the background to compile the TypeScript into JavaScript.

So we're going to create a new terminal, change directories into the project folder, and then run the npm run watch command in the background.

With that done, we can move on to the next step and actually check out the project structure of what's been initialized so far.

So let's go back to the Cloud9 environment, open up the project folder, and start looking at some files.

Let's start where CDK starts: with the bin/cdk-workshop.ts file.

This file instantiates the CDK app by creating stacks.

That is to say, when you run cdk synth or cdk deploy, it starts here to generate the CloudFormation templates.
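
The generated entry point looks roughly like this:

```ts
#!/usr/bin/env node
import * as cdk from 'aws-cdk-lib';
import { CdkWorkshopStack } from '../lib/cdk-workshop-stack';

// the app is the root construct; each stack instantiated against it
// becomes a CloudFormation stack at deploy time
const app = new cdk.App();
new CdkWorkshopStack(app, 'CdkWorkshopStack');
```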

CDK stacks are one to one equivalent with CloudFormation stacks.

An app can have multiple stacks, and each stack would end up creating a new stack in cloud formation.

In the case of our sample app, there's only one stack: the CdkWorkshopStack.

So let's look at that next.

The sample stack is pretty simple.

It uses two level two constructs.

One creates an SQS queue and one creates an SNS topic; then it subscribes the queue to the topic.

That's it.
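
The generated sample stack looks roughly like this:

```ts
import { Duration, Stack, StackProps } from 'aws-cdk-lib';
import * as sns from 'aws-cdk-lib/aws-sns';
import * as subs from 'aws-cdk-lib/aws-sns-subscriptions';
import * as sqs from 'aws-cdk-lib/aws-sqs';
import { Construct } from 'constructs';

export class CdkWorkshopStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // two L2 constructs...
    const queue = new sqs.Queue(this, 'CdkWorkshopQueue', {
      visibilityTimeout: Duration.seconds(300),
    });
    const topic = new sns.Topic(this, 'CdkWorkshopTopic');

    // ...and one helper method wiring them together
    topic.addSubscription(new subs.SqsSubscription(queue));
  }
}
```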

The workshop goes into similar detail with these.

But let's move on to our first cdk synth.

As described before, when CDK apps are executed, they produce an AWS CloudFormation template for each stack defined in the app.

Switching back to Cloud9, let's run cdk synth; I'm going to output to a template.yaml file to make it a little easier to read.

Now, there are two important concepts I want to call out here.

CDK is imperative: it describes how the app will be built.

But it's also idempotent: given the same inputs, it will produce the same outputs.

See here: we used the same code between the CDK workshop and my Cloud9 instance, and the CloudFormation that was generated is the same.

For example, the resource hashes match between the two.

This is because we used all the same inputs that the CDK workshop did.

If I went and changed the topic or queue ID, the hash would change and we'd have something else.

This is going to be important in unit testing later.

Now that we've synthesized, let's deploy.

In order to do that, we need to bootstrap the account. Let's open up a new console window and go to CloudFormation.

In this account, the only stack running right now should be the one that's used for Cloud9.

Right here.

So let's go back to the Cloud9 terminal and run the bootstrap.

This is why we needed to create an IAM role with administrator access; it's actually the only time you need admin access, when you're bootstrapping the account.

In order for CDK to deploy apps, it needs to store the assets somewhere.

The bootstrap creates its own CloudFormation template that we'll see when we go back to CloudFormation.

And that includes a number of roles and policies that enable it to store assets for deployment.

As you can see here, the CDKToolkit stack was just created from the bootstrap process.

Now, if we switch back to Cloud9, we can actually run the deploy command.

The deploy command re-synthesizes the app.

And if there are any IAM policy changes in the synthesis, it will display them and ask to make sure that you want to continue.

Here, since we're subscribing the queue to the topic, an IAM policy needs to be in place in order to make that happen.

So yeah, we do want that to happen.

So let's deploy it.

While CloudFormation is spinning up, if we refresh CloudFormation, we'll see the CdkWorkshopStack create is in progress.

And we can actually view the resources as they're created.

So here we can see that there's a metadata entry along with the queue and topic, but there's no actual subscription yet or policies related to it.

Now that the create is complete, we can go back to the resources and see that yes, there is a queue policy and a subscription.

Well, that was fun.

But now let's get our hands dirty.

The CDK workshop is going to take you through a couple steps at this point.

We're going to clean up the stack by removing the SNS topic and the SQS queue.

And then we're going to create a simple API Gateway backed by a Lambda.

So let's get started.

We're going to delete the sample code from our stack.

And then we're going to do a diff to see what it shows.

Go back to the stack file and delete the queue, the topic, and the topic subscription, along with the imports.

Then run the cdk diff command.

What cdk diff does is re-synthesize the CloudFormation app and compare the YAML from one to the other.

And we can see here that it's destroying the queue, the queue policy, the subscription, and the topic.

Next we can cdk deploy; as it's deploying, the only thing that the CloudFormation stack should be left with is the CDK metadata entry.

If we go back to CloudFormation, we can check the progress of this by refreshing the resources as it progresses.

Once it's done, let's go back to Cloud9 and create the Lambda.

We'll create a folder called lambda, and put a hello.js file in it.

Inside of that, we'll put some basic lambda code.
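
The workshop's handler is plain Node.js and looks roughly like this:

```js
// lambda/hello.js
exports.handler = async function (event) {
  console.log('request:', JSON.stringify(event, undefined, 2));
  return {
    statusCode: 200,
    headers: { 'Content-Type': 'text/plain' },
    body: `Hello, CDK! You've hit ${event.path}\n`,
  };
};
```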

Next, we need to create the Lambda in the stack: we'll add the lambda import from CDK, then create the actual Lambda function.

One of the biggest benefits of creating AWS resources using CDK is the IntelliSense.

Having imported aws-lambda, I can create a new function using new lambda.Function, pass in the stack and the ID of the lambda, and give it some properties.

In this case, we need to define the runtime, using lambda.Runtime.NODEJS_14_X, and where the code lives, for which we use lambda.Code.fromAsset('lambda'), which uploads the lambda folder as an asset to S3.

And then there's the handler: in this case, our handler lives in the hello file and it's named handler, so it's hello.handler.
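
Put together, the addition to the stack looks roughly like this:

```ts
import * as lambda from 'aws-cdk-lib/aws-lambda';

// inside the CdkWorkshopStack constructor
const hello = new lambda.Function(this, 'HelloHandler', {
  runtime: lambda.Runtime.NODEJS_14_X,    // execution environment
  code: lambda.Code.fromAsset('lambda'),  // uploads ./lambda as an S3 asset
  handler: 'hello.handler',               // file "hello", export "handler"
});
```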

Now we can run the diff.

And this will show us that it's going to create an IAM role for the lambda and the lambda function itself.

This looks good to me.

So let's just deploy it.

I'm going to speed this part up a little bit.

And we can see the function and the role in CloudFormation.

So let's test out the lambda itself.

In the code source area of the lambda console, we can create a test event by clicking the test button.

We'll select API Gateway AWS Proxy, because ultimately we'll be connecting this to an API Gateway.

Don't forget to give it a name.

And it doesn't like spaces.

And hit create.

When we hit Test again, it'll invoke the lambda.

And now we can see the status of 200.

And that the body has the correct response.

So let's add an API Gateway.

First, we'll import the API Gateway module from CDK.

And then we'll create a Lambda REST API.

Lambda REST APIs are just REST APIs with a greedy proxy that sends everything to the defined lambda.

We'll pass in the stack construct and give the REST API an ID.

And then we'll tell it to use our hello function as the API handler.
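
In code, that's roughly:

```ts
import * as apigw from 'aws-cdk-lib/aws-apigateway';

// greedy proxy: ANY method on any path gets routed to the hello function
new apigw.LambdaRestApi(this, 'Endpoint', {
  handler: hello,
});
```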

Now we can run a diff.

And I'll also show that there's actually no API Gateway already defined, by going to the API Gateway console.

And here, there are no APIs.

So let's deploy it.

I'm going to speed this up again.

But what it's doing is it's creating a bunch of resources automatically.

There's the API Gateway itself, permissions for the gateway and for it to invoke the lambda, etc., etc.

Now that it's done, we can see we went from three resources to 15 in CloudFormation.

And this was all handled with a couple lines of code.

And if I go and refresh the Lambda page, we can see that the API Gateway has been added to the lambda console as a trigger.

Similarly, we can see the API is defined on the API Gateway console.

So let's test it.

We'll grab the API URL that was automatically output as part of the deploy, and send it a curl request.

Nice.

And let's try another one.

Very nice.

All we've done so far is use a few level two constructs.

Let's make our own level three construct.

We'll make an interceptor lambda that writes to the DynamoDB table, and then invokes any targeted downstream lambda.

Let's get started.

We'll create a new hitcounter TypeScript file in the lib folder and put some boilerplate construct code in here.

All this does is extend Construct and define the props.

Next, let's create our interceptor lambda: in the lambda folder, we'll create hitcounter.js.

And we'll copy the code over from the workshop.

What this code does is use two modules from the AWS SDK to interact with both DynamoDB and Lambda. We'll use UpdateItem from DynamoDB to increment a hit counter for a path.

And we'll use the Lambda client to invoke the downstream lambda and then return the response from that lambda. The table and lambda names will be passed into this interceptor lambda via environment variables.
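
The workshop's handler looks roughly like this (it uses the v2 AWS SDK that is bundled into the Node.js Lambda runtime):

```js
// lambda/hitcounter.js
const { DynamoDB, Lambda } = require('aws-sdk');

exports.handler = async function (event) {
  const dynamo = new DynamoDB();
  const lambda = new Lambda();

  // increment the hit counter for this path
  await dynamo.updateItem({
    TableName: process.env.HITS_TABLE_NAME,
    Key: { path: { S: event.path } },
    UpdateExpression: 'ADD hits :incr',
    ExpressionAttributeValues: { ':incr': { N: '1' } },
  }).promise();

  // invoke the downstream function and return its response
  const resp = await lambda.invoke({
    FunctionName: process.env.DOWNSTREAM_FUNCTION_NAME,
    Payload: JSON.stringify(event),
  }).promise();

  return JSON.parse(resp.Payload);
};
```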

Now that that's done, we need to create both the table and the lambda: we'll add the DynamoDB module from CDK.

And then create the table using new dynamodb.Table; we'll pass in the stack construct and give the table an ID.

A DynamoDB table also needs to define a partition key.

So we'll go ahead and do that as well.

Next, we'll create the lambda. This follows the same pattern as before: new lambda.Function, pass in this, give it an ID, and set the runtime, which is Node.js.

Then the handler, which in this case is hitcounter.handler, and then the code, coming from lambda.Code.fromAsset('lambda'). We also need to pass in the environment variables.

And I'll make a tweak to the lambda ID just to be consistent with the workshop.

Now we're going to want to expose the lambda as a read-only property of the construct.

So add this public readonly line to the class.

And we'll assign the lambda to the property.
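
Assembled, the construct looks roughly like this (the permission grants come a bit later in the workshop, so they're left out here):

```ts
// lib/hitcounter.ts
import * as dynamodb from 'aws-cdk-lib/aws-dynamodb';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import { Construct } from 'constructs';

export interface HitCounterProps {
  /** the function for which we want to count hits */
  downstream: lambda.IFunction;
}

export class HitCounter extends Construct {
  /** exposed so the stack can point the API at this lambda */
  public readonly handler: lambda.Function;

  constructor(scope: Construct, id: string, props: HitCounterProps) {
    super(scope, id);

    const table = new dynamodb.Table(this, 'Hits', {
      partitionKey: { name: 'path', type: dynamodb.AttributeType.STRING },
    });

    this.handler = new lambda.Function(this, 'HitCounterHandler', {
      runtime: lambda.Runtime.NODEJS_14_X,
      handler: 'hitcounter.handler',
      code: lambda.Code.fromAsset('lambda'),
      environment: {
        DOWNSTREAM_FUNCTION_NAME: props.downstream.functionName,
        HITS_TABLE_NAME: table.tableName,
      },
    });
  }
}
```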

Now this construct is great as is, but it's not being used anywhere.

So let's fix that.

We'll go back to the main workshop stack.

And use the construct.

It's the same pattern as a level two construct.

So we can do a new HitCounter, pass in the stack, and give it an ID.

And in our case, the property here is the downstream lambda that we want to invoke.

So we'll set downstream to our hello function.

And I'll do a quick import of the HitCounter from the local file.

But now we need to make the API use this lambda instead.

So we'll move the API below it and change the handler of the API from hello to helloWithCounter.handler. This is why we exposed the read-only handler property: it's so that the API actually has access to the lambda within the construct.

So let's deploy it.

With some editing magic, we'll skip ahead again.

Now that we're deployed, we can test it.

The CDK workshop builds in some snags here to expose some nice things about L2 constructs.

As we can see, the request here failed.

In order to see why, we'll need to look at the logs.

So let's switch to the Lambda console.

Click the Monitor tab, and click View logs in CloudWatch.

If we go to the latest log stream, we can see that there's an invoke error, and it's because of an AccessDeniedException.

And if you read a little further, it says, basically, that the lambda is not authorized to perform dynamodb:UpdateItem on the resource.

We never gave it permission.

So let's go back to the hitcounter file.

We're going to use the table's L2 construct helper method grantReadWriteData to apply the correct policy statements to the lambda.

The workshop also forgot to add the permission for the lambda to be able to invoke the downstream lambda.

So we're just going to go ahead and add that now.
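
Both grants are one-liners at the end of the HitCounter constructor, roughly:

```ts
// allow the hit counter lambda to read/write the table (dynamodb:UpdateItem etc.)
table.grantReadWriteData(this.handler);

// allow the hit counter lambda to invoke the downstream function
props.downstream.grantInvoke(this.handler);
```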

Now if we deploy again and skip forward, and we test the function, we'll get a successful response, because everything has the correct permissions.

From here, if we go to DynamoDB and refresh the items in the table, we can see that the right endpoints are being tracked.

From the CDK workshop, we're going to npm install the cdk-dynamo-table-viewer library into our project.

With it installed, we can switch to our stack and import the TableViewer from the library.

Then we're going to go and create a new instance of it.

This TableViewer library is an L3 construct because it groups multiple L2 constructs together, just like we did with the HitCounter L3 construct that we made in the previous section.

The TableViewer construct expects us to pass in the DynamoDB table from our other construct, but we didn't expose that.

So let's go back to our construct, add the table property, and assign the table to that property.

With that, we can pass the helloWithCounter.table property into our third-party construct.
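
The usage in the stack ends up looking roughly like this (the title string is just an example):

```ts
import { TableViewer } from 'cdk-dynamo-table-viewer';

// third-party L3 construct: an API Gateway plus a lambda that renders the table
new TableViewer(this, 'ViewHitCounter', {
  title: 'Hello Hits',            // example page title
  table: helloWithCounter.table,  // the table we just exposed on HitCounter
});
```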

Now if we deploy and skip ahead, we'll see that the TableViewer construct created a separate API Gateway, with its own endpoint defined in the outputs.

If we go there, it will end up displaying what's in the DynamoDB table.

Testing constructs is one of the most powerful things about CDK.

The CDK workshop walks you through two types of tests, assertion tests and validation tests.

For the first assertion test, we're going to create a stack, use our HitCounter construct, and then verify that one DynamoDB table was created.

So let's go back to the code.

Create the test and remove the old ones. Now we can run it.

And one DynamoDB table was created.

Next we'll move on to checking the lambda.

The CDK assertions library has some useful features like capture, which can intercept different properties as part of the template synthesis.

In this case, we use Capture to verify that the correct downstream function and table names are being passed into the hit counter lambda.

When we run this test, we expect it to fail because I didn't copy the references from the synthesized output.

But making it fail actually makes it easier to find them.

So we'll grab the correct names and update the test with the right ones.

Now when we run it, it'll pass.
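
Put together, the assertion test looks roughly like this; the hashed logical IDs are whatever your own synthesized output contains:

```ts
// test/hitcounter.test.ts
import { Capture, Template } from 'aws-cdk-lib/assertions';
import * as cdk from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import { HitCounter } from '../lib/hitcounter';

test('DynamoDB table created with correct env vars', () => {
  const stack = new cdk.Stack();
  new HitCounter(stack, 'MyTestConstruct', {
    downstream: new lambda.Function(stack, 'TestFunction', {
      runtime: lambda.Runtime.NODEJS_14_X,
      handler: 'hello.handler',
      code: lambda.Code.fromAsset('lambda'),
    }),
  });
  const template = Template.fromStack(stack);

  // exactly one table should have been created
  template.resourceCountIs('AWS::DynamoDB::Table', 1);

  // capture the environment actually synthesized for the hit counter lambda
  const envCapture = new Capture();
  template.hasResourceProperties('AWS::Lambda::Function', {
    Environment: envCapture,
  });
  expect(envCapture.asObject()).toEqual({
    Variables: {
      DOWNSTREAM_FUNCTION_NAME: { Ref: 'TestFunction22AD90FC' },
      HITS_TABLE_NAME: { Ref: 'MyTestConstructHits24A357F0' },
    },
  });
});
```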

Next, let's switch to a test driven development mindset.

Say we want to make sure our table is using server-side encryption: we can assert that the DynamoDB table resource has server-side encryption enabled.

So when we run the test, it'll fail.

From a development perspective, that means we need to go back to the construct and make sure that server side encryption is enabled.

Now if we run the test again, it'll pass.

Since we don't actually want that, I'm going to remove it from the construct and remove the test.

Next, let's talk about validation testing.

Let's say as part of your construct, you want to make sure a sane number of read capacity units are being used in the DynamoDB table.

We can add a read capacity property that gets passed into the construct.

Use it in the table.

And then in the constructor, we can validate that the value passed in is reasonable, say between five and 20.

If it's not, we'll throw an error.

Now let's add a test that passes in an out-of-range value of three and verify that the error is thrown.

When we run the test, it'll pass because the error is being thrown.
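
A sketch of both sides, close to the workshop's version (readCapacity is an optional number added to HitCounterProps):

```ts
// lib/hitcounter.ts: validate props in the constructor, at synth time
if (props.readCapacity !== undefined &&
    (props.readCapacity < 5 || props.readCapacity > 20)) {
  throw new Error('readCapacity must be greater than 5 and less than 20');
}

// test/hitcounter.test.ts: assert that an out-of-range value throws
test('read capacity can be configured', () => {
  const stack = new cdk.Stack();
  expect(() => {
    new HitCounter(stack, 'MyTestConstruct', {
      downstream: testFunction, // a lambda.Function created for the test
      readCapacity: 3,          // out of range, so the constructor throws
    });
  }).toThrowError(/readCapacity must be greater than 5 and less than 20/);
});
```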

This error would be thrown as part of stack synthesis.

So let's see that: we'll go back to our workshop stack.

Add an out-of-bounds read capacity value and run the stack synthesis.

As you can see, there it is; validation testing is super useful.

At my work, I use it to help enforce some naming conventions, and also to log or throw deprecation warnings when property inputs or defaults change.

Great job with the workshop.

Now that the workshop's done, we can move on to some advanced topics, like aspects and best practices.

In the workshop, we went over fine-grained assertions by checking for things like the number of tables created and ensuring encryption was turned on.

We also checked to make sure the lambda environment variables being passed in are the right ones.

But you could also use snapshot testing to check the entire template if you want, using Jest's snapshot-matching functionality.
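
A minimal snapshot test might look like this:

```ts
import { Template } from 'aws-cdk-lib/assertions';
import * as cdk from 'aws-cdk-lib';
import { CdkWorkshopStack } from '../lib/cdk-workshop-stack';

test('stack matches its stored snapshot', () => {
  const app = new cdk.App();
  const stack = new CdkWorkshopStack(app, 'SnapshotStack');
  // the first run records the template; later runs diff against it
  expect(Template.fromStack(stack).toJSON()).toMatchSnapshot();
});
```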

We also went over validation testing as part of the workshop.

We checked the properties being passed into the construct to make sure read capacity units were within a specific range.

In my organization, we also use this to help enforce some naming conventions as well as throw some deprecation warnings in the event we have to significantly change the construct.

In this case, the warnings won't block deployment, but they will show up in the console when doing a synth or deploy, where developers can pick them up and fix them.

You aren't limited to just logging errors to the console, though: when you synth, you're executing the actual code.

So you could also log to an external service, like a platform service that tracks deployments, or log to CloudWatch itself.

The last form of testing is integration testing.

With CDK, you can use AWS custom resources to test that the resources are working together correctly as part of the CloudFormation deployment.

If something breaks, it will automatically roll back the CloudFormation deployment.

CDK has a provider framework for interfacing with CloudFormation custom resources.

These custom resources enable you to write custom provisioning logic and templates that CloudFormation runs anytime you create, update or delete stacks.

We can make use of this to do integration testing across our stack.

Let's say you have an event bus, a lambda, and a DynamoDB table that you want to make sure are interacting with each other.

You could use the provider framework to spin up a lambda to emit a test event on an event bus, all as part of the CloudFormation deployment.

And then the custom resource can query DynamoDB waiting for the change.

If the change doesn't happen in a set amount of time, the build will fail and CloudFormation will automatically roll back the changes.

Matt Morgan has an excellent blog post about this called "Testing the Async Cloud with AWS CDK".

The CDK book also has a section on this.

Another advanced CDK topic is aspects. Aspects are a very powerful tool in CDK.

They are a way to apply an operation to all constructs in a given scope.

The aspect could modify the constructs, such as by adding tags, or it could verify something about the state of the constructs, such as ensuring that all the buckets are encrypted.

During the prepare phase, CDK calls the visit method of the aspect for the construct and each of its children in top-down order.

The visit method is free to change anything in the construct.

In this example, the BucketVersioningChecker class implements a visit method that checks whether S3 bucket versioning is enabled.

Here it throws an error, but you could just as easily modify the construct to enable versioning.

Everything that the aspect is applied to, that is, every construct within the stack, is evaluated by this visit method.

This is why we need to check the instance of the node being evaluated, to make sure it's a CfnBucket.

The nodes will all end up being the level one construct of the resource in question, even if you use a level two Bucket construct within your stack code.
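
A sketch of that aspect, close to the example in the CDK documentation:

```ts
import { Annotations, Aspects, IAspect, Stack } from 'aws-cdk-lib';
import * as s3 from 'aws-cdk-lib/aws-s3';
import { IConstruct } from 'constructs';

// visits every construct in scope; only CfnBucket nodes are checked
class BucketVersioningChecker implements IAspect {
  public visit(node: IConstruct): void {
    if (node instanceof s3.CfnBucket) {
      const config = node.versioningConfiguration as
        s3.CfnBucket.VersioningConfigurationProperty | undefined;
      if (!config || config.status !== 'Enabled') {
        Annotations.of(node).addError('Bucket versioning is not enabled');
      }
    }
  }
}

declare const stack: Stack;
Aspects.of(stack).add(new BucketVersioningChecker());
```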

For best practices, AWS breaks up their best practice recommendations into four different areas.

Organization, coding, construct, and application.

For organization best practices, AWS recommends having a team of CDK experts help set standards and train or mentor developers in the use of CDK.

This doesn't have to be a large team.

It could be as small as one or two people or it could be a center of excellence if you have a large organization.

AWS also recommends the practice of deploying to multiple AWS accounts.

For example, having separate production, QA and development accounts.

You should use continuous integration and continuous deployment tools like CDK pipelines for deploying beyond development accounts.

For coding best practices, AWS recommends only adding complexity where you actually need it.

Don't architect for all possible scenarios up front.

So the keep it simple, stupid principle.

It also recommends using the Well-Architected Framework: AWS CDK apps are a mechanism to codify and deliver well-architected best practices.

As you follow these principles, you can create and share components that implement them.

Earlier I stated it's possible to have multiple apps in a project, which is true, but it's not recommended.

Best practice is not to do that, and to only have a single app per repo.

In a CI/CD world, changes to one app could unnecessarily trigger the deployment of another app, even if it didn't change.

Finally, your CDK code and the code that implements your runtime logic should be in the same package.

They don't need to live in separate repositories or packages.

For construct best practices, you should be building constructs out of your logical units of code.

If you consistently build websites with S3 buckets, API gateways, and lambdas, make that a construct, and then use the construct in your stack.

It's a bad practice to use environment variable lookups within your constructs.

Environment variable lookups should happen at the top level of a CDK app, and then be passed to the stacks and constructs as properties.

By using environment variable lookups in constructs, you lose portability and make your constructs harder to reuse.
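
A minimal sketch of that pattern; STAGE and MyServiceStack are hypothetical names:

```ts
// bin/app.ts: the only place that reads the environment
const stage = process.env.STAGE ?? 'dev'; // hypothetical env var
const app = new cdk.App();
new MyServiceStack(app, `MyService-${stage}`, { stage });

// lib/my-service-stack.ts: receives a plain, typed prop instead
export interface MyServiceStackProps extends cdk.StackProps {
  stage: string;
}
```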

If you avoid network lookups during synthesis and model all your production stages in code, you can run a full suite of unit tests at build time consistently in all environments.

Be careful how you refactor your code in ways that may result in changes to the logical ID.

The logical ID is derived from the ID you specify when you instantiate a construct, and the constructs position in the app tree.

Changing the logical ID of a resource results in the resource being replaced with a new one at the next deployment, which you almost never want.

Constructs are great, but they're not built to enforce compliance.

Compliance is better suited to using service control policies and permission boundaries.

For application best practices, make your decisions at synthesis time.

Don't use CloudFormation conditions and parameters.

For example, a common CDK practice of iterating over a list and instantiating a construct for each item isn't possible using CloudFormation expressions.

Treat CloudFormation as an implementation detail and not a language target.

You're not writing CloudFormation templates, you're writing code that happens to generate them.

If you leave out resource names, CDK will generate them for you.

And it will do so in a way that won't cause problems when you end up refactoring your code later.

CDK does its best to keep you from losing data by defaulting to policies that retain everything you create.

But CloudWatch is expensive.

Go through and set your own retention policies.

CDK also has helper methods that will empty an S3 bucket using a custom resource prior to destroying a stack, which is very useful.
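
For example, inside a stack, both of those knobs look roughly like this:

```ts
import * as cdk from 'aws-cdk-lib';
import * as logs from 'aws-cdk-lib/aws-logs';
import * as s3 from 'aws-cdk-lib/aws-s3';

// keep CloudWatch logs for a week instead of forever
new logs.LogGroup(this, 'AppLogs', {
  retention: logs.RetentionDays.ONE_WEEK,
});

// a custom resource empties the bucket so the stack can actually delete it
new s3.Bucket(this, 'ScratchBucket', {
  removalPolicy: cdk.RemovalPolicy.DESTROY,
  autoDeleteObjects: true,
});
```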

Consider keeping stateful resources like databases in a separate stack from stateless resources.

You can turn on termination protection for stateful stacks, while keeping it off for stateless ones.

Stateful resources are more sensitive to construct renaming, which is another reason to keep them in a separate stack.

It reduces risk when reorganizing your app later.

Determinism is key to successful CDK deployments.

CDK context is a mechanism for recording a snapshot of the non-deterministic values used during synthesis.

This allows future synthesis operations to produce the exact same template.

Non-deterministic values result from using things like a construct's fromLookup methods, where you're looking up an external resource.

These fromLookup values are cached in cdk.context.json.

The grant convenience methods allow you to greatly simplify the IAM process.

Manually defining roles, or using predefined roles in your AWS account, causes a big loss in flexibility in how you design your applications.

Service control policies and permission boundaries are better alternatives to predefined roles.

Another CDK best practice is to create a stack for each environment.

When you synthesize your application, the cloud assembly created in the cdk.out folder contains a separate template for each environment.

Your entire build is deterministic.

Most of the level two constructs in CDK have convenience methods to help in the creation of metrics, which you can easily use to generate CloudWatch dashboards and alarms. Use them.

Finally, there are a few more resources that I'd like to call out.

freeCodeCamp has a great blog post about how to use CDK version 2 to create a three-tier serverless application.

I have several posts on my blog related to CDK but one in particular shows how much using a general purpose programming language enables you to extend the CDK even further.

In my "OpenAPI Specs from CDK Without Deploying First" article, I show how you can traverse an API Gateway's endpoint structure during synthesis to output an OpenAPI spec, without deploying first.

Lastly, the Serverless Stack framework is pretty neat.

It extends the CDK and enables local lambda development.

It has a pretty neat console for observability when doing development too.

It's definitely worth checking out. If you made it this far through the lesson, thanks for watching.

Thanks.

You can follow me on Twitter at @martzcodes.

And my other socials and my blog posts are located at martz.codes.

This was a freeCodeCamp CDK Crash Course.

Thanks, bye bye.