What is the Azure AI Fundamentals Certification?

The Azure AI Fundamentals Certification is for those seeking a Machine Learning role such as AI Engineer or Data Scientist.

In the last decade, companies have been collecting vast amounts of data around their product and service offerings.

The successful companies of the 2010s were data-driven companies that knew how to collect, transform, store, and analyze their vast amounts of data.

The successful companies of the 2020s will be ML-driven companies that know how to leverage deep learning and create AI product and service offerings.

Overview of the Azure AI Fundamentals

The Azure AI Fundamentals covers the following:

  • Azure Cognitive Services
  • AI Concepts
  • Knowledge Mining
  • Responsible AI
  • Basics of ML pipelines and MLOps
  • Classical ML models
  • AutoML and Azure ML Studio

How do you get the Azure AI Fundamentals certification?

You can get the certification by paying the exam fee and sitting for the exam at a test center partnered with Microsoft Azure.

Microsoft Azure is partnered with Pearson Vue and PSI Online who have a network of test centers around the world. They provide both in-person and online exams. If you have the opportunity, I recommend that you take the exam in-person.

Microsoft has a portal page on Pearson VUE where you can register and book your exam.

The exam fee is $99 USD.


Can I simply watch the videos and pass the exam?

For a fundamentals certification like the AI-900, you can pass by just watching the video content, without exploring the Azure services hands-on on your own.

Azure has a much higher frequency of updates than other cloud service providers. Sometimes a certification gets new updates every month; however, the AI-900 is not hands-on focused, so study courses are less prone to becoming stale.

  • The exam has 40 to 60 questions with a time limit of 60 minutes.
  • The exam contains many different question types.
  • A passing grade is around 70%.

The Free Azure AI Fundamentals Video Course

Just like my other cloud certification courses published on freeCodeCamp, this course will remain free forever.

The course contains study strategies, lectures, follow-alongs, and cheatsheets, and it's designed to be a complete end-to-end course.

Head on over to freeCodeCamp's YouTube channel to start working through the full 4-hour course.

Full Transcript

(note: autogenerated)

Hey, this is Andrew Brown, your cloud instructor from ExamPro, and I'm bringing you another complete study course.

And this time, it's the Azure AI Fundamentals, made available here on freeCodeCamp.

So this course is designed to help you pass the exam and achieve a Microsoft-issued certification.

And we're going to do that by providing you great lecture content, follow-alongs to get that hands-on experience, and cheat sheets for the day of your exam.

So when you do get that certification, you can put it on your resume or LinkedIn, and show that you have that Azure knowledge to get that cloud job or get that promotion.

So I want to introduce myself: I'm previously the CTO of multiple edtech companies, with 15 years of industry experience and five years specializing in cloud. I'm a cloud community hero, and I've published many free cloud courses.

I love Star Trek and coconut water.

So I just want to take a moment here to thank viewers like you, because it's you that make these free courses possible.

And if you want to support more free courses, just like this one, a great way to do that is buying our additional study materials on exampro.co.

And for this exam, it's forward slash AI-900.

This will get you access to study notes, flashcards, quizlets, downloadable cheat sheets and practice exams.

And you'll also be able to ask questions and get some learning support.

So if you want to keep up to date with any of the more courses I am releasing, you can follow me on Twitter. Share with me when you pass the exam, or what you might like to see as the next course.

So there we go.

Let's get to it.

Hey, this is Andrew Brown from exam Pro.

And we are at the start of our journey here learning about the AI-900, asking the most important question, which is: what is the AI-900?

So the Azure AI Fundamentals certification is for those seeking ML roles such as AI engineer or data scientist.

And the certification will demonstrate a person can define and understand Azure Cognitive Services, AI concepts, knowledge mining, responsible AI, basics of ML pipelines, classical ML models, AutoML, and Azure ML Studio.

So you don't need super complicated ML knowledge here, but it definitely helps to get you through.

But yeah, so this certification is generally referred to by its exam code, the AI-900, and it's the natural path to the Azure AI Engineer or Azure Data Scientist certification.

And this generally is an easy course to pass.

It's great for those new to cloud or ML-related technology. Looking at our roadmap, you might be asking: okay, what are the paths, and what should I learn?

And so here are my markers.

And let's get out the annotation tool or laser pointer to see where we can go.

Now if you already have your AZ-900, that's a great starting point before you take your AI-900.

If you don't have your AZ-900, you can jump right into the AI-900.

But I strongly recommend you go get that AZ-900, because it gives you general foundational knowledge. It's just one more thing you won't have to worry about: how to use Azure at a fundamental level.

Do you need the DP-900 to take the AI-900? No, but a lot of people like to go this route, where they want that data foundation before they move on to the AI-900, because they know that broad knowledge is going to be useful there.

So you know, it is apparent that you see a lot of people getting the AI-900 and the DP-900 together.

Beyond the fundamentals, the path is a little bit more clear: it's either going to be data scientist or AI engineer.

So the AI engineer is just Cognitive Services turned up to 11; you have to know how to use the AI services inside and out. The data scientist is more focused on setting up actual pipelines and things like that within Azure Machine Learning Studio.

So you just have to decide which path is for you.

The data scientist is definitely harder than the AI engineer. I think the course code was updated, so I just updated that to the AI-102.

And I think for the AI engineer there used to be two separate courses you had to take, but now it's just a single one, so it's unified.

But you know, if you aren't ready for the data scientist, some people like taking the AI engineer first and then doing the data scientist.

So this is kind of like a warm up.

Again, it's not 100% necessary, but it's just based on your personal learning style.

And a lot of times people like to take the data engineer after the data scientist, just to round out their complete knowledge.

Now, if you already have the AZ-900 and an associate cert, you can go straight for the data scientist if you want to risk it, because that one is really hard.

So if you've passed the easier ones before, you're probably going to have a lot more confidence learning all this fun foundational stuff at this level here.

But of course, it's always recommended to go grab these foundational certs because sometimes course materials just do not cover that information.

And so the obvious stuff is going to get left out.

Okay.

So moving forward here.

So how long should you study to pass the AI-900? Well, if you have one year's experience with Azure, you're looking at as little as five hours; it could be up to 10 hours.

If you have passed the AZ-900 or DP-900, around 10 hours is the average.

If you're completely new to ML and AI, you're looking at 15 hours, and this could get extended to 20 to 30. Again, it just depends on how green you are, like how new you are to these concepts.

But you know, I think on average we're looking at 15 hours; the recommended study time of 30 minutes a day for 14 days should get you through it.

You know, just don't over-study, and just don't spend too little time either.

So where do you take this exam? You can take it in person at a test center, or online from the convenience of your own home.

So there are two popular options: PSI and Pearson VUE. And I should say these are not necessarily test centers per se, but networks of test centers partnered with PSI and Pearson VUE, so that you can easily take the exam at a local test center.

If you've ever heard the term proctor, that means a supervisor who monitors you while you're taking the exam.

When we talk about online exams, they'll say proctored exams to refer to the online component. If I had the option between in-person and online, I'd always take the in-person, because it's a controlled environment and it's way less stressful.

And, you know, online, so many things can go wrong.

So you know, but it's up to your personal preference and your situation.

Okay? What does it take to pass the exam? Well, you've got to watch the lectures and memorize key information, and do hands-on labs and follow-alongs with your own Azure account. I would say that you could probably get away with just watching all the videos in this one without doing the hands-on, but again, it really does reinforce the information.

If you do take the time there.

There is some stuff in Azure Machine Learning Studio that you might be wary of launching, because we do have to run instances and they will cost money.

So if you feel that you're not comfortable with that, just watching should be okay.

But when you get into the associate tier, you just have to expect to pay something to learn and take that risk, okay? You'll want to do paid online practice exams that simulate the real exam.

So I do have paid practice exams that accompany this course on my platform, ExamPro.

And that's how you can help support more of these free courses.

Can you pass this without taking a practice exam? Azure's a little bit harder. If this were an AWS exam, I'd say yes. For Azure, it's kind of risky. The AZ-900? Sure. AI-900, DP-900, SC-900? No, I think you should get a practice exam, at least one, or go through the sample one; there's probably a sample one you can find looking around on the Azure website.

Let's just look at the exam guide breakdown here very shortly, and then in the following video, we'll look at it in more detail.

So it's broken down into the following domains: describe AI workloads and considerations; describe fundamental principles of machine learning on Azure; describe features of computer vision workloads on Azure; describe features of natural language processing workloads on Azure; and describe features of conversational AI workloads on Azure.

And I want you to notice it says describe, describe, describe, describe, describe. That's good, because that tells you it's not going to be super, super hard, right? If you start seeing verbs beyond describe, like identify, then you know it's going to be a bit harder, okay? The passing grade here is 700 out of 1000.

So that's around 70%. I say around because you could possibly fail with 70%, because these things work on scaled scoring.

For response types, there are about 40 to 60 questions, and you can afford to get 12 to 18 questions wrong.

I put an asterisk there because there's not always just one question per section, but I'll talk about that here in a second.

So some questions are worth more than one point.

There's no penalty for wrong questions.

Some questions cannot be skipped.

And the format of questions can be multiple choice, multiple answer, drag and drop, hot area, and case studies.

I don't remember; I don't think I saw a case study on mine.

But case studies will have a series of questions that come back to a particular business problem.

And so those are very interesting.

That's why we have that asterisk up there, okay.

So for the duration, you get one hour; that means about one minute per question. The exam time is 60 minutes, and your seat time is 90 minutes. Seat time refers to the amount of time that you should allocate for the exam: this includes time to review the instructions, read and accept the NDA, complete the exam, and provide feedback at the end.

The certification is going to be valid for 24 months, up to two years, before you have to recertify.

And, you know, we'll proceed to the full exam guide now.

Okay.

Hey, this is Andrew Brown from exam Pro.

And what we've pulled up here is the official exam outline on the Microsoft website.

If you want to find this yourself, just type in AI-900 with Azure or Microsoft, and you should be able to easily find it; the page looks like this.

And what I want you to do is scroll on down because we're looking for skills measured and from there, we're going to download the skills outline.

And once we have that open, you might want to bump up the text.

And so what you'll always see in these documents is red text at the top saying, hey, we've updated this track. Azure loves updating their exams with minor updates that don't generally affect the outcome of your study.

But it does get a lot of people worrying.

So they say, well, is the course out of date? No, no, they're just making minor changes.

Because they'll do this like five times a year.

And so if there was a major revision, what would happen is they would change the exam code. So instead of being the AI-900, it would be, like, the AI-901 or AI-902. We saw that recently with the AI-100, which is now the AI-102. Sorry.

So you know, just watch out for those.

And if it's a major revision, then yes, you would need a completely new course, and the old one would not match. But for minor revisions, it's going to be a minute change.

So if we scroll on down, a lot of times they'll just cross out what they've changed. In this one in particular, they did not show the changes in detail; you'd have to read through the comparison.

But we'll look at the new listing here.

and work our way through. First up is: describe artificial intelligence (AI) workloads and considerations.

So here we're just kind of describing the generalities of AI.

So prediction and forecasting: when we use AutoML, prediction would be classification and regression, and forecasting would be time series forecasting, I suppose. Then: identify features of anomaly detection.

So there's not a lot in the exam for this; we touch on it briefly. Computer vision workloads: there's a lot of stuff under computer vision, as you'll find out through the course. Then NLP and knowledge mining workloads, and conversational AI workloads.

And again, these are all the concepts, not how to use the services, then you have the responsible AI section.

And so Microsoft has these six principles that they really want you to know.

And they push it throughout all their AI services.

So those are the six listed here.

Now, they're not that hard to learn. Then: describe fundamental principles of machine learning on Azure.

So here, it's just describing regression, classification and clustering.

We have a lot of practical experience with these in the course.

So you will understand at the end what these are used for.

For core machine learning concepts, we can identify features and labels in a data set.

So that's their data labeling service. Then: describe how training and validation datasets are used in machine learning.

So we touch on that. Then: describe how machine learning algorithms are used for training, and select and interpret model evaluation metrics for classification and regression. A lot of these we'll see in AutoML, because it automatically does it, but we can see how it does that.

Okay.

Well, without having to do it ourselves. Then: identify core tasks in creating a machine learning solution.

So: describe common features of data ingestion and preparation, feature engineering and selection, model training, model evaluation, and model deployment and management.

And then we have: describe no-code solutions. So AutoML; they like to call it automated ML, but really, the industry just calls it AutoML.

Then there's the designer for building pipelines.

Here's where we see some changes.

So: identify features of image classification solutions and features of object detection solutions. Semantic segmentation is gone, which is great, because I don't even know what that is, so it's great that it's out of there. Then OCR solutions, and then you have face detection.

Then under computer vision tasks, we have Computer Vision, Custom Vision, the Face service, and Form Recognizer.

There's a lot around computer vision.

For NLP, we have key phrase extraction, entity recognition, sentiment analysis, language modeling, and speech recognition and synthesis. That last one doesn't really appear much; it's kind of a concept, not so much something we have to do.

Then there's translation.

We have the NLP services: Text Analytics, LUIS (or Louis, I'm not sure which way to pronounce it), the Speech service, and Translator Text.

Then down below, we have conversational AI: building out webchat bots, and characteristics of conversational AI solutions. It looks like they removed telephone voice menus and personal digital assistants; not sure why they decided to remove those, but that's okay, I think that's fine.

QnA Maker and Azure Bot Service. I really like this service, by the way.

So yeah, there we go.

That is the outline.

And now we'll jump into the actual course.

Hey, this is Andrew Brown from exam Pro.

And we are looking at the layers of machine learning.

So here I have this thing that looks like kind of an onion.

And what it is, it's just describing the relationship between these ML terms related to AI, and we'll just work our way through here, starting at the top.

So artificial intelligence, also known as AI, is when machines perform jobs that mimic human behavior.

So it doesn't describe how it does that.

It's just the fact that that's what AI is. One layer underneath, we have machine learning.

So machines that get better at a task without explicit programming.

Then we have deep learning.

So these are machines that have an artificial neural network inspired by the human brain to solve complex problems.

And if you're talking about someone who actually assembles either ML or deep learning models or algorithms, that's a data scientist: a person with multidisciplinary skills in math, statistics, predictive modeling, and machine learning, who makes future predictions.

So what you need to understand is that AI is just the outcome, right? And so AI could be using ML underneath, or deep learning, or a combination of both, or just if-else statements, okay? Alright, so let's take a look here at the key elements of AI.

So AI is the software that imitates human behaviors and capabilities.

And there are key elements according to Azure or Microsoft as to what makes up AI.

So let's go through this list quickly here.

So we have machine learning, which is the foundation of an AI system that can learn and predict like a human. You have anomaly detection: detect outliers or things out of place, like a human. Computer vision: be able to see like a human. Natural language processing, also known as NLP: be able to process human languages and infer context, you know, like a human. And conversational AI: be able to hold a conversation with a human.

So, you know, I wrote here, according to Microsoft and Azure, because you know, the global definition is a bit different.

But I just wanted to put this here, because I've definitely seen this as an exam question.

And so we're going to have to go with Azure's definition here.

Okay.

Let's define what is a data set.

So a data set is a logical grouping of units of data that are closely related to or share the same data structure.

And there are publicly available datasets that are used in learning of statistics, data analytics and machine learning.

I just want to cover a couple here.

So the first is the MNIST database: images of handwritten digits used to test classification, clustering, and image processing algorithms. It's commonly used when learning how to build computer vision ML models to translate handwriting into digital text. So it's just a bunch of handwritten digits.

And then another very popular dataset is the Common Objects in Context (COCO) dataset. This is a dataset which contains many common images, with a JSON file (the COCO format) that identifies objects or segments within each image. And so this dataset has a lot of stuff in it: object segmentation, recognition in context, superpixel stuff segmentation; they have a lot of images and a lot of objects.

So there's a lot of stuff in there.

So why am I talking about this, and in particular the COCO dataset? Well, when you use Azure Machine Learning Studio, it has a data labeling service. And the thing is, it can actually export out into COCO format. So that's why I want you to get exposure to what COCO is.

And the other thing is that when you're building out Azure Machine Learning pipelines, they actually have open datasets, as you'll see later in the course, which shows that you can just use very common ones. And so you might see MNIST and the other one there.

So I just wanted to get you some exposure.

Okay.
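To make the COCO format a bit more concrete, here's a tiny Python sketch that parses a hand-made annotation snippet in the COCO JSON style. The example data here is invented for illustration; real COCO files are far larger, but the basic shape (images, categories, annotations with bounding boxes) is the same idea.

```python
import json

# A tiny, hand-made annotation snippet in the COCO JSON style
# (invented for illustration; real COCO files are much larger).
coco_json = """
{
  "images": [{"id": 1, "file_name": "cat.jpg", "width": 640, "height": 480}],
  "categories": [{"id": 17, "name": "cat"}],
  "annotations": [
    {"id": 10, "image_id": 1, "category_id": 17,
     "bbox": [100, 120, 200, 150]}
  ]
}
"""

data = json.loads(coco_json)

# Build lookup tables, then join annotations to images and categories.
images = {img["id"]: img["file_name"] for img in data["images"]}
categories = {cat["id"]: cat["name"] for cat in data["categories"]}

for ann in data["annotations"]:
    x, y, w, h = ann["bbox"]  # COCO bboxes are [x, y, width, height]
    print(f'{images[ann["image_id"]]}: {categories[ann["category_id"]]} '
          f'at ({x}, {y}), {w}x{h}')
# → cat.jpg: cat at (100, 120), 200x150
```

The point is just that a labeled dataset is structured data you can read and join programmatically, which is exactly what the data labeling export gives you.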

Let's talk about data labeling.

So this is the process of identifying raw data (images, text files, videos) and adding one or more meaningful and informative labels to provide context, so a machine learning model can learn.

So with supervised machine learning, labeling is a prerequisite to produce training data.

And each piece of data will generally be labeled by a human.

The reason why I say generally here is because with the Azure data labeling service, it can actually do ML-assisted labeling.

So with unsupervised machine learning, labels will be produced by the machine and may not be human-readable.

And then one other thing I want to touch on is the term called ground truth.

A properly labeled dataset that you can use as the objective standard to train and assess a given model is often called ground truth. The accuracy of your trained model will depend on the accuracy of your ground truth.

Now, using Azure's tools, I've never seen them use the term ground truth; I see that a lot in AWS, and even this graphic here is from AWS.

But I just want to make sure you are familiar with all that stuff.

Okay.

Let's compare supervised unsupervised and reinforcement learning.

Starting at the top, we got supervised learning, this is where the data has been labeled for training.

And it's considered task-driven, because you're trying to make a prediction and get a value back. It's used when the labels are known and you want a precise outcome, when you need a specific value returned; so you're going to be using classification and regression in these cases.

For unsupervised learning, this is where the data has not been labeled, and the ML model needs to do its own labeling.

This is considered data driven.

It's trying to recognize a structure or a pattern.

And so this is when the labels are not known.

And the outcome does not need to be precise when you're trying to make sense of data.

So you have clustering, dimensionality reduction and Association.

Have you ever heard the term dimensionality reduction before? The idea is that it's trying to reduce the number of dimensions to make it easier to work with the data.

So, making sense of the data, right? Then we have reinforcement learning.

So this is where there is no data.

There's an environment, and an ML model generates data and makes many attempts to reach a goal.

So this is considered decisions driven.

And so this is for game AI, learning tasks, and robot navigation. When you've seen someone code an AI in a video game that can play itself, that's what this is.

If you're wondering, this is not all the types of machine learning.

Specifically, unsupervised and supervised learning are considered classical machine learning, because they rely heavily on statistics and math to produce the outcome.

But there you go.
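To make the supervised versus unsupervised contrast concrete, here's a toy Python sketch. The data and the two tiny algorithms (a 1-nearest-neighbour classifier and one round of 1-D k-means assignment) are my own simplified illustrations, not anything from the exam itself.

```python
# Supervised: labels are known, so we can learn a decision rule.
# Tiny 1-nearest-neighbour classifier on labelled points.
def nearest_neighbour_predict(train, x):
    # train is a list of (feature, label) pairs
    return min(train, key=lambda pair: abs(pair[0] - x))[1]

labelled = [(1.0, "cat"), (1.2, "cat"), (8.0, "dog"), (8.5, "dog")]
print(nearest_neighbour_predict(labelled, 1.1))  # → cat
print(nearest_neighbour_predict(labelled, 9.0))  # → dog

# Unsupervised: no labels; the algorithm must find structure itself.
# One round of 1-D k-means assigns each point to its nearest centroid.
def kmeans_assign(points, centroids):
    return [min(range(len(centroids)), key=lambda i: abs(p - centroids[i]))
            for p in points]

points = [1.0, 1.2, 8.0, 8.5]
print(kmeans_assign(points, [1.0, 8.0]))  # → [0, 0, 1, 1]
```

Notice the classifier needs labels ("cat", "dog") up front, while the clustering step only ever sees raw numbers and invents its own groups, which is exactly the task-driven versus data-driven distinction above.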

So what is a neural network? Well, it's often described as mimicking the brain: a neuron or node represents an algorithm.

So data is inputted into a neuron, and based on the output, the data will be passed to one of many connected neurons. The connections between neurons are weighted; I really should have highlighted that one, that's very important.

The network is organized into layers: there will be an input layer, one to many hidden layers, and an output layer.

So here's an example of a very simple neural network.

Notice the nn, a lot of times you'll see this in ML as an abbreviation for neural networks.

And sometimes neural networks are just called neural nets.

So just understand that's the same term here.

What is deep learning? A neural network that has three or more hidden layers is considered deep learning, because at this point it's not human-readable to understand what's going on within those layers.

What is feed forward? These are neural networks where connections between nodes do not form a cycle; they always move forward. So that just describes a forward pass through the network; you'll see FNN, which stands for feed forward neural network, to describe that type of network. Then there's backpropagation, which happens in feed forward networks.

This is where we move backwards through the neural net, adjusting the weights to improve the outcome on next iteration.

This is how a neural net learns.

The way the backpropagation knows to do this is that there's a loss function.

So, a function that compares the ground truth to the prediction to determine the error rate, that is, how badly the network performed.

So when it gets to the end, it's going to perform that calculation, and then it's going to do its backpropagation and adjust the weights. Then you have activation functions; I'm just going to clear this up here.

So activation functions.

They're an algorithm applied to a hidden layer node that affects the connected output.

So for this entire hidden layer, they'll all have the same one, and it kind of affects how the network learns and how the weighting works; it's part of backpropagation and the learning process. There's a concept of dense, when the next layer increases the number of nodes, and sparse, when the next layer decreases the number of nodes.

Anytime you see something going from a dense layer to a sparse layer, that's usually called dimensionality reduction, because you're reducing the number of dimensions; the number of nodes in your layer determines the dimensions you have.

Okay.
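Here's a minimal Python sketch of that whole loop for a single neuron: forward pass, loss, and a backpropagation-style weight update. The input, target, and learning rate are arbitrary illustration values I've chosen; a real network has many layers of these, but the mechanics per neuron are the same.

```python
import math

def sigmoid(z):                 # activation function
    return 1.0 / (1.0 + math.exp(-z))

x, y_true = 1.5, 1.0            # one input and its ground-truth label
w, b, lr = 0.1, 0.0, 0.5        # weight, bias, learning rate

for _ in range(100):
    y_pred = sigmoid(w * x + b)             # forward pass
    loss = (y_pred - y_true) ** 2           # squared-error loss
    # Gradient of the loss w.r.t. the pre-activation z, via chain rule.
    grad = 2 * (y_pred - y_true) * y_pred * (1 - y_pred)
    w -= lr * grad * x                      # backward pass: adjust weight
    b -= lr * grad                          # ...and bias

print(round(sigmoid(w * x + b), 2))  # prediction has moved close to 1.0
```

Each iteration, the loss function measures how bad the prediction was, and the backward step nudges the weight in the direction that reduces that loss, which is exactly the "adjust the weights to improve the outcome on the next iteration" idea above.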

What is a GPU? Well, it's a graphics processing unit that is specially designed to quickly render high-resolution images and video concurrently.

GPUs can perform parallel operations on multiple sets of data.

So they are commonly used for non graphical tasks such as machine learning, and scientific computation.

So a CPU has an average of 4 to 16 processor cores, while a GPU can have thousands of processor cores; something that has 48 GPUs can have as many as 40,000 cores.

Here's an image I grabbed right off the Nvidia website.

And so it really illustrates very well how this would be really good for machine learning or neural networks: neural networks have a bunch of nodes doing very repetitive tasks, and you can spread them across a lot of cores, so that's going to work out really great.

So GPUs are suited for repetitive and highly parallel computing tasks such as rendering, graphics, cryptocurrency mining, deep learning and machine learning.

Before we talk about CUDA, let's talk about what NVIDIA is.

So Nvidia is a company that manufactures graphical processing units for gaming and professional markets.

If you play video games, you've heard of Nvidia.

So what is CUDA? It is the compute unified device architecture.

It is a parallel computing platform and API by Nvidia that allows developers to use CUDA enabled GPUs for general purpose computing on GPUs.

So, GPGPU. All major deep learning frameworks are integrated with the NVIDIA Deep Learning SDK.

The Nvidia deep learning SDK is a collection of Nvidia libraries for deep learning.

One of those libraries is the CUDA deep neural network library.

So, cuDNN. cuDNN provides highly tuned implementations for standard routines such as forward and backward convolution (convolutions are really great for computer vision), pooling, normalization, and activation layers.

So, you know, in the Azure certification for the AI-900, they're not going to be talking about CUDA. But if you understand these two things, you'll understand why GPUs really matter.

Okay.

All right, let's get an easy introduction to the machine learning pipeline.

So this one is definitely not an exhaustive one, and we're definitely gonna see more complex ones throughout this course.

But let's get to it here.

So starting on the left hand side, we might start with data labeling.

This is very important when you're doing supervised learning, because you need to label your dataset so the ML model can learn by example during training. This stage and the feature engineering stage are considered preprocessing, because we are preparing our data for training the model.

When we move on to feature engineering, the idea here is that ml models can only work with numerical data.

So we need to translate it into a format that it can understand.

So extract out the important data that the ML model needs to focus on.

Okay, then there's the training step.

So your model needs to learn how to become smarter; it will perform multiple iterations, getting smarter with each one. You might also have a hyperparameter tuning step here.

But the ML model can have different parameters.

So you can use ml to try out many different parameters to optimize the outcome.

When you get to deep learning, it's impossible to tweak the parameters by hand.

So you have to use hyperparameter tuning. Then you have serving, sometimes known as deploying.

But you know, when we say deploy, we're talking about the entire pipeline, not necessarily just the ML model step.

So we need to make an ml model accessible.

So we serve it by hosting in a virtual machine or container.

When we're talking about Azure machine learning, it's either going to be Azure Kubernetes Service or Azure Container Instances.

And you have inference.

So inference is the act of requesting a prediction: you send your payload, either CSV or whatever, and you get back the results. You have real-time endpoints and batch processing.

So real-time is just there when you need it; batch can be near real-time as well, but generally it's slower. The idea is: am I making a single-item prediction, or am I sending a bunch of data at once?

And again, this is a very simplified ML pipeline; I'm sure we'll revisit ML pipelines later in this course.
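As a rough sketch of those stages in plain Python, preprocessing, training, serving, and inference might look like the following. Real Azure Machine Learning pipelines use managed services rather than code like this, and the house-price data here is made up purely for illustration.

```python
# "Labelled" training data: house size (feature) -> price (label).
# Invented numbers; the relationship is deliberately simple (price = 3 * size).
sizes = [50.0, 80.0, 100.0, 120.0]
prices = [150.0, 240.0, 300.0, 360.0]

# Feature engineering: ML models want numerical, well-scaled inputs.
def scale(xs):
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

features = scale(sizes)

# Training: fit y = w*x + b by ordinary least squares (closed form).
n = len(features)
mean_x = sum(features) / n
mean_y = sum(prices) / n
w = sum((x - mean_x) * (y - mean_y) for x, y in zip(features, prices)) \
    / sum((x - mean_x) ** 2 for x in features)
b = mean_y - w * mean_x

# Serving: wrap the trained model behind a callable "endpoint".
def predict(size):
    x = (size - min(sizes)) / (max(sizes) - min(sizes))
    return w * x + b

# Inference: a single real-time request vs. a small batch of requests.
print(round(predict(90.0)))                        # → 270
print([round(predict(s)) for s in [60.0, 110.0]])  # → [180, 330]
```

The shape is the same whether your "model" is a one-line formula or a deep network: preprocess, fit, wrap it behind an endpoint, then answer single or batched prediction requests.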

So let's compare the terms forecasting and prediction.

So forecasting, you make a prediction with relevant data.

It's great for analysis of trends, and it's not guessing.

And when you're talking about prediction, this is where you make a prediction without relevant data; you use statistics to predict future outcomes. It's more like guessing, and it uses decision theory.

So imagine you have a bunch of data.

And the idea is you're going to infer from that data: okay, maybe it's A, maybe it's B, maybe it's C.

And for prediction, you don't have really much data, so you're going to have to kind of invent it.

And the idea is that you'll figure out what the outcome is there.

These are extremely broad terms, but now you have a high-level view of these two things, okay.

So what are performance or evaluation metrics? They are used to evaluate different machine learning algorithms. When your machine learning model makes a prediction, these are the metrics you use to determine whether your ML model is working as you intended.

For different types of problems, different metrics matter. This is absolutely not an exhaustive list; I just want to get you exposure to these terms so that when you see them, you can come back here and refer to this. You don't necessarily need to remember all of them, but the classification metrics you should know.

So for classification, we have accuracy, precision, recall, F1 score, and ROC AUC.
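
To make the classification metrics concrete, here's a minimal pure-Python sketch, with made-up labels, of how accuracy, precision, recall, and F1 are computed from the counts of true/false positives and negatives:

```python
# Hypothetical binary labels: 1 = positive class, 0 = negative class.
y_true = [1, 1, 1, 0, 0, 0, 1, 0]  # ground truth
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]  # model output

pairs = list(zip(y_true, y_pred))
tp = sum(1 for t, p in pairs if t == 1 and p == 1)  # true positives
fp = sum(1 for t, p in pairs if t == 0 and p == 1)  # false positives
fn = sum(1 for t, p in pairs if t == 1 and p == 0)  # false negatives
tn = sum(1 for t, p in pairs if t == 0 and p == 0)  # true negatives

accuracy = (tp + tn) / len(pairs)   # overall fraction correct
precision = tp / (tp + fp)          # of predicted positives, how many were right
recall = tp / (tp + fn)             # of actual positives, how many were caught
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two
print(accuracy, precision, recall, f1)
```

Libraries like scikit-learn provide these as ready-made functions, but the arithmetic is exactly this simple.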

For regression metrics, we have MSE, RMSE, and MAE.

For ranking metrics, we have MRR, DCG, and NDCG. For statistical metrics, we have correlation. For computer vision metrics, we have PSNR, SSIM, and IoU. For NLP metrics, we have perplexity, BLEU, METEOR, and ROUGE.

For deep-learning-related metrics, we have Inception Score and Fréchet Inception Distance (Fréchet is indeed a person's name).

And there are two categories of evaluation metrics. Internal evaluation metrics are used to evaluate the internals of an ML model: accuracy, F1 score, precision, and recall. I call them "the famous four", used in all kinds of models. External evaluation metrics are used to evaluate the final prediction of an ML model.

So yeah, don't get too worked up here.

I know that's a lot of stuff.

The ones that matter, we will see again, okay.

Let's take a look at Jupyter notebooks.

These are web-based applications for authoring documents that combine live code, narrative text, equations, and visualizations. If you're doing data science or building ML models, you are going to be working with Jupyter notebooks.

They're always integrated into cloud service providers' ML tools.

So Jupyter Notebook actually came about from IPython; IPython is its precursor.

The notebook feature was extracted out and became Jupyter Notebook, and IPython is now a kernel used to run Python. So when you execute Python code here, it's using IPython. Jupyter notebooks were later overhauled and better integrated into an IDE called JupyterLab, which we'll talk about in a moment.

You generally want to open notebooks in JupyterLab; the legacy web-based interface is known as Jupyter classic notebooks. That's what the old one looks like. You can still open them, but everyone uses JupyterLab now.

Okay, so let's talk about JupyterLab.

JupyterLab is the next-generation web-based user interface: all the familiar features of the classic Jupyter Notebook in a flexible, powerful user interface. It has notebooks, a terminal, a text editor, a file browser, and rich outputs. JupyterLab will eventually replace the classic Jupyter Notebook.

So there you go.

We keep mentioning regression, but let's talk about it in more detail here.

So we kind of understand the concept.

So regression is the process of finding a function to correlate a labeled dataset (notice it's labeled, which means this is supervised learning) into a continuous variable, a number.

Another way to say it is: predict this variable in the future. "The future" just means that continuous variable; it doesn't have to be time, but time is a good example for regression.

So, what will the temperature be next week? Say it will be 20 degrees Celsius; how would we determine that? Well, we would have vectors (dots) plotted on a graph that has multiple dimensions. The dimensions could be more than just x and y; you could have many.

And then you have a regression line.

This is the line that goes through our dataset, and it's going to help us figure out how to predict the value. How would we do that? Well, we need to calculate the distance of a vector from the regression line, which is called an error. Different regression algorithms use the error differently to predict a future variable.

So, looking at this graphic: here's our regression line, and here is a dot, a vector, a piece of data.

The distance from the line is what we're going to use in our ML model. If we were to plot another candidate line up here, we'd compare it against the others the same way, and that's how we find the best fit.

What we'll commonly see for this is mean squared error, root mean squared error, and mean absolute error: MSE, RMSE, and MAE.

Okay.
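
As a sketch of the ideas above, here's an ordinary least squares line fit plus those three error metrics, computed by hand on made-up data (real work would use a library, but the math is the same):

```python
import math

# Made-up labeled data: feature x -> continuous target y.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.1, 5.9, 8.0]

# Closed-form ordinary least squares for a line y = a*x + b.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = my - a * mx

# Each point's vertical distance from the regression line is its error.
errors = [y - (a * x + b) for x, y in zip(xs, ys)]
mse = sum(e ** 2 for e in errors) / n    # mean squared error
rmse = math.sqrt(mse)                    # root mean squared error
mae = sum(abs(e) for e in errors) / n    # mean absolute error
print(a, b, mse, rmse, mae)
```

MSE punishes large errors much harder than MAE because the errors are squared, which is why the choice of metric matters for the problem at hand.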

Let's take a closer look at the concepts of classification.

So classification is the process of finding a function to divide a labeled dataset (again, this is supervised learning) into classes or categories; in other words, predict a category to apply to the inputted data.

For example: will it rain next Saturday? Will it be sunny or rainy? So we have our dataset.

And the idea is we're drawing a classification line through it to divide the dataset.

With regression, we're measuring the distance of the vectors to the line. Here, the question is just: what side of the line is the point on? If it's on this side, it's sunny; if it's on that side, it's rainy.

Okay.

For classification algorithms, we have logistic regression, decision trees, random forests, neural networks, Naive Bayes, k-nearest neighbors (also known as KNN), and support vector machines (SVMs).

Okay.
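
Of those, k-nearest neighbors is simple enough to sketch from scratch. Here's a toy version on made-up, hypothetical weather-style data: classify a new point by majority vote among its k closest training points.

```python
from collections import Counter

# Tiny made-up labeled dataset: (feature vector) -> class label.
train = [
    ((1.0, 1.0), "rainy"),
    ((1.2, 0.8), "rainy"),
    ((4.0, 4.2), "sunny"),
    ((4.5, 4.0), "sunny"),
    ((4.2, 4.4), "sunny"),
]

def knn_predict(point, k=3):
    # Sort training examples by squared Euclidean distance to the query.
    dist = lambda p: sum((a - b) ** 2 for a, b in zip(p, point))
    nearest = sorted(train, key=lambda ex: dist(ex[0]))[:k]
    # Majority vote among the k nearest neighbours.
    return Counter(label for _, label in nearest).most_common(1)[0][0]

print(knn_predict((4.1, 4.1)))  # sunny
print(knn_predict((1.1, 0.9)))  # rainy
```

Notice there's no training step at all; KNN just stores the data and does all the work at prediction time, which is why it gets slow on large datasets.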

Let's take a closer look at clustering.

So clustering is the process of grouping unlabeled data (unlabeled means this is unsupervised learning) based on similarities and differences.

The outcome is data grouped by its similarities or differences.

Imagine we have a graph with data on it, and the idea is we draw boundaries around it to find similar groups. So maybe we're recommending purchases to Windows users, or recommending purchases to Mac users.

Now remember, this is unlabeled data, so the label is being inferred; we're just saying these things are similar. For clustering algorithms, we have K-means, K-medoids, density-based, and hierarchical clustering.

Okay.
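
The assignment-and-update loop behind K-means can be sketched in a few lines of plain Python. This toy uses made-up one-dimensional data and two clusters; real implementations handle many dimensions and smarter centroid initialization.

```python
# Made-up unlabeled 1-D data: two obvious groups, around 1 and around 10.
data = [0.9, 1.0, 1.2, 9.8, 10.0, 10.3]

def kmeans(points, centroids, iterations=10):
    for _ in range(iterations):
        # Assignment step: each point joins its nearest centroid's cluster.
        clusters = [[] for _ in centroids]
        for p in points:
            idx = min(range(len(centroids)), key=lambda i: abs(p - centroids[i]))
            clusters[idx].append(p)
        # Update step: move each centroid to the mean of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

centroids, clusters = kmeans(data, centroids=[0.0, 5.0])
print(sorted(round(c, 2) for c in centroids))
```

No labels were ever provided; the two groups emerge purely from the distances between points, which is exactly what "unsupervised" means here.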

Hey, this is Andrew Brown from ExamPro.

And we're looking at the confusion matrix. This is a table to visualize the model's predictions, the predicted versus the actual (ground truth) labels. It's also known as an error matrix, and it's useful for classification problems to determine whether our classifier is working as we think it is.

So imagine we have the question: how many bananas did this person eat? And we have this box here with predicted versus actual; it's comparing the ground truth against what the model predicted.

On the exam, they'll ask you questions like: identify the true positives. They might not even say yes or no; it might be zero and one. They won't show you the labels, but predicted 1 with actual 1 would be a true positive, and predicted 0 with actual 0 would be a true negative.

Okay? Another thing they'll ask you about confusion matrices is their size.

Right now we're looking at a binary classifier, because we have just two labels, zero and one. But you could have three classes, say one, two, and three.

How would you work out the size then? There would be a third row and column, and since the two axes are still just ground truth versus prediction, the matrix would be 3 by 3, so nine cells. They might not say cells; they'll just ask for the size.

Okay.
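
Here's a minimal sketch of building a 2x2 confusion matrix from hypothetical predictions, which also shows where TP, FP, FN, and TN sit in the table:

```python
# Hypothetical binary labels (1 = positive, 0 = negative).
y_true = [1, 0, 1, 1, 0, 0, 1, 0]  # ground truth (actual)
y_pred = [1, 0, 0, 1, 1, 0, 1, 0]  # model output (predicted)

# 2x2 confusion matrix: rows = actual label, columns = predicted label.
matrix = [[0, 0], [0, 0]]
for t, p in zip(y_true, y_pred):
    matrix[t][p] += 1

tn, fp = matrix[0]  # actual 0 row: predicted 0 (TN), predicted 1 (FP)
fn, tp = matrix[1]  # actual 1 row: predicted 0 (FN), predicted 1 (TP)
print(matrix)  # [[3, 1], [1, 3]]
```

With three classes the same loop would fill a 3x3 matrix, which matches the sizing question above.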

To understand anomaly detection, let's quickly define what an anomaly is: an abnormal thing, marked by deviation from the norm or standard.

Anomaly detection, then, is the process of finding outliers within a dataset, called anomalies; for example, detecting when a piece of data or an access pattern appears suspicious or malicious.

Use cases for anomaly detection include data cleaning, intrusion detection, fraud detection, system health monitoring, event detection in sensor networks, ecosystem disturbances, and detection of critical and cascading flaws. Anomaly detection by hand is a very tedious process; using ML for anomaly detection is more efficient and accurate.

Azure has a service called Anomaly Detector that detects anomalies in data to quickly identify and troubleshoot issues.
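
As a minimal illustration of the idea (this is a statistics toy on made-up sensor readings, not how the Anomaly Detector service works internally), one simple approach flags values far from the mean:

```python
import statistics

# Made-up sensor readings: mostly steady, with one obvious outlier.
readings = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 25.0]

mean = statistics.mean(readings)
stdev = statistics.stdev(readings)

# Flag anything more than two standard deviations from the mean.
anomalies = [x for x in readings if abs(x - mean) > 2 * stdev]
print(anomalies)  # [25.0]
```

Real anomaly detection models handle seasonality, drift, and streaming data, but the core question is the same: how far does this point deviate from what's normal?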

So computer vision is when we use machine learning neural networks to gain a high-level understanding of digital images or videos.

For computer vision deep learning algorithms, we have convolutional neural networks (CNNs), which are used for image and video recognition. They're inspired by how the human eye processes information and sends it back to the brain.

We also have recurrent neural networks (RNNs), which are generally used for handwriting recognition or speech recognition. Of course, these algorithms have other applications, but these are the most common use cases for them.

For types of computer vision, we have image classification: look at an image or video and classify its place in a category. Object detection: identify objects within an image or video and apply labels and location boundaries.

Semantic segmentation: identify segments or objects by drawing pixel masks around them, which is great for objects in movement. Image analysis: analyze an image or video to apply descriptive context labels; for example, "an employee sitting at a desk in Tokyo" would be something image analysis could produce. Optical character recognition (OCR): find text in images or videos and extract it into digital, editable text. Facial detection: detect faces in a photo or video, draw a location boundary, and label their expression.

On the Microsoft side of computer vision, there's an app called Seeing AI, developed by Microsoft for iOS.

You use your device's camera to identify people and objects, and the app audibly describes them for people with visual impairments. It's totally free if you have an iOS device; I have an Android phone, so I cannot use it, but I hear it's great.

Some of the Azure computer vision service offerings: Computer Vision, which analyzes images and videos and extracts descriptions, tags, objects, and text. Custom Vision: custom image classification and object detection models using your own images. Face: detect and identify people and emotions in images. Form Recognizer: translate scanned documents into key-value or tabular editable data.

Natural language processing, also known as NLP, is machine learning that can understand the context of a corpus (a corpus being a body of related text).

NLP enables you to analyze and interpret text within documents and email messages; interpret or contextualize spoken tokens, for example customer sentiment analysis (is the customer happy or sad?); synthesize speech, like a voice assistant talking to you; automatically translate spoken or written phrases and sentences between languages; and interpret spoken or written commands and determine appropriate actions.
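
To give sentiment analysis some shape, here's a deliberately naive word-list sketch. Real services such as Text Analytics use trained models, not lookup tables, but the output (a sentiment label for a piece of text) is similar.

```python
# Tiny made-up word lists; purely illustrative, not a real lexicon.
POSITIVE = {"great", "happy", "love", "excellent"}
NEGATIVE = {"bad", "sad", "hate", "terrible"}

def sentiment(text):
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this great product"))    # positive
print(sentiment("terrible support made me sad")) # negative
```

The trained models earn their keep on the hard cases this toy gets wrong: negation ("not great"), sarcasm, and words whose sentiment depends on context.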

A very famous example of a voice assistant (or virtual assistant) from Microsoft is Cortana. It uses the Bing search engine to perform tasks such as setting reminders and answering questions for the user. And if you're on a Windows 10 machine, it's very easy to activate Cortana by accident.

When we talk about Azure's NLP offerings, we have Text Analytics: sentiment analysis to find out what customers think; finding topic-relevant phrases using key phrase extraction; identifying the language of the text with language detection; and detecting and categorizing entities in your text with named entity recognition.

For Translator, we have real-time text translation with multi-language support.

For Speech service, we can transcribe audible speech into readable, searchable text.

And then we have Language Understanding, also known as LUIS: a natural language processing service that enables you to understand human language in your own applications, websites, chatbots, IoT devices, and more. When we talk about conversational AI next, it generally uses NLP, so that's where you'll see the overlap, okay.

Let's take a look here at conversational AI, which is technology that can participate in conversations with humans.

We have chatbots, voice assistants, and interactive voice recognition systems, which are like the second generation of interactive voice response systems. You know, when you call in and they say "press these numbers", that's voice response; voice recognition is when the system can actually take human speech and translate it into action.

The use cases here: online customer support, replacing human agents for replying to customer FAQs, shipping questions, anything about customer support. Accessibility: voice-operated UI for those who are visually impaired. HR processes: employee training, onboarding, updating employee information.

I've never seen it used like that, but that's what they say the use case is. Healthcare: accessible, affordable health care, so maybe a claims process.

I've never seen this either, but maybe in the US, where you handle more of your own claims and everything is privatized, it makes more sense. Internet of Things: IoT devices.

So Amazon Alexa, Apple Siri, Google Home, and I suppose Cortana, but it doesn't really have a particular device, which is why I didn't list it there. And computer software: autocomplete search on phone or desktop, which would be something Cortana could do.

For the two Azure services around conversational AI, we have QnA Maker, to create a conversational question-and-answer bot from your existing content (also known as a knowledge base), and Azure Bot Service, an intelligent, serverless bot service that scales on demand, used for creating, publishing, and managing bots. So the idea is you make your bot with the first one and then you deploy it with the second.

Okay.

Let's take a look at responsible AI, which focuses on ethical, transparent, and accountable use of AI technology. Microsoft puts responsible AI into practice via its six Microsoft AI principles.

This framework was created by Microsoft, so it's not necessarily a standard, but it's something Microsoft is pushing hard for people to adopt.

The first is fairness: AI systems should treat all people fairly. Then we have reliability and safety: AI systems should perform reliably and safely. Privacy and security: AI systems should be secure and respect privacy. Inclusiveness: AI systems should empower everyone and engage people. Transparency: AI systems should be understandable. Accountability: people should be accountable for AI systems.

We need to know these in greater detail, so we're going to have a short little video on each of these, okay.

First on our list is fairness: AI systems should treat all people fairly.

An AI system can reinforce existing societal stereotypes, and bias can be introduced during the development of a pipeline. AI systems are used to allocate or withhold opportunities, resources, or information in domains such as criminal justice, employment and hiring, and finance and credit.

An example here would be an ML model designed to select the final applicants in a hiring pipeline without incorporating any bias based on gender or ethnicity, which could otherwise result in an unfair advantage. Azure ML can tell you how each feature influences a model's prediction, which helps you check for bias.

One tool that could be of use is Fairlearn, an open-source Python project to help data scientists improve the fairness of AI systems.

At the time I made this course, a lot of their stuff was still in preview, so the fairness component is not 100% there, but it's great to see it coming along.

Okay.

We are on to our second Microsoft AI principle: AI systems should perform reliably and safely.

AI software must be rigorously tested to ensure it works as expected before being released to the end user. If there are scenarios where the AI makes mistakes, it is important to release a report that quantifies the risks and harms to end users, so they are informed of the shortcomings of the AI solution.

Something you should really remember for the exam: areas where reliability and safety for humans is critically important include autonomous vehicles, health diagnosis and prescription suggestions, and autonomous weapon systems.

They didn't mention that last one in their content; I found it doing some additional research. You really don't want mistakes when you have automated weapons, or ethically you shouldn't have them at all, but that's just how the world works. Either way, it falls under this category.

We're on to our third Microsoft AI principle: AI systems should be secure and respect privacy.

AI can require vast amounts of data to train ML models, and the nature of an ML model may require personally identifiable information (PII). It is important that we ensure user data is protected and is not leaked or disclosed.

In some cases, ML models can run locally on a user's device, so PII remains on the device, avoiding that vulnerability; this is essentially edge computing.

AI security principles to check for malicious actors: data origin and lineage, data use (internal versus external), data corruption considerations, and anomaly detection.

So there you go.

We're on to the fourth Microsoft AI principle: AI systems should empower everyone and engage people.

If we can design AI solutions for a minority of users, then we can design AI solutions for the majority of users. We're talking about minority groups in terms of physical ability, gender, sexual orientation, ethnicity, and other factors.

This one's really simple. In terms of practicality, it doesn't 100% make sense, because if you've worked with groups that are deaf or blind, developing technology for them, a lot of the time they need specialized solutions. But the approach here is that if we can design for the minority, we can design for all; that is the principle, and that's what we need to know.

Okay.

Let's take a look at transparency: AI systems should be understandable.

Interpretability and intelligibility mean the end user can understand the behavior of the AI. Transparency of AI systems can help mitigate unfairness, help developers debug their AI systems, and gain more trust from users.

Those who build AI systems should be open about why they are using AI and open about the limitations of their AI systems. Adopting an open-source AI framework can provide transparency, at least from a technical perspective, on the internal workings of an AI system.

We are on to the last Microsoft AI principle: people should be accountable for AI systems.

Structures should be put in place to consistently enact AI principles and take them into account. AI systems should work within frameworks of governance and organizational principles, with ethical and legal standards that are clearly defined. These principles guide Microsoft in how they develop, sell, and advocate when working with third parties, and in their push toward regulation.

So this is Microsoft saying: hey everybody, adopt our model. There are many other models; I guess it's great that Microsoft is taking charge here, I just feel it needs to be a bit more developed. But what we'll do is look at some more practical examples so we can better understand how to apply their principles.

Okay.

So if we really want to understand how to apply the Microsoft AI principles, they've created a nice little tool, a free web app, for practical scenarios.

They have these cards you can read through, color-coded for different scenarios, and there's a website, so let's go take a look at that and see what we can learn, okay.

All right, so we're here on the Guidelines for Human-AI Interaction, so we can better understand how to put the Microsoft AI principles into practice.

They have 18 cards; let's work our way through and see the examples. The first one on our list: make clear what the system can do; help the user understand what the AI system is capable of doing.

Here, PowerPoint QuickStarter builds an outline to help you get started researching a subject, and displays suggested topics that help you understand the feature's capabilities. Then we have the Bing app, which shows examples of the types of things you can search for, and the Apple Watch, which displays all the metrics it tracks and explains how. Moving on to the second card: make clear how well the system can do what it can do.

Here we have Office's new companion experience, Ideas, which docks alongside your work and offers one-click assistance with grammar, design, data insights, richer images, and more. The unassuming term "Ideas", coupled with labeled previews, helps set expectations about the presented suggestions. The recommender in Apple Music uses language such as "we think you'll like" to communicate uncertainty.

The help page for Outlook webmail explains that the filtering into Focused and Other will start working right away but will get better with use, making clear that mistakes will happen, that you teach the product, and that you can set overrides. On to our red cards.

Here we have: time services based on context; time when to act or interrupt based on the user's current task and environment.

When it's time to leave for an appointment, Outlook sends a "time to leave" notification with directions for both driving and public transit, taking into account your current location and real-time traffic information. And after using Apple Maps routing, it remembers where you parked your car; when you open the app a little while later, it suggests routing to the parked car's location.

All these Apple examples make me think Microsoft has some kind of partnership with Apple; I guess Microsoft, or Bill Gates, did own Apple shares, so maybe they're closer than we think.

Next: show contextually relevant information; display information relevant to the user's current task and environment. Powered by machine learning, Acronyms in Word helps you understand the shorthand used in your own work environment, relative to the currently open document.

On walmart.com, when the user is looking at a product such as a gaming console, it recommends accessories and games that would go with it. When a user searches for movies, Google shows results including showtimes near the user's location for the current date. On to our fifth card.

Wait, did we miss one? No, okay, we're on the fifth one here: match relevant social norms; ensure the experience is delivered in a way that users would expect, given their social and cultural context. When Editor identifies ways to improve your writing style, it presents optional suggestions politely: "consider using".

That's the Canadian way: being polite.

Google Photos is able to recognize pets and uses the wording "important cats and dogs", recognizing that for many people, pets are an important part of the family. And you know what, when I started renting my new house, I mentioned I'd probably have dogs, and my landlord said, "Well, of course, pets are part of the family." That was something I liked to hear.

Cortana uses a semi-formal tone, apologizing when it's unable to find a contact, which is polite and socially appropriate. I like that.

Okay: mitigate social biases; ensure the AI system's language and behaviors do not reinforce undesirable and unfair stereotypes and biases.

MyAnalytics summarizes how you spend your time at work and suggests ways to work smarter; one way it mitigates bias is by using gender-neutral icons to represent important people. Sounds good to me. A Bing search for "CEO" or "doctor" shows images of people who are diverse in terms of gender and ethnicity. Also sounds good to me.

The predictive keyboard for Android suggests both genders when you type a pronoun starting with the letter H.

We're on to our yellow cards: support efficient invocation; make it easy to invoke or request the system's services when needed.

Flash Fill is a helpful time saver in Excel that can be easily invoked with on-canvas interactions that keep you in flow. On amazon.com, in addition to the system giving recommendations as you browse, you can manually invoke additional recommendations from the recommender menu.

Design Ideas in Microsoft PowerPoint can be invoked with the press of a button if needed. I cannot stand it when that pops up; I always have to tell it to leave me alone.

Okay, next: support efficient dismissal; make it easy to dismiss or ignore undesired AI system services. That sounds good to me.

Microsoft Forms allows you to create custom surveys, quizzes, polls, questionnaires, and forms; some choice questions trigger suggested options just beneath the relevant question, and these suggestions can be easily ignored or dismissed. Instagram allows the user to easily hide or report ads that have been suggested by AI by tapping the ellipsis at the top right of the ad.

Siri can be easily dismissed by saying "never mind". I'm always telling my Alexa "never mind".

Support efficient correction: make it easy to edit, refine, or recover when the AI system is wrong. Alt Text automatically generates alt text for photographs using intelligent services in the cloud, and the descriptions can be easily modified by clicking the Alt Text button in the ribbon. Once you set a reminder with Siri, the UI displays a "tap to edit" link.

When Bing automatically corrects spelling errors in search queries, it provides the option to revert to the query as originally typed with one click. On to card number 10.

Scope services when in doubt: engage in disambiguation, or gracefully degrade the AI system's services, when uncertain about the user's goal.

When AutoCorrect is uncertain of a correction, it engages in disambiguation by displaying multiple options you can select from. Siri will let you know it has trouble hearing you if you don't respond or speak too softly. Bing Maps will provide multiple routing options when it's unable to recommend a best one. We're on to card number 11.

Make clear why the system did what it did: enable the user to access an explanation of why the AI system behaved as it did.

Office Online recommends documents based on your history and activity; descriptive text above each document makes it clear why the recommendation is shown. Product recommendations on amazon.com include a "why recommended" link that shows which products in the user's shopping history informed the recommendation. Facebook enables you to access an explanation of why you are seeing each ad in your news feed. On to our green cards.

So: remember recent interactions; maintain short-term memory and allow the user to make efficient references to that memory.

When attaching a file, Outlook offers a list of recent files, including recently copied file links. Outlook also remembers people you have interacted with recently and displays them when you're addressing a new email. Bing Search remembers some recent queries, and a search can be continued conversationally: "how old is he?" after a search for Keanu Reeves.

Siri carries over the context from one interaction to the next: a text message is created for the person you told Siri to message. On to card number 13, lucky number 13: learn from user behavior; personalize the user's experience by learning from their actions over time.

Tap on the search bar in Office applications, and Search lists the top three commands you're most likely to need, personalized to you; with a technology called zero query, you don't even need to type in the search bar to get a personalized, predictive answer. Amazon.com gives personalized product recommendations based on previous purchases. On to card 14.

Update and adapt cautiously: limit disruptive changes when updating and adapting the AI system's behaviors. PowerPoint Designer improves slides for Office 365 subscribers by automatically generating design ideas to choose from; Designer has integrated new capabilities such as smart graphics and icon suggestions into the existing user experience, ensuring the updates are not disruptive.

Office's Tell Me feature shows dynamically recommended items in a dedicated area to minimize disruptive changes. On to card number 15.

Encourage granular feedback: enable the user to provide feedback indicating their preferences during regular interaction with the AI system. Ideas in Excel empowers you to understand your data through high-level visual summaries, trends, and patterns, and it encourages feedback on each suggestion by asking "Is this helpful?" Not only does Instagram provide the option to hide specific ads, it also solicits feedback to understand why an ad is not relevant.

In Apple's Music app, the love and dislike buttons are prominent and easily accessible.

Number 16: convey the consequences of user actions; immediately update or convey how user actions will impact future behaviors of the AI system.

You can get stock and geographic data types in Excel; it's as easy as typing text into a cell and converting it to the Stocks or Geography data type. When you perform the conversion, an icon immediately appears in the converted cells.

Upon tapping the like or dislike button for a recommendation in Apple Music, a pop-up informs the user that they'll receive more or fewer similar recommendations. On to card number 17.

We're almost at the end. Provide global controls: allow the user to globally customize what the AI system monitors and how it behaves.

Editor expands on the spelling and grammar checking capabilities of Word to include more advanced proofing and editing, designed to ensure your document is readable; Editor can flag a range of critique types and lets you customize which ones. The thing is, Word's spell checking is so awful; I don't understand how, after all these years, it never gets better, so I implore them to build better spell checking.

Bing Search provides settings that impact the types of results the engine will return, for example SafeSearch. Then we have Google Photos, which allows you to turn location history on and off for future photos.

It's kind of funny seeing Bing in there as an example of using AI, because at one point it was widely alleged that Bing was copying Google's search results to learn how to rank. I don't know; that's Microsoft for you.

We're on to card 18: notify users about changes; inform the user when the AI system adds or updates its capabilities.

The What's New dialog in Office informs you about changes by giving an overview of the latest features and updates, including updates to AI features. In Outlook on the web, the Help tab includes a What's New section that covers updates.

So there we go; we made it to the end of the list. I hope that was a fun lesson for you.

I kind of wish they had mapped these cards directly to the responsible AI principles, but I guess it's a standalone resource that ties in. So there we go, okay.

Hey, this is Andrew Brown from ExamPro, and we're looking at Azure Cognitive Services.

This is a comprehensive family of AI services and cognitive APIs to help you build intelligent apps.

So: create customizable, pre-trained models built with "breakthrough AI research" (I put that in quotation marks; I'm kind of throwing some shade at Microsoft Azure, just because it's their marketing material, right?); deploy Cognitive Services anywhere, from the cloud to the edge, with containers; get started quickly, no machine learning expertise required.

But I think it helps to have a bit of background knowledge. "Developed with strict ethical standards": Microsoft loves talking about responsible AI, empowering responsible use with industry-leading tools and guidelines.

So let's do a quick breakdown of the types of services in this family.

So for decision we have Anomaly Detector: identify potential problems early on. Content Moderator: detect potentially offensive or unwanted content. Personalizer: create rich personalized experiences for every user.

For language we have Language Understanding, also known as LUIS; I didn't put the initialism there, but don't worry, we'll see it again.

Build natural language understanding into apps, bots, and IoT devices. QnA Maker: create a conversational question-and-answer layer over your data. Text Analytics: detect sentiment.

So sentiment is like whether customers are happy, sad, glad; plus key phrases and named entities. Translator: detect and translate more than 90 supported languages.

For speech, we have Speech to Text, to transcribe audible speech into readable, searchable text; Text to Speech, to convert text to lifelike speech for natural interfaces; Speech Translation, to integrate real-time speech translation into your apps; and Speaker Recognition, to identify and verify the people speaking based on audio.

For vision, we have Computer Vision, to analyze content in images and videos; Custom Vision, to customize image recognition to fit your business needs; and Face, to detect and identify people and emotions in images.

So there you go.

So Azure Cognitive Services is an umbrella AI service that enables customers to access multiple AI services with an API key and an API endpoint. What you do is go create a new Cognitive Services resource.

And once you're there, it's going to generate two keys and an endpoint.

And that is what you generally use for authentication with the various AI services programmatically.

And that is something that is key to the service that you need to know.
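To make that concrete, here's a minimal sketch of how a call to a Cognitive Services endpoint is typically put together; the endpoint, key, and API path below are placeholders, not real values, and the header name is the one Cognitive Services conventionally uses:

```python
# Minimal sketch: assembling an authenticated Cognitive Services call.
# The endpoint, key, and path below are placeholders, not real values.

def build_request(endpoint: str, key: str, path: str, body: dict):
    """Assemble the URL, headers, and JSON body for a Cognitive Services call."""
    url = endpoint.rstrip("/") + path
    headers = {
        # Cognitive Services reads the API key from this header.
        "Ocp-Apim-Subscription-Key": key,
        "Content-Type": "application/json",
    }
    return url, headers, body

url, headers, body = build_request(
    "https://my-cog-service.cognitiveservices.azure.com",  # placeholder endpoint
    "<one-of-the-two-generated-keys>",                     # placeholder key
    "/text/analytics/v3.1/languages",                      # example API path
    {"documents": [{"id": "1", "text": "Hello world"}]},
)
print(url)
```

From here you'd send it with any HTTP client, for example `requests.post(url, headers=headers, json=body)`; having two keys lets you rotate one while your apps keep using the other.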

So knowledge mining is a discipline in AI that uses a combination of intelligence services to quickly learn from vast amounts of information.

So it allows organizations to deeply understand and easily explore information, uncover hidden insights and find relationships and patterns at scale.

So we have ingest, enrich and explore as our three steps.

So for ingest: ingest content from a range of sources, using connectors to first- and third-party data stores.

So we might have structured data such as databases and CSVs (the CSVs would really be semi-structured, but we're not going to get into that level of detail), and unstructured data.

So PDFs, videos, images, and audio. For enrich: enrich the content with AI capabilities that let you extract information, find patterns, and deepen understanding.

So cognitive services like vision, language, speech, decision, and search. And for explore: explore the newly indexed data via search, bots, existing business applications, and data visualizations. The enriched, structured data can feed customer relationship management systems, ERP systems, Power BI. This whole knowledge mining thing is a thing, but I believe the whole model around this is so that Azure shows you how you can use the Cognitive Services to solve things without having to invent new solutions.
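Just to visualize the three steps, here's a toy sketch in Python; every function body is an invented stand-in (in Azure these would be data connectors, cognitive skills, and an Azure Cognitive Search index), not a real API:

```python
# Toy sketch of the ingest -> enrich -> explore flow. Every function here
# is an invented stand-in, not a real Azure call.

def ingest(sources):
    """Pull raw content from structured and unstructured sources."""
    return [{"source": s, "text": f"raw content from {s}"} for s in sources]

def enrich(docs):
    """Apply AI 'skills' (pretend key phrase extraction) to each document."""
    for doc in docs:
        doc["key_phrases"] = doc["text"].split()[:3]
    return docs

def explore(docs, query):
    """Search the enriched, 'indexed' documents."""
    return [d for d in docs if query in d["text"]]

docs = enrich(ingest(["crm-database", "report.pdf"]))
print(explore(docs, "report.pdf"))
```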

So let's look at a bunch of use cases that Azure has and see where we can find some useful uses.

So the first one here is for content research.

So when organizations task employees with review and research of technical data, it can be tedious to read page after page of dense text. Knowledge mining helps employees quickly review these dense materials.

So you have a document, and in the enrichment step you could be doing printed text recognition, key phrase extraction, and custom skills like a technical keyword extractor, a format definition miner, or a large-scale vocabulary matcher; you put it through a search service, and now you have a searchable reference library, so it makes things a lot easier to work with.

Now we have audit, risk, and compliance management: developers could use knowledge mining to help attorneys quickly identify entities of importance from discovery documents and flag important ideas across documents. So we have documents:

clause extraction, clause classification, named entity extraction, key phrase extraction, language detection, automated translation; then you put it back into a search index, and now you can use it in a management platform or a Word plug-in.

And so we have business process management in industries where bidding competition is fierce, or when the diagnosis of a problem must be quick or in near real time, companies use knowledge mining to avoid costly mistakes.

So the client's drilling and completion reports go through a document processor, AI services and custom models, and a queue for human validation (intelligent automation); you send it to a back-end system or a data lake, and then you do your analytics dashboard.

Then we have customer support and feedback analysis.

So for many companies, customer support is costly and inefficient.

Knowledge mining can help customer support teams quickly find the right answers for a customer inquiry or assess customer sentiment at scale.

So you have your source data, you do your document cracking, and you use cognitive skills, so pre-trained services or custom ones.

You have enriched documents.

From here you're going to do your projections and have a knowledge store; you're going to have a search index, and then do your analytics with something like Power BI. Next, we have digital asset management.

There's a lot of these but it really helps you understand how cognitive services are going to be useful.

Given the amount of unstructured data created daily, many companies are struggling to make use of or find information within their files.

Knowledge mining through a search index makes it easy for end customers and employees to locate what they're looking for faster.

So you take, say, the art metadata and the actual images themselves; for the top layer there's a geopoint extractor and a biographical enricher, and down below we're tagging with a custom object detector and a similar-image tagger; we put it in a search index (they love those search indexes).

And now you have an art explorer.

We have contract management; this is the last one here. Many companies create products for multiple sectors.

Hence the business opportunities with different vendors and buyers increase exponentially.

Knowledge mining can help organizations scour thousands of pages of sources to create accurate bids.

So here we have RFP documents.

This will actually probably come back later in the original set, but we will do risk extraction, printed text recognition, key phrase extraction, and organization extraction against engineering standards; we'll create a search index and put it here, and this will bring back data.

Also, metadata extraction will come back here.

And then this is just like a continuous pipeline, okay.

Hey, this is Andrew Brown from ExamPro, and we are looking at the Face service.

The Azure Face service provides AI algorithms that can detect, recognize, and analyze human faces in images: a face in an image, a face with specific attributes, face landmarks, similar faces, the same face as a specific identity across a gallery of images.

So here's an example of an image that I ran that will do in the follow along.

And what it's done is it's drawn a bounding box around the face.

And there's this ID, a unique identifier string for each detected face in an image.

And these can be unique across a gallery, which is really useful as well.

Another cool thing you can do is face landmarks.

So the idea is that you have a face and it can identify very particular components of it.

Up to 27 predefined landmarks are provided with the face service.

Another interesting thing is face attributes.

So you can check whether they're wearing accessories (think earrings or lip rings), determine their age, the blurriness of the image, what kind of emotion is being experienced, the exposure of the image (you know, the contrast), facial hair, gender, glasses, their hair in general, the head pose; there's a lot of information around that. Makeup seems to be limited: when we ran it here in the lab, all we got back was eye makeup and lip makeup.

But hey, we get some information: whether they're wearing a mask; noise, so whether there are visual artifacts; occlusion, so whether an object is blocking parts of the face; and then they simply have a boolean value for whether the person is smiling or not, which I assume is a very commonly used attribute.
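As a rough sketch of how you might ask for landmarks and a few of those attributes over REST (the `/face/v1.0/detect` route and parameter names reflect the Face API, but treat the specifics as assumptions to verify against the current docs):

```python
# Sketch: building a Face API detect request that asks for landmarks and
# a few attributes. Endpoint and key are placeholders; verify the route
# and parameter names against current documentation.

def build_face_detect(endpoint: str, key: str, image_url: str):
    params = ("returnFaceId=true"
              "&returnFaceLandmarks=true"   # up to 27 predefined landmarks
              "&returnFaceAttributes=age,emotion,glasses,smile")
    url = f"{endpoint.rstrip('/')}/face/v1.0/detect?{params}"
    headers = {"Ocp-Apim-Subscription-Key": key,
               "Content-Type": "application/json"}
    body = {"url": image_url}  # the service fetches the image itself
    return url, headers, body

url, _, _ = build_face_detect("https://my-face.cognitiveservices.azure.com",
                              "<key>", "https://example.com/photo.jpg")
print(url)
```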

So that's pretty much all we really need to know about the face service.

And there you go.

Hey, this is Andrew Brown from ExamPro, and we are looking at the speech and translate services.

So Azure's Translate service is a translation service, as the name implies, and it can translate between 90 languages and dialects.

I was even surprised to find out that it can translate Klingon. It uses neural machine translation (NMT), replacing its legacy statistical machine translation (SMT).

My guess here is that statistical means it used classical machine learning back in 2010, and then they decided to switch it over to neural networks, which of course would be a lot more accurate. Azure's Translate service supports Custom Translator, so it allows you to extend the service for translation based on your business domain use cases.

So it allows you to extend the service for translation based on your business domain use cases.

So if you use a lot of technical words, or particular phrases, then you can fine-tune for that.
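Here's a sketch of what a Translator v3 request could look like; the key, region, and the Klingon language code ("tlh-Latn") are placeholders and assumptions to double-check:

```python
# Sketch: a Translator v3 request translating one string into French and
# Klingon. The key, region, and the Klingon language code ("tlh-Latn")
# are placeholders/assumptions.

def build_translate(key: str, region: str, text: str, to_langs):
    url = ("https://api.cognitive.microsofttranslator.com/translate"
           "?api-version=3.0" + "".join(f"&to={t}" for t in to_langs))
    headers = {
        "Ocp-Apim-Subscription-Key": key,
        "Ocp-Apim-Subscription-Region": region,  # needed for regional resources
        "Content-Type": "application/json",
    }
    body = [{"text": text}]  # the API accepts a list of text items
    return url, headers, body

url, _, body = build_translate("<key>", "eastus", "Hello", ["fr", "tlh-Latn"])
print(url)
```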

Then there's the other service, Azure speech service.

And this is a speech synthesis service.

It can do speech-to-text, text-to-speech, and speech translation; synthesizing means creating new voices.

Okay, so we have speech to text.

So: real-time speech-to-text, batch speech-to-text, multi-device conversation, and conversation transcription.

And you can create custom speech models. Then you have text-to-speech: this utilizes the Speech Synthesis Markup Language (SSML), which is just a way of formatting the text, and it can create custom voices.
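Here's roughly what that SSML formatting looks like; the voice name is just an illustrative example, since available voices vary by region and service version:

```python
# Sketch: a minimal SSML document for text-to-speech. The voice name is
# illustrative; available voices vary by region and service version.

def make_ssml(text: str, voice: str = "en-US-JennyNeural") -> str:
    return (
        '<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" '
        'xml:lang="en-US">'
        f'<voice name="{voice}">{text}</voice>'
        "</speak>"
    )

print(make_ssml("Welcome to the AI-900 course."))
```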

Then you have voice assistants, which integrate with the Bot Framework SDK, and speaker recognition, so speaker verification and identification.

So there you go.

Hey, this is Andrew Brown from ExamPro, and we are looking at Text Analytics: a service for NLP (natural language processing), for text mining and text analysis.

So text analytics can perform sentiment analysis, so find out what people think about your brand or topics.

The feature provides sentiment labels such as negative, neutral, and positive. Then you have opinion mining, which is aspect-based sentiment analysis.

It's for granular information about the opinions related to aspects.

Then you have key phrase extraction.

So quickly identify the main concepts in text.

You have language detection, to detect the language an input text is written in, and you have named entity recognition (NER), to identify and categorize entities in your text as people, places, objects, and quantities. A subset of NER is personally identifiable information (PII).

Let's just look at a few of these in more detail.

Some of them are very obvious, but some of these would help to have an example.

So the first we're looking at is key phrase extraction.

So quickly identify the main concepts in text.

So key phrase extraction works best when you give it bigger amounts of text to work on.

This is the opposite of sentiment analysis, which performs better on smaller amounts of text.

So document sizes can be 5000 or fewer characters per document.

And you can have up to 1000 items per collection.
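Since those limits matter in practice, here's a small helper that splits a long text into documents that respect them; the limit constants simply restate the numbers above:

```python
# Sketch: batching long text into Text Analytics-sized documents.
# Limits restated from above: 5,000 characters per document,
# 1,000 documents per collection.

MAX_CHARS = 5000
MAX_DOCS = 1000

def to_documents(text: str, language: str = "en"):
    chunks = [text[i:i + MAX_CHARS] for i in range(0, len(text), MAX_CHARS)]
    if len(chunks) > MAX_DOCS:
        raise ValueError("too much text for a single collection")
    return [{"id": str(i + 1), "language": language, "text": c}
            for i, c in enumerate(chunks)]

docs = to_documents("a long movie review " * 600)  # 12,000 characters
print(len(docs))  # 3 documents
```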

So imagine you have a movie review with a lot of text in here and you want to extract out the key phrases.

So here it is: starship Enterprise, surface travels, things like that. Then you have named entity recognition.

So this detects words and phrases mentioned in unstructured data that can be associated with one or more semantic types.

And so here's an example.

I think this is medicine-based.

And so the idea is that it's identifying these words or phrases, and then it's applying a semantic type.

So it's saying, like, this is a diagnosis, this is the medication class, and stuff like that.

Semantic types can be more broad: there's location, events (it has location twice here), person, diagnosis, age. And there is a predefined set, I believe, in Azure that you should expect; they have a generic one.

And then there's one that's for health.

Looking at sentiment analysis, this graphic makes a lot more sense when we're splitting between sentiment and opinion mining.

The idea here is that sentiment analysis will apply labels and confidence scores to text at the sentence and document level.

And so labels could include negative, positive, mixed, or neutral, and will have a confidence score ranging from zero to one.

And so over here, we have a sentiment analysis of this line here, and it's saying that this was a negative sentiment.

But look, there's something that's positive and something that's negative, so was it really negative? That's where opinion mining gets really useful, because it has more granular data, where we have a subject and we have an opinion. And so here we can see "the room was great" (positive), but "the staff was unfriendly" (negative).

So we have a bit of a split there, okay.
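As a sketch, the REST shape for requesting sentiment with opinion mining might look like this; the endpoint and key are placeholders, and the `opinionMining` flag is my understanding of how the v3.1 API exposes aspect-based sentiment, so verify it against current docs:

```python
# Sketch: requesting sentiment with opinion mining enabled. Endpoint and
# key are placeholders; the opinionMining flag is an assumption about the
# v3.1 REST API to verify against current docs.

def build_sentiment(endpoint: str, key: str, text: str):
    url = (endpoint.rstrip("/")
           + "/text/analytics/v3.1/sentiment?opinionMining=true")
    headers = {"Ocp-Apim-Subscription-Key": key,
               "Content-Type": "application/json"}
    body = {"documents": [{"id": "1", "language": "en", "text": text}]}
    return url, headers, body

url, _, _ = build_sentiment(
    "https://my-ta.cognitiveservices.azure.com", "<key>",
    "The room was great, but the staff was unfriendly.")
print(url)
```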

Hey, this is Andrew Brown from ExamPro, and we are looking at optical character recognition, also known as OCR; this is the process of extracting printed or handwritten text into a digital and editable format.

So OCR can be applied to photos of street signs, products, documents, invoices, bills, financial reports, articles and more.

And so here's an example of us extracting out nutritional data or nutritional facts off the back of a food product.

So Azure has two different kinds of APIs that can perform OCR.

They have the OCR API and the read API.

So the OCR API uses an older recognition model.

It supports only images; it executes synchronously, returning immediately when it detects text; it's suited for less text; it supports more languages; and it's easier to implement.

And on the other side, we have the read API.

So this is an updated recognition model: it supports images and PDFs, executes asynchronously, and parallelizes tasks per line for faster results; it's suited for lots of text; it supports fewer languages; and it's a bit more difficult to implement.

And when we want to use this service, we're going to be using the Computer Vision SDK, okay.
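To make the asynchronous part concrete, here's a sketch of the Read API's two-step flow: you submit the image, get back an Operation-Location to poll, and read the result once the status says it's done (no real network calls here; the URLs and API version are assumptions to verify):

```python
# Sketch of the asynchronous Read API flow (no real network calls).
# Step 1: POST the image; the service replies 202 with an
#         Operation-Location header instead of the result.
# Step 2: GET that operation URL until the status is "succeeded".

def submit_read(endpoint: str, key: str, image_url: str):
    url = endpoint.rstrip("/") + "/vision/v3.2/read/analyze"
    headers = {"Ocp-Apim-Subscription-Key": key,
               "Content-Type": "application/json"}
    return url, headers, {"url": image_url}

def is_done(poll_response: dict) -> bool:
    # A finished response also carries analyzeResult with the extracted lines.
    return poll_response.get("status") == "succeeded"

url, _, _ = submit_read("https://my-cv.cognitiveservices.azure.com", "<key>",
                        "https://example.com/nutrition-label.jpg")
print(url)
print(is_done({"status": "running"}), is_done({"status": "succeeded"}))
```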

Hey, this is Andrew Brown from ExamPro, and we're taking a look here at the Form Recognizer service.

This is a specialized OCR service that translates printed text into digital and editable content.

It preserves the structure and relationships of form-like data.

That's what makes it so special.

So form recognizer is used to automate data entry in your applications and enrich your document search capabilities.

It can identify key-value pairs, selection marks, and table structures, and it can produce output structures such as original file relationships, bounding boxes, and confidence scores. Form Recognizer is composed of custom document processing models, pre-built models (for invoices, receipts, IDs, business cards), and the layout model. Let's talk about the layout model here.

So: extract text, selection marks, and table structures, along with their bounding box coordinates, from documents. Form Recognizer can extract text, selection marks, and table structures (with the row and column numbers associated with the text) using high-definition optical character enhancement models.

That is totally useless text.


So let's touch upon custom models.

So custom models allow you to extract text, key-value pairs, selection marks, and tabular data from your forms.

These models are trained with your own data, so they're tailored to your forms; you only need five sample input forms to start. A trained document processing model can output structured data that includes the relationships found in the original form document.

After you train the model, you can test and retrain it, and eventually use it to reliably extract data from more forms according to your needs.

You have two learning options. You have unsupervised learning, to understand the layout and relationships between fields and entries in your forms.

And you have supervised learning to extract values of interest using the labeled form.

So we've covered unsupervised and supervised learning, so you're going to be very familiar with these two.

Okay.

So the Form Recognizer service has many pre-built models that are easy to get started with.

And so let's go look at them and see what kinds of fields they extract by default.

So the first is receipts.

So sales receipts from Australia, Canada, Great Britain, India, and the United States will work great here, and the fields it will extract are: receipt type, merchant name, merchant phone number, merchant address, transaction date, transaction time, total, subtotal, tax, tip, and items (name, quantity, price, total price). There's information on a receipt that you're not getting out of these fields, and that's where you make your own custom model, right?

For Business cards.

It's only available for English business cards, but we can extract out contact names (first name, last name), company names, departments, job titles, emails, websites, addresses, mobile phones, faxes, work phones, and other phone numbers.

Not sure how many people are using business cards these days, but hey, they have it as an option. For invoices: extract data from invoices in various formats and return structured data. So we have customer name, customer ID, purchase order, invoice ID, invoice date, due date, vendor name, vendor address, vendor address recipient, customer address, customer address recipient, billing address, billing address recipient, shipping address, subtotal, total tax, invoice total, amount due, service address, remittance address, service start date and end date, previous unpaid balance, and then they even have ones for line items.

So: item amount, description, quantity, unit price, product code, unit, date, tax. And then for IDs, which could be worldwide passports, US driver's licenses, things like that.

You have fields such as country/region, date of birth, date of expiration, document name, first name, last name, nationality, sex, machine-readable zone (I'm not sure what that is), document type, and address and region.

And there are some additional features with some of these models.

We didn't really cover them; it's not that important, but yeah, there we go.

Hey, this is Andrew Brown from ExamPro, and we're looking at Language Understanding, or LUIS ("Lewis" or "Louis", depending on how you like to say it).

This is a no-code ML service to build natural language into apps, bots, and IoT devices: quickly create enterprise-ready custom models that continuously improve. LUIS (I'm just going to call it Louis, because that's what I prefer) is accessed via its own isolated domain at luis.ai, and it utilizes NLP and NLU. NLU is the ability to transform a linguistic statement into a representation that enables you to understand your users naturally.

And it is intended to focus on intention and extraction: what the users want, and what the users are talking about.

So a LUIS application is composed of a schema, and the schema is auto-generated for you when you use the LUIS.ai web interface.

So you're definitely not going to be writing this by hand, but it just helps to see what's in there.

If you do have some programmatic skills, you can obviously make better use of the service than just the web interface.

But the schema defines intents: what the users are asking for. A LUIS app always contains a None intent.

We'll talk about why that is in a moment.

And entities: what parts of the utterance are used to determine the answer.

Then you also have utterances.

So: examples of user input that include intents and entities, to train the ML model to match predictions against real user input.

So an intent requires one or more example utterances for training.

And it is recommended to have 15 to 30 example utterances. To explicitly train LUIS to ignore an utterance, you use the None intent.

So: intents classify user utterances, and entities extract data from utterances.

So hopefully that makes sense; I always get this stuff mixed up, and it always takes me a bit of time to understand. There is more than just these two things, like features and other things.

But you know, for the 900, we don't need to go that deep.

Okay, just to skip to visualizing this to make a bit easier.

So imagine we have this utterance here; these would be the entities that we have, "two" and "Toronto", and this is the example utterance.

And then the idea is that you'd have the intent.

And if you look at this keyword here, it really helps that the word says "classifies", because that's what it is: it's a classification of this example utterance, and that's how the ML model is going to learn, okay.
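A stripped-down, illustrative version of that schema might look like this; the field names are simplified for the example, and the real LUIS export has more structure:

```python
# Illustrative, simplified LUIS app schema: intents, entities, and labeled
# example utterances. Field names here are simplified for the example.

luis_app = {
    "intents": ["BookFlight", "None"],   # a None intent is always present
    "entities": ["Destination", "TicketCount"],
    "utterances": [
        {
            "text": "book two tickets to Toronto",
            "intent": "BookFlight",      # the utterance is classified as this intent
            "entityLabels": [            # entities extract data from the utterance
                {"entity": "TicketCount", "value": "two"},
                {"entity": "Destination", "value": "Toronto"},
            ],
        },
        # 15-30 example utterances per intent are recommended for training.
    ],
}

print(luis_app["utterances"][0]["intent"])
```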

Hey, this is Andrew Brown from ExamPro, and we're looking at the QnA Maker service.

And this is a cloud based NLP service that allows you to create a natural conversational layer over your data.

So QnA Maker is hosted on its own isolated domain at qnamaker.ai, and it will help you find the most appropriate answer for any input from your custom knowledge base of information.

It's commonly used to build conversational clients, which include social media apps, chatbots, and speech-enabled desktop applications.

QnA Maker doesn't store customer data; all customer data is stored in the region where the customer deploys the dependent service instances. Okay, so let's look at some of the use cases for this.

So, when you have static information: you can use QnA Maker when the answers in your knowledge base are custom to your needs, built with documents such as PDFs and URLs. When you want to provide the same answer to a repeated question or command: when different users submit the same question, the same answer is returned. When you want to filter static information based on meta-information: metadata tags provide additional filtering options relevant to your client application's users and information; common metadata includes chit-chat, content type and format, content purpose, and content freshness.

And there's the use case when you want to manage a bot conversation that includes static information.

So your knowledge base takes the user's conversational text or command and answers it. If the answer is part of a predetermined conversation flow, represented in the knowledge base with multi-turn context, the bot can easily provide this flow.

So QnA Maker imports your content into a knowledge base of question-and-answer pairs.

And QnA Maker can build your knowledge base from an existing document, manual, or website: URL, DOCX, PDF.

I thought this was the coolest thing.

So you can just basically have anyone write a docx.

As long as it has a heading and a text.

It can even extract images, and it'll just turn it into the bot.

It just saves you so much time. It's crazy.

It will use ML to extract the question-and-answer pairs.

The content of the question-and-answer pairs includes all the alternate forms of the question, metadata tags used to filter choices during the search, and follow-up prompts to continue the search refinement. QnA Maker stores answer text in Markdown.

Once your knowledge base is imported, you can fine-tune the imported results by editing the question-and-answer pairs.

As seen here.

There is the chat box.

So you can converse with your bot through a chat box.

I wouldn't say it's particularly a feature of q&a maker, but I just want you to know that's how you would interact with it.

So when you're using QnAMaker.ai, the Azure Bot Service, the Bot Composer, or channels (you'll get an embeddable one), you'll see this box where you can start typing in your questions and get back the answers to test it.

Here, an example is a multi-turn conversation.

So somebody asked a question, a generic question.

And it said, hey, are you talking about AWS or Azure? Which is kind of like a follow-up prompt.

And we'll talk about multiturn here in a second, but that's something I want you to know about.

Okay.

So chit-chat is a feature in QnA Maker that allows you to easily add pre-populated sets of top chit-chat into your knowledge base.

The dataset has about 100 scenarios of chit-chat in the voices of multiple personas.

So the idea is, if someone says something random, like "how are you doing?" or "what's the weather today?", things that your bot wouldn't necessarily know, it has canned answers, and they're going to be different based on how you want the responses to be, okay.

There's a concept of layered ranking.

So the QnA Maker system uses a layered ranking approach: the data is stored in Azure Search, which also serves as the first ranking layer; the top results from Azure Search are then passed through QnA Maker's NLP ranking model to produce the final results and confidence score.

Just touching on multi-turn conversation: follow-up prompts and context are used to manage the multiple turns (known as multi-turn) for your bot, from one question to another.

When a question can't be answered in a single turn, that is when you're using multi-turn conversation.

So QnA Maker provides multi-turn prompts and active learning to help you improve your question-and-answer pairs, and it gives you the opportunity to connect question-and-answer pairs.

The connection allows the client application to provide a top answer and to provide more questions to refine the search for a final answer.

After the knowledge base receives questions from users at the published endpoint, QnA Maker applies active learning to these real-world questions to suggest changes to your knowledge base to improve the quality, alright.

Hey, this is Andrew Brown from ExamPro.

And we are looking at the Azure Bot Service.

So the Azure Bot Service is an intelligent, serverless bot service that scales on demand, used for creating, publishing, and managing bots.

So you can register and publish a variety of bots from the Azure portal.

So here there's a bunch of ones I've never heard of, probably from third-party providers partnered with Azure.

And then there are the ones that we would know, like the Azure Health Bot, the Azure Bot, or the Web App Bot, which is more of a generic one.

So the Azure Bot Service can integrate your bot with other Azure, Microsoft, or third-party services via channels, so you can have Direct Line, Alexa, Office 365, Facebook, Kik, LINE, Microsoft Teams, Skype, Twilio, and more.

Alright, and two things that are commonly associated with the Azure Bot Service are the Bot Framework and the Bot Composer.

In fact, it was really hard just to make this slide here, because they just weren't very descriptive about it.

Because I wanted to push these other two things here.

Let's talk about the Bot Framework SDK.

So the Bot Framework SDK, which is now at version four, is an open-source SDK that enables developers to model and build sophisticated conversations.

The Bot Framework along with the Azure bot service provides an end to end workflow.

So we can design, build, test, publish, connect, and evaluate our bots, okay.

With this framework, developers can create bots that use speech, understand natural language, handle questions and answers, and more.

The Bot Framework includes a modular, extensible SDK for building bots, as well as tools, templates and related AI services.

Then you have Bot Framework composer.

And this is built on top of the Bot Framework SDK.

It's an open-source IDE for developers to author, test, provision, and manage conversational experiences.

You can download it as an app on Windows, OS X, and Linux; it's probably built using web technology.

And so here is the actual app there.

And so you can see there's kind of a bit of a flow and things you can do in there.

So you can use C# or Node to build your bot, and you can deploy the bot to Azure Web Apps or Azure Functions.

You have templates to build a QnA Maker bot, an enterprise or personal assistant bot, a language bot, a calendar bot, or a people bot.

You can test and debug via the Bot Framework Emulator, and it has a built-in package manager.

There's a lot more to these things.

But again, for the AI-900, this is all we need to know.

But yeah, there you go.

Hey, this is Andrew Brown from ExamPro.

And we are looking at the Azure Machine Learning service. I want you to know there's a classic version of the service that's still accessible in the portal.

It's not on the exam, and we are going to 100% avoid it.

It has severe limitations, and we cannot transfer anything over from the classic version to the new one.

So the one we're going to focus on is the Azure Machine Learning service.

You do create studios within it, so you'll hear me say Azure Machine Learning Studio when I'm referring to the new one. It's a service that simplifies running AI/ML-related workloads, allowing you to build flexible automated ML pipelines, use Python or R, and run deep-learning workloads such as TensorFlow. We can make Jupyter notebooks in here.

So build and document your machine learning models as you build them, and share and collaborate. Then there's the Azure Machine Learning SDK for Python.

So an SDK designed specifically to interact with the Azure Machine Learning Services.

It does MLOps (machine learning operations), so end-to-end automation of ML model pipelines: CI/CD, training, inference. There's the Azure Machine Learning designer:

So this is a drag-and-drop interface to visually build, test, and deploy machine learning models (technically pipelines, I guess). There's a data labeling service: assemble a team of humans to label your training data. And there's responsible machine learning:

So model fairness through disparity metrics, and mitigating model unfairness. At this time the service is not very good, but it's supposed to tie in with the responsible AI that Microsoft is always promoting.

Okay.

So once we launch our own studio within the Azure Machine Learning service, you're going to get this big navigation bar on the left-hand side, which shows you there's a lot of stuff in here.

So let's just break down what all these things are.

For authoring, we've got notebooks: these are Jupyter notebooks, an IDE to write Python code to build ML models.

They kind of have their own preview, which I don't really like.

But there's a way to bridge it over to Jupyter Notebooks or into Visual Studio code.

We have AutoML: a completely automated process to build and train ML models.

You're limited to only three types of models, but still, that's great.

We have the designer: a visual drag-and-drop designer to construct end-to-end ML pipelines.

For assets we have datasets: data that you can upload, which will be used for training. Experiments: when you run a training job, it is detailed here. Pipelines: ML workflows you have built or have used in the designer. Models:

a model registry containing trained models that can be deployed. Endpoints:

when you deploy a model, it's hosted on an accessible endpoint,

so you'll be able to access it via a REST API, or maybe the SDK. For Manage we've got compute: the underlying computing instances used for notebooks, training, and inference. Environments: reproducible Python environments for machine learning experiments. Datastores: a data repository where your data resides. Data labeling: have humans, with ML-assisted labeling, label your data for supervised learning. Linked services: external services you can connect to the workspace, such as Azure Synapse Analytics.

Let's take a look at the types of compute that are available in Azure Machine Learning Studio. We've got four categories: compute instances, development workstations that data scientists can use to work with data and models; compute clusters, scalable clusters of VMs for on-demand processing of experimentation code; inference clusters, deployment targets for predictive services that use your trained models; and attached compute, links to existing Azure compute resources such as Azure VMs

and Azure Databricks clusters.

Now, what's interesting here is that with this compute, you can see that you can open it in JupyterLab, Jupyter, VS Code, RStudio, and the terminal.

But you can work with your compute as your development workstation directly in the studio, which is the way I do it.

What's interesting is that for inference (that's when you want to make a prediction), you use Azure Kubernetes Service or Azure Container Instances. I didn't see them show up under here,

So I'm kind of confused whether that's where it appears.

Maybe we'll discover, as we do the follow-alongs, that they do appear here, but I'm not sure about that one.

But yeah, those are the four there, okay.

So within Azure Machine Learning Studio, we can do some data labeling: we create data labeling jobs to prepare your ground truth

for supervised learning. You have two options. Human-in-the-loop labeling: you have a team of humans that will apply labels; these are humans you grant access to labeling. Machine-learning-assisted labeling: you use ML to perform the labeling.

So you can export the labeled data for machine learning experimentation at any time. Users often export multiple times and train different models,

rather than wait for all the images to be labeled.

Image labels can be exported in COCO format.

That's why we talked about COCO a lot earlier in our datasets section. Or they can be exported as an Azure Machine Learning dataset,

and this is the dataset format that makes it easy to use for training in Azure Machine Learning.

So generally, you want to use that format.
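To make that concrete, here's a minimal sketch of what a COCO-format label file looks like; the field names follow the COCO spec, but the file name, tag, and box coordinates below are made-up examples, not output from the service.

```python
import json

# Illustrative sketch of the COCO label format: three top-level arrays that
# cross-reference each other by id. The values here are invented examples.
coco_labels = {
    "images": [
        {"id": 1, "file_name": "dog_001.jpg", "width": 640, "height": 480}
    ],
    "annotations": [
        {
            "id": 1,
            "image_id": 1,                  # which image this label belongs to
            "category_id": 1,               # which tag was applied
            "bbox": [100, 120, 200, 150],   # [x, y, width, height] box
        }
    ],
    "categories": [
        {"id": 1, "name": "dog"}
    ],
}

print(json.dumps(coco_labels, indent=2))
```

Because it's plain JSON, the same export can be fed to many training tools, which is why the format is so widely used.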

The idea is you would choose a labeling Task Type.

And that way you would have this UI and then people go in and just click buttons and do the labeling.

Okay.

An Azure ML datastore securely connects you to storage services on Azure, without putting your authentication credentials and the integrity of your original data source at risk.

So here is the example of data sources that are available to us in the studio.

And let's just go quickly through them.

So we have Azure Blob Storage:

data that is stored as objects distributed across many machines. Azure File Share: a mountable file share via SMB and NFS protocols. Azure Data Lake Storage Gen2: blob storage for vast amounts of big data analytics. Azure SQL: a fully managed MS SQL relational database. Azure Database for PostgreSQL: an open-source relational database, often considered an object-relational database, preferred by developers. Azure Database for MySQL: another open-source relational database, the most popular one, and considered a pure relational database, okay.

So Azure ML datasets make it easy to register your datasets for use with your ML workloads.

So what you'd do is add a dataset, and you get a bunch of metadata associated with it.

And you can also upload the dataset again to have multiple versions.

So you'll have a current version and a latest version. It's very easy to get started working with them, because you'll have some sample code for the Azure ML SDK to import them into your Jupyter notebooks.

For datasets, you can generate profiles that will give you summary statistics, the distribution of data, and more. You will have to use a compute instance to generate that data.

So you'd press Generate profile, and you'd have that stored; I think it's in blob storage.

Then there are open datasets: publicly hosted datasets that are commonly used for learning how to build ML models.

So if you go to open data sets, you just choose one.

And so this is a curated list of open data sets that you can quickly add to your data store.

Great for learning how to use AutoML or the Azure Machine Learning designer, or any kind of ML workload if you're new to it.

That's why we covered MNIST and COCO earlier, just because those are some common datasets there.

But there you go.

Let's take a look here at Azure ML experiments.

This is a logical grouping of Azure runs, and a run is the act of running an ML task on a virtual machine or container.

So here's a list of them.

And it can run various types of ml tasks.

So that could be scripts for preprocessing, AutoML, or a training pipeline, but what it's not going to include is inference.

And what I mean is once you've deployed your model or pipeline, and you make predictions via request, it's just not going to show up under here.

Okay? Okay, so we have Azure ML pipelines, which is an executable workflow of a complete machine learning task. Not to be confused with Azure Pipelines, which is part of Azure DevOps, or Data Factory, which has its own pipelines; it's a totally separate thing here.

So subtasks are encapsulated as a series of steps within the pipeline.

Independent steps allow multiple data scientists to work on the same pipeline at the same time without overtaxing compute resources.

Separate steps also make it easy to use different compute type sizes for each step.

When you rerun a pipeline, the run jumps to the steps that need to be rerun, such as an updated training script; steps that do not need to be rerun are skipped.

After a pipeline has been published, you can configure a REST endpoint, which allows you to rerun the pipeline from any platform or stack.

There are two ways to build pipelines: you can use the Azure ML designer, or build them programmatically using the Azure Machine Learning Python SDK.

So here's an example of some code.

Just make a note here, I mean, it's not that important.

But notice how you create steps, and then you assemble all the steps into a pipeline here.
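The on-screen SDK code isn't reproduced here, but the shape of the idea (named steps assembled in order, with unchanged steps skipped on rerun) can be sketched in plain Python. This is a conceptual toy, not the real Azure ML SDK, which uses its own step and pipeline classes and requires a workspace:

```python
# Conceptual sketch only: plain Python mimicking the shape of an ML pipeline,
# where each named step runs once and completed steps are skipped on rerun.
class Step:
    def __init__(self, name, fn):
        self.name = name
        self.fn = fn

class Pipeline:
    def __init__(self, steps):
        self.steps = steps
        self.completed = set()   # names of steps that already ran successfully

    def run(self):
        ran = []
        for step in self.steps:
            if step.name in self.completed:
                continue         # unchanged step: skip it on rerun
            step.fn()
            self.completed.add(step.name)
            ran.append(step.name)
        return ran

pipeline = Pipeline([
    Step("prep", lambda: None),
    Step("train", lambda: None),
])
print(pipeline.run())                  # first run executes every step
pipeline.completed.discard("train")    # pretend the training script changed
print(pipeline.run())                  # rerun executes only the changed step
```

The first run prints both step names; after invalidating "train", the rerun executes only that step, which mirrors the step-skipping behavior described above.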

Alright.

So the Azure Machine Learning designer lets you quickly build Azure ML pipelines without having to write any code.

So here is what it looks like.

And over there, you can see our pipeline is quite visual.

And on the left hand side, you have a bunch of assets you can drag out that are pre built there.

So it's a really fast way for building a pipeline.

But you do have to have a good understanding of ML pipelines end to end to make good use of it.

Once you've trained your pipeline, you can create an inference pipeline; you'd use the drop-down and say whether you want it to be real-time or batch, or you can toggle between them later.

So I mean, there's a lot to this service.

But for the AI-900, we don't have to go diving too deep, okay.

So Azure ML models: the model registry allows you to create, manage, and track your registered models as incremental versions under the same name.

So each time you register a model with the same name as an existing one, the registry assumes that it's a new version.

Additionally, you can provide metadata tags and use those tags when you search for models.

So yeah, it's just a really easy way to share, deploy, or download your models, okay? Azure ML endpoints allow you to deploy machine learning models as a web service.

So the workflow for deploying a model: register the model, prepare an entry script, prepare an inference configuration, deploy the model locally to ensure everything works, choose a compute target, redeploy the model to the cloud, and test the resulting web service.
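As a rough sketch of the entry-script step, Azure ML scoring scripts follow an init()/run() shape: init() loads the model once at startup, and run() handles each scoring request. The "model" below is a stand-in lambda, not a real trained model, and the doubling logic is invented for illustration:

```python
import json

# Sketch of the entry (scoring) script shape Azure ML expects.
model = None

def init():
    global model
    # In a real entry script you would deserialize your registered model here
    # (e.g. with joblib); we fake a "model" that doubles its inputs.
    model = lambda xs: [2 * x for x in xs]

def run(raw_data):
    # The request body arrives as a JSON string; return JSON back.
    data = json.loads(raw_data)["data"]
    return json.dumps({"result": model(data)})

init()
print(run('{"data": [1, 2, 3]}'))   # {"result": [2, 4, 6]}
```

Deploying locally first, as the workflow suggests, lets you exercise exactly this init()/run() pair before paying for cloud compute.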

So we have two options here. A real-time endpoint: an endpoint that provides remote access to invoke the ML model service, running on either Azure Kubernetes Service (AKS) or Azure Container Instances (ACI). Then we have a pipeline endpoint:

an endpoint that provides remote access to invoke an ML pipeline. You can parameterize the pipeline endpoint for managed repeatability in batch scoring and retraining scenarios.

And so you can deploy a model to an endpoint, and it will be deployed to either AKS or ACI, as we said earlier. And the thing is, when you do do that, just understand that it's going to be shown under AKS or ACI within the Azure portal.

It's not consolidated under the Azure Machine Learning Studio.

When you've deployed a real time endpoint, you can test the endpoint by sending either a single request or batch request.

So they have a nice form here for a single request, or here it's a CSV that you can send.
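Sending a single request yourself would look roughly like this; the scoring URI, key, and feature values are made-up placeholders, and the request is only constructed here, not actually sent:

```python
import json
import urllib.request

# Sketch of invoking a deployed real-time endpoint over REST.
# The URL and key below are hypothetical placeholders.
scoring_uri = "https://example-region.azurecontainer.io/score"
api_key = "YOUR-ENDPOINT-KEY"

body = json.dumps({"data": [[0.5, 1.2, 3.4]]}).encode("utf-8")
req = urllib.request.Request(
    scoring_uri,
    data=body,
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",   # key-based auth header
    },
)
# urllib.request.urlopen(req) would send it and return the JSON prediction;
# we stop short of that since the endpoint is fictional.
print(req.get_full_url())
```

The same JSON body shape is what the studio's test form builds for you behind the scenes.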

So there you go.

So Azure has a built-in Jupyter-like notebook editor, so you can build and train your ML models.

And so here is an example of it.

I personally don't like it too much.

But that's okay, because we have some other options.

To make it easier.

All you do is choose your compute instance to run the notebook, then you'll choose your kernel, which is a preloaded programming language and set of programming libraries for different use cases.

That's a Jupyter kernel concept there.

So you can open the notebook in a more familiar IDE such as VS Code, Jupyter Notebook classic, or JupyterLab.

So you go there, drop it down, choose it and open it up.

And now you're in a more familiar territory.

The VS Code one is exactly the same experience as the one in Azure ML Studio.

I personally don't like it.

I think most people are going to be using the notebooks but it's great that they have all those options.

So Azure automated machine learning, also known as AutoML, automates the process of creating an ML model.

So with Azure AutoML, you supply a dataset, choose a task type, and then AutoML will train and tune your model.

So here are the task types; let's quickly go through them.

We have classification, when you need to make a prediction based on several classes (binary classification, multiclass classification); regression, when you need to predict a continuous number value; and time-series forecasting, when you need to predict a value based on time.

So just look at them a little bit more in detail.

So classification is a type of supervised learning in which the model learns using training data and applies those learnings to new data.

So here is an example.

Or this is just the option here.

And so the goal of classification is to predict which categories new data will fall into based on learning from its training data.

So binary classification is when a record is labeled with one of two possible labels.

So maybe it's true or false, zero or one; just two values.

Multiclass classification is when a record is labeled from a range of labels.

And so that can be something like happy, sad, mad, or rad.

And just, you know, I can see there's a spelling mistake there.

But yeah, there should be an F.

So let's just correct that.

There we go.

You can also apply deep learning, and if you turn deep learning on, you probably want to use a GPU compute instance or compute cluster, just because deep learning really prefers GPUs.

Okay.

Looking at regression: it's also a type of supervised learning where the model learns using training data and applies those learnings to new data, but it's a bit different, in that the goal of regression is to predict a variable in the future. Then you have time-series forecasting, and this sounds a lot like regression because it is: forecast revenue, inventory, sales, or customer demand. An automated time-series experiment is treated as a multivariate regression problem. Past time-series values are pivoted to become additional dimensions for the regressor, together with other predictors. Unlike classical time-series methods, this has the advantage of naturally incorporating multiple contextual variables and their relationships to one another during training.

So use cases here, or advanced configurations, I should say: holiday detection and featurization, time-series and deep learning neural networks,

so you've got Auto-ARIMA, Prophet, and ForecastTCN.

There's many-models support through grouping, rolling-origin cross-validation, configurable lags, and rolling window aggregate features, so there you go.

So within AutoML we have data guardrails, and these are run by AutoML when automatic featurization is enabled. It's a sequence of checks to ensure high-quality input data is being used to train the model.

So just to show you some information here.

So the idea is that it could apply validation split handling (the input data has been split for validation to improve performance); then you have missing feature value imputation (no features with missing values were detected in the training data); and high-cardinality feature detection (your inputs were analyzed, and no high-cardinality features were detected).

High cardinality means, like, if you have too many dimensions, it becomes very dense or hard to process the data.

So that's something good to check against.

Let's talk about AutoML's automatic featurization.

So during model training with AutoML, one of the following scaling or normalization techniques will be applied to each model.

The first is StandardScaleWrapper: standardize features by removing the mean and scaling to unit variance.

MinMaxScalar: transform features by scaling each feature by that column's minimum and maximum. MaxAbsScaler: scale each feature by its maximum absolute value. RobustScalar: scale features by their quantile range. PCA: linear dimensionality reduction using singular value decomposition of the data to project it to a lower-dimensional space.

Dimensionality reduction is very useful if your data is too complex.

Let's say you have too many labels, like 20, 30, 40 labels, for like four categories to pick out of; you want to reduce the dimensions so that your machine learning model is not overwhelmed.

Then you have TruncatedSVDWrapper.

So this transformer performs linear dimensionality reduction by means of truncated singular value decomposition. Contrary to PCA, this estimator does not center the data before computing the singular value decomposition, which means it can work with scipy.sparse matrices efficiently. SparseNormalizer: each sample (that is, each row of the data matrix) with at least one non-zero component is rescaled independently of other samples, so that its norm (l1 or l2) equals one.
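To make a couple of these concrete, here's a hand-rolled sketch of min-max and standard scaling applied to one feature column; AutoML itself uses the scikit-learn implementations, and the numbers below are made-up examples:

```python
import math

# Min-max scaling: rescale the column into the range [0, 1].
def min_max_scale(xs):
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

# Standard scaling: remove the mean and scale to unit variance.
def standard_scale(xs):
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    std = math.sqrt(var)
    return [(x - mean) / std for x in xs]

column = [10.0, 20.0, 30.0, 40.0]
print(min_max_scale(column))    # [0.0, 0.333..., 0.666..., 1.0]
print(standard_scale(column))   # zero mean, unit variance
```

Either way, the point is the same: features end up on comparable scales so no single column dominates training.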

Okay? So the thing is, on the exam they're probably not going to be asking these questions, but I just like to give you exposure, and I just want to show you that AutoML is doing all this.

This is like preprocessing stuff, you know; stuff that you'd normally have to do yourself, and so it's just taking care of it for you, okay? So within Azure AutoML, they have a feature called model selection.

And this is the task of selecting a statistical model from a set of candidate models.

And Azure AutoML will use many different ML algorithms and will recommend the best performing candidates.

So here's a list.

And I want to just point out, down below, there are three pages; there are 53 models. That's a lot of models.

And so you can see that for the one I chose, the top candidate was called Voting Ensemble. That's an ensemble algorithm: that's where you take two or more weak ML models and combine them to make a stronger one.
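A toy illustration of the voting idea, with three stand-in "models" (simple made-up threshold rules, not trained classifiers) that each vote on a label, and the majority label winning:

```python
from collections import Counter

# Hard voting: each model votes for a label; the majority label wins.
def hard_vote(models, x):
    votes = [m(x) for m in models]
    return Counter(votes).most_common(1)[0][0]

# Three weak stand-in classifiers with slightly different thresholds.
models = [
    lambda x: "spam" if x > 3 else "ham",
    lambda x: "spam" if x > 5 else "ham",
    lambda x: "spam" if x > 4 else "ham",
]
print(hard_vote(models, 6))   # all three vote "spam"
print(hard_vote(models, 4))   # 1 vote spam, 2 votes ham -> "ham"
```

The combined vote smooths over the individual models' mistakes, which is why ensembles so often top AutoML leaderboards.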

And notice here, it will show us the results.

And this is what we're looking for, which is the primary metric; the highest value should indicate the model we want to use. You can get an explanation of the model, which is known as explainability.

And now if you're a data scientist, you might be a bit smarter and say, Well, I know this one should be better.

So I'll use this and tweak it.

But you know, if you don't know what you're doing, just go with the top one, okay.

So we just saw that we had a top candidate model, and there could be an explanation to understand its effectiveness. This is called MLX,

machine learning explainability. This is the process of explaining and interpreting ML or deep learning models. MLX can help machine learning developers better understand and interpret a model's behavior.

So after your top candidate model is selected by Azure AutoML, you can get an explanation of the internals of various factors:

model performance, dataset explorer, aggregate feature importance, and individual feature importance.

So I mean, yeah, this is aggregate.

So what it's looking at, and it's actually cut off here, but it's saying that these are the most important features affecting the model's outcome.

So I think this is the diabetes dataset.

So BMI would be one; that would be a huge influence there, okay.

So the primary metric is a parameter that determines the metric to be used during model training for optimization.

So for classification we have a few, and for regression and time series we have a few.

But you'll have these task types.

And underneath, you'll choose the additional configuration.

And that's where you can override the primary metric; it might just auto-detect it for you.

So you don't have to because it might sample some of your data set to just kind of guess.

But you might have to override it yourself.

Just going through some scenarios.

And we'll break it down into two categories.

So here we have metrics suited for larger datasets that are well balanced.

Well balanced means that your dataset is evenly distributed.

So if you have classifications for A and B, let's say you have 100 and 100, they're well balanced, right; you don't have one labeled subset of your dataset much larger than the other.

So accuracy is great for image classification, sentiment analysis, and churn prediction. Average precision score weighted is for sentiment analysis; norm macro recall is for churn prediction. For precision score weighted, I'm uncertain what that would be good for, maybe sentiment analysis. Then there are metrics suited for smaller datasets that are imbalanced.

So that's where your dataset might have, like, 10 records for one label and 500 for the other.

So you have AUC weighted: fraud detection, image classification, anomaly detection, spam detection. On to regression scenarios; we'll break it down into ranges.

So when you have a very wide range, Spearman correlation works really well, or the R2 score.

These are great for airline delay, salary estimation, and bug resolution time. When you're looking at smaller ranges, we're talking about normalized root mean squared error,

so price predictions, review tip score predictions. Normalized mean absolute error is just another one here; they don't give a description. For time series, it's the same set of metrics;

it's just in the context of time-series forecasting.

Alright.
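As a quick sketch of the normalized root mean squared error metric mentioned above: it's the RMSE divided by the range of the true values, which makes models trained on differently scaled targets comparable. The numbers below are made-up examples:

```python
import math

# Normalized RMSE: root mean squared error divided by the range of y_true.
def normalized_rmse(y_true, y_pred):
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)
    return math.sqrt(mse) / (max(y_true) - min(y_true))

y_true = [100.0, 200.0, 300.0, 400.0]
y_pred = [110.0, 190.0, 310.0, 390.0]
print(normalized_rmse(y_true, y_pred))   # RMSE 10 over range 300 = 0.0333...
```

Because the error is expressed as a fraction of the target's range, 0.03 means "off by about 3% of the spread", whether the target is dollars or days.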

Another option we can change is the validation type when we're setting up our ML model.

So model validation is when we compare the results of our training dataset to our test dataset; model validation occurs after we train the model.

And so you can just drop it down there, and we have some options:

auto, k-fold cross-validation, Monte Carlo cross-validation, and train-validation split. I'm not going to really get into the details of those.

I don't think they'll show up on the AI-900 exam.

But I just want you to be aware that you do have those options, okay.
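Just for intuition, k-fold cross-validation splits the data into k folds so that every record is held out exactly once; here's a minimal sketch of the index bookkeeping (a simplified striped split, not the library implementation):

```python
# k-fold cross-validation splitting: return k (train, test) index pairs
# such that each record appears in exactly one test fold.
def k_fold_indices(n_samples, k):
    folds = []
    for i in range(k):
        test_idx = list(range(i, n_samples, k))   # every k-th record held out
        train_idx = [j for j in range(n_samples) if j not in test_idx]
        folds.append((train_idx, test_idx))
    return folds

for train_idx, test_idx in k_fold_indices(6, 3):
    print(train_idx, test_idx)
```

You'd train k models, each on its train indices, and average their validation scores; that average is less noisy than a single train-validation split.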

Hey, this is Andrew Brown from ExamPro.

And we're taking a look here at Custom Vision.

And this is a fully managed no-code service to quickly build your own classification and object detection ML models.

The service is hosted on its own isolated domain at www.customvision.ai.

So the first idea is you upload your images, bringing your own labeled images, or use Custom Vision to quickly add tags to any unlabeled images.

You use the labeled images to teach Custom Vision the concepts you care about, which is training, and you use simple REST API calls to quickly tag images

with your new custom computer vision model, so you can evaluate it, okay.

So when we launch custom vision, we have to create a project.

And with that, we need to choose a project type.

And we have classification and object detection.

Reviewing classification here, you have the option between multi-label,

for when you want to apply many tags to an image (think of an image that contains both a cat and a dog), and multiclass, for when you only have one possible tag to apply to an image; it's either an apple, a banana, or an orange, not multiples of these things.

You have object detection; this is when we want to detect various objects in an image.

And you also need to choose a domain. A domain is a Microsoft-managed dataset that is used for training the ML model.

There are different domains that are suited for different use cases.

So let's go take a look first at image classification domains.

So here is the big list of domains over here.

Okay, and we'll go through these here.

So general is optimized for a broad range of image classification tasks.

If none of the none of the other specified domains are appropriate, or you're unsure of which domain to choose Select one of the general domains so G, or a one is optimized for better accuracy with comparable inference time as general domain recommended for larger datasets or more difficult user scenarios.

This domain requires more training time, then you have a to optimize for better accuracy with faster adverts times than a one and general domains recommended for more most datasets this domain requires less training time, then general and a one, you have food optimized for photographs or dishes as you would see them on a restaurant menu.

If you want to classify photographs of individual fruits or vegetables use food domains.

So that we have optimized for recognizable landmarks both natural and artificial.

This domain works best when landmark is clearly visible in the photograph, this domain works even if the lend mark is slightly obstructed by people in front of it.

Then you have retail so optimized for images that are found in a shopping cart or shopping website.

If you want a high precision classifying classified in between dresses, pants shirts uses domain contact domains optimized for the constraints of real time classification on the edge.

Okay, then we have the object detection domains. This list is a lot shorter, so I'll get through it a lot quicker.

So General is optimized for a broad range of object detection tasks. If none of the other domains are appropriate, or you're unsure of which domain to choose, use the General one. A1 is optimized for better accuracy, with comparable inference time to the General domain; it's recommended for more accurate region

locations, larger datasets, or more difficult use-case scenarios. This domain requires more training time, and results are not deterministic: expect a plus or minus 1% mean average precision difference with the same training data provided. You have Logo, optimized for finding brand logos in images, and Products on Shelf, optimized for detecting and classifying products on a shelf.

So there you go.

Okay, so let's get some more practical knowledge of the service.

So for image classification, you're gonna upload multiple images and apply single or multiple labels to the entire image.

So here I have a bunch of images uploaded.

And then I have my tags over here.

And they could either be multi or singular.

For object detection, you apply tags to objects in an image for data labeling. As you hover your cursor over the image, Custom Vision uses ML to show bounding boxes of possible objects that have not yet been labeled.

If it does not detect something, you can also just click and drag to draw out whatever box you want.

So here's one where I tagged it up quite a bit. You have to have at least 50 images per tag to train.

So just be aware of that when you are tagging your images.

When you're training your model, you have two options.

So you have quick training that's trained quickly, but it will be less accurate, you have advanced training, this increases compute time to improve your results.

So for advanced training, basically, you just have this thing that you move to the right.

With each iteration of training, our ML model will improve the evaluation metrics.

So precision recall, it's going to vary.

We're going to talk about the metrics here in a moment, but the probability threshold value determines when to stop training: when our evaluation metric meets our desired threshold.

So these are just additional options where when you're training, you can move this left to right, and these left to right, okay.

And then when we get results back, we're going to get some metrics here.

So we have evaluation metrics.

We have precision, being how exact and accurate it is (of the selected items, how many are relevant); recall, also known as sensitivity or the true positive rate (how many relevant items are returned); and average precision. It's important that you remember these, because they might ask you about them on the exam.

So when we're looking at object detection, and we're looking at the evaluation metric outcomes for it, we have precision, recall, and mean average precision.
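Since these definitions do come up on the exam, here's a tiny sketch of precision and recall computed from true positive, false positive, and false negative counts; the counts are made-up examples:

```python
# Precision: of the items the model selected, how many were actually relevant.
def precision(tp, fp):
    return tp / (tp + fp)

# Recall: of the relevant items that exist, how many the model returned.
def recall(tp, fn):
    return tp / (tp + fn)

# e.g. the model tagged 10 images as "dog": 8 really were dogs (tp),
# 2 were not (fp), and it missed 4 actual dogs (fn).
print(precision(8, 2))   # 0.8
print(recall(8, 4))      # 0.666...
```

Raising the probability threshold typically trades recall for precision, which is exactly the slider behavior described above.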

Once we have deployed our pipeline, it makes sense that we go ahead and give it a quick test to make sure it's working correctly. You press the Quick Test button and you can upload your image, and it will tell you; so this one says it's Worf. When you're ready to publish, you just hit the Publish button.

And then you'll get some prediction URL and information so you can invoke it.

One other feature that's kind of useful is the smart labeler.

So once you've loaded some training data, it can make suggestions, right?

So you can't do this right away.

But once it has some data, it's like a prediction that is not 100% guaranteed, right, and it just helps you build up your training dataset a lot faster.

Very useful if you have a very large dataset.

This is known as ML-assisted labeling, okay.

Hey, this is Andrew Brown from ExamPro.

And in this follow-along, we're going to set up a studio with the Azure Machine Learning service, and it will be the basis for all the follow-alongs here.

So what I want you to do is go all the way the top here, and type in Azure machine learning.

And you're looking for this one that looks like a science bottle here.

And we'll go ahead and create ourselves our Machine Learning Studio.

And so I'll create a new one here, and I'll just say, my studio.

I will hit OK.

And we'll name the workspace.

So I will say 'my workspace', or maybe 'ml workspace' here.

For the containers and so on, it will create all that stuff for us.

I'll hit create and create.

And so what we're going to do here is just wait for that creation, okay? Alright, so after a short little wait there, it looks like our studio is set up.

So we'll go to that resource, launch the studio, and we're now in.

So there's a lot of stuff in here.

But generally, the first thing you'll ever want to do is get yourself a notebook going.

So in the top left corner, I'm going to go to notebooks.

And what we'll need to do is load some files in here.

Now they do have some Sample Files, like how to use Azure ML.

So if we just quickly go through here, you know, maybe we'll want to look at something like MNIST here.

And we'll go ahead and open this one.

And we'll just go ahead and clone this.

And we'll just clone it over here.

Okay, and the idea is that we want to get this notebook running.

And so notebooks have to be backed by some kind of compute.

So up here, it says, No compute found, and etc.

So what we can do here, I'm just gonna go back to my files, oh, it went back there for me.

But what I'm going to do is go all the way down.

Actually, I'll just expand this up here makes it a bit easier, close this tab out.

But what we'll do is go down to compute.

And here we have our four types of compute. Compute instances is what we use when we're running notebooks; compute clusters is when we're doing training; inference clusters is when we have an inference pipeline.

And then attached compute is bringing things like HDInsight or Databricks in here. But compute instances is what we need, so we'll go ahead and hit New. You'll notice they have the option between CPU and GPU.

GPU is much more expensive.

So it's like 90 cents per hour.

For a notebook, we do not need anything super powerful.

Notice it'll say here: development on notebooks, IDEs, lightweight testing. Over here, it says: classical ML model training, AutoML, pipelines, etc.

So I want to make this a bit cheaper for us here.

Because we're going to be using the notebook to run Cognitive Services, and those cost next to nothing; like, they don't take much compute power.

And for some other ones, we might do something a bit larger.

For this, this is good enough.

So I'll go ahead and hit next.

I'm just gonna say my notebook instance here.

We'll go ahead and hit Create.

And so we're just gonna have to wait for that to finish creating and running and when it is, I'll see you back here in a moment.

Alright, so after a short little wait there, it looks like our server is running.

And you can even see here it shows you that you can launch in JupyterLab, Jupyter, VS Code, RStudio, or the terminal.

But what I'm going to do is go back all the way to our notebooks, just so we have some consistency here. I want you to notice that it's now running on this compute.

If it's not, you can go ahead and select it.

It also loaded in Python 3.6; there is 3.8.

Right now, it's not a big deal which one you use.

But that is the kernel, like how it will run this stuff.

Now, this is all interesting.

But I don't want to run this right now what I want to do is get those cognitive services into here.

So what we can do is just go up here and we'll choose editors and edit in Jupiter lab.

What that should do is open up a new tab here is it opening.

If it's not opening, what we can do is go to compute.

Sometimes it's a bit more responsive.

If we just click there, it's the same way of getting to it.

I don't know why, but just sometimes that link doesn't work when you're in the notebook.

And while we're in here now, we can see that there's an example project, which is fine.

But what we want to do is get those cognitive services in here.

So I don't know if I showed it to you yet, but I have a repository, I just gotta go find it.

It's somewhere on my screen.

Here it is.

Okay, so I have a repo called the free AZ; it should really be ai-900, I think, so I'll go ahead and change that, or it's going to get confusing.

Okay, so what I want you to do here is, we'll get this loaded in.

So this is a public directory, I'm just thinking, there's a couple ways we can do it, we can go and I use the terminal to grab it, what I'm going to do is I'm just going to go download the zip.

And this is just one of the easiest ways to install it, and we need to place it somewhere.

So here are my downloads.

And I'm just going to drag it out here.

Okay.

And what we'll do is upload that there.

So I can't remember if it lets you upload entire folders; we'll give it a go and see if it lets us. Maybe I'll rename this to free-az or ai-900 there, and we'll say Open.

Yeah, so it's individual files only.

So it's not that big of a deal, but we can go ahead and select it like that.

And maybe we'll just move them into a folder, and here we'll call this cognitive-services.

Okay.

And what we'll do here is keep on uploading some stuff.

So we have assets.

So I have a couple loose files there.

And I know we have a crew group, so we'll have a crew folder.

Oops.

Sometimes it's not as responsive.

We want OCR, and I believe we have one called movie-reviews.

So we'll go into OCR here and upload the files that we have.

So we have a few files there.

And we'll go back a directory here.

And I know movie reviews are just static files.

And we have an objects folder.

So we will go back here to objects.

And then we'll go back to crew, and we need a folder called Worf, a folder called Crusher, and a folder called Data.

And so for each of these, we have some images.

This one's Worf, right? Yeah, we're okay, great.

So we will quickly upload all these. Technically we don't really need to upload these images; I just remembered that we upload them directly to the service.

But because I'm already doing it, I'm just gonna put them here, even though we're not going to do anything with them.

All right.

And so now we are all set up to do some cognitive services.

So I'll see you in the next video.

Alright, so now that we have our work environment set up, what we can do is go ahead and get Cognitive Services hooked up, because we need that service in order to interact with it.

Because if we open up any of these, you're gonna notice we have a cognitive key endpoint that we're going to need.

So what I want you to do is go back to your Azure Portal.

And at the top here, we'll type in cognitive services.

Now the thing is, is that all these services are individualized, but at some point, they did group them together, and you're able to use them through a unified key and API endpoint.

That's what this is.

And that's the way we're going to do it.

So let's say add, and it brought us to the marketplace.

So I'm just going to type in cognitive services.

And then just click this one here.

And we'll hit Create.

And we'll make a new one here.

I'm gonna call mine my-cog-services. Okay, I prefer to be in US East; I believe it's in US West, but it's fine.

And so in here, we'll just say my cog services.

And if it doesn't like that, I'll just put some numbers in.

There we go.

We'll do standard so we will be charged something for that.

Let's go take a look at the pricing.

So you can see that the pricing is quite variable here, but you'd have to do around 1,000 transactions before you are billed, so I think we're going to be okay for billing.

We'll check boxes here, we'll go down below, it's telling us about responsible AI.

Notice, sometimes services will actually have you checkbox it.

But in this case, it just tells us there.

And we'll go ahead and hit Create.

And I don't believe this took very long, so we'll give it a second here.

Yep, it's all deployed.

So we'll go to this resource here.

And what we're looking for are keys and endpoints.

And so we have two keys and two endpoints, we only need a single key.

So I'm going to copy this endpoint over, we're gonna go over to Jupyter Lab, and I'm just going to paste this in here.

I'm just gonna put it in all the ones that need it.

So this one needs one.

This one needs one.

This one needs one.

And this one needs one.

And we will show the key here; I guess it doesn't show, but it copies.

Of course, I will end up deleting my key before you ever see it.

But this is something you don't want to share publicly.

And usually, you don't want to embed keys directly into a notebook.

But it's the simplest way to do it for this walkthrough.

So this is how it is with Azure.

So yeah, all our keys are installed.
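Since embedding keys directly in a notebook is risky, one safer pattern is to read them from environment variables. This is just a sketch; the variable names COG_KEY and COG_ENDPOINT and the endpoint value are made up for illustration:

```python
import os

# Prefer environment variables so the key never lands in the notebook itself;
# fall back to a placeholder so the cell still runs for a quick demo.
# COG_KEY / COG_ENDPOINT are hypothetical names, not anything Azure defines.
cog_key = os.environ.get("COG_KEY", "<paste-key-here>")
cog_endpoint = os.environ.get(
    "COG_ENDPOINT", "https://my-cog-services.cognitiveservices.azure.com/"
)
```

Every notebook in the course that needs the key and endpoint can then reuse these two variables.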

Going back to the cognitive services, nothing super exciting here.

But it does tell us what services work with it.

You'll see there's an asterisk beside custom vision, because we're gonna access that through another app.

But yeah, cognitive services all set up.

And so that means we are ready to start doing some of these labs.

Okay.

All right.

So let's take a look here at computer vision first.

And computer vision is actually used for a variety of different services.

As you will see, it's kind of an umbrella for a lot of different things.

But the one in particular that we're looking at here is describe image in stream.

If we go over here to the documentation: this operation generates a description of an image in human-readable language, with complete sentences.

The description is based on a collection of content tags, which are also returned by the operation.

Okay, so let's go see what that looks like in action.

So the first thing is that we need to install this azure-cognitiveservices-vision-computervision package.

Now, we do have a kernel, but these aren't installed by default; they're not part of the Azure Machine Learning SDK for Python, which I believe is pre-installed.

But these AI services are not.

So what we'll do is go ahead and run it this way.

And you'll notice where it says pip install, that's how it knows to install.
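In a notebook cell, that install line looks something like this. The package name here matches the client library's import, but double-check it against the course repo:

```shell
# Jupyter shells out to pip when a cell line starts with "!"
!pip install azure-cognitiveservices-vision-computervision
```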

And once that is done, we'll go run our requirements here.

So we have os, which is usually for handling OS-layer stuff; we have matplotlib, which is to visually plot things, and we're gonna use that to show images and draw borders; and we need something to handle images.

I'm not sure if we're using NumPy here, but I have NumPy loaded.

And then here we have the Azure Cognitive Services vision, computer vision, we're going to load the client.

And then we have the credentials.

And these are the generic cognitive services credentials.

They're commonly used for most of the services, with some exceptions where the APIs do not support them yet, but I imagine they will in the future.

So just notice that when we run something, it will show a number.

If there's an asterisk, it means it hasn't run yet.

So I'll go ahead and hit play up here.

So it was an asterisk, and now we get a 2, and we'll go ahead and hit play again.

And now those are loaded in and so we'll go ahead and hit play.

Okay, so here we have just packaged our credentials together.

So we passed our key into here, and we'll now load in the client, passing in our endpoint and our key.

Okay, so hit play.

So now we just want to load our image.

So here we're loading assets/data.jpg; just make sure that that is there.

So we have assets, and there it is.

And we're going to load it as a stream because you have to pass streams along.

So hit play.

You'll see that it now ran.

And so now we'll go ahead and make that call.

Okay, great.

And so we're getting some data back.

And notice we have some properties: person, wall, indoor, man, pointing, and captions.

It's not showing all the information, sometimes you have to extract it out.

But we'll take a look here.

So this is a way of showing matplotlib in line.

I don't think we have to run it here, but I have it in here anyway.

And so what it's going to do is show us the image, right? So it's going to print the image, and it's going to grab whatever captions it returns; you can see there are captions.

So we're going to iterate through the captions.

That's going to give us a confidence score saying it thinks it's this so let's see what it comes up with.

Okay, and so here it says Brent Spiner looking at a camera.

So that is the actor who plays Data on Star Trek, with a confidence score of 57.45%, even though it's 100% correct.

It probably doesn't know contextual things, like in the sense of pop culture; it probably can't search for characters, but it is able to identify celebrities because they're in its database.

So that is the first introduction to Computer Vision there.

But the key things you want to remember here are that we use this describe image in stream operation.

And that we get this confidence score and we get this contextual information.

Okay.
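To make the confidence score concrete, here's a small, self-contained sketch. The captions are made up, shaped like the (text, confidence) pairs the service returns, and the threshold is just an example:

```python
# Hypothetical (caption, confidence) pairs like those the service returns.
captions = [
    ("a man looking at a camera", 0.5745),
    ("a person standing indoors", 0.31),
]

def confident_captions(captions, threshold=0.5):
    """Keep only captions whose confidence meets the threshold."""
    return [(text, conf) for text, conf in captions if conf >= threshold]

for text, conf in confident_captions(captions):
    print(f"{text} ({conf:.2%})")  # only the 57.45% caption survives
```

In the notebook, the same loop runs over `analysis.captions` from the service instead of this toy list.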

And so that's the first one; we'll move on to Custom Vision next.

Alright, so let's take a look at custom vision.

So we can do some classification and object detection.

So the thing is, it's possible to launch Custom Vision through the Marketplace.

We're not going to do it this way, though.

If you type in custom vision, it never shows up here.

But if you go to the marketplace here, and type in custom vision, and you go here, you can create it this way.

But the way I like to do it, which I think is a lot easier, is to go up to the top here and type in customvision.ai.

And you'll come to this website.

And what you'll do is go ahead and sign in, it's going to connect to your Azure account.

And once you're in, you can go ahead here and create a new project.

So the first one here is I'm just gonna call this the Star Trek crew.

We're gonna use this to identify different Star Trek members, we'll go down here, and we haven't yet created a resource.

So we'll go create new, my custom vision resource.

We'll drop this down, we'll put this in our cog services resource group, and we'll stick with US West as much as we can.

Here, we have F0, and F0 is blocked out for me to choose.

So I think F0 is the free tier, but I don't get it.

And once we're back here, we'll go down below and choose our standard.

And we're going to have a lot of options here.

So we have between classification and object detection.

So classification is when you have an image and you just want to say: what is this image, right?

And so we have two modes where we can say, let's apply multiple labels.

So let's say there were two people in the photo or whether there was a dog and cat.

And I think this example is a dog and a cat.

Or you just have a single class where it's like, what is the one thing that is in this photo, it can only be of one of the particular categories.

Multiclass is the one we're going to do, and we have a bunch of different domains here.

And if you want to, you can go ahead and read about all the different domains and their best use cases, but we're going to stick with A2, which is optimized for it.

So that's faster, right.

And that's really good for our demo.

So we're going to choose General [A2], and I'm going to go ahead and create this project.

And so now what we need to do is start labeling our content.

So what we'll do is I just want to go ahead and create the tags ahead of time.

So we'll say Worf, we'll have Data.

And we'll have Crusher.

And now what we'll do is we'll go ahead and upload as images.

So, you know, we uploaded these in the Jupyter notebook, but it was totally not necessary.

So here is data, because we're going to do it all through here.

And we'll just apply the Data tag to them all at once, which saves us a lot of time; I love that. We'll upload Worf now.

And I don't want to upload them all; I have this one quick test image we're going to use to make sure that this works correctly.

And I'm going to choose Worf.

And then we'll go ahead and add Beverly.

There she is.

Beverly Crusher.

Okay, so we have all of our images.

And I don't know how this one got in here, but it's under Worf, it works out totally fine.

So what I want to do is go ahead and train this model, because they're all labeled.

So we have a ground truth.

And we'll let it go ahead and train.

So we'll go and press train.

And we have two options, quick training and advanced training. Advanced training is where we can increase the training time for better accuracy.

But honestly, we just want to do quick training.

So I'll go ahead and do quick training.

And it's going to start it's iterative process.

Notice on the left-hand side we have the probability threshold: the minimum probability score for a prediction to be valid when calculating precision and recall.

So the thing is, if it doesn't at least meet that requirement, it will quit out.

And if it gets above that, it might quit out early, just because it's good enough.

Okay.

So training doesn't take too long, it might take five to 10 minutes, I can't remember how long it takes.

But what I'll do is I'll see you back here in a moment, okay.

All right.

So after waiting a short little while here, it looks like our results are done, we get 100% matches.

So these are our evaluation metrics to say whether the model achieved its actual goal or not.

So we have precision recall.

And I believe this is average precision.

And so it says that it did a really good job.

So that means that it should have no problem matching up an image.

So in the top right corner, we have this button that is called Quick tests.

And this is going to give us the opportunity to quickly test these.

So what we'll do is browse our files locally here.

And actually, I'm going to go to... yeah, we'll go here and we have Worf.

And so I have this quick test image here; we'll test that and see if it actually matches up to be Worf.

And it says 98.7% Worf.

That's pretty good.

I also have some additional images here I just put into the repo to test against, and we'll see what it matches up.

Because I thought it'd be interesting to do something that is not necessarily them, but it's something pretty close to, you know, it's pretty close to what those are.

Okay.

So we'll go to crew here, and first we'll try Hugh.

Okay, and Hugh is a Borg, so he's kind of like an android.

And so we can see he mostly matches to Data.

So that's pretty good.

We'll give another one go.

Martok is a Klingon, so he should be matched up to Worf.

Very strong match to Worf.

That's pretty good.

And then Pulaski.

She is a doctor and female, so she should get matched up to Beverly Crusher.

And she does.

So this works out pretty darn well.

And I hadn't even tried that.

So it's pretty exciting.

So now let's say we want to go ahead and make predictions; I believe we could do them in bulk here.

I could have sworn that if we didn't already have these images, there's an upload option for this, but it's probably just the quick test.

So I'm a bit confused there.

But anyway, so now that this is ready, what we can do is go ahead and publish it so that it is publicly accessible.

So we'll just say here, crew model.

Okay, and we'll drop that down, say publish.

And once it's published, now we have this public URL.

So this is an endpoint that we can go hit programmatically.

I'm not going to do that.

I mean, we could use postman to do that.

But my point is that we've basically figured it out for classification.
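To sketch what hitting that published endpoint programmatically might look like: this builds the HTTP request with the Prediction-Key header and raw image body that published Custom Vision image endpoints expect. The URL, project ID, and key below are placeholders, not real values:

```python
import urllib.request

def build_prediction_request(endpoint_url, prediction_key, image_bytes):
    """Build the HTTP request; pass it to urllib.request.urlopen to send it."""
    return urllib.request.Request(
        endpoint_url,
        data=image_bytes,  # raw image bytes as the request body
        headers={
            "Prediction-Key": prediction_key,
            "Content-Type": "application/octet-stream",
        },
    )

# Placeholder URL/key in the published-endpoint shape; not a live endpoint.
req = build_prediction_request(
    "https://example.invalid/customvision/v3.0/Prediction/<project-id>"
    "/classify/iterations/crew-model/image",
    "<prediction-key>",
    b"<jpeg bytes>",
)
print(req.get_method())  # POST, because a request body is attached
```

Postman would send the same thing: a POST with the image bytes as the body and the prediction key as a header.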

So now that we've done classification, let's go back here to Custom Vision.

And now let's go ahead and do object detection.

Okay.

Alright, so we're still in custom vision, let's go ahead and try out object detection.

So object detection is when you can identify particular items in a scene.

And so this one we're just going to call combadge, because we're going to try to detect combadges. We have more domains here, and we're gonna stick with General [A1].

And we'll go ahead and create this project here.

And so what we need to do is add a bunch of images. I'm going to go ahead and create our tag, which is going to be called combadge. You can look for multiple different kinds of labels, but then you need a lot of images.

So we're just gonna keep it simple and have that there, I'm going to go ahead and add some images.

And we're going to go back a couple steps here, into our objects.

And here I have a bunch of photos, and we need at least 15 to train.

So we've got 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16.

And so I threw an additional image in here, this is the batch test.

So we'll leave that out.

And we'll see if that picks up really well.

And, yeah, we got them all here.

And so we'll go ahead and upload those.

And we'll hit upload files.

Okay.

And we'll say done, and we can now begin to label.

We'll click into here, and what I want you to do: if you hover over it, it should start detecting things.

If it doesn't, you can click and drag a box; click this one.

They're all combadges, so we're not going to tag anything else here.

Okay.

So go here, hover over: is it gonna give me the combadge? No, so I'm just clicking and dragging to get it.

Okay.

Okay, do we get this combadge? Yes.

Do we get this one? Yep.

Simple as that.

Okay, it doesn't always get it, but most cases it does.

Okay, didn't get that one.

So we'll just drag it out.

Okay, it's not getting that one.

It's interesting.

Like, that one's pretty clear.

But it's interesting what it picks out and what does what does not grab it.

So it's not getting this one, probably because the photo doesn't have enough contrast.

And this one has a lot, so hopefully that gives us more data to work with here.

Yeah, I think the higher the contrast, the easier it is for it to detect those.

It's not getting that one.

Not getting that one.

Okay, there we go.

Yes, there are a lot, I know, and some of these ones are packed, but there are only like three photos that are like this.

They have badges but they're slightly different.

So we're gonna leave those out.

I think it actually had that one, but we'll just tag it anyway.

And hopefully this will be worth the effort here.

There we go.

I think that was the last one.

Okay, great.

So we have all of our tag photos.

And what we can do is go ahead and train the model, same option, quick training, advanced training, we're gonna do a quick training here.

And notice that the options are slightly different: we have the probability threshold.

And then we have overlap thresholds.

So the minimum percentage of overlap between predicted bounding boxes and ground truth boxes to be considered for correct prediction.
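That overlap measure is usually computed as intersection-over-union (IoU). Here's a small, self-contained sketch with toy boxes in the (left, top, width, height) shape; the numbers are made up:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (left, top, width, height) boxes."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    # Width/height of the overlapping region (zero if the boxes are disjoint).
    inter_w = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    inter_h = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = inter_w * inter_h
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

# Identical boxes overlap completely...
print(iou((0, 0, 10, 10), (0, 0, 10, 10)))  # 1.0
# ...while a half-shifted box gives 50 / 150, about a third.
print(iou((0, 0, 10, 10), (5, 0, 10, 10)))
```

A predicted box only counts as a correct detection when its IoU against the ground-truth box meets the overlap threshold.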

So I'll see you back here when it is done.

Alright, so after waiting a little while here, it looks like it's done.

It's trained.

And so precision is at 75%.

So precision: this number will tell you, if a tag is predicted by your model, how likely that prediction is to be right.

So how likely did it guess right? Then you have recall: this number will tell you, out of the tags which should be predicted correctly, what percentage your model correctly finds. So we have 100%.

And then you have mean average precision, this number will tell you the overall object detector performance across all the tags.
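Precision and recall can be made concrete with a tiny sketch. The labels below are toy data (1 = tag present, 0 = absent), not the real crew or combadge sets:

```python
def precision_recall(true_tags, predicted_tags):
    """Precision and recall for one tag over parallel 0/1 label lists."""
    true_positives = sum(1 for t, p in zip(true_tags, predicted_tags) if t == p == 1)
    predicted_positives = sum(predicted_tags)  # everything the model flagged
    actual_positives = sum(true_tags)          # everything it should have found
    precision = true_positives / predicted_positives if predicted_positives else 0.0
    recall = true_positives / actual_positives if actual_positives else 0.0
    return precision, recall

# Toy example: 4 images, the tag truly present in 2, model flags 2,
# but only 1 of its flags is right.
truth = [1, 1, 0, 0]
preds = [1, 0, 1, 0]
print(precision_recall(truth, preds))  # (0.5, 0.5)
```

Mean average precision then aggregates this across all tags and thresholds, which is why it's a good single summary number.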

Okay, so what we'll do is we'll go ahead and do a quick test on this model.

And we'll see how it does.

I can't remember if I actually even ran this.

So I'll be curious to see the first one here.

It's not as clearly visible, it's part of their uniform.

So I'm not expecting it to pick it up.

But we'll see what it does.

It picks up pretty much all of them.

With the exception of this one, which is definitely not a combadge.

But that's okay.

All this suggests, obviously, is that the probability is above the selected threshold.

So if we increase it, we'll just bring it down a bit.

So there it kind of improves it.

If we move it around back and forth.

Okay.

So I imagine via the API we could choose that. Let's go look at our other sample image here.

I'm not seeing it.

Where did I save it? Let me just double check, make sure that it's in the correct directory here.

Okay.

Yeah, I saved it to the wrong place just a moment.

I will place it and just call that batch-test; one second.

Okay, and so I'll just browse here again.

And so here we have another one.

See if it picks up the badge right here.

There we go.

So looks like it works.

So yeah, I guess custom vision is pretty easy to use, and pretty darn good.

So what we'll do is close this off and make our way back to our Jupyter Lab to move on to our next lab here, okay.

All right, so let's move on to the face service.

So just go ahead and double click there on the left hand side.

And what we'll do is work our way from the top.

So the first thing we need to do is make sure that we have the computer vision installed.

So the face service is part of the Computer Vision API.

And once that is done, we'll go ahead and do our imports.

Very similar to last one.

But here we're using the face client, and we're still using the cognitive service credentials. We'll populate our keys, and we'll make the face client and authenticate.

And we're going to use the same image we used prior with our computer vision, so the data one there, and we'll go ahead and print out the results.

And so we get an object back.

So it's not very clear what it is.

But here if we hit show, okay, here it's Data, and it's identifying the face. Let's go through this code.

So we're just saying open the image, we're going to set up our figure for plotting, it's going to say, Well, how many faces did it detect in the photo, and so here it says, detected one face, it will iterate through it.

And then we'll create a bounding box around the face; we can do that because it returns back the face rectangle, so we get a top, left, etc.

And we will draw that rectangle on top.

So we have magenta, I could change it to like three if I wanted to.

I don't know what the other colors are.

So I'm not even going to try but yeah, there it is.

And then we annotate with the face ID that's the unique identifier for the face.

And then we show the image.

Okay, so that's one.

And then if we wanted to get more detailed information, like attributes such as age, emotion, makeup, or gender, this image's resolution wasn't large enough.

So I had to find a different image and do that.

So that's one thing you need to know: if it's not large enough, it won't process it.

So we're just loading data large.

Very similar process, and it is the same detect with stream call, but now we're passing in return face attributes.

And so here we're saying the attributes we want.

And there's that list; we went through it in the lecture content. And so here we'll go ahead and run this.

And so we're getting more information.

So that magenta line is a bit hard to see.

I'm just gonna increase that to three.

Okay, still really hard to see.

But that's okay.

So approximately age 44, I think the actor was a bit younger than that.

Data technically is male presenting, but he's an Android.

So it doesn't necessarily have a gender, I suppose.

He actually is wearing a lot of makeup.

But I guess it only detects color on the lips and the eyes.

So it says he doesn't have makeup.

So maybe if there were color, you know, like eyeshadow or stuff, it would detect that.

In terms of emotion, I like how he's 0.002 sad, but he's neutral, right.

So just going through the code here very quickly.

So again, it's the number of faces so it detected one face.

And then we draw a bounding box around the face for the detected attributes, it's returned back in the data here.

So we just say: get the face attributes, turn it into a dictionary.

And then we can just get those values and iterate over it.

So that's as complicated as it is.
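That dictionary step can be sketched without the SDK. The attribute payload below is hypothetical, just shaped like the face attributes the service returns, and the scores are made up:

```python
# Hypothetical attributes dict shaped like the service's faceAttributes.
face_attributes = {
    "age": 44.0,
    "gender": "male",
    "emotion": {"neutral": 0.978, "sadness": 0.002, "happiness": 0.02},
}

def dominant_emotion(emotions):
    """Pick the emotion with the highest confidence score."""
    return max(emotions, key=emotions.get)

# Iterate over the attributes just like the notebook does.
for name, value in face_attributes.items():
    print(f"{name}: {value}")

print(dominant_emotion(face_attributes["emotion"]))  # neutral
```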

And so there we go.

Alright, so we're on to our next cognitive service.

Let's take a look at form recognizer.

Alright, and so Form Recognizer tries to identify forms and turn them into machine-readable data.

And so they have one for receipts in particular.

So at the top, finally, we're not using computer computer vision, we actually have a different one.

So this one's Azure AI form recognizer.

So run that there.

But this one in particular is different: notice all the other ones are using the cognitive service credential.

For this, we actually had to use the AzureKeyCredential, which was annoying; I tried to use the other one to be consistent, but I couldn't.

Okay, so what we'll do is run our keys like before, we have a client very similar process.

And this time, we actually have a receipt.

And so we have begin recognize receipts.

So it's going to analyze the receipt information.

And then it's what it's going to do is show us the image.

Okay, just so we have a reference to look at. The image renders yellow here, but it's actually a white background.

I don't know why when it renders out here, it does that, but that's just what happens.

And it even obscures the server name.

I don't know why.

But anyway, if we go down below, this is returned results up here, right, so we got our results.

And so if we just print out the results here, we can see we get a recognized form back, we get fields, and some additional things.

And if we go into the fields itself, we see there's a lot more information, if you can make out like here, it says merchant phone number, form field label value, and there's a number 512707.

So for these things here, like the receipts, if we can just find the API quickly here, it has predefined fields.

I'm not sure.

Yeah, business card, etc.

Like if we just type in merchant, I'm just trying to see if there's a big old list here.

It's not really showing us a full list.

But these are predefined things that are returned, right? So they've defined those.

Maybe it's over here.

There we go.

So these are the predefined ones that extracts out.

So we have receipt, type, merchant name, etc, etc.

And so if we go back to here, you can see I have a field called merchant name.

So we get there it says Alamo Drafthouse cinema, let's say we want to try to get that balance.

Maybe we can try to figure out which one it is.

I never ran this myself when I made it.

So we'll see what it is.

But here it has total price.

What's interesting is that this one has a space in it.

So it's kind of unusual; you'd think it'd be together, but let's see if that works.

Okay, doesn't like that.

Maybe that's just a typo on their part.

Okay, so we get none.

Let's try price.

See what it picks up? Nope, nothing.

We know that the phone numbers there.

So we'll give the phone number.

There we go.

So you know, it's an OK service.

But, you know, your mileage will vary based on what you do there.

Maybe we could try total, because that makes more sense, right? Ah, yeah, there we go.

Okay, great.

So yeah, it is pulling out the information.

And so that's pretty much all you need to know about that service there.
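The trial-and-error above boils down to looking fields up by their predefined names. Here's a sketch with a hypothetical fields dict; the merchant name comes from the demo, but the other values are placeholders:

```python
# Hypothetical recognized fields, keyed by the receipt model's predefined
# field names; a field the model didn't find is simply absent.
fields = {
    "MerchantName": "Alamo Drafthouse Cinema",
    "MerchantPhoneNumber": "<phone-number>",
    "Total": 23.45,
}

def get_field(fields, name):
    """Return the field's value, or None if the model didn't extract it."""
    return fields.get(name)

print(get_field(fields, "Total price"))  # None: not a predefined field name
print(get_field(fields, "Total"))        # 23.45
```

Using `.get` instead of indexing is why a wrong guess like "Total price" comes back as None instead of raising an error.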

Okay.

Let's take a look at some of our OCR capabilities here.

I believe that's in computer vision.

So we'll go ahead and open that up.

At the top here we'll install computer vision as we did before. This is very similar to the other computer vision tasks, but this time we have a couple of new ones that I'll explain as we go through.

We'll load our keys.

We'll do our credentials and load the client.

Okay, and then we have this function here called printed text.

So what this function is going to do is it's going to print out the results of whatever text it processes.

Okay, so the idea is that we're going to feed in an image, and it's going to give us back out the text for the image.

So we'll run this function.
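A function like that walks the regions, lines, and words of the OCR result. Here's a self-contained sketch with a hypothetical result dict shaped like that tree; the words are made up:

```python
# Hypothetical OCR result shaped like the service's regions/lines/words tree.
ocr_result = {
    "regions": [{"lines": [
        {"words": [{"text": "STAR"}, {"text": "TREK"}]},
        {"words": [{"text": "THE"}, {"text": "NEXT"}, {"text": "GENERATION"}]},
    ]}]
}

def extract_text(result):
    """Join recognized words back into lines of text."""
    lines = []
    for region in result["regions"]:
        for line in region["lines"]:
            lines.append(" ".join(word["text"] for word in line["words"]))
    return "\n".join(lines)

print(extract_text(ocr_result))
```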

And I have two different images, because I actually ran it on the first one, and the results were terrible.

And so I got a second image and it was a bit better.

Okay, so we'll go ahead and run this, it's going to show us the image.

Okay, and so this is the photo, it was supposed to extract out Star Trek The Next Generation, but because of the artifacts and size of the image, we get back, not English, okay.

So you know, maybe a high resolution image, it would have a better a better time there.

But that is what we got back.

Okay.

So let's go take a look at our second image and see how it did.

And this one, I'm surprised, actually extracts out a lot more information. You can see it has a hard time with the Star Trek font, but we get Deep Space Nine, a visitor tells all, life, death; some errors here, so it's not perfect.

But you know, you can see that it does something here.

Now, for OCR where we have fairly simple images and text, this is where we use recognize printed text in stream.

But if we're doing this for larger amounts of text, and we want it analyzed asynchronously, then we want to use the Read API, and it's a little bit more involved.

So what we'll do here is load a different image.

And this is a script, we'll look at the image here in a moment.

But here we read in stream, and we create these operations.

Okay.

And what it will do is send all the information over asynchronously.

Okay.

So I think this is supposed to be results here.

Minor typo.

And we will go ahead and give that a run.
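The Read API pattern (submit the image, then poll the operation until it finishes) can be sketched without the SDK. `get_status` below is a hypothetical stand-in for the client's get-read-result call:

```python
import time

def wait_for_result(get_status, interval=1.0, max_polls=10):
    """Poll until the operation leaves the notStarted/running states."""
    for _ in range(max_polls):
        status, result = get_status()
        if status not in ("notStarted", "running"):
            return status, result
        time.sleep(interval)
    raise TimeoutError("read operation did not finish in time")

# Fake operation that finishes on the third poll, standing in for the service.
polls = iter([("running", None), ("running", None), ("succeeded", ["STAR TREK"])])
print(wait_for_result(lambda: next(polls), interval=0))
```

The real notebook does the same loop against the operation ID returned by the read call.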

Okay, so here you can see it's extracted out the text. Now, if we want to see this image...

I thought I showed this image here, but I guess I don't.

Yes, this plot image call here is supposed to show us the image.

The path is up here.

It doesn't want to show it to us; it's funny, because the one up here is showing us no problem, right? Well, I can just show you the image.

It's not a big deal.

But I'm not sure why it's not showing up here today.

So if we go to your assets here, I go to OCR.

I'm just gonna open this up.

Hope it's opening up in Photoshop.

And so this is what it's transcribing.

Okay, so this is a thing.

This is like a guide to Star Trek where they talk about, you know, what makes Star Trek Star Trek.

So just looking here, it's actually pretty darn good.

Okay.

But like read API is a lot more efficient, because it can work asynchronously.

And so when you have a lot of text, that's what you want to do, okay? It's feeding back each individual line, right, so it can be more effective that way.

So let's go look at some handwritten stuff.

So just in case the image doesn't pop up, we'll go ahead and open this one.

And so this is a handwritten note that William Shatner wrote to a fan of Star Trek, and it's basically incomprehensible.

I don't know if you can read that here.

But see: "was very" something, "he was" something "hospital", and "healthy was" something, "he was" something; I can't even read it.

Okay, so let's see what the machine thinks here.

And it says image path, it's called path.

Let's just change that out.

We hadn't run that.

Run that there.

And we'll go ahead and run it.

And here we got the image.

So poner us very sick, he was the hospital his BD was, etc.

Beat nobody lost.

His family knew Captain halden.

So it reads better than I could read it, honestly.

It's really hard, right? Like, if you looked at this, like, that looks like difficult was bt healthy.

I could see why it's guessing like that, right? dying.

It's like that looks like dying to me.

You know what I mean? So it's just poorly handwritten, but it's pretty good for what it is.

So yeah, there you go.

Alright, so let's take a look at another cognitive service here.

And this one is text analysis.

And so what we'll do is install the Azure Cognitive Services Text Analytics package here.

So go ahead and hit run.

Alright, and once that's installed, this one actually is using the cognitive services credential, so it's a little bit more standard with our other ones here.

We'll go ahead and run that there.

We'll make our credentials and our client.

And this one, what we're going to do is try to determine sentiment and understand why people like a particular movie or not.

So I've loaded a bunch of reviews; again, I can show you the data if it helps.

And so I'm just trying to find my right folder here.

And so if we go back and look at our movie reviews, here's a review someone wrote: "First Contact just works."

"It works as a rousing chapter in the Star Trek series, and it works as mainstream entertainment."

So different reviews for Star Trek First Contact, which was a very popular movie back in the day.

So what we'll do is load the reviews; it's just iterating through the text files and showing us what the reviews are.

So here we can see all the review text. I had a lot of trouble getting the last one to display, but it does get loaded in.

And so here we're using the text analytics to show us key phrases, because maybe that will give us an indicator.

And so that's the object back, but maybe that'll give us an indicator as to what people are calling out as important things. So here we see: Borg ship, Enterprise, smaller ship escapes, neutral zone, travels, contact damage, co-writer, beautiful mind, sophisticated science fiction, best whales, Leonard Nimoy.

Okay.

"Wealth of unrealized potential", "filmmaker Jonathan Frakes".

Okay, so very interesting stuff as here Borg ship again, you've seen Borg ship a lot.
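Once the key phrases come back from the service, a quick way to spot recurring themes across many reviews is simply to count them. The phrases below are stand-ins for the ones the service returned; only the counting logic is the point here.

```python
from collections import Counter

# One list of key phrases per review; these are illustrative stand-ins
# for what the Text Analytics key phrase call returned.
reviews_key_phrases = [
    ["Borg ship", "Enterprise", "neutral zone"],
    ["Borg ship", "smaller ship escapes"],
    ["sophisticated science fiction", "Borg ship"],
]

# Flatten and count to surface the phrases that recur across reviews.
counts = Counter(p for phrases in reviews_key_phrases for p in phrases)
print(counts.most_common(1))  # [('Borg ship', 3)]
```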

So that is kind of key phrases. Now let's go get customer sentiment, or how people felt about it: did they like it or not?

And so here, we just call sentiment.

And what we'll do is: if the score is above 0.5, then it's a positive review, and if it's below 0.5, then it's a negative review.
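The sentiment score comes back as a value between 0 (negative) and 1 (positive), so a minimal sketch of the thresholding is just a comparison; the 0.5 cut-off is a choice for this walkthrough, not an API rule.

```python
# Label a Text Analytics sentiment score (0.0 to 1.0) as positive or negative.
# The 0.5 threshold is our choice; scores near 0.5 are really "neutral".
def label_sentiment(score):
    return "positive" if score > 0.5 else "negative"

print(label_sentiment(0.9))   # positive
print(label_sentiment(0.04))  # negative
```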

I think most people thought it was very good film.

So this one, number nine, says it's pretty low.

So let's go take a look at that one.

It wasn't actually rendering there.

So maybe we'll have to open it up manually.

See if that's actually accurate, it's empty.

So there you go.

I guess we had a blank one in there.

I must have forgotten to paste it in.

But that's okay.

That's a good indicator that, you know, that's what happens if you don't have it.

So let's look at number one, then... actually, this one is nine, this is 04, and this one here is eight.

So open up eight.

"When the Borg launch an attack on Earth, the Enterprise is sent to the Neutral Zone", etc., etc.

"However, a smaller ship escapes and travels on; the Enterprise follows. Meanwhile, the survivors..." So this is a synopsis.

It doesn't say whether they like it, or they don't.

But it was before, I guess.

So there's nothing positive about it.

Right? If we were looking at one that was pretty low, which is... no, no, it's not.

It's one.

So it seems like this person probably really liked it.

Or no, I guess that's actually pretty low.

Because it's one, it's not nine; nine is very high.

Let's take a look at this one.

Review number two.

If we go up here... sorry, I'm going to scroll down... there's "a wealth of unrealized potential".

So that's a fair one saying that maybe they don't like it as much.

I don't know if they give it two stars, right, we could probably actually correlate it with the actual results, because I did get these off of IMDb and Rotten Tomatoes.

But yeah, there you go.

That is text analysis.

Alright, so now we're on to QnA Maker.

And so we're not going to need to do anything programmatically, because QnA Maker is all about no-code or low-code building of a question-and-answer bot service.

So what we'll do is go all the way up to here.

And I want you to type in qnamaker.ai.

Because, as far as I'm aware, it's not accessible through the portal; though sometimes you can find these things.

Again, if we go to the marketplace.

I'm just curious.

I could just take a look here really quickly.

Whenever it decides to log us in here.

Okay, great.

So I'll go over to marketplace.

And probably we type in q&a.

Maybe we do something here q&a.

Yep.

So we go here.

Give it a second here.

Seems like Azure is a little bit slow right now.

It's usually very fast.

But you know, the service varies.

Well, it's not loading for me right now.

But that's okay, because we're not going to do it that way.

Anyway.

So you can go to qnamaker.ai.

And what I want you to do is go to the top right corner, and we'll hit sign in.

And what we'll be doing is connecting via our single sign on with our account, so it already knows I have an account there.

I'm gonna give it a moment here.

And I'm going to go ahead and just give it a second.

There we go.

So it says I don't have any knowledge base, which is true.

So let's go ahead and create ourselves a new knowledge base.

And here we have the option between stable and preview; I'm going to stick with stable because I don't know what's in preview.

I'm pretty happy with that.

So we need to connect a QnA service to our knowledge base.

And so back over here in Azure... actually, I guess we do have to make one, now that I remember; we actually have to create a QnA Maker service.

So I'll go down here and put this under my cog services. We'll say "my QnA service"; it might complain about the name.

Yep, so I'll just put some numbers here.

We will pick the free tier; free sounds good, and whenever I actually get that option, that's what I will choose.

Down below, we'll choose free again, and West US sounds great to me. It generates the name, and it's the same name as here.

So that's fine.

We don't need App Insights, but I'm going to leave it enabled, because I think something changes in the tier when you don't have it enabled, unusually.

And so we will create our q&a maker service, give it a moment here.

And I remember it saying that even if you try it right away, you might have to wait 10 minutes for it to create the service.

So even after it's provisioned, it will take some time.

So what we should do is prepare our doc, because it can take in a variety of different files.

I just want to show you here that QnA Maker has a whole page on formatting guidelines.

And basically it's pretty smart about knowing where headings and answers are.

So for unstructured data, we just have a heading, and we have some text, let's write some things in here that we can think of.

Since we're all about certification, we should write some stuff here.

So: how many AWS certifications are there? I believe right now there are 11 AWS certifications.

Okay.

And maybe if we use our headings here, this would probably be a good idea here.

Yeah.

Okay.

Another one could be: how many fundamental Azure certifications are there?

And we'll give this a heading.

And we'll say there are three Azure... I think there are three.

There's other ones, right, like Power Platform and stuff.

But just being Azure specific.

There are three Azure fundamental certifications; we have the DP-900, the AI-900.

The AZ-900.

I guess there's four; there's the SC-900.

Right.

So there are four.

Okay.

We'll say: which is the hardest Azure associate certification?

And what we'll say here is, I mean, it's my opinion, but it's the Azure Administrator. I had some background noise there; that's why I was pausing a bit.

But the Azure Administrator, AZ-104: I would say that's the hardest.

Next: which is harder, the AWS or Azure certifications? I'd say Azure certifications are harder, because they check exact steps for implementation, where AWS focuses on concepts.

Okay, so we have a bit of a knowledge base here.

So I'll save it.

And assuming that this is ready, because it did take a little bit of time to put this together.

We'll go back to QnA Maker and hit refresh here.

Give it a moment, drop it down, choose our service.

And notice here that we have chit-chat extraction as well as plain extraction; we're going to add the chit-chat.

I'll give it a name, which is just a reference and can be changed any time; this would be like a certification Q&A.

So here, we want to populate.

So we'll go to files here, I'm going to go to my desktop.

And here it is.

I'll open it.

We will choose the Professional tone.

Go ahead and create that.

And so I'll see you back here in a moment.

Alright, so after waiting a short little time here, it loaded in our data.

So you can see that it figured out which is the question and which is the answer, and it also has a bunch of defaults.

So here, if somebody asks something very silly, like "can you cry", it'll say "I don't have a body".

It has a lot of information pre loaded for us, which is really nice.

Why don't we go ahead and test this? We could go and say, we'll go here and then we'll write in, say, like, hello.

Say "morning".

It says "good morning".

Okay, so we'll say, how many certifications are there? We didn't say AWS, but let's just see what happens.

So it kind of inferred, even though we didn't say AWS in particular. It noticed that there's AWS and Azure, with things like "how many fundamental Azure certifications", and it chose AWS.

So it's not like the perfect service, but it's pretty good.

I wonder what would happen if we placed in one that's like Azure. I don't know how many Azure certs there are; we'll just say there's 11, 12, I can't ever remember, they're always adding more.

But I want to close this here.

There we go.

So let's just go add a new key pair here.

And we'll say: how many Azure certification are there... I should have said "certifications"; I'll fix that in one moment.

So there: there are 12 Azure certifications.

Who knows how many they have; they have like 14 or something, so say between 11 and 14.

They just update them too frequently.

I can't keep track.

So we'll go here and we'll just say certifications.

And we will save and retrain.

So we'll just wait here a moment.

Great.

And so now we'll go ahead and test this again.

So we'll say how many certifications are there? I see it's pulling the first answer.

If I say Azure, let's just see if it gets the right one here.

How many Azure certifications are there? Okay, so, you know, maybe you'd have to have a generic one for that to match.

So if we go back here, for "how many certifications are there?" we could answer with something like, "which cloud service provider?"

Here we've got AWS and Azure.

"Follow-up prompts can be used to guide the user through a conversational flow. Prompts are used to link Q&A pairs and can be displayed..."

I haven't used this yet.

But I mean, it sounds like something that's pretty good.

Because there is multi-turn support, so the idea is that if you had to go through multiple steps, you could absolutely do that.

We've tried a little bit here. "Follow-up prompts can be used to guide the user through conversational flows; prompts are used to link Q&A pairs together", with a text button for a suggested action.

Oh, okay, so maybe we would just do, like, "AWS", linked to a Q&A pair.

And then: search an existing Q&A pair, or create a new one.

So let's say, like, "how many AWS"... oh, okay, we're typing. "Context only: this follow-up will not be understood out of the context flow."

Sure.

Because it should be within context, right.

And here we can do another one. We'll say, like, "Azure": "how many Azure", context only.

Whoops, that got away from me there.

We'll save that.

And what we'll do is save and train.

Go back here.

And we'll say, how many certifications are there? Enter.

So we have to choose AWS.

So there we go.

So we got something that works pretty good there.

Since I'm happy with it, we can go ahead and go and publish that.

So let's say publish.

And now that it's published, we could use postman or curl to trigger it.
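As a sketch, triggering the published knowledge base means a POST to its generateAnswer endpoint with the endpoint key in the Authorization header. The host, knowledge base id, and key below are placeholders from the Publish page; the request is only built here, not sent.

```python
import json

# Build (but don't send) a QnA Maker "generateAnswer" request.
# host, kb_id, and endpoint_key are placeholders you'd copy from the
# Publish page; the real call would then be requests.post(url, ...).
def build_generate_answer_request(host, kb_id, endpoint_key, question):
    url = f"{host}/qnamaker/knowledgebases/{kb_id}/generateAnswer"
    headers = {
        "Authorization": f"EndpointKey {endpoint_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"question": question})
    return url, headers, body

url, headers, body = build_generate_answer_request(
    "https://my-qna-service.azurewebsites.net", "<kb-id>", "<key>",
    "How many Azure certifications are there?",
)
print(url)
```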

But what I want to do is create a bot, because with Azure Bot Service we can actually utilize it with other integrations, right.

It's a great way to use your bot, or to actually host your bot.

So we'll go over here and link it over.

If you don't click it, it doesn't get preloaded in, so it's kind of a pain.

If you lose it, you've got to go back there and click it again.

But let's just name it "certification QnA".

And we will look through here.

So I'm going to go with the free one: 10k premium messages, 1k premium message units... I'm kind of confused by the pricing.

But F0 usually means free.

So that's what I'm going for. For the SDK language, I'm going to use Node.js, not that we're going to do anything there with it.

Go ahead and create that.

And I don't think this takes too long.

We'll see here.

Just go ahead and click on that there.

I'll just wait here a bit.

I'll see you back here in a moment.

All right.

So after waiting, I don't know, about five minutes there.

It looks like our bot service is deployed; we'll go to that resource there.

You can download the bot source code.

Actually, I never did this.

So I don't know what it looks like.

So be curious to see this.

Just to see what the code is.

I assume that because we chose Node.js, it would give us that as the default there.

So, downloading this code; it's creating the source zip.

Not sure how long this takes.

Maybe regretting clicking on that.

But what we'll do is we'll go in the left hand side here to channels because I just want to show here.

Yeah, that didn't download.

We'll try it here in a second.

But what we'll do is go back up to the bot profile we talked about.

Yeah, maybe it needs some time.

So you know, maybe we'll just give the bot a little bit of time here.

I'm not sure why it's giving us a hard time because this bot is definitely deployed.

If we go over to our bots, right.

bot services, it is here.

Sometimes there's like latency, you know, with Azure.

Oh, there we go.

Okay, see works now.

Fine, right.

And so I want to show you that there are different channels, and these are just easy ways to integrate your bot with different services.

So whether you want to use it with Alexa, GroupMe, Skype, telephony via Twilio, Skype for Business... apparently they don't have that anymore.

Because they've got Teams now, right.

Kik, which, I don't know if people still use that; Slack; Discord; Telegram; Facebook; email.

That's kind of cool.

But Teams, Teams is a really good one.

I use teams.

There's a direct line channel, I don't know what that means.

And there's web chat, which is just having like an embed code.

So if we go over, we can go and test it over here to start testing our web chat.

And so it's the same thing as before, we just say things like, how many certifications are there? Azure, and get a clear answer back.

We'll go back up to our overview.

Let's try see if we can download that code.

Again.

I was kind of curious what that looks like.

Yes, it will download a lot of code.

There we go.

So now we can hit download.

And so there is the code, I'm going to go ahead and open that up.

So yeah, I guess when we chose JavaScript, that made a lot more sense.

Let's give it a little peek here.

I'm just going to drop this on my desktop here.

So just to make a new folder here and call this bot code.

Okay, I know you can't see what I'm doing here.

But let's go here, double-click into here, and then just drag that code in.

And then what we can do is open this up in VS code, I should have VS code running somewhere around here.

I'm gonna go ahead and open that off screen here.

I'll just show you my screen in a moment.

Show code... oops: File, Open Folder.

Bot code, okay.

And all the way back here.

And so we got a lot of code here.

never looked at this before.

But you know, I'm a pretty good programmer.

So it's not too hard for me to understand.

So it's like your API request, things like that.

I guess it would just be like, if you needed to integrate into your application, then it kind of shows you all the code.

They're just trying to see our dialogue choices.

Nothing super exciting.

Okay, you know, when I go and make the, was it the AI-100, or whatever the data scientist courses are, I'm sure I'll be a lot more thorough there.

But I'm just curious as to what that looks like.

Now, if we wanted to have an easy integration, we can get an embedding code for this.

So if we go back to our channels, I believe we can go and edit.

Ah, yeah.

So here we have a code.

So what I'll do is go back to JupyterLab; I'm just going to go make a new empty notebook.

So let's go up here and say notebook.

And this can be for our q&a.

Doesn't really matter what kernel. We'll name it "QnA Maker".

Just to show: if you wanted a very, very simple way of integrating your bot, we would go back over to wherever it is here.

Here, we are going to go ahead and copy this iframe.

I think it's %%html.

So it treats this cell as HTML.

And I don't have any HTML to render.

So we will place that in there.

And notice we have to replace our secret key.

So I will go back here and I will show my key and we will copy that.

And we will paste that key in here.

And then we'll run this.

And I can type in here.

"Where am I?" Just silly things.

"Who are you?" "How many Azure certifications are there?" Well, I wonder, if I just leave the "are there" off, let's see if it figures it out.

Okay, cool.

So yeah, I mean, that's pretty much it with QnA Maker.

So yeah, that's great.

So I think we're done here.

And we can move on to checking out Louis, or LUIS, Language Understanding, to make a more robust bot, okay.

Alright, so we are on to our last cognitive service.

And this one is going to be Louis, or Luis, depending on how you'd like to say it.

It's LUIS, which is Language Understanding.

So you type in luis.ai.

And that's going to bring us up to this external website.

It's part of Azure; it just has its own domain.

And so here, we'll choose our subscription.

And we have no authoring resource.

So I guess we'll have to go ahead and create one ourselves.

So get down here, and we will choose my cognitive services as your resource name.

So "my auth service", or my cognitive service... there's "create new cognitive services account", but we already have one, so I don't want to make another one.

Right, it should show up here, right? Oh: it's not valid in the authoring region.

So it's possible that we're just in the incorrect region.

So we might end up creating two of these.

And that's totally fine.

I don't care.

As long as we get this working here, because we're going to delete everything at the end anyway.

And so I'll just say "my cog service 2".

And we'll say West us because I think that maybe we didn't choose one of these regions.

Let's go double check.

If we go back to our portal, just the limitations of the service, right.

So we'll go to my cog services here.

I just want to go cognitive services.

So just want to see where this is deployed.

And this is in West US.

Yeah, so I don't know why it's not showing up there.

But whatever.

If that's what it wants, we'll give it what it wants, okay.

It shouldn't give us this much trouble, but hey, that's how it goes.

And so we have an authorized authoring resource. I'm going to refresh here and see if it added a second one; it didn't.

So all right.

That's fine.

So we'll just say "my sample bot", and we'll use English as our culture.

If nothing shows up here, don't worry, you can choose it later on.

I remember the first time I did this, it didn't show up.

And so now we have my cog service and my custom vision service; we want the cog service.

So anyway, it tells us about the schema, how you make a schema, and it's talking about things like entities, actions, intents, and example utterances, but we're just going to set up something very simple here.

So we're going to create an intent; the one that we always see is flight booking.

So I'll go here, do that.

And what we want to do is write an utterance, like "book me a flight to Toronto".

Okay.

So if someone were to type that in, the idea is it would return back the intent, this value, and metadata around it.

And we could act on that programmatically in code, right? So what we need is entities, and we can actually just click here and make one here.

So enter the entity name, and we'll just call this "location".

Okay.

Here we have the options machine-learned and list, if you flip between them.

Machine-learned is like, imagine a ticket order where you have values that can change; with list, you just have a value that always stays the same.

So that's our airport; that makes sense, we'll do that.

That makes sense, we'll do that.

If we go over to entities, we can see it here.

Alright, so nothing super exciting there.

But what I want to show you is, if we go ahead... we should probably fix this: it should be about booking a flight, so "flight booking", flight booking.

Okay, so we'll go ahead, and I know there's only one utterance; we'll go ahead and train our model.

We don't need to know tons, right; we cover a lot in the lecture content, and building a complex bot is more for the associate level.

But now what we can do is go ahead and test this and we'll say, book me a flight to Seattle.

Okay, and notice here it says book flight, we can go inspect it, and we get some additional data.

So, top scoring intent: it says how likely it is that that was the intent.

Okay, so you get kind of an idea there. There are additional things here, but they don't really matter.

We'll go back here, and we will go ahead and publish our model.

So we can put it into a production slot. You can see we have sentiment analysis and speech priming; we don't care about either of those things.

We can go and see where our endpoint is.

And so now we have an endpoint that we can work with.
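Querying the published endpoint is just an HTTP GET against the prediction URL with the utterance as a query parameter. The URL shape below follows the LUIS v3 prediction API; the region, app id, and prediction key are placeholders, and the URL is only built here, not sent.

```python
from urllib.parse import urlencode

# Build (but don't send) a LUIS v3 prediction URL for the production slot.
# region, app_id, and prediction_key are placeholders from the Azure portal.
def build_luis_query(region, app_id, prediction_key, utterance):
    base = (f"https://{region}.api.cognitive.microsoft.com/"
            f"luis/prediction/v3.0/apps/{app_id}/slots/production/predict")
    params = urlencode({"subscription-key": prediction_key, "query": utterance})
    return f"{base}?{params}"

url = build_luis_query("westus", "<app-id>", "<key>", "book me a flight to Seattle")
print(url)
```

The JSON response would then contain the top-scoring intent (e.g. "flight booking") and any extracted entities, like the location.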

So yeah, I mean, that's pretty much all you really need to know about LUIS.

But I think we're all done for cognitive services.

So we're going to keep our notebook around, because we're still going to use the Jupyter notebook for some other things.

But what I want you to do is make your way over to your resource groups.

Because if you've been pretty clean, it's all within here; we'll just take a look.

So we have our QnA.

All of our stuff is here; I'm just making sure it's all there.

And so I'm just gonna go ahead and delete this resource group.

And that should wipe away everything, okay? For the cognitive services part.

Alright, so we're all good here.

And I'm just going to go off... I'll leave this open, because it's always a pain to get back to it and reopen it. But let's make our way back to the home here and the Azure Machine Learning Studio.

And now we can actually explore building up machine learning pipelines.

Okay, so we are on to the ML kit, follow along here.

So we're going to learn how to build some pipelines, the first i think is the easiest will be auto automated ml are also known as auto ml.

The idea here is it's going to just build up the entire pipeline for us.

So we don't have to do any thinking we just say what kind of model we want to run and have it to make a prediction.

So what we'll do is a new automated ml, and we're going to need a data set.

So I don't have one.

But the nice thing is they have these open datasets.

So if you click here, you'll see there is a bunch here.

And a lot of these you'll come across quite often, not just on Azure but other places, like this diabetes one; I've seen it, like, everywhere, okay.

And so if we just click here, maybe we can read a bit more.

So, diabetes dataset: 442 samples with 10 features, ideal for getting started with machine learning algorithms.

It's one of the popular scikit-learn toy datasets.

It's probably where I've seen it before, though it's not showing up there.

You scroll on down, and you can see the dataset is available in Azure Notebooks, Azure Databricks, and Azure Synapse.

The thing is, we have these values of age, sex, BMI, BP, and then Y is what it's trying to predict; it's trying to say something like the likelihood of you having diabetes or not. And it's not a boolean value.

So it's not a binary classifier.

With binary classification you'd say, do you have diabetes or not; here you make a prediction to say what the likelihood, or this value, would be if you gave other values in there.

But anyway, this is the predicted value. A lot of times the features are called X, so everything here is X.

And this is considered Y, the actual prediction.

So sometimes it's named Y, and sometimes it's actually named what it is.

But that's just what it is here.
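The same diabetes data ships as a scikit-learn toy dataset, which makes it easy to peek at the X/y split described above (this assumes scikit-learn is installed locally):

```python
# Peek at the feature matrix (X) and target (y) of the diabetes toy dataset.
from sklearn.datasets import load_diabetes

data = load_diabetes()
print(data.data.shape)         # (442, 10): 442 samples, 10 features (the X)
print(data.target.shape)       # (442,): the continuous value to predict (the y)
print(data.feature_names[:4])  # ['age', 'sex', 'bmi', 'bp']
```

Because the target is a continuous number rather than a yes/no label, this is a regression problem, which is exactly what AutoML infers below.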

So we'll close that off.

And so we'll choose the diabetes set.

And it will be data set one.

And so it will worry about feedback later.

So we'll click on sample diabetes will hit next.

And here it's going to try to figure out what kind of model we want.

We have to create a new experiment, a container to run the model in, so I'll just say "diabetes". "My diabetes"; it sounds a bit odd, but that's what it is.

The target column we want to train it to predict is the Y; it's usually the Y. We don't have a compute cluster.

So I'll go ahead and create a new compute.

We have dedicated or low priority.

Technically low priority would do, but with low priority, depending on compute node availability, your job may be preempted, and I just want this done.

I'm going to stay with dedicated for the time being, and we're going to stick with CPU.

If we go with this one, it does take about an hour to run.

When I ran this, it took about an hour.

So if you don't mind, it's only going to cost you 15 cents.

But if you want this done a lot sooner, I'm going to try to do something a little bit more powerful.

So I'm just trying to decide here, because if it only takes an hour, I might run it on something more powerful; that's 90 cents, which might be overkill, because it's not really deep learning.

It's just statistical stuff.

So, "training on large datasets"... I wouldn't say ours is large; "real-time inference" and other latency-sensitive ones.

Why this one? I'm just looking here, because this one's 29 cents and this one's more expensive.

But it has 32 gigabytes of RAM.

This one was 28 cents, with 14 gigabytes of RAM and storage.

So this one's the highest in the tier. Again, you can choose the cheaper one, you just have to wait a lot longer; I just want to see if it finishes a lot faster, okay, without having to go to the GPU level.

So I don't think GPU is gonna help too much here.

The compute name is "my diabetes machine".

Minimum number of nodes: you set the count here if you want dedicated nodes provisioned, plus a maximum.

I guess I just want one node, right? We will go ahead and... oops, compute name must be 2 to 16 characters long.

What is it? Is it too long? Okay, there we go.

We'll give it a moment here.

Yeah, it's gonna spin up the cluster.

So it does take a little bit of time to start this.

So I'll see you back here when this is done.

Okay.

Great.

So after a short little wait there, it looks like our cluster is running.

If we double check here, we can go to compute, I believe that shows up under here under the compute clusters.

So there it is, this is slightly different.

This one shows you applications and this one is just size, etc.

and click in here see nodes and runtimes.

We'll go make our way back here.

And we'll go ahead and hit next.

And notice, I think what it generally does is look at your prediction value, maybe sample a bit of it, and say, okay, you probably want regression.

So: to predict continuous numeric values.

So the thing is, if the label was text, or if it was just zero and one, it probably would choose classification; but as you saw, our Y value is a number that's all over the place.

It thinks it's regression.

So I think that's a good indicator there.

So let's go with regression.

You know, you might want it as a binary classifier, but that's another story.

As soon as we created it, it just started; it didn't give us the option to say, hey, I want to start running it now.

Notice here, it's going to do featurization.

So that means it's automatically going to select features for us, which is what we wanted.

It's set up to do regression, we have some configuration here.

So the training time is three hours; that doesn't mean it's going to train for three hours.

But that's, I guess, the timeout for it.

You could set a metric score threshold, so it has to meet at least this to be successful.

If it's not going to make it, it probably would quit out early. Cross-validation just makes sure the data is good.

You can see blocked algorithms.

So: TensorFlow DNN, TensorFlow linear regression.

If it was using a DNN, a deep neural network, I probably would have chosen the GPU to see if it would go faster.

Look at the primary metric: normalized root mean squared error. Sometimes the exam will actually ask you what the primary metric for a given problem is.

So it's good to take a look and see what they actually use.

For that, I'll probably be sure to highlight that stuff in the actual lecture content.

But this will take some time to run.

We have data guardrails; it won't actually populate, I guess, until we've run it.

So we'll just let it run.

And I'll see you back here when it's done.

Okay.

All right.

So after a very, very, very long wait, our auto ml job is done.

It took 60 minutes.

Using a larger instance didn't save me any time.

I don't know if maybe if I ran a GPU instance, it would be a lot faster.

I'd be very curious to try that out.

But not something for this certification course.

So we go into here and yeah, the cheaper instance was the same amount of time.

So it probably just needs GPUs.

It really depends on the type of models it's running.

So we have a bunch of different algorithms in here.

It ran about 42 different models.

I thought last time I ran it I saw a lot more, but you can see there are all kinds of models it's running, and then it chooses the top candidates.

So it shows Voting Ensemble.

We don't really cover ensembles in the course, because it gets too deep into ML, but an ensemble is when you use two or more weaker models and combine their results in order to make a more powerful ML model.

Okay.
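A voting ensemble in miniature: average the predictions of weaker models to get one combined prediction. The two "models" below are trivial stand-ins just to show the mechanics, not anything AutoML actually trained.

```python
# Two deliberately weak "models": one under-shoots, one over-shoots.
def model_a(x):
    return 2.0 * x

def model_b(x):
    return 2.0 * x + 4.0

# A voting (averaging) ensemble combines their predictions.
def voting_ensemble(x, models):
    preds = [m(x) for m in models]
    return sum(preds) / len(preds)

print(voting_ensemble(3.0, [model_a, model_b]))  # 8.0
```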

So here, we'll get some explanation.

I tried this before, and I didn't get really good information.

So if we go here, like I don't have anything under model performance.

"This tab requires an array of predicted values from the model to be supplied."

We didn't supply any, so we don't get anything. Data Explorer: "select a cohort of the data"; all the data is what we have here.

So select a cohort of the data, that all the data is what we have here.

So like, here, we were seeing age.

And I guess it's just giving us an indicator about the age information.

"Use the slider to show descending feature importance; select up to three cohorts to see the feature importance side by side."

Okay.

So I guess, s5 and BMI.

I don't know what s5 is; we'd have to look up the dataset.

BMI is your body mass index.

So that's a clear indicator as to what affects whether you have diabetes or not.

So that makes sense.

age doesn't seem to be a huge factor, which is kind of interesting.

With individual feature importance, we can go here and kind of narrow in and say, okay, why is this outlier over here? And it's, like, age 79.

Right? So it's kind of interesting to see that information.

So it does give you some explanation as to why things are the way they are.

Over here, we have a little bit more different data.

This is kind of interesting model performance.

I don't know what I'm looking at, but here it's over mean squared error.

So it's that mean squared error calculation there again? Okay.

Yeah, it's something right.

But anyway, the point is that we finally get metrics; I guess we always had to click there, because that makes more sense.

So yeah, there's more values here.

Sure.

Data transformation: sorts of data preprocessing, feature engineering, scaling techniques, and the machine learning algorithm AutoML used.

So you know, if you were a real data scientist, all this stuff would make sense to you.

I think just with time, it'll, it'll make sense.

But even at this point, I'm not sure.

And I don't care about the model, right? If you're building something for real, I'm sure the information becomes a lot more valuable.

So this model is done.

And the idea is that we can deploy it. Oops, let's go back to the actual models, because we actually went into the model.

So we go back to the auto ml here.

I think you can deploy any model that you'd like.

So you can go here and deploy this, like if you prefer a different model, you could deploy it.

If we go into data guardrails, which we kind of skipped over: this is where it does automatic featurization, so it's extracting the features, handling the splitting, and handling missing features.

High cardinality too: like if you have too much data, it might have to do dimensionality reduction.

So that's just saying like, hey, if this is a problem, maybe we would do some pre processing or stuff to make it easier to work with the data.

So if we're happy with this, we can go ahead and deploy it.

So let's say deploy, just say infer my diabetes.

Here we have AKS and ACI:

Azure Container Instances.

Let's do Azure Kubernetes Service, because we did the other one here.

Say diabetes.

Prod, maybe AKS diabetes.

Oh, compute name, sorry.

One of the inference ones.

Okay.

So in order to deploy this, we would have to create our pipeline.

I'm not sure if I have enough in my quota here, but let's go give it a go.

So I think what it's wanting is one of these here.

I think we'd want this wherever we are, right? I'm not sure where we are.

If this is US East or West here.

Let's go check.

Studio.

Azure Machine Learning hits us.

No, I never did this before.

I just used the Azure Container Instance.

But I'm just curious here.

say next.

My diabetes prod, we will need to choose some nodes.

The number of nodes multiplied by the virtual machines, number of cores must be greater or equal to 12.

Okay? Now again, if you're not confident, or you're concerned about costs, you can just watch; you don't have to do it yourself, right?

This is again, a fundamental certification, it's not super important to get all the hands on experience yourself.

But I'm just trying to explore this so we can see, right, because I don't care about costs.

It's not a big deal to me on my machine here. So, a problem: it says I must use a VM SKU with more than two cores and four gigabytes.

Well, what did I choose? Did I not choose the right one? We'll try this again.

Oh, I chose three.

Yeah, that's fair.

What did it want, 12 cores? It said that before, I think.

Invalid parameters, more details? Because that name already exists.

It's giving us all this trouble. This one we'll go ahead and delete; you'd think it wouldn't matter, like I wouldn't have to delete it.

But that's fine.

This one failed.

Now, what's the problem? Quota exceeded, so I can't do it.

I'd have to go make a support request to increase it.

So it's not a real big deal.

I guess what we could do, instead of doing it on AKS, is just deploy to a container instance, if it will let us. Notice I don't have to fill in anything additional.

It'll just deploy I think.

Great.

And so I guess we'll let that deploy.

And I'll see you back here in a bit.

Okay.

Alright, so I'm back here, checking up on my AutoML.

So if we go over to compute, then inference clusters, we don't have anything under there. If we go over to our experiments, under our diabetes here.

Because we did choose to deploy the model.

Right, we clicked deploy.

So it should have created an ACI instance, let's make our way over to the portal.

The reason why it might not be showing up is because I'm just running out of compute.

Because again, it's a quota thing.

It's not a big deal for us to get a deploy.

So we're not going to do anything with it.

But yeah, so we can see that we have a container over here, and it's running.

So we must be able to see if we go to endpoints here.

Here it is.

Right, being under models was my problem.

So pipeline endpoints: I think if we had deployed our designer pipeline, it would have shown up under there.

But here we have our binary pipeline, or our diabetes prod pipeline.

So if we wanted to like test data, you know, we could pass stuff in here.

I think if we wanted to try to just like see this in action, I'm not sure if it's going to work, but we'll give it a go.

So if we go into our sample diabetes data set, and we just explore some of the data, we should be able to kind of select out some values, because I don't know what these values mean.

So let's just say like 36 oops, 36.

But we already know that BMI is the major factor here.

Sex is either one or two.

So we'll say two. BMI, we'll say 25.3.

The BP will be 83 or whatever.

Oops.

83.

Here.

s 160.

s two can be 99.63 4545 and 5.10.

We're running out of metrics here; 82.

Why doesn't it give us all of them? Oh, I guess it does.

It's up to six.

Okay, so let's go ahead and test that.

And we got a result back: 168.

So that is AutoML, all complete there for you.
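Under the hood, that test tab is just POSTing JSON to the endpoint's scoring URI. A hedged sketch of what the request body looks like; the column names follow the sample diabetes dataset, but the exact values here are illustrative stand-ins for the ones typed into the form:

```python
import json

# Sketch of the JSON body the endpoint's Test tab sends to the scoring URI.
# Column names mirror the sample diabetes dataset; values are illustrative.
payload = {
    "data": [
        {"AGE": 36, "SEX": 2, "BMI": 25.3, "BP": 83.0,
         "S1": 160, "S2": 99.6, "S3": 45.0, "S4": 4.5, "S5": 5.10, "S6": 82}
    ]
}
body = json.dumps(payload)
print(body)

# Actually calling the endpoint would look roughly like this
# (scoring_uri is a hypothetical placeholder for the URI shown under Endpoints):
#   req = urllib.request.Request(scoring_uri, body.encode(),
#                                {"Content-Type": "application/json"})
#   result = json.loads(urllib.request.urlopen(req).read())
```

For a regression model like this one, the response is just a list of predicted values; the run above got back 168.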

Yeah, so there you go.

Alright, so let's take a look here at the visual designer because it's a great way to get started very easily.

If you don't know what you're doing, and you want something a little bit more advanced than AutoML with some customization, it's great to start with one of these samples.

So let's go ahead and expand and see what we have here.

We have binary classification with custom Python script, tune parameters for binary classification, multi class, multi class classification, so letter recognition, text classification, all sorts of things.

Usually binary classification is pretty easy.

I'm looking for one that is pretty darn simple.

Let's go take a look here.

So this one says the sample shows how to use filter-based feature selection to select features.

Binary classification: how to predict outcomes related to customer relationships using binary classification, and how to handle imbalanced datasets using SMOTE.

And modules; I'm not really worried about balancing. A customized Python script to perform cost-sensitive binary classification; tune parameters.

So it tunes model hyperparameters to find the best model during the training process. Let's go with this one.

This one seems okay to me.

And so what you can see here is that it's using a sample data set, I believe, I think this is a sample.

And if you wanted to see all of them, you can literally drag them out here and do things with them.

I haven't actually built one end to end yet for this; again, I don't think it's super important for this level of exam.

But this just shows you that there's a pre built one, if you've started to get the handle of ml, you know, the full pipeline.

This isn't too confusing.

So at the beginning, here, we have our classification data.

And then what it's going to do is say select columns in the data set.

So it says exclude the column names workclass, occupation, native-country; so it's doing some preprocessing there, excluding that data. It might be interesting to go look at that dataset.

So if we go over to our data sets tab, it should show up here, I believe.

Maybe because we haven't committed or submitted this, we can't see that data set yet.

But we'll move on for a moment: next we want to clean our data.

So here's saying clean all the columns.

So, custom substitution value; let's see if we can see what it's substituting out.

It's not saying what. So: clean missing data.

So I'm not sure what it's cleaning out there.

But because that would suggest that it's using some kind of custom script, I'm not sure where it is.

But that's okay.

We have split data, pretty common to split your data.

So you would have a training and test data set, it's usually really good to randomize it.

So you want to randomize it, then split it.
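The randomize-then-split step that designer module performs can be sketched in plain Python (the function name and 70/30 ratio here are illustrative, not the sample's exact settings):

```python
import random

def train_test_split(rows, test_fraction=0.3, seed=42):
    """Toy sketch of a split module: shuffle first so the split isn't
    biased by the file's row order, then cut into train and test."""
    shuffled = rows[:]                      # copy so the input isn't mutated
    random.Random(seed).shuffle(shuffled)   # the "randomize" step
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]   # the "split" step

train, test = train_test_split(list(range(100)))
print(len(train), len(test))  # 70 30
```

Shuffling matters because files are often sorted (by date, by label, and so on); splitting a sorted file without shuffling can give the model a test set that looks nothing like its training set.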

And that's just so you get better results. Then it has model hyperparameter tuning.

So the idea is that it's going to use ML to figure out the best parameters for tuning.
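At its simplest, hyperparameter tuning is just a search loop: try candidate parameter values, score each on validation data, keep the best. A toy sketch, with made-up data and a made-up one-parameter "model":

```python
# Toy hyperparameter tuning: pick the decision threshold that maximizes
# accuracy on a small validation set. Data is invented for illustration.
values = [(0.2, 0), (0.4, 0), (0.55, 1), (0.7, 1), (0.9, 1)]  # (score, label)

def accuracy(threshold):
    # Fraction of examples where "score >= threshold" matches the label.
    return sum((s >= threshold) == bool(y) for s, y in values) / len(values)

candidates = [0.1, 0.3, 0.5, 0.7, 0.9]   # the "grid" of hyperparameter values
best = max(candidates, key=accuracy)      # the search loop
print(best, accuracy(best))  # 0.5 1.0
```

Real tuners (like the Tune Model Hyperparameters module) search smarter than a plain grid and tune many parameters at once, but the loop is the same: propose, score, keep the best.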

Over here we have the two-class decision tree, where it's going to do some work; it's going to score our model, and then it's going to evaluate our model and see if it's successful.

So this is all set up to go.

So all we're going to do is go to the top here, to this settings wheel.

And we need to choose some type of compute.

So I'm going to go here, and we have this one here.

But I'm going to go create; as with my diabetes one, I'm going to go ahead and make a new one.

And it says: recommended, use a predefined configuration to quickly set up compute for training.

This one looks okay, I don't know if it needs two nodes.

But I guess we can do this one.

So we'll just say binary was just like binary pipeline.

Okay.

Say save, save.

Hopefully it's making a good suggestion.

And we will have to wait for that to spin up.

It's going to take a little bit of time.

Okay, so I'll see you back here in a moment.

Alright, so I got a little message saying that that is ready.

So what we can do, I think it was here, my notebook instance.

Now that's not it, but I definitely saw a pop-up on my screen.

You might have seen it too; you have to be paying close attention for that.

But if you go over, it says that it's ready to go.

So what I'm going to do is make my way back over here, we're going to select our compute, there is our binary pipeline, I'm going to select that.

And there are some other options, we're not gonna fiddle around with that, we're going to go ahead and hit submit.

So we need a new experiment.

So I'm going to just say, binary pipeline.

We'll hit submit.

Okay, and so this is now running.

So after a little while here, we're going to start seeing these go green.

So this is not started.

We'll give it a moment here.

So we can see some kind of animation.

And there it goes, it's off to the races.

There's not much to do here.

This is going to take a while.

I don't know; I've never run this one in particular.

So I don't know if it's an hour or 30 minutes.

So I'll see you back when it's done running.

But yeah, it's not that fun to watch, but it's cool that you get a visual illustration.

So I'll see you back in a bit.

I just wanted to peek in here and take a look at how it's progressing here and you can see it's still going and it's just cleaning the data, it's still not done.

I'm not sure how long this has been running for; if we go over to our experiments, and into our binary pipeline, and look at the runtime, we're about eight minutes in, and it hasn't done a whole lot.

So it's still cleaning the data; I would have thought it'd be a little bit faster.

I'm kind of used to using AWS and SageMaker.

It doesn't usually take this long there.

But I mean, it's nice that it's going here.

But yeah, so we're almost out of the pre processing phase.

And we'll be on to the model tuning, okay.

Alright, so after waiting a little while, it looks like our pipeline is done.

So if we make our way over to experiments and go to binary pipeline, we can see that it took 14 minutes and 22 seconds. We can go here and see some additional information; there's nothing else really to see, since we saw all the steps already run, and you can see them all here.

Okay. There's nothing under metrics; you're able to log data points and compare data within and across runs, but we really did a single run, so there's nothing to compare.

So let's say we were happy with this, and we want to deploy this model. What I'm going to do is go back to the designer, click back here.

And so now in the top right corner, we can create our inference pipeline.

Remember, submit is going to run it; I don't want to run it again, I just want to go ahead and create ourselves a real-time or batch inference pipeline. Let's say real-time inference pipeline here.

And what this will do is it'll actually create a completely different pipeline.

So here's a completely new one.

But it's specifically designed to do deployment.

Okay, so that one was for training the model.

This one is actually for taking in data and doing inference.

Okay, so what we can do is, we can go ahead and just submit this.

That's it, we'll put this under our binary pipeline here.

We'll go ahead and hit submit.

And I believe that we need a different kind of compute here.

I'm surprised that it's even running.

I guess it has a compute there.

So it's going to run and once it finishes running that I believe that we can go ahead and deploy it.

Okay, so let's just wait for that to finish.

All right.

Alright, so after a little while, there, we ran our inference pipeline.

And so it's definitely something that is ready for use.

The idea is that when we actually use it, it's going to go from this web service input through to this web service output, but that's not so important at this level of certification.

Let's see what it looks like to go ahead and deploy it.

So yep, we have the option between a real time endpoint and an existing endpoint.

We don't have an endpoint yet.

So we'll just say, binary pipeline.

Okay.

Oh, and notice it wants it lowercase: binary-pipeline.

And we have the option between Azure Kubernetes Service and Azure Container Instances; it's a lot easier to deploy, I think, to a container instance.

Because otherwise we would be waiting forever for Kubernetes to start up.

So we're going to do container instance, we have some options like SSL and things like that, not too worried about it.

So we're just going to go ahead and hit deploy.

Okay.

And so that is going to go ahead and deploy that.

So we'll wait for this real time inference, if we go over to our compute, it should spin up.

So this is for AKS.

I don't know if it'll show up here.

I think only I've seen things under here.

But I think this will be for Azure Kubernetes service.

And I don't think we're gonna see it show up under there.

However, we do not need to be running this anymore.

So we'll go ahead and delete the binary pipeline, because we don't have any use for it right now.

And we might need to free it up for something else.

Okay.

So go ahead and delete it, we don't need it.

And coming back to our pipeline, or designer here, I'm just trying to see where we can keep track of it.

I know that it's deploying.

So waiting for real time endpoint.

So I'll see you back here when this is done.

Okay, takes a little bit of time.

Alright, so I think our pipeline is done.

If we make our way over to endpoints, there it is: the binary pipeline.

If we wanted to go ahead there, we could test the data.

And so it actually already has some pre loaded data for us.

We hit test.

It's nice that it fills it in, and we get some results back.

Okay.

So, I mean, that we see like scored labels and income and score probability.

So things like that, that is useful.

So it's getting back all the results, but I don't think it has...

Yeah, it doesn't have scored labels and scored probabilities, which is the value we want to come back here.

So there are our endpoints, and that is the end of our exploration with the designer, okay? Alright, so let's take a look at what it would be to actually train a job programmatically through the notebook.

So remember, we saw these samples over here.

And so we saw this image classification MNIST.

And this is a very popular data set for doing computer vision.

And these are really great; if you want to really learn, you should go through these and just read through them, because they're probably very useful.

I've done a lot of this before.

So for me, it's not too hard to figure out.

But I've actually never ran this one.

So let's run it together.

Again, we want to be in JupyterLab.

So you can go here and click it there or go to the compute.

If it's been a bit finicky.

And just here, we'll get a tab open here.

And we'll see how this goes.

So what I want to do is just make sure we're back here; I can click into this one.

And we have a few.

So there's part one, and then we have the deploy stage.

So let's look at training.

I don't know if we really need to deploy, but we'll give it a read here.

So: in this tutorial, you train an ML model on remote compute resources; you'll use the training and deployment workflow via the Azure Machine Learning service.

In a notebook, there's two parts to this.

This is using the MNIST dataset and scikit-learn with the Azure Machine Learning Python SDK.

MNIST is a popular dataset with 70,000 grayscale images; each image is a handwritten digit of 28 by 28 pixels, representing a number from zero to nine. The goal is to create a multi-class classifier to identify the digit a given image represents.

So we're gonna learn a few things here, but let's just jump into it.

So the first thing is that we need to import our packages.

So here it does that %matplotlib inline, just to make sure that when we plot things we visually see them; then we import NumPy, matplotlib itself, and azureml.core, and then we import Workspace since we'll need one.

And then I guess it just checks the version making sure if we have the right version here.

Okay, so this is 1.28.0.

It's pretty common, even in AWS, that they'll have a script in here to update it in case it's out of date.

I'm surprised it didn't include it in here, but that's okay.

We'll scroll on down.

And by the way, we're using the Python 3.6 Azure ML kernel.

If this is the future, you know, they might retire the old one and you're using 3.8.

But it should generally work; if it's in their sample notebooks, I assume they try to maintain it.

Okay, so connect to a workspace.

So: create a workspace object from an existing workspace; it reads the file config.json.

So what we'll do is go run that I assume it's kind of like a session.

And so here it says it found our workspace.

So really it's not creating a workspace, it's just returning the existing one so that we have it as a variable here. Next: create an experiment.

So that's pretty clear.

We saw experiments in the auto ml and the designer.

So we'll just hit run there.

Okay.

So we named it core ml.

And we said experiment.

I wonder if it actually created one yet.

Let's go over to experiment to see if it's there.

So there it is. Cool, that was fast; I thought it would print something out, but it didn't do anything there.
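The shape of those two cells, sketched together. This assumes the azureml-core package and a config.json downloaded from the portal; the experiment name is illustrative, and it's wrapped in a function here so it's only a sketch, not something run in this page:

```python
def connect_and_create_experiment():
    """Sketch of the connect-to-workspace and create-experiment cells
    (requires azureml-core and a config.json; not run here)."""
    from azureml.core import Workspace, Experiment

    # Reads config.json (subscription, resource group, workspace name)
    # and returns a handle to the EXISTING workspace; nothing is created.
    ws = Workspace.from_config()

    # Attaches to the experiment if it exists, otherwise registers it,
    # which is why it showed up under Experiments right away with no output.
    exp = Experiment(workspace=ws, name="sklearn-mnist")
    return ws, exp
```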

So: create or attach an existing compute resource by using Azure Machine Learning Compute, a managed service that lets data scientists, etc., etc.

Yada, yada, yada.

So, create a compute target.

Creation of a compute takes about five minutes.

So let's see what it's trying to create.

So we have some environment variables it wants to load in; I'm not sure how these are getting in here.

I'm not sure where environment variables are set in Jupyter, or even how they get fed in.

But apparently they're somewhere.

It doesn't matter, though, because these are defaulting.

So here's a CPU cluster, minimum zero and maximum four nodes; it's going to use a Standard_D2_v2, which is the cheapest one that we can run.

I kind of want something a little bit more powerful just for myself.

Just because I want this to be done a lot sooner.

But again, you know, if you don't have a lot of money, just stick with what's there.

Okay.

So and this is CPU clusters.

So if we go here, I just want to see what our options are.

I'm not sure why it's not showing us options here.

You don't have enough quota for the following VM sizes.

So it's probably because I'm running more than one VM right now.

Yeah, so I've said I've hit my quota.

Okay, so like I probably would have to request for an hour.

So I think this is the one I'm using.

What's the difference here? This Standard_D2_v2's vCPUs.

The same one, right? So, request a quota increase.

I don't know if this is instant or not; I'd have to make a support ticket.

And all that's going to take long.

So the thing is, the reason is that I'm running the AutoML and the designer in the background here, trying to create all the follow-alongs at the same time.

But what I'll do is come back when I'm not running one of those other ones, and then I'll continue on.

But we're just here at the step where we want to create a new compute.

Okay.

All right, so I'm back and I freed up one of my compute instances, if I go over here, now I just have the one cluster instance for my auto ml.

But what we'll do here is again, just read through this.

So this will create a CPU cluster, zero to four nodes, Standard_D2_v2; I guess we'll just stick with what is here. Just reading through, it looks like it tries to find the compute target, and otherwise it's going to provision it; it will create the cluster with a minimum number of nodes, scaling down after a specific idle time.

So wait for completion.

So we'll go ahead and hit play.

And so that's going to go and create us a new cluster.

So we're just going to have to wait a little while here for it to create, about five minutes, and I'll see you back here in a moment.
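The compute cell follows a standard get-or-create pattern. A sketch, assuming azureml-core (names and sizes mirror the notebook's defaults; wrapped in a function so it's not run here):

```python
def get_or_create_cpu_cluster():
    """Sketch of the compute-cluster cell (requires azureml-core and a
    workspace; cluster name is the notebook's default)."""
    from azureml.core import Workspace
    from azureml.core.compute import AmlCompute, ComputeTarget
    from azureml.core.compute_target import ComputeTargetException

    ws = Workspace.from_config()
    cluster_name = "cpu-cluster"
    try:
        # Reuse the cluster if it already exists in the workspace.
        target = ComputeTarget(workspace=ws, name=cluster_name)
    except ComputeTargetException:
        # Otherwise provision it: cheapest VM size, scales 0..4 nodes,
        # and scales back down to zero when idle so it costs nothing at rest.
        config = AmlCompute.provisioning_configuration(
            vm_size="STANDARD_D2_V2", min_nodes=0, max_nodes=4)
        target = ComputeTarget.create(ws, cluster_name, config)
        target.wait_for_completion(show_output=True)
    return target
```

The min_nodes=0 setting is why a cluster is usually cheaper to leave around than a compute instance: it only bills while a job is actually running on it.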

Alright, so the cluster started up, if we go back over here, we can see that it's confirmed, I don't know why it was so quick, but it went pretty quick there.

So we're on the next section here, explore the data.

So: download the MNIST dataset and display some sample images.

So it's just talking about it being the open data set.

The code retrieves the FileDataset object, which is a subclass of Dataset. A FileDataset references single or multiple files of any format in your datastore.

The class provides you with the ability to download or mount the files to your compute by creating a reference to the data source location.

Additionally, you register the data set to your workspace for easy retrieval.

During training.

There's a bit more how tos, but we'll give it a good read here.

So we have the Open Datasets MNIST class.

It's kind of nice that they have that reference there.

So we have a data folder, we make the directory, we are getting the dataset, we download it, and then we are registering it.

So let's go ahead and run that.

Not sure how fast but it shouldn't take too long as it's running.

We'll go over here the left hand side refresh, and we'll see if it appears.

Not as of yet.

There it is.

Go into here, maybe explore the data.

I'm not sure what it would look like, because these are all images, right? Yeah, so they're in ubyte.gz format.

So they're in compressed files, we're not going to be able to see within them but they're definitely there.

We know they're there.
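That download-and-register cell, sketched. This assumes azureml-core and azureml-opendatasets are installed (the dataset name passed to register is illustrative), and is wrapped in a function so it's only a sketch:

```python
def download_and_register_mnist():
    """Sketch of the dataset cell (requires azureml-core and
    azureml-opendatasets; not run here)."""
    import os
    from azureml.core import Workspace
    from azureml.opendatasets import MNIST

    ws = Workspace.from_config()
    data_folder = os.path.join(os.getcwd(), "data")
    os.makedirs(data_folder, exist_ok=True)

    # A FileDataset referencing the compressed ubyte.gz files.
    mnist_file_dataset = MNIST.get_file_dataset()
    mnist_file_dataset.download(data_folder, overwrite=True)

    # Register it so it shows up under the workspace's Datasets tab.
    mnist_file_dataset = mnist_file_dataset.register(
        workspace=ws, name="mnist_opendataset",
        description="training and test dataset", create_new_version=True)
    return mnist_file_dataset
```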

So that is now registered into our datasets. Display some sample images: load the compressed files into NumPy arrays, then use matplotlib to plot 30 random images from the dataset. Note that this step requires the load_data function included in the utils.py file in the sample folder; we have it over here, we just double click, and it's a very simple file to load data.

And we'll go ahead and run that.

And it's pretty, pretty simple here.

So load_data, X_train, X_test: are we setting up our training and testing data here? It kind of looks like it, because it says train and test data.

That's when we usually see that kind of split.

And again, it's doing a random split.

So that sounds pretty good to me.

Let's show some randomly chosen images.

Yeah, so I guess they do set up the training data here.

And then down below, we're actually showing the images.

So here's some random images. Next up: train on a remote cluster.

So for this task, you submit the job to run on the remote training cluster you set up earlier. To submit your job:

Create the directory, create a training script, create a script run configuration, and submit the job.

So first, we'll create our directory.

And notice it created this directory over here.

Because I guess it's going to put the training file in there.

And so this will actually write to a training file.

This makes quite a bit of sense.

So if we click into here, it should now have a training file.

We'll just give it a quick read, see what's going on here.

So a lot of times when you create these training files (and this is the same if you're using AWS SageMaker), you create a train file because it's part of the framework; it's just how the frameworks work.

But you'll have these arguments.

So it could be like parameters to run for training.

And there could be a whole sorts of ones here.

Here they are loading in the training and testing data.

So it's the same stuff we saw earlier when we were just viewing the data.

Here it's doing a logistic regression.

It's using liblinear.

So, a linear solver; maybe a linear learning model.

They're setting multi-class on there.

And so what it's going to do is fit; fit is actually performing the training.

And then what it's going to do is make a prediction on the test set.

Then we're going to get accuracy, so we're getting kind of a score.

So notice that it's using accuracy as an evaluation metric, I suppose, right.

And then at the end we're going to dump the model; a lot of times you have to save the model somewhere.

So they're outputting the actual model weights and all that other stuff.

It's a .pkl file.

A pickle file, I believe.

But if you're using TensorFlow, you would use TensorFlow Serving at the end of this; a lot of times frameworks like PyTorch, TensorFlow, or MXNet will have a serving layer.

But since we're just using scikit-learn, which is very simple, it's just going to dump out that file into our outputs. This is probably going to run in a container.

So this outputs folder isn't necessarily going to be the outputs folder in here; it's more like the outputs of the container.

And a lot of times the container will then place this somewhere.

So like, it'll be saved on the container.

But it'll be passed out to the registry or something like that, like the model registry.

So anyway, we ran this, and that generated the file. We don't want to keep running it multiple times, but it probably just overwrites the file.

So it's not a big deal.
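The shape of that generated train.py, sketched. It assumes scikit-learn, joblib, azureml-core, and the sample's utils.py (load_data) are available, so it's wrapped in a function here rather than run; treat the paths and names as mirroring the tutorial, not exact:

```python
def train_main():
    """Sketch of the generated train.py (requires scikit-learn, joblib,
    azureml-core, and the sample's utils.load_data; not run here)."""
    import argparse, glob, os
    import joblib
    import numpy as np
    from azureml.core import Run
    from sklearn.linear_model import LogisticRegression
    from utils import load_data

    # Arguments passed in by the ScriptRunConfig.
    parser = argparse.ArgumentParser()
    parser.add_argument("--data-folder", dest="data_folder")
    parser.add_argument("--regularization", type=float, default=0.5)
    args = parser.parse_args()

    # Load the compressed ubyte.gz files into NumPy arrays.
    d = args.data_folder
    X_train = load_data(glob.glob(os.path.join(d, "**/train-images*"), recursive=True)[0], False) / 255.0
    X_test = load_data(glob.glob(os.path.join(d, "**/t10k-images*"), recursive=True)[0], False) / 255.0
    y_train = load_data(glob.glob(os.path.join(d, "**/train-labels*"), recursive=True)[0], True).reshape(-1)
    y_test = load_data(glob.glob(os.path.join(d, "**/t10k-labels*"), recursive=True)[0], True).reshape(-1)

    run = Run.get_context()  # handle to the current Azure ML run
    clf = LogisticRegression(C=1.0 / args.regularization,
                             solver="liblinear", multi_class="auto")
    clf.fit(X_train, y_train)                       # fit = the actual training
    acc = np.average(clf.predict(X_test) == y_test)  # accuracy on the test set

    run.log("regularization rate", args.regularization)
    run.log("accuracy", acc)  # these show up under the run's Metrics tab

    # Anything written to ./outputs is uploaded to the run history.
    os.makedirs("outputs", exist_ok=True)
    joblib.dump(clf, "outputs/sklearn_mnist_model.pkl")
```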

Here it says notice how the script gets the data from the data folder.

So here it's saying the data folder; I guess we didn't look at that.

So if we go to the top here, I didn't see where this data folder is.

I wasn't really paying attention to where that was, because it looks more like that's where it's loading the data in from.

So here it says that anything written to this directory is automatically uploaded to your workspace.

So I guess that's just how it works.

So it probably will end up in here then.

So, for utils.py to be referenced by the training script to load the dataset correctly, copy the file over.

So we will run this to copy the file over.

So I'm guessing, did it put it into here? Yeah, it just put it in there.

Because when it actually packages things up for the container, it's going to bring that file over, because it's a dependency.

So configure the training jobs.

So: create a ScriptRunConfig with the directory that contains the script, the compute target, the training script train.py, etc.

Sometimes like in other frameworks, we'll just call them estimators.

But here's just called a script run config.

So I'm just trying to see what it's doing.

So scikit-learn is the dependency.

Okay, sure.

We'll just hit run.

Okay.

And then down below here, we have script run config.

So it looks like we're passing our arguments that we're saying this is our data folder, which is apparently here, we're mounting it.

And then we're setting regularization to 0.5.

Sometimes you'll pass in dependencies in here as well, I guess these are technically our parameters that are getting configured up here at the top, right.

But sometimes you'll have dependencies if you're in it, including other files here.

And I guess that's up here, right? So see where it says environment.

And then we're saying include the Azure ML defaults and scikit-learn, and stuff like that.

And so then it gets passed in the end.

That makes sense to me. We haven't run that yet, because we don't see any number here.

Because we don't see any number here.

Submit the job to the cluster.

So let's go ahead and do that.

It says it returns a preparing or running state as soon as the job is started.

So it's in a starting state.
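The environment, ScriptRunConfig, and submit cells can be sketched together. This assumes azureml-core; the folder, dataset, cluster, and experiment names mirror the walkthrough but should be treated as illustrative, and the function isn't invoked here:

```python
def configure_and_submit():
    """Sketch of the ScriptRunConfig and submit cells (requires
    azureml-core; names are illustrative; not run here)."""
    from azureml.core import Environment, Experiment, ScriptRunConfig, Workspace
    from azureml.core.conda_dependencies import CondaDependencies

    ws = Workspace.from_config()
    dataset = ws.datasets["mnist_opendataset"]  # the registered FileDataset

    # The environment (azureml-defaults plus scikit-learn) gets baked into
    # the Docker image that will run the script on the cluster.
    env = Environment("tutorial-env")
    env.python.conda_dependencies = CondaDependencies.create(
        pip_packages=["azureml-defaults", "scikit-learn", "joblib"])

    src = ScriptRunConfig(
        source_directory="sklearn-mnist",  # folder holding train.py + utils.py
        script="train.py",
        # Mount the dataset into the container and pass its path plus the
        # regularization hyperparameter as script arguments.
        arguments=["--data-folder", dataset.as_mount(),
                   "--regularization", 0.5],
        compute_target="cpu-cluster",
        environment=env)

    # Submit returns immediately with the run in a preparing/starting state.
    run = Experiment(ws, "sklearn-mnist").submit(config=src)
    return run
```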

Monitor remote run.

So in total, the first run takes about 10 minutes; but for subsequent runs, as long as the dependencies in the Azure ML environment don't change, the same image is reused, and hence the start time is much faster.

Here's what's happening while you wait.

Image creation: a Docker image is created matching the Python environment specified by the Azure ML environment.

The image is built and stored in the ACR, the Azure Container Registry associated with your workspace.

Let's go take a look and see if that's the case.

Sometimes, like resources aren't visible to us, so I'm just curious, do we actually see it? Okay.

And Yep, there it is.

Okay, so that did not lie.

Image creation and uploading takes about five minutes, and this stage happens once for each Python environment, since the container is cached for subsequent runs. During image creation, logs are streamed to the run history; you can monitor the image creation process using these logs, wherever those are. If the remote cluster requires more nodes to execute the run than are currently available, additional nodes are added automatically.

Scaling typically takes about five minutes.

And I've seen this before, where if you're in your compute here, and sometimes they'll just say like scaling because there's just not enough.

Running: in this stage, the necessary scripts and files are sent to the compute target, then datastores are mounted or copied, and the entry script is run. The entry script is actually the train.py file.

While the job is running, stdout and the files in the logs directory are streamed to the run history.

You can monitor the run's progress using these logs. The outputs directory of the run is copied over to the run history in your workspace, so you can access these results.

So you can access these results.

You can check the progress of a running job in multiple ways.

This tutorial uses the Jupyter widget.

So looks like we can run this, watch the progress.

So maybe we will run that.

And so it's actually showing us the progress.

That's kind of cool.

I really like that.

So it's just a little widget showing us all the things that it's doing.
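The two monitoring options, sketched side by side (requires azureml-widgets inside Jupyter for the widget; wrapped in a function and not run here):

```python
def monitor_run(run):
    """Sketch of the monitoring cells: either watch live in the notebook
    or block until done (requires azureml-widgets / azureml-core)."""
    from azureml.widgets import RunDetails

    # Live-updating widget in the notebook: status, logs, streamed metrics.
    RunDetails(run).show()

    # Or block until the run finishes, streaming build/run logs to stdout.
    run.wait_for_completion(show_output=True)
```

Note the widget is non-blocking, which is why a later cell can still run while it updates; wait_for_completion is the blocking variant used further down the notebook.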

Let's go take a look and see what we can see under experiments and our run pipeline.

It was talking about things like outputs and things like that.

So over here in the outputs and logs, I'm just curious.

Is this the same thing? I'm not sure if this tails.

Yeah, it does tail; it just moves, so we can actually monitor it from here.

I guess that's what it was talking about.

So here we can see that it's setting up Docker; it's actually building a Docker image.

And then, I'm not sure, did it send it? I mean, it's on ACR already, I think.

It looks like it's still installing extracting packages.

So maybe it's actually running on the image now.

So just wait there, we pop back over here.

You know, we can see the information is probably identical.

Yep, it is.

So we're three minutes in; it's probably not that fun to watch it in real time and talk about it.

So let's just wait until it's done.

I'll see you back then.

Okay.

Alright, so I'm about 17 minutes in here.

I'm not seeing any more movement here.

So it could be that it is done.

It does say that if you run this next step here, it will wait for completion.

Specify show_output=True for a verbose log.

So it actually did output a moment ago.

So maybe it actually was done.

I just ran it twice.

So I'm not sure if that's going to cause me issues there.

Because I can't run the next step unless I stop this, I'll individually cancel this one here.

I think I can just hit interrupt the kernel; there we go.

Okay, so I think that it's done.

Okay, because it's 18 minutes in.

And I don't see any more logging in here.

It's just not very clear.

And also, the logs, we just have a lot of stuff going on here.

Like, this is so much.

So you know, if we were keeping pace, we probably would have seen all of these.

Yeah, so we just had a few more outputs there.

But I think that it's done.

Okay.

It's just there's nothing definitively saying like, done.

You know what I'm saying? And then up here, it doesn't say... oh, I guess it does say that.

It's done.

All right.

So yeah, I just never ran it this way before.

So I just don't know.

So I guess it does definitively say that.

I already ran this.

So we don't need to run that.

Again.

I just feel like we'll get stuck there.

So let's take a look at the metrics.

So the regularization rate is 0.5 and the accuracy is around 0.92, which is pretty good.

The last step: the training script wrote the sklearn model into the outputs directory.

I want to see if it's actually in our environment here.

I don't think it is.

So it'll put this somewhere.

It's in our workspace somewhere, but we just don't know where... oh, it's right here.

Okay.

So it puts the actual model right there.

And so you can see the associated files from the run. Okay, we'll run it:

register the model in the workspace so that you can work with other collaborators.

Sure.

So if I click on that here, and we go back over to our models, it is now registered over here.

Okay.

So we're done part one.

I don't want to do all these other parts.

Training is enough as it is, but let's just take a look at the deploy stage.

Okay, so for prerequisites.

It says: have a workspace, which we have, and we are loading our registered model.

Okay, we registered it. Then you have to import packages; we are going to create a scoring script, deploy the model to ACI, and test the model. If you want to do this, you can go through all the steps. It does talk about the confusion matrix, and that is something that can show up on the exam: actually talking about a confusion matrix.

But we do cover that in lecture content.

So you generally understand what that is.

But, you know, I'm just I'm too tired.

I don't want to run through all this.

And there's not a whole lot of value other than reading through it yourself here.

So I think we're all done here.

Okay.

Okay, one service we forgot to check out was data labeling.

So let's go over there and give that a go.

So I'm going to go ahead and create ourselves a new project; I'd say "my labeling project", and we can say whether we want to classify images or text. We have multi-class, multi-label, bounding box, and segmentation; let's go with multi-class.

I'll go back here for a second... multi-class.

Whoops.

I don't know if we create a dataset here, but we could probably upload some local files.

Let's say "my Star Trek dataset". It does let us choose the image file type here.

Good.

So these are images.

What's it going to tell us here?

This input here is very finicky.

File path: it references a single file or multiple files in your datastore or a public URL.

Okay, so we go next.

If we can upload files directly, that'd be nice.

Ooh, upload a folder.

I like that.

So what we'll do: we do have some images in the free AI course here, under Cognitive Services assets. We'll go back here, and I think objects would be the easiest.

But we just want a folder right? So yeah, we'll just take objects.

Yep, we'll upload the 17 files.

Yep, we'll just let it stick to that path.

That seems fine to me.

We'll go ahead and create it.

And so now we have a dataset there. We'll go ahead and select that dataset and say next. Your dataset is periodically checked for new data points, and new data points will be added as tasks; it doesn't matter.

We're only doing this for test.

Enter the list of labels: we have TNG, DS9, Voyager, TOS.

Those are the Star Trek series.

Label which Star Trek series the images are from; hit next.

I don't want it enabled, but you can enable the ML-assisted labeler.

I'm gonna say No, we'll create the project.

Okay, I'll just wait for that.

Great.

I'll see you back here in a moment.

Okay.

All right.

So I'm back here; I actually didn't have to wait long.

I think it instantly runs.

I just assumed like I was waiting for a state that says completed.

But it's not something we have to do.

So we have 0 of 17 progress; we're going to go in here and label some data. We can view the instructions.

It's not showing up here.

But that's fine.

If we go to tasks, we can start labeling.

So what series is this from? This is Voyager; we'll hit submit.

This is Voyager; we'll hit submit.

This is TOS; we'll hit submit.

This is TNG.

This is TNG.

This is DS9, DS9, Voyager.

Voyager, TNG.

DS9. You get the idea, though. You've got some options here, like changing the contrast if someone can't see the photo, or rotating it. This is Voyager, Voyager, TNG, DS9, Voyager, Voyager.

And we're done.

So we'll go back to our labeling job here; we'll see we have the breakdown there, and our dataset is labeled.

We can export our dataset as CSV, COCO, or an Azure ML dataset; I believe that means it will go back into the datasets over here.

This will make our lives a little bit easier.

Go back to data labeling.

Okay.

So if you just grant people access to the studio, they'd be able to go in here and jump into that job.

Okay.

If we go over to the data set, I believe we should have a labeled version of it now.

So my labeling project.

So I believe that is the labeled stuff here, right? Yep, so it's labeled.

So there you go.

We're all done with Azure Machine Learning.

And so all that's left is to do some cleanup.

Okay, so we're all done with Azure Machine Learning; if we want to, we can go to our compute and just kill the services we have here.

Now, if we go to the resource group and delete everything, it'll take all these things down anyway, but I'm a bit paranoid, so I'm just going to do this manually, okay.

Hit delete.

Okay, so we'll go back to portal.azure.com.

And I'm going to go to my resource groups, where everything is contained.

It should be all contained within my studio resource group; just be sure to check these other ones as well.

And we can see all the stuff that we spun up.

We'll go ahead and hit Delete resource group.

I don't know if it includes everything, because I don't see the Container Registry, right? I know it puts stuff there.

I guess it does.

It's this Container Registry.

So that's pretty much everything, right? That'll take down everything. And if you're paranoid, all you can do is go to All Resources and double-check over here, because if there's anything running, it'll show up here, okay? But that's pretty much it.

And so just delete and we're all done.

Hey, this is Andrew Brown from ExamPro, and we're on to the AI-900 cheat sheet. This one is seven pages long, so let's get to it.

At the top of our list, we're starting with artificial intelligence: a machine that can perform jobs that mimic human behavior.

Machine learning: a machine that gets better at a task without explicit programming.

Deep learning: a machine that has artificial neural networks, inspired by the human brain, to solve complex problems.

A data scientist is a person with multidisciplinary skills in math, statistics, predictive modeling, and machine learning, who makes future predictions.

A dataset is a logical grouping of units of data that are closely related and/or share the same data structure.

Examples of this would be MNIST and COCO. Data labeling: the process of identifying raw data (images, text files, videos) and adding one or more meaningful, informative labels to provide context so a machine learning model can learn. Supervised learning: data that has been labeled for training. Unsupervised learning: data that has not been labeled.

The ML model needs to do its own labeling. Reinforcement learning: there is no data; there is an environment, and the ML model generates data through many attempts to reach a goal.

You have neural networks, also abbreviated NN: a network of nodes organized into input, hidden, and output layers, used to train ML models.

We have deep neural networks (DNN):

a neural net that has three or more hidden layers, hence "deep" learning. Backpropagation moves backwards through the neural net, adjusting weights to improve the outcome on the next iteration.

This is how a neural net learns. Loss function: a function that compares the ground truth to the prediction to determine the error rate, i.e. how badly the network performed. Activation function: an algorithm applied to a hidden layer node that affects the connected output.

ReLU is a very common one. You have a dense layer, which is when the next layer increases the number of nodes, and a sparse layer, which is when the next layer decreases the number of nodes. You have GPUs, which are specially designed to quickly render high-resolution images and videos concurrently.
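Since ReLU and loss functions show up on the exam, here's a minimal pure-Python sketch of both, just to make the definitions concrete. This isn't tied to any framework; the numbers are made up.

```python
def relu(x):
    # ReLU activation: pass positive values through, clamp negatives to zero
    return max(0.0, x)

def mse_loss(ground_truth, predictions):
    # Loss function: compare the ground truth to the predictions to get
    # an error rate (here, mean squared error)
    errors = [(t - p) ** 2 for t, p in zip(ground_truth, predictions)]
    return sum(errors) / len(errors)

print(relu(-3.0))  # 0.0
print(relu(2.5))   # 2.5
print(mse_loss([1.0, 2.0, 3.0], [1.0, 2.5, 2.0]))  # (0 + 0.25 + 1.0) / 3
```

A lower loss means the network performed better; backpropagation's whole job is to nudge the weights so this number goes down on the next iteration.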

They are commonly used for non-graphical tasks such as machine learning and scientific computing.

You have CUDA, which is a parallel computing platform and API by NVIDIA that allows developers to use CUDA-enabled GPUs for general-purpose computing, also known as GPGPU.

On to the second sheet here, for the ML pipeline. We have preprocessing; I didn't outline this in the course,

So I'm going to just do that now.

So: preparing data and feature engineering before passing data to an ML model for training or inference. You might have data cleaning, which is correcting errors within the dataset that could negatively impact the results; data reduction, reducing the amount of data or applying dimensionality reduction to reduce the dimensions of the input vectors; feature engineering, transforming data into numerical vectors to be ingested by the ML model; and sampling or resampling, balancing a dataset to be uniform across labels by adding or removing records.
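As a tiny illustration of the feature-engineering idea, here's a hedged sketch of min-max scaling, one common way to squash a raw numeric feature into the [0, 1] range before feeding it to a model. The data is made up and this is just one of many preprocessing techniques.

```python
def min_max_scale(values):
    # Rescale a list of numbers linearly into the range [0, 1]
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

ages = [20, 30, 40, 60]  # a made-up raw feature column
print(min_max_scale(ages))  # [0.0, 0.25, 0.5, 1.0]
```

Scaling like this keeps one large-valued feature from dominating the others during training.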

post processing translate the output of an ml model back into human readable format, training.

In the process of training the model serving the process of deploying the model to an endpoint to be used for inference inference invoking an ml model by sending requests expecting back a prediction, we have real time endpoints that optimize optimize for small or single item payloads.

They return results quickly and usually use a dedicated running server.

Batch transform: optimized for larger batch predictions; the server runs only for the duration of the batch.

There's forecasting: making a prediction with relevant data, an analysis of trends; it isn't guessing. Predicting: making a future prediction without relevant data, using statistics to predict future outcomes; it's more of a guess, using decision theory.

For performance evaluation, metrics are used to evaluate different machine learning algorithms. Just to select a few here: for classification we have accuracy, F1 score, precision, and recall; for regression we have MSE, RMSE, and MAE. Remember: mean squared error, okay.
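To pin down those regression metrics, here's a small sketch computing MSE, RMSE, and MAE from made-up actual and predicted values; the exam only needs the definitions, but seeing the arithmetic helps.

```python
import math

def regression_metrics(actual, predicted):
    errs = [a - p for a, p in zip(actual, predicted)]
    mse = sum(e ** 2 for e in errs) / len(errs)  # mean squared error
    rmse = math.sqrt(mse)                        # root mean squared error
    mae = sum(abs(e) for e in errs) / len(errs)  # mean absolute error
    return mse, rmse, mae

mse, rmse, mae = regression_metrics([3.0, 5.0, 7.0], [2.0, 5.0, 9.0])
print(mse, rmse, mae)  # errors are 1, 0, -2, so MSE = 5/3 and MAE = 1.0
```

Note how MSE punishes the one big error (2) much harder than MAE does, which is the main practical difference between them.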

Jupyter Notebooks: a web-based application for authoring documents that combine live code, narrative text, equations, and visualizations.

Classification is the process of finding a function to divide a labeled dataset into classes or categories.

A confusion matrix is a table to visualize the model's predictions (predicted) versus the ground truth (actual). Take the time to go look up how confusion matrices work, because they will absolutely ask you questions about them on the AI-900 exam.
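Since confusion matrices will show up on the exam, here's a small worked example: counting true/false positives and negatives for a made-up binary spam classifier, then deriving accuracy, precision, and recall from those four cells.

```python
def confusion_counts(actual, predicted, positive="spam"):
    # Count each predicted-vs-actual combination for a binary classifier
    tp = sum(1 for a, p in zip(actual, predicted) if a == positive and p == positive)
    tn = sum(1 for a, p in zip(actual, predicted) if a != positive and p != positive)
    fp = sum(1 for a, p in zip(actual, predicted) if a != positive and p == positive)
    fn = sum(1 for a, p in zip(actual, predicted) if a == positive and p != positive)
    return tp, tn, fp, fn

actual    = ["spam", "spam", "ham", "ham", "spam", "ham"]
predicted = ["spam", "ham",  "ham", "spam", "spam", "ham"]
tp, tn, fp, fn = confusion_counts(actual, predicted)

accuracy  = (tp + tn) / (tp + tn + fp + fn)  # how often the model is right overall
precision = tp / (tp + fp)                   # of predicted positives, how many were real
recall    = tp / (tp + fn)                   # of real positives, how many were caught
print(tp, tn, fp, fn)  # 2 2 1 1
```

The exam questions are usually about reading those four cells off a table, so knowing which cell is which matters more than the code itself.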

Okay. Regression is the process of finding a function to correlate a labeled dataset into continuous variables, i.e. numbers.

Clustering is the process of grouping unlabeled data based on similarities and differences.
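To make clustering concrete, here's a toy one-dimensional k-means sketch: assign each point to its nearest centroid, move each centroid to the mean of its points, and repeat. The data and starting centroids are made up; real clustering works in many dimensions, but the loop is the same idea.

```python
def kmeans_1d(points, centroids, iterations=5):
    # Toy 1-D k-means: no labels are used, grouping is purely by distance
    for _ in range(iterations):
        clusters = {c: [] for c in centroids}
        for p in points:
            nearest = min(centroids, key=lambda c: abs(p - c))
            clusters[nearest].append(p)
        # Move each centroid to the mean of its assigned points
        centroids = [sum(ps) / len(ps) for ps in clusters.values() if ps]
    return sorted(centroids)

print(kmeans_1d([1, 2, 3, 10, 11, 12], [0.0, 5.0]))  # [2.0, 11.0]
```

The algorithm never saw a label; the two groups fall out of the data's own similarity, which is exactly what makes clustering unsupervised.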

Okay, on to our third sheet here. Cognitive Services: an umbrella AI service that enables customers to access multiple AI services with an API key and an endpoint. We have the Decision category: Anomaly Detector, identify potential problems early on; Content Moderator, detect potentially offensive or unwanted content; Personalizer, create rich, personalized experiences for every user.

For Language: Language Understanding, build natural language understanding into apps, bots, and IoT devices; QnA Maker, create a conversational question-and-answer layer over your data.

Text Analytics.

Detect sentiment, key phrases, and named entities.

Translator: detect and translate more than 90 supported languages.

For Speech we have Speech to Text, transcribe audible speech into readable text; Text to Speech, convert text to lifelike speech for more natural interfaces; Speech Translation, integrate real-time speech translation into your apps; and Speaker Recognition, identify and verify the people speaking based on audio. For Vision we have Computer Vision, analyze content in images and videos; Custom Vision, customize image recognition to fit your business needs; and Face, detect and identify people and emotions in images.

Knowledge mining is a discipline in AI that uses a combination of intelligent services to quickly learn from vast amounts of information.

And there are three steps to this. Ingest: content from a range of sources, using connectors to first- and third-party data stores. Enrich: the content, with AI capabilities that let you extract information, find patterns, and deepen understanding. Explore: the newly indexed data via search, bots, existing business applications, and data visualizations. On to our fourth sheet.

We have Microsoft's AI principles, so this is Responsible AI; remember, there are six.

Fairness: AI systems should treat all people fairly. Reliability and safety: AI systems should perform reliably and safely. Privacy and security: AI systems should be secure and respect privacy.

Inclusiveness: AI systems should empower everyone and engage people. Transparency: AI systems should be understandable. Accountability: people should be accountable for AI systems. Next, common ML workloads.

So for this, we have anomaly detection: the process of finding outliers within a dataset; an outlier is called an anomaly.

Computer vision is when we use ML neural nets to gain a high-level understanding of digital images and videos.

NLP is machine learning that can understand the contents of a corpus, or body of text.

Conversational AI is technology that can participate in conversations with humans.

I know it feels like we're repeating the same things in different ways.

But that's the way we're going to learn well here, okay.

The Azure Machine Learning service allows you to provision ML Studio to build and maintain ML models and pipelines.

Under Author, we have Notebooks: Jupyter notebooks, an IDE to write Python code to build ML models.

Remember, you can launch it in JupyterLab and VS Code as well; that will probably show up as an exam question, just so you know. AutoML: a completely automated process to build and train an ML model. Designer: a visual drag-and-drop designer to construct and build pipelines.

We have Datasets: data that you can upload, which will be used for training; data can be versioned.

Open Datasets are publicly hosted datasets commonly used for learning how to build ML models.

Experiments are logical groupings of runs; runs are ML tasks performed on virtual machines or containers. Pipelines: ML workflows you have built, or have used in the Designer. You have training pipelines,

pipelines to build and train an ML model, and inference pipelines, pipelines that use a trained model to make predictions on real data.

Then you have Models, a model registry containing trained models that can be deployed, and Endpoints.

When you deploy a model, it's hosted on an accessible endpoint, e.g. a REST API.

So a real-time endpoint invokes an ML model for inference; a pipeline endpoint invokes the running of a pipeline,

so that's for CI/CD. Under Manage, we have Compute: the underlying computing instances used for notebooks, training, and inference. We have compute instances: workstations that data scientists use to work with data and models.

This is generally for your notebooks. Compute clusters: scalable clusters of virtual machines for on-demand processing of experiment code, so training and preprocessing. Inference clusters: deployment targets for predictive services that use your trained models.

So that's for inference. Attached compute: links to existing Azure compute resources, such as virtual machines or Azure Databricks clusters. There's another one in there, but it's probably not going to show up on the exam: Apache Spark, though I guess it's covered under Databricks.

Environments: reproducible Python environments for machine learning experiments. Datastores: securely connect to your storage service on Azure without putting your authentication credentials in; it supports Blob Storage, File Share, Data Lake Storage Gen2, Azure SQL Database, Azure PostgreSQL, and MySQL databases. Data labeling: have human or ML-assisted labeling to label your data for supervised learning; that's human-in-the-loop labeling and machine-learning-assisted data labeling. We have Linked Services: external services that you can connect to the workspace, such as Azure Synapse Analytics; I think that's the only one you can connect right now. Then, on to Text Analytics.

So now we're out of the Azure Machine Learning services and into the Cognitive Services. Text Analytics: sentiment analysis, find out what people think of your brand or topic.

Labels include negative, positive, mixed, or neutral, with confidence scores ranging from zero to one. Opinion mining: granular information about the opinions related to aspects; granular data with a subject and an opinion tied to a sentiment.
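To make the label-plus-confidence-score idea concrete, here's a hedged mock: this is not the real Text Analytics API, just a made-up response shaped like one, picking the overall label from per-class confidence scores.

```python
def overall_sentiment(confidence_scores):
    # Pick the label with the highest confidence score (scores sum to ~1.0)
    return max(confidence_scores, key=confidence_scores.get)

# Made-up dict shaped like a sentiment-analysis result
scores = {"positive": 0.81, "neutral": 0.13, "negative": 0.06}
print(overall_sentiment(scores))  # positive
```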

Key phrase extraction quickly identifies the main concepts in text.

Language detection detects the language the input text is written in.

Named entity recognition (NER) detects words and phrases mentioned in unstructured text that can be associated with one or more entity types.

We have LUIS, Language Understanding: a no-code ML service to build natural language into apps, bots, and IoT devices.

It uses NLU: the ability to transform a linguistic statement into a representation that enables you to understand your users naturally. LUIS's key schema components: we have intents, what the user is asking for.

So a LUIS app contains a None intent. Entities: what parts of the intent are used to determine the answer. Utterances: examples of user input that include intents and entities, which train the ML model to match predictions against real user input. For QnA Maker: generate a bot from a URL, PDF, or .docx file (in their docs it's spelled "do cx"; that's a spelling mistake, it should be .docx). Multi-turn conversation: follow-up prompts to narrow down to a specific answer. Chit-chat: personalized canned responses. Azure Bot Service allows you to host bots.

So you have the Bot Framework SDK, which is an end-to-end SDK to build, test, publish, connect, and evaluate bots; that's the entire pipeline they describe. Bot Framework Composer: a desktop application to design bots that leverages the Bot Framework SDK. So there you go, that's the whole cheat sheet.

Usually I would break it up per service, but there's a lot of intermixing.

So that's why I did it this way.

But, you know, good luck on your exam, and I hope you pass.