by Rishal Hurbans
Artificial Intelligence in Perspective
The buzz words of today: artificial intelligence (AI), machine learning, and deep learning. Business circles, corporations, startups, developers, and the average person have heard about these terms and seen them appear more and more often in news and online chatter. But what do they really mean?
The concepts and methodologies behind artificial intelligence are not new. Known techniques are used in different ways to achieve new and extraordinary things. It’s made possible today by some key factors in the advancement of technology and business, namely:
Computational hardware advancement: Over the last two decades, computational hardware has advanced drastically, allowing general access to powerful hardware at lower cost. Notably, many AI algorithms, especially in deep learning, lean on the GPU (Graphics Processing Unit) in a computer rather than the CPU (Central Processing Unit). Historically, the GPU was required for playing the latest games; however, its highly parallel architecture happens to be well suited to the algorithms that make AI possible.
Lots of data to work with: The catalyst that makes artificial intelligence and machine learning possible is data. The more data an algorithm has to work with, the more refined it can become. The terms big data and data mining boomed in the recent past. Data collected via various mechanisms, and the insights uncovered from that data, provide a solid foundation for artificial intelligence and machine learning. With that said, understanding, cleansing, and preparing data is a crucial step in implementing most artificial intelligence algorithms successfully.
Business opportunities: Businesses strive to make a profit. If an initiative adds no value to the business and does not contribute in some way or another towards increasing profit, a business won’t adopt it. Given the amount of data businesses have acquired, new use cases and opportunities have emerged with the potential to make profit. This makes AI a feasible area to experiment in, even if it’s simply a tool used to understand a business, its offerings, and its customers.
What is AI?
Before we understand how AI works, what algorithms exist, and where they are useful, we need an understanding of what it is. By definition, “artificial” means something that is simulated, not organic, and typically created by humans.
What about intelligence? Is the ability to understand and implement the process of making toast an example of intelligence? Is deciding whether someone has enough money in their account for a transaction an example of intelligence? Is beating a chess master at a game of chess an example of intelligence? Intelligence is a somewhat subjective and philosophical matter, and can be highly subjective.
What is Intelligence?
By definition, “intelligence” is the ability to acquire and apply knowledge and skills. This is still very vague. What is knowledge? By definition, “knowledge” is an acquired understanding of a concept. This is still not something tangible.
As humans, we believe that we’re intelligent since we dominate the world. We have evolved into a species that endures, adapts, and innovates the way we live. However, we see many examples of what could be classified as intelligent behavior from seemingly unintelligent organisms. Ants exhibit complex intelligent behavior when navigating terrain and transporting food. Birds flock together for protection against predators and their environment.
So, the question remains, what is intelligence?
Is something that learns intelligent? If something is continually aware of knowledge acquired in the past and is able to apply it going forward, is it intelligent? As humans, we make the same mistakes over and over again, even though we know the outcome from past experience. Does this make us unintelligent?
Is something that reasons intelligent? If something acquires knowledge in various areas, and strings that knowledge together to form an opinion or way of thinking, is it intelligent?
Is something that creates intelligent? Creativity is strange. It’s a special kind of cognitive activity that is very difficult to quantify. Many humans struggle with creativity because of the unique way different people think. Even the most prolific people can draw a blank when it comes to creativity.
As humans, our sensory systems and survival instincts impact how we learn, reason, and create. For now, let’s agree to look at ourselves when understanding what intelligence is, and assume that we’re intelligent because we’re the most dominant species. Call it arrogance, but we don’t know any better, and it’s more than likely that we’re far from the pinnacle of true intelligence.
This is what we care about. We want to continually improve, as people, teams, communities, and as a species. Let’s take a look at the past. The industrial revolution happened in the 1800s, and the first digital computer was invented in the 1900s, about a hundred years later. From the first digital computer to the first human in space took less than twenty years. From the first human in space to the first personal computer took little more than a decade. From the first personal computer to mobile phones took just several years.
This exponential technological advancement is an example of human progress. We create things that change the way we live and interact with the world, and we rapidly change economies in the process. Looking at the past, it is clear that the driving factors for advancement are money, power, improvement, and sometimes curiosity…
Given this, it’s highly unlikely that we would create something like the Terminator if it does not benefit our advancement. However, malicious people will do malicious things with whatever they have at their disposal. To understand more about what we need to control, let’s look at the different categorizations of artificial intelligence.
Artificial Narrow Intelligence (ANI)
This categorization of AI includes implementations that focus on something very specific. It may be solving a specific problem, learning something very specific, or making decisions about something specific. Examples include: a program that makes smart decisions in the game of chess, a program that predicts the shelf life of products, and a program that understands speech.
These are typically applications of AI that focus on a narrow domain. This is not to say that multiple narrow intelligence implementations can’t be integrated to work together; however, even when they are, the result is not considered artificial general intelligence.
Artificial General Intelligence (AGI)
This is a huge jump from narrow intelligence. This categorization of AI is essentially human-like. If we think about how we think, things get complicated. We have gained so much knowledge that we’re not consciously aware of. We have a bias in the way we think that we don’t consciously know about. It’s not as simple as stringing a bunch of narrow intelligence units together.
Artificial Super Intelligence (ASI)
Super intelligence. This is where things could get scary. We don’t know what could be more intelligent than us. Theories suggest that if we achieve artificial super intelligence, it will surpass our intelligence in a matter of seconds. This is eerie since we may not even be able to comprehend it.
Until we understand exactly how our brains work and are able to quantify how we think and what makes us think that way, we may never understand what true intelligence really is.
Ethics and Responsibility
There is a question of ethics around artificial intelligence and its applications. Could super intelligent machines rebel and rule over us? If an intelligent car injures someone, who’s responsible? This isn’t about the computers and AI, it’s about people.
Whether we like it or not, some people will act maliciously for self-benefit and utilize anything they can towards it. The question of ethics is a philosophical one as well. What makes us feel bad when we do something considered bad? Can we simulate that in a machine? Ultimately, it comes down to benevolent and mature decision making from people in leadership or decision-making roles.
If the intentions are positive, and the impacts are analyzed, we will be less likely to create technology that hurts us as a species. This isn’t something new, most technological breakthroughs could have been widely used for malicious intentions, but they weren’t. I believe that as a species, we exhibit behavior for collective advancement, even though there may be a few bad apples in the batch.
With regards to AI replacing existing jobs, it’s natural progress. The internet boom hurt physical newspaper sales, but industries such as SEO (search engine optimization) and social media management wouldn’t exist without it. Technological progress hurts some occupations, but inevitably creates more. We won’t be able to tell what new industries AI will create until it actually happens.
What Can AI Do?
Okay. Enough with philosophy and ethics. Let’s look at where AI can be applied to add value. As mentioned previously, multiple techniques and intelligent units can be used together or independently. Current implementations of AI usually do one of the following.
Make a finite decision
This has historically been seen in AI that plays games or works within a finite set of rules. Given a current state with known data, an algorithm is able to determine the best decision in its context.
An example is a game of chess. By evaluating the moves that have happened and as many possible future moves as it can, an algorithm can determine the best possible next move. Another example: given a set of images, an algorithm can classify which are pictures of people.
Currently, decisions made by machine learning algorithms usually happen under something called supervised learning, which is elaborated on later.
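To make this concrete, here is a minimal sketch of that kind of look-ahead decision making, known as minimax. Chess is far too large for a few lines, so the game below is a deliberately tiny stand-in of my own choosing (a variant of Nim: players take 1 to 3 stones, and whoever takes the last stone wins), but the idea of evaluating future moves to pick the best decision is the same.

```python
def minimax(stones, maximizing):
    """Score a position: +1 if the maximizing player can force a win."""
    if stones == 0:
        # The previous player took the last stone and won.
        return -1 if maximizing else 1
    scores = [minimax(stones - take, not maximizing)
              for take in (1, 2, 3) if take <= stones]
    return max(scores) if maximizing else min(scores)

def best_move(stones):
    """Choose the number of stones to take with the best guaranteed outcome."""
    return max((take for take in (1, 2, 3) if take <= stones),
               key=lambda take: minimax(stones - take, False))
```

With five stones on the table, `best_move(5)` returns 1: taking one stone leaves the opponent in a position from which they cannot win.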
Make a prediction or recommendation
Making a prediction requires recognizing patterns, and calculating probabilities to find trends. It can also form a model that may not have been known upfront.
This is very similar to making a decision; however, the possible outcome may not even have been known as an option at the start. An example: given a picture, categorize the physical objects in the picture while considering other pictures.
In machine learning, these non-finite outcomes fall under the category of unsupervised learning.
Reason about concepts
As humans, our brains form connections between different pieces of knowledge, which creates the concept of reasoning. By definition, “reasoning” means to think about something in a logical manner. However, with regards to intelligence, it’s a complex cognitive trait that is difficult to understand.
As humans, we form opinions and conclusions based on how we reason without consciously trying to reason. It’s a hidden language that ties the strings of knowledge together in our minds.
So can machines reason? Yes, it has happened already. An example is Google Translate, which can translate phrases between two languages without using an intermediary language understood by humans. We may never fully understand the reasoning that happens for it to achieve this.
This form of intelligence emerges more when concepts of deep learning are applied and implemented over time on a large dataset.
How Does AI Achieve This?
From a technical perspective, the terms artificial intelligence, machine learning, and deep learning are often confused, and one is sometimes unsure how they relate. Artificial intelligence encompasses different techniques to synthesize intelligence.
The following sections describe different types of approaches, where each approach uses specific principles to achieve a goal. The respective approach is selected based on the data available, the goal being pursued, and the nature of the problem. There are many other approaches, but these are the most popular ones used today.
Evolutionary Algorithms
These algorithms are based on concepts of biological evolution. From scientific studies, we have observed the processes and outcomes of reproduction, mutation, and individual selection in natural organisms.
Essentially, these algorithms are based on the premise that organisms reproduce to create more organisms, and that the children are comprised of a combination of their parents’ genetic make-up. However, there are slight variations in the children; this is called mutation. Given the mixed genetic make-up of the children and their mutations, they could potentially be “better” than their parents, even when the parents as individuals are considered “inferior”. Individuals are selected to live on based on their fitness, which is derived from how “good” they are. This is the general process that most living organisms have followed over millions of years to become what they are today, including us humans.
Evolutionary algorithms are suited to problems where a single result is comprised of permutations of finite things. These algorithms are geared towards finding incrementally better solutions, but cannot guarantee finding the optimal solution.
As an example, consider the problem of optimizing package delivery by drones from warehouses to customers, where there are constraints on the weight that the drones can carry. Each action for a specific drone is finite; let’s call this a gene. Permutations of sequences of possible actions across all drones can be generated; let’s call such a sequence a chromosome. And each chromosome will have a different performance; let’s call this fitness.
These chromosomes reproduce to generate new sequences, and the fitness of each is evaluated to determine which should live on. This repeats for a number of generations, or until a specified stopping condition is reached. The fittest chromosome is then used as the best solution found. This means that a good sequence of actions for the drones will eventually emerge.
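The drone scenario is involved, so the sketch below applies the same reproduce-mutate-select loop to a deliberately simple toy fitness function of my own choosing: the number of 1-bits in a chromosome. The parameters (population size, mutation rate, number of generations) are illustrative assumptions, not tuned values.

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

def fitness(chromosome):
    return sum(chromosome)  # toy fitness: count of 1-bits

def crossover(parent_a, parent_b):
    point = random.randrange(1, len(parent_a))  # single-point crossover
    return parent_a[:point] + parent_b[point:]

def mutate(chromosome, rate=0.05):
    # Flip each gene with a small probability: this is the mutation step.
    return [1 - gene if random.random() < rate else gene
            for gene in chromosome]

def evolve(length=20, population_size=30, generations=40):
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(population_size)]
    for _ in range(generations):
        # Selection: the fitter half of the population become parents.
        population.sort(key=fitness, reverse=True)
        parents = population[: population_size // 2]
        # Reproduction with mutation produces the next generation.
        population = [mutate(crossover(random.choice(parents),
                                       random.choice(parents)))
                      for _ in range(population_size)]
    return max(population, key=fitness)

best = evolve()
```

After a few dozen generations, the fittest chromosome is close to all 1s, even though no individual was ever told what a “good” chromosome looks like beyond its fitness score.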
Machine Learning
The underlying algorithms used for machine learning are essentially based on statistics. Machine learning is similar to the concepts behind data mining: an algorithm attempts to find patterns in data to classify, predict, or uncover meaningful trends. Machine learning is only useful if enough data is available, and if the data has been prepared correctly.
As a toy example, consider that the evaluation of password strength depends on the length of the password, whether it contains numbers, and whether it contains special characters. Let’s also assume that we have a list of passwords and their respective strengths. Simply feeding the raw textual representation of the passwords to a machine learning algorithm will not let it learn what makes a password strong.
Extraction of metadata such as the number of characters, the number of special characters, and the number of numeric digits is required before a machine learning algorithm can learn any trends. This metadata and the process of preparing it is imperative to successful machine learning.
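A minimal sketch of this metadata-extraction step might look as follows; the feature names are illustrative, not part of any standard.

```python
def extract_features(password):
    """Turn a raw password string into numeric features a learner can use."""
    return {
        "length": len(password),                              # total characters
        "digits": sum(ch.isdigit() for ch in password),       # numeric digits
        "specials": sum(not ch.isalnum() for ch in password), # special characters
    }

features = extract_features("p@ssw0rd!")
# {'length': 9, 'digits': 1, 'specials': 2}
```

A learning algorithm would then be trained on rows of these numbers paired with the known strength labels, rather than on the raw strings.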
Machine learning consists of two categories, namely supervised learning and unsupervised learning.
Supervised Learning: Most practical solutions use supervised learning. Supervised learning encompasses approaches that classify things into categories — known as classification. It also includes approaches that predict real-valued outputs such as weight or height — known as regression.
Unsupervised Learning: The goal of this type of learning is to model data and uncover trends that are not obvious in its original state. This type of learning is used to learn about data.
There are no answers that the algorithm tries to guess. It discovers “hidden” structures and correlations that are not apparent at face value. This is useful for finding groups of data that are similar — known as clustering. It is also useful for discovering rules that govern portions of the data — known as association.
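As a small sketch of clustering, the following is a bare-bones one-dimensional k-means with two groups. The data points are made up for illustration, and the initialisation (smallest and largest point) is a simplifying assumption that keeps both clusters non-empty here.

```python
def kmeans_1d(points, iterations=10):
    """Group one-dimensional points into two clusters around moving centroids."""
    # Naive initialisation: start the centroids at the extremes of the data.
    centroids = [min(points), max(points)]
    clusters = ([], [])
    for _ in range(iterations):
        clusters = ([], [])
        for point in points:
            # Assign each point to its nearest centroid.
            if abs(point - centroids[0]) <= abs(point - centroids[1]):
                clusters[0].append(point)
            else:
                clusters[1].append(point)
        # Move each centroid to the mean of its assigned points
        # (this sketch assumes neither cluster ends up empty).
        centroids = [sum(c) / len(c) for c in clusters]
    return clusters

groups = kmeans_1d([1.0, 1.2, 0.9, 8.0, 8.3, 7.9])
# → ([1.0, 1.2, 0.9], [8.0, 8.3, 7.9])
```

No one told the algorithm there were two groups of “small” and “large” values; the structure was hidden in the data, which is the essence of unsupervised learning.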
Deep Learning and Neural Networks
Deep learning is a term that sounds very mysterious and complex, and it is to an extent. It is similar to machine learning in that it classifies things and discovers patterns in data; however, deep learning algorithms continually build on what they have already learned. These algorithms may chain a number of different AI approaches together to achieve their goal.
As an example, consider that a large image database exists and there is a need for an algorithm to describe the objects in pictures. Using deep learning, an algorithm is able to find similar objects in different pictures and group them. After a human labels that group, the algorithm understands what that object is going forward; however, it can create further subgroups within that object for different variants. If a grouping of cars is discovered, the algorithm may find different variations of cars such as sedans, hatchbacks, SUVs, etc. Given enough data, these subtle variants can be discovered.
Neural network algorithms are heavily used in deep learning due to their adaptive nature. Neural networks are based on our understanding of how the human brain and nervous system work. The concept is a layered hierarchy of neurons, where each neuron accepts an input, transforms it, and directs the result to other neurons according to the neurons’ weightings.
The weighting on each neuron changes over time as the network becomes better at classifying the input. A higher-weighted neuron will have more influence on the input it receives and thus could strongly impact the outcome of the network. Neural networks are useful for classification problems where classification can change or be refined in the future.
Artificial intelligence is an exciting concept that will shake up industries and the way we live. It’s unlikely that we will create human-hating robots that go bananas and destroy us if we focus on its benevolent uses. This piece attempts to make AI concepts clearer and demystify the buzzwords. Equipped with this knowledge, I challenge you to learn more about AI, and to find valuable practical uses for it in your work and everyday life.