AI Unwrapped

The journey towards the AI organisation is now an integral part of corporate strategy, as companies redesign core systems, processes, and business strategies around AI and its possibilities. The end goal: an organisation in which humans and machines work together within designed digital systems to harness data-driven insights.

This human-AI partnership offers many opportunities. It can:

  • Bring analytics to industries and domains where it’s currently underutilized.
  • Improve the performance of existing analytic technologies, like computer vision and time series analysis.
  • Break down economic barriers, including language and translation barriers.
  • Augment existing abilities and make us better at what we do.
  • Give us better vision, better understanding, better memory and much more.

As organizations move from using the technology in isolated pilots to deploying larger AI systems, they should consider three system models that are currently in play:

  • Cloud-native. Given AI’s ascendance in the enterprise technology arena, it is conceivable that an AI-as-a-service platform could be the next big operating system.
  • Package-adjunct. In an alternative approach to the cloud-native model, several vendors are investing in AI platforms as complements to their core functionality.
  • Open-algorithm. Numerous startups and boutique software shops are developing AI solutions to meet specific business needs, use cases, and industry-vertical problems.

Below we explore exactly what AI is, its history, and two key subfields, Machine Learning and Deep Learning, and explain why the time to launch your AI organisation journey is now.

What is AI?

Artificial intelligence (AI) makes it possible for machines to learn from experience, adjust to new inputs and perform human-like tasks. AI is ubiquitous today, used to recommend what you should buy next online, to understand what you say to virtual assistants such as Amazon’s Alexa and Apple’s Siri, to recognise who and what is in a photo, to spot spam, or detect credit card fraud.

History of AI

The term artificial intelligence was coined in 1956, but AI has become more popular today thanks to increased data volumes, advanced algorithms, and improvements in computing power and storage.

Early AI research in the 1950s explored topics like problem solving and symbolic methods. In the 1960s, the US Department of Defense took interest in this type of work and began training computers to mimic basic human reasoning. For example, the Defense Advanced Research Projects Agency (DARPA) completed street mapping projects in the 1970s. And DARPA produced intelligent personal assistants in 2003, long before Siri, Alexa or Cortana were household names.

This early work paved the way for the automation and formal reasoning that we see in computers today, including decision support systems and smart search systems that can be designed to complement and augment human abilities.

While Hollywood movies and science fiction novels depict AI as human-like robots that take over the world, the current evolution of AI technologies isn’t that scary – or quite that smart. Instead, AI has evolved to provide many specific benefits in every industry. Keep reading for modern examples of artificial intelligence in health care, retail and more.

How Artificial Intelligence Works

AI works by combining large amounts of data with fast, iterative processing and intelligent algorithms, allowing the software to learn automatically from patterns or features in the data. AI is a broad field of study that includes many theories, methods and technologies, as well as several major subfields. Below we explore two key ones: Machine Learning and Deep Learning.

Machine Learning 101

Most recent advances in AI have been achieved by applying machine learning to very large data sets. Machine-learning algorithms detect patterns and learn how to make predictions and recommendations by processing data and experiences, rather than by receiving explicit programming instruction. The algorithms also adapt in response to new data and experiences to improve efficacy over time.

Machine Learning: Supervised Learning

In supervised learning, an algorithm uses training data and feedback from humans to learn the relationship between given inputs and a given output (eg, how the inputs “time of year” and “interest rates” predict housing prices). It is used when you know how to classify the input data and the type of behavior you want to predict, but you need the algorithm to calculate it for you on new data. It works as follows:

  • A human labels the input data (eg, in the case of predicting housing prices, labels the input data as “time of year,” “interest rates,” etc) and defines the output variable (eg, housing prices)
  • The algorithm is trained on the data to find the connection between the input variables and the output
  • Once training is complete–typically when the algorithm is sufficiently accurate–the algorithm is applied to new data
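
To make this concrete, below is a minimal sketch of the supervised workflow using Python and scikit-learn. The feature names and housing figures are invented for illustration; they are not real market data.

```python
# A minimal supervised-learning sketch. Feature names (month,
# interest rate) and prices are invented for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Step 1: labelled data - inputs (month, interest rate) paired with
# a known output (house price).
X = np.array([[1, 4.5], [4, 4.0], [7, 3.5], [10, 3.8],
              [2, 4.2], [6, 3.6], [9, 3.9], [12, 4.4]])
y = np.array([310_000, 335_000, 360_000, 340_000,
              315_000, 355_000, 338_000, 312_000])
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Step 2: training finds the connection between inputs and output.
model = LinearRegression().fit(X_train, y_train)

# Step 3: once sufficiently accurate, apply the model to new data.
print("R^2 on held-out data:", model.score(X_test, y_test))
print("Predicted price:", model.predict([[3, 4.1]])[0])
```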

Machine Learning: Unsupervised Learning

Unsupervised learning is an algorithm that explores input data without being given an explicit output variable (eg, explores customer demographic data to identify patterns). It is used when you do not know how to classify the data, and you want the algorithm to find patterns and classify the data for you. It works as follows:

  • The algorithm receives unlabeled data (eg, a set of data describing customer journeys on a website)
  • It infers a structure from the data
  • The algorithm identifies groups of data that exhibit similar behavior (eg, forms clusters of customers that exhibit similar buying behaviors)
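
As a minimal sketch of this idea, the following Python snippet uses scikit-learn’s k-means algorithm to group invented customer records; the cluster labels come purely from structure the algorithm finds in the data.

```python
# A minimal unsupervised-learning sketch using k-means. The customer
# records (visits per month, average spend) are invented.
import numpy as np
from sklearn.cluster import KMeans

customers = np.array([[2, 20], [3, 25], [2, 22],        # low engagement
                      [12, 180], [11, 200], [13, 190],  # high engagement
                      [6, 80], [7, 95]])                # in between

# The algorithm receives no labels; it infers structure by grouping
# customers whose behaviour looks similar.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(customers)
print("Cluster per customer:", kmeans.labels_)
print("Cluster centres:", kmeans.cluster_centers_)
```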


Machine Learning: Reinforcement Learning

Reinforcement learning is an algorithm that learns to perform a task simply by trying to maximize the rewards it receives for its actions (eg, maximizes the points it receives for increasing the returns of an investment portfolio). It is used when you don’t have a lot of training data; you cannot clearly define the ideal end state; or the only way to learn about the environment is to interact with it. It works as follows:

  • The algorithm takes an action on the environment (eg, makes a trade in a financial portfolio)
  • It receives a reward if the action brings the machine a step closer to maximizing the total rewards available (eg, the highest total return on the portfolio)
  • The algorithm optimizes for the best series of actions by correcting itself over time
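
The sketch below illustrates this loop with a simple epsilon-greedy agent in Python. The three “trades” and their payoffs are hypothetical; the point is only to show action, reward, and self-correction over time.

```python
# A minimal reinforcement-learning sketch: an epsilon-greedy agent
# discovering which of three hypothetical trades pays best on average.
import random

true_mean_reward = [0.02, 0.05, -0.01]  # unknown to the agent
value_estimate = [0.0, 0.0, 0.0]
counts = [0, 0, 0]
epsilon = 0.1  # how often the agent explores at random

for step in range(10_000):
    # Take an action on the environment (pick a trade).
    if random.random() < epsilon:
        action = random.randrange(3)                        # explore
    else:
        action = value_estimate.index(max(value_estimate))  # exploit

    # Receive a noisy reward for that action.
    reward = random.gauss(true_mean_reward[action], 0.1)

    # Correct the value estimate over time (incremental average).
    counts[action] += 1
    value_estimate[action] += (reward - value_estimate[action]) / counts[action]

print("Learned values:", [round(v, 3) for v in value_estimate])
print("Best action found:", value_estimate.index(max(value_estimate)))
```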


Deep Learning 101

Deep learning is a type of machine learning that can process a wider range of data resources, requires less data preprocessing by humans, and can often produce more accurate results than traditional machine-learning approaches (although it requires a larger amount of data to do so). In deep learning, interconnected layers of software-based calculators known as “neurons” form a neural network. The network can ingest vast amounts of input data and process them through multiple layers that learn increasingly complex features of the data at each layer. The network can then make a determination about the data, learn if its determination is correct, and use what it has learned to make determinations about new data. For example, once it learns what an object looks like, it can recognize the object in a new image.
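
As a minimal sketch of these ideas, the NumPy snippet below trains a tiny two-layer network on the XOR problem; real deep-learning systems use far larger networks and frameworks such as TensorFlow or PyTorch.

```python
# A toy neural network: two layers of "neurons" trained with
# gradient descent to learn XOR. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))  # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))  # hidden -> output
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(20_000):
    # Forward pass: each layer extracts features of the input.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: measure the error, nudge the weights to reduce it.
    d_out = (output - y) * output * (1 - output)
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_hid
    b1 -= 0.5 * d_hid.sum(axis=0, keepdims=True)

print(output.round(2))  # approaches [[0], [1], [1], [0]]
```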


Deep Learning: Convolutional Neural Network

A convolutional neural network (CNN) is a multilayered neural network with a special architecture designed to extract increasingly complex features of the data at each layer. It is typically used to analyse visual data (eg, recognising the subject of an image).

It works as follows:

  • The convolutional neural network (CNN) receives an image–for example, of the letter “A”–that it processes as a collection of pixels.
  • In the hidden layers, it identifies unique features–for example, the individual lines that make up “A”.
  • The CNN can now classify a different image as the letter “A” if it finds in it the unique features previously identified as making up the letter.
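
The Keras sketch below shows what such a network can look like in code, shaped for 28x28 grayscale letter images; the layer sizes are illustrative choices, and training would require a labelled image set.

```python
# A minimal CNN sketch in Keras. Layer sizes are illustrative,
# not a reference architecture.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),  # one 28x28 grayscale image
    # Convolutional layers scan the image for local features
    # (edges and strokes - the lines that make up a letter).
    layers.Conv2D(16, kernel_size=3, activation="relu"),
    layers.MaxPooling2D(pool_size=2),
    layers.Conv2D(32, kernel_size=3, activation="relu"),
    layers.MaxPooling2D(pool_size=2),
    # Dense layers combine those features into a classification:
    # one output per letter, A-Z.
    layers.Flatten(),
    layers.Dense(26, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_images, train_labels, epochs=5)  # with labelled data
```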


Deep Learning: Recurrent Neural Network

A recurrent neural network (RNN) is a multilayered neural network that can store information in context nodes, allowing it to learn data sequences and output a number or another sequence. It is used when you are working with time-series data or sequences (eg, audio recordings or text).

It works as follows:

  • Other neural-network architectures assume all inputs are independent from one another. But this assumption doesn’t work well for some tasks. Take, for example, the task of predicting the next word in a sentence–it’s easier to predict the next word if several words that came before are known
  • A recurrent neural network (RNN) neuron receives a command that indicates the start of a sentence
  • The neuron receives the word “Are” and then outputs a vector of numbers that feeds back into the neuron to help it “remember” that it received “Are” (and that it received it first). The same process occurs when it receives “you” and “free,” with the state of the neuron updating upon receiving each word
  • After receiving “free,” the neuron assigns a probability to every word in the English vocabulary that could complete the sentence. If trained well, the RNN will assign the word “tomorrow” one of the highest probabilities and will choose it to complete the sentence
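
The NumPy sketch below walks the “Are you free” example through the core RNN recurrence; the word vectors and weights are random stand-ins, since a real model would learn them from text.

```python
# A sketch of the RNN recurrence: the hidden state carries "memory"
# of earlier words. Vectors and weights here are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
hidden_size, embed_size = 8, 5
W_xh = rng.normal(size=(embed_size, hidden_size))   # input -> hidden
W_hh = rng.normal(size=(hidden_size, hidden_size))  # hidden -> hidden ("memory" loop)
word_vectors = {w: rng.normal(size=embed_size) for w in ["Are", "you", "free"]}

h = np.zeros(hidden_size)  # state before the sentence starts
for word in ["Are", "you", "free"]:
    # Each new state mixes the current word with everything seen so far.
    h = np.tanh(word_vectors[word] @ W_xh + h @ W_hh)
    print(word, "-> state:", h.round(2))

# A trained model would project h onto the vocabulary and pick the
# highest-probability next word (eg, "tomorrow").
```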

Technologies that enable and support AI:

  • Graphics processing units (GPUs) are key to AI because they provide the heavy compute power required for iterative processing. Training neural networks requires big data plus compute power.
  • The Internet of Things generates massive amounts of data from connected devices, most of it unanalyzed. Automating models with AI will allow us to use more of it.
  • Advanced algorithms are being developed and combined in new ways to analyze more data faster and at multiple levels. This intelligent processing is key to identifying and predicting rare events, understanding complex systems and optimizing unique scenarios.
  • APIs, or application programming interfaces, are portable packages of code that make it possible to add AI functionality to existing products and software packages. They can add image recognition capabilities to home security systems, or Q&A capabilities that describe data, create captions and headlines, or call out interesting patterns and insights in data (see the sketch below).
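
As a sketch of that idea, the Python snippet below posts a camera frame to a purely hypothetical image-recognition endpoint; the URL, auth header, and response fields are invented for illustration, and any real vision API will differ.

```python
# A sketch of adding AI to an existing product through an API. The
# endpoint, auth header, and response fields below are hypothetical.
import requests

def tag_security_snapshot(image_path: str) -> list[str]:
    """Send a camera frame to a hypothetical recognition API and
    return the labels it detects."""
    with open(image_path, "rb") as f:
        resp = requests.post(
            "https://api.example.com/v1/recognize",  # hypothetical endpoint
            headers={"Authorization": "Bearer YOUR_API_KEY"},
            files={"image": f},
        )
    resp.raise_for_status()
    return [item["label"] for item in resp.json()["detections"]]

# eg, a home security system could raise an alert whenever "person"
# appears among the returned labels.
```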

Summary

In summary, the goal of AI is to provide software that can reason over inputs and explain outputs. AI will provide human-like interactions with software and offer decision support for specific tasks, but it’s not a replacement for humans – and won’t be anytime soon.
 
Sources: Poplify, McKinsey, Deloitte