Human vs. AI: Reasons Why AI Won't (Probably) Take Your Job

By Claudia Virlanuta • Updated on Jun 13, 2022

For business executives trying to implement artificial intelligence (AI) into a workplace, a common concern of employees is that an AI solution will eventually be used to replace them. While the purpose of AI in business is to increase productivity, that doesn’t necessarily mean jobs will be lost.

A Gartner study quoted by Forbes in 2020 estimated that by 2021, AI implementation would save 6.2 billion hours of human productivity around the world. However, even at this level of increased productivity, AI probably won’t replace very many jobs. In fact, it will likely do the opposite by helping employees use their human intelligence more effectively than they’ve been able to in the past.

In our brief history of using AI tools, AI has proven it’s at its best when used to aid human decisions rather than replace them entirely. To better understand why the increased use of AI is not likely to take your job, let’s discuss the differences between artificial and human intelligence. What is it that makes our human thinking and learning so unique? Also, what might it take for artificial intelligence to become truly humanlike?



What Jobs Can AI Do (Right Now)?

One of the first fields significantly impacted by machine learning was loan underwriting, which became partially automated in the late 1980s. Because millions of Americans hold loans, lending quickly became a "big data" problem perfectly suited for machine learning, which was put to work assessing the creditworthiness of individual applicants.

However, human guidance is still critical within the loan underwriting sector. Humans make the final decisions when it comes to unusual and borderline cases. Humans are also critical to building and maintaining customer relationships, which AI can’t do.
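To make this division of labor concrete, here is a minimal, hypothetical sketch. The feature names, weights, and thresholds are invented for illustration and are not taken from any real underwriting system; the point is only the routing pattern, where the model auto-decides clear cases and sends borderline ones to a human underwriter.

```python
import math

# Hypothetical, hand-picked weights for illustration only -- a real
# underwriting model would learn these from historical loan outcomes.
WEIGHTS = {"income_to_debt": 1.5, "years_of_history": 0.3, "late_payments": -2.0}
BIAS = -1.0

def credit_score(applicant):
    """Logistic score in [0, 1]: higher means more creditworthy."""
    z = BIAS + sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

def route(applicant, approve_above=0.8, deny_below=0.2):
    """Auto-decide clear-cut cases; send borderline ones to a person."""
    score = credit_score(applicant)
    if score >= approve_above:
        return "approve"
    if score <= deny_below:
        return "deny"
    return "human review"
```

An applicant with a middling score lands in the "human review" bucket, which is exactly where the article says human judgment still matters most.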

Marketing is another example of AI supplementing human competency. In marketing, improved targeting based on machine learning has helped reduce the nuisance of calling campaigns and junk mail. As a result, human workers in the marketing sector have more time to maximize worthwhile customer contact.

Other examples of jobs AI can help with today include narrowing down suspect lists in law enforcement and matching people on dating websites and candidates on job sites.



Humans vs. AI: What Are the Differences in Thinking and Learning?

In the examples above, AI is a huge productivity booster. However, there’s still a long way to go for AI-based tools to achieve intelligence levels equal to humans. Today, humans and AI still have many differences in how they “think” and learn.

The differences between human intelligence and artificial “intelligence” were explored in the 2016 Behavioral and Brain Sciences article, “Building Machines that Learn and Think Like People.” The concepts introduced in the article, written by researchers at NYU, MIT, and Harvard, are summarized in the paragraphs below.

Despite AI’s exceptional performance in areas such as object and speech recognition, the researchers identified three examples of tasks that humans do exceptionally well and that AI is still far from replicating.

In the human learning process, we excel at the following skills:


  1. Grounding our learning in intuitive theories of physics and psychology to support new knowledge as we gain it
  2. Building causal models of the world that help us understand our surroundings
  3. Possessing a “learning-to-learn” ability in which we identify what we need to know or need to learn to succeed in a new scenario


What's So Special About Humans?

Let’s go into detail about these three skills that distinguish human learning abilities from AI learning abilities.

The “Building Machines” researchers credit distinctly human cognitive capabilities as one of our most important advantages over AI. These cognitive capabilities present themselves early in childhood development, well before we attempt to learn a new task, and are made up of two categories: intuitive physics and intuitive psychology.

Intuitive physics describes how infants have an early understanding of primitive object concepts such as object permanence (when a baby knows an object continues to exist even if they do not see it) and object solidity (when a baby knows that it’s impossible for a solid object to pass through another solid object). Although as adults we may take these physics concepts for granted, they actually help us a great deal by enabling us to make faster and more accurate predictions, all while learning new tasks in complex environments.

Intuitive psychology describes an infant’s ability to understand that other people have their own goals and beliefs. For example, psychologists use the “Smarties” test to indicate intuitive psychology in children.

In the “Smarties” test, a child is given a Smarties candy box and discovers that the box actually contains pencils. When asked what their friend will think is in the box, the child, after a certain age, can understand that their friend has their own beliefs and will not know about the pencils. Intuitive psychology is an innately human skill and helps us learn, predict, make decisions, and navigate our environment.

The second advantage of human learning is our ability to construct causal models of the world around us. The concepts of intuitive physics and psychology explained above are examples of causal models of the world. In general, causal models help us answer the questions “why” and “how” when observing the world around us. So far, humans can build causal models in real time much faster than AI can.

Model-based reinforcement learning is a reward-driven form of causal modeling, and humans use it often. If reinforcement learning rings a bell (no pun intended), you may have heard of Pavlov’s dogs.

The physiologist Ivan Pavlov conducted a famous experiment in which his dogs were trained to expect food, or positive reinforcement, when Pavlov rang a bell. Eventually, Pavlov found that the dogs would salivate at the sound of the bell alone. In this case, the dogs learned to salivate at the bell sound through a causal model. For them, the bell meant food was coming. Humans function in a similar way to plan their actions and maximize their rewards.
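As an illustration only (not drawn from the article), this toy Python sketch uses a Rescorla-Wagner-style update, a classic reward-prediction rule from learning theory, to show how repeated bell-and-food pairings strengthen an association over time:

```python
def condition(trials, learning_rate=0.3):
    """Rescorla-Wagner-style update: the bell's predicted value of food
    (association strength) moves toward the actual reward each trial."""
    strength = 0.0  # how strongly the bell predicts food, from 0 to 1
    history = []
    for reward in trials:            # 1.0 = food followed the bell
        error = reward - strength    # prediction error
        strength += learning_rate * error
        history.append(strength)
    return history

# Ten bell-then-food pairings: the association grows toward 1.0,
# just as Pavlov's dogs came to salivate at the bell alone.
history = condition([1.0] * 10)
```

Each trial shrinks the gap between what the bell predicts and what actually happens, which is the "causal model" at work: bell means food.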

The third advantage of human learning discussed by the researchers is our human ability to “learn-to-learn” or, in other words, adapt to our surroundings to understand quickly how to succeed in varying circumstances. The “Building Machines” article uses the example of the famous MNIST character recognition challenge to illustrate human excellence at “learning-to-learn.”

Today, AI still handles handwritten characters far less reliably than printed ones. And while we can train AI on large datasets to recognize a majority of characters in a test, the article identifies two reasons why humans still beat AI in character recognition tests (and why that won’t change any time soon):


  1. Humans learn faster from less data. AI must be trained on datasets with thousands of characters to even come close to human-level performance in character recognition. Humans, on the other hand, can learn a lot from a single handwritten character and even go on to apply that knowledge to other, similar handwritten characters.
  2. Humans have the power of inference. AI has difficulty with handwritten characters because of the variability with which they appear. Handwriting is as unique as each human, and your letter “H” may look completely different from your neighbor’s. Humans can process the differences in unique handwriting to infer the identity of the character in front of them, even if it doesn’t look exactly the way they were taught in grade school.
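As a toy illustration of learning from a single example (a deliberately simplified stand-in for MNIST, using invented 3x3 "characters" rather than real handwriting data), a one-nearest-neighbor matcher can recognize a messy character after seeing just one stored template per letter:

```python
def hamming(a, b):
    """Count of differing pixels between two flattened binary bitmaps."""
    return sum(x != y for x, y in zip(a, b))

def classify(bitmap, templates):
    """One-shot nearest neighbor: return the label of the single stored
    example whose bitmap is closest to the input."""
    return min(templates, key=lambda label: hamming(bitmap, templates[label]))

# One 3x3 example each of a hypothetical "H" and "L" (1 = ink).
templates = {
    "H": [1, 0, 1,  1, 1, 1,  1, 0, 1],
    "L": [1, 0, 0,  1, 0, 0,  1, 1, 1],
}

# A slightly different, "messy" H is still closest to the stored H.
messy_h = [1, 0, 1,  1, 1, 0,  1, 0, 1]
```

Real character recognizers need far richer features than raw pixel matching, but the sketch captures the human trick the article describes: generalizing from one example rather than thousands.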



What Are Efficient Human Learning and AI Applications?

As AI development progresses, researchers continue to use the human mind as inspiration for what AI might eventually become. Today, the human brain is already the model for many AI applications. Specifically, the “Building Machines” article pinpoints the human abilities of selective attention, working memory, and experience replay as some of the most promising areas to inform AI development.

Let’s look at these concepts one by one:


  1. Selective attention allows humans to focus on a single input or task while ignoring competing distractions. In AI, the concept of selective attention can be applied as a model processes data. Instead of asking an AI tool to process all of its input data at once, selective attention lets a model work through input data in smaller, more digestible pieces and pick only those inputs that are relevant to its learning goals.

    According to “Building Machines,” implementing selective attention in AI has led to “substantial performance gains in a variety of domains, including machine translation, object recognition, and image caption generation.” Selective attention allows the model to focus on smaller sub-tasks rather than solving an entire problem in one shot.

  2. Working memory in humans holds the small details needed to complete the task at hand without committing them to long-term memory. Working-memory mechanisms are already being built into neural networks and can enhance the memory performance of some AI applications.
  3. Experience replay, while most commonly an AI term, mirrors the human experience of learning through trial and error from past experiences. Because of this capability, humans typically learn more than AI from fewer examples.

    To return to character recognition as an example, feeding 500,000 examples of a handwritten character “D” into an AI tool as training material will help that tool identify a handwritten “D” with substantial accuracy. A human will likely never see 500,000 unique examples of a handwritten “D” in a lifetime. And yet, humans still exceed AI’s ability to identify handwritten characters because we learn quickly from trial and error as we learn to read and continue reading throughout our lives. Using the concept of experience replay, developers hope to build an AI-based tool that learns as efficiently as a human brain.
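The selective-attention idea from the first item above can be sketched as a toy softmax weighting, the basic mechanism behind attention in machine translation and captioning models. The inputs and relevance scores below are invented for illustration:

```python
import math

def softmax(scores):
    """Normalize relevance scores into attention weights that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(inputs, scores):
    """Weighted sum of inputs: high-scoring inputs dominate the result,
    low-scoring ones are effectively ignored -- the 'selective' part."""
    weights = softmax(scores)
    return sum(w * x for w, x in zip(weights, inputs))

# Hypothetical relevance scores: the model focuses on the second input
# and largely ignores the other two.
result = attend([10.0, 50.0, 20.0], [0.1, 4.0, 0.2])
```

Because the weights are a smooth function of the scores, the model can learn where to look, which is what drives the performance gains the article mentions.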

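Experience replay from the third item is commonly implemented as a fixed-size buffer of past experiences that a learner samples from at random. This is a minimal sketch of the idea, not any particular library's API:

```python
import random

class ReplayBuffer:
    """Fixed-size store of past (state, action, reward) experiences.
    Sampling random mini-batches lets a learner revisit old trials,
    mirroring how people keep learning from past successes and mistakes."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.memory = []

    def add(self, experience):
        if len(self.memory) >= self.capacity:
            self.memory.pop(0)  # evict the oldest experience
        self.memory.append(experience)

    def sample(self, batch_size):
        return random.sample(self.memory, min(batch_size, len(self.memory)))
```

Replaying a stored experience many times squeezes extra learning out of each trial, which is one way developers try to close the data-efficiency gap with human learners.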

As different as AI is from humans, AI and machine learning have made incredible progress in recent decades in replicating human intelligence. Developers will undoubtedly continue to look to human intelligence for inspiration in driving AI development in areas like self-driving cars, medicine, genetics, drug design, and robotics.

But for now, our natural human intelligence is still superior. While machine performance takes inspiration from humans, AI is still not able to holistically replicate how a person thinks and learns. Humans will continue to learn from less data than machines while generalizing better than machines and with more flexibility.

And while machines may be better at math compared to the average human, we are much better at tasks that are less clear-cut, like character recognition. We can also make better inferences, thanks to the causal nature of our instinctual representations.

AI developers still hope to come closer to replicating human learning. But for now, take comfort in the fact that AI will most likely not take your job. It could, in fact, make your job more enjoyable. And it’s all thanks to our uniquely human intelligence.


Claudia Virlanuta

CEO | Data Scientist

Claudia is a data scientist, consultant and trainer. She is the CEO of Edlitera, a data science and machine learning training and consulting company helping teams and businesses futureproof themselves and turn their data into profits.

Before Edlitera, Claudia taught Computer Science at Harvard, and worked in biotech (Qiagen), marketing tech (ZoomInfo), and ecommerce (Wayfair). Claudia earned her degree in Economics from Yale, with a focus on Statistics and Computer Science.