Can an AI ever be considered human?

Artificial Intelligence

For some years now, artificial intelligence has been attracting more and more public attention. You might think the subject is brand new. In reality, AI has been discussed in expert circles for over 60 (!) years, while literature has been playing with similar ideas for centuries.

But why is everyone talking about artificial intelligence now, of all times? Why are there new reports and studies almost every week prophesying an apocalypse on the job market or warning of autonomous robot soldiers and the like?
The answer: because computing power, combined with the ability to store almost unlimited amounts of data at minimal cost, has finally reached the level that artificial intelligence requires.

If you consider the further increases in performance expected over the coming years, even more complex AI solutions will become possible. To illustrate this, let's take a quick look at how computing power has developed. I promise you don't have to be a computer engineer to understand it.

In 1961, you would have had to pay around 145 billion US dollars for a computing power of 1 GFLOPS (1 billion floating-point operations per second). Yes, billions.
What do you think the same computing power costs today?
As of November 2020, 1 GFLOPS cost a ridiculous 4 cents. Yes, 4 cents. Soon you will probably get it for free.

Do you remember Deep Blue? In 1997, this supercomputer beat the best chess player in the world, Garry Kasparov. Deep Blue had a computing power of 11 GFLOPS.

The first Apple Watch delivered about 3 GFLOPS. So if you connected four of these little Apple Watches together, you would have roughly the same computing power as Deep Blue.
And if you own an "old" iPhone 7, your smartphone is about 30 times more powerful than the supercomputer that beat Garry Kasparov.

What is artificial intelligence and why is computing power so important?

"Well"you may say"Modern smartphones are therefore already more powerful than an old supercomputer. But what does that have to do with AI?".

Good question. To answer it, we should first clarify what we mean by artificial intelligence. That is not as easy as it sounds.

If we ask ten people what digitization is, we will get ten different answers. On top of that, such a definition changes over time. In 1980 we would have called something "digitized" if a manual calculation process carried out with paper, pen and calculator had been taken over by software. Today digitization means a lot more, because we have completely different possibilities.

It is the same with artificial intelligence.

Definition of artificial intelligence

Today we can say that we are dealing with artificial intelligence when the following criteria are met:

  • It is a digital system.

  • The system uses algorithms. Depending on the level of maturity of the artificial intelligence, these are written by humans (and improved by the system) or completely developed by the AI itself.

  • The system learns on the basis of existing data and/or independently generates new data in order to learn from it and improve its own algorithms (example: it plays chess against itself and learns from those games, because in chess the computer itself can tell which side won; a toy sketch of this self-play idea follows right after this list). Depending on the application, the system can even learn entirely without a starting database, because it generates and interprets the data itself.

  • The system learns to understand the meaning of data.

  • With networked AI, a large number of individual AI systems learn from each other. This is particularly relevant when the systems are dealing with different situations.
    For example, a system used in diagnosing lung cancer may learn from a similar system used for breast cancer. And an AI system that deals with predicting health risks in general will draw on the knowledge of all these specialized systems.
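
To make the self-play idea from the third bullet a little more tangible, here is a minimal toy sketch in Python. It is nothing like a real chess or Go engine; it uses a miniature game of Nim, chosen purely for illustration, to show the principle: because the program knows at the end of every game which side won, it can improve its own move preferences without any external training data.

    import random
    from collections import defaultdict

    # Toy self-play learner for miniature Nim: 5 stones on the table, each player
    # removes 1 or 2 stones, and whoever takes the last stone wins. As with the
    # chess example above, the program knows at the end of every game who won,
    # so it can adjust its own move preferences without any outside data.
    Q = defaultdict(float)      # learned value of (stones_left, move) for the player to move
    EPSILON, ALPHA = 0.1, 0.2   # exploration rate and learning rate

    def choose_move(stones):
        moves = [m for m in (1, 2) if m <= stones]
        if random.random() < EPSILON:
            return random.choice(moves)                    # explore now and then
        return max(moves, key=lambda m: Q[(stones, m)])    # otherwise use what was learned

    for _ in range(20000):                                 # 20,000 games against itself
        stones, history, player = 5, [], 0
        while stones > 0:
            move = choose_move(stones)
            history.append((player, stones, move))
            stones -= move
            player = 1 - player
        winner = 1 - player                                # the side that took the last stone
        for who, state, move in history:
            reward = 1.0 if who == winner else -1.0
            Q[(state, move)] += ALPHA * (reward - Q[(state, move)])

    # After training, the table tends to prefer the stronger reply wherever one exists.
    for stones in range(1, 6):
        best = max((m for m in (1, 2) if m <= stones), key=lambda m: Q[(stones, m)])
        print(f"{stones} stones left -> take {best}")

Real systems such as chess or Go programs replace this tiny lookup table with deep neural networks, but the learning loop of playing, observing the outcome and adjusting is the same in spirit.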

Reading tips:

Cancer research with machine learning - algorithms that detect tumors

How Artificial Intelligence Learns - Is AI Making the Financial Industry Smarter?

Overview of AI - There is no such thing as one artificial intelligence

It should be clear to everyone that computing power is a decisive factor for machine learning and the processing of large amounts of data. If, for example, a medical assistance system took weeks to evaluate the CT images of a single patient with a quality comparable to that of an experienced doctor, the system would simply be too slow. However, if the AI could use millions of evaluations from the last 20 years as its learning basis and then need only 10 seconds to evaluate a specific case, many people could be helped.
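
As a rough illustration of this speed argument, here is a small Python sketch using synthetic stand-in data and an off-the-shelf scikit-learn model, so it is in no way a real medical system: the one-off learning phase is the expensive part, while evaluating a single new case afterwards takes only a fraction of a second.

    import time
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    # Synthetic stand-in data: 20,000 "past cases" with 40 measured features each.
    X, y = make_classification(n_samples=20000, n_features=40, random_state=0)
    model = RandomForestClassifier(n_estimators=200, random_state=0)

    start = time.perf_counter()
    model.fit(X, y)                      # the learning phase: done once, by far the slowest step
    print(f"Learning phase: {time.perf_counter() - start:.1f} s")

    start = time.perf_counter()
    model.predict(X[:1])                 # evaluating one new case: a few milliseconds
    print(f"One new case:   {(time.perf_counter() - start) * 1000:.1f} ms")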

Speed in the learning phase and in the analysis is therefore crucial for artificial intelligence. And the computing power available today ensures that we can actually develop and use such AI systems. What was science fiction two years ago because it took too long is reality today. And what sounds like science fiction today will soon be reality.

But the AI box of tricks holds much more. The "Periodic Table of AI" published by Bitkom gives you a compact overview of many different areas of AI.

What is the difference between artificial intelligence and a "normal" computer program?

Let's take a very simple example, and software developers will forgive me the simplification: take one of those popular Excel files that are often programmed with highly complex macros. It will never get smarter on its own. It will only deliver better results if we ourselves (!) adapt the programming.

An AI system, by contrast, is fed with more or less extensive starting information, depending on the topic. This usually includes specifications or sets of rules and a large amount of data, from which the system can learn what is "right" and what is "wrong". An example is the image search from Google or Facebook. The system was never "programmed" with what I, Axel Rittershaus, look like.

First it learned which criteria it could best use to recognize faces. In the beginning, programmers defined which criteria the system should use, such as the distance between the eyes. But the system learned all by itself that other criteria worked even better. The system was then used more and more, and today it runs on our smartphones and on the Internet.
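
Here is a deliberately tiny sketch of that difference; the "suspicious transaction" scenario and all the numbers are invented purely for illustration. The classic program applies a rule a human once wrote and never changes it, while the learning variant derives its threshold from labelled examples and would arrive at a different rule if the examples changed.

    from statistics import mean

    def classic_rule(value):
        # A hand-written rule, as in a macro: fixed forever unless a human edits it.
        return "suspicious" if value > 100 else "ok"

    def learn_threshold(examples):
        # examples: list of (value, label) pairs; the threshold is derived from the data.
        ok = [v for v, label in examples if label == "ok"]
        suspicious = [v for v, label in examples if label == "suspicious"]
        return (mean(ok) + mean(suspicious)) / 2

    training_data = [(40, "ok"), (55, "ok"), (70, "ok"), (160, "suspicious"), (190, "suspicious")]
    threshold = learn_threshold(training_data)                # 115.0 for this example data

    new_case = 110
    print(classic_rule(new_case))                             # "suspicious": the hand-written limit of 100 still applies
    print("suspicious" if new_case > threshold else "ok")     # "ok": the learned rule judges differently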

As soon as my own smartphone has figured out what I look like (for example from the photo I take of myself for my profile picture), it will recognize me in my photo album from then on. Thanks to artificial intelligence, it also learns how I change over the years and still recognizes me. And through networking, others could identify me in their pictures as well.
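
For readers curious what this looks like in code, here is a rough sketch using the open source face_recognition library; the file names are placeholders, and the photo apps on real smartphones use their own, far more elaborate pipelines. The point is that the program is never told what a person looks like; it compares numerical face descriptions ("embeddings") that were learned beforehand.

    import face_recognition

    # Placeholder file names: one known profile picture, one photo from the album.
    profile = face_recognition.load_image_file("profile_picture.jpg")
    album_photo = face_recognition.load_image_file("album_photo.jpg")

    # Assumes a face is actually found in the profile picture.
    known_encoding = face_recognition.face_encodings(profile)[0]

    # Check every face found in the album photo against the known one.
    for encoding in face_recognition.face_encodings(album_photo):
        match = face_recognition.compare_faces([known_encoding], encoding)[0]
        print("That's the same person!" if match else "Someone else.")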

The system learns by itself, 24 hours a day, on billions of devices.
The more, and the more effectively, artificial intelligence can learn, the smarter it becomes.

While in the past one tried to determine the next move by using massive computing power to calculate as many variations of a chess game as possible, the "intelligence" of today's systems lies much more in the preceding learning phase. In this phase the computer develops its intelligence, just as a child who learns chess gets better and better the more they practice.

Artificial intelligence learns by itself - but is that really "intelligent"?

That is one of those typical, almost philosophical questions. How do you even recognize intelligence? Can a machine ever be intelligent? Isn't intelligence precisely what distinguishes us humans from everything else?

This question is incredibly exciting. We can discuss it endlessly without reaching a conclusion. This is similar to having to decide which religion is actually "right". That's impossible.

Our human intelligence is perhaps our greatest obstacle here. We, and especially we Germans, prefer to debate such questions first, while others in Silicon Valley, Israel or India go full throttle in putting artificial intelligence to use. Does it really matter in the end whether a system is "intelligent", or whether it just delivers incredibly good results that we humans would never have achieved, or at least not as quickly?

And what about "human intuition"?

When the discussion about intelligence reaches a dead end, "human intuition" is brought up. "A computer will never be as intuitive and creative as a human," people then say.
Counter-question: how does human intuition work? Where do the ideas come from that strike us while showering, jogging or cooking, ideas we would never have had if we had pondered for ten hours at our desk? Yes, our subconscious is probably responsible.

But how does it work? How does it decide whether an idea is utter nonsense or should make its way into our consciousness?
We just don't know.

Could it not be, then, that a computer can also develop ideas "intuitively"?

When AlphaGo outclassed the world's best Go players in 2016 (Go is a highly complex strategic board game), the AI community was jolted. Go is considerably more complex than chess and, due to the enormous number of possible moves, cannot be mastered with brute computing power alone. In the analyses of the competitive games between AlphaGo and its human opponents, AlphaGo made various moves that observing experts described as completely surprising, moves that no human would ever have played that way. AlphaGo had apparently discovered new maneuvers that work during the millions of training games it played against itself.

Here, too, we could debate indefinitely whether a computer can have "intuition". Or we could use that time to develop precisely such systems and learn from them.

Where do we use artificial intelligence without even realizing it?

In reality, we are already using AI systems without being aware of it. Mostly these are systems developed by people who did not spend their time on the discussions above.

  • Do you have a smartphone?

  • Have you ever taken photos with it?

  • Have you noticed that your smartphone knows which photos you appear in?

Then you are already using artificial intelligence, whether you consciously decided to or not. Huawei even explicitly advertised that the Mate 10 Pro is optimized for artificial intelligence in order to offer the user an even better experience. "This is not a smartphone, it is a smart machine," the advertisement says.

My iPhone keeps surprising me with the knowledge it gains about me. That's the fascinating thing about AI: it learns from us without us actively doing anything. For example, I go to the gym on Wednesdays, and my alarm rings at 4:25 a.m. A few weeks ago, my iPhone asked me for the first time on Tuesday evening whether it should set the alarm for 4:25 a.m. On its own initiative!

In combination with my Garmin watch, I always get my iPhone when I get into the car