How is robotics changing the world?

How artificial intelligence is changing our everyday life

Peter Schöll

Artificial intelligence will not replace people, but will relieve and support them. Here are a few examples of how this human-machine cooperation can look in the future.

The future is man-machine cooperation
© Fotolia / Photocreo Bednarek

1928: It all started with Eric (in this case, not Adam). Eric was the first British robot, developed that year by William Richards, an aerospace engineer. Eric could not yet move on his own or communicate with Adam's descendants.

2017: Today things look different. While Eric, brought back to life by a Kickstarter campaign, now stands silent in the Science Museum in London, his successors are out in the world - and they talk.

What was still fiction in the 1960s SciFi series "Star Trek" ("Raumschiff Enterprise") is now found in more and more households and answers to the word "Alexa" - or, true to the US series, to "Computer". Alexa turns on the light, reads me my appointments or wakes me in time to announce the traffic reports. We will still have traffic jams, even if only a few of us will be driving ourselves in the coming decades. Eric will drive us.

Here we come to the real issue: What effect does AI have on our everyday life? To answer this question, artificial intelligence must first be examined more closely.

What does intelligence actually mean?

Whether someone or something is called intelligent depends on the appearance and actions we perceive. We can look neither into the brain of our university professor nor into the processor of supposedly intelligent machines. We can only classify as intelligent a course of action that we perceive and evaluate as particularly clever. An example shows how perception can play a trick on us.

A thought experiment by the philosopher John Searle from 1980 describes a room in which a person sits. This person receives texts in Chinese from outside and is then asked to answer questions about them - also in Chinese. The answers are evaluated by a Chinese native speaker, judged to be understandable or meaningful - and thus intelligent. If the native speaker is asked whether the person in the room is intelligent in this sense, he will answer yes. In reality - or that of the thought experiment - the person in the room has no command of Chinese. He works out the answers without actually understanding the foreign characters, relying only on rule books available in the room in both languages.

Since his method is not visible, the person appears intelligent from the outside. If we now continue the experiment and equip a computer with a corresponding program instead of a human with a library, the result, viewed from the outside, is the same. Intelligence, then, is merely an attribution: we call intelligent whatever we perceive and classify as such.
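The mechanics of the Chinese Room can be sketched in a few lines: a responder that produces sensible-looking answers purely by symbol lookup, without any understanding. The tiny rule book below is invented for illustration, standing in for Searle's bilingual scripts.

```python
# A minimal sketch of Searle's "Chinese Room" thought experiment.
# The "person in the room" answers by mechanical lookup in a rule book;
# no understanding of Chinese is involved. Entries are invented examples.

RULE_BOOK = {
    "你好吗?": "我很好。",          # "How are you?" -> "I am fine."
    "你叫什么名字?": "我没有名字。",  # "What is your name?" -> "I have no name."
}

def chinese_room(question: str) -> str:
    """Return a canned answer by pure lookup; the 'room' understands nothing."""
    return RULE_BOOK.get(question, "对不起。")  # fallback: "Sorry."

print(chinese_room("你好吗?"))  # 我很好。
```

Seen only from outside, the answers look competent; seen from inside, there is nothing but table lookup - which is exactly the point of the thought experiment.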

With this attitude, artificial intelligence is much easier to grasp: AI shows behavior that, based on our experience, we would call intelligent. It starts with mundane things such as automatically closing the roof window when it rains. That is clever, but not so intelligent that we would credit it with artificial intelligence.
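The roof-window example makes the point concrete: what looks "smart" can be a single hard-coded rule. A toy sketch (class and parameter names are invented for illustration):

```python
# Toy sketch of the rain automation mentioned above: one sensor reading,
# one rule. "Smart", but hardly intelligent. All names are illustrative.

class RoofWindow:
    def __init__(self) -> None:
        self.open = True  # windows start open

    def close(self) -> None:
        self.open = False

def automate(window: RoofWindow, is_raining: bool) -> None:
    """The entire 'intelligence': close the window if it rains."""
    if is_raining and window.open:
        window.close()

w = RoofWindow()
automate(w, is_raining=True)
print(w.open)  # False
```

There is no learning and no model of the world here - just an if-statement, which is why we hesitate to call it artificial intelligence.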

Our ideas are shaped by films that depict visions of the future in all their facets. AI talks to us, moves, or even listens.

But how does AI influence our everyday life?

Our lives largely consist of labor-intensive activities that we enjoy sometimes more, often less. And this is exactly where action is needed: in the age of the industrial revolution, human labor was replaced by machines - in the current revolution the focus is less on muscle power and more on intellectual performance.

Erik Brynjolfsson, director of the Center for E-Business at the Massachusetts Institute of Technology (MIT) and co-author of the book "The Second Machine Age," describes it as exactly that: a second machine age. He assumes that in 30, 50 or 100 years we will be living in a world in which machines do most of the work. A study by the University of Oxford concludes that around 47 percent of all jobs in the US could fall victim to automation by 2030. The London School of Economics comes to a similar conclusion in its study for Germany.

The keyword here is automation. It is not about simply replacing people with machines, but about reallocating activities - an ideal division of labor according to individual strengths. Both people and computers benefit, each in their own way: a person builds relationships with other people and is intuitive and creative; computers calculate faster and analyze vast amounts of data in less time than any human.

The future of human-machine cooperation

An example shows what human-machine cooperation can look like in the near future. A doctor can, for example, discuss a diagnosis with his team on the basis of the digital analysis of an X-ray image and convey it to his patient in a human manner. Millions of records are available to the computer to perform the desired analysis - in a fraction of the time, with high accuracy and broad coverage of the possible factors to be diagnosed.

A doctor would analyze the X-ray for the suspected disease; a computer can also scan it for all known risk factors and thus make a diagnosis that was not the focus of the examination. This enables not only a safer, faster analysis but also a health check that goes beyond what was previously possible. Whether this is actually desired, and should be passed on to patients unfiltered, is another matter. Receiving a prognosis of one's life expectancy when only the annual routine blood test was due - hardly anyone wants that.

The fact is that cooperation with intelligent machines will become even closer in the future. Only those who take part in this digital change will remain a cog in the machinery of the working world.

Another example shows how far the second machine age has already advanced - and that these are not fictional prospects that will pass us by: Eatsa, a robot restaurant that started in San Francisco, USA, has abolished waiters and advertises, among other things, with the slogan "No lines. No cashier. No nonsense."

In doing so, the restaurant has skipped a big step and translated our actual idea of a robot restaurant into an automated, simplified version: no robots balancing trays or asking us politely what we would like to eat. Automation simply takes over the waiter's work. Taking the order and entering it into a cash register system, which in turn prints a ticket in the kitchen, is no longer necessary. This is called disruptive technology - technology that displaces the existing and makes it superfluous.
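The shortened order flow described above can be sketched as a queue: the customer's order goes straight to the kitchen, with no waiter or cashier as intermediaries. All names here are invented for illustration; this is not Eatsa's actual system.

```python
# Sketch of a waiterless order flow: the self-service terminal appends
# directly to the kitchen's queue. Illustrative names only.

from collections import deque

kitchen_queue: deque = deque()

def place_order(dish: str) -> str:
    """Self-service terminal: no waiter, no cashier in between."""
    kitchen_queue.append(dish)
    return f"Order received: {dish}"

place_order("quinoa bowl")
print(kitchen_queue.popleft())  # quinoa bowl
```

The disruption is precisely the removed middle steps: the roles that used to relay the order have no place left in the pipeline.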

Uber, made famous by its aggressive entry into the long-established taxi market, has been and still is branded a job killer. The fact is, however, that Uber has created more jobs in San Francisco than previously existed in the entire taxi business there. This is a clear sign of a restructuring of the working world: the taxi driver's job will disappear, while new jobs in the development and support of digital, intelligent systems will emerge.

In 2017, acceptance of a talking robot was still limited

Hardly anyone would speak to a humanoid robot without shyness, at best with a good dose of curiosity. Acceptance first has to be built.

Mankind must be ready to experience the evolution of machine beings step by step. This starts with lawnmowers and vacuum cleaners that have already found their way into our home environment. With Alexa, Siri, Cortana and all the other voice assistants, we are already talking to systems that appear intelligent.

If the hardware - the lawn mower and its kind - is then connected to the talking systems, the robot is complete. Or almost: perfection is still pending, as the external appearance often has to be adapted to the standards of human sensibility. A pleasing appearance leads to greater acceptance.

The robotic seal Paro is supposed to cheer up elderly people in nursing homes
© picture-alliance / dpa / dpa

The Japanese love everything that is "kawaii", that is, cute. A well-known example is the Paro robotic seal, which is supposed to cheer up elderly people in nursing homes. In Germany, its use as a therapy seal for people with dementia is somewhat controversial.

Another kind of robot-like being are the creations of the Japanese professor Hiroshi Ishiguro. At CeBIT 2017 in Hanover he said: "I think we will have a robot society in the near future." He has even created a robotic likeness of himself. Even in Japan, Ishiguro is seen as a radical proponent of robotics because of his prognosis of such a robot society.

When robots become more and more like humans ... who is the real Hiroshi Ishiguro?