What is preventing your company from adopting AI?

For enthusiastic experts, it has long been beyond question that artificial intelligence will prevail. But if you take off the AI glasses and switch to the perspective of customers and employees, stumbling blocks beyond the pure technology become apparent: three problems still stand in the way of widespread acceptance of artificial intelligence.

Table of Contents

  1. What is Artificial Intelligence actually?
  2. What can artificial intelligence do today?
  3. Cautiously positive: the attitude of employees to AI
  4. Still skeptical: customers' attitudes towards AI
  5. The 3 Big AI Problems - and Potential Solutions
  6. Conclusion: how to increase the acceptance of AI

One of the most basic human traits is skepticism towards the new and unknown. In the case of artificial intelligence (AI; German: KI), this skepticism seems to have two sources. On the one hand, the term AI itself is surrounded by a dark cloud of uncertainty and a lack of knowledge: Where does artificial intelligence begin? Am I already unknowingly using applications based on AI? Where does AI affect my daily life? On the other hand, there is the almost unpredictable development of the technology, including horror scenarios in which machines take power. But less abstract fears also keep people from using AI and accepting it as an aid: Will AI make me redundant in my job? How did the algorithm arrive at exactly this result? Am I being influenced by AI without noticing?

One thing is clear from the start: general acceptance of such a powerful, disruptive technology cannot be achieved overnight. Not only in customer service should companies think now about how to design and support the use of AI and how to present it to customers and employees. To make the topic of AI more tangible, two questions must be answered first: What is artificial intelligence, and what can it do?

1. What is Artificial Intelligence?


A look at the history of artificial intelligence shows that the technology has been researched since the 1950s. But only with the victory of the chess computer Deep Blue over the then world champion Garry Kasparov in 1997 did artificial intelligence begin its rise to the hype topic it is today.

Put simply, artificial intelligence is nothing more than the ability of machines to understand, think, plan and recognize, analogous to human intelligence. On paper, such a technical replica of intelligence has almost infinite possibilities: in contrast to human intelligence, AI applications are not limited by the performance and memory capacity of the brain. Today's computing power and access to data of all kinds allow artificial intelligence to evaluate complex and huge amounts of data in real time, recognize patterns and draw conclusions from them. Depending on how the underlying algorithm was programmed, the AI can also arrive at a solution dynamically or learn independently. The most important terms relating to the technology are listed and explained here:

  • Algorithm: A sequence of instructions for solving a specific problem; a recipe, so to speak, according to which a task can be worked through step by step.
  • Big Data: A huge amount of structured and unstructured data that would be impossible to evaluate using manual or conventional data processing methods.
  • Machine learning: Umbrella term for the "artificial" generation of knowledge from experience. In a learning phase, an AI first recognizes patterns and regularities in training data. It can then generalize from these examples and thus also assess and evaluate unknown data.
  • (Artificial) neural networks: An artificial structure of connections modeled on the human brain. It consists of many layers in which data processing takes place.
  • Deep learning: Sub-area of machine learning that uses the various layers of a neural network. Complicated data processing is broken down into a series of nested, simple associations.
  • Weak AI: The aim of weak AI is to simulate intelligent behavior using mathematics and computer science.
  • Strong AI: The aim of strong AI is additionally to give the system consciousness or a deeper understanding of intelligence.
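The "learning phase, then generalization" idea behind machine learning can be made concrete with a deliberately tiny sketch: a 1-nearest-neighbour classifier that memorizes labeled training examples and then labels an input it has never seen. The weather data and labels here are invented purely for illustration; real systems use far larger data sets and models.

```python
# Minimal machine-learning sketch: "learn" from labeled examples,
# then generalize to unseen data (all data invented for illustration).

def nearest_neighbour(train, point):
    """Return the label of the training example closest to `point`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, label = min(train, key=lambda ex: dist(ex[0], point))
    return label

# Learning phase: labeled examples, (hours of sun, rainfall) -> weather
training_data = [
    ((9.0, 0.1), "sunny"),
    ((8.5, 0.0), "sunny"),
    ((1.0, 5.0), "rainy"),
    ((0.5, 7.2), "rainy"),
]

# Generalization: an input the algorithm has never seen before
print(nearest_neighbour(training_data, (7.8, 0.3)))  # sunny
```

The "training" here is just memorization; deep learning replaces this lookup with layered weights, but the principle of generalizing from examples to unknown data is the same.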

2. What can artificial intelligence do today?

Despite the seemingly endless possibilities of artificial intelligence, according to Professor Kristian Kersting (head of the Machine Learning department at TU Darmstadt), it is a big mistake to equate AI with human intelligence: "It will take a few years, if not decades or maybe centuries." For now, one can speak of the "island talents" of AI, that is, the ability to carry out only clearly defined tasks quickly and efficiently (see weak AI above). Customer service already offers plenty of examples:


The potential of AI has been recognized, as investments by the German federal government and other countries show. However, because it is not clear where the journey is going, the population is divided on the issue: alongside euphoria about new opportunities, fear and uncertainty prevail in many places. Studies predict up to 4 million new jobs through AI, while at the same time one in five employees risks losing their current job. There are also concerns beyond the economy: if AI drives cars autonomously or selects applicants, it must also act ethically. In the recent past, however, AI applications have often proven the opposite (example: Microsoft's chatbot Tay). Finally, relying on AI could also have negative consequences for our own intelligence: if we hand more and more mental work over to the computer, our own thinking could atrophy, or so the theory goes.

Without education there is also no acceptance

It quickly becomes clear: for artificial intelligence to be accepted by both customers and employees, educational work and further research are needed in many places. It is important to "demystify" the concept of AI, i.e. to make it understandable how an algorithm came to its decision. This challenge is already being tackled: if you think of AI technology as developing in three waves, the first two waves dealt primarily with pure technical advancement. The aim of the so-called "third wave of AI" is to make results comprehensible, for example how an algorithm arrives at exactly one result by analyzing thousands of data points and relationships. At the current state of the art, it is not yet possible to see what exactly is happening inside a neural network; this is often referred to as a black box.

The following applies to both customers and employees who come into contact with AI applications: the more transparency is guaranteed and the more information is made available, the more acceptance the use of artificial intelligence will gain. Since this rarely happens as of now, attitudes towards AI are still very divided.

3. Cautiously positive: employee attitudes towards AI

At least there is agreement on the effects of the technology: the emergence of AI means a major change and is already on the agenda for the majority of companies. According to an Adesso study, 87% of company decision-makers believe that AI represents a clear competitive advantage, and 48% of those surveyed plan to use AI applications within the next three years; no other topic scored similarly high. But how do employees deal with these plans?

Even though many reports predict the loss of thousands of jobs, employees in customer service view the new technology surprisingly positively: 88% of all employees in the industry look forward to the use of AI. The reasons are particularly interesting: AI handles simple, recurring issues automatically and quickly, which gives human service employees more time for more complex issues, in which they are merely supported by the AI. In addition to more efficient work and optimized business processes, AI thus primarily enables service employees to interact more closely with people. According to the Genesys study, however, employees in other areas such as gastronomy, human resources or production are far more critical of the new technology. The main concerns here are job losses and the AI's lack of empathy. Overall, 80% of employees in Germany do not see their job at risk from AI.

Three general conditions for acceptance

However, if you want to implement artificial intelligence in your own company, a positive mood towards the technology is not enough. Rather, important framework conditions must be in place right from the start.

  • TIME: Instead of introducing AI overnight, employees should be given enough time to get used to interacting with artificial intelligence. Training and a familiarization period reduce frustrating experiences and also show where the advantages of AI lie.
  • INFORMATION: Ignorance breeds fear and rejection. Precisely for this reason, AI concepts must be explained to employees in advance. This is important in two ways: informed employees lose their inhibitions, can assess the limits of the technology and make suggestions for improvement; and employees are much more open to a new technology if they understand that it will not make them superfluous.
  • SENSE: When the use of artificial intelligence focuses primarily on the customer, service employees are by no means superfluous. Rather, their work takes on a new meaning, which must also be conveyed: instead of monotonous repetitive work, they now primarily deal with more complex inquiries in personal contact. In customer contact, human skills are required that AI cannot muster: creativity, interpersonal communication and empathy.

In customer service, companies see the opportunities of the new technology primarily in customer loyalty and the customer experience. However, if you want to create added value for customers, you also have to consider their attitude towards artificial intelligence. The opinion of users could become a stumbling block, as the figures from the Adesso study mentioned above show: while companies are firmly convinced that customers want to use various AI services (84% and 76%, depending on the service), only a fraction of the customers themselves share this opinion (38% and 30%).

4. Still skeptical: the attitude of customers to AI

[Infographic: Omni-channel customer service]
  • Multiple support channels (phone, self-service, chatbot, ...) can be used
  • Customer information is stored centrally in the knowledge base
  • Switching between the various support channels is possible at any time

Customers already viewed the general digitization of customer service with skepticism: in a survey by the Deutsche Gesellschaft für Qualität e.V., 59% said that digitization has fundamentally changed customer service, but not only for the better. Nevertheless, new technologies are not categorically rejected; on the contrary, 41% even believe that companies invest too little in technology. The biggest problem is the lack of personal contact: 58% criticize the lack of direct access to customer service. As paradoxical as it sounds, this is exactly where AI can help, by relieving the service staff and enabling an omni-channel approach.

In general, AI in customer service moves within a complex field of tension. The use of new digital aids is welcomed, but far from being used across the board. The large differences in user behavior between age groups can be boiled down to at least one finding: customers want access to a large number of service channels; according to an E.ON survey, 81% do. Here, too, an omni-channel approach helps. The split target group, divided into innovators and latecomers, also brings further requirements with it.

For many customers, a usage barrier arises simply from a lack of knowledge. It is therefore the company's job to explain the technology in an understandable way wherever it is used. In this context, it helps to clearly communicate the role of artificial intelligence: as long as AI does not possess human-like qualities such as empathy, it serves only as a supplement to the service employee. Admittedly, this is easier said than done: many users still have a completely wrong idea of artificial intelligence and the terms associated with it.

Favorable starting conditions

Nevertheless, the starting conditions for implementing artificial intelligence are favorable: according to a study by the Capgemini Research Institute, acceptance of voice assistants and chatbots is slowly increasing. In mid-August 2019, Next Media Hamburg found that the willingness to communicate with AI had risen by 25 percentage points to 83% within a year. At the same time, 77% want AI applications to be labeled as such and oppose the "humanization" of AI. An interesting finding also comes from a study by the ZHAW School of Management and Law in Winterthur, Switzerland: customer satisfaction increases with moderate automation, but drops sharply when the degree of automation is too high.


>> Customer satisfaction increases with moderate automation, but falls sharply when the degree of automation is too high <<

Finding of the study "Customer benefits through digital transformation"


Again: successful implementation means a combination of AI and people, labeling, transparency and, last but not least, education. Concerns about AI arise primarily from a lack of understanding, media coverage, distrust of technology, and fear of data espionage, hacker attacks and financial damage. If you want artificial intelligence to be accepted across the board, you should also know the three main problems of the technology and how to deal with them: (1) a lack of ethics, (2) a lack of transparency and (3) a lack of knowledge.

5. The 3 Big AI Problems - and Potential Solutions

5.1. Ethics and morals of AI

A prime example of the ethical problem of artificial intelligence is the chatbot Tay: when Microsoft released it on Twitter in March 2016 to learn the language of young people, it had to be taken offline 24 hours and countless racist tweets later. Similar examples abound. Mark Surman, Director of the Mozilla Foundation, is campaigning for compulsory ethics training for computer scientists, because otherwise artificial intelligence could become a driver of discrimination, fake news and propaganda.

And of course this also plays a major role in customer acceptance. In a study by Pegasystems Inc., respondents doubt the moral convictions of companies: 65% do not believe that companies use new technologies to communicate with them in a trustworthy manner. In addition, there are concerns about AI's moral decision-making: more than half said that AI distorts decision-making, and only 12% credit AI with the ability to distinguish between good and bad. To put it provocatively: in their opinion, AI has neither morals nor ethics.

Figures from a Capgemini Research Institute study show that ethics can also become a success factor in this context: 62% of the consumers surveyed place more trust in companies whose AI-based interactions they perceive as ethical, and they would also share these positive experiences with friends and family. At the same time, ethical problems in AI contact would have serious consequences: a third of those surveyed would break off contact with the company immediately.

Where do ethical problems arise?

Ethical and moral faux pas arise in the gap between theory and practice. An AI application can only ever be as good as its underlying data, because it accepts information without human reflection. If a data set weights men and women unequally, the AI will adopt that imbalance. The application then often mercilessly reflects our own prejudices and inequalities. The difference: a person weighs circumstances against existing moral principles and empathy; an AI cannot (yet) do this. This is one of its biggest weaknesses. Nevertheless, the example above shows that AI can also bring injustices and new ethical questions into focus.
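How an AI "mercilessly reflects" an imbalance in its training data can be shown with a deliberately naive sketch: a model that simply predicts the most frequent outcome per group will reproduce whatever historical skew the data contains. The hiring data and labels below are invented solely to illustrate the mechanism, not taken from any real system.

```python
# Sketch: a naive model adopts the bias of its training data
# (dataset and labels invented purely for illustration).
from collections import Counter

def train_majority_model(examples):
    """'Learn' by counting outcomes per group -- no reflection, no ethics."""
    counts = {}
    for group, outcome in examples:
        counts.setdefault(group, Counter())[outcome] += 1
    # Predict the most frequent historical outcome for each group
    return {g: c.most_common(1)[0][0] for g, c in counts.items()}

# Historically skewed data: women were hired less often
historical_data = [
    ("male", "hired"), ("male", "hired"), ("male", "rejected"),
    ("female", "rejected"), ("female", "rejected"), ("female", "hired"),
]

model = train_majority_model(historical_data)
print(model)  # {'male': 'hired', 'female': 'rejected'}
```

No malicious rule was programmed anywhere; the discrimination enters purely through the data, which is exactly why test processes and data audits (see below) matter.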

The challenge is therefore to close the empathy gap between humans and AI. The technology must act on the basis of moral principles and thus become more human. AI needs social skills, because ethics determine trust. This is the direction in which Prof. Kristian Kersting is researching at the Centre for Cognitive Science at TU Darmstadt: his project aims to teach an artificial intelligence moral categories. However, it could be a while before a breakthrough is achieved in this area. So what can companies do today to compensate for AI's lack of empathy?

  • Human contact: Humans remain the guardians of empathy and morality. In complex situations that require abstraction and reflection, there should always be the option of contacting a human employee. This also makes clear why artificial intelligence (in service) is initially only a supplement to people, and in no way a replacement.
  • Test processes and reflection: To catch possible errors, test processes are needed that continuously check the results of AI systems and report any problems that arise. Since an AI application evolves with its data, the data should also be checked for imbalances on an ongoing basis.
  • Follow research: The problem of lacking empathy and morality is currently a focus of AI research, so it is worth following relevant journals or blogs.

5.2. Lack of transparency


The principle of artificial intelligence is sometimes uncanny: if you train an AI with hundreds of photos of plants, at some point it can automatically recognize and classify them. The result can be checked, but nobody can say exactly how the AI arrived at it. In this example that is relatively unproblematic, but imagine an AI making the difference between life and death in a hospital. Then a lack of transparency becomes a problem. In complex AI applications, the sheer number of parameters and conditions often makes it impossible to understand how the system arrived at certain conclusions.

Such a black box naturally does not contribute to the acceptance of artificial intelligence. It is far more likely to stir up further concerns and resentment, and to alienate people from AI. Transparency, and the security against manipulation, predictability and trust that come with it, is a prerequisite for success. Google CEO Sundar Pichai agrees: "As long as we do not have this explainability, we cannot use machine learning for these areas."

Make AI as vivid as possible

The solution to this problem sounds simple: realize important applications exclusively with transparent AI. In general, algorithms should be presented as clearly and comprehensibly as possible; if that is not feasible, at least the underlying data should be disclosed. A study of Uber drivers shows what opportunities for abuse and what harmful influence a lack of transparency can have on a company's own employees: drivers grew increasingly frustrated because they were given no information about the algorithm that determined their routes. In retrospect, it was found that the AI had even subtly manipulated drivers into working longer hours. One of the study's main findings was therefore: "(…) the more sophisticated these algorithms get, the more opaque they are, even to their creators."
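One simple way to present an algorithm more comprehensibly is to report how strongly each input feature influences its output. The sketch below does this by zeroing one feature at a time and measuring the change in the score, a crude sensitivity analysis; the model, its weights and the feature names are invented for illustration, and real explainability tools (e.g. permutation importance) work on trained models rather than a fixed formula.

```python
# Sketch of a sensitivity report: which inputs actually drive the
# model's output? (Model, weights and features invented for illustration.)

def model(features):
    """Stand-in for a trained model: a fixed weighted score."""
    weights = {"income": 0.7, "age": 0.1, "postcode": 0.0}
    return sum(weights[k] * v for k, v in features.items())

def sensitivity(features):
    """Change in the score when each feature is removed (set to 0)."""
    base = model(features)
    report = {}
    for name in features:
        perturbed = dict(features, **{name: 0.0})
        report[name] = round(abs(base - model(perturbed)), 3)
    return report

applicant = {"income": 1.0, "age": 1.0, "postcode": 1.0}
print(sensitivity(applicant))  # {'income': 0.7, 'age': 0.1, 'postcode': 0.0}
```

Even such a rough report tells a user which inputs mattered for a decision, turning at least part of the black box into something that can be checked and discussed.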

5.3. Lack of knowledge and distrust of AI

Last but not least, artificial intelligence suffers from a lack of knowledge on the part of users. Although only so-called weak AI is in use today, the horror scenario of machines ruling the world is already circulating, spurred on by science fiction films and novels. This uncertainty hinders both the use of AI and support for new projects. Explaining how artificial intelligence works, however, would only be a first step, because the ignorance runs much deeper.

A study by ECC Cologne reveals that the majority of customers do not know what data they are disclosing or which services they are already using: 63% of respondents claim never to have shared personal contacts, while 95% of them regularly use WhatsApp. This alone shows that if AI is to be accepted, the framework conditions must also be explained and made transparent.

Hidden algorithms everywhere

At the same time, in most use cases it is not even apparent that artificial intelligence is at work. Google alone has around 20,000 AI programs in use: Google Lens, for example, automatically recognizes over 1 billion objects in photos using AI algorithms, and Google "Auto Ads" is an artificial intelligence that automatically places advertisements. Numerous algorithms run in the background of our online activities, recommending products, deciding on creditworthiness or influencing our job opportunities. The demand from many directions to scrutinize AI algorithms more closely is therefore justified; 76% of the participants in another Capgemini survey are also in favor of regulating AI. Despite the lack of knowledge on the one hand, these calls for control are quite understandable as long as it remains a secret where AI is used.

Companies should therefore not only ensure that their customers are informed about the technology itself. They must also disclose which data is used and collected, and where AI is in use at all. Beyond GDPR requirements, the focus must be on making this information available to users in a clear and understandable form. After all, it is in companies' own interest that their applications are well received and do not add to fears and skepticism.

6. Conclusion: How to increase the adoption of AI

The starting conditions for the use of AI in customer service are largely favorable: Employees see the advantages of the technology, while customers demand and use the interaction of automated and personal service. At the same time, there are many factors that provide arguments for both groups against the use of AI. General rules and principles for the use of AI are emerging and also necessary - that much is clear.

But even companies themselves can take measures to increase the acceptance of artificial intelligence:

Increase employee acceptance:
  • Inform in detail in advance
  • Give time to get used to the technology
  • Explain the purpose of using AI, above all that the employee does not become superfluous
  • Show how AI can change and improve employees' work

Increase customer acceptance:
  • Make it clear where AI is used
  • Inform what data is being collected and used
  • Explain the technology clearly
  • Always maintain the possibility of human contact

Nonetheless, the three problems of ethics, transparency and lack of knowledge will of course not be solved overnight. It will take time and further research in these areas before artificial intelligence is truly accepted across the board. The technology undoubtedly raises new and interesting ethical questions, even though so far it only evaluates data sets. Despite all the justified criticism, one thing should not be forgotten: when we speak of an ethics problem in AI, we are also speaking of an ethics problem in humans. Or in the words of Prof. Kristian Kersting: "All existing AI studies hold up a mirror to ourselves. They say much more about us humans than about the machine itself."

>> All existing AI studies hold up a mirror to ourselves. They say much more about us humans than about the machine itself. <<

Prof. Kristian Kersting (TU Darmstadt)