Should We Be Afraid of Self-Learning Chatbots?


There’s a lot of confusion around the term “self-learning chatbot”. Many people think that self-learning chatbots are the direct predecessors of the kind of AI depicted in futuristic movies. All too often, this leads to the fear that these intelligent assistants may get out of hand.

Before we address the question of whether such fears are reasonable, we need to understand what kind of AI chatbots are built on and where the technology is heading. Chatbots are entirely data-driven, which is their main advantage in the era of information.

On the other hand, how an intelligent assistant approaches that data may one day become crucial. Therefore, to weigh all the risks, we also need a clear understanding of the way modern AI deals with data.

The way it works

Every intelligent assistant, or chatbot, relies on a foundation of AI techniques. What kind of foundation? Data science, NLP, speech and image recognition, text analytics, and so on. All these techniques are relatively new and have machine learning as their core element. Why is that?

Well, imagine how a program could recognize an image if it had no prior experience with similar ones. The same goes for recognizing text and speech.

ML is a set of techniques for working with data that enables a program to improve by means of the data it has access to. It liberates developers from the need to explicitly program the way the application interacts with users, which makes it extremely efficient for both sides: development and the way users will interact with the intelligent assistant.

Consider the following case. You want to build a chatbot with a highly natural conversational interface. ML enables you to avoid creating multiple conversation scenarios and databases of replies. Instead of crafting extensive hard-coded logic, you can make your chatbot learn from the data it gets from interactions with users. It will then store this data and use it to generate new responses in subsequent conversations.

In the early days of chatbot history, developers built chatbots mostly on the retrieval-based model. It was the only option, as computational power was much lower and couldn’t support the kind of analytical operations on data that ML relies on. As time passed and more computational power became available, the need for an alternative grew more apparent.
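
To make the contrast concrete, here is a minimal sketch of the retrieval-based idea in Python. The keywords and canned replies are made up for illustration; every answer the bot can ever give already exists in its database, and the input only selects one of them.

```python
# A minimal retrieval-based bot: the input selects one of the stored replies.
# Keywords and replies below are invented for illustration.
canned_replies = {
    "price": "Our plans start at $16 per month.",
    "refund": "You can request a refund within 30 days of purchase.",
    "hello": "Hi there! How can I help you?",
}

def retrieve_reply(user_input):
    """Return the stored reply for the first known keyword found in the input."""
    text = user_input.lower()
    for keyword, reply in canned_replies.items():
        if keyword in text:
            return reply
    return "Sorry, I can't help with that. Let me connect you to an agent."

print(retrieve_reply("What is the price of the Pro plan?"))
# -> "Our plans start at $16 per month."
```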

The pattern-based approach emerged as a transitional stage between old retrieval-based models and modern AI. First, the developer creates a set of patterns the chatbot will rely on to communicate with users. Then, when the chatbot receives a user’s input, it goes through all the patterns and picks the one that matches.

If you take a look at a sample, you may notice an obvious similarity to XML:

[Sample of a pattern-based model written in AIML]

In fact, the language used for creating such pattern-based models is called AIML. It’s an XML dialect created specifically for building conversational AI.
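
To make the idea concrete without diving into AIML syntax itself, here is a hedged sketch of pattern matching in Python. The patterns mimic AIML’s “*” wildcard, and both the patterns and the replies are invented for illustration.

```python
# A toy pattern-based bot: go through the patterns and use the first match.
# Patterns and templates are made up; "*" works like AIML's wildcard.
import re

patterns = [
    ("WHAT IS YOUR NAME", "My name is Bot."),
    ("I NEED HELP WITH *", "I'll gladly help you with {0}."),
    ("BYE", "Goodbye, have a nice day!"),
]

def respond(user_input):
    """Return the reply template of the first pattern matching the input."""
    text = user_input.upper().strip(" ?!.")
    for pattern, template in patterns:
        # Turn the "*" wildcard into a regex capture group.
        regex = "^" + re.escape(pattern).replace(r"\*", "(.+)") + "$"
        match = re.match(regex, text)
        if match:
            return template.format(*[g.lower() for g in match.groups()])
    return "Sorry, I don't understand."

print(respond("I need help with my order"))
# -> "I'll gladly help you with my order."
```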

The newest models, the ones that make self-learning chatbots possible, are generative. While retrieval-based chatbots just detect familiar words and fetch the corresponding replies from their database, generative chatbots come up with a new response every time.

Thanks to generative models, new intelligent assistants provide more flexible and natural conversations: they don’t reuse the same patterns but create new responses depending on the situation. Just think about how difficult it would be to program every possible conversation scenario by hand, whether with a retrieval-based or a pattern-based model.
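
Real generative chatbots typically use neural networks trained on large conversation corpora. The toy sketch below, a tiny Markov-chain generator trained on a few made-up past replies, only illustrates the core idea: producing new word sequences from learned statistics rather than retrieving stored answers.

```python
# A toy "generative" bot: learn which word tends to follow which, then walk
# those statistics to produce a response that was never stored verbatim.
import random
from collections import defaultdict

# A handful of made-up past agent replies; a real bot would learn from
# thousands of logged conversations.
past_replies = [
    "thanks for reaching out, we will check your order right away",
    "thanks for your patience, your order is on its way",
    "we will check the status and get back to you right away",
]

# Build a first-order Markov model: for each word, which words followed it?
transitions = defaultdict(list)
for reply in past_replies:
    words = reply.split()
    for current_word, next_word in zip(words, words[1:]):
        transitions[current_word].append(next_word)

def generate_reply(start_word="thanks", max_words=12):
    """Walk the learned transitions to produce a new word sequence."""
    word = start_word
    output = [word]
    for _ in range(max_words - 1):
        followers = transitions.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

print(generate_reply())
# Output varies: sometimes an existing reply, sometimes a new recombination
# of learned phrases such as "thanks for your order is on its way".
```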

In fact, even if you have an incredibly logical way of thinking, you won’t be able to do it manually.

Generative bots are, in fact, the self-learning chatbots you might have feared before you knew what they were. To understand where those fears stem from, let’s dig deeper into AI.

Types of AI you have to be aware of

1. Narrow or weak AI

This is the first stage and the one we’re at now. It’s called weak AI in comparison with what we expect from AI in the future, and narrow because it can operate only within a narrow field of problems. Just consider self-learning chatbots: however smart they are, they mostly operate within one particular type of question.

We have already talked about retrieval-based and generative models, so it’s time to recall them. Let’s consider which of those models we can use to create a chatbot with narrow AI, and make sure we mean the same thing by that. It’s definitely not one that relies on a retrieval- or pattern-based approach.

Of course, I call the retrieval-based approach “old” or “outdated” because the topic of this article concerns applications more advanced than plain “if-else” logic. However, I should note that the retrieval-based approach is still used in many applications, including bots.

Sometimes you just have a pretty straightforward purpose and no need for a smart system, so why bother building something complicated?

As for the pattern-based approach, it’s still used for creating AI in games. In the most advanced and complicated cases, AIML is combined with machine learning techniques. However, the most common and suitable option for many games is still the plain pattern-based approach without any machine learning.

Coming back to the question of picking the right model for a self-learning chatbot, it should now be clearer that retrieval- and pattern-based approaches are insufficient. They merely mimic AI and would never lead to real AI, not even the narrow or weak kind. The option of choice is therefore the generative model, which lets the chatbot create responses by itself.

If you’re familiar with at least the basics of ML, you may know that it comes in different types: Supervised Learning (SL), Unsupervised Learning (UL), and Reinforcement Learning (RL). Loosely speaking, if you use SL, you train your chatbot on a set of input-output pairs to show it what kind of relations it should look for in its further conversations.
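
As a minimal sketch of supervised learning in a chatbot context, the example below trains an intent classifier on made-up input-output pairs. The phrases, labels, and the choice of scikit-learn with a Naive Bayes model are all assumptions for illustration, not a specific chatbot product’s implementation.

```python
# Supervised learning sketch: learn a mapping from user phrases to intents.
# Requires scikit-learn; the training data is invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Input-output pairs: user phrases labeled with the intent they express.
training_phrases = [
    "where is my order",
    "track my package",
    "i want a refund",
    "give me my money back",
]
training_intents = ["order_status", "order_status", "refund", "refund"]

# Bag-of-words features + Naive Bayes classifier, trained on the pairs above.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(training_phrases, training_intents)

print(model.predict(["has my package shipped yet"]))
# Most likely prints ['order_status'].
```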

Let’s say you are unable to feed your program with sample inputs and corresponding outputs. In other words, you don’t have labeled patterns and relations, and you want the application to find them on its own. In that case, you would turn to UL.
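
Here is a comparable sketch of the unsupervised case, again with made-up data: unlabeled user messages are grouped into clusters so that related questions end up together without any predefined labels. The message set, the number of clusters, and the use of scikit-learn are assumptions for illustration.

```python
# Unsupervised learning sketch: cluster unlabeled user messages by similarity.
# Requires scikit-learn; messages and cluster count are invented.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

messages = [
    "where is my order",
    "my order has not arrived yet",
    "how do i reset my password",
    "i forgot my password",
]

vectors = TfidfVectorizer().fit_transform(messages)
labels = KMeans(n_clusters=2, random_state=0, n_init=10).fit_predict(vectors)

for message, label in zip(messages, labels):
    print(label, message)
# The order questions and the password questions should typically land
# in different clusters, even though no labels were provided.
```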

Reinforcement Learning is quite different from the previous types, as the program relies on feedback from its environment. In RL, the program receives rewards when its actions lead to the expected outcome; the approach is widely applied in self-driving cars and robotics.
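
And a toy sketch of the reinforcement idea: the bot picks one of two replies and learns from a simulated reward which one satisfies users more often. The replies, the reward probabilities, and the learning rate are all invented for illustration; real RL chatbots use far richer state and reward signals.

```python
# Reinforcement learning sketch: learn which canned reply earns more reward.
# All numbers and replies are invented for illustration.
import random

replies = ["Sure, let me check that for you.", "Please call our hotline."]
values = [0.0, 0.0]   # estimated reward for each reply
epsilon = 0.1         # how often the bot explores a random reply
learning_rate = 0.1

def simulated_user_feedback(reply_index):
    # Pretend users are satisfied 90% of the time with reply 0, 20% with reply 1.
    return 1.0 if random.random() < (0.9 if reply_index == 0 else 0.2) else 0.0

for _ in range(1000):
    # Explore occasionally, otherwise exploit the best-known reply.
    if random.random() < epsilon:
        choice = random.randrange(2)
    else:
        choice = values.index(max(values))
    reward = simulated_user_feedback(choice)
    # Nudge the value estimate toward the observed reward.
    values[choice] += learning_rate * (reward - values[choice])

print(values)  # the estimate for the first reply should end up clearly higher
```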

In the case of chatbots, all these types of ML can be applied. While earlier it was mostly SL that was used for such applications, today Deep Learning (a family of techniques based on artificial neural networks that can be combined with any of the types above) is becoming more popular thanks to the wider possibilities it provides and the computational power we now have. Moreover, in modern chatbots we can even observe combinations of Reinforcement Learning with Deep Learning.

Though all these remarkable examples are worth a “wow!”, they are still incapable of producing the full-fledged artificial mind you may be anticipating and fearing at the same time.

2. General or strong AI

General AI is exactly what you may think of when you imagine an artificial mind. General AI would be as good at everything as humans are. It wouldn’t operate in only one field (as narrow AI currently does) but would perform well regardless of the kind of problem it faces.

In fact, this is where we should start being careful, as the transition between this stage and the next, the most powerful one, may be pretty fast. However, the road to this stage from where we are now may be quite long, and the thing is that nobody really knows exactly how long it will be.

3. Super AI

This AI is predicted to become thousands of times more intelligent than any human mind. It represents a completely different level of thinking, one on which human and AI minds would most likely be unable to communicate and reach mutual understanding. The purposes of such super-smart systems may be hardly understandable to humans. Think of the relation between humans and the smallest insects: that roughly reflects the relation between Super AI and humans.

This super-powerful AI may also turn out to be as hard to understand as extraterrestrial creatures. On the other hand, many of humanity’s problems, such as death, hunger, and poverty, could be solved if humanity harnesses this super power.

The switch between General and Super AI already has a name: the intelligence explosion. The term was first used by I. J. Good, a British mathematician who worked with Alan Turing. In his paper “Speculations Concerning the First Ultraintelligent Machine”, Good approached the fears around “ultraintelligent machines” from a scientific point of view, and scientists across the world often reference it when discussing AI-related fears.

The stage between General AI and Super AI may well become the most important step, the one where human control can make a difference in the direction of further AI development. However, at the stage of narrow AI, we can hardly predict how things will play out on the higher levels.

Self-learning chatbots are on the good side

It’s definitely not yet time to fear self-learning chatbots, even if they seem pretty smart. Just remember that they are still largely programmed, even if they can generate responses on their own. When we reach General AI, that will be the time to think about how we treat such programs so that we don’t let them get out of hand.

If you’re interested in creating your own bot, you can also check out our platform BotEngine.
