Watson Conversation (WCS) Intent Design

By Jeremiah Mannings
Consultant in Melbourne


This series of articles is designed to offer a useful overview of the capabilities and functions of the Watson Conversation API for use in your own projects. For the next few articles, a banking/finance demo will be used to explain the concepts. There will be a number of tips, tricks, and shortcuts explained so you can start building your own bots in no time! This is aimed both at technical people and at those wanting a deeper understanding of some of the core NLP concepts that relate to chatbot development.


  • Watson Conversation Entity & Dialog Design (coming soon!)
  • Watson Conversation WCS Further Tips & Tricks (coming soon!)

Intent Design

Intents are the backbone of the conversational system. They are used to ascertain what the user is trying to ask: the ‘intent’ of their communication. Typically, intents are added to the system as unique questions, but an intent is not just a question; it is the ‘intent’ of the user. Matching it relies less on the action word (the thing the user is asking about) and more on the phrasing of the question.

A good example of this is:

  • “How do I buy a house”
  • “How do I get a unit”

These are different questions, but they share the same general intent of ‘How do I buy …’, with only the action word changing. The action words are then picked up by the entity design, which will be discussed later.

Shifting the ethos of intent design in this way makes training intents far easier. The main premise of intents is sentence phrasing. This enables you to combine similar intents into groups, with a primary intent and example variations. An example of a primary intent may be “I want to get a loan”. Some variations of this primary could be:

  • “I want a loan”
  • “How do I get a loan?”
  • “What is the process to get a loan?”

The actual intent of these questions is identical to that of the primary: the user wants to get a loan. Therefore, they can be grouped into one intent. This lets the conversation AI focus on identifying the phrasing and usage of the intent and less on differentiating the words (which is the role of entities in the system; entities will be covered in the next article!).
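As a concrete sketch, the grouping above can be expressed in the JSON shape Watson uses for workspace intents (an intent name plus a list of examples). The intent name `wantloan` and the helper function are assumptions for illustration:

```python
import json

def build_intent(name, primary, variations):
    """Group a primary intent example and its variations into one
    workspace-style intent: {"intent": ..., "examples": [...]}."""
    examples = [{"text": primary}] + [{"text": v} for v in variations]
    return {"intent": name, "examples": examples}

want_loan = build_intent(
    "wantloan",
    "I want to get a loan",
    ["I want a loan",
     "How do I get a loan?",
     "What is the process to get a loan?"],
)

# A workspace fragment holding the grouped intent
workspace_fragment = {"intents": [want_loan]}
print(json.dumps(workspace_fragment, indent=2))
```

Grouping variations under a single intent like this, rather than creating one intent per question, is what lets the classifier learn the shared phrasing.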


Example of Intent training:

Primary Intent: I want to get a loan*

Variations: I want a loan, How do I get a loan?, What is the process to get a loan?, Help me get a loan, What do I do to get a loan

Primary Intent: What is stamp duty**

Variations: What does stamp duty do?, Tell me what stamp duty is?, Can you explain stamp duty?

* This is asking ‘I want to get a …’ and variations of how to articulate that question.

** This is asking ‘What is …’ and variations along that line; the entity match then determines how to answer the question.

Another way of considering this intent design is the example shown below (although the system does not actually treat it like this). The system gives less weight to the actual loan type at the end, as it is matching the general phrasing and structure of the questions.

Primary Intent: I want to get a @loantypes

Variations: I want a @loantypes, How do I get a @loantypes?, What is the process to get a @loantypes?, Help me get a @loantypes, What do I do to get a @loantypes

In this way the system matches the intent, and the entity then differentiates what content to show. This significantly reduces the number of intents needed and increases the accuracy of the system, as the content shown depends on matching both an intent and a subsequent entity, which produces fewer false positives.

Using generalised intent examples like these, with entities for matching, produces the following dialog structure. The first node matches the loan intent; when it does, it jumps to the first of its child nodes to try to match on the entity value given. This then cascades down the child nodes until it finds the correct value or hits a jump-to node that takes the query back to the main set if nothing is matched.
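The cascade just described can be sketched in plain Python. This is not the WCS runtime, only an illustration of the node logic; the loan types and answer strings are made up:

```python
# Hypothetical @loantypes values and their answers (child nodes, in order)
ANSWERS = {
    "home": "Here is how to apply for a home loan...",
    "car": "Here is how to apply for a car loan...",
    "boat": "Here is how to apply for a boat loan...",
}

def dialog_cascade(intent, entity_value):
    """Mimic the parent/child node cascade described above."""
    if intent != "wantloan":                  # parent node: #wantloan
        return "no match"
    for loan_type, answer in ANSWERS.items(): # try each child node in turn
        if entity_value == loan_type:         # e.g. @loantypes:boat
            return answer
    return "jump back to main set"            # fall-through jump-to node

print(dialog_cascade("wantloan", "boat"))
```

The key property is the ordered fall-through: each child is tried in sequence, and an unmatched entity lands on the jump-to node rather than a dead end.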

Testing this in the workspace shows that the system has matched both the loan intent and the entity that was given, and provided the answer.

This is the ‘recommended’ way of doing it; however, there is a simpler approach that may suit a simpler setup.

The initial node in the previous example can be bypassed and the matching done within the nodes themselves, using dual triggers of both the intent and the entity. It is technically more work to enact, but it may reduce the changes needed to a training front end that is already built, which could make this the better option.

This is the easier way of doing it. The main disadvantage is continually re-entering the intent; however, that is avoided through the use of the custom training system. This allows the current system to continue as-is while employing entities to further enhance and expand it.

This creates a dual trigger condition, which evaluates to: #wantloan AND @loantype:boat

This means the query is assessed on both the intent and the entity in a single node. This can also be combined with confidence thresholds, which are covered later on. Multiple entity matches can also be used for complex, multifaceted queries.
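A minimal sketch of evaluating that dual trigger in code: the response dict below mimics the shape of a WCS message result (intents carry a confidence, entities carry the matched value), and the intent/entity names are assumptions carried over from the examples above:

```python
def dual_trigger(response, intent, entity, value, min_confidence=0.2):
    """Evaluate a condition like '#wantloan AND @loantype:boat'
    against a parsed response."""
    intent_hit = any(
        i["intent"] == intent and i["confidence"] >= min_confidence
        for i in response.get("intents", [])
    )
    entity_hit = any(
        e["entity"] == entity and e["value"] == value
        for e in response.get("entities", [])
    )
    return intent_hit and entity_hit

# Hypothetical parse of "I want to get a boat loan"
response = {
    "intents": [{"intent": "wantloan", "confidence": 0.91}],
    "entities": [{"entity": "loantype", "value": "boat"}],
}
print(dual_trigger(response, "wantloan", "loantype", "boat"))  # True
```

The `min_confidence` parameter shows where a confidence threshold would slot into the same condition.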


Overtraining

Overtraining in the system is where one intent has more examples than the others and thus ‘catches’ questions it shouldn’t with high confidence. This typically happens with uneven datasets, where high-traffic queries receive attention and outlying queries get less. The temptation is to add more variations and examples to certain intents because there is more user data to support it, but increasing the variations in one intent while keeping it uneven with the others can bias the system to favour the well-trained intent even when its answer is not the most correct. This is called overfitting the dataset.

To avoid this, intents and their variations should be reviewed regularly to make sure that the variations match the intents they are assigned to; their counts should also stay reasonably close to the average number of variations per intent across the rest of the system.

Variation Strength

There is a concept of ‘strong’ and ‘weak’ variations in the system. A strong variation is quite distinct from the rest of the dataset: there are no similar intents or variations anywhere in the system. Strong variations are very easy for the system to recognise and match because of their uniqueness. Weak variations are the opposite: if there are similarly phrased intents and variations elsewhere in the system, they are given less weight.

  • Strong variations will be given more preference
  • Weak variations will be ignored

This is why it can be very hard to surface intents when there are thousands of them, as the system will focus on and match the strong variations and give them more weight (confidence).

Try to avoid overtraining by adding too many weak variations; keep primary intents strong and use the strongest variations in the training set first. The ideal is an even spread of primary-to-variation ratios, aiming for between 10 and 20 variations per primary intent. If the ratios are kept even, the variations per intent can grow larger over time.
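One way to operationalise that review is a small script that flags intents whose example counts fall outside the 10–20 target band. The intent names and counts below are invented for illustration:

```python
def review_intents(intents, low=10, high=20):
    """Return intents whose variation counts fall outside [low, high],
    as a dict of {intent_name: count}."""
    flagged = {}
    for name, variations in intents.items():
        n = len(variations)
        if n < low or n > high:
            flagged[name] = n
    return flagged

training = {  # hypothetical variation lists, keyed by intent name
    "wantloan": ["..."] * 14,     # within the target band
    "stampduty": ["..."] * 4,     # under-trained, likely to be missed
    "openaccount": ["..."] * 35,  # over-trained, likely to dominate
}
print(review_intents(training))  # {'stampduty': 4, 'openaccount': 35}
```

Running a check like this on each retraining cycle keeps the ratios even as the workspace grows.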

Conclusion and notes

I hope this quick article starts to give you an insight into the way intents work in WCS. I am aiming to get the follow-up articles to this covering entities and dialog, as well as some helpful tricks and tips up soon.

Let me know what you thought of the article and be sure to comment if you have any questions or need any clarification!


www.jmannings.io www.medium.com/@jmannings

