One of the biggest pain points in building an AI chatbot today is knowing whether the training dataset is good enough to ensure higher success rates and less confusion for the conversational agent.
To tackle this issue (which we experienced firsthand while training our chatbots with third-party software), we at Kontiki AI have launched Alter NLU, an #OpenSource tool for training #AI-based #chatbots, powered by #DeepLearning.
Alter NLU provides ‘Reports’ for every bot built on the platform. These reports are an in-depth analysis of the training dataset: they call out its loopholes and recommend how to overcome them. Each report covers the distribution of intents per chatbot, intents that need more training sentences, limitations in the entity section, untagged entities extracted from the training dataset, and duplicate or redundant entries.
We have been working super hard on the latest version, and it’s finally ready! Here is a sneak peek:
Github AlterNLU v1.0.0-beta (https://github.com/Kontikilabs/alter-nlu/tree/v1.0.0-beta):
1. Integrated a CRF-based entity model.
i. Handles Out-of-vocabulary words.
ii. Considers sentence structure.
iii. Added a new key to the API response, named “parsed_value”: the value extracted from the user query, mapped to the relevant entity.
iv. Renamed the “category” key to “name”.
2. Updated the intent model saving algorithm for enhanced accuracy in intent detection.
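To make the API changes above concrete, here is a minimal sketch of what a response might look like. The exact schema, intent names, and entity values are hypothetical; only the “name” and “parsed_value” keys are taken from the release notes above.

```python
# Illustrative sketch only: the overall response shape and the sample
# intent/entity values are assumptions, not the documented Alter NLU schema.
sample_response = {
    "intent": {"name": "book_flight", "confidence": 0.94},
    "entities": [
        {
            "name": "city",              # formerly returned under the "category" key
            "value": "NYC",              # raw text matched in the user query
            "parsed_value": "New York",  # extracted value mapped to the relevant entity
        }
    ],
}

# "parsed_value" lets downstream code work with the canonical entity value
# instead of the raw surface form from the query.
for entity in sample_response["entities"]:
    print(entity["name"], "->", entity["parsed_value"])
```

The design intent is that bot logic keys off `parsed_value`, so synonyms or abbreviations in user input all resolve to one canonical value.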
Kontiki Console:
1. Enabled an option to download training datasets in RASA NLU format alongside the Alter NLU format.
2. A more user-friendly UI for creating datasets without mistakes.
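For readers unfamiliar with the export target mentioned above, here is a minimal sketch of the legacy RASA NLU JSON training-data layout (a top-level `rasa_nlu_data` object holding `common_examples`, with character-offset entity annotations). The sample intent and entity values are made up for illustration.

```python
import json

# Minimal example of the legacy RASA NLU JSON training-data format;
# the sample sentence, intent, and entity below are illustrative only.
rasa_training_data = {
    "rasa_nlu_data": {
        "common_examples": [
            {
                "text": "book a flight to New York",
                "intent": "book_flight",
                "entities": [
                    # start/end are character offsets into "text"
                    {"start": 17, "end": 25, "value": "New York", "entity": "city"}
                ],
            }
        ]
    }
}

print(json.dumps(rasa_training_data, indent=2))
```

Exporting in this shape means a dataset built on the console can be dropped straight into a RASA NLU training pipeline without manual conversion.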
Give it a try and see for yourself! - https://console.kontikilabs.com
#naturallanguageprocessing #chatbots #voicebots
For additional information, visit the URLs below:
https://www.youtube.com/channel/UC6NVFVbYr_HMB58IIUbeJwg/featured
https://medium.com/kontikilabs