Haley AI-as-a-Service and Tensorflow

There is a lot of excitement in the Machine Learning world around Deep Learning and Neural Networks, and one of the most popular libraries available for creating Deep Learning models is Tensorflow ( https://www.tensorflow.org/ ).

What follows is an example of using a Tensorflow model within a Haley AI-as-a-Service dialog.

A few quick notes about Haley AI-as-a-Service:

Haley provides a software platform for Artificial Intelligence Agents, which enables the automation of business processes such as chatting with a customer service agent, reacting to Internet-of-Things data to control devices, or classifying loan applications for risk assessment.  Haley integrates with a number of endpoints, which are the sources and destinations of Haley messages.  These endpoints include email, web applications, mobile applications, Facebook, Twitter, Slack, SMS, IoT devices, Amazon Alexa, and others.  Communication with an endpoint occurs over a channel, which groups messages together, such as a #sales-team channel for communication among members of a sales team.  AI Agents, called bots, receive and send messages on channels.  Bots use dialogs (a series of steps to handle events) and workflows (business processes that may be composed of many dialogs) to accomplish tasks.
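As a rough mental model (the class and method names here are purely illustrative, not the actual Haley API), a bot listens on a channel and applies its dialog, an ordered series of steps, to each incoming message:

```python
# Illustrative sketch only -- not the real Haley API.
# A dialog is an ordered series of steps; a bot applies its
# default dialog to each message arriving on a channel.

class Dialog:
    def __init__(self, steps):
        self.steps = steps  # each step: message -> message

    def handle(self, message):
        for step in self.steps:
            message = step(message)
        return message

class Bot:
    def __init__(self, default_dialog):
        self.default_dialog = default_dialog

    def on_channel_message(self, text):
        return self.default_dialog.handle(text)

# A toy dialog that uppercases the message and then echoes it back
dialog = Dialog([str.upper, lambda m: f"You said: {m}"])
bot = Bot(dialog)
print(bot.on_channel_message("hello"))  # -> You said: HELLO
```

The dialog we build below follows this same shape: one step reads the text, one step calls the model, and one step sends the reply.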

Haley AI-as-a-Service and ML Models:

Haley AI-as-a-Service supports a variety of machine learning models, including those from Apache Spark MLlib and Tensorflow.  We also support libraries such as Keras, used with Tensorflow, and Intel’s BigDL ( https://bigdl-project.github.io/ ), used with Spark.  Based on customer demand, we continue to add support for others.

In this example, we’ll first create a model to classify text.  Then we’ll use this model in a dialog to classify the text that occurs on a channel, using a chat web application to communicate on the channel and see the results.  This classification of the text into a topic could be used in the dialog to react to what a person is saying, but in our short example we’ll just report back the topic.  So, if someone says something like: “My ford needs an oil change”, we want to classify this to be about “Cars” and respond, “Your message appears to be about Cars.”

Creating the Tensorflow Model

For the Tensorflow model, we’ll be using the Keras Deep Learning Library ( https://keras.io/ ) running on the Tensorflow backend.

There is a great tutorial here: https://blog.keras.io/using-pre-trained-word-embeddings-in-a-keras-model.html  which covers creating a text classification model using word2vec-style word embeddings, specifically the GloVe word embeddings from Stanford: https://nlp.stanford.edu/projects/glove/ .  This tutorial uses the 20 Newsgroups dataset, which consists of around 20,000 USENET postings evenly divided across 20 categories.
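The embedding setup in that tutorial boils down to parsing the GloVe text file into a word-to-vector index, then filling one row of an embedding matrix per word in the corpus vocabulary.  Here is a minimal sketch of that approach, with a tiny inline stand-in for the glove.6B.100d.txt file and a made-up vocabulary:

```python
import numpy as np

# Tiny stand-in for glove.6B.100d.txt: each line is a word
# followed by its vector components.
glove_lines = [
    "ford 0.1 0.2 0.3",
    "oil 0.4 0.5 0.6",
    "change 0.7 0.8 0.9",
]

EMBEDDING_DIM = 3  # the real GloVe vectors are 100-dimensional

# word -> vector index, as in the Keras pretrained-embeddings tutorial
embeddings_index = {}
for line in glove_lines:
    values = line.split()
    embeddings_index[values[0]] = np.asarray(values[1:], dtype="float32")

# vocabulary of our corpus, as a Tokenizer would produce (1-indexed)
word_index = {"ford": 1, "oil": 2, "change": 3, "unknownword": 4}

# one matrix row per vocabulary word; words missing from GloVe
# keep an all-zero row
embedding_matrix = np.zeros((len(word_index) + 1, EMBEDDING_DIM))
for word, i in word_index.items():
    vector = embeddings_index.get(word)
    if vector is not None:
        embedding_matrix[i] = vector

print(embedding_matrix[1])
```

In the tutorial, this matrix initializes a frozen Keras Embedding layer (the `embedding_layer` referenced in the training code below).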

The categories include rec.autos, rec.motorcycles, rec.sport.baseball, and comp.graphics (and 16 others).

The complete code for training the model can be found here:


The critical training part of the code is:

from keras.layers import Input, Conv1D, Dense, Flatten, MaxPooling1D
from keras.models import Model

print('Training model.')

# train a 1D convnet with global maxpooling
sequence_input = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32')
embedded_sequences = embedding_layer(sequence_input)
x = Conv1D(128, 5, activation='relu')(embedded_sequences)
x = MaxPooling1D(5)(x)
x = Conv1D(128, 5, activation='relu')(x)
x = MaxPooling1D(5)(x)
x = Conv1D(128, 5, activation='relu')(x)
x = MaxPooling1D(35)(x)
x = Flatten()(x)
x = Dense(128, activation='relu')(x)
preds = Dense(len(labels_index), activation='softmax')(x)

model = Model(sequence_input, preds)

# the model must be compiled before it can be trained
model.compile(loss='categorical_crossentropy',
              optimizer='rmsprop',
              metrics=['acc'])

model.fit(x_train, y_train,
          validation_data=(x_val, y_val))

# save the trained model to a file for upload to Haley
model.save('model.h5')  # filename here is illustrative

We can run this in Jupyter ( http://jupyter.org/ ) to create the model, and then save the model to a file.

In a production application, the model would be trained within Haley, with new models swapped in on an ongoing basis, but in this example we are uploading the trained model file from a local machine.

Here’s a screenshot from within Jupyter:


Note that we’re saving the model with model.save() at the end of training.  We’ve also turned on logging for TensorBoard in the above screenshot.

The tutorial reports an accuracy of around 95%.

Once we have our trained model file we upload it to the Haley Admin Dashboard and deploy it.  Now we’re ready to call it from a dialog.

Creating the Dialog


The above screenshot is the Haley Dialog Designer tool, which is a visual drag-and-drop interface to create dialogs.  We drag-and-drop and configure a handful of steps to create the dialog.  The important ones are:



This step in the dialog gets a text message on the channel and puts it into a fact variable called textFact.



This step in the dialog (shown selected, with its Configure panel on the right) calls the Tensorflow model, passing in the parameter textFact, which is classified by the model, and putting the results into the variable classifyResults.





This step in the dialog sends a message out on the channel, reporting back the classification using the classifyResults fact.



For reporting back the classification, we take the top-most category and its score and send the message: “That appears to be about: $category with a score of $score”.  For diagnostic purposes, we also send back the full list of results as a JSON list.
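The exact fact-variable mechanics are Haley’s, but the reporting logic itself is simple: pick the highest-scoring category/score pair and interpolate it into the reply template.  A sketch with made-up classification results:

```python
import json

# hypothetical contents of the classifyResults fact
classify_results = [
    ("rec.sport.baseball", 0.7076626),
    ("rec.motorcycles", 0.07),
    ("sci.space", 0.02),
]

# take the top-most category and its score
category, score = max(classify_results, key=lambda pair: pair[1])
reply = f"That appears to be about: {category} with a score of {score}"
print(reply)

# full result list as JSON, for diagnostics
print(json.dumps([{"category": c, "score": s} for c, s in classify_results]))
```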

Once we’ve created the dialog, we then need to connect it to a bot and a channel.

Haley Admin Dashboard, Bot Screen:


Here in the Haley Admin Dashboard, we create a new bot that just contains our new dialog, and set the dialog as the default, so it is the default action for messages that the bot receives.

Haley Admin Dashboard, Channel Screen:


And here in the dashboard we connect the bot up to the channel “docclassify”.  Now, any user who has access to that channel over an endpoint, such as in a web application, can send messages on the channel and reach our new classifying bot.


Using the Tensorflow Model in a Chat Interface


Now, by logging into a web application connected to Haley we can see the available channels on the left, select the “docclassify” channel, and send a message like:

Person: “sam hit the ball over the fence for a homerun”

and we get our answer back:

Haley: “That appears to be about: rec.sport.baseball with a score of 0.7076626”

We also send the complete classification and score list for diagnostics:


Based on the scores, the “baseball” category is the clear winner, with a score of 0.70 compared to the next-best score of 0.07 for “motorcycles”, so the model is roughly 70% “sure” that the correct answer is “baseball”.
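Because the final Dense layer uses a softmax activation, the scores form a probability distribution over the categories: each is non-negative and they sum to 1, which is why a 0.70 score can be read as the model being roughly 70% “sure”.  A quick numeric illustration with made-up raw outputs (logits) for three of the categories:

```python
import math

# made-up logits; only the relative sizes matter
logits = {"rec.sport.baseball": 2.3, "rec.motorcycles": 0.0, "sci.space": -1.2}

# softmax: exponentiate, then normalize so the scores sum to 1
total = sum(math.exp(v) for v in logits.values())
scores = {k: math.exp(v) / total for k, v in logits.items()}

print(scores)
print(sum(scores.values()))  # 1.0, up to floating-point rounding
```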

Using other Tensorflow & ML Models on Haley AI-as-a-Service

In this example, we’ve created a new model, trained it, and uploaded it to Haley.

If you would like to incorporate ML models into Haley AIaaS, there are a few options:

  • You create the model, train it, and deploy it on Haley AIaaS, as we have done in this example
  • Vital AI creates the model, trains it, and/or deploys it for you to use
  • Use an “off the shelf” model that Haley already uses, or one taken from open sources, potentially training it with your data

Additionally, the training of the models can take place on our infrastructure, which is particularly useful for ongoing training scenarios where a data pipeline periodically re-trains the model to incorporate new data.  An external vendor, such as Databricks or Google, could also be used for this training, with some additional data coordination to share the training data.  To reduce latency, the “inference” step (using the model to make a prediction) should be integrated as closely as possible, so it usually resides within Haley AIaaS, although there can always be exceptional cases.


In this example, we have:

  • Trained a text classification model using Tensorflow and Jupyter
  • Uploaded the model and deployed it using the Haley Admin Dashboard
  • Using the Visual Designer, created a dialog that uses the model to classify incoming text messages
  • Added dialog steps to generate response messages based on the classification, connected the dialog to a bot, and connected the bot to a channel
  • Used a web application logged in to Haley to send messages on the channel and receive replies

This example can be extended in many ways, including:

  • Connect to other endpoints besides a web application, classifying Tweets, Facebook messages, emails, SMS messages, and others
  • Use a Tensorflow model to process different types of messages, such as those from IoT devices or images
  • Use a generative Tensorflow model that creates a response to an input rather than classifying the input.  Such models can generate text, audio, images, or actions — such as a proactive step to prevent fraud
  • Add Tensorflow models to workflows to incorporate them into business processes, such as processing insurance claims

If you would like to incorporate Tensorflow or other ML models into Haley AIaaS, you could create the model, we at Vital AI could create it for you, or an off-the-shelf model could be used.

To train the model, either you could train it, we could train it on our infrastructure, or a third party vendor could be used — such as Google’s Cloud ML Engine for Tensorflow.

I hope you have enjoyed learning how the Haley AI-as-a-Service platform can utilize Tensorflow models.  Please contact us to learn more!
