Haley AI-as-a-Service and Tensorflow

There is a lot of excitement in the Machine Learning world around Deep Learning and Neural Networks, and one of the most popular libraries available for creating Deep Learning models is Tensorflow ( https://www.tensorflow.org/ ).

What follows is an example of using a Tensorflow model within a Haley AI-as-a-Service dialog.

A few quick notes about Haley AI-as-a-Service:

Haley provides a software platform for Artificial Intelligence Agents, which enables automation of business processes such as chatting with a customer service agent, reacting to Internet-of-Things data to control devices, or classifying loan applications for risk assessment.  Haley integrates with a number of endpoints, which are the sources and destinations of Haley messages.  These endpoints include email, web applications, mobile applications, Facebook, Twitter, Slack, SMS, IoT devices, Amazon Alexa, and others.  Communication with an endpoint takes place over a channel, which groups messages together, such as a channel #sales-team for communication among members of a sales team.  AI Agents, called bots, receive and send messages on channels, and bots use dialogs (a series of steps to handle events) and workflows (business processes that may be composed of many dialogs) to accomplish tasks.

Haley AI-as-a-Service and ML Models:

Haley AI-as-a-Service supports a variety of machine learning models, including those from Apache Spark MLlib and Tensorflow.  We also support libraries such as Keras, used with Tensorflow, and BigDL from Intel ( https://bigdl-project.github.io/ ), used with Spark.  Based on customer demand, we continue to add support for others.

In this example, first, we’ll create a model to classify text.  Then, we’ll use this model in a dialog to classify the text that occurs in a channel.  To see the results, we’ll use a chat web application to communicate on the channel.  This classification of the text into a topic could be used in the dialog to react to what a person is saying, but in our short example we’ll just report back the topic.  So, if someone says something like: “My Ford needs an oil change”, we want to classify this as being about “Cars” and respond, “Your message appears to be about Cars.”

Creating the Tensorflow Model

For the Tensorflow model, we’ll be using the Keras Deep Learning Library ( https://keras.io/ ) running on the Tensorflow backend.

There is a great tutorial here: https://blog.keras.io/using-pre-trained-word-embeddings-in-a-keras-model.html which covers creating a text classification model on top of word2vec-style word embeddings, using the pre-trained GloVe embeddings from Stanford: https://nlp.stanford.edu/projects/glove/ .  The tutorial uses the 20 Newsgroups (“20news”) dataset, which consists of around 20,000 USENET postings divided roughly equally into 20 categories.

The categories include:

rec.autos
rec.motorcycles
rec.sport.baseball
rec.sport.hockey
(and 16 others)

The complete code for training the model can be found here:

https://github.com/fchollet/keras/blob/master/examples/pretrained_word_embeddings.py
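
The training snippet below assumes that the documents have already been tokenized and that an embedding_layer has been built from the GloVe vectors.  Condensed and lightly adapted from the tutorial code, that preparation looks roughly like the following sketch (the GloVe file path is an assumption, and texts is the list of raw document strings read from the 20news folders, as in the full example):

import numpy as np
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.layers import Embedding

MAX_SEQUENCE_LENGTH = 1000
MAX_NUM_WORDS = 20000
EMBEDDING_DIM = 100

# build a word -> vector index from the pre-trained GloVe file
embeddings_index = {}
with open('glove.6B.100d.txt') as f:
    for line in f:
        values = line.split()
        embeddings_index[values[0]] = np.asarray(values[1:], dtype='float32')

# turn the raw documents (texts) into padded sequences of word indices
tokenizer = Tokenizer(num_words=MAX_NUM_WORDS)
tokenizer.fit_on_texts(texts)
sequences = tokenizer.texts_to_sequences(texts)
data = pad_sequences(sequences, maxlen=MAX_SEQUENCE_LENGTH)

# copy the GloVe vectors for our vocabulary into an embedding matrix
num_words = min(MAX_NUM_WORDS, len(tokenizer.word_index)) + 1
embedding_matrix = np.zeros((num_words, EMBEDDING_DIM))
for word, i in tokenizer.word_index.items():
    if i >= MAX_NUM_WORDS:
        continue
    vector = embeddings_index.get(word)
    if vector is not None:
        embedding_matrix[i] = vector

# a frozen embedding layer, used as the first layer of the model below
embedding_layer = Embedding(num_words, EMBEDDING_DIM,
                            weights=[embedding_matrix],
                            input_length=MAX_SEQUENCE_LENGTH,
                            trainable=False)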

The critical training part of the code is:

# imports used by this snippet (from the full example)
from keras.layers import Input, Conv1D, MaxPooling1D, Flatten, Dense
from keras.models import Model

print('Training model.')

# train a 1D convnet with global maxpooling
sequence_input = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32')
embedded_sequences = embedding_layer(sequence_input)
x = Conv1D(128, 5, activation='relu')(embedded_sequences)
x = MaxPooling1D(5)(x)
x = Conv1D(128, 5, activation='relu')(x)
x = MaxPooling1D(5)(x)
x = Conv1D(128, 5, activation='relu')(x)
x = MaxPooling1D(35)(x)
x = Flatten()(x)
x = Dense(128, activation='relu')(x)
preds = Dense(len(labels_index), activation='softmax')(x)

model = Model(sequence_input, preds)
model.compile(loss='categorical_crossentropy',
              optimizer='rmsprop',
              metrics=['acc'])

model.fit(x_train, y_train,
          batch_size=128,
          epochs=10,
          validation_data=(x_val, y_val))

We can use Jupyter ( http://jupyter.org/ ) to run this to create the model, and then save the model to a file.

In a production application, the model would be trained within Haley, with new models swapped in on an ongoing basis, but in this example we are uploading the trained model file from a local machine.

Here’s a screenshot from within Jupyter:

jupyter-training

Note that we’re saving the model with model.save() at the end of the training.  We’ve also turned on logging for TensorBoard in the above screenshot.
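
For completeness, here is a minimal sketch of those last two pieces, assuming the variable names from the training code above (the log directory and model file name are just placeholders):

from keras.callbacks import TensorBoard

# write training metrics for TensorBoard (view with: tensorboard --logdir=./logs)
tensorboard = TensorBoard(log_dir='./logs')

model.fit(x_train, y_train,
          batch_size=128,
          epochs=10,
          validation_data=(x_val, y_val),
          callbacks=[tensorboard])

# save the trained model to a single file so it can be uploaded to Haley
model.save('20news_classifier.h5')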

The tutorial reports an accuracy of around 95%.

Once we have our trained model file we upload it to the Haley Admin Dashboard and deploy it.  Now we’re ready to call it from a dialog.

Creating the Dialog

classify-dialog

The screenshot above shows the Haley Dialog Designer tool, a visual drag-and-drop interface for creating dialogs.  We drag, drop, and configure a handful of steps to create the dialog.  The important ones are:

chatrules

This step in the dialog gets a text message on the channel and puts it into a fact variable called textFact.

datascript

This step in the dialog (shown selected, with its Configure panel on the right) calls the Tensorflow model, passing in the textFact parameter to be classified by the model and putting the results into the classifyResults variable.

text_message

This step in the dialog sends a message out on the channel, reporting back the classification using the classifyResults fact.

To report back the classification, we take the top-scoring category and its score and send the message: “That appears to be about: $category with a score of $score”.  For diagnostic purposes, we also send back the full list of results as a JSON list.
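
To make the reporting step concrete, the formatting logic amounts to something like the following plain-Python sketch (this is an illustration only; it is not Haley’s datascript API, and the function name is hypothetical):

import json

def format_classification(labels, scores):
    """Pair category labels with the model's softmax scores and report the top result."""
    ranked = sorted(zip(labels, scores), key=lambda pair: pair[1], reverse=True)
    category, score = ranked[0]
    message = "That appears to be about: %s with a score of %s" % (category, score)
    diagnostics = json.dumps({"result": [[c, float(s)] for c, s in ranked]})
    return message, diagnostics

# example usage with a few hypothetical scores
message, diagnostics = format_classification(
    ['rec.sport.baseball', 'rec.motorcycles', 'rec.sport.hockey'],
    [0.7076626, 0.07813326, 0.074284434])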

Once we’ve created the dialog, we then need to connect it to a bot and a channel.

Haley Admin Dashboard, Bot Screen:

docclassify-bot

Here in the Haley Admin Dashboard, we create a new bot that just contains our new dialog, and set the dialog as the default, so it is the default action for messages that the bot receives.

Haley Admin Dashboard, Channel Screen:

docclassify-channel

And here in the dashboard we connect the bot up to the channel “docclassify”.  Now, any user who has access to that channel over an endpoint, such as in a web application, can send messages on the channel and reach our new classifying bot.

Using the Tensorflow Model in a Chat Interface

doc-classify-screen

Now, by logging into a web application connected to Haley we can see the available channels on the left, select the “docclassify” channel, and send a message like:

Person: “sam hit the ball over the fence for a homerun”

and we get our answer back:

Haley: “That appears to be about: rec.sport.baseball with a score of 0.7076626”

We also send the complete classification and score list for diagnostics:

{“result”:[[“rec.sport.baseball”,0.7076626],[“rec.motorcycles”,0.07813326],[“rec.sport.hockey”,0.074284434],[“talk.religion.misc”,0.020479599],[“misc.forsale”,0.020106543],[“rec.autos”,0.017262887],[“alt.atheism”,0.016764276],[“talk.politics.misc”,0.014698057],[“sci.med”,0.013586524],[“comp.graphics”,0.006986827],[“talk.politics.mideast”,0.005926949],[“sci.electronics”,0.0049545723],[“sci.space”,0.0036540392],[“comp.sys.mac.hardware”,0.003515738],[“talk.politics.guns”,0.0030825695],[“comp.windows.x”,0.0028197556],[“comp.sys.ibm.pc.hardware”,0.0022112958],[“comp.os.ms-windows.misc”,0.0020292562],[“sci.crypt”,0.0013376401],[“soc.religion.christian”,0.0005032166]]}

Based on the scores, the “baseball” category is the clear winner, with a score of 0.70 compared to the next best score of 0.07 for “motorcycles”, so the model is roughly 70% “sure” that the correct answer is “baseball”.

Using other Tensorflow & ML Models on Haley AI-as-a-Service

In this example, we’ve created a new model, trained it, and uploaded it to Haley.

If you would like to incorporate ML models into Haley AIaaS, there are a few options:

  • You create the model, train it, and deploy it on Haley AIaaS, as we have done in this example
  • Vital AI creates the model, trains it, and/or deploys it for you to use
  • Use an “off-the-shelf” model that Haley already uses, or one taken from open sources, potentially trained with your data

Additionally, the training of the models can take place on our infrastructure, which is particularly useful for ongoing training scenarios where a data pipeline periodically re-trains the model to incorporate new data.  An external vendor, such as Databricks or Google, could also be used for this training, with some additional data coordination to share the training data.  To reduce latency, it’s usually best that the “inference” step (using the model to make a prediction) is as closely integrated as possible, so it usually resides within Haley AIaaS, although there can always be exceptional cases.

Wrap Up

In this example, we have:

  • Trained a text classification model using Tensorflow and Jupyter
  • Uploaded the model and deployed it using the Haley Admin Dashboard
  • Using the Visual Designer, created a dialog that uses the model to classify incoming text messages
  • Added dialog steps to generate response messages based on the classification, connected the dialog to a bot, and connected the bot to a channel
  • Used a web application logged in to Haley to send messages on the channel and receive replies

This example can be extended in many ways, including:

  • Connect to other endpoints besides a web application, for example classifying Tweets, Facebook messages, emails, SMS messages, and others
  • Use a Tensorflow model to process different types of messages, such as those from IoT devices or images
  • Use a generative Tensorflow model that creates a response to an input rather than classifying the input.  Such models can generate text, audio, images, or actions, such as a proactive step to prevent fraud
  • Add Tensorflow models to workflows to incorporate them into business processes, such as processing insurance claims

If you would like to incorporate Tensorflow or other ML models into Haley AIaaS, you could create the model, we at Vital AI could create it for you, or an off-the-shelf model could be used.

To train the model, you could train it yourself, we could train it on our infrastructure, or a third-party service could be used, such as Google’s Cloud ML Engine for Tensorflow.

I hope you have enjoyed learning about how the Haley AI-as-a-Service platform can utilize Tensorflow models.  Please contact us to learn more!

Vote for Haley AI-as-a-Service to speak at Botscamp

Voting is open for one more day at Botscamp to select speakers and we’re in the running!

Please check out our short video below pitching our presentation, and vote for us so that you can see the full presentation about Haley AI-as-a-Service online later this month at Botscamp!

Our presentation will cover how the Haley AI-as-a-Service platform provides A.I. automation for business tasks.

To learn more and vote, you can go to:

https://beeq.typeform.com/to/xaDqHp?source=bc_slack

The main Botscamp website can be found at http://www.botscamp.co/

Adventuring with a Facebook Messenger Bot and Haley AI-as-a-Service

For some retro tech fun, we recently published the classic text adventure game Colossal Cave Adventure (circa 1977) as a Facebook Messenger Bot running on the Haley AI-as-a-Service platform.

In this post I’ll describe how it works.

But first, do some adventuring!

Facebook Page: https://www.facebook.com/adventurebotai/

Messenger Link: http://m.me/adventurebotai

Here’s a screenshot of the beginning of the game:

adventurebot-1

Via the Haley AI dashboard, we can set up a Bot, connect it to a Facebook app (this is what we call an “Endpoint”), and connect the Bot to dialogs to process incoming messages and generate outgoing messages.  The dashboard also provides user management screens, analytics, data management, prediction models (via machine learning), and other functionality.

The heart of the Adventure Bot is a dialog, composed with the Haley Dialog Designer, which is a visual drag-and-drop tool to create dialogs:

advent-dialog1

Pictured above is the dialog for Adventure with a “ChatRule” step selected.  This step waits for a message from the adventurer, like “go north” or “kill dragon”.

Here are a few details about the important steps in the Adventure dialog (a conceptual sketch of how they fit together follows the step descriptions):

chatrules

The ChatRule step collects a text message and processes it into the “intent” of the message, turning text into structured data.

assign_fact

The Assign step assigns a value to a “fact”.  Here we get the output of the game based on the input message and assign it to a fact.

text_message

Using the Message step, we send the output of the game back to the user.

loop

We use the Loop step to loop back to the ChatRule step, and wait for the next message.
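
Putting those steps together, the dialog behaves like a simple read-eval-print loop around the game interpreter.  The plain-Python sketch below illustrates the control flow only; the channel and interpreter methods are hypothetical stand-ins, not actual Haley or Glulx APIs:

def adventure_dialog(channel, interpreter):
    """Conceptual flow of the Adventure dialog (all helper methods are hypothetical)."""
    while True:
        # ChatRule step: wait for the adventurer's next text message
        command = channel.receive_text()          # e.g. "go north" or "kill dragon"

        # Assign step: feed the command to the game and capture its output as a fact
        game_output = interpreter.run_turn(command)

        # Message step: send the game's response back out on the channel
        channel.send_text(game_output)

        # Loop step: return to the ChatRule step and wait for the next message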

For the game implementation, we used a port of the game for Inform7 (actually Inform6 code compiled using Inform7).  Inform7 is a wonderful interactive fiction design tool, which can be found here: http://inform7.com/

To give you a sense of what the game code looks like, here’s a snippet about a location:

Room In_Hall_Of_Mt_King "Hall of the Mountain King"
  with name 'hall' 'of' 'mountain' 'king',
       description
           "You are in the hall of the mountain king, with passages off in all directions.",
       cant_go "Well, perhaps not quite all directions.",
       u_to In_Hall_Of_Mists,
       e_to In_Hall_Of_Mists,
       n_to Low_N_S_Passage,
       s_to In_South_Side_Chamber,
       w_to In_West_Side_Chamber,
       sw_to In_Secret_E_W_Canyon,
       before [;
         Go:
           if (Snake in self && (noun == n_obj or s_obj or w_obj ||
               (noun == sw_obj && random(100) <= 35)))
               "You can't get by the snake.";
       ];

And here is a snippet about an object:

Object -> Snake "snake"
  with name 'snake' 'cobra' 'asp' 'huge' 'fierce' 'green' 'ferocious'
       'venemous' 'venomous' 'large' 'big' 'killer',
       description "I wouldn't mess with it if I were you.",
       initial "A huge green fierce snake bars the way!",
       life [;
         Order, Ask, Answer:
           "Hiss!";
         ThrowAt:
           if (noun == axe) <<Attack self>>;
           <<Give noun self>>;
         Give:
           if (noun == little_bird) {
               remove little_bird;
               "The snake has now devoured your bird.";
           }
           "There's nothing here it wants to eat (except perhaps you).";
         Attack:
           "Attacking the snake both doesn't work and is very dangerous.";
         Take:
           deadflag = 1;
           "It takes you instead. Glrp!";
       ],
  has animate;

Sorry, spoiler!  There is a snake in the game.

Inform7 has a different, more natural language based syntax.  If you are interested, here is a screencast about the syntax and the editor: https://vimeo.com/4221277

The game compiler produces a game “binary” for the Glulx virtual machine (I didn’t know what that was either).  Fortunately, there is a Glulx interpreter for Java available ( https://github.com/Banbury/zag ), so after some edits to make the interpreter more easily embeddable, we are able to use the interpreter and the game “binary” within Haley.

Haley has a number of different fact types to hold strings, numbers, dates, lists, et cetera.  Fortunately, this includes a fact type for a “Java Object”, so we can use an Assign step (see above) to set up the interpreter for Adventure and hold the game state in a JavaObject fact associated with the player.

The nice thing about this implementation is that we can support any interactive fiction story or game via the same method.  It will be interesting to include more storytelling capabilities within Haley, as well as to provide a platform for such experiences.  Two obvious upgrades would be to use our more robust ChatRules text parser and to include media (images, sound, and video) in the messages.  Please contact us if you would like to create such narrative experiences via the Haley platform!

I hope you’ve enjoyed learning about creating a Facebook Bot using the Haley AI-as-a-Service platform.

Please contact us to learn more about using Haley AI, and enjoy Adventuring!

Haley AI Dialog Demo Video

Here is a quick 3-minute video of some features of Haley AI, focusing on our visual dialog designer tool.

The video highlights:

  • Quickly creating a chatbot dialog using a visual design tool
  • Deploying the dialog in a web application or on Facebook
  • Using a dialog in a conversational e-commerce application
  • Using a form instead of a chat interface
  • Adding Haley AI to teams for collaboration

Hope you enjoyed the demonstration.

Please contact us today to learn about using Haley AI in your organization!

info@vital.ai

http://haley.ai/#contact

Welcome Xin Yee, our new intern from Singapore!

The Vital AI team is excited to welcome Xin Yee Wong as a new Sales and Marketing intern!

Xin Yee Wong

Xin Yee is a marketing major at the National University of Singapore and is participating in the NUS Enterprise program, helping to train Singapore’s next generation of entrepreneurs.  An important part of NUS Enterprise is the NUS Overseas Colleges Programme, which arranges for internships at innovative start-up companies around the world.  We at Vital AI are proud to participate and have Xin Yee join us for the next year!

Xin Yee will be focused on interesting and effective marketing ideas for Vital AI to increase customer outreach so that more people can benefit from Vital AI’s software and services.  A particular emphasis will be Vital AI’s upcoming launch of the Haley AI assistant service.

In addition to her sales and marketing activities, Xin Yee, as a budding entrepreneur, will be assisting with Vital AI’s strategic business goals including fundraising, product development, and partnerships.  She will also be taking over Vital AI’s social media outreach efforts.  So do keep a lookout on Vital AI’s social media channels for Xin Yee, and say hello!

Once more, we welcome Xin Yee to the United States, to New York City, and to Vital AI!

NY Tech Day 2016

We had a great time at NY Tech Day and we hope you did too!

IMG_3879

Here is Marc discussing some new Artificial Intelligence apps with guests at our booth.

And some other photos from the day:

IMG_3893.png

IMG_3883.png

IMG_3870.png

Thanks to everyone for dropping by our booth to learn more about building artificial intelligence applications using the Vital AI Development Kit!

We look forward to keeping in touch with all the awesome people we met yesterday.

Also, many thanks to the organizers for such a wonderful event!  Great job again, and looking forward to next time!

AI hacking at the Jibo Hackathon

I am very fortunate to be among the first few members of the nascent Jibo developer community, which kicked off today at the first Jibo Hackathon.

The Hackathon was held on the MIT campus, where social robotics was born.

After getting our development environments set up, we got our hands on the Jibo simulator, the SDK, and of course the early Jibo robots.

The Jibo development environment will be familiar to any web application developer, with some added screens reminiscent of Disney cell animation.

We spent some time with some sample code and the simulator.

And then, with a simple shell command on my Mac of ‘jibo run’, my newly created skill (Jibo-speak for “app”) is deployed to my robot friend for the day, and Jibo comes alive.

We got to experiment with a number of Jibo features: animating the Jibo body, Voice Recognition, Natural Language Understanding, Text-to-Speech, Dialogs, Face Tracking.

My first skill was pretty simplistic, but it included a bit of all the major features of the SDK: snapping a photo, displaying it on the screen, and asking if I liked it.  Plus some Jibo dance moves.  I was in the process of connecting the image up to a Deep Learning image classification API, which sort of worked except for my forgetfulness of JavaScript syntax, when we ran low on time and we all happily retired to the local pub.

The ease of working with the simulator and SDK must truly be emphasized.  There is a magic in creating an arc of motion in the simulator, hitting the “Run” button, and having Jibo swing into motion.

Looking forward to the arrival of Jibo in early Spring!  At Vital we’ll be honing our skills in the meantime.

jibo