Haley AI-as-a-Service and Tensorflow

There is a lot of excitement in the Machine Learning world around Deep Learning and Neural Networks, and one of the most popular libraries available for creating Deep Learning models is Tensorflow ( https://www.tensorflow.org/ ).

What follows is an example of using a Tensorflow model within a Haley AI-as-a-Service dialog.

A few quick notes about Haley AI-as-a-Service:

Haley provides a software platform for Artificial Intelligence Agents, which enables automation of business processes, such as chatting with a customer service agent, reacting to Internet-of-Things data to control devices, or classifying loan applications for risk assessment.  Haley integrates with a number of endpoints, which are the sources and destinations of Haley messages.  These endpoints include email, web applications, mobile applications, Facebook, Twitter, Slack, SMS, IoT devices, Amazon Alexa, and others.  Communication with an endpoint is over a channel, which groups messages together, such as a channel #sales-team for communication among members of a sales team.  AI Agents, called bots, receive and send messages on channels, and bots use dialogs (a series of steps to handle events) and workflows (business processes that may be composed of many dialogs) to accomplish tasks.

Haley AI-as-a-Service and ML Models:

Haley AI-as-a-Service supports a variety of machine learning models including those from Apache Spark MLlib and Tensorflow.  We also have support for libraries such as Keras used with Tensorflow and BigDL from Intel ( https://bigdl-project.github.io/ ) used with Spark.  Based on customer demand, we continue to add support for others.

In this example, first, we’ll create a model to classify text.  Then, we’ll use this model in a dialog to classify the text that occurs in a channel.  To see the results, we’ll use a chat web application to communicate on the channel.  This classification of the text into a topic could be used in the dialog to react to what a person is saying, but in our short example we’ll just report back the topic.  So, if someone says something like: “My ford needs an oil change”, we want to classify this to be about “Cars” and respond, “Your message appears to be about Cars.”

Creating the Tensorflow Model

For the Tensorflow model, we’ll be using the Keras Deep Learning Library ( https://keras.io/ ) running on the Tensorflow backend.

There is a great tutorial here: https://blog.keras.io/using-pre-trained-word-embeddings-in-a-keras-model.html which covers creating a text classification model using word2vec-style word embeddings, specifically the GloVe word embeddings from Stanford: https://nlp.stanford.edu/projects/glove/ .  This tutorial uses the 20 Newsgroups dataset, which consists of around 20,000 USENET postings roughly equally divided into 20 categories.
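
As a rough illustration of the embedding step the tutorial performs, here is a minimal pure-Python sketch of loading GloVe vectors and building an embedding matrix (the function names are ours, and the tutorial’s actual code uses NumPy arrays):

```python
def load_glove_index(lines):
    # Each GloVe line is "word v1 v2 ... vN"; build a word -> vector dict.
    index = {}
    for line in lines:
        parts = line.split()
        index[parts[0]] = [float(v) for v in parts[1:]]
    return index

def build_embedding_matrix(word_index, embeddings_index, dim):
    # Row i holds the vector for the word mapped to index i (zeros if unseen),
    # matching the layout expected by an Embedding layer.
    matrix = [[0.0] * dim for _ in range(len(word_index) + 1)]
    for word, i in word_index.items():
        vector = embeddings_index.get(word)
        if vector is not None:
            matrix[i] = vector
    return matrix
```

The resulting matrix initializes the weights of the embedding layer used in the training code below.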

The categories include:

rec.autos
rec.motorcycles
rec.sport.baseball
rec.sport.hockey
(and 16 others)

The complete code for training the model can be found here:

https://github.com/fchollet/keras/blob/master/examples/pretrained_word_embeddings.py

The critical training part of the code is:

print('Training model.')

# train a 1D convnet with global maxpooling
sequence_input = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32')
embedded_sequences = embedding_layer(sequence_input)
x = Conv1D(128, 5, activation='relu')(embedded_sequences)
x = MaxPooling1D(5)(x)
x = Conv1D(128, 5, activation='relu')(x)
x = MaxPooling1D(5)(x)
x = Conv1D(128, 5, activation='relu')(x)
x = MaxPooling1D(35)(x)
x = Flatten()(x)
x = Dense(128, activation='relu')(x)
preds = Dense(len(labels_index), activation='softmax')(x)

model = Model(sequence_input, preds)
model.compile(loss='categorical_crossentropy',
              optimizer='rmsprop',
              metrics=['acc'])

model.fit(x_train, y_train,
          batch_size=128,
          epochs=10,
          validation_data=(x_val, y_val))

We can use Jupyter ( http://jupyter.org/ ) to run this to create the model, and then save the model to a file.

In a production application, the model would be trained within Haley, with new models swapped in on an ongoing basis, but in this example we are uploading the trained model file from a local machine.

Here’s a screenshot from within Jupyter:

jupyter-training

Note we’re saving the model with model.save() at the end of the training.  We’ve also turned on logging for TensorBoard in the above screenshot.

The tutorial reports an accuracy of around 95%.

Once we have our trained model file we upload it to the Haley Admin Dashboard and deploy it.  Now we’re ready to call it from a dialog.

Creating the Dialog

classify-dialog

The above screenshot is the Haley Dialog Designer tool, which is a visual drag-and-drop interface to create dialogs.  We drag-and-drop and configure a handful of steps to create the dialog.  The important ones are:

chatrules


This step in the dialog gets a text message on the channel and puts it into a fact variable called textFact.

datascript

This step in the dialog (shown selected, with its Configure panel on the right) calls the Tensorflow model, passing in the parameter textFact to be classified by the model and putting the results into the variable classifyResults.

text_message

This step in the dialog sends a message out on the channel, reporting back the classification using the classifyResults fact.


For reporting back the classification, we take the top-most category and its score and send the message:  “That appears to be about: $category with a score of $score”.  For diagnostic purposes we also send the full list of results back in a JSON list.
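
Picking the winning category out of the results is straightforward; here is a minimal sketch (the function name and the list-of-pairs result format are assumptions on our part, based on the diagnostics output shown later):

```python
def format_reply(classify_results):
    # classify_results: list of [category, score] pairs from the model.
    # Take the pair with the highest score and build the reply message.
    category, score = max(classify_results, key=lambda pair: pair[1])
    return "That appears to be about: %s with a score of %s" % (category, score)
```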

Once we’ve created the dialog, we then need to connect it to a bot and a channel.

Haley Admin Dashboard, Bot Screen:

docclassify-bot

Here in the Haley Admin Dashboard, we create a new bot that just contains our new dialog, and set the dialog as the default, so it is the default action for messages that the bot receives.

Haley Admin Dashboard, Channel Screen:

docclassify-channel

And here in the dashboard we connect the bot up to the channel “docclassify”.  Now, any user who has access to that channel over an endpoint, such as in a web application, can send messages on the channel and access our new classifying bot.


Using the Tensorflow Model in a Chat Interface

doc-classify-screen

Now, by logging into a web application connected to Haley we can see the available channels on the left, select the “docclassify” channel, and send a message like:

Person: “sam hit the ball over the fence for a homerun”

and we get our answer back:

Haley: “That appears to be about: rec.sport.baseball with a score of 0.7076626”

We also send the complete classification and score list for diagnostics:

{"result":[["rec.sport.baseball",0.7076626],["rec.motorcycles",0.07813326],["rec.sport.hockey",0.074284434],["talk.religion.misc",0.020479599],["misc.forsale",0.020106543],["rec.autos",0.017262887],["alt.atheism",0.016764276],["talk.politics.misc",0.014698057],["sci.med",0.013586524],["comp.graphics",0.006986827],["talk.politics.mideast",0.005926949],["sci.electronics",0.0049545723],["sci.space",0.0036540392],["comp.sys.mac.hardware",0.003515738],["talk.politics.guns",0.0030825695],["comp.windows.x",0.0028197556],["comp.sys.ibm.pc.hardware",0.0022112958],["comp.os.ms-windows.misc",0.0020292562],["sci.crypt",0.0013376401],["soc.religion.christian",0.0005032166]]}

Based on the score, the “baseball” category is the clear winner, with a score of 0.70 compared to the next-best score of 0.07 for “motorcycles”, so the model is roughly 70% “sure” that the correct answer is “baseball”.

Using other Tensorflow & ML Models on Haley AI-as-a-Service

In this example, we’ve created a new model, trained it, and uploaded it to Haley.

If you would like to incorporate ML models into Haley AIaaS, there are a few options:

  • You create the model, train it, deploy it on Haley AIaaS as we have done in this example
  • Vital AI creates the model, trains it, and/or deploys it, for you to use
  • Use an “off the shelf” model that Haley already uses or one taken from open sources, potentially training it with your data

Additionally, the training of the models can take place on our infrastructure (this is particularly useful for ongoing training scenarios where a data pipeline periodically re-trains the model to incorporate new data), or an external vendor, such as Databricks or Google, could handle the training, with some additional data coordination to share the training data.  To reduce latency, the “inference” step (using the model to make a prediction) should be as closely integrated as possible, so it is usually best that it resides within Haley AIaaS, although there can always be exceptional cases.

Wrap Up

In this example, we have:

  • Trained a text classification model using Tensorflow and Jupyter
  • Uploaded the model and deployed it using the Haley Admin Dashboard
  • Using the Visual Designer, created a dialog that uses the model to classify incoming text messages
  • Added dialog steps to generate response messages based on the classification, and connected the dialog to a bot, and connected the bot to a channel
  • Used a web application logged in to Haley to send messages on the channel and receive replies

This example can be extended in many ways, including:

  • Connect to other endpoints besides a web application, such as classifying Tweets, Facebook messages, emails, SMS messages, and others
  • Use a Tensorflow model to process different types of messages, such as those from IoT devices or images
  • Use a generative Tensorflow model that creates a response to an input rather than classifying the input.  Such models can generate text, audio, images, or actions — such as a proactive step to prevent fraud
  • Add Tensorflow models to workflows to incorporate them into business processes, such as processing insurance claims

If you would like to incorporate Tensorflow or other ML Models into Haley AIaaS, you could create the model, we at Vital AI could create it for you, or an off the shelf model could be used.

To train the model, either you could train it, we could train it on our infrastructure, or a third party vendor could be used — such as Google’s Cloud ML Engine for Tensorflow.

I hope you have enjoyed learning about how the Haley AI-as-a-Service platform can utilize Tensorflow models.  Please contact us to learn more!

Vote for Haley AI-as-a-Service to speak at Botscamp

Voting is open for one more day at Botscamp to select speakers and we’re in the running!

Please check out our short video below pitching our presentation, and please vote for us to see the full presentation about Haley AI-as-a-Service online later this month at Botscamp!

Our presentation will cover the Haley AI-as-a-Service platform providing A.I. automation for business tasks.

To learn more and vote, you can go to:

https://beeq.typeform.com/to/xaDqHp?source=bc_slack

The main Botscamp website can be found at http://www.botscamp.co/


Adventuring with a Facebook Messenger Bot and Haley AI-as-a-Service

For some tech retro fun we recently published the classic text adventure game Colossal Cave Adventure (circa 1977) as a Facebook Messenger Bot running on the Haley AI-as-a-Service platform.

In this post I’ll describe how it works.

But first, do some adventuring!

Facebook Page: https://www.facebook.com/adventurebotai/

Messenger Link: http://m.me/adventurebotai

Here’s a screenshot of the beginning of the game:

adventurebot-1

Via the Haley AI dashboard, we can set up a Bot, connect it to a Facebook app (this is what we call an “Endpoint”), and connect the Bot to dialogs to process incoming messages and generate outgoing messages.  The dashboard also provides user management screens, analytics, data management, prediction models (via machine learning), and other functionality.

The heart of the Adventure Bot is a dialog, composed with the Haley Dialog Designer, which is a visual drag-and-drop tool to create dialogs:

advent-dialog1

Pictured above is the dialog for Adventure with a “ChatRule” step selected.  This step waits for a message from the adventurer, like “go north” or “kill dragon”.

Here are a few details about the important steps in the Adventure dialog:

chatrules

The ChatRule step collects a text message and processes it into the “intent” of the message, turning text into structured data.

assign_fact

The Assign step assigns a value to a “fact”.  Here we get the output of the game based on the input message and assign it into a fact.

text_message

Using the Message step, we send the output of the game back to the user.

loop

We use the Loop step to loop back to the ChatRule step, and wait for the next message.

For the game implementation, we used a port of the game for Inform7 (actually Inform6 code compiled using Inform7).  Inform7 is a wonderful interactive fiction design tool, which can be found here: http://inform7.com/

To give you a sense of what the game code looks like, here’s a snippet about a location:

Room In_Hall_Of_Mt_King "Hall of the Mountain King"
  with name 'hall' 'of' 'mountain' 'king',
  description
    "You are in the hall of the mountain king, with passages off in all directions.",
  cant_go "Well, perhaps not quite all directions.",
  u_to In_Hall_Of_Mists,
  e_to In_Hall_Of_Mists,
  n_to Low_N_S_Passage,
  s_to In_South_Side_Chamber,
  w_to In_West_Side_Chamber,
  sw_to In_Secret_E_W_Canyon,
  before [;
    Go:
      if (Snake in self && (noun == n_obj or s_obj or w_obj ||
          (noun == sw_obj && random(100) <= 35)))
        "You can't get by the snake.";
  ];

And here is a snippet about an object:

Object -> Snake "snake"
  with name 'snake' 'cobra' 'asp' 'huge' 'fierce' 'green' 'ferocious'
    'venemous' 'venomous' 'large' 'big' 'killer',
  description "I wouldn't mess with it if I were you.",
  initial "A huge green fierce snake bars the way!",
  life [;
    Order, Ask, Answer:
      "Hiss!";
    ThrowAt:
      if (noun == axe) <<Attack self>>;
      <<Give noun self>>;
    Give:
      if (noun == little_bird) {
        remove little_bird;
        "The snake has now devoured your bird.";
      }
      "There's nothing here it wants to eat (except perhaps you).";
    Attack:
      "Attacking the snake both doesn't work and is very dangerous.";
    Take:
      deadflag = 1;
      "It takes you instead. Glrp!";
  ],
  has animate;

Sorry, spoiler!  There is a snake in the game.

Inform7 has a different, more natural language based syntax.  If you are interested, here is a screencast about the syntax and the editor: https://vimeo.com/4221277

The game compiler produces a game “binary” for the Glulx virtual machine (I didn’t know what that was either).  Fortunately there is a Glulx interpreter for Java available ( https://github.com/Banbury/zag ) so after some edits to the interpreter to make it more easily embeddable, we are able to use the interpreter and the game “binary” within Haley.

Haley has a number of different types of facts to hold strings, numbers, dates, lists, et cetera — and fortunately this includes a fact-type for a “Java Object”, so we can use an Assign step (see above) to set up the interpreter for Adventure, and hold the game state in a JavaObject fact associated with the player.
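
The per-player state handling can be sketched as follows (Python for illustration; the class and function names here are hypothetical, and the real implementation holds an instance of the Java “zag” interpreter in a JavaObject fact):

```python
class GlulxSession:
    # Hypothetical stand-in for the embedded Glulx interpreter; the real
    # implementation wraps the Java "zag" interpreter and its game state.
    def __init__(self):
        self.history = []

    def step(self, command):
        self.history.append(command)
        return "You typed: " + command  # the real interpreter returns game output

# One session per player, analogous to a JavaObject fact keyed to the player.
sessions = {}

def handle_message(player_id, text):
    # Look up (or create) this player's session and advance the game one turn.
    session = sessions.setdefault(player_id, GlulxSession())
    return session.step(text)
```

Because each player’s state lives in their own session object, two adventurers on the same channel endpoint can explore the cave independently.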

The nice thing about this implementation is that we can support any interactive fiction story/game via the same method.  It will be interesting to include more storytelling capabilities within Haley, as well as provide a platform for such experiences.  Please contact us if you would like to create such narrative experiences via the Haley platform!  Two obvious upgrades would be to use our more robust ChatRules text parser and to include media (images, sound, and video) in the messages.

I hope you’ve enjoyed learning about creating a Facebook Bot using the Haley AI-as-a-Service platform.

Please contact us to learn more about using Haley AI, and enjoy Adventuring!


Haley AI Dialog Demo Video

Here is a quick 3 minute video of some features of Haley AI, focusing on our visual dialog designer tool.

The video highlights:

  • Quickly creating a chatbot dialog using a visual design tool
  • Deploying the dialog in a web application or on Facebook
  • Using a dialog in a conversational e-commerce application
  • Using a form instead of a chat interface
  • Adding Haley AI to teams for collaboration


Hope you enjoyed the demonstration.

Please contact us today to learn about using Haley AI in your organization!

info@vital.ai

http://haley.ai/#contact


Vital.AI @ NY TechDay on Tuesday, April 18th!

c1b4f38d2d

We’re excited for TechDay coming up next Tuesday! (April 18th)

Join 35,000 other tech lovers and check out the booths of 574 startups + 1 very special startup (us!).

When you stop by our booth, we can show you a sneak preview of our A.I. platform Haley.ai, including a visual designer tool to quickly build A.I. Agents for your business.

Here are the TechDay details:

Cost: Free!
When: April 18th, 10am to 5pm
Where: Pier 94 @ 711 12th Avenue, New York, NY
More details: https://techdayhq.com/new-york

See you there!


VDK Release 0.2.304, VitalService ASync with Kafka

We released VDK 0.2.304 a couple weeks back, and it contains many new features.

One of the most significant is the asynchronous version of the VitalService API, which uses an underlying realtime distributed messaging system in its implementation.

Quick aside for some definitions: In synchronous systems, one component sends a request and waits for an answer.  This usually involves “blocking” — holding on to system resources while waiting.  In asynchronous systems, one component sends a request and moves on to other things, processing the answer when it arrives.  This is usually “non-blocking”, with no or limited resources being held while a response is pending.  Applications usually combine both methods — there are advantages and disadvantages to each.  Generally, asynchronous requests require more overhead but can scale up to large numbers, while synchronous requests can get individual answers much quicker.  Most modern websites include asynchronous communication with the server, whereas microservices ( https://en.wikipedia.org/wiki/Microservices ) typically communicate synchronously.
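
The difference can be sketched in a few lines of Python (the names here are illustrative only, not part of the VitalService API):

```python
import threading

def fetch_sync(request):
    # Synchronous: the caller blocks until the answer is ready.
    return request.upper()  # stand-in for a blocking REST call

def fetch_async(request, callback):
    # Asynchronous: return immediately; the callback fires when the answer arrives.
    def worker():
        callback(request.upper())  # stand-in for work done on another component
    threading.Thread(target=worker).start()
```

A caller of fetch_async can go on to other work and collect the result later, for example by having the callback place the answer on a queue.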

kafka-integration
Architecture of an application using VitalService API Client with an underlying asynchronous implementation using a Kafka cluster and set of Workers to process messages.

While we have often combined VitalService with realtime messaging systems in Big Data applications for fully “reactive” applications ( http://www.reactivemanifesto.org/ ), this deeper integration enables a much simpler realtime application implementation and a seamless flow across synchronous and asynchronous software components.

So, the advantage in the 0.2.304 release is a simplification and streamlining of development processes via a unification of APIs — resulting in fewer lines of code, quicker development, fewer bugs, lower cost, and less technical debt.

Using the updated API, a developer works with a single API and chooses which calls to be synchronous or asynchronous based on the parameters of the API call.

For our messaging implementation we use Kafka ( http://kafka.apache.org/ ); however, we could also use alternatives, such as Amazon Kinesis ( https://aws.amazon.com/kinesis/ ).

In the above diagram, we have an application using the VitalService API client.  The application may be processing messages from a user, requesting realtime predictions from a predictive model, querying a database, or any other VitalService API function.

This VitalService API client is using Vital Prime as the underlying implementation.  See: https://console.vital.ai/productdetails/vital-prime-021 and http://www.vital.ai/tech.html for more information about Vital Prime.

Vital Prime acts as both a REST server as well as a message producer/consumer (publisher/subscriber).

When the VitalService API client authenticates with Prime, it internally learns the details of the distributed messaging cluster (Kafka cluster), including the connection details and the current set of “topics” (essentially, queues) with their statuses.  Prime coordinates with the Kafka cluster using Zookeeper ( https://zookeeper.apache.org/ ) to track the available brokers and the status of topics.  The VitalService API client can then seamlessly direct incoming API calls into messages, and direct incoming messages to callback functions.

Thus, an API call like a Query has a synchronous version and an asynchronous version which is the same except for the callback function parameter (the callback function can be empty for “fire-and-forget” API calls).  If the synchronous version is used, a blocking REST call is made to Prime to fulfill the request.  If the asynchronous version is used, the call is directed into the messaging system.
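
In pseudo-form, that unified call pattern might look like the following sketch (Python for illustration; the class and method names are hypothetical, and the real VitalService API is JVM-based):

```python
class VitalServiceSketch:
    # Hypothetical sketch: the same query() call is synchronous unless a
    # callback is supplied, in which case the request is queued for a worker.
    def __init__(self):
        self.outbox = []  # stand-in for a Kafka topic

    def query(self, q, callback=None):
        if callback is None:
            return self._rest_call(q)      # blocking REST call to Prime
        self.outbox.append((q, callback))  # enqueue; a worker answers later
        return None

    def _rest_call(self, q):
        # Stand-in for the actual request fulfillment.
        return {"query": q, "results": []}

    def drain(self):
        # Worker loop: consume queued requests and fire their callbacks.
        while self.outbox:
            q, callback = self.outbox.pop(0)
            callback(self._rest_call(q))
```

Passing an empty (no-op) callback gives the “fire-and-forget” behavior mentioned above.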

In our example application, we have three pictured “workers” which are processing messages consumed from Kafka, coordinating with Prime as needed.  By this method, work can be distributed across the cluster according to whatever scale is needed.  Such workers can be implemented with instances of Prime, a Spark Cluster ( http://spark.apache.org/ with http://aspen.vital.ai/ ), or any scalable compute service such as Amazon’s Lambda ( https://aws.amazon.com/lambda/ ).

This deeper integration with realtime messaging systems, especially Kafka, has been spurred on by the upcoming release of our Haley AI Assistant service, which can be used to implement AI Assistants for different domains, as well as related “chatbot” services.

More information on Haley AI to come!


Welcome Xin Yee, our new intern from Singapore!

The Vital AI team is excited to welcome Xin Yee Wong as a new Sales and Marketing intern!

Xin Yee Wong

Xin Yee is a marketing major at the National University of Singapore and is participating in the NUS Enterprise program, helping to train Singapore’s next generation of entrepreneurs.  An important part of NUS Enterprise is the NUS Overseas Colleges Programme, which arranges for internships at innovative start-up companies around the world.  We at Vital AI are proud to participate and have Xin Yee join us for the next year!

Xin Yee will be focused on interesting and effective marketing ideas for Vital AI to increase customer outreach so that more people can benefit from Vital AI’s software and services.  Of particular focus will be Vital AI’s upcoming launch of the Haley AI assistant service.

In addition to her sales and marketing activities, Xin Yee, as a budding entrepreneur, will be assisting with Vital AI’s strategic business goals including fundraising, product development, and partnerships.  She will also be taking over Vital AI’s social media outreach efforts.  So do keep a lookout on Vital AI’s social media channels for Xin Yee, and say hello!

Once more, we welcome Xin Yee to the United States, to New York City, and to Vital AI!


Updating the “If” statement in the JVM for Truth Value Logic

The Vital Development Kit (VDK) provides development tools and an API for Artificial Intelligence (AI) applications and data processing.  This includes a “Domain Specific Language” (DSL) for working with data.

To this DSL we’ve recently added an extension to the venerable “If” statement in the JVM to handle Truth Values.  (For Truth Values, see: Beyond True and False: Introducing Truth to the JVM.)

The “If” statement is the workhorse of computer programming.  If this, do that.  If something is so, then do some action.  The “If” statement evaluates if some condition is “True”, and if so, takes some action.  If the condition is “False”, then it may take some other action.

The condition of an “If” statement yields a Boolean True or False and typically involves tests of variables, such as:  height > 72, speed < 50, name == “John”.

The “If” statement is a special case of the “switch” statement, such that:

if(name == "John") { do something }
else { do something different }

is the same as:

switch (name == "John") {
  case true: { do something; break }
  case false: { do something different; break }
}

In the VDK we have an extension of Boolean in the JVM called Truth.  Truth may take four values: YES, NO, UNKNOWN, or MU, compared to the Boolean TRUE or FALSE.  YES and NO are the familiar TRUE and FALSE, with UNKNOWN providing a value for when a condition cannot be determined because of unknown inputs, and MU providing a value for when a condition is unknowable because it contains a false premise.

For example, for UNKNOWN, the color of a traffic light might be red, green, or yellow but its value is currently UNKNOWN.  And for MU, the favorite color of a traffic light is MU because inanimate objects don’t have favorite colors.

UNKNOWN and MU extend the familiar Boolean truth tables.  For instance, True AND True yields True, whereas True AND Unknown yields Unknown.
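
One plausible rendering of the extended AND table is sketched below (Python for illustration; the precedence of MU over NO here is our assumption, not necessarily the VDK’s exact table):

```python
from enum import Enum

class Truth(Enum):
    YES = "YES"
    NO = "NO"
    UNKNOWN = "UNKNOWN"
    MU = "MU"

def truth_and(a, b):
    # Four-valued AND. MU (a false premise) poisons the result; otherwise
    # this follows Kleene's three-valued logic: NO dominates, then UNKNOWN.
    if Truth.MU in (a, b):
        return Truth.MU
    if Truth.NO in (a, b):
        return Truth.NO
    if Truth.UNKNOWN in (a, b):
        return Truth.UNKNOWN
    return Truth.YES
```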

Details of the Truth implementation in the JVM can be found in the blog post: Beyond True and False: Introducing Truth to the JVM

Because Truth has four values, we need a way to handle four cases when we test a condition.

As above, we could use a “switch” statement like so:

switch (Truth Condition) {
  case YES: { handle YES; break }
  case NO: { handle NO; break }
  case UNKNOWN: { handle UNKNOWN; break }
  case MU: { handle MU; break }
}

This is a little verbose, so we’ve introduced a friendlier statement: consider.

consider (Truth Condition) {
  YES: { handle YES }
  NO: { handle NO }
  UNKNOWN: { handle UNKNOWN }
  MU: { handle MU }
}

So we can have code like:

consider (trafficlight.color == GREEN) {
  YES: { car.drive() }
  NO: { car.stop() }
  UNKNOWN: { car.stop() } // better be safe and look around
  MU: { car.stop(); runDiagnostics(); } // error! error!
}

In the above code, if evaluating our truth condition results in an UNKNOWN value (perhaps a sensor cannot “see” the traffic light), we can take some safe action.  If we get a MU value, then we have some significant error, such as the “trafficlight” object not actually being a traffic light (perhaps some variable mixup).  We can also take some defensive measures in this case.

We can also stick with using “If” and use exceptions for the cases of UNKNOWN and MU:

try {
  if (trafficlight.color == GREEN) { car.drive() }
  else { car.stop() }
} catch (Exception ex) {
  // handle UNKNOWN and MU
}

This works because Truth values are coerced to Boolean True or False for the cases of YES or NO.  This coercion throws an exception for the cases of UNKNOWN or MU.  JVM exceptions are a bit ugly and should not be used for normal program control flow (exceptions as flow control is often an anti-pattern), so the consider statement is much preferred.

The logic of Truth is very helpful in defining Rules to process realtime dynamic data, and answer dynamic data queries.  The consider statement allows such rules to be quite succinct and explicitly handle unknown data or queries with non-applicable conditions.

For instance, if we query an API for the status of traffic lights and ask how many are currently yellow, we might get back a reply of 0 (zero).  We might wonder: are there really zero yellow traffic lights presently, or is the API not functioning and always returning zero, or does it simply not track yellow lights?  It would be better to get a reply of UNKNOWN if the API was not functioning.  If we asked how many traffic lights were displaying Purple, a reply of zero would be correct, but it would be better to get a reply of MU: there is no such thing as a Purple traffic light in a world of Red/Yellow/Green lights.

As AI and data-driven applications incorporate more dynamic data models and data sources, instances of missing or incorrect knowledge are more the rule rather than the exception, so the software flow should treat these as normal cases to consider rather than exceptions.

Hope you have enjoyed learning about how the Vital Development Kit has extended “If” to handle Truth Values.  Please post any questions or comments below, or get in touch with us at info@vital.ai.