FinTech Artificial Intelligence SaaS Startup seeking Senior Java/JVM Developer to join the team!

Seeking: Senior Java/JVM Software Engineer, with Operations Experience 

The company provides an Insurance Supply Chain/Marketplace driven by Artificial Intelligence.  Our infrastructure processes and generates realtime messages via our A.I. workflow engine to facilitate transactions across our Marketplace.  We work with all participants in the insurance industry, from agents to reinsurers to the insured customers.

We are immediately seeking a Senior Java/JVM Developer (5+ years of experience) to implement new features, help improve and scale our SaaS infrastructure, and help integrate our Marketplace suppliers (such as insurance companies).  Required experience: streaming applications (WebSockets, Kafka); XML; security; API integrations; database queries, transactions, and infrastructure; and monitoring and managing AWS infrastructure for scaling, performance, and uptime.  There is some overlap with operations (“DevOps”), but this is primarily a software development role.  The primary development environment is on the JVM using Java, Scala, and Groovy.  Experience with Apache Spark and Tensorflow is a plus, as is experience with Web technologies (NodeJS, VertX).

Location is Lower Manhattan.

Salary, Benefits, Equity commensurate with experience.  No recruiters please.  The company is a joint venture of Vital AI and Diversified Risk.





Haley AI-as-a-Service and Tensorflow

There is a lot of excitement in the Machine Learning world around Deep Learning and Neural Networks, and one of the most popular libraries available for creating Deep Learning models is Tensorflow.

What follows is an example of using a Tensorflow model within a Haley AI-as-a-Service dialog.

A few quick notes about Haley AI-as-a-Service:

Haley provides a software platform for Artificial Intelligence Agents, which enables automation of business processes — such as chatting with a customer service agent, reacting to Internet-of-Things data to control devices, or classifying loan applications for risk assessment.  Haley integrates with a number of endpoints, which are the sources and destinations of Haley messages.  These endpoints include email, web applications, mobile applications, Facebook, Twitter, Slack, SMS, IoT devices, Amazon Alexa, and others.  Communication with an endpoint is over a channel, which groups messages together, such as a channel #sales-team for communication among members of a sales team.  AI Agents, called bots, receive and send messages on channels, and bots use dialogs — a series of steps to handle events — and workflows — business processes that may be composed of many dialogs — to accomplish tasks.
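The relationships above can be sketched in a few lines of Python.  This is a conceptual illustration only; the class and method names are invented for the sketch and are not the actual Haley API.  Endpoints deliver messages onto channels, and a bot attached to a channel handles each message by running the steps of a dialog over a set of facts.

```python
# Conceptual sketch of endpoints/channels/bots/dialogs.
# All names here are illustrative, not the Haley API.

class Message:
    def __init__(self, channel, text):
        self.channel = channel
        self.text = text

class Bot:
    """A bot runs a dialog (a series of steps) for each message."""
    def __init__(self, name, dialog):
        self.name = name
        self.dialog = dialog  # list of step functions operating on facts

    def handle(self, message):
        # Each step reads and writes "facts" accumulated during the dialog.
        facts = {"textFact": message.text}
        for step in self.dialog:
            step(facts)
        return facts

class Channel:
    """A channel groups messages and routes them to its bots."""
    def __init__(self, name):
        self.name = name
        self.bots = []

    def send(self, text):
        msg = Message(self, text)
        return [bot.handle(msg) for bot in self.bots]

sales = Channel("#sales-team")
echo_bot = Bot("echo", [lambda f: f.update(reply=f["textFact"].upper())])
sales.bots.append(echo_bot)
results = sales.send("hello team")
```

In the real platform the dialog steps are configured visually (as shown later in this post) rather than written as functions, but the flow of message to channel to bot to dialog steps is the same.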

Haley AI-as-a-Service and ML Models:

Haley AI-as-a-Service supports a variety of machine learning models, including those from Apache Spark MLlib and Tensorflow.  We also have support for libraries such as Keras used with Tensorflow and BigDL from Intel used with Spark.  Based on customer demand, we continue to add support for others.

In this example, first, we’ll create a model to classify text.  Then, we’ll use this model in a dialog to classify the text that occurs in a channel.  To see the results, we’ll use a chat web application to communicate on the channel.  This classification of the text into a topic could be used in the dialog to react to what a person is saying, but in our short example we’ll just report back the topic.  So, if someone says something like “My Ford needs an oil change”, we want to classify this as being about “Cars” and respond, “Your message appears to be about Cars.”

Creating the Tensorflow Model

For the Tensorflow model, we’ll be using the Keras Deep Learning Library running on the Tensorflow backend.

There is a great tutorial which covers creating a text classification model using word2vec-style word embeddings, specifically the GloVe word embeddings from Stanford.  The tutorial uses the 20news dataset, which consists of around 20,000 USENET postings evenly divided into 20 categories.

The categories include USENET topics such as rec.autos, rec.motorcycles, and rec.sport.baseball (among the 20 in the dataset).
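The heart of the tutorial's preprocessing is the embedding lookup: each known word's pretrained GloVe vector is copied into the row of an embedding matrix indexed by that word's token id.  Here is a dependency-free sketch of that step, with a tiny fake in-memory "GloVe" file standing in for glove.6B.100d.txt and plain lists in place of the numpy arrays the real tutorial uses:

```python
import io

# A tiny fake "GloVe" file; the real tutorial parses
# glove.6B.100d.txt line-by-line in exactly this way.
fake_glove = io.StringIO(
    "the 0.1 0.2 0.3\n"
    "ball 0.4 0.5 0.6\n"
    "engine 0.7 0.8 0.9\n"
)

# word -> pretrained vector
embeddings_index = {}
for line in fake_glove:
    parts = line.split()
    embeddings_index[parts[0]] = [float(v) for v in parts[1:]]

# word -> token id, as a Keras Tokenizer would produce
word_index = {"the": 1, "ball": 2, "oil": 3}

EMBEDDING_DIM = 3
# Row i of the matrix is the vector for token id i; row 0 is reserved.
embedding_matrix = [[0.0] * EMBEDDING_DIM for _ in range(len(word_index) + 1)]
for word, i in word_index.items():
    vector = embeddings_index.get(word)
    if vector is not None:
        # Words without a pretrained vector stay all-zeros.
        embedding_matrix[i] = vector
```

The resulting matrix is what initializes the Keras `embedding_layer` used in the training code below, so the convolutional layers start from meaningful word vectors rather than random ones.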

The complete code for training the model can be found here:

The critical training part of the code is:

print('Training model.')

# train a 1D convnet with global maxpooling
sequence_input = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32')
embedded_sequences = embedding_layer(sequence_input)
x = Conv1D(128, 5, activation='relu')(embedded_sequences)
x = MaxPooling1D(5)(x)
x = Conv1D(128, 5, activation='relu')(x)
x = MaxPooling1D(5)(x)
x = Conv1D(128, 5, activation='relu')(x)
x = MaxPooling1D(35)(x)
x = Flatten()(x)
x = Dense(128, activation='relu')(x)
preds = Dense(len(labels_index), activation='softmax')(x)

model = Model(sequence_input, preds)
model.compile(loss='categorical_crossentropy',
              optimizer='rmsprop',
              metrics=['acc'])

model.fit(x_train, y_train,
          batch_size=128,
          epochs=10,
          validation_data=(x_val, y_val))

We can use Jupyter to run this to create the model, and then save the model to a file.

In a production application, the model would be trained within Haley, with new models swapped in on an ongoing basis, but in this example we are uploading the trained model file from a local machine.

Here’s a screenshot from within Jupyter:


Note we’re saving the model to a file at the end of the training.  We’ve also turned on logging for Tensorboard in the above screenshot.

The tutorial reports an accuracy of around 95%.

Once we have our trained model file we upload it to the Haley Admin Dashboard and deploy it.  Now we’re ready to call it from a dialog.

Creating the Dialog


The above screenshot is the Haley Dialog Designer tool, which is a visual drag-and-drop interface to create dialogs.  We drag-and-drop and configure a handful of steps to create the dialog.  The important ones are:



This step in the dialog gets a text message on the channel and puts it into a fact variable called textFact.



This step in the dialog (shown selected, with its Configure panel on the right) calls the Tensorflow model, passing in the parameter textFact to be classified by the model and putting the results into the variable classifyResults.





This step in the dialog sends a message out on the channel, reporting back the classification using the classifyResults fact.



For reporting back the classification, we take the top-most category and its score and send the message:  “That appears to be about: $category with a score of $score”.  For diagnostic purposes we also send the full list of results back in a JSON list.
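The reply formatting described above can be sketched as follows.  The variable names and category labels here are illustrative, not the actual dialog internals; a softmax classifier's output is modeled as a list of (category, score) pairs.

```python
import json

# classifyResults as a list of (category, score) pairs,
# as a softmax classifier might return (illustrative values).
classify_results = [
    ("rec.autos", 0.8132),
    ("rec.motorcycles", 0.0911),
    ("misc.forsale", 0.0457),
]

def format_reply(results):
    # Take the top-scoring category and fill in the message template.
    category, score = max(results, key=lambda pair: pair[1])
    reply = "That appears to be about: %s with a score of %s" % (category, score)
    # Full result list as JSON, for diagnostics.
    diagnostics = json.dumps(
        [{"category": c, "score": s} for c, s in results]
    )
    return reply, diagnostics

reply, diagnostics = format_reply(classify_results)
```

In the dialog itself this is done by a message step with the template "That appears to be about: $category with a score of $score", but the logic is the same: pick the top category, then attach the full scored list for diagnostics.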

Once we’ve created the dialog, we then need to connect it to a bot and a channel.

Haley Admin Dashboard, Bot Screen:


Here in the Haley Admin Dashboard, we create a new bot that just contains our new dialog, and set the dialog as the default, so it is the default action for messages that the bot receives.

Haley Admin Dashboard, Channel Screen:


And here in the dashboard we connect the bot up to the channel “docclassify”.  Now, any user who has access to that channel over an endpoint, such as in a web application, can send messages on the channel and access our new classifying bot.


Using the Tensorflow Model in a Chat Interface


Now, by logging into a web application connected to Haley we can see the available channels on the left, select the “docclassify” channel, and send a message like:

Person: “sam hit the ball over the fence for a homerun”

and we get our answer back:

Haley: “That appears to be about: baseball with a score of 0.7076626”

We also send the complete classification and score list for diagnostics:


Based on the score, the “baseball” category is the clear winner, with a score of 0.70 compared to the next best score of 0.07 for “motorcycles”, so the model is roughly 70% “sure” that the correct answer is “baseball”.

Using other Tensorflow & ML Models on Haley AI-as-a-Service

In this example, we’ve created a new model, trained it, and uploaded it to Haley.

If you would like to incorporate ML models into Haley AIaaS, there are a few options:

  • You create the model, train it, deploy it on Haley AIaaS as we have done in this example
  • Vital AI creates the model, trains it, and/or deploys it, for you to use
  • Use an “off the shelf” model that Haley already uses or one taken from open sources, potentially training it with your data

Additionally, the training of the models can take place on our infrastructure — this is particularly useful for ongoing training scenarios where a data pipeline periodically re-trains the model to incorporate new data.  Alternatively, an external vendor such as Databricks or Google could be used for this training, with some additional data coordination to share the training data.  To reduce latency, it’s usually best that the “inference” step (using the model to make a prediction) is integrated as closely as possible, so it usually resides within Haley AIaaS, although there can always be exceptional cases.


In this example, we have:

  • Trained a text classification model using Tensorflow and Jupyter
  • Uploaded the model and deployed it using the Haley Admin Dashboard
  • Using the Visual Designer, created a dialog that uses the model to classify incoming text messages
  • Added dialog steps to generate response messages based on the classification, and connected the dialog to a bot, and connected the bot to a channel
  • Used a web application logged in to Haley to send messages on the channel and receive replies

This example can be extended in many ways, including:

  • Connect to other endpoints besides a web application, such as classifying Tweets, Facebook Messages, Emails, SMS Messages, and others
  • Use a Tensorflow model to process different types of messages, such as those from IoT devices or images
  • Use a generative Tensorflow model that creates a response to an input rather than classifying the input.  Such models can generate text, audio, images, or actions — such as a proactive step to prevent fraud
  • Add Tensorflow models to workflows to incorporate them into business processes, such as processing insurance claims

If you would like to incorporate Tensorflow or other ML Models into Haley AIaaS, you could create the model, we at Vital AI could create it for you, or an off the shelf model could be used.

To train the model, either you could train it, we could train it on our infrastructure, or a third party vendor could be used — such as Google’s Cloud ML Engine for Tensorflow.

I hope you have enjoyed learning about how the Haley AI-as-a-Service platform can utilize Tensorflow models.  Please contact us to learn more!

Vote for Haley AI-as-a-Service to speak at Botscamp

Voting is open for one more day at Botscamp to select speakers and we’re in the running!

Please check out our short video below pitching our presentation, and please vote for us to see the full presentation about Haley AI-as-a-Service online later this month at Botscamp!

Our presentation will cover the Haley AI-as-a-Service platform providing A.I. automation for business tasks.

To learn more and vote, you can go to:

The main Botscamp website can be found at



Adventuring with a Facebook Messenger Bot and Haley AI-as-a-Service

For some tech retro fun we recently published the classic text adventure game Colossal Cave Adventure (circa 1977) as a Facebook Messenger Bot running on the Haley AI-as-a-Service platform.

In this post I’ll describe how it works.

But first, do some adventuring!

Facebook Page:

Messenger Link:

Here’s a screenshot of the beginning of the game:


Via the Haley AI dashboard, we can set up a Bot, connect it to a Facebook app (this is what we call an “Endpoint”), and connect the Bot to dialogs to process incoming messages and generate outgoing messages.  The dashboard also provides user management screens, analytics, data management, prediction models (via machine learning), and other functionality.

The heart of the Adventure Bot is a dialog, composed with the Haley Dialog Designer, which is a visual drag-and-drop tool to create dialogs:


Pictured above is the dialog for Adventure with a “ChatRule” step selected.  This step waits for a message from the adventurer, like “go north” or “kill dragon”.

Here’s a few details about the important steps in the Adventure dialog:



The ChatRule step collects a text message and processes it into the “intent” of the message, turning text into structured data.




The Assign step assigns a value to a “fact”.   Here we get the output of the game based on the input message, and assign it into a fact.




Using the Message step, we send the output of the game back to the user.




We use the Loop step to loop back to the ChatRule step, and wait for the next message.


For the game implementation, we used a port of the game for Inform7 (actually Inform6 code compiled using Inform7).  Inform7 is a wonderful interactive fiction design tool, which can be found here:

To give you a sense of what the game code looks like, here’s a snippet about a location:

Room In_Hall_Of_Mt_King "Hall of the Mountain King"
  with name 'hall' 'of' 'mountain' 'king',
  "You are in the hall of the mountain king, with passages off in all directions.",
  cant_go "Well, perhaps not quite all directions.",
  u_to In_Hall_Of_Mists,
  e_to In_Hall_Of_Mists,
  n_to Low_N_S_Passage,
  s_to In_South_Side_Chamber,
  w_to In_West_Side_Chamber,
  sw_to In_Secret_E_W_Canyon,
  before [;
    if (Snake in self && (noun == n_obj or s_obj or w_obj ||
        (noun == sw_obj && random(100) <= 35)))
      "You can't get by the snake.";

And here is a snippet about an object:

Object -> Snake "snake"
  with name 'snake' 'cobra' 'asp' 'huge' 'fierce' 'green' 'ferocious'
       'venemous' 'venomous' 'large' 'big' 'killer',
  description "I wouldn't mess with it if I were you.",
  initial "A huge green fierce snake bars the way!",
  life [;
    Order, Ask, Answer:
      "Hiss!";
    Throw:
      if (noun == axe) <<Attack self>>;
      <<Give noun self>>;
    Give:
      if (noun == little_bird) {
        remove little_bird;
        "The snake has now devoured your bird.";
      }
      "There's nothing here it wants to eat (except perhaps you).";
    Attack:
      "Attacking the snake both doesn't work and is very dangerous.";
    Eat:
      deadflag = 1;
      "It takes you instead. Glrp!";
  ],
  has animate;

Sorry, spoiler!  There is a snake in the game.

Inform7 has a different, more natural language based syntax.  If you are interested, here is a screencast about the syntax and the editor:

The game compiler produces a game “binary” for the Glulx virtual machine (I didn’t know what that was either).  Fortunately there is a Glulx interpreter for Java available, so after some edits to the interpreter to make it more easily embeddable, we are able to use the interpreter and the game “binary” within Haley.

Haley has a number of different types of facts to hold strings, numbers, dates, lists, et cetera — and fortunately this includes a fact-type for a “Java Object”, so we can use an Assign step (see above) to set up the interpreter for Adventure, and hold the game state in a JavaObject fact associated with the player.
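Conceptually, the pattern looks like the sketch below: an opaque interpreter object is held as a per-player fact, so each player's game state survives between messages on the channel.  This is an illustrative sketch only; the class and function names are invented, and a trivial stand-in replaces the embedded Glulx interpreter.

```python
class FakeInterpreter:
    """Stands in for the embedded Glulx interpreter holding game state."""
    def __init__(self):
        self.moves = 0

    def step(self, command):
        # Feed a player command to the "game" and return its output.
        self.moves += 1
        return "You %s. (move %d)" % (command, self.moves)

# Facts keyed by player id; the interpreter is stored the way a
# "JavaObject" fact holds an opaque object in Haley.
player_facts = {}

def handle_message(player_id, text):
    facts = player_facts.setdefault(player_id, {})
    # Assign step: create the interpreter on first contact.
    if "gameState" not in facts:
        facts["gameState"] = FakeInterpreter()
    # Pass the player's message to the game and capture the output.
    return facts["gameState"].step(text)

out1 = handle_message("alice", "go north")
out2 = handle_message("alice", "kill dragon")
out3 = handle_message("bob", "go north")
```

Because each player's fact map holds its own interpreter instance, two players adventuring on the same channel never see each other's game state.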

The nice thing about this implementation is that we can support any interactive fiction story/game via the same method.  Two obvious upgrades would be to use our more robust ChatRules text parser and to include media (images, sound, and video) in the messages.  It will be interesting to include more storytelling capabilities within Haley, as well as provide a platform for such experiences.  Please contact us if you would like to create such narrative experiences via the Haley platform!

I hope you’ve enjoyed learning about creating a Facebook Bot using the Haley AI-as-a-Service platform.

Please contact us to learn more about using Haley AI, and enjoy Adventuring!


Haley AI Dialog Demo Video

Here is a quick 3-minute video of some features of Haley AI, focusing on our visual dialog designer tool.

The video highlights:

  • Quickly creating a chatbot dialog using a visual design tool
  • Deploying the dialog in a web application or on Facebook
  • Using a dialog in a conversational e-commerce application
  • Using a form instead of a chat interface
  • Adding Haley AI to teams for collaboration


Hope you enjoyed the demonstration.

Please contact us today to learn about using Haley AI in your organization!


NY Tech Day ’17

Thank you to everyone who popped by our booth at Tech Day 2017!  We’re proud to have received overwhelming interest in our new AI platform, Haley AI!



We are excited to keep in touch with everyone we met yesterday!  Feel free to hit us up should you have any questions about Vital AI!



Marc introduced Haley AI to one of the many interested attendees.

And a 360-degree view from our booth!


It was a busy day but we sure had a blast!

For those who did not get a chance to speak with us yesterday, connect with us and we’d love to tell you more about Vital!

Once again, thank you to all the organizers for this amazing event!  And kudos to the 575 other innovative startups for a great job!

See you guys again at Tech Day in 2018!

Vital.AI @ NY TechDay on Tuesday, April 18th!


We’re excited for TechDay coming up next Tuesday! (April 18th)

Join 35,000 other tech lovers and check out the booths of 574 startups + 1 very special startup (us!).

When you stop by our booth, we can show you a sneak preview of our A.I. platform, including a visual designer tool to quickly build A.I. Agents for your business.

Here are the TechDay details:

Cost: Free!
When: April 18th, 10am to 5pm
Where: Pier 94 @ 711 12th Avenue, New York, NY
More details:

See you there!


Season’s Greetings from the team at Vital A.I.


It’s been an exciting time in the data and artificial intelligence world!

Global interest in A.I. has never been higher, and new algorithms, APIs, and applications are appearing on a near daily basis. We’ve had a busy year building artificial intelligence applications for our customers, as well as implementing our A.I. agent platform Haley A.I. to be released this upcoming year.

We have had the pleasure of collaborating with many of you throughout the year, and we are so excited for all we can do together next year. Here are a few milestones from the past year, and some things to look forward to next year!

To All, Happy Holidays and have a Happy, Healthy and Prosperous New Year!

— Marc Hadfield, Founder of Vital AI

Pre-release of Haley A.I.

During the year, we created the pre-release version of our Haley A.I. intelligent agent platform including chatbot functionality, recommendations & predictions via machine learning, dialog management, and integrations with numerous APIs and devices such as the Amazon Echo and the Jibo robot. Haley A.I. is now available to customers and partners to create intelligent agent applications.



Upcoming Launch of Insurance-aaS managed by Haley A.I. in early 2017

In collaboration with our Insurance Broker firm, we are proud to announce the launch of an Insurance-as-a-Service A.I. platform in early 2017! Our intelligent agent Haley A.I. will manage all interactions among insurance companies, customers, and brokers, with a focus on commercial insurance.


Upcoming Launch of Haley SaaS

In the new year, we are looking forward to the launch of our Haley A.I. intelligent agent platform. Using the Haley A.I. service, you can hire Haley to automate business processes with applications across industries like Financial Services, Healthcare, and IT Management.


Welcome our new intern, Xin Yee

In August, Xin Yee joined us from the National University of Singapore (NUS).  As the marketing intern at Vital.AI, she is driven by her passion for marketing and for promoting the power of artificial intelligence to the world.  She is excited to run marketing campaigns to promote Haley SDK and Haley SaaS in the new year!


Tech Day 2017

In the upcoming year, Vital A.I. will again be participating in one of the largest tech events in NYC – NY Tech Day.  With over 30,000 attendees and over 500 startups presenting, it will be an exciting day.  Mark your calendar for April 18th!  Stop by our booth to check out our software and meet Haley!

Best wishes from the team at Vital AI


VDK Release 0.2.304, VitalService ASync with Kafka

We released VDK 0.2.304 a couple weeks back, and it contains many new features.

One of the most significant is the asynchronous version of the VitalService API, which uses an underlying realtime distributed messaging system in its implementation.

Quick aside for some definitions: in synchronous systems, one component sends a request and waits for the answer.  This usually involves “blocking” — holding on to system resources while waiting.  In asynchronous systems, one component sends a request and moves on to other things, processing the answer when it arrives.  This is usually “non-blocking”, with no or limited resources held while a response is pending.  Applications usually combine both methods, as each has advantages and disadvantages.  Generally, asynchronous requests require more overhead but can scale up to large numbers, while synchronous requests can get individual answers much quicker.  Most modern websites use asynchronous communication with the server, whereas microservice calls are typically synchronous.
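As a minimal illustration of this distinction, the sketch below uses a Python thread pool; nothing here is VitalService-specific.  The synchronous call blocks until the result is ready, while the asynchronous call returns immediately and the answer is handled by a callback when it arrives.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def slow_query(q):
    time.sleep(0.05)  # pretend this is a network round trip
    return "result for " + q

pool = ThreadPoolExecutor(max_workers=2)

# Synchronous: block until the answer is available.
sync_result = slow_query("q1")

# Asynchronous: submit the request, keep working, and handle the
# answer in a callback when it arrives.
answers = []
future = pool.submit(slow_query, "q2")
future.add_done_callback(lambda f: answers.append(f.result()))

# ...other work would happen here while q2 is in flight...

pool.shutdown(wait=True)  # for the demo, wait for the callback to fire
```

The caller holding resources during `slow_query("q1")` is exactly the "blocking" cost described above, while the submitted future ties up nothing on the calling thread until the callback runs.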

Architecture of an application using VitalService API Client with an underlying asynchronous implementation using a Kafka cluster and set of Workers to process messages.

While we have often combined VitalService with realtime messaging systems in Big Data applications to build fully “reactive” applications, this deeper integration enables a much simpler realtime application implementation and a seamless flow across synchronous and asynchronous software components.

So, the advantage in the 0.2.304 release is a simplification and streamlining of development processes via a unification of APIs — resulting in fewer lines of code, quicker development, fewer bugs, lower cost, and less technical debt.

Using the updated API, a developer works with a single API and chooses which calls to be synchronous or asynchronous based on the parameters of the API call.

For our messaging implementation we use Kafka; however, we could also use alternatives, such as Amazon Kinesis.

In the above diagram, we have an application using the VitalService API client.  The application may be processing messages from a user, requesting realtime predictions from a predictive model, querying a database, or any other VitalService API function.

This VitalService API client is using Vital Prime as the underlying implementation.

Vital Prime acts as both a REST server as well as a message producer/consumer (publisher/subscriber).

When the VitalService API client authenticates with Prime, it internally learns the details of the distributed messaging cluster (the Kafka cluster), including the connection details and the current set of “topics” (essentially, queues) with their statuses.  Prime coordinates with the Kafka cluster using Zookeeper to track the available brokers and the status of topics.  The VitalService API client can then seamlessly turn incoming API calls into messages, and direct incoming messages to callback functions.

Thus, an API call like a query has a synchronous version and an asynchronous version that is identical except for the callback function parameter (the callback function can be empty for “fire-and-forget” API calls).  If the synchronous version is used, a blocking REST call is made to Prime to fulfill the request.  If the asynchronous version is used, the call is directed into the messaging system.
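The unified-API idea can be sketched as below.  The names and signatures here are invented for illustration and are not the actual VitalService API: the same `query` call is synchronous when no callback is given, and asynchronous (enqueued as a message) when a callback is supplied.

```python
import queue

class SketchService:
    """Illustrative stand-in for a client with sync and async paths."""

    def __init__(self):
        self.outgoing = queue.Queue()  # stands in for the Kafka topic

    def _rest_query(self, q):
        # Stands in for the blocking REST call to Prime.
        return "rows for " + q

    def query(self, q, callback=None):
        if callback is None:
            # Synchronous path: blocking REST call, return the answer.
            return self._rest_query(q)
        # Asynchronous path: enqueue the request as a message.
        self.outgoing.put((q, callback))
        return None

    def drain(self):
        # Stands in for a worker consuming the topic and invoking
        # the registered callback with each answer.
        while not self.outgoing.empty():
            q, callback = self.outgoing.get()
            callback(self._rest_query(q))

service = SketchService()
sync_rows = service.query("select-1")

async_rows = []
service.query("select-2", callback=async_rows.append)
service.drain()
```

From the developer's point of view there is one API; only the presence of the callback parameter decides whether the request blocks or flows through the messaging system.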

In our example application, we have three pictured “workers” which process messages consumed from Kafka, coordinating with Prime as needed.  By this method, work can be distributed across the cluster at whatever scale is needed.  Such workers can be implemented with instances of Prime, a Spark cluster, or any scalable compute service such as AWS Lambda.
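The worker pattern can be sketched with a shared queue standing in for the Kafka topic.  This is illustrative only: several workers consume requests from the shared queue, so capacity scales simply by adding workers.

```python
import queue
import threading

topic = queue.Queue()          # stands in for a Kafka topic
results = []
results_lock = threading.Lock()

def worker():
    # Each worker consumes messages until it receives a shutdown signal.
    while True:
        msg = topic.get()
        if msg is None:        # shutdown signal
            break
        with results_lock:
            results.append("processed " + msg)
        topic.task_done()

workers = [threading.Thread(target=worker) for _ in range(3)]
for w in workers:
    w.start()

for i in range(9):
    topic.put("msg-%d" % i)

topic.join()                   # wait until every message is processed
for _ in workers:
    topic.put(None)            # stop the workers
for w in workers:
    w.join()
```

A real deployment gets the same effect from Kafka consumer groups: partitions of the topic are spread across however many worker processes are running, with no change to the producing code.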

This deeper integration with realtime messaging systems, especially Kafka, has been spurred on by the upcoming release of our Haley AI Assistant service, which can be used to implement AI Assistants for different domains, as well as related “chatbot” services.

More information on Haley AI to come!