AI hacking at the Jibo Hackathon

I am very fortunate to be among the first few members of the nascent Jibo developer community, which kicked off today at the first Jibo Hackathon.

The Hackathon was held on the MIT campus, where social robotics was born.

After getting our development environments set up, we got our hands on the Jibo simulator, the SDK, and of course the early Jibo robots.

The Jibo development environment will be familiar to any web application developer, with some added screens reminiscent of Disney cel animation.

We spent some time with some sample code and the simulator.

And then, with a simple shell command on my Mac, ‘jibo run’, my newly created skill (Jibo-speak for “app”) was deployed to my robot friend for the day, and Jibo came alive.

We got to experiment with a number of Jibo features: animating the Jibo body, voice recognition, natural language understanding, text-to-speech, dialogs, and face tracking.

My first skill was pretty simple, but touched on all the major features of the SDK, including snapping a photo, displaying it on the screen, and asking if I liked it.  Plus some Jibo dance moves.  I was in the process of connecting the image up to a deep learning image classification API, which mostly worked aside from my rusty JavaScript syntax, when we ran low on time and all happily retired to the local pub.

The ease of working with the simulator and SDK is worth emphasizing.  There is a certain magic in creating an arc of motion in the simulator, hitting the “Run” button, and having Jibo swing into motion.

Looking forward to the arrival of Jibo in early spring!  At Vital we’ll be honing our skills in the meantime.


Amazon Echo tells “Yo Mama…” jokes.

To experiment with the Amazon Echo API, we created a little app called “Funnybot”.

The details of the app can be found in the previous post here:

https://vitalai.com/2015/07/building-an-amazon-echo-app-with-vital-ai.html 


All the source code of the app can be found on GitHub here:

https://github.com/vital-ai/vital-examples/tree/master/amazon-echo-humor-app

The Vital AI components are available here: https://dashboard.vital.ai/

You may also notice a Raspberry Pi in the video; we’re in the midst of integrating the Echo with the Raspberry Pi for a home automation application.

Big Data Modeling at NoSQLNow! / Semantic Technology Conference, San Jose 2014

We had a wonderful time in San Jose last week at the NoSQLNow! / Semantic Technology Conference.

Many thanks to the organizers Tony Shaw, Eric Franzon, and the rest of the Dataversity team for putting on a great event!

My presentation on Thursday afternoon was “Big Data Modeling”.

The presentation is available below:

[Slides: “Vital AI: Big Data Modeling”, from Vital.AI]

Generating a Wordnet Dataset Using the Vital AI Development Kit

Part of a series beginning with:

https://vitalai.com/2014/04/29/data-visualization-with-vital-ai-wordnet-and-cytoscape/

To import a new dataset into Vital AI with the VDK, the first thing we need to do is add any classes and properties needed to represent the dataset to our data model.

In the case of Wordnet, which we like to use as an example, we have added classes and properties for it to the main Vital data model (vital.owl).

The main Node we’ve defined is the SynsetNode, as Wordnet uses “synset” objects to represent synonym-sets.  This node has subclasses for verbs, adjectives, adverbs, and nouns to cover those different types of words.

[Image: the SynsetNode class and its part-of-speech subclasses]
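
In the node-generation code below, a cls variable supplies the concrete synset class to instantiate.  A minimal sketch of choosing it per part-of-speech might look like this, assuming subclass names like NounSynsetNode (the actual class names are defined in vital.owl and may differ):

// Hypothetical sketch: select the SynsetNode subclass for a JWI part-of-speech.
// The subclass names here are assumptions; check vital.owl for the real ones.
import edu.mit.jwi.item.POS

Class<? extends SynsetNode> synsetClassForPOS(POS p) {
    switch(p) {
        case POS.NOUN:      return NounSynsetNode
        case POS.VERB:      return VerbSynsetNode
        case POS.ADJECTIVE: return AdjectiveSynsetNode
        case POS.ADVERB:    return AdverbSynsetNode
    }
}

The node-creation loop below would then call synsetClassForPOS(p) to obtain cls.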

To connect the Wordnet SynsetNodes together, we represent the various Wordnet relationship types as Edges (there are a bunch).  Two such relationships are HyperNym and HypoNym, sometimes called the type-of or is-a relationships, as in Tiger/Animal or Red/Color.

More information about HyperNyms and HypoNyms is available via Wikipedia here:  http://en.wikipedia.org/wiki/Hyponymy_and_hypernymy.

[Image: Wordnet relationship types, such as HyperNym and HypoNym, modeled as Edges between SynsetNodes]

The current version of the Vital AI ontologies is available on GitHub here: https://github.com/vital-ai/vital-ontology/tree/rel-0.1.0

Now that we have our data model ready, we can generate a dataset.

There is an open-source Java API for accessing the Wordnet dictionary files, JWI, available from: http://projects.csail.mit.edu/jwi/
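
In the snippets that follow, _dict refers to an open JWI dictionary.  A minimal sketch of setting it up, assuming a local Wordnet install (adjust the path for your system):

// Open the Wordnet dictionary files with JWI.
// The install path is an assumption; point it at your local Wordnet "dict" directory.
import edu.mit.jwi.Dictionary
import edu.mit.jwi.IDictionary

def path = "/usr/local/WordNet-3.0/dict"
IDictionary _dict = new Dictionary(new URL("file", null, path))
_dict.open()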

We can use this API to help generate our dataset with code like this to create all our nodes:

// _dict is the JWI dictionary opened above; cls is the SynsetNode subclass
// for the current part-of-speech; writer is the block-format dataset writer.
// (JWI types POS, ISynset, ISynsetID, IWord come from edu.mit.jwi.item.)
for(POS p : POS.values()) {

    for( Iterator<ISynset> synsetIterator = _dict.getSynsetIterator(p);
         synsetIterator.hasNext(); ) {

        ISynset next = synsetIterator.next()

        // short definition of the synset
        String gloss = next.getGloss()

        // the words making up the synonym-set
        List<IWord> words = next.getWords()
        String word_string = words.toString()

        // part-of-speech tag + synset offset, e.g. "n_2084071"
        String idPart = "${next.getPOS().getTag()}_${((ISynsetID)next.getID()).getOffset()}"

        SynsetNode sn = cls.newInstance()

        sn.URI = URIGenerator.generateURI("wordnet", cls)
        sn.name = word_string
        sn.gloss = gloss
        sn.wordnetID = idPart

        // write the node in the Vital "block" dataset format
        writer.startBlock()
        writer.writeGraphObject(sn)
        writer.endBlock()
    }
}

This iterates over the parts-of-speech, iterates over the synonym-sets (“concepts”) in each part-of-speech, collects the words associated with each synonym-set, and adds a new SynsetNode for each synonym-set, setting a URI (unique identifier), the set of words, the gloss (short definition), and the Wordnet identifier.
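
To make the output concrete, here are illustrative values for one noun synset, Wordnet 3.0’s “dog” (the URI is omitted since its exact form depends on URIGenerator):

// Illustrative values for the Wordnet 3.0 noun synset "dog" (offset 2084071):
//   sn.wordnetID == "n_2084071"
//   sn.gloss     == "a member of the genus Canis (probably descended from the
//                   common wolf) that has been domesticated by man since
//                   prehistoric times; occurs in many breeds"
//   sn.name      holds the string form of the synset's word list; the lemmas
//                here are dog, domestic_dog, and Canis_familiaris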

Code like this creates all our edges:

// synsetWords maps each synset ID to the URI of the SynsetNode created above;
// cls here is the concrete Edge class for the relationship (pointer) type.
// (Entry is java.util.Map.Entry; IPointer, ISynsetID come from edu.mit.jwi.item.)
for(POS p : POS.values()) {

    for( Iterator<ISynset> synsetIterator = _dict.getSynsetIterator(p);
         synsetIterator.hasNext(); ) {

        ISynset key = synsetIterator.next()

        // URI of the source node created in the first pass
        String uri = synsetWords.get(key.getID())

        // all relationships from this synset, grouped by pointer type
        for( Entry<IPointer, List<ISynsetID>> next2 : key.getRelatedMap().entrySet() ) {

            // the relationship type determines the concrete Edge class
            // (see the sketch below)
            IPointer type = next2.getKey()
            List<ISynsetID> l = next2.getValue()

            for(ISynsetID id : l) {

                // URI of the target node
                String destURI = synsetWords.get(id)

                Edge_hasWordnetPointer newEdge = cls.newInstance()

                newEdge.URI = URIGenerator.generateURI("wordnet", cls)
                newEdge.sourceURI = uri
                newEdge.destinationURI = destURI

                // write the edge in the Vital "block" dataset format
                writer.startBlock()
                writer.writeGraphObject(newEdge)
                writer.endBlock()
            }
        }
    }
}

This iterates over the parts-of-speech, iterates over all the synsets, gets the set of relationships for each, and adds an Edge of a specific type, such as HyperNym or HypoNym, for each such relationship.
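
The cls in the edge loop stands in for that concrete Edge class.  A hypothetical sketch of selecting it from the JWI pointer type, assuming subclass names like Edge_hasHypernym (the real names live in vital.owl):

// Hypothetical sketch: map a JWI pointer type to an Edge class.
// The edge class names are assumptions; check vital.owl for the real ones.
import edu.mit.jwi.item.IPointer
import edu.mit.jwi.item.Pointer

Class<? extends Edge_hasWordnetPointer> edgeClassForPointer(IPointer type) {
    switch(type) {
        case Pointer.HYPERNYM: return Edge_hasHypernym
        case Pointer.HYPONYM:  return Edge_hasHyponym
        // ... one case per Wordnet relationship type (there are a bunch) ...
        default:               return Edge_hasWordnetPointer
    }
}

With a helper like this, the edge loop above could set cls = edgeClassForPointer(type) before instantiating each edge.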

With this we have all our Nodes and Edges written to a dataset file (see previous blog entries for our file “block” format).

We can then import the dataset file into a local or remote Vital Service endpoint instance.

Next Post: https://vitalai.com/2014/04/29/building-a-data-visualization-plugin-with-the-vital-ai-development-kit/