Updating the “If” statement in the JVM for Truth Value Logic

The Vital Development Kit (VDK) provides development tools and an API for Artificial Intelligence (AI) applications and data processing.  This includes a “Domain Specific Language” (DSL) for working with data.

To this DSL we’ve recently added an extension to the venerable “If” statement in the JVM to handle Truth Values (for background on Truth Values, see: Beyond True and False: Introducing Truth to the JVM).

The “If” statement is the workhorse of computer programming.  If this, do that.  If something is so, then do some action.  The “If” statement evaluates if some condition is “True”, and if so, takes some action.  If the condition is “False”, then it may take some other action.

The condition of an “If” statement yields a Boolean True or False and typically involves tests of variables, such as:  height > 72, speed < 50, name == “John”.

The “If” statement is a special case of the “switch” statement, such that:

if(name == "John") { do something }
else { do something different }

is the same as:

switch (name == "John") {
case true: { do something; break }
case false: { do something different; break }
}

In the VDK we have an extension of Boolean in the JVM called Truth.  Whereas Boolean is limited to TRUE or FALSE, Truth may take four values: YES, NO, UNKNOWN, or MU.  YES and NO are the familiar TRUE and FALSE, with UNKNOWN providing a value for when a condition cannot be determined because of unknown inputs, and MU providing a value for when a condition is unknowable because it contains a false premise.

For example, for UNKNOWN, the color of a traffic light might be red, green, or yellow but its value is currently UNKNOWN.  And for MU, the favorite color of a traffic light is MU because inanimate objects don’t have favorite colors.

UNKNOWN and MU extend the familiar Boolean truth tables.  For instance, True AND True yields True, whereas True AND Unknown yields Unknown.

Details of the Truth implementation in the JVM can be found in the blog post: Beyond True and False: Introducing Truth to the JVM

Because Truth has four values, we need a way to handle four cases when we test a condition.

As above, we could use a “switch” statement like so:

switch (Truth Condition) {
case YES: { handle YES; break }
case NO: { handle NO; break }
case UNKNOWN: { handle UNKNOWN; break }
case MU: { handle MU; break }
} 

This is a little verbose, so we’ve introduced a friendlier statement: consider.

consider (Truth Condition) {
YES: { handle YES }
NO: { handle NO }
UNKNOWN: { handle UNKNOWN }
MU: { handle MU }
} 

So we can have code like:

consider (trafficlight.color == GREEN) {
YES: { car.drive() }
NO: { car.stop() }
UNKNOWN: { car.stop() } // better be safe and look around
MU: { car.stop(); runDiagnostics(); } // error! error!
} 

In the above code, if evaluating our truth condition results in an UNKNOWN value (perhaps a sensor cannot “see” the Traffic Light), we can take some safe action.  If we get a MU value, then we have some significant error, such as the “trafficlight” object not actually being a traffic light — perhaps some variable mixup.  We can also take some defensive measures in this case.

We can also stick with using “If” and use exceptions for the cases of UNKNOWN and MU:

try {
if(trafficlight.color == GREEN) { car.drive() }
else { car.stop() }
} catch(Exception ex) {
// handle UNKNOWN and MU
}

This works because Truth values are coerced to Boolean True or False for the cases of YES or NO.  This coercion throws an exception for the cases of UNKNOWN or MU.  JVM exceptions are a bit ugly and should not be used for normal program control flow (exceptions as flow control is often an anti-pattern), so the consider statement is much preferred.
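
To make the coercion concrete, here is a minimal sketch (an illustration, not the actual VDK source) of a Truth type whose coercion to boolean succeeds only for YES and NO and throws otherwise:

// A minimal sketch, not the VDK implementation: a Truth enum whose
// coercion to boolean succeeds only for YES and NO.
enum Truth {
    YES, NO, UNKNOWN, MU;

    // Groovy calls asBoolean() when a value is used in a boolean context,
    // such as the condition of a plain "if" statement.
    boolean asBoolean() {
        if (this == YES) return true
        if (this == NO) return false
        throw new IllegalStateException("Cannot coerce ${this} to boolean")
    }
}

// YES and NO behave like true and false; UNKNOWN and MU surface as exceptions.
try {
    if (Truth.UNKNOWN) { println "drive" } else { println "stop" }
} catch (IllegalStateException ex) {
    println "handle UNKNOWN or MU: ${ex.message}"
}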

The logic of Truth is very helpful in defining rules that process real-time dynamic data and answer dynamic data queries.  The consider statement allows such rules to be quite succinct while explicitly handling unknown data or queries with non-applicable conditions.

For instance, if we query an API for the status of traffic lights and ask how many are currently yellow, we might get back a reply of 0 (zero).  We might wonder: are there really zero yellow traffic lights at the moment, is the API not functioning and always returning zero, or does it simply not track yellow lights?  It would be better to get a reply of UNKNOWN if the API was not functioning.  If we asked how many traffic lights were displaying Purple, a reply of zero would be correct, but a reply of MU would be better: there is no such thing as a Purple traffic light in a world of Red/Yellow/Green lights.

As AI and data-driven applications incorporate more dynamic data models and data sources, instances of missing or incorrect knowledge are more the rule rather than the exception, so the software flow should treat these as normal cases to consider rather than exceptions.

Hope you have enjoyed learning about how the Vital Development Kit has extended “If” to handle Truth Values.  Please post any questions or comments below, or get in touch with us at info@vital.ai.

Beyond True and False: Introducing Truth to the JVM

In this post we introduce a new type for “Truth” to the JVM in the Vital Development Kit (VDK), to include cases that don’t fit into Boolean True and False — in particular a value for Unknown and Mu (nonexistence).  We use this new Truth type in the VDK for logical and conditional expressions, especially in rules and inference.

Computer hardware has binary — ones and zeros.
Computer software has boolean — true and false.
This seems a perfect match, immutable, a Platonic Ideal.
But there are some wrinkles.

Programming languages deeply use booleans to control the flow of a program:

if this is true, then do that
if this is false, then do something else

What if the software doesn’t know a particular value at that moment?

What if the value doesn’t make sense in the current context?

For instance, in the code:

if(TrafficLight.color == RED) then { Stop() }
else { Go() }

What if the TrafficLight color is unknown?  The software would drive through the traffic intersection.  (Hope for the best!)

Or, what if we had code like:

if(Penguin.flightspeed < 10) then { ThrowAFish() }

Should this code work, even if penguins don’t have a flight speed?

In programming languages like Java, these problems lead to a lot of custom workarounds and error checking, so much so that you can no longer use “normal” boolean expressions without a bunch of checks to ensure they are “safe”.

In rule-based systems or logic languages, often “unknown” is treated as False — this is because a rule like isTall(John) tries to prove that John is tall, and if it can’t it returns “false” or “no”, meaning “I can not prove that”.

But, if the code re-uses that result like:

isShort(X) := NOT isTall(X)

then it is incorrectly combining the Is-Tall case with the I-Don’t-Know case, causing software errors — that is, if the height is unknown, then the person is short, which is quite a leap of logic to be sure.

Three Value Logic

This is not a new problem.  SQL tries to solve this with a third logic value called NULL to mean “UNKNOWN”.  A description of this is here: https://en.wikipedia.org/wiki/Null_(SQL)

Three value logic (3VL) for True/False/Unknown is well established, with a full description here: https://en.wikipedia.org/wiki/Three-valued_logic

Unfortunately 3VL is not built into languages like Java.  And, it gets worse.

Java has efficient low-level base types, like int for integers, and object-oriented classes, like Integer, which carry some additional Object overhead but are more “friendly”; there are ways to convert between Objects and base types.  So far, so good.

Any object (an instance of a class) may be set to the value “null”.  There is a Boolean class which may be set to True, False, or null, and there is the low-level type “boolean” which may only hold the values true and false.  So a Boolean object has a value (“null”) which cannot be represented in the base type boolean.  It is like having a base type for integer that you can’t set to zero.

(Mis)Using Null as a Value

In Java, “null” means uninitialized and is not a true distinct “value”.  Some code uses (abuses) “null” to mean “unknown”, but this means we can’t tell apart an uninitialized value from an initialized “unknown” value — similar to confusing “I-can’t-prove-it” with “false”.  Moreover, we can’t store “null” in the low-level boolean type anyway, which again can only be true or false.

“If that is okay, please give me absolutely no sign.”

The value of “null” also occurs in lots of error situations — the network connection failed, the database connection timed out, the password is incorrect, no memory is available, and on and on.

Using “null” as a value reminds me of this Simpsons line, with Homer interpreting the absence of a message (“null”) as a message.

So, the addition of “null” to Java’s Boolean doesn’t provide a way to represent “unknown” unambiguously, and it causes even more confusion with uninitialized objects and various error conditions.

A Value for Nonexistence

Besides the “unknown” case there is also the case above of: Penguin.flightspeed < 10.

Since we know that a penguin can’t have a flightspeed, this isn’t a case of “unknown.”  We could argue that this test of “less than 10” is “true”, since the absence of a speed is “0” and 0 is less than 10, but that requires embedding domain knowledge about how speeds work.  Is the temperature of a song absolute zero (-459.67F)?  Is the color of an integer black?  There isn’t a universal way to assign True or False when the question doesn’t make sense.

Mu

And so, we need a fourth value to handle the case of “nonexistence” or “non-applicable”.

We’ve chosen to use the symbol Mu for this, as it has popularly been used in this sense.  One notable example of its use is in Douglas R. Hofstadter’s wonderful book:

Gödel, Escher, Bach: An Eternal Golden Braid.  (http://www.amazon.com/G%C3%B6del-Escher-Bach-Eternal-Golden/dp/0465026567)

Some more details regarding Mu are in its Wikipedia article here: https://en.wikipedia.org/wiki/Mu_(negative)

 

Truth implementation in the VDK

In the VDK, we use the Groovy language for scripting on the JVM.  Truth was added as a new type to Groovy with the values: YES, NO, UNKNOWN, and MU.  We use YES and NO instead of True and False to avoid confusion with the existing Boolean values and reserved words.

We follow the truth tables as specified here for 3VL: https://en.wikipedia.org/wiki/Three-valued_logic#Kleene_and_Priest_logics

Some example logic statements using AND, OR, and NOT with Truth:

  • YES AND UNKNOWN := UNKNOWN
  • YES OR UNKNOWN := YES
  • NO OR UNKNOWN := UNKNOWN
  • NOT UNKNOWN := UNKNOWN

For Mu, any logic expression with Mu yields MU:

  • NOT MU := MU
  • YES OR MU := MU
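
Putting the tables together, a minimal sketch of the four values and their Kleene-style connectives, with MU absorbing everything (illustrative only, not the VDK source), might look like:

// A minimal sketch (not the VDK source) of Kleene-style AND/OR/NOT
// over the four values, with MU absorbing everything.
enum Truth {
    YES, NO, UNKNOWN, MU;

    Truth and(Truth other) {
        if (this == MU || other == MU) return MU
        if (this == NO || other == NO) return NO
        if (this == UNKNOWN || other == UNKNOWN) return UNKNOWN
        return YES
    }

    Truth or(Truth other) {
        if (this == MU || other == MU) return MU
        if (this == YES || other == YES) return YES
        if (this == UNKNOWN || other == UNKNOWN) return UNKNOWN
        return NO
    }

    Truth not() {
        if (this == MU) return MU
        if (this == UNKNOWN) return UNKNOWN
        return this == YES ? NO : YES
    }
}

// The example statements above, as checks:
assert Truth.YES.and(Truth.UNKNOWN) == Truth.UNKNOWN
assert Truth.YES.or(Truth.UNKNOWN)  == Truth.YES
assert Truth.NO.or(Truth.UNKNOWN)   == Truth.UNKNOWN
assert Truth.UNKNOWN.not()          == Truth.UNKNOWN
assert Truth.MU.not()               == Truth.MU
assert Truth.YES.or(Truth.MU)       == Truth.MU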

Using our new Truth definition, we can now write or generate code that handles the cases of Unknown or Mu without any ambiguity.

For example:


Truth isTall(Person p) {
    if (p.height == UNKNOWN) { return UNKNOWN }
    if (p.height > 72.0) { return YES }
    else { return NO }
}

Truth isShort(Person p) {
    if (p.height == UNKNOWN) { return UNKNOWN }
    if (p.height < 60.0) { return YES }
    else { return NO }
}

Truth averageHeight(Person p) {
    return !( isTall(p) || isShort(p) )
}

In the above code, the “averageHeight()” function returns a Truth value.  It is completely defined by NOT, OR, and the isTall() and isShort() functions, and it returns UNKNOWN if either of those functions returns UNKNOWN.  The NOT (!) now works properly, unlike the earlier rule-based definition, which mistakenly combined the “I can’t prove it” case with the “false” case.

Instead of a “if..then” statement in code, we can use a “switch” statement to handle YES, NO, UNKNOWN, and MU, like so:

switch( averageHeight(p) || isShort(p) ) {
    case YES:     println "YES";     break
    case NO:      println "NO";      break
    case UNKNOWN: println "UNKNOWN"; break
    case MU:      println "MU";      break
}

We’d like to add a little DSL syntactic sugar to the switch statement, so the above could be written a bit more succinctly, something along the lines of:

/* idea for improvement to Truth DSL */
if (averageHeight(p) || isShort(p)) {
    YES:     { println "YES" }
    NO:      { println "NO" }
    UNKNOWN: { println "UNKNOWN" }
    MU:      { println "MU" }
}

which would follow the pattern of if()…then()…else() but become if()…yes()…no()…unknown()…mu(), with any of the blocks optional.

Please comment if you like this proposed addition to the DSL, or have any suggestions!

Feedback

I hope you have enjoyed learning about how Truth is implemented in the VDK to provide a richer logic representation than the Boolean True/False.

Please post any comments or questions in the comment section, or contact Vital AI directly at info@vital.ai

Is it really equal? Introducing Semantic Equality to the JVM.

One of the most fundamental functions of a programming language is to decide if two things are “the same” or are “different”.

The determination of “sameness” can be quite tricky, and introduce subtle software errors or require a significant amount of code to check many cases.

As a simple example, imagine two separate database queries, one for all people with “John” in their name, and another for all people with “Smith” in their name — how to tell that a “John Smith” from the first query is the same as a “John Smith” in the second query, without custom code?

The Vital Development Kit (VDK) includes domain-specific language bindings (a DSL) for data comparison, inference, and manipulation.  Recently we introduced a new feature called Semantic Equality to the VDK.

With the VDK and Semantic Equality, Data Scientists and Developers can write less code, have fewer bugs, and more easily work with large amounts of diverse data.

Background

The VDK, using the VitalSigns component, manages the domain models of your application and generates code to interact with different types of databases, data predictive components, and user interfaces.  This makes it easy to combine different components into a unified application: such as NoSQL Databases, SQL Databases, Apache Spark, and JavaScript Web Applications.

Since the JVM compares objects “by reference” — the “reference” is a pointer to the bit of memory used to store the object — the following code will typically not be true if the objects were loaded or created at different times (like the “John Smith” objects mentioned before):

/* true only if they have the same memory reference */
if(object1 == object2) { }

To mitigate this, it’s common to write custom code to override the “equals” function in the JVM so that objects can be compared by their data values.  Frameworks such as Object-Relational Mapping tools often include generating such “equals” methods, but this only covers application to database interactions, and even more custom code needs to be written to incorporate other components like machine learning.
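
As a hypothetical illustration of that boilerplate (this Person class is made up for the example; it is not VDK code or ORM-generated code), a hand-written value-based equals might look like:

// Hypothetical example of the custom "equals" boilerplate described above;
// the Person class and its fields are illustrative only.
class Person {
    String name
    Date birthday

    @Override
    boolean equals(Object other) {
        if (!(other instanceof Person)) return false
        Person p = (Person) other
        return name == p.name && birthday == p.birthday
    }

    @Override
    int hashCode() {
        return Objects.hash(name, birthday)
    }
}

Every class that needs value comparison repeats this pattern, and it says nothing about whether the two fields being compared actually mean the same thing.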

The VDK takes a more general approach.

Each VDK data object has a globally unique identifier, called a URI, associated with it.  So determining if one object refers to the same identical thing as another object is as simple as:

/* they refer to the same thing! */
if(object1.URI == object2.URI) {  }

This is universally true, regardless of the source of the data or the types of objects being compared.

But, what if you want to compare data fields of the objects, like determining if two people have the same birthday?

/* they have the same birthday! */
if(person1.birthday == person2.birthday) {  }

This works when the values of the “birthday” fields match because the VDK handles the “equals” methods.

Terminology note: we call the “birthday” data field a “property” of the “Person” class.  The “Person” class and properties like “birthday” are specified in an external data model, with code generated for the JVM (or JavaScript) using vitalsigns.

What if we tried to do:

/* the dates match! */
if(person1.hireDate == person2.birthday) {  }

If the values were the same, this would be true, but it looks like it might be a programming error as we’re comparing hiring dates with birthdays — apples to oranges instead of apples to apples.

Bugs such as this can be difficult enough with developer created code, but it gets much worse in data analysis and machine learning with comparisons like:

/* data driven action */
/* such as increase likelihood of customer retention */
if(property537 == property675) {  }

which are typically generated through an automated process where it is very difficult to track the meaning of the many thousand properties being analyzed.

Adding Semantics and Semantic Equality

In the VDK, both classes like “Person” and properties like “birthday” have a semantic marker to specify what they “mean”.  So in addition to “birthday” being associated with the “Date” data type, it also has a semantic marker like:

http://vital.ai/ontology/vital-examples#birthday

This URI places “birthday” into a domain model, which can then be used to see if comparisons are “compatible” with another property.  Using such logic we can compare fields like “birthday” and “age” since we can convert one such property to another.

Implementation Note: we use the “trait” language capability of the JVM (Java/Groovy/Scala) to semantically “mark” objects.  Some documentation about the Groovy implementation of traits can be found here: http://docs.groovy-lang.org/next/html/documentation/core-traits.html

With these URIs associated with properties, we modified “==” to take into account whether two properties (or classes) can be compared semantically.

We call the redefined “==” symbol: Semantic Equality.

For example, let’s say we have a property “name” and a subproperty “nickName” and another subproperty of “name” for “familyName”, so a property hierarchy like:

name
 +-- nickName
 +-- familyName

Then we can have:

/* this could be true */
if(person1.name == person2.nickName) {  }

/* this could be true */
if(person1.name == person2.familyName) {  }

/* this can't be true! */
if(person1.nickName == person2.familyName) {  }

The last case can’t be true because familyName is not an ancestor of nickName or vice versa according to the property hierarchy.
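
A rough sketch of that ancestor test (hypothetical code with made-up property URIs in the example namespace; the actual VDK derives the hierarchy from the domain model and traits) might look like:

// Hypothetical sketch of the compatibility test behind Semantic Equality:
// two properties are comparable only if one is an ancestor of the other
// in the property hierarchy. The URIs below are made up for illustration.
def parentOf = [
    'http://vital.ai/ontology/vital-examples#nickName'  : 'http://vital.ai/ontology/vital-examples#name',
    'http://vital.ai/ontology/vital-examples#familyName': 'http://vital.ai/ontology/vital-examples#name'
]

boolean isAncestor(String ancestor, String uri, Map parents) {
    for (String current = uri; current != null; current = parents[current]) {
        if (current == ancestor) return true
    }
    return false
}

boolean comparable(String a, String b, Map parents) {
    isAncestor(a, b, parents) || isAncestor(b, a, parents)
}

assert  comparable('http://vital.ai/ontology/vital-examples#name',
                   'http://vital.ai/ontology/vital-examples#nickName', parentOf)
assert !comparable('http://vital.ai/ontology/vital-examples#nickName',
                   'http://vital.ai/ontology/vital-examples#familyName', parentOf)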

This helps us catch bugs like:

/* now is always false! */
if(person1.hireDate == person2.birthday) {  }

by making them never evaluate to true because “birthday” and “hireDate” are not semantically compatible.  In the same way, your favorite food can not be “armchair” because “armchair” is not a food.

The Semantic Equality operator is similar to the JavaScript “===”, except stronger.  In JavaScript, the “==” operator will try to convert one type to another, like a number to a string so that two things can be compared with a common type, so it is forgiving of type differences, i.e. weakly typed.  This can be handy, but often leads to bugs. The JavaScript “===” operator on the other hand, does not do type conversion, so it “strongly” enforces data type comparison.  The VDK Semantic Equality adds one more “level” to this by enforcing that the compared data is semantically compatible.

Comparing Values without Semantics

Now, let’s say we really want to compare the values and not take into account the semantics of the properties.

We introduced an operator for this case, “^=“, by redefining the XOR assignment operator.  XOR assignment is rarely used in practice, and the caret “^” is sometimes used for boolean NOT operations, so we thought it was a good match for when the semantics are “NOT” a match.

So, if we really want to compare the values of birthday and hireDate, we can do:

/* true if values match */
if(person1.hireDate ^= person2.birthday) {  }

which is true when the hireDate and birthday values match, ignoring the semantics of the properties.  This is analogous to the JavaScript difference between “===” (strong) and “==” (weak), except that JavaScript either enforces datatypes (or not), while the VDK either enforces semantics (or not).
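
As a rough sketch of the underlying idea (hypothetical code, not the VDK implementation, and using the plain “^” operator — which Groovy maps to a method named xor() — rather than the “^=” assignment form), a wrapper around a property value could compare raw values in xor() while keeping equals() semantic:

// A rough sketch under assumptions, not VDK code: the PropertyValue class
// and its fields are made up for illustration.
class PropertyValue {
    String semanticUri
    Object rawValue

    // Semantic Equality: values must match AND the semantics must be
    // compatible (here simplified to exact URI equality).
    @Override
    boolean equals(Object other) {
        other instanceof PropertyValue &&
            semanticUri == other.semanticUri &&
            rawValue == other.rawValue
    }

    @Override
    int hashCode() { Objects.hash(semanticUri, rawValue) }

    // Value-only comparison, exposed through the "^" operator.
    boolean xor(PropertyValue other) {
        rawValue == other.rawValue
    }
}

def hireDate = new PropertyValue(semanticUri: '#hireDate', rawValue: '1990-05-01')
def birthday = new PropertyValue(semanticUri: '#birthday', rawValue: '1990-05-01')

assert !(hireDate == birthday)   // different semantics
assert hireDate ^ birthday       // same underlying value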

VDK Groovy Language Extensions

Semantic Equality is part of the language extensions and DSL (domain specific language) incorporated into the Groovy JVM language with the Vital Development Kit (VDK) to make it easier to work with diverse data.

Feedback!

I hope you have enjoyed learning about the Semantic Equality feature of the VDK!  Please post your comments and questions here, or follow up with us at Vital AI via info@vital.ai.

Optimizing the Data Supply Chain for Data Science

I gave a talk at the Enterprise Dataversity conference in Chicago in November.

The title of the talk was:

“Optimizing the Data Supply Chain for Data Science”.


Below are the slides from that presentation.

Here is a quick summary of the talk:

The Data Supply Chain is the next step in the progression of large-scale data management: starting with a “traditional” Data Warehouse, moving to a Hadoop-based environment such as a Data Lake, then to a Microservice Oriented Architecture (microservices across a set of independently managed Hadoop clusters, “Micro-SOA”), and now to the Data Supply Chain, which adds data management and coordination processes to produce high-quality Data Products across independently managed environments.

A Data Product can be any data service, such as an eCommerce recommendation system, a Financial Services fraud/compliance predictive service, or an Internet of Things (IoT) logistics optimization service.  As a specific example, loading the Amazon.com website triggers more than 170 Data Products predicting consumer sentiment, likely purchases, and much more.

The “Data Supply Chain” (DSC) is a useful metaphor for how a “Data Product” is created and delivered.  Just like a physical “Supply Chain”, data is sourced from a variety of suppliers.  The main difference is that a Data Product can be a real-time combination of all the suppliers at once as compared to a physical product which moves linearly along the supply chain.  However, very often data does flow linearly across the supply chain and becomes more refined downstream.

Each participant in a DSC may be an independent organization, a department within a large organization, or a combination of internal and external data suppliers — such as combining internal sales data with social media data.

As each participant in the DSC may have its own model of data, combining data from many sources can be very challenging due to incompatible assumptions.  As a simple example, a “car engine supplier” considers a “car engine” as a finished “product“, whereas a “car manufacturer” considers a “car engine” to be a “car part” and a finished car as a “product“, therefore the definitions of “product” and “car engine” are inconsistent.

As there is no central definition of data as each data supplier is operating independently, there must be an independent mechanism to capture metadata to assist flowing data across the DSC.

At Vital AI, we use semantic data models to capture data models across the DSC.  The models capture all the implicit assumptions in the data, and facilitate moving data across the DSC and building Data Products.

We generate code from the semantic data models which then automatically drives ETL processes, data mapping, queries, machine learning, and predictive analytics — allowing data products to be created and maintained with minimal effort while data sources continue to evolve.

Creating semantic data models not only facilitates creating Data Products, but also provides a mechanism to develop good data standards — Data Governance — across the DSC.  Data Governance is a critical part of high quality Data Science.

As code generated from semantic data models is included at all levels of the software stack, semantic data models also provide a mechanism to keep the interpretation of data consistent across the stack including in User Interfaces, Data Infrastructure (databases), and Data Science including predictive models.

As infrastructure costs continue to fall, the primary cost component of high quality Data Products is human labor.  The use of technologies such as semantic data models to optimize the Data Supply Chain and minimize human labor becomes more and more critical.

To learn more about the Data Supply Chain and Data Products, including how to apply semantic data models to minimize the effort, please contact us at Vital AI!

— Marc Hadfield

Email: info@vital.ai
Telephone: 1.917.463.4776

Vital AI example apps for prediction using AlchemyAPI (IBM Bluemix), Metamind.io, and Apache Spark

Along with our recent release of VDK 0.2.254, we’ve added a few new example apps to help developers get started with the VDK.

By starting with one of these examples, you can quickly build applications for prediction, classification, and recommendation with a JavaScript web application front end, and prediction models on the server.  The examples use prediction models trained using Apache Spark or an external service such as AlchemyAPI (IBM Bluemix), or Metamind.io.

There is also an example app for various queries of a document database containing the Enron Email dataset.  Some details on this dataset are here: https://www.cs.cmu.edu/~./enron/

The example applications have the same architecture.


The components are:

  • JavaScript front end, using asynchronous messages to communicate with the server.  Messaging and domain model management are provided by the VitalService-JS library.
  • VertX application server, making use of the Vital-Vertx module.
  • VitalPrime server using DataScripts to implement server-side functionality, such as generating predictions using a Prediction Model.
  • Prediction Models to make predictions or recommendations.  A Prediction Model can be trained based on a training set, or it could interface to an external prediction service.  If trained, we often use Apache Spark with the Aspen library to create the trained prediction model.
  • A Database such as DynamoDB, Allegrograph, MongoDB, or other to store application data.

Here is a quick overview of some of the examples.

We’ll post detailed instructions on each app in followup blog entries.

MetaMind Image Classification App:

Source Code:

https://github.com/vital-ai/vital-examples/tree/master/metamind-app

Demo Link:

https://demos.vital.ai/metamind-app/index.html

Screenshot:


This example uses a MetaMind ( https://www.metamind.io/ ) prediction model to classify an image.

AlchemyAPI/IBM Bluemix Document Classification App

Source Code:

https://github.com/vital-ai/vital-examples/tree/master/alchemyapi-app

Demo Link:

https://demos.vital.ai/alchemyapi-app/index.html

Screenshot:


This example app uses an AlchemyAPI (IBM Bluemix) prediction model to classify a document.

Movie Recommendation App

Source Code (Web Application):

https://github.com/vital-ai/vital-examples/tree/master/movie-recommendations-js-app

Source Code (Training Prediction Model):

https://github.com/vital-ai/vital-examples/tree/master/movie-recommendations

Demo Link:

https://demos.vital.ai/movie-recommendations-js-app/index.html

Screenshot:


This example uses a prediction model trained on the MovieLens data to recommend movies based on a user’s current movie ratings.  The prediction model uses the Collaborative Filtering algorithm trained using an Apache Spark job.  Each user has a user-id such as “1010” in the screenshot above.

Spark’s collaborative filtering implementation is described here:

http://spark.apache.org/docs/latest/mllib-collaborative-filtering.html

The MovieLens data can be found here:

http://grouplens.org/datasets/movielens/

Enron Document Search App

Source Code:

https://github.com/vital-ai/vital-examples/tree/master/enron-js-app

Demo Link:

https://demos.vital.ai/enron-js-app/index.html

Screenshot:


This example demonstrates how to implement different queries against a database, such as a “select” query — find all documents with certain keywords, and a “graph” query — find documents that are linked to users.

Example Data Visualizations:

The Cytoscape graph visualization tool can be used to visualize the above sample data using the Vital AI Cytoscape plugin.

The Cytoscape plugin is available from:

https://github.com/vital-ai/vital-cytoscape

An example of visualizing the MovieLens data:


An example of visualizing the Wordnet Dataset, viewing the graph centered on “Red Wine”:


For generating and importing the Wordnet data, see sample code here:

https://github.com/vital-ai/vital-examples/tree/master/vital-samples/src/main/groovy/ai/vital/samples

Information about Wordnet is available here:

https://wordnet.princeton.edu/

Another example of the Wordnet data, with some additional visual styles added:


Vital AI Dev Kit and Products Release 254

VDK 0.2.254 was recently released, as well as corresponding releases for each product.

The new release is available via the Dashboard:

https://dashboard.vital.ai

Artifacts are in the maven repository:

https://github.com/vital-ai/vital-public-mvn-repo/tree/releases/vital-ai

Code is in the public github repos for public projects:

https://github.com/orgs/vital-ai

Highlights of the release include:

Vital AI Development Kit:

  • Support for deep domain model dependencies.
  • Full support of dynamic domain models (OWL to JVM and JSON-Schema)
  • Synchronization of domain models between local and remote vitalservice instances.
  • Service Operations DSL for version upgrade and downgrade to facilitate updating datasets during a domain model change.
  • Support for loading older/newer versions of a domain model to facilitate upgrading/downgrading datasets.
  • Configuration option to specify enforcement of version differences (strict, tolerant, lenient).
  • Able to specify preferred version of imported domain models.
  • Able to specify backward compatibility with prior domain model versions.
  • Support for deploy directories to cleanly separate domain models under development from those deployed in applications.

VitalPrime:

  • Full dynamic domain support
  • Synchronization of domain models between client and server
  • Datascripts to support domain model operations
  • Support for segment to segment data upgrade/downgrade for domain model version changes.

Aspen:

  • Prediction models to support externally defined taxonomies.
  • Support of AlchemyAPI prediction model
  • Support of MetaMind prediction model
  • Support for dynamic domain loading in Spark
  • Added jobs for upgrading/downgrading datasets for version change.

Vital Utilities:

  • Import and Export scripts using bulk operations of VitalService
  • Data migration script for updating dataset upon version change

Vital Vertx and VitalService-JS:

  • Support for dynamic domain models in JSON-Schema.
  • Asynchronous stream support, including multi-part data transfers (file upload/download in parts).

Vital Triplestore:

  • Support for EQ_CASE_INSENSITIVE comparator
  • Support for Allegrograph 4.14.1

Tracking Big Data Models in OWL with Git Version Control

In my presentation this year at NoSQL Now! / Semantic Technology Conference, I discussed Big Data Modeling.

A key point is using the same Data Model throughout an application stack, so data can be collected, stored, and analyzed in a streamlined way without introducing data inconsistencies, which otherwise inevitably occur during manual data transformations.  Ideally the Data Model can be used to integrate additional components into your application stack with no additional manual integration effort, such as adding Machine Learning Analyzers with the Data Model specifying data elements to use in the analysis.

I presented OWL Ontologies ( http://www.w3.org/TR/owl2-overview/ ) as a great means of capturing Data Models, which can then be automatically transformed into the “schema” needed by different elements of the application stack, such as NoSQL databases or Machine Learning Analyzers.  At Vital AI, we use our tool VitalSigns to transform OWL Ontologies into code and schema files for a variety of components like HBase and Hadoop MapReduce/Spark Jobs.

You can see the full presentation here:
https://vitalai.com/2014/08/26/big-data-modeling-at-nosqlnow-semantic-technology-conference-san-jose-2014/

An OWL Data Model used in this way is part of your codebase, and should be managed in the same way as the rest of your code.

Git is a wonderful code management tool — let’s use OWL and Git together!

Git can be used as a service from providers such as Github and Bitbucket.  Whether you use git internally or via a service provider, it’s a great way to keep developers organized while still working in a distributed and independent way.

As part of Vital AI’s VitalSigns tool, we’ve integrated Git and OWL in the following way:

Within our “home” directory, we keep a directory of domain ontologies in OWL at:

{home}/domain-ontology/

Previous versions of an ontology get moved to an archive directory at:

{home}/domain-ontology/archive/

We keep a strict naming convention of the ontologies:

{Domain}-{version}.owl

The Domain is kept unique and is the key element in the Ontology URI, such as:

http://www.vital.ai/ontology/nycschools/NYCSchoolRecommendation.owl

with “NYCSchoolRecommendation” as the Domain in this case, with “http://www.vital.ai/ontology/nycschools/” providing a unique namespace for an application.

The version follows the Semantic Versioning standard described here:

http://semver.org/

with a value like “0.1.8”

This value is also in the OWL ontology, specified like:

<owl:versionInfo>0.1.8</owl:versionInfo>

This makes the filename of this OWL ontology:

NYCSchoolRecommendation-0.1.8.owl

When we want to modify an ontology we first increase the patch number using a script:

vitalsigns upversion NYCSchoolRecommendation-0.1.8.owl

which increases the version to 0.1.9, moves the old file to the archive, and creates a new version:

NYCSchoolRecommendation-0.1.9.owl

that is ready to be modified.
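
A hypothetical sketch of what such an upversion step amounts to (this is not the actual vitalsigns script), following the naming and archive conventions above:

// Hypothetical sketch, not the vitalsigns script: bump the patch number,
// rewrite the owl:versionInfo annotation, and archive the old file under
// the user's name per the convention described above.
File upversion(File owlFile, String user) {
    def m = (owlFile.name =~ /(.+)-(\d+)\.(\d+)\.(\d+)\.owl/)
    if (!m.matches()) throw new IllegalArgumentException("expected {Domain}-{version}.owl, got ${owlFile.name}")
    String domain = m.group(1)
    int major = m.group(2) as int
    int minor = m.group(3) as int
    int patch = m.group(4) as int

    String oldVersion = "${major}.${minor}.${patch}"
    String newVersion = "${major}.${minor}.${patch + 1}"

    // new working copy, with the owl:versionInfo annotation bumped to match
    File next = new File(owlFile.parentFile, "${domain}-${newVersion}.owl")
    next.text = owlFile.text.replace(
        "<owl:versionInfo>${oldVersion}</owl:versionInfo>",
        "<owl:versionInfo>${newVersion}</owl:versionInfo>")

    // archive the old file as e.g. NYCSchoolRecommendation-johnsmith-0.1.8.owl
    File archiveDir = new File(owlFile.parentFile, 'archive')
    archiveDir.mkdirs()
    owlFile.renameTo(new File(archiveDir, "${domain}-${user}-${oldVersion}.owl"))

    return next
}

// e.g. upversion(new File('domain-ontology/NYCSchoolRecommendation-0.1.8.owl'), 'johnsmith')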

We keep the previous versions of the Ontology in the archive so that we can easily “roll back” to a previous version.  This is especially helpful as we may have data conformant to older versions of the Ontology — we can use the older Ontology version to interpret these data sets.  We may have years’ worth of data in our Data Warehouse (such as in a Hadoop cluster), and we don’t want to lose what the data means by losing our data model.

To update the ontology files, basic git commands such as “git add” and “git mv” are used, so that the git repository is aware of the new ontology and of the moved old version.

Updating the git repository is then just a matter of using git commands such as “git push” to push updates to a remote repository, and “git pull” to bring in updates from a remote repository.  By making modifications and using git push and pull, your entire development team can keep up to date with the latest versions of the OWL ontologies.

Git integration requires a few more steps for full integration.

When a file is moved into the archive, we add the username to the filename — this avoids clashes in the archive if two (or more) users independently moved the OWL ontology into the archive.  Thus, in the archive, we may have an OWL file with a name like:

NYCSchoolRecommendation-johnsmith-0.1.8.owl

when the user “johnsmith” moved it into the archive.  This won’t collide with a file like:

NYCSchoolRecommendation-maryjones-0.1.8.owl

if “maryjones” also was working on that version of the file.

Git compares files to determine if they are different or the same using a command called “diff” (from “differences”).  The “diff” command compares files line by line to find how they differ.  Software source code is generally in linear order (Step 1, followed by Step 2, followed by Step 3, …), so this is a very natural way to find differences in source code.  However, order is not necessarily important in OWL files — the data model can be defined in any order.  If we define classes A and then B, this is the same as defining classes B and then A.  Thus, diff does not work well with OWL files — unless you give it a little help.

OWL is made up of definitions of classes, properties, annotations, and other elements.  Each of these has a unique identifier (a URI) associated with it.

This identifier gives us a way to sort the OWL ontology so we can always put it in the same order.  Once in the same order, we can compare the elements of the OWL ontology, such as class to class, property to property, to detect differences.

So, with a little help, we can continue to use an updated version of “diff” to find the differences between OWL ontologies, which is a  key part of tracking changes.
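
As a rough sketch of that idea (an illustration under assumptions, not the VitalSigns implementation or the owl2vcs tool mentioned below), we can index each top-level OWL/XML element by its rdf:about URI and compare two ontology versions element by element, so declaration order no longer matters:

// A rough sketch (assumptions, not VitalSigns or owl2vcs code): index each
// top-level OWL/XML element by its rdf:about URI, then compare two ontology
// versions element-by-element instead of line-by-line.
import groovy.xml.XmlUtil

def elementsByUri = { File owlFile ->
    def root = new XmlParser(false, false).parse(owlFile)   // not namespace-aware, keeps the rdf: prefixes
    root.children()
        .findAll { it instanceof Node && it.attribute('rdf:about') }
        .collectEntries { [(it.attribute('rdf:about')): XmlUtil.serialize(it)] }
}

def oldOnt = elementsByUri(new File('archive/NYCSchoolRecommendation-0.1.8.owl'))
def newOnt = elementsByUri(new File('NYCSchoolRecommendation-0.1.9.owl'))

def added   = newOnt.keySet() - oldOnt.keySet()
def removed = oldOnt.keySet() - newOnt.keySet()
def changed = oldOnt.keySet().intersect(newOnt.keySet()).findAll { oldOnt[it] != newOnt[it] }

println "added:   ${added}"
println "removed: ${removed}"
println "changed: ${changed}"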

The final addition to git required for supporting OWL ontology files is to the “merge” operation.  Git uses “merge” to merge changes between two versions of a file to create a new file.  Similar to the case with “diff”, the files are expected to be in the same order.  So, for an OWL merge, we must first sort the elements as we did with diff, and compare them one by one to merge changes into a merged file.

To summarize, to use OWL files and Git together we must:

  • Enforce a naming convention using the version number in both the file and the version annotation so that our archive will have historical versions of the OWL ontologies — we can easily “roll back” to a previous version, especially when interpreting data that may be conformant to an earlier version of the Ontology.
  • The naming convention should incorporate the username of the user making the change to prevent clashes in the archive
  • Update diff to put OWL files in sorted order to line up differences
  • Update merge to use sorted OWL files to help merging differences

For helpful code for the diff and merge cases above, check out the open-source:

https://github.com/utapyngo/owl2vcs

VitalSigns makes use of this and the other mentioned methods to integrate OWL and Git.

Please contact us to help your team use Git and OWL Ontologies together!

http://vital.ai/#contact

Big Data Modeling at NoSQLNow! / Semantic Technology Conference, San Jose 2014

We had a wonderful time in San Jose last week at the NoSQLNow! / Semantic Technology Conference.

Many thanks to the organizers Tony Shaw, Eric Franzon, and the rest of the Dataversity team for putting on a great event!

My presentation on Thursday afternoon was “Big Data Modeling”.

The presentation is available below:

Vital AI: Big Data Modeling from Vital.AI

Generating a Wordnet Dataset using Vital AI Development Kit

Part of a series beginning with:

https://vitalai.com/2014/04/29/data-visualization-with-vital-ai-wordnet-and-cytoscape/

To import a new dataset into Vital AI with the VDK, the first thing we need to do is add any needed classes and properties into our data model to help model the dataset.

In the  case of Wordnet, we like to use it as an example, and so have added classes and properties for it into the main Vital data model (vital.owl).

The main Node we’ve defined is the SynsetNode, as Wordnet uses “synset” objects for synonym-sets.  This node has sub-classes for Verbs, Adjectives, Adverbs, and Nouns for those different types of words.


To connect the Wordnet SynsetNodes together, we represent the various Wordnet relationship types as Edges (there are a bunch).  Two such relationships are HyperNym and HypoNym, which are sometimes called the type-of or is-a relationship, such as the relationship between Tiger/Animal or Red/Color.

More information about HyperNyms and HypoNyms is available via Wikipedia here:  http://en.wikipedia.org/wiki/Hyponymy_and_hypernymy.


The current version of the Vital AI ontologies is available on GitHub here: https://github.com/vital-ai/vital-ontology/tree/rel-0.1.0

Now that we have our data model ready, we can generate a dataset.

There is an open-source API to access the Wordnet dictionary files via Java available from:  http://projects.csail.mit.edu/jwi/

We can use this API to help generate our dataset with code like this to create all our nodes:

for(POS p : POS.values()) {

    for( Iterator synsetIterator = _dict.getSynsetIterator(p);
         synsetIterator.hasNext(); ) {

        ISynset next = synsetIterator.next()

        String gloss = next.getGloss()

        List words = next.getWords()

        String word_string = words.toString()

        String idPart = "${next.getPOS().getTag()}_${((ISynsetID)next.getID()).getOffset()}"

        SynsetNode sn = cls.newInstance()

        sn.URI = URIGenerator.generateURI("wordnet", cls)
        sn.name = word_string
        sn.gloss = gloss
        sn.wordnetID = idPart

        writer.startBlock()
        writer.writeGraphObject(sn)
        writer.endBlock()
    }

}

This mainly iterates over the parts-of-speech, iterates over the synonym-sets (“concepts”) in each part-of-speech, collects the words associated with each synonym-set, and adds a new SynsetNode for each synonym-set, setting a URI (unique identifier), the set of words, the gloss (short definition), and the Wordnet identifier.

and code like this to create all our edges:

for(POS p : POS.values()) {

    for( Iterator synsetIterator = _dict.getSynsetIterator(p); synsetIterator.hasNext(); ) {

        ISynset key = synsetIterator.next()

        String uri = synsetWords.get(key.getID())

        for( Iterator<Entry<IPointer, List>> iterator2 = key.getRelatedMap().entrySet().iterator(); iterator2.hasNext(); ) {

            Entry<IPointer, List> next2 = iterator2.next()

            IPointer type = next2.getKey()
            List l = next2.getValue()

            for(ISynsetID id : l) {

                String destURI = synsetWords.get(id)

                Edge_hasWordnetPointer newEdge = cls.newInstance()

                newEdge.URI = URIGenerator.generateURI("wordnet", cls)
                newEdge.sourceURI = uri
                newEdge.destinationURI = destURI

                writer.startBlock()
                writer.writeGraphObject(newEdge)
                writer.endBlock()
            }
        }
    }
}

This iterates over the parts-of-speech, iterates over all the synsets, gets the set of relationships for each, and adds an Edge for each such relationship using Edges of specific type, like HyperNym and HypoNym.

With this we have all our Nodes and Edges written to a dataset file (see previous blog entries for our file “block” format).

We can then import the dataset file into a local or remote Vital Service endpoint instance.

Next Post: https://vitalai.com/2014/04/29/building-a-data-visualization-plugin-with-the-vital-ai-development-kit/