We released VDK 0.2.304 a couple of weeks ago, and it contains many new features.
One of the most significant is the asynchronous version of the VitalService API, which uses an underlying realtime distributed messaging system in its implementation.
Quick aside for some definitions: In synchronous systems, one component sends a request and waits for the answer. This usually involves “blocking”: holding on to system resources while waiting. In asynchronous systems, one component sends a request and moves on to other things, processing the answer when it arrives. This is usually “non-blocking,” with no or limited resources held while a response is pending. Applications usually combine both methods, since each has advantages and disadvantages: asynchronous requests generally carry more overhead but scale to large numbers of concurrent requests, while synchronous requests can return an individual answer more quickly. Most modern websites use asynchronous communication with the server, whereas microservices ( https://en.wikipedia.org/wiki/Microservices ) typically communicate via synchronous request/response calls.
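To make the contrast concrete, here is a minimal sketch (plain Python, not VDK code) of the two styles: a blocking call that holds the caller until the answer arrives, and a non-blocking call that hands the answer to a callback on another thread. The `fetch` function and its timings are stand-ins for a real remote request.

```python
# Illustrative sketch (not VDK code): synchronous vs. asynchronous requests.
import threading
import time

def fetch(request):
    """Stand-in for a remote call that takes some time to answer."""
    time.sleep(0.1)
    return f"answer to {request}"

# Synchronous: the caller blocks until the answer arrives.
answer = fetch("query-1")
print(answer)  # the calling thread was idle for the full round trip

# Asynchronous: the caller registers a callback and moves on.
def fetch_async(request, callback):
    worker = threading.Thread(target=lambda: callback(fetch(request)))
    worker.start()
    return worker

results = []
t = fetch_async("query-2", results.append)
print("caller is free to do other work while the answer is pending")

t.join()       # for the demo only: wait so we can show the delivered answer
print(results)
```

In a real application the asynchronous caller would never `join`; an event loop or message consumer would keep running and the callback would fire whenever the response arrived.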
While we have often combined VitalService with realtime messaging systems in Big Data applications to build fully “reactive” applications ( http://www.reactivemanifesto.org/ ), this deeper integration enables a much simpler realtime application implementation and a seamless flow across synchronous and asynchronous software components.
So, the advantage in the 0.2.304 release is a simplification and streamlining of the development process via a unification of APIs, resulting in fewer lines of code, quicker development, fewer bugs, lower cost, and less technical debt.
Using the updated API, a developer works with a single API and chooses whether each call is synchronous or asynchronous based on the parameters of the API call.
In the above diagram, we have an application using the VitalService API client. The application may be processing messages from a user, requesting realtime predictions from a predictive model, querying a database, or any other VitalService API function.
This VitalService API client is using Vital Prime as the underlying implementation. See: https://console.vital.ai/productdetails/vital-prime-021 and http://www.vital.ai/tech.html for more information about Vital Prime.
Vital Prime acts both as a REST server and as a message producer/consumer (publisher/subscriber).
When the VitalService API client authenticates with Prime, it internally learns the details of the distributed messaging cluster (a Kafka cluster), including connection details and the current set of “topics” (essentially, queues) with their statuses. Prime coordinates with the Kafka cluster using ZooKeeper ( https://zookeeper.apache.org/ ) to track the available brokers and the status of topics. The VitalService API client can then seamlessly direct outgoing API calls into messages, and direct incoming messages to callback functions.
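The routing idea on the client side can be sketched as a small topic-to-callback registry (hypothetical names, not actual VDK internals): subscriptions register a callback per topic, and each incoming message is dispatched to the callbacks registered for its topic.

```python
# Illustrative sketch (hypothetical, not actual VDK internals): a client-side
# router mapping Kafka-style topics to callback functions.
from collections import defaultdict

class MessageRouter:
    def __init__(self):
        self._callbacks = defaultdict(list)

    def subscribe(self, topic, callback):
        """Register a callback for messages arriving on a topic."""
        self._callbacks[topic].append(callback)

    def dispatch(self, topic, message):
        """Route an incoming message to every callback for its topic."""
        for cb in self._callbacks.get(topic, []):
            cb(message)

router = MessageRouter()
received = []
router.subscribe("query-results", received.append)

# Simulate a result message arriving from the messaging cluster.
router.dispatch("query-results", {"status": "ok", "rows": 3})
print(received)
```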
Thus, an API call such as a query has a synchronous version and an asynchronous version that is identical except for the callback function parameter (the callback can be empty for “fire-and-forget” API calls). If the synchronous version is used, a blocking REST call is made to Prime to fulfill the request. If the asynchronous version is used, the call is directed into the messaging system.
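A minimal sketch of that split, with hypothetical names standing in for the real VitalService client: the two versions of the query differ only in the callback parameter, and a missing callback makes the asynchronous call fire-and-forget. Here the REST call and the message publish are simulated so the example is self-contained.

```python
# Illustrative sketch (hypothetical API, not the real VitalService client).

def rest_call(q):
    """Stand-in for the blocking REST request to Prime."""
    return {"query": q, "status": "ok"}

def publish(q, callback):
    """Stand-in for publishing the request onto the messaging system.
    In a real system the answer arrives later; here it is delivered at once."""
    if callback is not None:
        callback({"query": q, "status": "ok"})

def query(q):
    # Synchronous version: block until Prime answers.
    return rest_call(q)

def query_async(q, callback=None):
    # Asynchronous version: hand the request to the messaging system.
    # With callback=None this is a fire-and-forget call.
    publish(q, callback)

sync_result = query("select-nodes")       # caller gets the result directly

async_results = []
query_async("select-nodes", async_results.append)  # result goes to callback

query_async("log-event")                  # fire-and-forget: no answer expected
print(sync_result, async_results)
```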
In our example application, three “workers” are pictured, processing messages consumed from Kafka and coordinating with Prime as needed. By this method, work can be distributed across the cluster at whatever scale is needed. Such workers can be implemented with instances of Prime, a Spark cluster ( http://spark.apache.org/ with http://aspen.vital.ai/ ), or any scalable compute service such as AWS Lambda ( https://aws.amazon.com/lambda/ ).
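The worker pattern can be sketched with a shared in-memory queue standing in for a Kafka topic (plain Python, not VDK or Kafka client code): several workers pull messages from the same queue, so adding workers spreads the same stream of work across more consumers.

```python
# Illustrative sketch (not VDK code): a pool of workers draining a shared
# queue, standing in for consumers of a Kafka topic.
import queue
import threading

jobs = queue.Queue()
results = []
results_lock = threading.Lock()

def worker(worker_id):
    while True:
        job = jobs.get()
        if job is None:              # sentinel: shut this worker down
            break
        with results_lock:
            results.append((worker_id, job))
        jobs.task_done()

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()

for n in range(9):
    jobs.put(f"message-{n}")
jobs.join()                          # wait until every message is processed

for _ in threads:
    jobs.put(None)                   # stop the workers
for t in threads:
    t.join()

print(len(results))
```

A real deployment would replace the in-memory queue with Kafka consumers in a consumer group, which gives the same work-sharing behavior across processes and machines.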
This deeper integration with realtime messaging systems, especially Kafka, has been spurred on by the upcoming release of our Haley AI Assistant service, which can be used to implement AI Assistants for different domains, as well as related “chatbot” services.
More information on Haley AI to come!