The OpenKL run-time is a modern virtual machine for knowledge processing applications. It encapsulates the data structures and operators that cover the mathematics and computer science of intelligent systems, and presents high-level abstractions for use by the application. The run-time is aware of the underlying hardware, and dispatches a low-level algorithm, matched to the characteristics of the machine, to execute each operator.
A key problem created by hardware acceleration is a new asymmetry between computational resources. In the standard stored-program machine model, processing resources are instruction-driven and access a shared, flat memory space. Coordination between computational resources is managed by instruction-stream barriers and pipes. Since the processing elements are symmetric, each thread of execution assumes the same model of computation. On asymmetric, hardware-accelerated platforms, however, threads of execution run in very specific contexts, with very different performance and power characteristics. This creates the problem of coordination and collaboration between different computational resources, typically the central processor and the hardware accelerator.
This coordination and collaboration aims to minimize power consumption and computation time. The minimization problem is the same for all accelerators, so a common run-time that manages it is advantageous.
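The minimization the run-time performs can be sketched as a weighted cost function over the candidate subsystems. The names and structure below are illustrative assumptions, not OpenKL's actual interface: each subsystem reports a predicted latency and energy cost for an operator, and the run-time dispatches to the cheapest one.

```c
#include <assert.h>
#include <float.h>
#include <stddef.h>

/* Hypothetical per-subsystem estimate for executing one operator.
 * Names are illustrative; the real OpenKL interface may differ. */
typedef struct {
    const char *name;   /* e.g. "cpu", "gpu", "kpu" */
    double latency_ms;  /* predicted execution time */
    double energy_mj;   /* predicted energy, in millijoules */
} okl_estimate;

/* Pick the subsystem minimizing a weighted latency/energy cost.
 * alpha = 1.0 optimizes purely for time, alpha = 0.0 purely for energy. */
static size_t okl_select(const okl_estimate *est, size_t n, double alpha)
{
    size_t best = 0;
    double best_cost = DBL_MAX;
    for (size_t i = 0; i < n; ++i) {
        double cost = alpha * est[i].latency_ms
                    + (1.0 - alpha) * est[i].energy_mj;
        if (cost < best_cost) {
            best_cost = cost;
            best = i;
        }
    }
    return best;
}
```

Because the weight `alpha` is a run-time parameter, the same dispatch logic serves a power-constrained embedded deployment and a latency-critical server deployment without recompiling the application.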
Figure 1: Stillwater OpenKL Run-Time Architecture.
The OpenKL run-time application interface is defined by abstract data structures and operators on those data structures. The run-time is responsible for resource and memory management of the underlying hardware. Furthermore, the run-time manages an abstraction of the underlying hardware accelerator in terms of operator latency and/or power consumption.
As the application requests services from the run-time, the run-time consults this information to dispatch each operator to the most advantageous subsystem. As subsystems become occupied, this dispatch adapts dynamically, enabling parallelism across the subsystems. OpenKL interfaces to the hardware accelerators through a plug-in interface that requests the above-mentioned information about operator latency and power consumption. Common resource management operations, such as memory allocation, event notifications, and performance counters, are also exposed through the plug-in interface.
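One way to picture the plug-in interface described above is as a table of callbacks that each accelerator registers with the run-time. This is a minimal sketch under assumed names (`okl_plugin`, `op_latency_ms`, and so on are hypothetical, not the documented OpenKL API), covering the two dispatch queries and the resource management hooks mentioned in the text.

```c
#include <stddef.h>

typedef int okl_op_id;

/* Hypothetical accelerator plug-in descriptor: the run-time calls these
 * hooks to obtain dispatch information and to manage resources.
 * All names here are illustrative assumptions. */
typedef struct okl_plugin {
    const char *name;

    /* Dispatch information: predicted cost of an operator on this device. */
    double (*op_latency_ms)(okl_op_id op, size_t problem_size);
    double (*op_power_w)(okl_op_id op, size_t problem_size);

    /* Resource management hooks. */
    void  *(*mem_alloc)(size_t bytes);
    void   (*mem_free)(void *ptr);
    void   (*notify)(int event_id);          /* event notifications   */
    long   (*read_counter)(int counter_id);  /* performance counters  */
} okl_plugin;
```

A function-pointer table of this shape keeps the run-time device-agnostic: adding a new accelerator means supplying one filled-in descriptor, with no changes to the dispatch core.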
As data structures and operators differ between verticals, OpenKL Knowledge Engines are introduced to aggregate best known methods. These Knowledge Engines are akin to application libraries, and enable additional software development productivity.
Knowledge processing operators, such as machine learning and sensor fusion, are complex algorithms. OpenKL provides finely tuned parallel implementations that run on CPUs, GPUs, KPUs, and in the elastic cloud.
When applying knowledge processing techniques to Big Data, you'll want to leverage scalable cloud platforms. OpenKL provides implementations that set up and tear down clusters, in the cloud if needed.
Just shoot us an email and we'll be glad to give you a hand with anything you need. Or just say hi!