AdaptiveAI™
Train Once, Run Anywhere
In one of our most important breakthroughs, Config applied Self-Aware™ Computing technology to AI, making inference engines configurable after training so that outputs can be shaped by goals without retraining.
Because an inference engine can reason about and report on its own outputs, and goals can modify where inference exits, “train once, run anywhere” takes on huge significance in the AI revolution now under way. Developers can apply goals to both training and inference, and change them periodically, to radically improve AI performance.
We call this tool set AdaptiveAI™: systems that can adjust, evolve and improve their performance over time based on new data, experiences and environmental changes, without the need for explicit reprogramming. These systems are designed to learn from their interactions, optimize their operations and adapt to dynamic conditions, using goals to make them more robust and effective.
To be Self-Aware™, Config first needed configuration parameters to be first class, which means being able to change them while the AI is running. It is a common belief that AI is infinitely configurable, and that is true during training; but once a model is trained and deployed as an inference engine, today’s AI systems are no longer configurable. Adapting an AI for use with the Sequitur™ Platform tools therefore requires some design changes, in exchange for the performance improvements that only AdaptiveAI™ can deliver through dynamic responsiveness to goals.
So we developed a suite of methods to make AI training and inference configurable. One approach works with already-trained models, bolting on a small adaptation layer that changes their configuration dynamically. The other builds AdaptiveAI™ requirements into the original neural network architecture decisions, yielding a much richer set of configuration parameters that can be tuned during inference.
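To make the idea of first-class, runtime-mutable configuration concrete, here is a minimal sketch in Python. All names (`InferenceConfig`, `ConfigurableEngine`, `apply_goal`, the specific parameters) are hypothetical illustrations, not Config's actual API; the point is only that a goal can retune a live inference engine without retraining.

```python
from dataclasses import dataclass

@dataclass
class InferenceConfig:
    """Hypothetical first-class configuration, mutable while the engine runs."""
    max_layers: int = 12      # how deep inference runs before exiting
    precision_bits: int = 16  # numeric precision used for activations

class ConfigurableEngine:
    """Wraps a trained model so goals can retune inference without retraining."""
    def __init__(self, config: InferenceConfig):
        self.config = config

    def apply_goal(self, **updates):
        # A goal updates configuration parameters on the live engine.
        for name, value in updates.items():
            setattr(self.config, name, value)

    def infer(self, x):
        # Placeholder: a real model would run up to config.max_layers layers
        # at config.precision_bits precision and return model output.
        return f"result(layers={self.config.max_layers}, bits={self.config.precision_bits})"

engine = ConfigurableEngine(InferenceConfig())
engine.apply_goal(max_layers=6)   # goal: trade some accuracy for latency
print(engine.infer("input"))      # runs with the updated configuration
```

The design choice being illustrated is simply that configuration lives in a mutable, inspectable object rather than being frozen into the deployed artifact.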
With inference tuning capability, a developer can dynamically adapt inference to ensure that goals are met even when the assumptions made during training no longer hold in practice. For example, if the host system suddenly lost capacity, instead of failing to deliver a result, inference would dynamically adjust and deliver a slightly worse one. Conversely, if new hardware became available, an adaptive AI trained with Self-Aware™ capabilities would simply use modified goals to take advantage of the increased capacity.
In this example, a developer could plan their AI inference to realize progressively better results until the system exhausts its resources or runs out of time.
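The "progressively better results until resources or time run out" behavior resembles the well-known anytime-algorithm pattern. The sketch below is an assumption-laden illustration of that pattern, not Config's implementation: `anytime_infer` and `refine_step` are invented names, and the toy refinement just halves the remaining error each step.

```python
import time

def anytime_infer(refine, budget_s, initial):
    """Iteratively refine a result, returning the best one available
    when the time budget runs out (an 'anytime' inference pattern)."""
    best = initial
    deadline = time.monotonic() + budget_s
    while time.monotonic() < deadline:
        improved = refine(best)
        if improved is None:   # no further refinement possible
            break
        best = improved
    return best

# Toy refinement step: each call halves the remaining error and costs time.
def refine_step(estimate, target=100.0, step_cost=0.01):
    time.sleep(step_cost)
    return estimate + (target - estimate) / 2

result = anytime_infer(refine_step, budget_s=0.1, initial=0.0)
# result lands closer to 100 the larger the budget; a smaller budget
# still yields a usable, slightly worse answer instead of a failure.
```

A goal-driven system could set `budget_s` from the current hardware state, tightening it when capacity drops and relaxing it when capacity grows.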
Another powerful application of these tools is to train and deploy a Self-Aware™ / AdaptiveAI™ model with whatever goals it initially needs. A developer can then adapt those goals for different pieces of hardware using the same training set, and the Config system will automatically configure inference for whatever new configurable hardware is deployed.
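One way to picture deploying one trained model across different hardware is to derive inference settings from a hardware profile. The profiles, heuristics, and names below are all assumptions for illustration only; they stand in for whatever goal mapping a real deployment would use.

```python
# Hypothetical hardware profiles; one trained model, different
# inference configurations chosen per deployment target.
HARDWARE_PROFILES = {
    "edge_mcu":   {"memory_mb": 64,    "tops": 0.5},
    "mobile_soc": {"memory_mb": 2048,  "tops": 8.0},
    "server_gpu": {"memory_mb": 24576, "tops": 200.0},
}

def configure_for(profile_name):
    """Derive inference settings from a hardware profile (assumed heuristics)."""
    hw = HARDWARE_PROFILES[profile_name]
    return {
        "batch_size": max(1, int(hw["tops"])),    # scale throughput with compute
        "quantize_8bit": hw["memory_mb"] < 1024,  # quantize when memory is tight
    }

print(configure_for("edge_mcu"))    # {'batch_size': 1, 'quantize_8bit': True}
print(configure_for("server_gpu"))  # {'batch_size': 200, 'quantize_8bit': False}
```

The same trained weights are reused everywhere; only the derived inference configuration changes per target.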
AdaptiveAI™ represents a powerful shift from traditional, static AI systems to dynamic, continuously learning systems that can thrive in uncertain, rapidly changing environments, such as complex systems of all types, including industrial process controls. These technologies apply everywhere inference is deployed and are particularly advantageous for reducing the cost of training and improving the performance of agent-based applications.
