The latest platform release enables developers to easily build real-time applications that combine sensor data and edge analytics with multiple Large Language Models (LLMs) to create mission-critical, operational Generative AI solutions. The release focuses on support for many of the new LLMs available in the market, as well as generative AI functions that enable LLM services to use Vantiq to take automated actions.
Support for Additional Industry Leading LLMs
Vantiq allows developers to use the most common large language models (LLMs) in their applications, including OpenAI’s GPT-4, models hosted on AWS Bedrock, Anthropic’s Claude, and Meta’s Llama. Vantiq’s support for AWS SageMaker enables hosting of custom or private models, as well as access to the growing model libraries from services such as Hugging Face.
Combine Multiple LLMs in the Same Application
Vantiq allows for the orchestration of multiple models within a single application, reducing hallucinations and improving the trustworthiness of real-time responses.
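One common orchestration pattern is cross-checking: ask the same question of several models and only trust an answer they agree on. The sketch below illustrates that idea generically in Python with stubbed model functions; the names `model_a`, `model_b`, and `cross_check` are illustrative assumptions, not Vantiq APIs — in practice the stubs would be calls to hosted services such as OpenAI or AWS Bedrock.

```python
from typing import Callable, List

# Hypothetical stand-ins for two hosted LLMs. A real application would
# replace these with API calls to the actual model services.
def model_a(prompt: str) -> str:
    return "Pump 7 pressure is above the safe threshold."

def model_b(prompt: str) -> str:
    return "Pump 7 pressure is above the safe threshold."

def cross_check(prompt: str, models: List[Callable[[str], str]]) -> dict:
    """Ask every model the same question; only trust a unanimous answer."""
    answers = [m(prompt) for m in models]
    agreed = len(set(answers)) == 1
    return {
        "agreed": agreed,
        "answer": answers[0] if agreed else None,  # withhold on disagreement
        "answers": answers,
    }

result = cross_check("Is pump 7 operating within safe limits?", [model_a, model_b])
```

When the models disagree, the orchestrator can escalate to a human or re-query with a refined prompt instead of passing along a possibly hallucinated response.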
Low-Code Development of LLM Functions
Vantiq’s low-code builders can now rapidly create functions that LLMs can call to request more information or take automated action. This makes it possible to build LLM solutions that not only converse with humans but also dynamically invoke functions to do work based on user input or real-time data from the environment.
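Under the hood, this style of function calling typically works by registering named functions and dispatching structured calls the model emits. The Python sketch below shows the general pattern with a hypothetical registry and a stubbed sensor lookup (`TOOLS`, `get_sensor_reading`, and `dispatch` are illustrative assumptions, not Vantiq's implementation).

```python
import json

# Hypothetical registry of functions the LLM is allowed to invoke.
TOOLS = {}

def tool(fn):
    """Decorator that registers a function under its own name."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def get_sensor_reading(sensor_id: str) -> float:
    # Stubbed lookup; a real app would query live edge/sensor data.
    return {"temp-01": 72.5}.get(sensor_id, 0.0)

def dispatch(tool_call_json: str):
    """Execute a structured tool call, e.g. one emitted by an
    OpenAI-style function-calling LLM."""
    call = json.loads(tool_call_json)
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])

# A function-calling LLM would produce a payload shaped like this:
reading = dispatch('{"name": "get_sensor_reading", '
                   '"arguments": {"sensor_id": "temp-01"}}')
```

The result of the call can be fed back into the conversation so the model can reason over live data before answering or acting.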
LLM Conversation Management Tools
Maintaining context for LLMs is a common development chore that Vantiq solves in this release by providing tools to easily manage conversation memory in its Services:
– Programmatic conversation management tools
– Automatically store and recall conversation memory
– Features for “refocusing” conversations that veer from the original topic
– Maintain multiple conversations within a single collaboration
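The capabilities above — storing turns, bounding memory, and refocusing a drifting dialogue — can be sketched generically in a few lines of Python. The `Conversation` class below is a minimal illustration under assumed behavior, not Vantiq's Service API: it keeps a pinned system prompt, trims the oldest turns past a budget, and re-injects the topic to refocus.

```python
class Conversation:
    """Minimal sketch of conversation-memory management."""

    def __init__(self, topic: str, max_turns: int = 20):
        self.topic = topic
        self.max_turns = max_turns
        # The system prompt is pinned and never trimmed.
        self.messages = [{"role": "system",
                          "content": f"Stay on topic: {topic}"}]

    def add(self, role: str, content: str):
        self.messages.append({"role": role, "content": content})
        # Drop the oldest turns beyond the budget, keeping the system prompt.
        overflow = len(self.messages) - 1 - self.max_turns
        if overflow > 0:
            del self.messages[1:1 + overflow]

    def refocus(self):
        """Re-inject the original topic when the dialogue drifts."""
        self.add("system", f"Refocus: return to the topic '{self.topic}'.")

convo = Conversation("pump maintenance", max_turns=4)
for i in range(6):
    convo.add("user", f"message {i}")
convo.refocus()  # memory stays bounded; oldest turns were dropped
```

A production system would also persist this memory so conversations can be recalled across sessions, and could keep several `Conversation` instances side by side within one collaboration.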
Want to get started with Vantiq? Contact us here to request access to the platform.
Learn more about Vantiq’s Generative AI Capabilities