Nebula Update v0.0.5: Advanced configurations, new endpoints, and simplified context filters
Jake Loo
Samina Kabir
This release focuses on granular response configurations to control output randomness, limits, and formats. It also adds two new OpenAI-compatible endpoints, /chat/completions and /models, and a simplified context filter across endpoints for an improved developer experience.
- Enable advanced LLM configurations to control output randomness and formatting, including temperature, presence penalty, max tokens, new response formats (JSON object, JSON schema), and nucleus sampling (top-p).
- New /chat/completions endpoint, compatible with the OpenAI client, to process chat history and generate context-aware responses (see the sketch after this list).
- New OpenAI-compatible endpoint to list the models used by Nebula (also sketched below).
- Simplified context_filter and execute_config into a single context parameter for the /chat, /execute, and /chat/completions endpoints.
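To illustrate how the new configuration options and the /chat/completions endpoint fit together, here is a minimal sketch using the standard OpenAI Python client. The base URL, model id, and key are placeholders, not documented values:

```python
from openai import OpenAI

# Point the standard OpenAI client at Nebula's OpenAI-compatible API.
# The base URL and model id below are hypothetical placeholders.
client = OpenAI(
    base_url="https://nebula-api.thirdweb.com",  # hypothetical base URL
    api_key="YOUR_NEBULA_SECRET_KEY",
)

response = client.chat.completions.create(
    model="nebula",  # placeholder model id; list real ids via /models
    messages=[
        {"role": "user", "content": "Return the latest Ethereum block number as JSON."}
    ],
    # Advanced configuration options introduced in this release:
    temperature=0.2,        # lower values reduce output randomness
    top_p=0.9,              # nucleus sampling cutoff
    presence_penalty=0.5,   # discourage repeating topics already mentioned
    max_tokens=512,         # cap the length of the generated response
    response_format={"type": "json_object"},  # force a JSON object response
)

print(response.choices[0].message.content)
```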
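The new models endpoint maps onto the client's standard model-listing call. Again a sketch, with the same hypothetical base URL:

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://nebula-api.thirdweb.com",  # hypothetical base URL
    api_key="YOUR_NEBULA_SECRET_KEY",
)

# The /models endpoint maps to the client's standard models.list() call.
for model in client.models.list():
    print(model.id)
```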
Before:
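A hypothetical sketch of the old request shape, with configuration split across two separate objects (the inner field names are assumptions for illustration):

```python
import requests

# Illustrative sketch only: the previous /chat request carried two
# separate objects, context_filter and execute_config. Field names
# inside them are assumptions, not documented values.
payload = {
    "message": "What is the balance of this wallet?",
    "context_filter": {
        "chain_ids": ["1"],
        "contract_addresses": ["0x..."],
        "wallet_addresses": ["0x..."],
    },
    "execute_config": {
        "mode": "client",
        "signer_wallet_address": "0x...",
    },
}
requests.post(
    "https://nebula-api.thirdweb.com/chat",  # hypothetical URL
    headers={"x-secret-key": "YOUR_NEBULA_SECRET_KEY"},
    json=payload,
)
```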
New:
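And a hypothetical sketch of the same request with everything consolidated into a single context object:

```python
import requests

# Illustrative sketch only: the same request after this release, with
# both objects consolidated into one context object. Field names are
# assumptions, not documented values.
payload = {
    "message": "What is the balance of this wallet?",
    "context": {
        "chain_ids": ["1"],
        "contract_addresses": ["0x..."],
        "wallet_address": "0x...",
    },
}
requests.post(
    "https://nebula-api.thirdweb.com/chat",  # hypothetical URL
    headers={"x-secret-key": "YOUR_NEBULA_SECRET_KEY"},
    json=payload,
)
```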
Nebula is currently in Alpha for a select number of developers. For access to the product, please sign up on the Nebula waitlist.
For any feedback or support inquiries, please visit our support site.