Run Llama, Qwen, Gemma, Mistral, or any warm/cold LLM. No GPU required.
Create and manage OpenAI assistants for chat
Interact with NCTC OSINT Agent for OSINT tasks
Chat with AI at ⚡ lightning speed
An open-o1 demo with an improved system prompt
An example of using Langfuse to trace Gradio applications.
Interact with PDFs using a chatbot that understands text and images
Uncensored
Generate code and answers with chat instructions
Interact with an AI therapist that analyzes text and voice emotions, and responds with text-to-speech
Vision Chatbot with ImgGen & Web Search - Runs on CPU
A chatbot for Regal assistance!
NovaSky-AI-Sky-T1-32B-Preview
Serverless TextGen Hub is a serverless platform designed to run advanced language models such as Llama, Qwen, Gemma, and Mistral. It allows users to deploy and interact with these models without requiring GPU support, making it accessible and cost-effective. The platform is tailored for creating customizable AI assistants that can be integrated into various applications, enabling seamless chatbot functionality and text generation capabilities.
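As a minimal sketch of the chatbot workflow described above, the snippet below builds an OpenAI-style chat-completion payload for a hosted model. The endpoint URL, the `TEXTGEN_HUB_URL` variable, and the `build_chat_request` helper are hypothetical illustrations, not part of Serverless TextGen Hub's documented API; the assumption here is only that the platform accepts an OpenAI-compatible message format.

```python
import json
import os

# Hypothetical endpoint; the real Serverless TextGen Hub URL is an assumption here.
API_URL = os.environ.get("TEXTGEN_HUB_URL", "https://example.com/v1/chat/completions")

def build_chat_request(model: str, user_message: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-style chat-completion payload for a hosted model."""
    return {
        "model": model,  # e.g. a Llama, Qwen, Gemma, or Mistral model ID
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
        "max_tokens": max_tokens,
    }

payload = build_chat_request("Mistral", "Summarize serverless inference in one line.")
print(json.dumps(payload, indent=2))
```

The same payload shape works for any of the supported model families, since it only varies by the `model` field.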
What models are supported by Serverless TextGen Hub?
Serverless TextGen Hub supports a variety of models, including Llama, Qwen, Gemma, Mistral, and other warm/cold LLMs.
Do I need a GPU to run Serverless TextGen Hub?
No, Serverless TextGen Hub is designed to operate without requiring GPU support, making it accessible on standard computing resources.
How do I obtain API keys for the models?
API keys or model access tokens can be obtained from the respective model providers. Follow their instructions to set up and use the keys within Serverless TextGen Hub.
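One common pattern for the key setup above is to keep the token in an environment variable and attach it as a bearer header on each request. The variable name `TEXTGEN_API_KEY` below is a hypothetical placeholder, not an official setting; substitute whatever name your provider's instructions specify.

```python
import os

def auth_headers() -> dict:
    """Read the provider API key from the environment and build request headers.

    TEXTGEN_API_KEY is a hypothetical variable name used for illustration.
    """
    token = os.environ.get("TEXTGEN_API_KEY", "")
    if not token:
        raise RuntimeError("Set TEXTGEN_API_KEY to your model provider's API key.")
    return {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}

os.environ["TEXTGEN_API_KEY"] = "example_token"  # demo value only; use a real key
print(auth_headers()["Authorization"])  # → Bearer example_token
```

Keeping the key out of source code means the same script runs unchanged across providers and deployments.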