👋 Welcome to the Together AI docs! Together AI makes it easy to run or fine-tune leading open source models with only a few lines of code. We offer a variety of generative AI services:
Serverless models
Use the API or playground to evaluate 100+ models that run out of the box on our Inference Engine. You only pay per token (or per image).
On-demand dedicated endpoints
Run models on your own private GPU, with a pay-per-second usage model. Start dedicated endpoints here and review our docs.
Monthly reserved dedicated endpoints
Larger-capacity reserved instances with a one-month minimum, including VPC options for large deployments. Contact us.
See our full quickstart for how to get started with our API in 1 minute.
```python
from together import Together

client = Together()

completion = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo",
    messages=[
        {"role": "user", "content": "What are the top 3 things to do in New York?"},
    ],
)
```
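Since serverless usage is billed per token, a quick back-of-envelope estimate only needs your token counts and a model's per-million-token rate. Here is a minimal sketch; the rate used below is hypothetical, so check the pricing page for each model's actual rate:

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  price_per_million: float) -> float:
    """Estimate serverless cost: total tokens times the per-token rate.

    price_per_million is the model's USD price per 1M tokens
    (hypothetical here -- see the pricing page for real rates).
    """
    return (input_tokens + output_tokens) / 1_000_000 * price_per_million

# Example: 500 prompt tokens + 300 completion tokens at a
# hypothetical $0.20 per million tokens.
cost = estimate_cost(500, 300, 0.20)
print(f"${cost:.6f}")  # prints "$0.000160"
```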
Together hosts many popular models via our serverless endpoints. You can also use our dedicated GPU infrastructure to configure and host your own model.

When using one of our hosted serverless models, you're charged based on the number of tokens you use in your queries. For dedicated models you configure and run yourself, you're charged per minute for as long as your endpoint is running. You can start or stop your endpoint at any time from our online playground.

To learn more about pricing for both our serverless and dedicated endpoints, visit our pricing page.

Check out these pages to see our current list of available models: