Together’s API is compatible with OpenAI’s client libraries, making it easy to try out our open-source models in your existing applications.
Together’s API endpoints for chat, language and code, images, and embeddings are fully compatible with OpenAI’s API. If you have an application that uses one of OpenAI’s client libraries, you can easily configure it to point to Together’s API servers and start running your existing applications on our open-source models.
To start using Together with OpenAI’s client libraries, pass in your Together API key to the api_key option, and change the base_url to https://api.together.xyz/v1:
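In Python, that configuration looks like the following minimal sketch (it mirrors the setup used in the full examples below; the `TOGETHER_API_KEY` environment variable name matches those examples):

```python
import os
import openai

# Point the OpenAI client at Together's API servers.
client = openai.OpenAI(
    api_key=os.environ.get("TOGETHER_API_KEY"),
    base_url="https://api.together.xyz/v1",
)
```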
Now that your OpenAI client is configured to point to Together, you can start using one of our open-source models for your inference queries. For example, you can query one of our chat models, like Meta Llama 3:
```python
import os
import openai

client = openai.OpenAI(
    api_key=os.environ.get("TOGETHER_API_KEY"),
    base_url="https://api.together.xyz/v1",
)

response = client.chat.completions.create(
    model="meta-llama/Llama-3-8b-chat-hf",
    messages=[
        {"role": "system", "content": "You are a travel agent. Be descriptive and helpful."},
        {"role": "user", "content": "Tell me about San Francisco"},
    ],
)

print(response.choices[0].message.content)
```
Or you can use a language model to generate a code completion:
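Here is a minimal sketch of such a request using the completions endpoint; the model name `codellama/CodeLlama-34b-Python-hf` and the `max_tokens` value are illustrative choices, not prescribed by this guide:

```python
import os
import openai

client = openai.OpenAI(
    api_key=os.environ.get("TOGETHER_API_KEY"),
    base_url="https://api.together.xyz/v1",
)

# Use the completions endpoint for raw (non-chat) code completion.
response = client.completions.create(
    model="codellama/CodeLlama-34b-Python-hf",  # illustrative code model choice
    prompt="def fibonacci(n):",
    max_tokens=128,
)

print(response.choices[0].text)
```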
You can also use OpenAI’s streaming capabilities to stream back your response:
```python
import os
import openai

system_content = "You are a travel agent. Be descriptive and helpful."
user_content = "Tell me about San Francisco"

client = openai.OpenAI(
    api_key=os.environ.get("TOGETHER_API_KEY"),
    base_url="https://api.together.xyz/v1",
)

stream = client.chat.completions.create(
    model="mistralai/Mixtral-8x7B-Instruct-v0.1",
    messages=[
        {"role": "system", "content": system_content},
        {"role": "user", "content": user_content},
    ],
    stream=True,
)

# Print each token as it arrives.
for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="", flush=True)
```