The following code shows how to automatically format a prompt at inference time to match the expected format for a given model. Note that not all models include a prompt_format in their config; validate that a prompt format is present before using it in your code.
import together

together.api_key = "xxxxx"

model = "garage-bAInd/Platypus2-70B-instruct"

# Look up the model's stop words and prompt format from its config.
config = together.Models.info(model)["config"]
stop_words = list(config["stop"])  # e.g. ['</s>', '###']
prompt_format = str(config["prompt_format"])
# e.g. '### Instruction:\n{prompt}\n### Response:\n'

prompt = "hello"
formatted_prompt = prompt_format.format(prompt=prompt)

for token in together.Complete.create_streaming(
    prompt=formatted_prompt, model=model, stop=stop_words
):
    print(token, end="", flush=True)
print("\n")
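Because a prompt_format may be missing from a model's config, one defensive option is a small wrapper that falls back to the raw prompt. This is a minimal sketch; the helper name and fallback behavior are assumptions for illustration, not part of the Together SDK.

```python
def format_for_model(config: dict, prompt: str) -> str:
    """Apply the model's prompt_format if present (hypothetical helper).

    Falls back to the unmodified prompt when the config contains no
    prompt_format entry.
    """
    prompt_format = config.get("prompt_format")
    if not prompt_format:
        return prompt
    return prompt_format.format(prompt=prompt)


# A config shaped like the one shown above.
config = {
    "stop": ["</s>", "###"],
    "prompt_format": "### Instruction:\n{prompt}\n### Response:\n",
}
print(format_for_model(config, "hello"))

# With no prompt_format, the prompt passes through unchanged.
print(format_for_model({"stop": ["</s>"]}, "hello"))
```

The same dictionary returned by together.Models.info(model)["config"] can be passed in directly, so the streaming call above works whether or not the model defines a prompt format.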
Try RedPajama-INCITE-Chat-3B to correct sentences into standard English. For example, begin the prompt by asking the model to correct the grammar of a sentence:
Correct this to standard English:
I no Sandwich want.

Correct this to standard English:
If I’m stressed out about something, I tend to have problem to fall asleep.

Sample response:

I don’t want a sandwich.

Correct this to standard English:
If I’m stressed out about something, I tend to have a hard time falling asleep.
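A prompt like the one above can be assembled programmatically from a list of sentences, which makes it easy to add more few-shot examples. The helper below is a generic sketch (its name and shape are assumptions, not part of any SDK).

```python
def build_grammar_prompt(sentences):
    """Prefix each sentence with the correction instruction (hypothetical helper)."""
    instruction = "Correct this to standard English:"
    return "\n".join(f"{instruction}\n{s}" for s in sentences)


print(build_grammar_prompt(["I no Sandwich want."]))
```

The resulting string is what gets sent as the "prompt" field in the API request below.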
Send the prompt to the API with any appropriate parameters. The code below shows an example in Python using the requests package.
import requests

url = "https://api.together.xyz/inference"

payload = {
    "model": "togethercomputer/RedPajama-INCITE-7B-Chat",
    "prompt": "Correct this to standard English:\nI no Sandwich want.",
    "max_tokens": 256,
    "stop": ".",
    "temperature": 0.1,
    "top_p": 0.7,
    "top_k": 50,
    "repetition_penalty": 1,
}
headers = {
    "accept": "application/json",
    "content-type": "application/json",
    "Authorization": "Bearer <YOUR_API_KEY>",
    "User-Agent": "<YOUR_APP_NAME>",
}

response = requests.post(url, json=payload, headers=headers)
print(response.text)
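The endpoint returns JSON, so the generated text can be pulled out of the parsed response rather than printed raw. The output.choices[0].text layout below is an assumption about the inference endpoint's response shape; inspect response.text for your own requests before relying on it.

```python
def extract_text(result: dict) -> str:
    # Assumed response shape for the inference endpoint:
    #   {"output": {"choices": [{"text": "..."}]}}
    # Verify against a raw response before depending on this.
    return result["output"]["choices"][0]["text"]


# A response body shaped like the assumption above.
sample = {"output": {"choices": [{"text": "I don't want a sandwich."}]}}
print(extract_text(sample))
```

With a live request, the same call would be extract_text(response.json()).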
Prompt engineering is a relatively new discipline focused on developing and optimizing prompts to use language models (LMs) efficiently across a wide variety of applications and research topics. The Prompt Engineering Guide is a great introduction to the subject.