RunPod · 9mo ago
avif

Having problems working with `Llama-2-7b-chat-hf`

I have the following request going to the `runsync` endpoint:
{
  "input": {
    "prompt": "the context. Give me all the places and year numbers listed in the text above"
  }
}
(Full request here: https://pastebin.com/FLqjRzRG.) This is the result:
{
  "delayTime": 915,
  "output": {
    "input_tokens": 794,
    "output_tokens": 16,
    "text": [
      " Sure! Here are all the places and year numbers listed in the text:\n"
    ]
  },
  "status": "COMPLETED"
}
This is a very bad answer: `" Sure! Here are all the places and year numbers listed in the text:\n"`. What am I missing? Thanks
5 Replies
JJonahJ · 9mo ago
It’s because your output is capped at 16 tokens (`"output_tokens": 16` in the response). You should send a bunch of sampling parameters too, not just the prompt.
JJonahJ · 9mo ago
This is all the stuff I send alongside my prompt.
(image attachment)
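Since the screenshot above isn't visible in this transcript, here is a minimal sketch of what a request body with sampling parameters might look like. This assumes a vLLM-style RunPod worker that reads a `sampling_params` object; the field names (`max_tokens`, `temperature`, `top_p`) are assumptions based on that worker family, so check the schema of the worker image you actually deployed:

```python
# Hypothetical runsync request body for a RunPod serverless LLM endpoint.
# Field names under "sampling_params" are assumptions (vLLM-style worker);
# verify them against your worker's documentation.
import json

payload = {
    "input": {
        "prompt": "the context. Give me all the places and year numbers listed in the text above",
        "sampling_params": {
            "max_tokens": 512,    # raise the effective output cap well above 16
            "temperature": 0.7,   # sampling randomness
            "top_p": 0.9,         # nucleus sampling cutoff
        },
    }
}

# Serialize to the JSON string you would POST to the endpoint.
print(json.dumps(payload, indent=2))
```

The key point from the reply above is simply that something like `max_tokens` must be set explicitly; otherwise the worker's default (apparently 16 here) truncates the answer after the first line.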
avif (OP) · 9mo ago
Thank you. Can I see your complete JSON? Thanks again
JJonahJ · 9mo ago
Here, you should be able to figure it out from this:
(image attachment)
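The shared screenshot isn't reproduced here either, so as a stand-in, here is a hedged sketch of assembling the complete `runsync` call. The endpoint ID and API key are placeholders, and the URL follows RunPod's documented `https://api.runpod.ai/v2/<endpoint_id>/runsync` pattern; the sampling parameter names remain assumptions as noted above:

```python
# Sketch of a full runsync request; ENDPOINT_ID and API_KEY are
# placeholders -- substitute your own values from the RunPod console.
import json

ENDPOINT_ID = "your-endpoint-id"   # placeholder
API_KEY = "your-api-key"           # placeholder

url = f"https://api.runpod.ai/v2/{ENDPOINT_ID}/runsync"
headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}
body = {
    "input": {
        "prompt": "List every place and year mentioned in the context above.",
        "sampling_params": {"max_tokens": 512, "temperature": 0.7},
    }
}

# To actually send it (needs the `requests` package and a live endpoint):
#   requests.post(url, headers=headers, json=body, timeout=120)
print(url)
print(json.dumps(body))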
avif (OP) · 9mo ago
Appreciated