Inconsistency in Displaying the Last Assistant Message When Listing Messages with Typebot via HTTP

Hey community, I've encountered a challenge while using the OpenAI Assistant via HTTP requests: I list the messages of a specific thread and extract the most recent ones, with the goal of displaying the Assistant's latest response to the user. However, when I list the messages, the last message shown is the user's last message, not the Assistant's response. Yet when I perform a test call passing the conversation thread, the Assistant's response is completed and available, which indicates the response was correctly generated by the Assistant but is not being displayed as expected in the message listing. Could you help me with this issue?
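The extraction step I'm attempting can be sketched like this (a minimal Python sketch against the shape of the Assistants "list messages" response; the sample payload and values are made up for illustration):

```python
# Sketch: pick the newest assistant message from a "list messages" response.
# The Assistants API returns messages newest-first by default, so the first
# match while scanning "data" is the latest assistant reply.
# The sample payload below is illustrative, not a real API response.

def latest_assistant_text(payload: dict):
    """Return the text of the most recent assistant message, or None."""
    for message in payload.get("data", []):  # newest first
        if message.get("role") == "assistant":
            for part in message.get("content", []):
                if part.get("type") == "text":
                    return part["text"]["value"]
    return None

sample = {
    "object": "list",
    "data": [
        {"role": "user",
         "content": [{"type": "text", "text": {"value": "And my order status?"}}]},
        {"role": "assistant",
         "content": [{"type": "text", "text": {"value": "Your order shipped today."}}]},
        {"role": "user",
         "content": [{"type": "text", "text": {"value": "Hi"}}]},
    ],
}

print(latest_assistant_text(sample))  # prints: Your order shipped today.
```

One thing worth checking with this approach: if the run is still in progress at the moment the messages are listed, the assistant's reply does not exist in the thread yet, so the newest message really is the user's — which would match the symptom described above.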
13 Replies
Baptiste — 12mo ago
I really suggest that you use the OpenAI block with the Ask Assistant action. You will struggle otherwise.
Nilton Rocha (OP) — 12mo ago
@Baptiste Thank you for suggesting using the OpenAI block with the Ask Assistant action. However, this approach does not resolve my specific situation due to the current limitation of not allowing dynamic variables in the Assistant ID and Thread ID fields. At the beginning of the flow, I use the Ask Assistant, but for a specific point, I need these fields to accept dynamic values defined during the execution of the flow. This flexibility is essential to adapt the assistant's behavior according to different contexts and real-time user inputs.
Baptiste — 12mo ago
Only the assistant ID is not dynamic. The Thread ID is always a variable. Indeed, we could make the assistant ID dynamic.
Nilton Rocha (OP) — 12mo ago
I appreciate knowing that the Thread ID is already a dynamic variable. Making the Assistant ID dynamic would significantly enhance the flexibility and adaptability of my workflows. Being able to dynamically assign the Assistant ID during the flow execution would allow the assistant to respond more appropriately to different contexts and user inputs in real-time.
Baptiste — 12mo ago
I should work on that fairly soon. Will deploy that in a couple of hours 🙂
Nilton Rocha (OP) — 12mo ago
I am eager to test this improvement; it will be wonderful. Baptiste, I don't want to take advantage of your goodwill, but I need help understanding an issue. When the flow contains an 'HTTP Request' using the GET method, everything seems to work correctly during tests. However, when the lead uses it, the JSON appears incomplete and part of the response is missing, even though the response is correct when we check it in Postman.
Baptiste — 12mo ago
Indeed, the saved body is automatically truncated to avoid storing big objects
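As an illustration of the truncation effect described here (the 2048-byte limit below is hypothetical, purely for illustration; Typebot's actual cutoff may differ):

```python
import json

# Hypothetical limit, for illustration only; the real cutoff may differ.
MAX_SAVED_BODY_BYTES = 2048

def save_body(body: dict) -> str:
    """Serialize a response body, truncating it like a storage layer might."""
    raw = json.dumps(body)
    return raw[:MAX_SAVED_BODY_BYTES]

# A long AI answer blows past the limit, so the stored copy loses its tail.
long_answer = {"role": "assistant", "text": "x" * 5000}
saved = save_body(long_answer)

print(len(saved))            # 2048 — the end of the answer is gone
print(saved.endswith('"}'))  # False — the saved JSON is no longer even valid
```

This is why the test panel (which shows the live response) and the saved body seen at runtime can disagree.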
Nilton Rocha (OP) — 12mo ago
I understand, but this creates a significant problem: the content being truncated is exactly what we need from the GET response, namely the AI's answer. Would it be possible to show this part again? If saving space is the concern, all of the currently displayed content could be removed instead, since another endpoint already provides it.
Baptiste — 12mo ago
The response on your screenshot is fully displayed here though
Nilton Rocha (OP) — 12mo ago
It is only displayed when we test with the node, but for the user, only their own question is shown.
Baptiste — 12mo ago
Look at assistant_id: it is null for some reason. That's why it fails. You should wait for the Ask Assistant improvement (deploying that tomorrow).
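A quick way to catch this kind of failure earlier: guard the ID before the request goes out. A hypothetical sketch (the function name and error message are made up):

```python
def require_assistant_id(assistant_id):
    """Fail fast with a clear message instead of sending a null ID to the API."""
    # Catch both a genuinely missing value and the string "null" that a
    # templating step can produce when a variable was never set.
    if not assistant_id or assistant_id == "null":
        raise ValueError(
            "assistant_id is missing; check that the variable is set "
            "before the Ask Assistant / HTTP Request step runs"
        )
    return assistant_id

# A present ID passes through unchanged...
print(require_assistant_id("asst_abc123"))
# ...while None or "null" raises before any API call is made.
```

Surfacing the missing variable at the flow step that forgot to set it is much easier to debug than an opaque API error downstream.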
Nilton Rocha (OP) — 12mo ago
Thank you in advance for your attention. 🤝 Good morning, Master! @Baptiste I performed about 10 updates throughout the day without seeing the new variable. Only in the late afternoon did I realize the update was available only in the .IO version, while I am using the version on my VPS. Is there any timeline for updating the image with this implementation on the VPS?
