Nuxt · 4w ago
Jonas

How do I get the actual stream with useChat from the Vercel AI SDK?

I want to use https://github.com/antfu/shiki-stream, but it requires the raw stream. I tried creating a stream from the reactive text, but that didn't work out well because I couldn't get it working properly. I know this is a bit off-topic, sorry.
2 Replies
kapa.ai · 4w ago
Hi @Jonas. I'm kapa.ai, your friendly support bot. I will try to answer your question by searching through the knowledge base. :wave:

----------------------

Based on the knowledge sources provided, I can help you understand how to work with streams in a Nuxt application, particularly when using Vercel's AI SDK.

When using the useChat hook from Vercel's AI SDK in a Nuxt application, there are some reactivity issues to be aware of. According to the knowledge sources, the useChat composable had a design problem that did not allow reactive values in the body object.

To handle streaming data properly in your Nuxt application, you have a few options:

1. For the useChat hook specifically, use a computed property for the body:
const body = computed(() => ({ agent: selectedAgent.value }))

const { messages, input, handleSubmit } = useChat({
  body,
  // other options
})
This approach ensures that the body object stays reactive to changes in your component state (Stack Overflow Post > Why is the body object not reactive when sending POST API request?).

2. For handling raw streams in Nuxt, especially with POST requests, you can use the following approach:
// Make a POST request to the SSE endpoint
const response = await $fetch<ReadableStream>('/your-endpoint', {
  method: 'POST',
  body: {
    // your request body
  },
  responseType: 'stream',
})

// Decode the byte stream to text and get a reader
const reader = response.pipeThrough(new TextDecoderStream()).getReader()

// Read each chunk of data as it arrives
while (true) {
  const { value, done } = await reader.read()

  if (done)
    break

  // Process your stream data here
  console.log('Received:', value)
}
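Building on the read loop above: stream protocols such as SSE and NDJSON frame their messages on newline boundaries, but the decoder can emit chunks that cut a line in half. A common next step is to re-chunk the decoded text per line before parsing. Below is a generic sketch of such a TransformStream; it is a plain utility, not an API from the AI SDK or shiki-stream.

```typescript
// Sketch: re-chunk decoded text on newline boundaries, buffering any
// trailing partial line until the next chunk (or flush) completes it.
export function lineSplitStream(): TransformStream<string, string> {
  let buffer = ''
  return new TransformStream<string, string>({
    transform(chunk, controller) {
      buffer += chunk
      const lines = buffer.split('\n')
      buffer = lines.pop() ?? '' // keep the trailing partial line
      for (const line of lines) controller.enqueue(line)
    },
    flush(controller) {
      // Emit whatever is left when the upstream closes
      if (buffer) controller.enqueue(buffer)
    },
  })
}
```

You would slot it into the pipeline as `response.pipeThrough(new TextDecoderStream()).pipeThrough(lineSplitStream())`, then parse each emitted line according to the stream protocol your endpoint uses.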
This approach allows you to consume Server-Sent Events (SSE) via a POST request (Docs > Getting-started > Data Fetching > Recipes).

If you need to process base64 audio streams or other binary data, you might need additional utilities such as the base64ToBlob function shown in one of the examples.

For integrating with shiki-stream specifically, you would need to adapt these approaches to provide the raw stream data the library requires. However, the knowledge sources don't contain specific information about integrating with shiki-stream.

----------------------

Note: you can ask me a follow-up question by @ mentioning me again :speech_balloon:

----------------------
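One way to adapt the reactive useChat output for a stream consumer like shiki-stream is to rebuild a stream from the growing message text: each time the reactive text changes, enqueue only the new suffix. Below is a minimal framework-agnostic sketch under that assumption; `push` is a hypothetical helper you would call from a Vue watcher, and all names are illustrative rather than part of any library's API.

```typescript
// Sketch: bridge incremental text snapshots (e.g. the reactive message
// text from useChat) into a ReadableStream of text deltas.
export function createTextDeltaStream() {
  let previous = ''
  let controller!: ReadableStreamDefaultController<string>
  const stream = new ReadableStream<string>({
    start(c) { controller = c }, // runs synchronously during construction
  })
  return {
    stream,
    // Call with the full text each time it changes; only the new suffix is enqueued.
    push(fullText: string) {
      if (fullText.startsWith(previous)) {
        const delta = fullText.slice(previous.length)
        if (delta) controller.enqueue(delta)
      } else {
        // Text was replaced rather than appended; enqueue it whole.
        controller.enqueue(fullText)
      }
      previous = fullText
    },
    close() { controller.close() },
  }
}
```

In a component you might wire it up with something like `watch(() => messages.value.at(-1)?.content, text => push(text ?? ''))`, then hand `stream` to the consumer that expects a raw stream.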
kapa.ai · 4w ago
<@507643246902640641> Kapa.ai is still learning and improving, please let me know how I did by reacting below.
