How to implement text streaming (the same way it is done in ChatGPT)?
Use case:
I am requesting chat completions from an LLM (different ones, not only ChatGPT);
I want to stream the response from the server to the client as it arrives, word by word.
What is the easiest way to do it in Wasp?
Thanks!
10 Replies
Hey, the easiest way to do this is by using custom APIs.
Here's an example app https://github.com/wasp-lang/wasp/tree/main/examples/streaming
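For anyone landing here later, here is a rough sketch of the server side. Wasp custom APIs expose an Express-style `(req, res)` handler, so the core idea is chunked transfer: write each token as soon as it arrives instead of buffering the whole completion. The handler name `streamText` and the `fakeCompletion` word source are made up for illustration; see the linked example for the real code.

```typescript
// Stand-in for a real LLM client that yields one word at a time.
// In practice you would iterate over the provider's streaming response here.
async function* fakeCompletion(prompt: string): AsyncGenerator<string> {
  for (const word of `Echoing: ${prompt}`.split(" ")) {
    yield word + " ";
  }
}

// Express-style handler: flush each chunk to the client as soon as it arrives.
async function streamText(
  req: { query: { prompt?: string } },
  res: {
    setHeader(name: string, value: string): void;
    write(chunk: string): void;
    end(): void;
  }
) {
  res.setHeader("Content-Type", "text/plain; charset=utf-8");
  res.setHeader("Transfer-Encoding", "chunked");
  for await (const token of fakeCompletion(req.query.prompt ?? "")) {
    res.write(token); // the client receives this immediately, word by word
  }
  res.end();
}
```

On the client you can read the stream incrementally with `fetch`, `response.body.getReader()`, and a `TextDecoder`, appending each decoded chunk to the UI as it comes in.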
Thanks 👍
You made the example after my question, that is very fast!
Hahah let's say I know Wasp well 😂
@martinsos @matijash looks cool
This is pretty cool @Artomatica !!! How was your experience of building it in Wasp?
I had zero experience with JS development before. The majority of the project is made using GPT-4 and asking questions in the Discord at night.
I like Wasp; it is flexible, easy to use and deploy, and very fast for prototyping. The examples help a lot. I am not sure it is viable for big projects (due to the structure: too much gets added to queries, actions, and the main.wasp file, and it becomes unreadable), but for a startup and smaller software projects, it is perfect.
I especially like the integration with fly.io, the built-in DB studio editor, the containerization, and how easy it is to deploy. Most questions I have with Wasp are architecture-related: integration of jobs, external APIs, streaming.
TypeScript and `useEffect` are a nightmare after using C all my life. npm and Node feel bloated as hell.
Tailwind is very good, because of how easily GPT-4 can create an element for you, and it is very easy to add UI from examples.
Generally, Wasp solves all my needs for now (except testing and CI, but that is a much broader topic).
Woah, thanks for the detailed analysis! Sounds great -> yeah, we are missing some of those "production" features, like multiple Wasp files, better structure, better testing support -> that is what we are working towards for 1.0 -> so those will be coming relatively soon! I am glad you found prototyping fast. If you have any additional ideas, feel free to share them; we have the #💡feature-suggestions channel for any feedback!