Steve - We have some performance issues with big blobs of JSON
We have some performance issues (in browser) with big blobs of JSON using transform() and lazy(). Any general tips and tricks to make it more performant?
big arrays?
Solution
I don't think there is much that can be done there: if you are making big transforms to large blobs of JSON data, Zod is an expensive way to do it (since we're doing checking and transforming).
We also do a lot of deep copying, which isn't ideal for performance-sensitive transforms like this.
Clear.
Loads of arrays, and objects inside arrays that contain big arrays, recursively
yep. that's very high overhead, so if you need to do this in a hot loop, I'd maybe build some custom machinery to do it rather than using Zod.
you can probably use Zod in your custom machinery to do some checking at the boundaries (like maybe test the first element in a large array, and then assume the rest conforms?)
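The boundary-check idea above can be sketched in plain TypeScript. This is a hypothetical helper, not a Zod API: it validates only the first element of a large array and assumes the rest conforms, trading safety for speed. In real code you could swap the hand-written `isRow` guard for `schema.safeParse(data[0]).success` with an actual Zod schema.

```typescript
// Hypothetical element shape for illustration.
interface Row {
  id: number;
  name: string;
}

// Hand-written type guard standing in for a Zod schema check.
function isRow(value: unknown): value is Row {
  return (
    typeof value === "object" &&
    value !== null &&
    typeof (value as Row).id === "number" &&
    typeof (value as Row).name === "string"
  );
}

// Validate only the first element of a large array, then assume the
// rest conforms. No per-element check, no deep copy.
function assumeRows(data: unknown): Row[] {
  if (!Array.isArray(data)) throw new Error("expected an array");
  if (data.length > 0 && !isRow(data[0])) {
    throw new Error("first element does not match the expected shape");
  }
  return data as Row[];
}
```

Usage would look like `const rows = assumeRows(JSON.parse(body));` — the obvious caveat being that a malformed element past index 0 slips through unchecked.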
We built a whole converter in those transforms ;p
to convert between DTOs
@Scott Trinh This is a bit of a huge issue due to deadlines. Time is really tight and people are stressed.
Thanks to your response, which we used as the foundation of our decision, we made the call not to use Zod for these purposes and to switch to an API implementation in Elixir.
Thanks again for your help!
No problem! Yeah Zod is great as a general purpose solution but if you have big perf needs, especially if you control the data, no general purpose parser is going to beat a purpose built one.
I have to get my parser finished like yesterday 😅
Luckily I still have a very, very powerful tool for this in my toolbox
Ah man, best of luck on the deadline! 🏃💨
Thanks, we are now doing the parsing using a functional programming language called Elixir
If you ever have time, read into it. I could tell you all day how we beat Haskell-based projects such as Pandoc (https://pandoc.org) at things like OOXML (.docx) conversion to JSON / HTML 😛 especially in performance, resource usage, scalability, and development time.
Yeah, I'm familiar with Elixir. Also its cousin Gleam is something I want to play around with more.
I'm definitely following all development on Gleam, yeah interesting stuff
To be clear, I think you could write a parser/transformer in TS that would do a fine job, but BEAM stuff will definitely be very fast.
We tried before
We switched to Zod because our types are very recursive by nature
And Zod can do that well, in theory, except for my memory going to > 10 GiB
But the recursiveness, yeah, that stays an issue in Zod, because at runtime you cannot reference what is not yet defined, so you have to keep using z.lazy()
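The forward-reference problem behind z.lazy() can be shown without Zod at all. This is a plain-TypeScript sketch of the same trick (the `lazy`, `object`, and `array` helpers here are illustrative stand-ins, not Zod's API): validator objects are built eagerly, so a recursive type needs a thunk to defer its self-reference until call time.

```typescript
type Validator = (input: unknown) => boolean;

// Defer evaluation until the validator is first called, by which time
// the recursive binding below is fully initialized -- the same trick
// z.lazy() uses.
function lazy(thunk: () => Validator): Validator {
  return (input) => thunk()(input);
}

// Eagerly-built object validator, Zod-style.
function object(shape: Record<string, Validator>): Validator {
  return (input) => {
    if (typeof input !== "object" || input === null) return false;
    return Object.entries(shape).every(([key, check]) =>
      check((input as Record<string, unknown>)[key]),
    );
  };
}

function array(element: Validator): Validator {
  return (input) => Array.isArray(input) && input.every((item) => element(item));
}

const isNumber: Validator = (input) => typeof input === "number";

// `node` is referenced inside its own definition -- without lazy(),
// this would read the binding before it exists. Exactly the situation
// that forces z.lazy() in Zod.
const node: Validator = object({
  value: isNumber,
  children: array(lazy(() => node)),
});
```

Writing `children: array(node)` directly would fail, because `node` is evaluated while the right-hand side is still being constructed; the thunk pushes that lookup to validation time.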
Most languages seem to be notoriously bad at recursive stuff so I don't really blame Zod for that
Given the environment, it probably is the best schema-thing in the world of TypeScript at this current moment.
But z.transform()'s.. never again! 😛
Well, even just straight parsing copies every property to a new object, so if you have very large objects, you're going to waste a lot of memory with our strategy.
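The memory point above can be made concrete with a toy comparison (this is an illustration of the trade-off, not Zod's actual code): a copying parse allocates a fresh object per element, while a validate-only pass returns the original references untouched.

```typescript
interface Item {
  id: number;
}

// Copying strategy: validate and rebuild each element (Zod-style).
// One new object allocated per element.
function parseCopy(data: Item[]): Item[] {
  return data.map((item) => ({ id: item.id }));
}

// In-place strategy: validate only, return the same references.
// No allocation, but the caller keeps a handle on the raw input.
function validateInPlace(data: Item[]): Item[] {
  for (const item of data) {
    if (typeof item.id !== "number") throw new Error("bad item");
  }
  return data;
}

const input: Item[] = [{ id: 1 }, { id: 2 }];
const copied = parseCopy(input);
const checked = validateInPlace(input);
```

For a huge recursive payload, the copying strategy roughly doubles the live object graph during parsing, which matches the multi-GiB memory blow-up described above.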
ah yeah
Explains a lot
Are you also doing stuff on the BEAM professionally for clients?