opened a PR to allow native web_sys requests, without having to ditch the event macro: https://github.com/cloudflare/workers-rs/pull/525
with that, for users who want to use native web_sys, I'd say workers-rs should not provide any helpers; it's totally up to the user.
for example, getting a response as Rust data via JSON/serde is actually pretty opinionated, i.e. is it better to:
1) call .text() and then serde_json
2) call .json() and then serde_wasm_bindgen
Probably not a significant difference, but imho it makes sense to let users choose that (esp if they're opting out of http/axum); both options are sketched below
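roughly what those two options look like against a web_sys::Response (just a sketch; assumes the web_sys "Response" feature is enabled and serde_json / serde_wasm_bindgen are in the dependency tree):
```rust
use serde::de::DeserializeOwned;
use wasm_bindgen::JsValue;
use wasm_bindgen_futures::JsFuture;

// Option 1: pull the body out as a String, then parse with serde_json.
async fn via_text<T: DeserializeOwned>(resp: &web_sys::Response) -> Result<T, JsValue> {
    let text = JsFuture::from(resp.text()?).await?;
    let text = text.as_string().unwrap_or_default();
    serde_json::from_str(&text).map_err(|e| JsValue::from_str(&e.to_string()))
}

// Option 2: let the runtime parse JSON via .json(), then convert the JsValue
// into a Rust value with serde_wasm_bindgen.
async fn via_json<T: DeserializeOwned>(resp: &web_sys::Response) -> Result<T, JsValue> {
    let value = JsFuture::from(resp.json()?).await?;
    serde_wasm_bindgen::from_value(value).map_err(|e| JsValue::from_str(&e.to_string()))
}
```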
Is there a practical benefit to doing this yourself?
I find it's much simpler to work with this than the http type
for example - it's not clear what the body generic in http should be... String? Stream? Unit? Vec<u8>?
I can imagine use-cases where all of those make sense, but it's different per-route.
a framework like axum decides these tradeoffs for you, and it's totally reasonable (i.e. axum::body::Body) - but when not using a framework, I think the native web_sys types are much simpler to think about
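to illustrate the ambiguity with a purely hypothetical pair of routes (names and URIs made up): the http crate leaves the body as a generic parameter, so nothing forces one choice per application:
```rust
use http::Request;

// One route naturally wants a text body...
fn text_route() -> Request<String> {
    Request::builder()
        .uri("https://example.com/text")
        .body("hello".to_string())
        .unwrap()
}

// ...while another wants raw bytes; with a bare http::Request the user has
// to pick (or unify) the body type themselves.
fn bytes_route() -> Request<Vec<u8>> {
    Request::builder()
        .uri("https://example.com/bytes")
        .body(vec![0u8; 16])
        .unwrap()
}
```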
also - though this isn't my main motivation - the native web_sys types really cover absolutely everything I need. Just sprinkle in a very tiny amount of extra local helpers and done. Paying a bloat/performance cost, even if small, for something that's already available and implemented in the runtime, just doesn't make sense to me personally (but I do understand why others might want to pay that very small cost to use the http crate, e.g. for re-use with libraries that expect that type, even if not full frameworks. I don't have that need personally)
That's fair
work-in-progress, but here are all the extensions I need to add to the native types so far, it's really very little. Stuff like cookies and headers already use Strings in the native API so they don't even need help:
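as a purely illustrative example (not the actual work-in-progress helpers referenced above), headers already come back as plain Strings from the native API, so no extension is needed at all (assumes the web_sys "Headers" feature is enabled):
```rust
use wasm_bindgen::JsValue;

// Headers are already String-based in web_sys; this is just a direct call,
// not a helper that workers-rs would need to provide.
fn content_type(headers: &web_sys::Headers) -> Result<Option<String>, JsValue> {
    headers.get("content-type")
}
```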
The "futures executor" part of things pretty much is bypassing JS
In more detail, it's just a single Promise (or calling queueMicrotask() if available) to drive all the Futures forward: https://github.com/rustwasm/wasm-bindgen/blob/9347af3b518e2a77139f01b56cc8c785cf184059/crates/futures/src/queue.rs#L66, but the actual queue of tasks is held on the Rust side: https://github.com/rustwasm/wasm-bindgen/blob/9347af3b518e2a77139f01b56cc8c785cf184059/crates/futures/src/queue.rs#L22, and the only thing crossing the FFI boundary is the single callback on the microtask tick: https://github.com/rustwasm/wasm-bindgen/blob/9347af3b518e2a77139f01b56cc8c785cf184059/crates/futures/src/queue.rs#L101
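from the application side that mechanism is invisible; you just hand futures to wasm-bindgen-futures and the Rust-side queue drives them. A minimal usage sketch:
```rust
use wasm_bindgen_futures::spawn_local;

fn kick_off() {
    // spawn_local pushes this future onto wasm-bindgen's Rust-side task
    // queue; the only thing crossing the FFI boundary is the single
    // queueMicrotask / Promise callback per tick that polls pending tasks.
    spawn_local(async {
        // async work here is driven by that Rust-side queue
    });
}
```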
For network requests themselves, I don't see how it would be possible to bypass JS or do it more efficiently than what web_sys is doing, it needs to call into the host environment with the request data... can't do the same trick of a simple callback and heavy lifting on the Rust side. Would be nice to not have to pay that somehow though, I agree! Only way to avoid that is with a different runtime completely though, afaict
I think the worker runs in a V8 isolate, and so things like fetching must cross over and be handled in JS-engine land (where the server it's running on orchestrates all the gazillions of network requests in the V8 engine)
.. but the "engine" part is interesting
I tried looking at the workerd code to get my feet wet with understanding, and I don't quite get how it all fits together (i.e. with jsg), but my current mental model is something like:
worker code <-> workerd (C++) <-> JSG glue layer (C++ <-> JS language runtime?) <-> V8 (JS engine)
So even though there's no way to be more efficient than calling into the V8 engine, maybe there's a way of bypassing the JS language layer and calling V8 directly via its C++ hooks?
And I believe that is where the wasm component spec fits in... I'm not clear on the details, but iiuc once this spec lands and is implemented, it allows wasm in JS environments (browsers, node, workerd, ...) to bypass that JS language layer and just talk directly to the engine
In terms of calling into the host from Rust-powered wasm: iiuc the architecture of web_sys was designed such that you get these benefits automatically once it lands.
TL;DR, unless I am misunderstanding something, using web_sys is the most efficient way now, and when the wasm component spec lands everywhere, all the web_sys-powered code will suddenly get a speedup
Oh but wait- I forgot that we're not actually using any web_sys calls, just the types...
But I think it's ultimately the same idea: the API will be in terms of types the JS engine expects, so the wasm application code should just use those for the fastest approach, then if/when the wasm runtime allows calling into the engine directly, it'll get the speedup for free (not actually the same as web_sys calls in browsers, but same concept)
btw none of this is affected by the PR itself; the macro just allows a more ergonomic way of setting up a fetch handler, and it always gets expanded into web_sys types and Cloudflare JS bindings. This PR just makes that more flexible by allowing any type implementing From<web_sys::Request>, instead of specifically Into<worker::Request> (which only worked by way of From<web_sys::Request>)
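very roughly the shape of that idea, with made-up names (see the linked PR for the actual macro expansion and trait bounds): the generated glue can hand the raw request to any user-chosen type that implements From<web_sys::Request>:
```rust
// Hypothetical sketch (not the PR's actual code): the generated glue converts
// the raw web_sys::Request into whatever type the user's handler asks for,
// as long as that type implements From<web_sys::Request>.
fn dispatch<R, F>(raw: web_sys::Request, handler: F)
where
    R: From<web_sys::Request>,
    F: Fn(R),
{
    handler(R::from(raw));
}
```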