Description
When hot reloading while futures are running, a panic like the one at the bottom of this issue is often printed.
A couple of ways I can imagine fixing this are below. Very curious for your thoughts, and whether I might be missing anything!
I'm currently running with the latter (simpler) PR – #4905 – in dev.
Make subsecond defer handler execution on wasm (link to PR)
Keep the immediate behaviour for native targets, but when commit_patch runs on wasm schedule the handlers onto a fresh microtask instead of calling them inline. That way the current executor poll can finish (dropping the RefMut) before any wakeups fire.
Adjust from:
dioxus/packages/subsecond/subsecond/src/lib.rs
Lines 308 to 321 in 06f39cd
```rust
unsafe fn commit_patch(table: JumpTable) {
    APP_JUMP_TABLE.store(
        Box::into_raw(Box::new(table)),
        std::sync::atomic::Ordering::Relaxed,
    );
    HOTRELOAD_HANDLERS
        .lock()
        .unwrap()
        .clone()
        .iter()
        .for_each(|handler| {
            handler();
        });
}
```
to:
```rust
unsafe fn commit_patch(table: JumpTable) {
    APP_JUMP_TABLE.store(
        Box::into_raw(Box::new(table)),
        std::sync::atomic::Ordering::Relaxed,
    );
    let handlers = HOTRELOAD_HANDLERS.lock().unwrap().clone();
    #[cfg(target_arch = "wasm32")]
    {
        for handler in handlers {
            // run after the current Task::run has unwound
            wasm_bindgen_futures::spawn_local(async move {
                handler();
            });
        }
    }
    #[cfg(not(target_arch = "wasm32"))]
    {
        for handler in handlers {
            handler();
        }
    }
}
```

Because spawn_local enqueues a brand-new executor task, it doesn't wake the one that's still actively polling the patch future, so the RefCell isn't re-borrowed.
Make Dioxus schedule its handler asynchronously (link to PR)
Wrap the channel send inside another spawn_local so that even if commit_patch stays synchronous the wake happens later, changing from:
dioxus/packages/core/src/virtual_dom.rs
Lines 770 to 776 in 06f39cd
```rust
#[cfg(debug_assertions)]
fn register_subsecond_handler(&self) {
    let sender = self.runtime().sender.clone();
    subsecond::register_handler(std::sync::Arc::new(move || {
        _ = sender.unbounded_send(SchedulerMsg::AllDirty);
    }));
}
```
to:
#[cfg(debug_assertions)]
fn register_subsecond_handler(&self) {
let sender = self.runtime().sender.clone();
subsecond::register_handler(std::sync::Arc::new(move || {
let sender = sender.clone();
wasm_bindgen_futures::spawn_local(async move {
let _ = sender.unbounded_send(SchedulerMsg::AllDirty);
});
}));
}This has the caveat that it'd not work for other handlers 'out of the box', but is more contained.
The panic
```text
13:40:15 [web] panicked at /Users/amar/.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/wasm-bindgen-futures-0.4.54/src/task/singlethread
RefCell already borrowed
Stack:
Error
    at https://domain/wasm/render.js:10325:21
    at logError (https://domain/wasm/render.js:14:18)
    at imports.wbg.__wbg_new_8a6f238a6ece86ea (https://domain/wasm/render.js:10324:66)
    at render.wasm.__wbg_new_8a6f238a6ece86ea externref shim (https://domain/wasm/render_bg.wasm:wasm-function[76599]:0x14f5c72)
    at render.wasm._ZN24console_error_panic_hook4hook17h10f7c3e6803ac586E (https://domain/wasm/render_bg.wasm:wasm-function[6095]:0x78a4dc)
    at render.wasm._ZN4core3ops8function2Fn4call17h59128e3e78c15912E.llvm.8575340594148377989 (https://domain/wasm/render_bg.wasm:wasm-function[96367]:0x1540d03)
    at render.wasm._ZN3std9panicking15panic_with_hook17h96a36e151620b995E (https://domain/wasm/render_bg.wasm:wasm-function[27212]:0x10b03b2)
    at render.wasm._ZN3std9panicking13panic_handler28_$u7b$$u7b$closure$u7d$$u7d$17he6e0314da46f86d3E (https://domain/wasm/render_bg.wasm:wasm-function[33231]:0x11cbeb9)
    at render.wasm._ZN3std3sys9backtrace26__rust_end_short_backtrace17hb34feac54bc1249aE (https://domain/wasm/render_bg.wasm:wasm-function[93125]:0x1538f87)
    at render.wasm._RNvCs50wjZh92VUG_7___rustc17rust_begin_unwind (https://domain/wasm/render_bg.wasm:wasm-function[63548]:0x149598d
```