Show HN: QuickBEAM – run JavaScript as supervised Erlang/OTP processes

110 points
4 days ago
by dannote

Comments


hosh

1. Is each JS runtime running in its own process with its own mailbox? (I assume from the description that each runtime instance is its own process.)

2. Can the BEAM scheduler preempt the JS processes?

3. How is memory garbage collected? Does each JS process garbage collect independently?

4. Are values within JS immutable?

5. If they are not immutable, is there a risk of memory errors? And if there is a memory error, would it crash the JS process without crashing the rest of the system?

3 days ago

dannote

1. Yes. Each runtime is a GenServer (= own process + mailbox). There's also a lighter-weight Context mode where many JS contexts share one OS thread via a ContextPool, but each context still maps 1:1 to a BEAM process.

2. No. JS runs on a dedicated OS thread, outside the BEAM scheduler. But there's an interrupt handler (JS_SetInterruptHandler) that checks a deadline on every JS opcode boundary — pass timeout: 1000 to eval and it interrupts after 1s, runtime stays usable. For contexts there's also max_reductions — QuickJS-NG counts JS operations and interrupts when the budget runs out, closest analog to BEAM reductions.

3. QuickJS-NG uses refcounting with cycle detection. Each runtime/context has its own GC — one collecting doesn't touch another. When a Runtime GenServer terminates, JS_FreeContext + JS_FreeRuntime release everything.

4. No, standard JS mutability. But the JS↔Erlang boundary copies values — no shared mutable state across that boundary.

5. QuickJS-NG enforces JS_SetMemoryLimit per-runtime (default 256 MB) and JS_SetContextMemoryLimit per-context. Exceeding the limit raises a JS exception, not a segfault. It propagates as {:error, ...} to the caller. Since each runtime is a supervised GenServer, the supervisor restarts it. There are tests for OOM in one context not crashing the pool, and one runtime crashing not affecting siblings.
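Taken together, a minimal usage sketch of what these answers describe might look like the following. The module name `QuickBEAM.Runtime`, the child-spec shape, and the `eval` call shape are assumptions on my part; only the `timeout:` option and the `{:error, ...}` return shape come from the answers above.

```elixir
# Hypothetical sketch; QuickBEAM's real module and function names may differ.
# Each runtime is a supervised GenServer with its own mailbox and QuickJS heap.
children = [
  {QuickBEAM.Runtime, name: :js_worker}
]

{:ok, _sup} = Supervisor.start_link(children, strategy: :one_for_one)

# timeout: 1000 arms the JS_SetInterruptHandler deadline: the infinite loop
# below is interrupted after ~1s, and the runtime stays usable afterwards.
case QuickBEAM.Runtime.eval(:js_worker, "while (true) {}", timeout: 1000) do
  {:ok, value} -> value
  # Timeouts and OOM (JS_SetMemoryLimit exceeded) both surface as {:error, ...};
  # if the GenServer itself crashed, the supervisor would restart it.
  {:error, reason} -> reason
end
```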

3 days ago

zkldi

All of these replies are AI slop.

3 days ago

tipsysquid

Is the response also incorrect?

2 days ago

jbpd924

Interesting!! I've been playing around with QuickJS lately and use Elixir at work.

I'm interested to hear about your sandboxing approach for running untrusted JS code. You're setting a memory/reduction limit on the process, which is definitely a good idea. What other defense-in-depth strategies are you using? Possible support for seccomp in the future?

3 days ago

dannote

Layers right now:

— Memory limits: JS_SetMemoryLimit per-runtime (256 MB default), JS_SetContextMemoryLimit per-context. Exceeding → JS exception, not a crash.

— Execution limits: interrupt handler checks a nanosecond deadline every opcode. For contexts, max_reductions caps JS operations independently of wall-clock time.

— API surface: apis: false gives bare QuickJS — no fetch, no fs, no DOM, no I/O. You control exactly which Elixir functions JS can call via the handlers map. JS cannot call arbitrary Elixir code.

— Conversion limits: max_convert_depth (32) and max_convert_nodes (10k) prevent pathological objects from blowing up during JS↔BEAM conversion.

— Process isolation: separate OS thread, separate QuickJS heap per runtime.

No seccomp — QuickJS runs in-process so seccomp would restrict the entire BEAM. The sandbox boundary is QuickJS-NG's memory-safe interpreter (no JIT, no raw pointer access from JS) plus the API surface control above.
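A sketch of how the API-surface layer above might look from the Elixir side. The `apis:`, `handlers`, and `max_reductions` names appear in this thread; the module name, call shape, and handler signature are my assumptions, not the verified API.

```elixir
# Hypothetical sketch: a locked-down context for untrusted JS.
{:ok, ctx} =
  QuickBEAM.Context.start_link(
    apis: false,                # bare QuickJS: no fetch, no fs, no DOM, no I/O
    max_reductions: 1_000_000,  # cap JS operations independent of wall-clock time
    handlers: %{
      # The only Elixir code JS can reach is what you expose here.
      "lookup" => fn key -> Map.get(%{"answer" => 42}, key) end
    }
  )

# Untrusted code can call the exposed handler, but nothing else on the BEAM side.
QuickBEAM.Context.eval(ctx, ~s|lookup("answer")|)
```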

3 days ago

waffleophagus

Running JS on the BEAM VM, all written in C. I don't know if this is just cursed or absolutely brilliant; either way, I love it and will be following closely. Will definitely have to play with it.

3 days ago

dnautics

Did you notice that the middleware between C and BEAM is in Zig? (Disclaimer: self-promotion.)

3 days ago

kvirani

Whoa! You have quite the profile.

3 days ago

dnautics

Love this! A while back I noodled around with this idea, but didn't get that far:

https://github.com/ityonemo/yavascript

Glad to see someone do a fuller implementation!

3 days ago

steffs

The no-JSON-boundary piece is the part that stands out to me. Most polyglot runtimes spend a lot of cycles serializing and deserializing at the language boundary, and that cost compounds fast when you are doing SSR or tight per-connection loops. Having Erlang read the native DOM directly without a string rendering step is a real architectural win, not just a convenience. Curious how you handle the supervision semantics when a JS runtime crashes.

3 days ago

fouc

"is a real architectural win, not just a convenience." AI use spotted

3 days ago

theflyinghorse

This is very interesting to me because we have accumulated a few Node packages containing logic that services simply import. So in theory I could now use those Node packages in Elixir?

3 days ago

dannote

Yes, if the packages are pure JS logic (no native C++ addons, no Node-specific I/O like child_process or net). The script option auto-resolves imports from node_modules/ and bundles via OXC. Node compat APIs (process, path, fs, os, Buffer) are available with apis: [:browser, :node]. For packages with native .node addons, there's load_addon/3 which supports N-API.
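For illustration, a hedged sketch of what that might look like. The `script` and `apis: [:browser, :node]` options are named in this reply; the module name and call shape are my guesses.

```elixir
# Hypothetical sketch: evaluating a pure-JS npm package from Elixir.
{:ok, rt} = QuickBEAM.Runtime.start_link(apis: [:browser, :node])

# Per the reply above, the script option auto-resolves imports from
# node_modules/ and bundles via OXC before evaluation.
QuickBEAM.Runtime.eval(rt,
  script: """
  import { camelCase } from "lodash-es"; // any pure-JS package, no native addons
  camelCase("hello beam world");
  """
)
```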

3 days ago

lpgauth

I also built a NIF wrapping QuickJS-NG recently to enable "code mode" on our MCP server.

https://github.com/lpgauth/quicksand

3 days ago

vical

That's fantastic, congratulations!

3 days ago