Announcing NAPI-RS v3
🦀 NAPI-RS v3 - WebAssembly! Safer API design and new cross compilation features.
📅 2025/07/07
It has been 4 years since the release of NAPI-RS V2. During this time, the NAPI-RS community has grown rapidly, and we have identified many problems with the API design. For example, ThreadsafeFunction has always been difficult to use: Node-API's ThreadsafeFunction is designed so complexly that the Rust encapsulation leaked too much of the underlying complexity. However, through collaboration with the Rolldown and Rspack teams, we have finally found a design that balances API complexity and correctness.
WebAssembly support is the biggest update this time. In V3, you can compile your project to WebAssembly with almost no code changes. If the compilation target is wasm32-wasip1-threads or higher, you can run code that uses Rust features like std::thread and tokio directly in the browser without any additional modifications.
Cross compilation is also a big update. In previous versions, you needed to use the nodejs-rust:lts-debian or nodejs-rust:lts-debian-aarch64 Docker images to build your project. These images are huge, they slow down CI build times, and it's hard to keep their tools and infrastructure in sync with the community.
Now let's dive into the new features of V3.
WebAssembly
Supporting WebAssembly means a lot for the NAPI-RS community. There are several scenarios that only WebAssembly can handle:
- Providing a playground and reproducible environment in the browser, like the Rolldown REPL and the Oxc playground.
- Providing fallback packages for platforms that don't have pre-built binaries. For some projects, it's hard to maintain pre-built binaries for all possible platforms.
- Making the project usable in StackBlitz.
Why not use wasm-bindgen instead?
One of the main reasons is that you don't need to write two different bindings for the same project.
For example, the Oxc project used to maintain a wasm-bindgen binding. However, as the project grew larger and its APIs gradually increased, the maintenance cost became higher and higher: the same logic often had to be ported from the Node.js binding to the wasm-bindgen binding.
Besides that, using wasm-bindgen has many limitations, such as the inability to use std::thread and third-party libraries that depend on it. For example, you may need to write code like:
#[cfg(not(target_arch = "wasm32"))]
use rayon::prelude::*;
...
#[cfg(not(target_arch = "wasm32"))]
let hash = entries.par_iter().map(|chunk| chunk.compute_hash()).collect::<Vec<_>>();
#[cfg(target_arch = "wasm32")]
let hash = entries.iter().map(|chunk| chunk.compute_hash()).collect::<Vec<_>>();
...
With NAPI-RS, you can compile such code and run it without pain.
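Since std::thread works on wasm32-wasip1-threads, the cfg split shown earlier becomes unnecessary. Here is a plain-Rust sketch of the single-path version; compute_hash is a hypothetical stand-in (a toy FNV-1a hash instead of the project's real per-chunk hash):

```rust
use std::thread;

// Hypothetical stand-in for a chunk's hash computation
// (FNV-1a, purely for illustration).
fn compute_hash(chunk: &[u8]) -> u64 {
    chunk.iter().fold(0xcbf2_9ce4_8422_2325_u64, |h, b| {
        (h ^ u64::from(*b)).wrapping_mul(0x1_0000_0001_b3)
    })
}

// One code path for native and wasm32-wasip1-threads alike:
// std::thread compiles on both targets, so no cfg split is needed.
fn hash_chunks(chunks: Vec<Vec<u8>>) -> Vec<u64> {
    let handles: Vec<_> = chunks
        .into_iter()
        .map(|chunk| thread::spawn(move || compute_hash(&chunk)))
        .collect();
    handles.into_iter().map(|h| h.join().unwrap()).collect()
}

fn main() {
    let hashes = hash_chunks(vec![b"foo".to_vec(), b"bar".to_vec()]);
    assert_eq!(hashes.len(), 2);
    println!("{hashes:?}");
}
```

The same source builds for both native and wasm targets; only the compilation target changes.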
Another pain point is that if you are using crates written in C or C++, setting up the wasm-bindgen build process is very complex.
See WebAssembly for more details.
Sample App using NAPI-RS WebAssembly
This is a sample app using NAPI-RS WebAssembly. You can transform an image to webp, jpeg, or avif at different quality levels.
The webp feature comes from the libwebp-sys crate, which uses libwebp under the hood. libwebp is a C library, but you can feel free to use it in a NAPI-RS project and build it into WebAssembly without any additional modifications.
The avif feature comes from libavif, a mixed C/C++ library. You can also feel free to use it in a NAPI-RS project.
API Improvements
There are a lot of improvements in the API design; both usability and safety have been improved.
Lifetimes
Lifetimes are introduced in NAPI-RS V3; see Understanding Lifetime for more details.
In V2, due to the complexity of designing the codegen and APIs, we didn't have time to add lifetimes to the APIs, which led to some issues.
- Some types had safety issues. For example, the previous JsObject could escape and be used outside the scope of a #[napi] fn call, when in fact its underlying napi_value had already become invalid. We can now constrain such behavior using Rust's lifetimes.
- #[napi] struct previously couldn't contain lifetimes, which caused some usability issues.
For example:
use napi::bindgen_prelude::*;
use napi_derive::napi;

#[napi]
pub fn promise_finally_callback(mut promise: PromiseRaw<()>, config: Object) -> Result<()> {
  // ❌ compile error:
  // borrowed data escapes outside of function
  // `config` escapes the function body here
  // lib.rs(5, 62): `config` is a reference that is only valid in the function body
  // lib.rs(5, 62): has type `napi::bindgen_prelude::Object<'1>`
  // borrowed data escapes outside of function; argument requires that `'1` must outlive `'static`
  promise.finally(|env| {
    let on_finally = config.get_named_property::<Function<(), ()>>("on_finally")?;
    Ok(())
  })?;
  Ok(())
}
For this case, V3 provides the Reference API; see JavaScript Value Reference for more details.
ThreadsafeFunction
ThreadsafeFunction has been redesigned in V3. In previous versions, the ThreadsafeFunction API was too low-level and not safe at all.
In the new API, we have hidden Node-API concepts such as ref/unref and acquire/release, using ownership to encapsulate these APIs and prohibiting lifecycle management via the underlying ref-count model.
If you want to pass a ThreadsafeFunction to different threads, we now allow using std::sync::Arc to achieve this.
use std::sync::Arc;

use napi::{
  bindgen_prelude::*,
  threadsafe_function::{ThreadsafeFunction, ThreadsafeFunctionCallMode},
};
use napi_derive::napi;

#[napi]
pub fn pass_threadsafe_function(tsf: Arc<ThreadsafeFunction<u32, u32>>) -> Result<()> {
  for i in 0..100 {
    let tsf = tsf.clone();
    std::thread::spawn(move || {
      tsf.call(Ok(i), ThreadsafeFunctionCallMode::NonBlocking);
    });
  }
  Ok(())
}
TypeScript type generation for ThreadsafeFunction has also been improved. In the previous version, only an (...args: any[]) => any type could be generated, but since V3 defines the FnArgs and Return types in the generics, the correct type can now be generated for a ThreadsafeFunction.
The example above will generate the following TypeScript type:
export declare function passThreadsafeFunction(
tsf: (err: Error | null, arg: number) => number,
): void
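From the JavaScript side, the generated error-first signature can be exercised like this. Note this is a local mock of the generated binding's shape for illustration; the real passThreadsafeFunction is exported by the compiled addon and calls back from native threads:

```typescript
// Local mock of the generated binding's shape; the real
// `passThreadsafeFunction` comes from the compiled NAPI-RS addon.
type Tsfn = (err: Error | null, arg: number) => number

function passThreadsafeFunction(tsf: Tsfn): void {
  // The native side invokes the callback error-first from its threads;
  // here we just call it synchronously a few times.
  for (let i = 0; i < 3; i++) tsf(null, i)
}

const seen: number[] = []
passThreadsafeFunction((err, arg) => {
  if (err) throw err
  seen.push(arg)
  return arg * 2
})
console.log(seen) // [ 0, 1, 2 ]
```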
Function
Like the ThreadsafeFunction API, Function has also been redesigned in V3.
The JsFunction API is deprecated in V3; the new Function API generates correct TypeScript types and is safer and easier to use.
For example:
use napi::bindgen_prelude::*;
use napi_derive::napi;

#[napi]
pub fn call_function(callback: Function<u32, u32>) -> Result<u32> {
  callback.call(1)
}
⬇️ ⬇️ ⬇️ ⬇️ ⬇️ ⬇️ ⬇️ ⬇️ ⬇️
export declare function callFunction(callback: (arg: number) => number): number
For more details you can see Function.
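Calling the generated binding then looks like this. Again, this is a mock mirroring the generated type for illustration; the real callFunction export comes from the compiled addon:

```typescript
// Local mock of the generated `callFunction` binding; the real export
// comes from the compiled NAPI-RS addon, this just mirrors its type.
function callFunction(callback: (arg: number) => number): number {
  // The native side calls the JS callback with `1` and returns its result.
  return callback(1)
}

const result = callFunction((n) => n + 41)
console.log(result) // 42
```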
Cross Compilation
Cross compilation is painful for the Rust community. There is the rust-cross project, but it's not easy to use:
- The GLIBC version for GNU Linux distributions is too new; for example, arm-unknown-linux-gnueabihf only supports GLIBC 2.31.
- Configuring compilation for the older GLIBC 2.17 (still a version that many enterprises need to support) is very complex.
- It runs on QEMU and Docker, which is slow and limited.
NAPI-RS V3 introduces a new cross compilation feature with the napi build --use-napi-cross flag. This is the support matrix for cross compilation:
| Target / Host | x86_64 | arm64 |
| --- | --- | --- |
| x86_64-unknown-linux-gnu | ✅ | ✅ |
| aarch64-unknown-linux-gnu | ✅ | ✅ |
| armv7-unknown-linux-gnueabihf | ✅ | ✅ |
| powerpc64le-unknown-linux-gnu | ✅ | ✅ |
| s390x-unknown-linux-gnu | ✅ | ✅ |
All these targets support down to GLIBC 2.17.
This is the repo for the cross toolchain. We basically extract the necessary tools and files from the manylinux-cross project, then upload them to the npm registry. @napi-rs/cli then picks the correct toolchain and injects environment variables into the build process.
Cross compile toolchain
We also integrate cargo-zigbuild and cargo-xwin into @napi-rs/cli, so you can build many different targets on a single machine. See Cross Compilation for more details.
Brand new pnpm package template
We have supported yarn as the package manager for the package-template project since V2, because yarn's supportedArchitectures feature is very friendly to cross-platform compilation and testing, and it has good Docker support.
As pnpm becomes more popular, we also support package-template-pnpm in V3.
Why is there no npm package template?
Because of this issue:
> npm generates an inconsistent package-lock.json when node_modules already contains your platform-specific optional dependency: regenerating the lockfile records only the platform package that is installed locally and drops the optional dependencies for every other platform. When someone on a different architecture then installs from that lockfile, npm silently skips their platform's binary instead of warning that it is missing, which breaks packages like @swc/core that ship platform-specific optional dependencies.
The npm team spent several years resolving this critical issue for native addons, although the issue itself wasn't complex. While it has been fixed in npm 11, the Node.js team encountered other problems when upgrading to npm 11, so both Node.js LTS versions and the default Docker images still ship npm 10, which contains this bug.
From this issue, we can see that the npm team does not prioritize native addon scenarios, so NAPI-RS currently neither supports nor recommends npm as a package manager. It's hard to say whether the npm team will fix similar critical issues in a timely manner in the future.
@napi-rs/cli
API
You can now easily integrate the NAPI-RS tools into your JavaScript infra:
// Programmatically
import { NapiCli } from '@napi-rs/cli'

const cli = new NapiCli()

const { task, abort } = await cli.build({
  release: true,
  features: ['allocator-api'],
  esm: true,
  platform: true,
})

const outputs = await task
All napi commands have corresponding APIs; you can visit cli to learn more.
Community is growing fast!
When V2 was released, only Next.js, Parcel, and SWC were using NAPI-RS.
Today, NAPI-RS has been widely used in developing various types of applications.
Cursor is using NAPI-RS to build high performance addons for their desktop app and Node.js servers.
Tailwind CSS is using NAPI-RS to build their high performance oxide engine, and is also one of our platinum sponsors!
is using NAPI-RS for their Electron Desktop App and Node.js server high performance addons.
is using NAPI-RS to build their Electron Desktop App crypto components.
At the same time, with the rise of AI, NAPI-RS has also begun participating in the development of AI tools. For example, a vector database uses NAPI-RS to provide a Node.js embedded experience; Chroma is an open-source search and retrieval database for AI applications; and Tokenizers is a tokenizer library developed by Hugging Face.
TensorZero is an open-source stack for industrial-grade LLM applications; they also use NAPI-RS to build their tensorzero-node client.
In the frontend build field, NAPI-RS has been widely used. Almost all bundlers use NAPI-RS to improve their performance:
Monorepo tools:
And Oxc provides its linter, transformer, and all kinds of APIs via NAPI-RS.
The TypeScript team is exploring using NAPI-RS to build the API layer for the typescript-go project.
> How will the API work? Will developers have a canonical JavaScript-based API for integrating with this new version of TypeScript?
>
> While we are porting most of the existing TypeScript compiler and language service, that does not necessarily mean that all APIs will be ported over. Because of the challenges between language runtime interoperability, API consumers will typically not communicate within the same process. Instead, we expect our API to leverage a message-passing scheme, typically over an IPC layer. Because this sort of bridging is not "free", exposing all functionality will be impractical. We expect to have a more curated API that is informed by critical use-cases (e.g. linting, transforms, resolution behavior, language service embedding, etc.).
>
> We knew that providing a solid API would be a big challenge, so as soon as we started exploring the TypeScript port, we investigated the possibilities here. Beyond how capable the API would be, we asked ourselves whether the sorts of use-cases would be constrained by the performance of IPC solutions. More specifically, even if we could come up with a set of APIs that did what users wanted, would the throughput of requests be a limiting factor? Would the new TypeScript be fast enough to offset the cost of serialization and data transfer? Would the chattiness of API usage overwhelm the wins of having a much faster compiler?
>
> We've become increasingly confident and optimistic in answers around the IPC performance. @zkat has built a Node native module to use synchronous communication over standard I/O between external processes. Building on that, @andrewbranch has experimented with exposing an API server entrypoint to our compiler, along with a JavaScript client that "speaks" to the server over that communication layer. What we've found is fairly promising - while IPC overhead is not entirely negligible, it is small enough. We also can imagine opportunities to optimize, use other underlying IPC strategies, and provide batch-style APIs to minimize call overhead. As our experiments solidify, we will post more concrete details on our plans, and what the API will look like.
Deno and Bun have improved their Node-API compatibility, so almost all NAPI-RS projects can run in Deno and Bun.
Calling for sponsorship
NAPI-RS will continue to improve the development experience, and maintaining the project requires more time and effort. Please consider sponsoring the project; it will help us keep improving it and make it better.