
· One min read

This is the first post from the rewritten website for Neon. The site is built with Docusaurus, a static site generator designed specifically for documentation.

The goals of this new website are to:

  • Replace the existing website, which is based on Jekyll
  • Use a newer web stack and infrastructure: React, Docusaurus, Babel, webpack, etc.
  • Make the docs easy to maintain (Docusaurus makes this easy)
  • Provide built-in search functionality with Algolia (Docusaurus makes this easy)
  • Be ready for translations (Docusaurus makes this easy)

· 5 min read
caution

This article contains references to an outdated version of Neon

While I've been thoroughly enjoying the Rust community's spirited #Rust2018 blog-fest, I wasn't really thinking of participating myself until Julia Evans pointed out that the leadership wants to hear from everyone---even if I might not have anything especially new to add. So here's my little wish list for Rust in 2018.

Since I'm not in Rust's leadership, I don't have to worry about synthesizing some grand narrative for the whole of Rust. So I'll just focus on a few things that would be personally useful to me. In particular, I'll stick to topics that would be helpful for my Neon project, a set of bindings for writing native Node extension modules in Rust.

Stabilize impl trait

The most challenging part of keeping Neon's design manageable is the annotation burden. Neon provides a safe API for managing handles into Node's JavaScript garbage collector, and to do this, it requires passing around a "handle scope" parameter everywhere, which tracks the lifetimes of handles. There are a few flavors of handle scopes, which means helper functions in Neon projects often end up with some pretty hairy signatures:

fn get_foo_bar<'a, S: Scope<'a>>(scope: &mut S, obj: Handle<'a, JsObject>) -> JsResult<'a, JsValue> {
    // extract the `obj.foo` property and check that it's an object
    let foo = obj.get(scope, "foo")?.check::<JsObject>()?;
    // extract the `obj.foo.bar` property
    let bar = foo.get(scope, "bar")?;
    Ok(bar)
}

I would love for Neon users to be able to combine lifetime elision and the impl trait shorthand syntax to write something like:

fn get_foo_bar(scope: &mut impl Scope, obj: Handle<JsObject>) -> JsResult<JsValue> {
    // ...
}

(With an upcoming cleanup of the core Neon API, the details of this would change a bit, but impl trait would be just as appealing.)

Syntactic abstraction for error propagation

I adore the ? syntax, but it's not enough! Expressions like Ok(bar) in the above example are an indication that we don't have a complete abstraction layer in the syntax for error propagation. I find it particularly galling when I have to see Ok(()). It dips down into an unnecessary abstraction layer, distracting from the core logic of the function with mechanical details of the representation of Rust's control flow protocols.
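To make the complaint concrete, here's a minimal standalone sketch in plain std Rust (illustrative only, not Neon API): a function whose success case carries no useful value still has to end in Ok(()), surfacing the Result plumbing instead of just returning.

```rust
use std::num::ParseIntError;

// Illustrative only: parse a number and print its double.
// `?` marks the point that may fail; the trailing `Ok(())` is pure
// control-flow bookkeeping, not part of the function's logic.
fn print_doubled(s: &str) -> Result<(), ParseIntError> {
    let n: i32 = s.parse()?;
    println!("{}", n * 2);
    Ok(()) // the `Ok(())` this post finds galling
}

fn main() {
    let _ = print_doubled("21");               // prints 42
    assert!(print_doubled("twenty").is_err()); // propagated parse error
}
```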

I'm excited about the discussions around "catching functions". I think we can get to a sweet spot where we have an abstraction layer in the syntax that never exposes the Result type for error handling, while still explicitly annotating every point that may throw (thanks to ? syntax, and by contrast to exceptions). The above example might look something like:

fn get_foo_bar(scope: &mut impl Scope, obj: Handle<JsObject>) -> Handle<JsValue> catch JsException {
    let foo = obj.get(scope, "foo")?.check::<JsObject>()?;
    let bar = foo.get(scope, "bar")?;
    return bar;
}

Make cargo more extendable

Like xargo and wargo, Neon comes with a command-line tool that wraps cargo's behavior in order to abstract away a bunch of build configuration details. I'm proud of this abstraction, because it makes building native modules in Node far easier than it is with C++. But I would much rather Neon programmers could use cargo directly, calling all their usual familiar commands like cargo build and cargo run.

To support this, Neon will need a handful of extension points that don't exist today:

  • The ability to extend the memoization logic with extra environmental information (e.g. which version of Node is being built for and the values of some Node-specific environment variables).
  • Post-build hooks, so I can generate the final DLL and put it in the right directory.
  • The ability to add default build flags (for example, on macOS, neon build actually calls cargo rustc with some extra low-level linking flags).
  • Project templates for cargo new.

Being able to write

$ cargo new --template=neon my-first-neon-project
$ cd my-first-neon-project
$ cargo run

would be so amazing.

Neon is about welcoming JS programmers

I promised no narrative, but there is a common thread here. I started the Neon project because I thought it would make a great bridge between the JavaScript and Rust communities. All of the topics in this post are about facilitating that connection:

  • Neon forces JS programmers to get more explicit about working with the garbage collector than they normally have to, so making that as lightweight as possible makes falling into native code less of a steep cliff.
  • JS is a language with exceptions, so making the protocol for emulating exceptions in Rust as ergonomic as possible will make Rust a better environment for JS programmers.
  • And just as Node projects have a workflow oriented around npm, giving Neon projects a standard cargo-based workflow will feel familiar and pleasant to Node programmers.

My dream is that Neon can serve as a gateway welcoming JS programmers into Rust and systems programming for years to come. The more we smooth the path between them, the more people we invite into our community.

· 7 min read
caution

This article contains references to an outdated version of Neon

My history with Rust goes back a long way. But it was when I really started to understand its enabling potential, its capacity to empower whole groups of people to do things they couldn't do before, that I just had to find a more direct way to get involved with making that promise a reality.

I decided that the best way I could help widen the on-ramp to Rust was to create Neon: a library for conveniently implementing native Node.js modules with Rust instead of C/C++. With Neon, JavaScript programmers can get access to all the power that Rust offers: high-performance native code, convenient multithreading, freedom from memory faults and data races, and access to native libraries and the Cargo ecosystem. And they can do this without throwing away their working apps or existing expertise. In short, my goal with Neon is to make it easy for JavaScript programmers to "dip their toe" into Rust without diving straight into the deep end.

We've made some great progress recently, with some cool new features including Electron support and a new Task API for asynchronously spawning Rust computations to run in a background thread. But Neon is still a young project, and could use your help to take it to the next level! Neon is still primarily built by me and a small set of contributors, so I'm looking for people with a wide range of skills and interests who can join us and eventually grow into a project leadership team. I think we're onto something exciting here: a chance to build bridges between the JavaScript and Rust worlds and to create opportunities for aspiring new systems programmers. And I hope you'll consider being a part of it!

...And I Mean a Wide Range

My dream is to make Neon:

  • Easy to learn: The default abstraction layer should be intuitive enough that a newcomer coming from JavaScript finds their first experience approachable, and there should be documentation and learning materials to smooth the on-boarding experience.
  • Rock-solid: Users should feel confident that code they port to Rust is no more likely to crash their Node server than vanilla JavaScript.
  • Fully-featured: The Neon API should be able to express everything you could do in JavaScript itself.
  • Stable: Once we start approaching 1.0, Neon should get on a regular release cycle, with strong commitment to semantic versioning and backwards compatibility.

Just to give you a sense of the many varied kinds of contributions we could use, here's a taste:

Project management. We should keep on top of issues and PRs. I would love to set up a regular meeting with anyone who's willing to help out with this! I could also use help setting up a simple RFC process similar to Rust RFCs, especially for having community discussions around API and workflow design.

Technical writing. The guides are shaping up, but they're incomplete and one of the most important tools for on-boarding new users. The API docs are pretty rudimentary and would benefit from many more examples---we should strive for a high-quality, illustrative example for every single API.

Testing. The test suite has a decent structure but is not at all complete. We should aim for complete test coverage of the API!

Teaching. I would love to get some good thinking into how to teach Neon to various audiences, especially people who are new to Rust and systems programming. We could use this to drive the way we structure the guides, tutorial videos, etc.

Windows development. My primary development machine is Windows these days, but I'm not an expert. I recently broke our AppVeyor builds just to prove it! 😜 We've also seen some intermittent hangs in AppVeyor builds and I'd love a Windows expert to do some investigating!

Web development. The Neon web site is currently a static page. It certainly would be fun to set it up as a Node page using Neon itself! One of the nice dynamic things we could do would be to create a roadmap page like the one Helix has, with automatic tracking of milestone progress using GitHub APIs. We should also set up a Neon project blog with Jekyll and style it consistently with the rest of neon-bindings.com.

Ops and automation. I've started an automation label in the issues. A fantastic contribution would be an automated publication script to make releases one-touch. (This is realistically achievable now thanks to some project reorganization.)

Node plugins. We should explore the possibility of supporting using the new N-API as an alternative backend for the implementation. We wouldn't be able to move to this as the default backend right away, but it could pave the way for supporting Node on ChakraCore, and eventually might replace the current backend entirely.

API design. There are lots of things you can do in JavaScript that you still can't do in Neon, so there's plenty of missing APIs to finish. And it's not too late to make incompatible changes to the API that's there currently. For example, I'd be especially interested in ideas about making the Scope API less awkward, if possible.

Cargo extensions. So far, the neon-cli workflow has been reasonably successful at abstracting away the painful configuration details required to build native Node modules correctly. But the ideal situation would be to allow programmers to just use cargo build, cargo run, and the like to build their Neon crates like any other Rust project. The recent discussions around making Cargo extendable open up some exciting possibilities to push in this direction. One of the ways you can indirectly help with Neon is to help that effort.

Macrology. One of the big, exciting projects we have left is to flesh out the high-level macros (one for defining JavaScript classes and another for defining standalone functions) so users can use simple type annotations to automate conversions between JavaScript and Rust types. We should take inspiration from the design of our sibling project, Helix!

Systems programming. One of the biggest challenges we have to tackle is making the process of shipping Neon libraries practical, especially for shipping prebuilt binaries. One technique we can explore is to create an ABI-stable middle layer so that Neon binaries don't need to be rebuilt for different versions of Node.

Threading architectures. Currently, Neon supports a couple of forms of threading: pausing the JavaScript VM to synchronously run a parallelized Rust computation (via the Lock API), and running a background Task as part of the libuv thread pool. There's more we can do both on the computation side (for example, supporting attaching to different threads than libuv's pool) and the data side (for example, supporting ArrayBuffer transfer).

Getting Involved

Do any of these sound like something you'd be interested in? Or maybe you have other ideas! If you want to help, come talk to me (@dherman) in the #neon community Slack channel (make sure to get an automatic invite first).

A Note About Community

As the original creator of this project, I'm responsible not only for the software but for the community I foster. I deeply love this part of open source, and I don't take the responsibility lightly.

Neon has a ton of cool tech inside of it, and if that's the only aspect you're interested in, that's totally OK. Not everyone needs to be passionate about community-building. Still, not unlike Rust, this whole project's purpose is to widen the circle of tech and empower new systems programmers. So I ask of everyone who participates in the Neon project to strive to act in ways that will encourage and motivate as many people as possible to participate.

Concretely, Neon uses the Contributor Covenant to frame the expectations and standards of how we treat people in our community. Behind the policies is a simple goal: to make our community a place that welcomes, trusts, supports, and empowers one another.

If that sounds good to you, wanna come join us?

· 2 min read
caution

This article contains references to an outdated version of Neon

Last weekend I landed a PR that adds support for defining custom native classes in Neon. This means you can create JavaScript objects that internally wrap---and own---a Rust data structure, along with methods that can safely access the internal Rust data.

As a quick demonstration, suppose you have an Employee struct defined in Rust:

pub struct Employee {
    id: i32,
    name: String,
    // etc ...
}

You can expose this to JS with the new declare_types! macro:

declare_types! {
    /// JS class wrapping Employee records.
    pub class JsEmployee for Employee {
        init(call) {
            let scope = call.scope;
            let id = try!(try!(call.arguments.require(scope, 0)).check::<JsInteger>());
            let name = try!(try!(call.arguments.require(scope, 1)).to_string());
            // etc ...
            Ok(Employee {
                id: id.value() as i32,
                name: name.value(),
                // etc ...
            })
        }

        method name(call) {
            let scope = call.scope;
            let this: Handle<JsEmployee> = call.arguments.this(scope);
            let name = try!(vm::lock(this, |employee| {
                employee.name.clone()
            }));
            Ok(try!(JsString::new_or_throw(scope, &name[..])).upcast())
        }
    }
}

This defines a custom JS class whose instances contain an Employee record. It binds JsEmployee to a Rust type that can create the class at runtime (i.e., the constructor function and prototype object). The init function defines the behavior for allocating the internals during construction of a new instance. The name method shows an example of how you can use vm::lock to borrow a reference to the internal Rust data of an instance.

From there, you can extract the constructor function and expose it to JS, for example by exporting it from a native module:

register_module!(m, {
    let scope = m.scope;
    let class = try!(JsEmployee::class(scope));       // get the class
    let constructor = try!(class.constructor(scope)); // get the constructor
    try!(m.exports.set("Employee", constructor));     // export the constructor
});

Then you can use instances of this type in JS just like any other object:

const { Employee } = require('./native');
const lumbergh = new Employee(9001, "Bill Lumbergh");
console.log(lumbergh.name()); // Bill Lumbergh

Since the methods on Employee expect this to have the right binary layout, they check to make sure that they aren't being called on an inappropriate object type. This means you can't segfault Node by doing something like:

Employee.prototype.name.call({});

This safely throws a TypeError exception just like methods from other native classes like Date or Buffer do.

Anyway, that's a little taste of user-defined native classes. There's more docs work to do!

· 7 min read
caution

This article contains references to an outdated version of Neon

If you're a JavaScript programmer who's been intrigued by Rust's hack without fear theme---making systems programming safe and fun---but you've been waiting for inspiration, I may have something for you! I've been working on Neon, a set of APIs and tools for making it super easy to write native Node modules in Rust.

TL;DR:

  • Neon is an API for writing fast, crash-free native Node modules in Rust;
  • Neon enables Rust's parallelism with guaranteed thread safety;
  • neon-cli makes it easy to create a Neon project and get started; and finally...
  • Help wanted!

I Can Rust and So Can You!

I wanted to make it as easy as possible to get up and running, so I built neon-cli, a command-line tool that lets you generate a complete Neon project skeleton with one simple command and build your entire project with nothing more than the usual npm install.

If you want to try building your first native module with Neon, it's super easy: install neon-cli with npm install -g neon-cli, then create, build, and run your new project:

% neon new hello
...follow prompts...
% cd hello
% npm install
% node -e 'require("./")'

If you don't believe me, I made a screencast, so you know I'm legit.

I Take Thee at thy Word

To illustrate what you can do with Neon, I created a little word counting demo. The demo is simple: read in the complete plays of Shakespeare and count the total number of occurrences of the word "thee". First I tried implementing it in pure JS. The top-level code splits the corpus into lines, and sums up the counts for each line:

function search(corpus, search) {
  var ls = lines(corpus);
  var total = 0;
  for (var i = 0, n = ls.length; i < n; i++) {
    total += wcLine(ls[i], search);
  }
  return total;
}

Searching an individual line involves splitting the line up into words and matching each word against the search string:

function wcLine(line, search) {
  var words = line.split(' ');
  var total = 0;
  for (var i = 0, n = words.length; i < n; i++) {
    if (matches(words[i], search)) {
      total++;
    }
  }
  return total;
}

The rest of the details are pretty straightforward, but definitely check out the code---it's small and self-contained.

On my laptop, running the algorithm across all the plays of Shakespeare usually takes about 280 -- 290ms. Not hugely expensive, but slow enough to be optimizable.

Fall Into our Rustic Revelry

One of the amazing things about Rust is that highly efficient code can still be remarkably compact and readable. In the Rust version of the algorithm, the code for summing up the counts for all the lines looks pretty similar to the JS code:

let mut total = 0;
for word in line.split(' ') {
    if matches(word, search) {
        total += 1;
    }
}
total // in Rust you can omit `return` for a trailing expression

In fact, that same code can be written at a higher level of abstraction without losing performance, using iteration methods like filter and fold (similar to Array.prototype.filter and Array.prototype.reduce in JS):

line.split(' ')
    .filter(|word| matches(word, search))
    .fold(0, |sum, _| sum + 1)

In my quick experiments, that even seems to shave a few milliseconds off the total running time. I think this is a nice demonstration of the power of Rust's zero-cost abstractions, where idiomatic, high-level abstractions produce the same performance as lower-level, more obscure code, and sometimes even better (by making additional optimizations possible, like eliminating bounds checks).

On my machine, the simple Rust translation runs in about 80 -- 85ms. Not bad---about 3x as fast just from using Rust, and in roughly the same number of lines of code (60 in JS, 70 in Rust). By the way, I'm being approximate here with the numbers, because this isn't a remotely scientific benchmark. My goal is just to demonstrate that you can get significant performance improvements from using Rust; in any given situation, the particular details will of course matter.
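One detail elided in these snippets is the matches helper, which lives in the demo repository. Here's a plausible standalone sketch, assuming a simple case-insensitive comparison with surrounding punctuation trimmed; the actual demo code may use different rules.

```rust
// Hypothetical sketch of the elided `matches` helper: compare
// case-insensitively after trimming surrounding punctuation.
// The real demo code may differ.
fn matches(word: &str, search: &str) -> bool {
    word.trim_matches(|c: char| !c.is_alphanumeric())
        .eq_ignore_ascii_case(search)
}

// The summing loop from above, wrapped as a function.
fn wc_line(line: &str, search: &str) -> usize {
    let mut total = 0;
    for word in line.split(' ') {
        if matches(word, search) {
            total += 1;
        }
    }
    total
}

fn main() {
    let line = "I tell thee, Kate, 'twas thee I sought";
    println!("{}", wc_line(line, "thee")); // prints 2
}
```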

Their Thread of Life is Spun

We're not done yet, though! Rust enables something even cooler for Node: we can easily and safely parallelize this code---and I mean without the night-sweats and palpitations usually associated with multithreading. Here's a quick look at the top level logic in the Rust implementation of the demo:

let total = vm::lock(buffer, |data| {
    let corpus = data.as_str().unwrap();
    let lines = lines(corpus);
    lines.into_iter()
         .map(|line| wc_line(line, search))
         .fold(0, |sum, line| sum + line)
});

The vm::lock API lets Neon safely expose the raw bytes of a Node Buffer object (i.e., a typed array) to Rust threads, by preventing JS from running in the meantime. And Rust's concurrency model makes programming with threads actually fun.

To demonstrate how easy this can be, I used Niko Matsakis's new Rayon crate of beautiful data parallelism abstractions. Changing the demo to use Rayon is as simple as replacing the into_iter/map/fold lines above with:

lines.into_par_iter()
     .map(|line| wc_line(line, search))
     .sum()

Keep in mind, Rayon wasn't designed with Neon in mind---its generic primitives match the iteration protocols of Rust, so Neon was able to just pull it off the shelf.

With that simple change, on my two-core MacBook Air, the demo goes from about 85ms down to about 50ms.

Bridge Most Valiantly, with Excellent Discipline

I've worked on making the integration as seamless as possible. From the Rust side, Neon functions follow a simple protocol, taking a Call object and returning a JavaScript value:

fn search(call: Call) -> JS<Integer> {
    let scope = call.scope;
    // ...
    Ok(Integer::new(scope, total))
}

The scope object safely tracks handles into V8's garbage-collected heap. The Neon API uses the Rust type system to guarantee that your native module can't crash your app by mismanaging object handles.

From the JS side, loading the native module is straightforward:

var myNeonModule = require('neon-bridge').load();

Wherefore's this Noise?

I hope this demo is enough to get people interested. Beyond the sheer fun of it, I think the strongest reasons for using Rust in Node are performance and parallelism. As the Rust ecosystem grows, it'll also be a way to give Node access to cool Rust libraries. Beyond that, I'm hoping that Neon can make a nice abstraction layer that just makes writing native Node modules less painful. With projects like node-uwp it might even be worth exploring evolving Neon towards a JS-engine-agnostic abstraction layer.

There are lots of possibilities, but I need help! If you want to get involved, I've created a community Slack (grab an invite from the Slackin app) and a #neon IRC channel on Mozilla IRC (irc.mozilla.org).

A Quick Thanks

There's a ton of fun exploration and work left to do but I couldn't have gotten this far without huge amounts of help already: Andrew Oppenlander's blog post got me off the ground, Ben Noordhuis and Marcin Cieślak helped me wrestle with V8's tooling, I picked up a few tricks from Nathan Rajlich's evil genius code, Adam Klein and Fedor Indutny helped me understand the V8 API, Alex Crichton helped me with compiler and linker arcana, Niko Matsakis helped me with designing the safe memory management API, and Yehuda Katz helped me with the overall design.

You know what this means? Maybe you can help too!