Build a Wasm Compiler in Roc - Part 16
In earlier articles, we implemented a tokenizer, parser, and transformer to convert WAT syntax to a Wasm AST, and got started on code generation.
In this part, we’ll finally wrap up code generation and be able
to compile our hello world sample into something that wasmtime
can run.
I hope.
The export section
I’m pretty sure the only code we’re missing from our hello world example now is
the export section. I left it for last, even though it is fairly simple,
because our exports depend on the existence of either an import or a func in
order for wasm-tools dump
to be willing to output an export for us.
I ran wasm-tools dump on our entire hello world example. The export section is extracted below:
0x44 | 07 13       | export section
0x46 | 02          | 2 count
0x47 | 06 6d 65 6d | export Export { name: "memory", kind: Memory, index: 0 }
     | 6f 72 79 02
     | 00
0x50 | 06 5f 73 74 | export Export { name: "_start", kind: Func, index: 1 }
     | 61 72 74 00
     | 01
The export section has the section identifier 7, and this particular export section has nineteen bytes (0x13 in hex) in it. Those nineteen bytes expose two exports in a vector, the length of which is encoded as the single byte 0x02.
Each export starts with its name, encoded as a vector of bytes. Both names happen to be six characters long, so each vector starts with the length byte 0x06, followed by the UTF-8 representation of the name as a sequence of six bytes.
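In other words, a name is just a length prefix followed by the raw UTF-8 bytes. As a standalone sketch (not the project's actual code, which does this through generateVector later on; it also assumes the generateU32 LEB128 helper from earlier parts), encoding one name could look roughly like this:

encodeName : Str -> List U8
encodeName = \name ->
    bytes = Str.toUtf8 name
    # length prefix (LEB128, a single byte for short names), then the UTF-8 bytes
    List.concat (bytes |> List.len |> Num.toU32 |> generateU32) bytes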
The 0x02 and 0x00 bytes in the second-last position of each export indicate what kind of export this is. 0x02 means we are exporting a memory, and 0x00 means we are exporting a function (0x01 is for tables and 0x03 is for globals, two Wasm concepts I've deliberately ignored).
The last byte of each export is an LEB128-encoded integer. The first export says we are exporting the memory at index 0 in the memories section, and the second export says we are exporting the function at index 1 in the function index space. That index space, recall, is all imports followed by all functions defined in the module. We import one function, which is at index 0 in functypes, so our main function is at index 1. (That also accounts for the nineteen bytes: a one-byte export count plus two exports of 1 + 6 + 1 + 1 bytes each.)
So our generateExport function will need to be able to look up indices in the typeDict. Technically, it should also be able to look up indices in the mems section of the module. However, I'm going to hardcode that index because even the Wasm standard has not committed to supporting multiple memories yet!
The generateExport function
The generateExport function will look similar to our other functions, except it accepts a Transformer.Export. So the first task is to ensure Export is in the exports section of the Transformer module.
As a refresher, Export
is defined like this:
Export : {
    name : Str,
    type : [Mem MemIndex, Func FuncIndex],
}
MemIndex is a type alias for U32 and FuncIndex is a type alias for Str. Something in the Transformer smells there, but let's roll with it.
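For reference, those aliases presumably look something like this in the Transformer module (a sketch; I'm inferring the definitions from how the values are used):

MemIndex : U32
FuncIndex : Str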
This rather messy function will get us close to where we need to go:
generateExport : TypeDict, Transformer.Export -> List U8
generateExport = \typeDict, export ->
    name = generateVector (Str.toUtf8 export.name) typeDict \_, b -> [b]
    typeAndIndex =
        when export.type is
            Mem memIndex -> List.concat [0x02] (memIndex |> generateU32)
            Func funcId ->
                funcIndex =
                    when (typeDict |> Dict.get funcId) is
                        Err KeyNotFound -> crash "export Func identifier must always be in functions table"
                        Ok index -> index
                List.concat [0x00] (funcIndex |> generateU32)
    List.concat name typeAndIndex
I decided to assign some definitions instead of going all in on pipelines here because the code to extract the two (plus) bytes for the typeAndIndex is rather involved. There's nothing too terribly complicated in here; the bulk of the mess is because Dict.get returns a Result that the standard library doesn't have an unwrap function for.
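One way to contain that mess, sketched here rather than taken from the repo, would be to pull the lookup-or-crash into its own helper (this assumes TypeDict is a Dict Str U32, which matches how generateExport uses it):

# Hypothetical helper: look up a function's index or crash, keeping the when out of generateExport.
lookupFuncIndex : TypeDict, Str -> U32
lookupFuncIndex = \typeDict, funcId ->
    when Dict.get typeDict funcId is
        Ok index -> index
        Err KeyNotFound -> crash "export Func identifier must always be in functions table"

The Func branch could then just call lookupFuncIndex typeDict funcId.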
I didn’t write a unit test for this. Instead, I ran roc dev -- hello.wat
followed by wasmtime hello.wasm
:
❯ roc dev -- hello.wat
❯ wasmtime hello.wasm
hello world⏎
This is definitely the hardest I have ever had to work to see those two words! But… it works. We’re done!
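If you want to pin this behaviour down before touching the code, a hypothetical expect (not in my repo; it assumes TypeDict is a Dict Str U32, and the "main" identifier is purely illustrative) could check generateExport against the _start bytes from the wasm-tools dump above:

# Hypothetical test: the _start export should encode to the bytes we saw in the dump.
expect
    typeDict = Dict.single "main" 1
    actual = generateExport typeDict { name: "_start", type: Func "main" }
    actual == [0x06, 0x5f, 0x73, 0x74, 0x61, 0x72, 0x74, 0x00, 0x01]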
I’ve pushed my code up to this repo.
Summary of the project
The project is only 2700 lines of code, more than half of which are in tests. The test expectations for the generator are bloated because Roc’s formatter tends to split my long lists of bytes over one line per byte.
The blog articles are 50,000 words spread over 16 parts. I could bundle them into a short book! For comparison, LazyVim for Ambitious Developers is around 75,000 words, and it is a 280-page book.
Though it was released slowly over several months, the first draft of this whole series took only two weeks to write, working on it full time. Well, maybe a little more than full time; more like 10-hour days for 14 days. When a project gets interesting, it's hard to pull me away!
Because I documented this as a stream of consciousness as I worked instead of trying to structure it in an educational order, it has a lot of false starts, dead ends, and refactorings. I feel like that kind of suits the project. One drawback is that if you are midway through the series, you may be reading code that I refactored or removed later, with no indication that you shouldn't do it that way. Another is that the sheer volume of content is higher than it would be if I had retconned it as if I got everything right the first time.
Writing a compiler to compile one specific piece of syntax is a pretty useless project. As with all my code, I tried to leave doors open to improve it in the future. My general policy is to always code only exactly what is needed and nothing more. Trying to anticipate future needs just results in writing a bunch of code that doesn’t, in fact, solve those future needs. If we know what the future needs are, they are present needs.
So extending our tokenizer to support attributes, or adding instructions to the AST, or generating custom name sections shouldn’t be an utterly horrible experience. But there are a few things I would want to refactor before I tried to turn this into a real product, most notably more consistent handling of positions.
What’s next?
This project is dead. I never meant it to be more than an educational adventure (for myself and any of you who wanted to tag along). There are already perfectly good WAT-to-Wasm compilers out there and I have no interest in maintaining another.
However, IF I wanted to maintain it, there are a few things I would tackle:
- We aren’t anywhere near able to parse the entire WAT spec, let alone generate bytecode for it. Supporting new Wasm features is mostly “more of the same”. Piping it through all four layers would be very tedious. But for a certain pedantic kind of mind, it would probably be rather relaxing.
- I intentionally wanted to build a compiler from scratch without any dependencies, but that’s really pretty ridiculous. There is a roc-parser library that is capable of handling the entire tokenizing and parsing phase for us. I have no idea if it is featureful enough to parse the entire WAT syntax, but if it isn’t, I am confident it makes more sense to contribute to that project than continue with this home-built system.
- I didn’t handle token positions consistently or correctly. If I were to go back and start this project from scratch, I wouldn’t store positions at all. They added a lot of distracting code that isn’t really central to the concept of building a compiler. That said, I am proud that my tutorial at least attempted to do something that most compiler tutorials ignore. In the real world, Positions are vital for debugging.
- Our compiler can only compile one module at a time. We could modify it to be able to compile multiple files, but it would still only run on one file at a time. Roc itself doesn’t have any multi-threading or parallel support. So the next step in turning this into a real compiler would almost certainly be to create a custom platform (probably in Rust, Zig, or Go) that provides interfaces to spawn compilation tasks on other worker threads.
- I completely skipped validation of the generated AST. This is actually not allowed by the Wasm spec. A true Wasm compiler is required to follow certain validation requirements or it can’t call itself a Wasm compiler.
Thoughts on Roc
I really like this language. Like, a lot. It is the easiest “compile to native binaries” language I have ever learned. It’s much more pleasant to code in than Go, and I would expect performance to be on par.
It’s also the easiest-to-learn “pure functional” language I have ever studied. It feels less bloated, somehow, and yet more usable. I never once wished I could reach for a mutable variable or an imperative loop. It didn’t really feel like I had to jump through hoops to write functional code, although that may be partially because I’m more used to functional paradigms than I was when I started studying the various other functional languages I’ve learned.
And yet the syntax is so small. There are really only a handful of reusable concepts in Roc, but they combine thoughtfully and elegantly to perform rather complicated tasks. And for the tasks that Roc isn’t good at handling, it is not just possible, but easy to extend the language with tasks in your own custom platform.
Roc has a handful of “advanced” concepts in its tutorial that it claims you don’t need to know to code effectively in the language. And it’s true; this entire project didn’t use any of those concepts.
While Roc is easy to write, where it really shines is in how readable it is. I would say it is at least as readable as Python, which is pretty impressive considering that Roc is soundly and strongly typed and needs to compile to native code.
My main complaint with Roc right now is that it is hard to debug. Compiler errors that have been thoughtfully designed are really pleasant to encounter, but the errors that say “TODO: Add context” are… less so. The problems that manifest as a crash or freeze with no output are even worse, and they are pretty frequent.
For the most part, I consider these problems to be an issue with the young implementation of the compiler rather than a problem with the language itself. There is one area where language design may be an issue, but I’m not sure:
If you have a fairly complex type that has a typo in it and you try to pass it
to an expression that is expecting a different type, the compiler error is at
best “I got this huge type and expected this huge type.” I noticed this most
often when hand-crafting expected results in my expect
sections that needed
to match the real result. If I misspell or entirely forget a field on a record,
there is very little feedback about where the issue is. It just says, “I
expected x and got y”, where x and y are very large and hard to introspect.
This could be partially solved with better diffing logic in the compiler output, so I’m not too worried about it, but I am a little worried. Because the types are so heavily inferred, if you get a type wrong somewhere, there is no way to notice you got it wrong, because the compiler simply infers whatever type your (incorrect) code implies. Explicit types on definitions ameliorate this, but not all definitions deserve explicit types (function locals, for example).
For comparison, another functional language I like is Gleam: it also has strong and sound typing, but the only types that exist are types that you explicitly define. This is a much different coding experience from Roc’s implicit and inferred tag unions and so on.
To be clear, I really enjoyed working with Roc’s implicit tag unions… when they were easy to work with. I just think that the industry would have to work with this concept for quite a while before we can collectively be certain whether it’s a good idea. I think it’s a good idea; I just don’t know if it would scale.
I had some concerns about performance midway through the project, as just twenty-odd unit tests took around a second to compile and run. I’m slightly less concerned about that now; I’ve tripled the number of tests and probably more than tripled the amount of code that needs to compile, and it’s still only taking about 1.2 seconds to run. So it seems a lot of that is warm-up time of some sort rather than linearly increasing compile or run times. I’d need to make it a lot more complicated to know for sure, but it’s a huge relief compared to my last attempt at a statically compiled project, which was written in too-slow-to-compile Rust.
Thoughts on Wasm
This was a weird project because I’ve been watching Roc from afar for months, excited for the opportunity to try it out, but unsure what I might want to do with it. I’m bored to death with web servers and toy to-do apps! And there are a lot of things that Roc isn’t suited for because of its pure functional nature. I knew I wasn’t interested in writing my own platform (at this time), so I was limited to some kind of CLI tool that just manipulates bytes.
I’ve been kind of interested in the “Wasm as a replacement for docker” conversation for a long time, but other than random conversations with a friend enthusiastic about the idea, I didn’t really understand what it entailed. I really thought WASI was further along than it is, for example. I didn’t even know that Wasm bytecode is stack-based, and therefore quite similar to the Python bytecode that I am already a little familiar with.
Writing a compiler for Wasm is a really good way to get a crash course on the entire ecosystem. I’m honestly surprised that anyone uses Wasm for anything at all, considering how little of it has actually been standardized. The fact that Rust can target Wasm and run “pretty much anything” in the browser is really impressive, considering that Wasm only has four value types (and they’re all numbers).
I’d say I’m somewhat less excited about Wasm than I was; the actual product is “just another runtime.” It seems ridiculous to me that browsers now have to ship with two completely different runtimes (JavaScript and Wasm). Wasm isn’t strictly superior to JavaScript, and I do not see a clear path to a future where “all code is Wasm, and JavaScript just transpiles to Wasm.”
When it comes to running Wasm outside the browser, it’s even less mature than the browser situation. The current version of WASI is 0.2, AKA preview 2, but it’s so new that runtimes don’t necessarily support it yet. Not only is it an evolving standard, but it depends on other evolving standards. Maybe someday it will be useful, but that someday is a long way off. My instinct is that by the time server-side Wasm is standardized and ready for consumption, we’ll have moved on to completely different technologies and it will be dead in the water.
I do like how simple Wasm is; I feel I understand it fairly well now and if I wanted to implement a real compiler that targeted the entirety of the Wasm specification I would be able to do it (it’d take more than two weeks, though).
I also like that it is intentionally designed to compile to efficient native instructions. If there is a future where Wasm is the base layer for everything, it will be a future where all compilers target Wasm, and then Wasm tools translate that bytecode to native code for different architectures. But such tools already exist; what advantage does Wasm have server-side over LLVM or Mono IR?
I suppose the answer is “External control over what system calls the program is expected and allowed to use”. That has the potential to be a big win, as it brings us back to safely running code natively on bare metal instead of virtualizing it inside containers. That is the outcome I was excited about when I first started thinking about server-side Wasm, and it’s still a noble goal worth pursuing.
On the unexpected symmetry between Wasm and Roc
I didn’t realize when I chose these two technologies that they have some interesting similarities.
I thought Roc’s “bring your own platform” model was truly unique when I first read about it, but as I explored the Wasm ecosystem, I realized that’s actually what Wasm does! By default, Wasm can’t touch anything outside its own environment unless you explicitly set it up as an import and whatever runtime you supply hooks up that import.
For example, if you write a Rust host that knows how to load Wasm bytecode and execute it, with certain functions you expose to it, you’re conceptually doing something very similar to setting up a Rust platform that knows how to supply those same interfaces to Roc.
This has my head buzzing with possibilities. What if, instead of targeting Mono IR and compiling to native code, Roc instead targeted pure Wasm bytecode modules? Roc itself could be much simpler because it wouldn’t need to compile to native code (it can already run in Wasm), and it wouldn’t need to specify its own ABIs and interfaces for communicating with host systems; it could just rely on Wasm infrastructure to do that part (if such Wasm infrastructure existed / was standardized / worked).
That said, I feel that in spite of being younger and more complicated than Wasm, and not even having a 0.1 release yet, Roc’s ecosystem is already stronger and has more potential than Wasm’s! I suspect Roc would be less pleasant to work with than it currently is if it wholeheartedly embraced the Wasm ecosystem. That is a really bad sign for Wasm.
As a thought experiment, what if we considered the inverse? For example, what if there were an interface to allow running Wasm code inside a Roc platform instead of (or in addition to) Roc code? I don’t know enough about the Roc ecosystem to understand whether this is reasonable or even possible. The Roc ABI is probably very Roc-specific at this time. But it’s an interesting question; in the long run, does Roc the platform have more potential than Roc the language? Does this small group of passionate developers have the potential to build an alternative to Wasm that is better than the product of an entire web standards committee? (Never mind. A small group of passionate developers has often been able to do better than any committee.)
That said, it remains to be seen whether the Roc developers are passionate about actually shipping a version 1.0 product that they are willing to back in the wild. As we’ve seen in this series, writing a compiler is a lot of work. Writing a usable and useful compiler that has all the needed tooling and backwards-compatibility guarantees is many orders of magnitude harder. I wish the team the greatest of luck in this endeavour. The truth is, all recent commercially successful languages have had backing from major organizations. I would love to see a small passion project like Roc come out of nowhere and take those guys on, head to head!