In earlier articles, we implemented a tokenizer and parser to convert Wasm’s WAT syntax into an S-expression abstract syntax tree, and started to implement a transformer to convert that AST into one more suitable for generating Wasm bytecode.
This article continues where part 8 left off, as we try to expand the parser to handle something more than an empty WAT (module).
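To make that concrete, here's a rough sketch in Roc of how an S-expression AST can be modeled and what that empty `(module)` could parse into. The names and shapes here are purely illustrative, not the definitions from these articles, and I've left off the module header:

```roc
# Illustrative sketch only; not the actual definitions from these articles.
# A recursive tag union is a natural fit for an S-expression tree.
SExpression : [
    Symbol Str, # bare names such as `module` or `func`
    SExpr (List SExpression), # a parenthesized list of child expressions
]

# The empty WAT module `(module)` would come out of such a parser as:
emptyModule : SExpression
emptyModule = SExpr [Symbol "module"]

# A tiny helper showing the shape: is this S-expression a (module ...) form?
isModule : SExpression -> Bool
isModule = \expr ->
    when expr is
        SExpr [Symbol "module", ..] -> Bool.true
        _ -> Bool.false

expect isModule emptyModule
```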
In earlier articles, we implemented a tokenizer for the Wasm text syntax (WAT) and started on a parser to convert those tokens into an S-expression AST.
In this part, we’ll start to create a transformer to convert that AST to a new one that better matches the Wasm output we will be crafting. Don’t ask me how many parts that’s going to take!
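To give a rough feel for what “an AST that better matches the Wasm output” might mean: a Wasm binary is laid out in flat sections (types, functions, exports, code, and so on), so the transformed AST could mirror that layout instead of the nested S-expression shape. The sketch below is hypothetical; the field names and types are placeholders, not the structures we’ll actually build.

```roc
# Hypothetical sketch: group the module's contents the way the Wasm binary
# format does, rather than the way the WAT text nests them.
WasmModule : {
    types : List Str, # placeholder for function signatures
    funcs : List Str, # placeholder for function bodies
    exports : List Str, # placeholder for exported names
}

emptyWasmModule : WasmModule
emptyWasmModule = { types: [], funcs: [], exports: [] }
```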
It’s been a while since I published the last chapter of LazyVim for Ambitious Developers on the website, but I was waiting for the print edition to be available to share it widely. And that required waiting (twice) for proofs to be mailed to me.
But it’s finally here! For a direct link to purchase the print edition, click here.
I’m proud of the book contents. I’m pretty confident that even seasoned Vim power users would pick up one or two tricks they didn’t already know.
In earlier articles, we implemented a tokenizer for the Wasm text syntax (WAT). In part 6, we started building a parser. We ended that part on a bit of a down note when I realized we were in for yet another refactor. I’m in a better mood today and it’s looking like it won’t be so bad, after all!
In earlier articles, I introduced this “WAT to Wasm compiler in Roc” project, wrote some Roc code to load an input file, and implemented a tokenizer for a “hello world” of WAT to Wasm compilation. It was… more work than I expected. Four blog posts more work, to be precise! I have no idea where it’s going to end.
But I do know what’s next! Parsing.
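Purely as an illustration (these are not the real definitions from the tokenizer posts), the hand-off between the two stages looks something like this: the tokenizer produces a flat list of tokens, and the parser’s job will be to turn that list back into a nested tree.

```roc
# Illustrative only; the real Token type carries more variants and more
# position information than this.
Token : [LParen, RParen, Name Str]

# What the tokenizer might hand to the parser for the empty module `(module)`:
emptyModuleTokens : List Token
emptyModuleTokens = [LParen, Name "module", RParen]
```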
In earlier articles, I introduced this compiler project, wrote some Roc code to load an input file, and started implementing a tokenizer with error handling.
I think I need to admit that I have absolutely no clue how to estimate how long a blog article is going to be. I thought “build a compiler” would be one article. And then I thought “build a tokenizer” would be one article. sigh
I swear we’ll be done with tokenizing at the end of this post. But first we’ll take a detour to have a look at Roc’s very elegant built-in testing syntax.
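As a tiny preview: a top-level `expect` in Roc holds a boolean expression, and `roc test` finds and runs every one of them. The little function below is a stand-in I made up for illustration, not code from the compiler; only the `expect` syntax itself is the point.

```roc
# Top-level `expect`s are Roc's built-in tests; `roc test` runs them all.
double : I64 -> I64
double = \n -> n * 2

expect double 21 == 42
expect List.map [1, 2, 3] double == [2, 4, 6]
```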
In earlier articles, I introduced the project, wrote some Roc code to load an input file, and started implementing a tokenizer.
This part takes a bit of a detour with a refactor to support rudimentary error reporting.
Handling Errors during Tokenizing

Before we start adding more tokens to tokenize the Hello World module, I want to beef up our error handling a bit.
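To sketch the general idea before we dig in (the actual error type and names we end up with will differ), the natural Roc approach is to put tokenizer failures in a tag union on the `Err` side of a `Result`, so callers can see what went wrong and where. Here is a deliberately tiny, hypothetical version that only recognizes parentheses, just to show the plumbing:

```roc
# Hypothetical sketch only; the real tokenizer handles far more than parens.
Token : [LParen, RParen, Name Str]

TokenizeProblem : [UnexpectedChar U8 U64] # the offending byte and its index

tokenize : Str -> Result (List Token) TokenizeProblem
tokenize = \input ->
    Str.toUtf8 input
    |> List.walkWithIndex (Ok []) \state, byte, index ->
        when state is
            # Once we have hit a problem, keep carrying it. (A real version
            # might use something like List.walkUntil to stop early instead.)
            Err problem -> Err problem
            Ok tokens ->
                when byte is
                    '(' -> Ok (List.append tokens LParen)
                    ')' -> Ok (List.append tokens RParen)
                    _ -> Err (UnexpectedChar byte index)

expect tokenize "()" == Ok [LParen, RParen]
expect tokenize "(x" == Err (UnexpectedChar 'x' 1)
```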