How Inko Does Safe Concurrency
In my last article on Inko, I implemented several data structures to demonstrate how Inko’s single ownership model works.
In this article, I will expose a big lie in that article and also dive into how Inko safely handles concurrency.
Why Concurrency is Hard
Truth be told, concurrency is hard for a lot of reasons, but the one that comes up most often is concurrent memory access. If you have two threads of execution running at the same time, and they both read from and/or write to a variable, the variable is likely to end up with incorrect or garbage data.
The classic dead-simple example is creating threads that all increment an integer variable. To increment an integer, each thread needs to read the current value, add one to it, and write it back to memory.
There are two problems with this; one subtle and one not-so-subtle. The not-so-subtle problem is that two threads might read the same value and both increment it to the same result, when the expected behaviour is that each increment happens one after another. The more subtle problem is that if the value is larger than a single unit of memory (typically 64 bits), it is possible for two different threads to each write a different half of the unit. The resulting data in that case is not just wrong, it's garbage.
There are a lot of different possible ways to solve concurrency problems. Python, for example, literally only allows one thread to be active at any given time. This made more sense when Python was invented in an era of single-core computers than it does today, but the strategy still works for a very large subset of computing problems. It also has the advantage of being very easy to reason about.
At the other end of the spectrum, we have languages like Rust, C, and C++, which allow you to access memory however you want and offer a variety of strategies to solve the concurrent access problem, including channels, locking, and various synchronization primitives.
In the middle are languages that support only one or two of the possible concurrency paradigms. The most common paradigm is message passing between isolated threads. Erlang and Pony both use this model to great effect, and even Python supports it in a somewhat heavy-handed way with the multiprocessing library (message passing with multiprocessing is quite a bit more expensive than with the lightweight processes used in the other languages mentioned, so you have to be careful how much data you share that way). Go also relies on message passing, though it's a bit less strict than the others about who can access global memory.
The basic idea behind message passing is that any one piece of memory can be accessed by exactly one thread, and if two threads need to “share” data the only way to do so is to send the data (either by moving it or by moving a copy of it) over a channel.
In my opinion, relatively few concurrency problems can't be solved with this message passing model, so it makes sense that many languages really lean into it, making message passing an integral part of the language and shipping a fat runtime for message handling and synchronization to anyone who uses it. These languages might not be able to elegantly solve all possible concurrency problems (you wouldn't write an operating system in one, for example), but that's ok; no language needs to satisfy all use cases to still be useful for a wide variety of other cases.
Concurrency and Inko
Inko's concurrency framework is very similar to Pony's, although I find Inko easier to reason about. Multiple "lightweight processes" (not to be confused with OS-level processes, such as Python's multiprocessing uses) are easy to spin up and are managed entirely by Inko's runtime. Sending messages is as easy as calling an async method, though receiving data back means you need to explicitly pass a channel or a reference to the caller for the response (async methods don't have return values).
This all comes with added complexity at the memory management level, though. Recall that Inko leans heavily into single ownership. You can have as many references (mutable or immutable) to a value as you like, but exactly one variable must own the value, and no reference is allowed to outlive that owner (enforced at runtime).
The rules for sending messages from one process to another are necessarily stricter: you are no longer allowed to have references to the value when you send it to the other process. This is enforced at compile time, which effectively means the value must have been unique for its entire lifetime. So you can't create a value, create a reference to the value, drop the reference, and then pass the owning value to another process. If a value is going to be sent to a different process, it must be unique from the time it is created. The receiving process is free to convert it to a normal owned value if it doesn't intend to send it on to other processes, though.
Inko has a `uni` reference type to model this behaviour. The key features of a `uni` type are:

- It can never have any `ref` or `mut` references to it. It is unique. There is only one.
- Any fields on the object must also contain unique data, meaning there can be no references to those fields from outside the object. This does not mean that the fields all have to be `uni` pointers, though, as Inko's `recover` expression can be used to guarantee the uniqueness of the fields while you are constructing them.
- It cannot contain any `ref` or `mut` references to outside data. Again, this doesn't mean that none of the object's fields can be `ref` or `mut` references, but if they are, they must refer to data that is owned by the unique object itself.
So whatever data you share between processes can be as complex (with nested fields, refs and all the rest of that) as you want so long as it is isolated: There can be no external pointers to data inside the isolated value, and the isolated value cannot have any pointers to data outside of it.
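For example, here's a minimal preview (using the `recover` expression covered in more detail later in this article) of what an isolated value looks like:

```inko
class async Main {
  fn pub async main() {
    # The nested Array is built entirely inside the `recover` expression, so
    # no outside reference can point into it and it holds no references to
    # outside data. It is isolated, and therefore safe to send to a process.
    let isolated = recover [["Dusty"], ["Phillips"]]
  }
}
```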
Some code
This is basically just going to be the "hello world" of concurrency in Inko. We'll spin up a separate process (for the remainder of this article, "process" always means a lightweight Inko process, not an OS-level process) that prints `"Hello World"`.
Start with this:
import std::stdio::STDOUT

class async Main {
  fn pub async main() {
    STDOUT.new.print("Hello World")
  }
}
That’s it! That’s the code to spin up a separate process that prints “hello world”. It looks suspiciously
identical to the non-concurrent hello world
code, doesn’t it?
In fact, every Inko program has a `Main` process, and the first thing Inko does when the program starts up is initialize that process and call the `main` async function on it. That's why both the `class` and the `fn` are marked as `async`. `async` is the keyword Inko uses to denote interactions with processes. If a class is defined as `async`, then every instance of that class will have a dedicated process for it. And any time you call an `async` method on the class, you are passing a message to it.
Let’s dig into that a little further by moving hello
to a different method:
import std::stdio::STDOUT

class async Main {
  fn pub async hello() {
    STDOUT.new.print("Hello World")
  }

  fn pub async main() {
    hello()
  }
}
In this variation, we are still creating only one process. That process sends a message to itself
to call the hello()
function, which prints the ubiquitous message to the screen.
The output of this program may be a little surprising, though… it’s (probably) empty!
Let's see why: behind the scenes, the async `hello` method does not execute immediately. Instead, it queues up a message to be sent to the existing `Main` process (that `hello()` call is implicitly a `self.hello()`). But the existing `Main` process stops processing messages as soon as `main()` is done, and it ignores the queued `hello` message when the program exits. This is much different from the behaviour of Inko's sibling, Pony, which waits for all processes to become idle, with no more messages to send, before exiting.
To prove this, consider what happens if we sleep for 1 second before exiting:
import std::stdio::STDOUT
import std::time::(Duration)
import std::process::(sleep)

class async Main {
  fn pub async hello() {
    STDOUT.new.print("Hello World")
  }

  fn pub async main() {
    hello()
    sleep(Duration.from_secs(1))
  }
}
You might hope this is giving the main process ample opportunity to process that hello
message, but actually
the output is still empty; it just takes a little longer to exit.
Any one process can only process one message at a time, and it has to process the
message completely before it jumps to the next one. So unlike async
methods in cooperative multitasking
systems (such as Python’s asyncio), it’s not possible to say “pause my execution and execute a different
message in this same process, then come back to this one.”
To further cement this idea in place, let’s make the hello
method accept a channel so we can send a message back
to indicate that we are done.
import std::stdio::STDOUT
import std::channel::(Channel)

class async Main {
  fn pub async hello(channel: Channel[Nil]) {
    STDOUT.new.print("Hello World")
    channel.send(nil)
  }

  fn pub async main() {
    let channel = Channel.new(1)

    hello(channel)
    channel.receive
  }
}
In a coroutine-based system, the `main` function would implicitly pause execution at the `hello(channel)` line, hand control to the event loop to let it process any other messages (specifically the pending `hello` call), and then wait for a value to come back before continuing. We would expect it to print the words and then return.
But that's not how Inko works, so that's not what happens. Instead, the program hangs indefinitely and never exits. We have effectively created a deadlock: `main` is waiting for a value that will only ever be sent by the pending `hello()` message, and that message never gets a chance to run because `main` is still busy waiting for it.
If you paid attention to my previous articles, you might be wondering why the above code even compiles, and the answer is that I lied to you. Sorry about that, but it was easier to lie than explain all the nuances. But now is the right time to get into those nuances.
If you haven't read my previous articles, the above code might not look that confusing. The odd thing is, it is blatantly violating single ownership. I'm passing the channel into the `hello` function, and `hello` is taking ownership of the channel, so the channel is being moved into the function. So how is it that I can access `channel.receive` in the last line of `main`?
The lie I told you was that Inko claims "Single Ownership is all you need." I believe I hedged my lie a bit by putting a "more or less" in front of it. But the truth is, certain built-in values in Inko use reference counting instead of single ownership. `Channel`s are one of these special cases. In fact, the Inko source code for channels (Inko doesn't have a documentation generator yet, so source code is the way to go) explicitly states:

> Channels use atomic reference counting and are dropped (along with any pending messages) when the last reference to the channel is dropped.
This effectively means you can use a channel in as many places as you like without worrying about what order the references
are dropped in, whether they are unique or not, or keeping mut
or ref
references to it.
Other special case types that are reference counted in Inko include String
and, interestingly (more on this later),
processes.
The solution is to do what we intended to do from the beginning and create a second process for the hello world
functionality. As usual, I’ll do it wrong to explore the pitfalls, then we’ll make it right. (For the record,
in my day job I never try to get away with “I did it wrong on purpose.")
Two Processes
import std::stdio::STDOUT

class async SecondProcess {
  fn pub async hello() {
    STDOUT.new.print("Hello World")
  }
}

class async Main {
  fn pub async main() {
    let proc = SecondProcess {}

    proc.hello
  }
}
The cool thing here is that we don't have to do anything special to initialize the second process. By defining a class as `async` and instantiating it, we've created a process. Note that Inko only starts the machinery for a new process after you send the first message. In this case, that happens when we call `proc.hello`. Any time you call an `async` method on a process, you're creating a message to be sent to that process at some point in the future. It will be the near future, but it's not immediate.
As I foreshadowed, this still isn’t working. The second process is probably being started, but because the main
function
exits before it gets a chance to process the message, there is still no output.
Let’s try to solve this using the sleep
trick again, only this time it will work:
import std::stdio::STDOUT
import std::time::(Duration)
import std::process::(sleep)

class async SecondProcess {
  fn pub async hello() {
    STDOUT.new.print("Hello World")
  }
}

class async Main {
  fn pub async main() {
    let proc = SecondProcess {}

    proc.hello()
    sleep(Duration.from_secs(2))
  }
}
If you run this, it’ll print hello
to the screen and you’ll notice that it takes a couple seconds to get the command
prompt back again after it is printed. This is because the main process is idly waiting for that sleep
call to complete.
This is, of course, almost always suboptimal, and if you are using sleep
to solve concurrency issues in your code
(in any language), something is dreadfully wrong.
Let’s try it with the channel pattern instead:
import std::stdio::STDOUT
import std::channel::(Channel)

class async SecondProcess {
  fn pub async hello(channel: Channel[Nil]) {
    STDOUT.new.print("Hello World")
    channel.send(nil)
  }
}

class async Main {
  fn pub async main() {
    let proc = SecondProcess {}
    let channel = Channel.new(1)

    proc.hello(channel)
    channel.receive
  }
}
I guess I didn't fully explain all the details of creating a channel before, partially because it didn't actually work in that situation and partially because I was rambling about channels being reference counted instead.
The `hello` function now accepts a `channel` argument. Normally when you pass values into an async function, you need to make sure they are unique, but as discussed, `Channel`s are special. A channel is just something you can pass values into or out of, with the caveat that the values in question must be unique (have no references to them).
In this case, I’m doing something weird where I’m passing nil
into the channel because I don’t actually care what the value
is; just that it is being sent. The fact that it has been sent is all I need to indicate to the main process that I’m done
handling it. That’s kind of a code smell, to be honest, but Inko doesn’t yet have the multitude of process management
libraries that Erlang and Gleam have going for them.
The thing that makes this all work, though, is the `channel.receive` call in `main()`. This is a blocking call; nothing in the `Main` process can execute (including other messages) until something comes in on this channel. In this case, the "something" arrives almost-but-not-quite immediately, since the `hello` message doesn't take long to process. Once the response is received, the `main` process exits without any of the delays we saw in the `sleep` version.
Async class fields
Like normal classes, async classes are allowed to have fields on them, but the values assigned to fields
must all be “Sendable” when the object is initialized.
Since `Channel` is a reference counted value, we don't need to do anything special to make it Sendable. Let's try creating a class constructor for our `SecondProcess` and storing the channel as a field instead of passing it to `hello` every time:
import std::stdio::STDOUT
import std::channel::(Channel)

class async SecondProcess {
  let @channel: Channel[Nil]

  fn pub static new(channel: Channel[Nil]) -> SecondProcess {
    SecondProcess {
      @channel=channel,
    }
  }

  fn pub async hello() {
    STDOUT.new.print("Hello World")
    @channel.send(nil)
  }
}

class async Main {
  fn pub async main() {
    let channel = Channel.new(1)
    let proc = SecondProcess.new(channel)

    proc.hello()
    channel.receive
  }
}
There shouldn't be anything too surprising in this code at this point; it's all reusing Inko syntax that I've covered before. But things get more interesting if we want to make a field out of a normal Inko value.
Unique values
The normal way to extend any hello world is to start dealing with variables and changing it to `hello <name>`. That would be a bit too easy in Inko because `String`, like `Channel`, is one of the special-cased reference counted types. Instead, let's store an array of strings on the class.
A first pass might look like this:
import std::stdio::STDOUT
import std::channel::(Channel)

class async SecondProcess {
  let @channel: Channel[Nil]
  let @names: Array[String]

  fn pub static new(channel: Channel[Nil]) -> SecondProcess {
    SecondProcess {
      @channel=channel,
      @names=[]
    }
  }

  # ...
}
This fails with the following compiler error:

src/main.inko:11:14 error(invalid-symbol): The field 'names' can't be assigned a value of type 'Array[String]', as it's not sendable

It's not clear what "sendable" means, but for now, think of it as "either a unique or reference counted value." `Channel` was reference counted, so it was sendable in this context. `Array[String]` is not reference counted, nor is it unique. Field values need to be unique when the object is constructed so that the compiler can be 100% certain that no other process is going to try to access data stored in that field while the active process does so.
The way to do that is using the `uni` keyword. `uni`, short for "unique", is a type modifier similar to `ref` and `mut`. The main difference is that where `ref` and `mut` explicitly do not own the data they are pointing to, `uni` not only owns its own data, but guarantees that there are no other `ref` or `mut` references to that data.
This means there are actually five kinds of variables in Inko, in increasing order of permissiveness:
Modifier | Owns the data | Sendable | Description |
---|---|---|---|
`uni` | Yes | Yes | There is exactly one reference to the data and this is it. |
`ref` | No | No | This is a read-only reference to data owned by a different variable. |
`mut` | No | No | This is a read-write reference to data owned by a different variable. |
no modifier | Yes | No | This is the owning variable for a piece of data that can have other references. |
no modifier | Shared | Yes | This is a special case type that the Inko runtime uses reference counting for. |
Note that the field itself doesn't have to be unique; the uniqueness constraint only applies when communication passes between processes, which happens when the object is constructed and whenever async methods are called.
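To make the table concrete, here is a small illustrative sketch showing all five kinds side by side (the variable names are arbitrary, and `recover` is explained just below):

```inko
class async Main {
  fn pub async main() {
    let owned = [1, 2, 3]             # no modifier: owns the data, not sendable
    let read_only = ref owned         # ref: read-only borrow, not sendable
    let writable = mut owned          # mut: read-write borrow, not sendable
    let isolated = recover [1, 2, 3]  # uni: isolated and sendable
    let shared = "hello"              # String: reference counted, sendable
  }
}
```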
In the `new` constructor, we are currently setting `@names=[]`. The problem here is that the empty array is not a `uni` value. We know that the freshly created array has no references other than its single owner, but the compiler doesn't (though in the future, it could theoretically be inferred for such a trivial case).
To solve this problem, we use a new keyword, called `recover`. A recover expression can wrap arbitrarily complex initialization code, with one key rule: the code "inside" the `recover` block cannot maintain references to any variables defined outside the block. This guarantees that the return value of the `recover` expression contains only unique data. Any references inside that data point to values owned by the unique data itself, so the whole block is isolated.
In this case, we can use a very simple recover expression in our constructor:
fn pub static new(channel: Channel[Nil]) -> SecondProcess {
  SecondProcess {
    @channel=channel,
    @names=recover []
  }
}
The @names=
line is saying, “run the arbitrary code after recover and make sure it isn’t accessing anything outside recover.”
The arbitrary code in this case is just constructing an empty list, and since the empty list is not accessed by or accessing
anything outside the recover
expression, Inko knows it is unique and safe to assign to @names
.
In contrast, code such as this won’t work:
# won't work
let some_names = ["Somebody"]

SecondProcess {
  @channel=channel,
  @names=recover some_names
}
some_names
was defined outside the recover expression, and accessing it inside recover is breaking the rules.
This can get confusing quite quickly, because you might expect this code not to work either:
let somebody = "Somebody"

SecondProcess {
  @channel=channel,
  @names=recover [somebody]
}
The compiler is ok with this because `somebody` is a `String`, which is one of the reference counted special cases. In general, these special cases are very convenient, but when you're experimenting and learning, they can lead to a lot of "but I thought that was against the rules" confusion.
Here’s one that surprised me:
let somebody = ["Somebody"]

SecondProcess {
  @channel=channel,
  @names=recover [somebody.pop.unwrap]
}
I really expected this to fail because I'm accessing `somebody` inside the recover expression. But apparently it's ok to access references defined outside the recover expression as long as you don't assign them to the returned value. Yorick just confirmed that this is a relatively new feature in Inko. In this case, I am able to access the `somebody` reference and extract the `String`, which, as in the previous example, is legal to assign to the array.
Other places the “sendable” rule applies include parameters passed into an async function and values sent into a channel.
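For example, here is a hypothetical snippet (not taken from any of the programs in this article) that sends a non-reference-counted `Array[String]` into a channel; just like an argument to an async method, it has to be recovered into a `uni` value first:

```inko
import std::channel::(Channel)

class async Main {
  fn pub async main() {
    let names = Channel.new(1)

    # The Array is not reference counted, so it must be unique (no other
    # references to it) before it can be sent; recover gives us that guarantee.
    names.send(recover ["Dusty", "Phillips"])

    # The receiver takes ownership of the formerly unique value.
    let received = names.receive
  }
}
```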
Arguments to async functions must be sendable
Let’s look at the simple case first: a method that pushes a name onto the @names
array. This is one of those
deceptively simple cases:
fn pub async mut add_name(name: String) {
  @names.push(name)
}
The (reference counted) `String` is sendable, so this code just works, although I had to experiment a bit before I remembered what order the `pub`, `async`, and `mut` modifiers were supposed to go in!
But if we want to add multiple names in a (non-sendable) `Array`, we have to remember that pesky `uni`:
fn pub async mut add_names(names: uni Array[String]) {
  @names.append(names)
}
And if we want to call this async method from our `main` (or any other) process, we have to remember the (equally pesky) `recover`:
let proc = SecondProcess.new(channel)
proc.add_names(recover ["Dusty", "Phillips"])
proc.hello()
channel.receive
Async Classes are reference counted
For my next trick, I want to create a network of three interacting processes. Two processes will send messages back and forth to each other and the main process will sit and wait until those two processes have finished chattering.
But we’re going to do it without holding a channel between the two processes (we’ll keep the one that notifies main
that we’re done, though).
As usual, I’ll build this up in several steps.
Start with some boilerplate for the three processes:
class async Ping {
  fn pub static new() -> Ping {
    Ping { }
  }
}

class async Pong {
  fn pub static new() -> Pong {
    Pong { }
  }
}

class async Main {
  fn pub async main() {
    let ping = Ping.new
    let pong = Pong.new
  }
}
Update the Pong
constructor to accept a Ping
so it can eventually respond to any queries:
class async Pong {
  let @ping: Ping

  fn pub static new(ping: Ping) -> Pong {
    Pong {@ping=ping }
  }
}

class async Main {
  fn pub async main() {
    let ping = Ping.new
    let pong = Pong.new(ping)
  }
}
This may be a bit shocking. `Pong` is an `async` class, which means that all its fields have to be sendable when it is constructed. But we didn't have to use `uni` to pass a `Ping` to the `new` constructor, nor did we have to use `recover` to construct `ping`.
This is because async classes in Inko are reference counted, just like `String`s and `Channel`s. So the lie I told you before was an even bigger whopper than you guessed. First I said that single ownership was your only option in Inko. Then I said, well actually, there are some special cases that use shared ownership, but all your bespoke Inko code has to be single ownership. Now it turns out that if you want to make use of reference counting for arbitrary objects in your Inko programs, you can just make those objects `async` classes and there's your shared ownership! Once again, I apologize for the dishonesty, but I hope you agree that it was easier to understand in this order.
Now I'm going to do something I probably shouldn't and also make `Ping` aware of `Pong` so it can send messages as well. This can't be done in the `Ping` constructor, since we need the `Ping` to exist in order to construct the `Pong`. Instead, I'll make the field an `Option` and add a `set_pong` method to set the value after `Pong` is constructed.
This is harder than it sounds because, while `Pong` is sendable by default due to its reference counted nature, an `Option` that contains a `Pong` is not sendable. So we need to recover the option when we assign it:
class async Ping {
  let @pong: Option[Pong]

  fn pub static new() -> Ping {
    Ping {@pong=recover Option.None}
  }

  fn pub async mut set_pong(pong: Pong) {
    @pong = recover Option.Some(pong)
  }
}
If you think about it, this is pretty weird. Our `@pong` is unique; we are guaranteeing there are no other references to that `Option`. But the value the option contains is actually not unique; it's using shared ownership. This smells rotten, but let's run with it anyway.
Now let’s add a ping
method to the Ping
class and a pong
method to the Pong
class. They both do the same
thing: spit out some text on stdout and call the opposing method on the other class:
fn pub async ping() {
  STDOUT.new.print("PING")

  match @pong.as_ref {
    case Some(pong) -> pong.pong
    case None -> nil
  }
}
The `Ping` class needs to first check whether its `Option` is `Some` or not before it can act on it. We have to use `as_ref` so we are matching on an option that contains a reference (bizarrely, an option that contains a reference to a shared reference, but we've already agreed to ignore the code smells, right?).
`Pong`'s method is much simpler, because there's no option:
fn pub async pong() {
  STDOUT.new.print("PONG")
  @ping.ping
}
If we start the two processes running with a single call to ping, you might expect to end up in an infinite loop.
Instead, however, you’ll get no output at all, because the main process is exiting immediately. But if we
temporarily add the sleep
trick, we can see the two processes communicating:
class async Main {
  fn pub async main() {
    let ping = Ping.new
    let pong = Pong.new(ping)

    ping.set_pong(pong)
    ping.ping
    sleep(Duration.from_secs(2))
  }
}
This will print (I've been typing ping and pong so much that I wrote `pring` instead of `print` there…) alternating lines of `PING` and `PONG` on stdout for two seconds.
Now if you think about it, this is actually pretty impressive. The two independent processes are executing
in lockstep, completely and intentionally synchronized. But we don’t have any awaits in there and there
is no synchronized
keyword, no locks, barriers, or mutexes. The code is easy to read in a linear fashion,
but we are able to model complex synchronization problems (perhaps I’ll do an article on fan-out, fan-in, producer
consumer and related patterns later) safely and easily.
Of course, you know I don't like the sleep trick, so let's modify `Pong` so that it keeps track of how many times it has executed, and have it notify a channel after a certain number of executions:
class async Pong {
  let @ping: Ping
  let @done: Channel[Nil]
  let @count: Int

  fn pub static new(ping: Ping, done: Channel[Nil]) -> Pong {
    Pong {
      @ping=ping,
      @done=done,
      @count = 0
    }
  }

  fn pub async mut pong() {
    STDOUT.new.print("PONG")

    if @count <= 5 {
      @ping.ping
      @count += 1
    } else {
      @done.send(nil)
    }
  }
}
I've added two fields to `Pong`. The `done` `Channel` is one of the special-case refcounted values, so it doesn't need `uni`, and the `count` `Int` is also a special case, though I assume that is because it is a primitive rather than because it is reference counted. (Primitive values are always copied in Rust, which is what Inko is implemented in.)
The pong
async method is pretty straightforward, although do note that I had to change it to pub async mut
because it is
now modifying its own @count
field.
Now our main
function can look like this:
class async Main {
  fn pub async main() {
    let done = Channel.new(1)
    let ping = Ping.new
    let pong = Pong.new(ping, done)

    ping.set_pong(pong)
    ping.ping
    done.receive
  }
}
We've added a `done` channel, passed it into `Pong`, and set the `main` process to wait for it to complete using `done.receive`. The code will run a handful of ping pong cycles and exit cleanly as soon as it's done.
On the pitfall of reference counting
I want to talk a bit more about the "smell" in this code. The interaction of references and the implicitly reference counted ping and pong instances is kind of weird. Shared pointers with references are fairly easy to work with, which is why so many languages use them as their default or only memory access paradigm.
However, they have one key problem: if there is a cycle in the reference counts, the memory will never be freed. Specifically, in this example, our `Ping` has a reference to `Pong` and `Pong` has a reference to `Ping`. Those two references could keep the objects alive and consuming precious "idle process" resources, even after both objects have otherwise gone out of scope.
In this case, the leaked memory is benign since the whole program exits at the same time that the two processes go out of scope, but if you were spontaneously creating a bunch of processes that keep references to each other in an ad-hoc manner, you could end up with a memory leak.
The normal solution to this is to add garbage collection to the runtime to detect these reference cycles. I’m not aware that Inko does this with processes, so you have to be careful to either avoid cyclic references or explicitly clean them up. In this case, it would probably be smarter to use channels to communicate between processes.
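If you do find yourself with such a cycle, one option is to break it by hand once the chattering is finished. A hypothetical sketch (the `clear_pong` method is an addition of mine, not part of the code above):

```inko
# Added to the Ping class: dropping the reference to Pong breaks the
# Ping -> Pong -> Ping cycle so both processes can eventually be dropped.
fn pub async mut clear_pong() {
  @pong = recover Option.None
}
```

A long-running program could send this message once the `done` channel reports that the pair is finished, rather than relying on the whole program exiting.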
On the collision of single ownership and `uni` values
While implementing inko-http, a (very, very basic) HTTP server, I found that there was often a clash between my "normal" Inko code and my "concurrency safe" Inko code. In normal code, I want to pass around singly owned values that may have multiple references to them. As soon as I want to pass such a value to a process, I need to be able to convert it to a `uni` object. Then, once it has crossed the process boundary, I usually want to convert it back to a normal object that can be passed around by reference.
Converting a unique object to a normal one is safe and easy, but going the other way around is not safe. There’s no way for the compiler to know whether the unique object has references to it or not. So you really have only two options:
- Create the value as unique from the beginning and pass it around by value, never allowing a reference to it.
- Create the value as singly-owned, and create a deep copy of it when you pass it to another process.
The second option has a bit of a performance penalty, but I found it to usually be much, much easier to reason about. To facilitate converting objects to and from a `uni` value, I created the following trait:
trait pub Uni[T] {
  fn pub clone_to_uni() -> uni T
  fn pub move into_referrable() -> T
}
Most of the classes in my http server implement this trait.
The first method takes a singly owned value by reference and creates a new unique value that is a copy of it. The second method takes a unique or singly owned value and returns it unmodified; this is kind of just a flagging method to say "this value is no longer unique and can now have references."
Here’s an example of an implementation of this trait for an AppRequest
class:
impl Uni[AppRequest] for AppRequest {
  fn pub clone_to_uni() -> uni AppRequest {
    let uni_request = @http_request.clone_to_uni
    let path_params = recover Map.new
    let iter = @path_params.iter

    loop {
      match iter.next {
        case None -> break
        case Some({@key = name, @value = value}) -> {
          path_params.set(name, value)
        }
      }
    }

    recover AppRequest {
      @http_request = uni_request,
      @path_params = path_params
    }
  }

  fn pub move into_referrable() -> AppRequest {
    self
  }
}
As you can see, the clone_to_uni()
function can get rather complicated as you need to recursively clone
all of its fields as well. The main effect is in the recover
expression at the end of the function; I’m
returning a newly constructed AppRequest
that is guaranteed to contain only unique values.
In contrast, the `into_referrable()` function is much simpler; it accepts the value as `move`, meaning it takes ownership of it. By having ownership of the unique value, it can do whatever it wants with it, which includes returning itself as a non-unique value.
To be honest, I’m not sure if this pattern is a good way to code Inko, but it was the one that I fell into
as I was experimenting with concurrent code. If your program is really doubling down on the actor model,
with tons of separate interacting processes, it is likely that it won’t encounter many non-unique values,
as everything is being passed to an async process and then locally forgotten. On the other hand, if you
are writing a library with zero concurrency, you’ll never need to think about unique values. I actually
really like this about Inko: software that isn’t highly concurrent is relatively easy to reason about,
and you only have to think about unique values if you decide to introduce concurrency. So, for example,
my entire regular expression engine operates in a single process, and as such, there isn’t a uni
or
recover
expression in the entire codebase. That isn’t to say you can’t use my library in a concurrent
codebase, but as the author of the library I don’t have to think about concurrency for you to be
able to use it.
I also expect that concurrent coding with Inko will get easier as the language evolves. I imagine that conversions between unique and non-unique values will become easier as the compiler learns to infer different states better, and there may be new standard (or third-party) library functionality added to facilitate various concurrency patterns over time. Right now it feels like the right bones are in place to create a very strong concurrency paradigm, but you have to jump through some hoops to make your code cooperate with it. But once Inko gets some flesh on those bones, it should be as much of a joy to use as I have found the non-concurrent coding experience to be.