Functions as a service

What I'm doing

I've been heads-down for a while trying to make a new product. I'm working on a functions as a service (faas) offering for my day job.

It has been 10 years since AWS launched Lambda. It's a cool service that brought the concept of functions-as-a-service to a generation of developers. It's 2025 now though, and the world has moved forward. It's fair to ask whether there are ways to improve on it for some use cases.

What about AWS Lambda?

Lambda popularized the idea of ephemeral, pretend-stateless containers as web request handlers. Many developers I talk to hold a mental equivalence between "Lambda" and "stateless web request handler." If it's so popular, why am I looking at it and thinking I can do better?

Well, hubris is the easy answer, but there's a little more to it than that. I swear, despite most of the stuff you'll read on this site.

In 2025, Lambda is both too low-level and too high-level.

Too low level:

You have to make or select a container, which runs inside a VM. That container needs to have an OS and all the trimmings to make your application live.

Yes, this gives you a lot of flexibility, but it puts a lot on your shoulders. You can ease the pain by sticking to languages and versions for which the Lambda team publishes a curated image, but it's a facade.

Lambda's low-level OS integration prevents invocations from effectively sharing connections and state. Each invocation is essentially on a different server, and you have to publish or choose an operating system for each of your lambdas.

Too high level:

Given the challenges of making your own image, most people use pre-made images. This means your OS is managed by someone else. There's a lot of stuff in an OS, and your surface area is pretty broad.

Taking an operating system as your function environment is like buying an office building so you can rent an office.

The depth of the low-level difficulties leads people to grab the highest-level abstraction they can, to get to writing their function as quickly as possible.

What's the difference

Instead of writing a computer, I'm writing an environment for WebAssembly. I'm using the wasmtime family of libraries, which has limited guest-language support today but is under active development by a lovely team of maintainers.

WebAssembly is pretty low-level, but with bindings for Rust's standard library through wasm32-wasip2, you can essentially treat my execution environment as a library instead of an operating system. You're one hop away from making syscalls like __clock_gettime64 directly from your application code. In that sense it's lower-level than Lambda.
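
To make that concrete, here's a minimal sketch of "std as the OS boundary": ordinary Rust standard-library code with nothing WASI-specific in it, which compiles unchanged for wasm32-wasip2 because std lowers these calls to WASI imports.

```rust
use std::time::{SystemTime, UNIX_EPOCH};

// Plain std code: on a native target this ends up as a libc clock
// syscall; built with --target wasm32-wasip2, the same source lowers
// to a WASI clock import that the host environment provides.
fn timestamp_millis() -> u128 {
    SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock is before the UNIX epoch")
        .as_millis()
}

fn main() {
    println!("{}", timestamp_millis());
}
```

Nothing in this file names WASI at all; the target triple does the work at build time.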

WebAssembly is a safety-conscious way to run untrusted code. It has many of the properties of Lambda, in that you can express a stateless faas web request handler, but you don't have operating-system or CPU-architecture considerations getting in the way of your execution. You just write your function and directly upload it. Connection caching is a problem for me, the host of your function and a hundred other people's functions. In that sense it's higher-level than Lambda.

It's a more bare faas offering, doing way less. It doesn't pretend that a container on Lambda is the same thing as a container on Kubernetes. It doesn't offer you a whole virtual server to run on for up to 15 minutes, bending the idea of faas. By doing less, it can do more of the things that matter for you: connection pooling, avoiding cold starts, billing for time spent active instead of time spent waiting on web dependencies (haha, Lambda's perverse incentive to keep AWS slow), and being easy to package.

Show me

Here's a complete hello-world echo function:

```rust
momento_functions::post!(echo_handler);
fn echo_handler(payload: Vec<u8>) -> FunctionResult<Vec<u8>> {
    Ok(payload)
}
```
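
The handler body is plain Rust, so you can exercise the same logic off-platform. A minimal sketch, assuming FunctionResult is a simple Result alias (a stand-in for illustration, not the crate's actual definition):

```rust
// Stand-in for the SDK's FunctionResult so the handler logic can run
// anywhere; the real type comes from the momento_functions crates.
type FunctionResult<T> = Result<T, String>;

fn echo_handler(payload: Vec<u8>) -> FunctionResult<Vec<u8>> {
    Ok(payload)
}

fn main() {
    let echoed = echo_handler(b"hello".to_vec()).unwrap();
    println!("{}", String::from_utf8(echoed).unwrap()); // prints "hello"
}
```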

And here's how you write a function that proxies to DynamoDB GetItem with:

  • Request JSON parsing, with rejection if the request is not valid
  • Response JSON auto-marshaling with a clear type & structure
  • A connection pool to AWS that is always hot, shared between your invocations, and set up even before your first invocation

```rust
use momento_functions_host::{
    aws::{
        auth::AwsCredentialsProvider,
        ddb::{DynamoDBClient, Item},
    },
    build_environment_aws_credentials,
};

#[derive(serde::Deserialize)]
struct Request {
    user: String,
}

#[derive(serde::Serialize)]
struct Response {
    user: Option<Item>,
}

momento_functions::post!(ddb_get_user_proxy, Request, Response);
fn ddb_get_user_proxy(request: Request) -> FunctionResult<Response> {
    let client = DynamoDBClient::new(&AwsCredentialsProvider::new(
        "us-east-1",
        build_environment_aws_credentials!(),
    )?);
    let user = client.get_item_raw("users", ("user_id", request.user))?;
    Ok(Response { user })
}
```

It's all normal-looking Rust code; it's just that the execution environment knows you want to do this kind of thing, so it optimizes your experience for it.

There's no extra magic for these functions: you just build with --target wasm32-wasip2.
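
If you don't want to pass the flag on every build, Cargo can pin the target per project. This is standard Cargo configuration, nothing specific to this platform:

```toml
# .cargo/config.toml -- make every `cargo build` target wasip2
[build]
target = "wasm32-wasip2"
```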

Difficulties

For me, there have been a lot of challenges getting here. wasm32-wasip2 is a preview target, and the subset of capabilities exposed through Rust's std library isn't clearly documented. Neither is how to implement them on the host side.

The wit_bindgen::generate! macro is deeply opaque. It creates a bunch of code that lives only in memory. Sure, there are ways to dump the generated code and stare at it, but that's not the first-party intended way to consume it. Debugging and understanding that code is not for the faint of heart, especially when you're trying to link a custom WASI implementation!
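
For context, the entire bindings surface comes out of a single macro invocation along these lines (the world and path names here are made up for illustration):

```rust
// One macro call expands a whole WIT world into Rust modules, traits,
// and export glue -- all generated in memory at compile time, with no
// generated source file on disk to read by default.
wit_bindgen::generate!({
    world: "function-host", // hypothetical world name
    path: "wit",            // directory holding the .wit definitions
});
```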

Linking was another challenge. At first I tried to avoid implementing all of WASI. wasmtime's Linker claims to be able to define unknown imports as traps, but if you think about it, that would have been a terrible thing to do to my users. As much as possible, I want your functions to fail at upload if they can't run. That's better than accepting them and blithely running un-runnable code.
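
The Linker feature in question looks roughly like this in wasmtime (a sketch with error handling elided; define_unknown_imports_as_traps is the real method I decided against leaning on):

```rust
use wasmtime::{Engine, Linker, Module};

fn build_linker(engine: &Engine, module: &Module) -> wasmtime::Result<Linker<()>> {
    let mut linker: Linker<()> = Linker::new(engine);
    // Any import the host hasn't defined becomes a function that traps
    // when called: the guest uploads fine and then dies at runtime,
    // which is exactly the failure mode I wanted to avoid.
    linker.define_unknown_imports_as_traps(module)?;
    Ok(linker)
}
```

Rejecting unlinkable modules at upload time instead means the error shows up while the developer is still looking at the screen.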

I realized a few weeks too late that I needed to make an SDK to make the user experience any good at all. The WIT-generated bindings are awesome in that they are type-safe and accomplish linking, a feat not to be understated, but they are super low-level and verbose to work with directly. Import paths in particular are highly confusing. At least the duplicate import names are now aliases of the original definition scope instead of copy/paste duplications of structures with the same names but different symbols 🤦! That was an issue in the bindgen infrastructure earlier on, and it made for a lot of translation code.

Why?

Well, it boils down to customer demand, market opportunity, and technological convenience.

I've had customers ask for a lot of custom stuff over the past couple of years. If Lambda + API Gateway were good enough for them to do that custom stuff, they would have done it there and never asked me. But something there doesn't fit for them.

Cold starts, concurrency limits, slow executions, super high base cost, and charging for dependency latency are all complaints I've heard my customers level against Lambda while asking me to build something custom. That's all well and good, but I can't always move as fast as my customers need me to.

Enter extensions: if I give my users a scripting language and a service that runs their scripts, they can express a lot of custom needs themselves without lobbying for limited developer time.

So the motivation for "extensions" is clear, but [1] why Rust, [2] why faas, and [3] why WebAssembly?

Well, [1] I've found that it sucks to write elaborate scripts in scripting languages, but it doesn't really suck to write simple scripts in powerful programming languages. So why limit my power users' capabilities to my simplest users' needs? Better to give my power users power, and make sure there's a simple path to meet simple needs where they're at.

While of course languages like Lua exist and serve a purpose, that particular language hails from the early '90s and lacks modern market penetration and library support. If you wanted Lua, I might flip it around and ask "why Lua?" With WebAssembly I can provide a faster, lighter, safer runtime, and you can run Lua in it if you really want to anyway. You'd have to compile a Lua wrapper to WebAssembly and expose whatever functionality you want down to your Lua, but nothing prevents you from doing that other than it being a bunch of work to get less speed, less flexibility, and less type safety. Your efforts might be better spent writing a WIT-to-Lua generator and a Lua-to-wasm transpiler. You can probably use C, C++, or C# now anyway, so it's not really just about Rust; it just happens that Rust is the best-supported language for this stuff today.

Faas instead of scripting [2] follows from the above. The day-1 ask from a scripting infrastructure is "give me more power." That's the ask every day after day 1 as well, until you end up with mishmashed abominations like Jinja. Jinja is totally fine and of course it's super powerful, but people commit such sins with it as I dare not recount. Evils are committed when one uses the wrong tool for a rightful task, and Jinja is often abused in this way. It's not a stretch to tell people "you are writing a function for a function-runner service; use a language that compiles to WebAssembly to author it." You can do what you want with that, within a pretty wide range. It's the right level of flexibility.

As for why WebAssembly [3], we've probably covered it by now. It's the fast, flexible, safe portable-bytecode infrastructure of the mid-2020s, and its future looks highly compelling.