
Running Untrusted Code in a Multi-Tenant Platform: How Hapnd's Four-Layer Security Model Works

How Hapnd executes customer-supplied C# reducers and projections safely: a walkthrough of the four independent layers that protect the platform.

Running customer-supplied code on shared infrastructure is one of the harder security problems in platform engineering. Not because the individual techniques are exotic, but because the threat model is unforgiving: any single failure means someone else’s code runs with privileges it should not have, or reads data it should not see. The stakes are high enough that “pretty secure” is not a useful concept.

When I built Hapnd, I knew from the start that the security model for code execution had to be something I could explain clearly and defend technically. Hapnd lets engineers push their own reducers and projections, written in C#, which the platform compiles and executes in a multi-tenant environment. That is a significant amount of trust to ask for. This post is my attempt to explain what I built and why I built it the way I did.

The model has four independent layers. I will walk through each one in order, from compile-time to runtime, and explain the specific threat each layer exists to prevent.

Layer 1: The Semantic Whitelist

The first line of defence is applied before a customer’s code ever compiles. Hapnd uses Roslyn’s SemanticModel API to analyse every symbol in submitted C# source files. Every type reference, every method call, every static access is resolved to its fully qualified CLR name and checked against a registry of explicitly approved types.

If a type is not on the list, the compilation fails. Not with a warning. With a hard rejection.

This is deliberately a whitelist, not a blacklist. A blacklist approach tries to enumerate the dangerous things and block them. That is a losing game: the BCL is enormous, the attack surface is large, and there will always be something missing. A whitelist inverts the model. The platform defines exactly what customer code is allowed to use, and everything else is unavailable by default.

In practice, this means reducers and projections have access to the types they need to do their job: collections, LINQ, string manipulation, arithmetic, and the types Hapnd itself exposes for working with events and state. What they do not have access to is the file system, the network, reflection, diagnostics, interop, or anything else that could be used to escape the intended execution context.

The Roslyn analysis runs before IL is generated, which means dangerous code never reaches the compiler’s output. There is no DLL produced for bad code to hide in.
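To make the shape of the check concrete, here is a minimal sketch of the registry lookup. The type names and registry contents are illustrative, not Hapnd's actual list, and the Roslyn symbol resolution that produces the fully qualified CLR names is elided:

```csharp
using System;
using System.Collections.Generic;

// Sketch only: the names and contents here are illustrative, not Hapnd's registry.
static class SemanticWhitelist
{
    // Registry of explicitly approved types, keyed by fully qualified CLR name.
    static readonly HashSet<string> Approved = new(StringComparer.Ordinal)
    {
        "System.String",
        "System.Linq.Enumerable",
        "System.Collections.Generic.List`1",
    };

    public static bool IsApproved(string clrTypeName) => Approved.Contains(clrTypeName);

    // Hard rejection: a single unapproved symbol fails the whole submission.
    // In the real pipeline, the names come from resolving every symbol
    // via Roslyn's SemanticModel before any IL is generated.
    public static void Check(IEnumerable<string> resolvedTypeNames)
    {
        foreach (var name in resolvedTypeNames)
            if (!IsApproved(name))
                throw new InvalidOperationException($"Type not on the whitelist: {name}");
    }
}
```

The interesting part is what the sketch leaves out: because the check runs over resolved symbols rather than source text, aliasing, `using` tricks, and fully qualified spellings all collapse to the same CLR name before the lookup happens.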

Layer 2: DLL Signature Verification

The semantic whitelist prevents dangerous code from being compiled. Layer 2 addresses a different threat: what if a DLL arrives at the execution container that did not go through the compilation pipeline at all?

This matters more than it might initially seem. Pre-existing DLLs from before the whitelist was in place, DLLs injected through compromised storage credentials, files tampered with in transit between storage and the container: all of these represent ways that unsigned or modified code could arrive at a container that has no way to verify where it came from.

Hapnd solves this with ECDSA P-256 signature verification. After compilation, the platform computes a SHA-256 hash of the compiled DLL bytes and signs that hash with a private key. The signature is stored separately from the DLL. When a container starts and attempts to load a customer DLL, it verifies the signature against a public key that is baked into the container image at build time.

The asymmetric design is the important part here. The container holds only the public key, which means it can verify signatures but cannot produce them. A compromised container cannot forge a valid signature for an attacker-supplied DLL. The only entity that can produce a valid signature is the compilation pipeline, which means the only DLLs that will load are the ones Hapnd itself compiled, and which therefore passed through the semantic whitelist first.

If the signature check fails, the container refuses to load the DLL. Execution does not proceed. There is no fallback.

A simplified illustration of the verification logic:

// Requires: using System.Security.Cryptography;
private static bool VerifySignature(byte[] dllBytes, byte[] signature, ECDsa publicKey)
{
    // Hash the DLL bytes exactly as the compilation pipeline did before signing.
    var hash = SHA256.HashData(dllBytes);

    // The container holds only the public key: it can verify signatures, never produce them.
    return publicKey.VerifyHash(hash, signature);
}

This runs once, before anything in the customer assembly is initialised. If the method returns false, the loader refuses to proceed. The decision of what to do with a failed verification sits with the caller, not with the verification logic itself, but the outcome is the same: the DLL does not load and execution does not proceed.
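The signing side is the mirror image. A rough sketch, assuming the ECDSA P-256 key pair lives only in the compilation pipeline (the class and method names here are mine, not Hapnd's):

```csharp
using System.Security.Cryptography;

// Sketch: in the real system, the private key never leaves the compilation
// pipeline, and only the public key is baked into the container image.
static class DllSigner
{
    // Pipeline side: hash the compiled DLL bytes and sign the hash.
    public static byte[] Sign(byte[] dllBytes, ECDsa privateKey) =>
        privateKey.SignHash(SHA256.HashData(dllBytes));

    // Container side: recompute the hash and verify against the public key.
    public static bool Verify(byte[] dllBytes, byte[] signature, ECDsa publicKey) =>
        publicKey.VerifyHash(SHA256.HashData(dllBytes), signature);
}
```

A round trip makes the property obvious: a signature produced over the original bytes verifies, and flipping a single byte of the DLL makes verification fail, which is exactly the tampering case Layer 2 exists to catch.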

Layer 3: Environment Stripping

The third layer exists because of a specific failure mode in the first two.

Suppose a future change to the platform, perhaps a new approved type added to the whitelist, accidentally opened a path through which customer code could call System.Environment.GetEnvironmentVariable. The semantic whitelist would be weakened, and the DLL would have been compiled legitimately, so it would pass signature verification. The first two layers would not catch it.

If the container’s process environment contained sensitive values (credentials, connection strings, platform internals), that weakness becomes a genuine leak.

Layer 3 removes this risk by removing the values. At container startup, before any services are registered and before any customer DLL is loaded, the application captures the specific environment variables it needs to operate and stores them internally. It then strips everything else from the process environment.

By the time customer code runs, the process environment is empty. There is nothing sensitive to read, even if a future mistake somehow allowed System.Environment through the whitelist.
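A minimal sketch of that startup sequence, with illustrative variable names (Hapnd's actual capture list is not published):

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

// Sketch: capture what the platform needs, then empty the process environment
// entirely before any customer DLL is loaded.
static class EnvironmentStripper
{
    static readonly Dictionary<string, string?> Captured = new();

    public static void CaptureAndStrip(params string[] keep)
    {
        // Capture the platform's own values before wiping anything.
        foreach (var name in keep)
            Captured[name] = Environment.GetEnvironmentVariable(name);

        // Strip every variable, captured ones included: the platform reads
        // its configuration from Captured from this point on, and customer
        // code sees an empty environment.
        foreach (DictionaryEntry entry in Environment.GetEnvironmentVariables())
            Environment.SetEnvironmentVariable((string)entry.Key, null);
    }

    public static string? Get(string name) =>
        Captured.TryGetValue(name, out var value) ? value : null;
}
```

The ordering is the whole trick: capture happens before service registration and before any customer assembly is touched, so there is no window in which customer code and a populated environment coexist.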

This layer is not the primary defence. It exists specifically to contain the blast radius if Layer 1 is ever weakened. Defence in depth means that each layer holds independently, and that a failure in one place does not cascade into a catastrophe. This layer is a concrete implementation of that principle: it removes an entire class of potential secrets from the execution environment before it matters whether customer code can access them.

Layer 4: Container Isolation

The fourth layer operates at the OS level and is the safety net for everything else.

Hapnd runs customer code inside a container configured with a deliberately minimal privilege set. The container runs as a non-root user. All Linux capabilities are dropped at startup. The no-new-privileges flag is set, which prevents any process in the container from gaining elevated privileges through setuid or similar mechanisms. The filesystem is mounted read-only. The base image is minimal Alpine, which means the attack surface from available binaries and libraries is as small as I could make it.
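Expressed as `docker run` flags, the configuration looks roughly like this. The image name and resource values are illustrative assumptions, not Hapnd's actual deployment:

```shell
# Illustrative only: image name and limits are assumptions, not Hapnd's values.
docker run \
  --user 1000:1000 \
  --cap-drop ALL \
  --security-opt no-new-privileges \
  --read-only \
  --memory 256m \
  --pids-limit 64 \
  hapnd/executor:latest
```

`--cap-drop ALL` plus `no-new-privileges` is the combination that matters: the first removes the privileges the process starts with, the second removes the paths by which it could acquire new ones.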

Resource limits are enforced at the container level: execution timeouts, memory caps, limits on concurrent executions, and a circuit breaker that trips after repeated failures. The specific values are not published here; the point is that the resource envelope is bounded and enforced by the runtime, not by anything in customer code.
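The circuit breaker can be sketched as a simple failure counter with a cooldown. This is a generic count-based breaker, not Hapnd's implementation, and the thresholds are illustrative:

```csharp
using System;

// Minimal count-based circuit breaker sketch; thresholds are illustrative.
sealed class CircuitBreaker
{
    readonly int _maxFailures;
    readonly TimeSpan _cooldown;
    int _failures;
    DateTime _openedAt;

    public CircuitBreaker(int maxFailures, TimeSpan cooldown) =>
        (_maxFailures, _cooldown) = (maxFailures, cooldown);

    // Blocked while tripped; allows a retry once the cooldown has elapsed.
    public bool AllowExecution() =>
        _failures < _maxFailures || DateTime.UtcNow - _openedAt >= _cooldown;

    public void RecordSuccess() => _failures = 0;

    public void RecordFailure()
    {
        if (++_failures >= _maxFailures)
            _openedAt = DateTime.UtcNow;
    }
}
```

The point of tripping after repeated failures is the same as the other limits: a reducer that keeps crashing stops consuming execution slots instead of retrying forever.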

The threat model for this layer is: everything else has failed. Layers 1, 2, and 3 have all been bypassed somehow. What then? The answer is a container with no privilege escalation path, no writable filesystem, no access to the host network stack beyond what the platform explicitly routes, and no way to accumulate resources without hitting enforced limits.

This is the one layer I hope never has to do any real work. It is also the one that matters most if everything else goes wrong.

Why Independence Is the Design

The most important property of this model is not any individual layer. It is that the layers are independent of each other.

Layer 1 operates on source code before compilation. It has no dependency on runtime behaviour. Layer 2 operates on DLL bytes at load time. It does not care what the source code looked like. Layer 3 operates on the process environment before any customer code is initialised. Layer 4 is enforced by the container runtime and the host kernel, entirely outside the application process.

This independence is the point. An attacker who bypasses the semantic whitelist still faces signature verification, environment stripping, and container isolation. An attacker who somehow injects a signed DLL, which would require compromising the private key, still faces an empty environment and a locked-down container.

No individual layer is unbreakable. But defeating all four independently, in a way that still produces useful access, is a substantially harder problem than defeating a single gate. And more practically: when a layer has a gap, which eventually every layer will, the others buy time to detect and respond. They limit the impact of what would otherwise be a clean compromise.

Single-gate security feels simpler to reason about. But when that gate has a weakness, and it will, there is nothing else between the attacker and the data. Layers give you the option of recovering from a mistake rather than merely hoping you never make one.

What This Means When You Are Evaluating Hapnd

If you are considering Hapnd as a platform for your reducers and projections, you are trusting that the whitelist is correctly defined, that the signing infrastructure is not compromised, and that the container configuration is what I say it is. These are reasonable things to scrutinise, and I have tried to explain the model clearly enough that you can do that.

The architecture is not novel: ECDSA signature verification, Roslyn semantic analysis, and container hardening are all well-understood techniques. What I have built is a careful composition of them, arranged so that each layer holds independently of the others.

If you are building with event sourcing and want the infrastructure handled so you can focus on your domain logic, Hapnd is in beta at hapnd.dev. No credit card, no commitment. If you have questions about the security model that this post did not answer, I would genuinely like to hear them.