
Event Versioning in Practice: How to Change Your Events Without Breaking Everything

How to evolve event schemas in an immutable event log: the upcasting pattern, versioning decisions, and testing strategy.

You shipped your first event-sourced aggregate. The business loved it. Three months later, a product requirement lands that needs a field you never included in your original event structure. In a relational database you write a migration script and move on. In event sourcing, those old events are immutable. They are staying exactly as they are, forever. Thousands of them. Now what?

This is the moment every event sourcing practitioner remembers: the first time you look at an immutable event log and realise that the word “migration” means something completely different here. It is not unsolvable. Once you understand the tools available, it becomes manageable. But it requires a shift in how you think about schema changes.

Why Events Are Not Like Database Columns

A relational column represents current state. It holds whatever is true right now. You can update it, migrate it, transform it in place.

An event represents a fact. Something happened. It is recorded as it was, at the moment it occurred. You cannot change what happened. You can only teach your system how to understand what happened, even as your understanding of the domain evolves.

This is not a limitation. It is the feature. Your event log is a complete audit trail of everything that ever occurred. The price of that audit trail is that you cannot retroactively change the facts. The payoff is that you can always replay the entire history and derive any state you want from it.

When engineers first encounter this, the instinct is to reach for migration tooling. Resist that instinct. The goal is not to modify old events. The goal is to teach your system how to handle them, regardless of when they were written.

The Upcasting Pattern: Transform Events at Read Time

The most practical technique for handling schema evolution is upcasting. When you load events from the store, run them through an upcast pipeline that transforms old schema versions into the current version before they ever reach your domain logic.

Here is a concrete example. You have an OrderPlaced event. In version one, it looked like this:

public record OrderPlacedV1(
    Guid OrderId,
    Guid CustomerId,
    decimal TotalAmount
);

Three months later, the product team needs the delivery address captured at order creation. Your new event looks like this:

public record OrderPlaced(
    Guid OrderId,
    Guid CustomerId,
    decimal TotalAmount,
    Address? DeliveryAddress
);

The old events do not have DeliveryAddress. Every OrderPlacedV1 that comes off the store will fail to deserialise into the new type, or silently produce null in a field you did not expect. The upcaster handles this:

public class OrderPlacedV1Upcaster : IEventUpcaster<OrderPlacedV1, OrderPlaced>
{
    public OrderPlaced Upcast(OrderPlacedV1 oldEvent) =>
        new OrderPlaced(
            oldEvent.OrderId,
            oldEvent.CustomerId,
            oldEvent.TotalAmount,
            DeliveryAddress: null // Sensible default for historical events
        );
}

The upcast pipeline runs every event through the appropriate upcaster on load. Your domain logic only ever sees the current OrderPlaced type. The V1 events are handled transparently.

A few rules for upcasters. They must be pure: same input, same output, every time. No external calls, no conditional logic based on runtime environment. They must be deterministic: if you replay the same event stream ten times, you must get the same result. And they compose: if you later release a V3, you chain V1 to V2 and V2 to V3. The pipeline handles the rest.
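To make that composition concrete, here is a minimal sketch of a pipeline that keeps applying upcasters until an event reaches its current version. The interface matches the one the upcaster above implements; the registry and dispatch mechanics are illustrative, not any particular library's API.

public interface IEventUpcaster<in TOld, out TNew>
{
    TNew Upcast(TOld oldEvent);
}

public class UpcastPipeline
{
    // Each entry transforms an event exactly one version forward.
    private readonly Dictionary<Type, Func<object, object>> _steps = new();

    public void Register<TOld, TNew>(IEventUpcaster<TOld, TNew> upcaster) =>
        _steps[typeof(TOld)] = e => (object)upcaster.Upcast((TOld)e);

    // Keep stepping until no upcaster matches: the event is current.
    public object Upcast(object @event)
    {
        while (_steps.TryGetValue(@event.GetType(), out var step))
            @event = step(@event);
        return @event;
    }
}

Register the V1-to-V2 and V2-to-V3 upcasters, and a stored OrderPlacedV1 walks the whole chain on load. Your domain logic still only ever sees the latest type.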

Versioning by Name: When a New Event Type Is the Right Answer

Not every schema change is a versioning problem. Sometimes what looks like a change to an existing event is actually a new event entirely.

Here is the heuristic: if you can write an upcast that fills in a sensible default value, it is probably a version of the existing event. If the business logic that caused the event has changed in meaning, it is probably a new event type.

Consider two scenarios. First: CustomerAddressUpdated. A customer changed their address, and you now want to capture the reason they changed it. A default of null or “not recorded” is semantically valid. This is a versioning problem. Use an upcaster.

Second: you have CustomerAddressUpdated, and now you need to distinguish between a customer correcting a typo and a customer moving to a new home. These are different business facts. One is a correction, one is a relocation. They might trigger different downstream logic, different notifications, different analytics. Forcing these into a versioned CustomerAddressUpdated would mean losing the semantic distinction forever. Better to create CustomerAddressCorrected and CustomerRelocated as separate event types, and let CustomerAddressUpdated remain as the legacy type that old projections still handle correctly.
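In code, that split might look like the following. The record shapes here are illustrative rather than prescriptive:

// Legacy type stays in the codebase; old events still deserialise
// and old projections still handle them.
public record CustomerAddressUpdated(Guid CustomerId, Address NewAddress);

// New, intention-revealing types for new writes.
public record CustomerAddressCorrected(Guid CustomerId, Address CorrectedAddress);
public record CustomerRelocated(Guid CustomerId, Address NewHome);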

The name of an event is doing real work. It is not just a label. It carries the business intention. When the intention has changed, the name should change with it.

Weak Schema vs Strong Schema Events: A Deliberate Choice

Most teams using C# start with strongly typed event classes: a record or class per event type, deserialised directly from storage. This is comfortable and the compiler is your friend. If an upcaster is missing or a type does not match, you know at build time or at startup rather than at 2am.

The tradeoff is that every schema change touches code. Adding a field means a new type or a new version. Renaming a property requires coordination across the codebase.

Some teams go the other direction and store events as JsonDocument or Dictionary<string, object>, resolving the specific type at runtime via a schema registry or a naming convention. This is more flexible for evolution but pushes all safety to runtime. A typo in a property name will not fail until you try to use it.
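A minimal sketch of that runtime resolution, assuming a hypothetical MyApp.Events namespace and a naming convention where the stored type name matches the CLR type name:

using System.Text.Json;

public static object DeserialiseByName(string eventTypeName, string json)
{
    // Type.GetType only searches the current assembly (and the core
    // library) unless the name is assembly-qualified.
    var clrType = Type.GetType($"MyApp.Events.{eventTypeName}")
        ?? throw new InvalidOperationException($"Unknown event type: {eventTypeName}");

    return JsonSerializer.Deserialize(json, clrType)
        ?? throw new InvalidOperationException($"Payload for {eventTypeName} deserialised to null");
}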

The middle ground that tends to work well in practice: use strong types for new events, but build your deserialiser to be tolerant for legacy ones. New events get strongly typed classes. Historical events that you need to upcast get flexible handling in the upcaster itself, using JsonElement or similar, before producing the current strongly typed output.
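Here is what that tolerant handling might look like for the V1 payload above. This is a sketch: the fallback from a hypothetical older Total property name is there purely to show the pattern.

using System.Text.Json;

public static class OrderPlacedV1Reader
{
    // Tolerant read of a legacy payload: missing or renamed properties
    // are handled explicitly instead of failing deserialisation outright.
    public static OrderPlaced FromLegacyJson(string json)
    {
        using var doc = JsonDocument.Parse(json);
        var root = doc.RootElement;

        return new OrderPlaced(
            OrderId: root.GetProperty("OrderId").GetGuid(),
            CustomerId: root.GetProperty("CustomerId").GetGuid(),
            // Hypothetical: fall back to an older property name if the
            // current one is absent from the stored payload.
            TotalAmount: root.TryGetProperty("TotalAmount", out var amount)
                ? amount.GetDecimal()
                : root.GetProperty("Total").GetDecimal(),
            DeliveryAddress: null // Never present in V1 payloads
        );
    }
}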

Neither approach is universally correct. The right answer depends on how fast your event schema evolves and how much you trust your test coverage. But you should make the choice deliberately, not by accident.

Testing Your Projections Against Historical Events

The regression nobody sees coming: you write an upcaster, it looks correct in isolation, but you have never actually run it against the real event shapes that exist in your store.

This is more common than it should be. Teams write upcasters against a hypothetical V1 event definition they reconstruct from memory or from old code. The actual events in the store were written by a slightly different serialiser, or had a property spelled differently, or included a field that was removed from the class before the record was finalised.

The testing strategy that works: take a snapshot of representative real events from your event store, anonymise them if needed, and keep them in your test suite. Run them through the full event pipeline, including deserialisation and upcasting, and assert that the projection output matches expectations.

[Fact]
public void OrderProjection_HandlesV1OrderPlaced_Correctly()
{
    // Real event JSON captured from the store, anonymised
    var rawEvent = File.ReadAllText("TestData/order-placed-v1-real.json");
    var events = new[] { _eventDeserialiser.Deserialise(rawEvent) };

    var state = events.Aggregate(OrderState.Empty, OrderReducer.Apply);

    Assert.Equal(expectedOrderId, state.OrderId);
    Assert.Null(state.DeliveryAddress); // V1 events have no address
}

This is a golden-path test. It covers the entire pipeline for a real historical event, not a synthetic one. Run it in CI. If you change your deserialiser or your upcaster, this test will catch it before it reaches production and a five-year-old event suddenly produces wrong state.

Keeping real event samples in your test suite has a secondary benefit worth mentioning: it forces you to think about what events you have actually emitted, across what versions, and whether you have upcasters for all of them. Writing the test is a useful audit.

A Practical Versioning Strategy That Scales

Here is the strategy worth applying to any event-sourced system from day one.

Embed a schema version in your event metadata. Not in the event payload itself, but in the metadata envelope. This is the key that your upcast pipeline uses to know which upcaster to apply. Without it, you are inferring versions from payload shape, which is brittle and eventually wrong.
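One possible shape for that envelope. The field set is illustrative, not a specific store's format:

public record EventEnvelope(
    Guid EventId,
    string EventType,          // e.g. "OrderPlaced"
    int SchemaVersion,         // Drives upcaster selection on load
    DateTimeOffset RecordedAt,
    string Payload             // The serialised domain event itself
);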

Prefer upcasting for backwards-compatible changes. Adding optional fields, expanding enums, changing defaults. These are mechanical changes. An upcaster with a sensible default is the right tool.

Prefer new event types for semantic changes. When the business meaning of what happened has changed, not just the structure, a new event type is more honest. Old projections keep handling the old events. New projections work with the new type. The history stays clean.

Document every version. Every time you add a V2, write down why. The comment in the code should say what changed and when, not just that it changed. Six months later, when someone is debugging a strange projection state, that comment is the entire story.
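Even something this small earns its keep; the details here are invented for illustration:

// V2 (introduced with direct-to-customer shipping): added optional
// DeliveryAddress. V1 events upcast with DeliveryAddress = null.
public record OrderPlaced(
    Guid OrderId,
    Guid CustomerId,
    decimal TotalAmount,
    Address? DeliveryAddress
);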

Test every event version in CI. Keep real event samples. Run them on every build. Treat a broken upcaster as a critical regression, because it is.

The honest truth at the end of all this: the best versioning strategy is designing events carefully upfront. Upcasters and version pipelines are good tools. They are not a substitute for thinking hard about what your events mean and whether they are capturing the right facts. A poorly designed event model does not become pleasant to evolve just because you have good versioning infrastructure. Take the time at the start. Your future self will notice.

Conclusion

Event versioning is not the unsolvable problem it feels like the first time you encounter it. The immutability of your event log is a feature, and the patterns for evolving around it are learnable. Once you have applied them a few times, they become second nature.

Upcasting handles most of the common cases. Clear naming conventions handle the rest. A disciplined test suite, with real historical events, is the safety net that keeps everything honest. What event sourcing asks of you is deliberate design upfront and clear thinking about what your events actually mean. That is a higher bar than a relational migration script. The payoff is a system that knows exactly what happened, in exactly what order, with no ambiguity about what any of it means.

If you are exploring event sourcing and want to skip the infrastructure overhead of standing up and running your own event store, take a look at what we are building at hapnd.dev. The beta is open and self-service from day one.


James Woodley is the founder of Hapnd, a fully managed event sourcing platform. He has been building software for over 20 years and has spent the last several years frustrated that event sourcing remains harder to adopt than it should be.