One of the selling points of Event Sourcing is the idea of the replayable event stream. There’s plenty of literature on why event sourcing is a good pattern, but very little on actual implementation details and the challenges surrounding its use. In this article I intend to address the concept of the replayable event stream.

Events are facts: an immutable, append-only log of things that happened in the past. This means that once you hit production you are obligated to handle events as they were originally published, indefinitely. In an event-driven architecture, events are published everywhere. All of the struggles that come with exposing a public HTTP API in terms of backwards compatibility apply to every one of your event publishers. This can add a significant maintenance cost to large applications.

Thankfully, there are tools like Avro to assist with maintaining backwards-compatible consumers by requiring schema definitions to be maintained in a centralized registry.

A schema registry helps by ensuring your pub-sub contracts never change, or at least never change in a breaking way. These tools are not without merit: whenever a message is published you don’t know which processes (if any) are listening or which fields they may be reading. As a message publisher, your best bet is to maintain full backwards compatibility to ensure you don’t break downstream consumers.
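
To make “backwards compatible” concrete, here’s a sketch of a compatible Avro schema evolution (the event and field names here are hypothetical). In Avro, adding a field with a default value is a non-breaking change, so events written with the old schema can still be read under the new one:

// v1 schema for the event payload
const emailChangedV1 = {
	type: 'record',
	name: 'EmailAddressChanged',
	fields: [
		{ name: 'email', type: 'string' },
	],
};

// v2 adds a field WITH a default; events written with v1 can still be
// read under v2, so existing consumers keep working
const emailChangedV2 = {
	type: 'record',
	name: 'EmailAddressChanged',
	fields: [
		{ name: 'email', type: 'string' },
		{ name: 'verified', type: 'boolean', default: false },
	],
};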

But what are the less obvious implications of moving forward with an architecture like this? Your code needs to be structured in an event-sourcing-compatible way. In order to replay events, your domain logic needs to maintain backwards compatibility, and it needs to isolate state changes from side effects and decision logic.

Let’s cover this in a bit more detail with a concrete example:

class User {
	constructor(initialState) {
		Object.assign(this, initialState);
	}
	emailChangedHandler(payload) {
		this.email = payload.email;
	}
	// event handler: applies an event to the current state
	on(event) {
		const { type, payload } = event;

		if (type === 'EMAIL_ADDRESS_CHANGED') {
			this.emailChangedHandler(payload);
		}
	}
}

Here, a user object can be constructed from an initial state and events can be replayed onto the user to ultimately lead up to its current state.

const user = new User({});
const events = [
	{
		type:  'EMAIL_ADDRESS_CHANGED',
		payload: { email: 'foo@bar.com' }
	},
	{
		type:  'EMAIL_ADDRESS_CHANGED',
		payload: { email: 'baz@bar.com' }
	},
];
for (const event of events) {
	user.on(event);
}
console.log(user); // most current state: User { email: 'baz@bar.com' }

But what often happens in domain logic is that you want to trigger an action when things change, right?

class User {
	// ...
	async _sendPasswordChangedNotification() {
		const { _emailService, email } = this;
		// return the promise so callers can handle failures
		return _emailService.sendTemplatedEmail({
			user: email,
			template: 'PASSWORD_CHANGED',
			data: {
				timestamp: new Date(),
			},
		});
	}
	passwordChangedHandler(payload) {
		this.password = payload.password;
		// side effect hooked directly into the state change
		this._sendPasswordChangedNotification()
			.catch(() => {}); // error handling omitted for brevity
	}
	// ...
}

So now, whenever a user changes their password, an email will be sent out notifying them that their password has been changed, and that if they didn’t attempt this change their account could be compromised and they should contact customer service. Simple, right?

This code works and is totally fine, but it’s not exactly compatible with the concept of replaying events in the future, because there are side effects hooked into the state-changing code. If you were to replay the events, the customer would receive an additional email for every time they have previously changed their password. We don’t want that. So what are our options? We could add conditional logic to each side effect to prevent it from executing during a replay scenario.

class User {
	// ...
	on(event, options = {}) {
		const { isReplay } = options;
		const { type, payload } = event;
		if (type === 'EMAIL_ADDRESS_CHANGED') {
			this.emailChangedHandler(payload);
		}
		if (type === 'PASSWORD_CHANGED') {
			this.password = payload.password; // the state change always applies
			if (!isReplay) {
				// only send the email for live events, never during a replay
				this._sendPasswordChangedNotification().catch(() => {});
			}
		}
	}
	// ...
}

// during reconstitution, flag every event as a replay
for (const event of events) {
	user.on(event, { isReplay: true });
}

Or, we could split the state-updating logic out into a separate function, and during reconstitution call only that function.

class User {
	// ...
	on(event) {
		const { type } = event;
		if (type === 'PASSWORD_CHANGED') {
			// side effects only run through the live `on` path
			this._sendPasswordChangedNotification().catch(() => {});
		}
		this.handleStateChange(event);
	}
	handleStateChange(event) {
		const { type, payload } = event;
		if (type === 'EMAIL_ADDRESS_CHANGED') {
			this.emailChangedHandler(payload);
		}
		if (type === 'PASSWORD_CHANGED') {
			this.password = payload.password;
		}
	}
	// ...
}
// during a replay, bypass `on` and apply only the state changes
events.forEach(e => user.handleStateChange(e));

I think the first of these options is preferable for maintainability, but it’s really just semantics. On the client side, patterns have emerged to help deal with these same constraints. The most popular of these today is Redux, which separates side effects and decision logic (action creators) from state changes (reducers). An action is just another word for an event: a plain old JavaScript object with a type and a payload.

Side Effects & Decisions are contained in “Action Creators”

const updateVisibilitySetting = () => {
	return async (dispatch) => {
		// side effect
		const settings = await fetch('https://my-api.com/visibility').then(res => res.json());
		return dispatch({
			type: 'SET_VISIBILITY',
			payload: settings.visible,
		});
	}
};

State Changes are encapsulated in “Reducers”

const initialState = {
	visible: false,
};
function reducer(state = initialState, action) {
	switch (action.type) {
		case 'SET_VISIBILITY':
			return Object.assign({}, state, {
				visible: action.payload,
			});
		default:
			return state;
	}
}
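
For completeness, here’s a minimal sketch of wiring the two together, assuming the standard redux and redux-thunk packages:

const { createStore, applyMiddleware } = require('redux');
const thunk = require('redux-thunk').default;

// the thunk middleware is what allows action creators to run async side effects
const store = createStore(reducer, applyMiddleware(thunk));

// dispatching the action creator runs the side effect, then the reducer
store.dispatch(updateVisibilitySetting())
	.then(() => console.log(store.getState())); // { visible: ... }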

This architecture is just an implementation of event sourcing on the client side. Similar code is produced for server-side event sourcing, where events are read from a persistent stream and state is derived as a function of past events, but things get significantly more complex on the server side with longer-lived entities.
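
A minimal sketch of what that can look like on the server; the eventStore dependency and its readStream method are hypothetical stand-ins for whatever streaming backend is in use. Reusing handleStateChange from the second option above keeps side effects out of reconstitution:

class UserRepository {
	constructor(eventStore) {
		this._eventStore = eventStore;
	}
	async getById(userId) {
		// state is derived purely as a function of past events
		const user = new User({ id: userId });
		const events = await this._eventStore.readStream(`user-${userId}`);
		for (const event of events) {
			user.handleStateChange(event);
		}
		return user;
	}
}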

A concern worth highlighting is that the persistence mechanism (the event stream) influences the structure of your domain logic, though not necessarily in a bad way. With an object-oriented design, state needs to be updated at some point anyway. By stripping that code out into a separate function, more work is required upfront, but it isn’t throwaway work if things change later.

This introduces a level of coupling between the domain logic and the persistence layer. Your repository needs to execute domain logic in order to recreate state, and now the domain logic needs to maintain backwards compatibility in order to ensure the same state is recreated during reconstitution. This can result in verbose forks in the code where both versions need to be maintained (see the sketch below). This problem isn’t as pronounced when state is a snapshot: normally, what matters is the current state and how business logic functions as a result of that state. With event sourcing, corrections need to be placed into the event stream to handle migrations, and decision logic / control flow can be affected in unexpected ways.
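
To make the “verbose fork” concrete, imagine a hypothetical v2 of the email event that renamed a payload field. Both branches have to live in the reconstitution code forever so that old events replay identically:

handleStateChange(event) {
	const { type, payload, version = 1 } = event;
	if (type === 'EMAIL_ADDRESS_CHANGED') {
		if (version === 1) {
			this.email = payload.email; // original shape
		} else {
			this.email = payload.emailAddress; // hypothetical v2 field name
		}
	}
}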

Why do this? Ultimately the goal is to be able to replay events up until some point in time to recreate state (presumably not the most current) in order to debug an issue. The value add is having the event log to understand what happened to produce the current state.
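
As a sketch of that debugging workflow, assuming each persisted event carries a timestamp (the earlier examples omit one):

// replay only the events that occurred before some point in time
// to inspect what the state looked like back then
const pointInTime = new Date('2019-01-01');
const user = new User({});
events
	.filter(e => e.timestamp < pointInTime)
	.forEach(e => user.handleStateChange(e));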

I find, more often than not, that problems lie in business logic, where entities interact with the rest of the system and with external services in the context of a particular environment, more so than in the state management of an object itself. Usually the environment itself isn’t event sourced, and execution context is critical to the output. I find it unwise to build the whole system around a single, rarely used debugging method. Adding all of this complexity and overhead just convolutes the domain and makes more common debugging scenarios more difficult to reason about.

What do you think?

Thank you for reading! To be the first notified when I publish a new article, sign up for my mailing list!

Ben Lugavere is a Lead Engineer and Architect at Boxed where he develops primarily using JavaScript and TypeScript. Ben writes about all things JavaScript, system design and architecture and can be found on twitter at @benlugavere.

Follow me on medium for more articles or on GitHub to see my open source contributions!