01 Feb 2014, 10:16

Immutable SQL

I have strong opinions on data models. They’re not exactly novel opinions, which is good, because if they were truly novel it’d probably be difficult to reify them on top of existing data stores. As it is, if you want your very own immutable persistence layer, you can call Rich Hickey and he will hook you up.

(I’m going to use the word “immutable” to describe data, whether it lives on disk or in RAM, that is “updated” by returning new copies and can be validly observed at any point in time. This is to avoid confusion between persistence as a synonym for durability and persistence as non-mutability; in reality “immutable” is a bit of a misnomer.)

But Datomic is kind of expensive; SQLite, on the other hand, is free, so let’s use that (or really, any other free RDBMS). A data model that is both relational and immutable has certain conceptual advantages anyway, because the immutability applies both to the entities themselves and to the relations between them. A blog post is in some sense the “same” blog post after editing, just with two values for its content over time. Its relation to its comments hasn’t changed. If we remove a comment, the post’s relation with its comments up to that point also doesn’t change - it’s just that we record a new relation from that point on that omits the comment in question.

This is basically the Clojure idea of identity vs. state vs. value, expressed in a relational world.

How does this work? Basically like this:

create table posts(
  post_id integer not null,
  version integer not null,
  current integer not null,
  body text not null,
  primary key(post_id, version)
);

create table comments(
  comment_id integer not null,
  version integer not null,
  current integer not null,
  post_id integer,
  body text not null,
  deleted integer,
  primary key(comment_id, version)
);

--The unique primary key on (post_id, version) prevents double-insert
insert into posts(post_id, version, current, body) values (1, 1, 1, 'Valuable content!');

Notice that post_id alone does NOT form a primary key; only the pair of post_id and version does. We’ll potentially have many “versions” of a post, but they all share the same identity.
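Querying by identity alone walks the post’s entire history:

--Every state post 1 has passed through, oldest first
select version, body from posts where post_id = 1 order by version;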

Here’s where things get interesting: how do I edit a post?

begin transaction;
insert into posts select 1, 2, 1, 'On further reflection...' from posts where post_id=1 and version=1 and current=1;
update posts set current=0 where post_id=1 and version=1 and current=1; --Invalidate old data
commit;

The only part of the old row that changes is the “current” flag, which is just an optimization to save us from attaching a “version = max(version)” restriction to every query for the current state. We now live in a world where the present value of the entity is different, but the history is preserved.
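Concretely, the flag lets the first of these queries stand in for the second:

--With the flag
select body from posts where post_id = 1 and current = 1;

--Without it
select body from posts where post_id = 1
and version = (select max(version) from posts where post_id = 1);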

Now, here’s the really cool part: what happens if my blog co-author tries to edit the post at the same time, and submits after I do?

Absolutely nothing.

Those “where” clauses in the insert and update ensure my writes only happen if I’m working with the latest version of the data. If I am mistaken about that, no transaction happens. I can verify whether my write happened via the changes() function, or via a select after the fact if other clients are sharing my connection to the db.
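In SQLite, for example, that check is one statement (assuming no one else shares the connection):

--Immediately after the update statement in the edit transaction:
select changes(); --1 if we held the latest version; 0 means the write was a no-op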

Now: what if, instead of effectively overwriting the “current” state, I want to apply a functional transformation to it? For instance, I decide it’s important that I convert my double-spaces to single-spaces. I grab the current value, compute my changes, and try the quasi-update dance above. If it doesn’t work, I simply try again. Eventually my changes will go through, and it will be a valid transformation of the state - a transformation that does not need to be expressible in SQL, avoids the ABA problem, and doesn’t depend on verifying value equality in SQL (which might not be efficient or even possible).
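One round of that dance might look like this. This particular transformation happens to be expressible in SQL via replace(), but nothing requires that: in general you’d read the body, compute the new value in application code, and bind it as a parameter.

begin transaction;
--We observed version 2 as current; try to publish version 3 derived from it
insert into posts
select post_id, 3, 1, replace(body, '  ', ' ')
from posts where post_id = 1 and version = 2 and current = 1;
update posts set current = 0 where post_id = 1 and version = 2 and current = 1;
commit;
--If changes() reports 0, someone else got there first: re-read the
--current row, recompute the transformation, and try again.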

That looks exactly like Clojure’s atom functionality, where you grab the current value, compute its transformation, and compare-and-swap the reference (which operates by checking pointer addresses, not the contents of the object pointed to), repeating if necessary until it commits.

And finally, let’s explore the relational aspect. How do I add and remove a comment?

--Adding
insert into comments(comment_id, version, current, post_id, body) values (1, 1, 1, 1, 'You write so good.');

--Removing
begin transaction;
insert into comments select 1,2,1,post_id,body,1 from comments where comment_id=1 and version=1 and current=1;
update comments set current=0 where comment_id=1 and version=1 and current=1;
commit;

Adding a comment is done in the same way as adding a post. Removing a comment is done by adding a value that has been “marked” in some way as deleted. In this case we have an actual column that records that information, but this is the same sort of optimization as having a “current” flag - we could just as easily signify deletion by having a NULL body, or a NULL post_id, and looking for the prior version if we wanted the deleted content. It depends on how we want to think about our data - when someone “deletes a comment”, are they disassociating it from the post entirely, or “erasing” its content while maintaining the relation? Many sites actually display deleted comments as a regular comment with a “[deleted]” body, to preserve conversation threading - that’s something that’s easy to represent in this schema.
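For instance, rendering a post’s current comment thread with tombstones:

--Current comments for post 1; deleted ones keep their place in the thread
select comment_id,
case when deleted = 1 then '[deleted]' else body end as body
from comments
where post_id = 1 and current = 1;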

In fact, in a world where a comment was associated with multiple posts and vice versa, you’d typically represent it as a table for posts, a table for comments, and a join table for the relations. In that case, removing a comment from a post really does entail recording an updated relation and nothing else - nothing magical needs to happen when a comment reaches zero references, except perhaps when you “garbage-collect” your database by archiving old / unreferenced data.
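A sketch of that shape, applying the same versioning trick to the relation itself (the post_comments table here is my invention, not part of the schema above):

create table post_comments(
  post_id integer not null,
  comment_id integer not null,
  version integer not null,
  current integer not null,
  removed integer,
  primary key(post_id, comment_id, version)
);

--Associate comment 1 with post 1
insert into post_comments values (1, 1, 1, 1, null);

--Removing it records a new relation; the comment row itself is untouched
begin transaction;
insert into post_comments select post_id, comment_id, 2, 1, 1 from post_comments where post_id=1 and comment_id=1 and version=1 and current=1;
update post_comments set current=0 where post_id=1 and comment_id=1 and version=1 and current=1;
commit;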

UPDATE:

Let’s talk briefly about querying the data and making it efficient. This isn’t anything particularly magical; if you want to restrict yourself to current and non-deleted posts / comments, you just need to embed that restriction in a “where” clause, like so:

--All post IDs & bodies x all of their comment bodies
--(deleted is NULL on live comments, so a bare "deleted != 1" would drop them)
select posts.post_id, posts.body, comments.body from 
(select * from posts where current = 1) as posts 
left outer join 
(select * from comments where current = 1 and (deleted is null or deleted != 1)) as comments
on posts.post_id = comments.post_id;

You’ll be doing a lot of this, so you’ll probably want to encode that as a view for each table, and have a compound index on the “ID” and “current” columns to make those queries efficient.
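Something like this, following the queries above:

--Views over the present state
create view current_posts as
select * from posts where current = 1;

create view current_comments as
select * from comments where current = 1 and (deleted is null or deleted != 1);

--Compound indexes so the "current" probes stay cheap
create index posts_id_current on posts(post_id, current);
create index comments_id_current on comments(comment_id, current);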

The point of this is to facilitate a more historical view of the world - if your production systems mostly care about the present state, you can ship the archived state to a completely different system (maybe Hive for storage and bulk queries, and Shark for iterative analysis):

begin transaction;
insert into archived_posts select * from posts where current = 0;
delete from posts where current = 0;
commit;
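This assumes an archived_posts table with the same columns already exists; one way to conjure an empty clone in SQLite:

create table archived_posts as select * from posts where 0; --copies the column layout, but not the primary key, which is fine for an append-only archive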

This only works to archive “dead” data - but the present state is probably something you want in your analysis stack as well. To handle this, just ship your present state as a transient table that’s overwritten, and your archived state as a strictly accumulating store. Query the union.

select * from posts
union all
select * from archived_posts;

And having snapshots isn’t helpful unless we can actually examine the state on a particular date. To do this, you’d need to have a “created_on” column that embeds the date:

-- What version was live on 2010-01-01?
select post_id, max(version) as version from posts 
where created_on < '2010-01-01' 
group by post_id;
-- Join this with whatever data you're actually interested in.
-- Or examine the subsequent state, if any; because version ID is
-- monotonically increasing by 1, it's trivial to examine deltas.
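And a sketch of that delta idea, pairing each revision with its successor:

--Each version of each post alongside the one that replaced it
select a.post_id, a.version, a.body as before, b.body as after
from posts a
join posts b on a.post_id = b.post_id and b.version = a.version + 1;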

That requires a group-by over a potentially large amount of data per ID, depending on how many revisions you have. Depending on your particular RDBMS and indexing scheme, it may or may not be efficient to do that in the same system with the same cache that’s managing queries to the present state, which is why being able to ship historical data to a separate system is so handy.