Often the first error handling code I see from new Cassandra users is a client-side rollback, in an attempt to replicate database transactions from the ACID world. This is typically done when a write to multiple tables fails, or a write is not able to meet the requested consistency level. Unfortunately, client-side rollbacks, while well intentioned, generally create more problems than they solve. Let's discuss how this tends to end up.
This is a sample of the type of code I see (obviously not with hardcoded values).
session.execute("INSERT INTO my_db.events (id, text, ts) values (1, 'added record', 2015-01-23 11:00:01.923'"); try{ session.execute("INSERT INTO my_db.user_events (user_id, id) values (100, 1)"); }catch(Exception e){ //bad idea session.execute("DELETE FROM my_db.events where id=1") }
The problem with this is that several servers are responsible for this source of truth, and in a working system this code will usually only work reliably with a sleep in between the operations. However, Thread.sleep(100) is rarely a safe approach in practice, or remotely professional. What do you do?
The typical approach for experienced Cassandra users is to retry the write, and most drivers will even do this by default for timeouts. Conceptually, if done by hand, the code would look more like:
String query = "INSERT INTO my_db.user_events (user_id, id) values (100, 1)";
try {
    session.execute(query);
} catch (Exception e) {
    session.execute(query);
    // optionally you can even attempt a circuit breaker pattern
    // and write to a backup location such as a queue to retry later;
    // only useful in the most extreme cases or most limited configurations
}
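For the common timeout case you often do not need to hand-roll this at all. Here is a minimal sketch of leaning on the driver's built-in behavior instead, assuming the DataStax Java driver (the contact point is a placeholder):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.policies.DefaultRetryPolicy;

// The driver applies its retry policy to timed-out requests for you,
// so explicit catch-and-retry blocks are only needed for custom logic.
Cluster cluster = Cluster.builder()
        .addContactPoint("127.0.0.1")  // placeholder contact point
        .withRetryPolicy(DefaultRetryPolicy.INSTANCE)
        .build();
Session session = cluster.connect();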
So let's talk about the practical application of these theories in our data model.
CREATE TABLE my_keyspace.users (
    user_id uuid,
    first_name text,
    last_name text,
    PRIMARY KEY (user_id)
);

INSERT INTO my_keyspace.users (user_id, first_name, last_name)
VALUES (e785e49a-996d-4df0-b378-4404798ce088, 'Ryan', 'Smith');
Approach:
IDs are generated on the client or by an external system, and are therefore not tied to anything server side.
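To make that concrete, here is a minimal sketch of a client-generated id with a prepared statement, so the exact same write can be replayed if the first attempt times out (the values are illustrative):

import java.util.UUID;
import com.datastax.driver.core.PreparedStatement;

// The id is minted client side, so retrying this exact INSERT is
// harmless: it simply overwrites the same row with the same values.
UUID userId = UUID.randomUUID();
PreparedStatement insert = session.prepare(
        "INSERT INTO my_keyspace.users (user_id, first_name, last_name) values (?, ?, ?)");
session.execute(insert.bind(userId, "Ryan", "Smith"));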
Caveats:
CREATE TABLE my_keyspace.users (
    user_id uuid,
    ts timestamp,
    attribute_name text,
    attribute_value text,
    PRIMARY KEY (user_id, ts, attribute_name)
);

INSERT INTO my_keyspace.users (user_id, ts, attribute_name, attribute_value)
VALUES (e785e49a-996d-4df0-b378-4404798ce088, '2015-01-10 09:56:01.000', 'first_name', 'Ryan');
Approach:
There are no updates to any writes beyond retries; another name for this is Event Sourcing. The number of updates will determine how effective this is, and there is a whole raft of other considerations when it comes to partition sizing. This works really well with the Lambda Architecture, as different analytics tools can combine these separate attribute events back down into a current view of the record (a sketch of this fold follows below).
Retries of the same value will have the same result no matter what. This is safely idempotent and free from race conditions that result in permanently inconsistent state.
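As a sketch of the read side, assuming the table above and the DataStax Java driver, the separate attribute events can be folded back into the current state of a user:

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.UUID;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;

// Replay the events in clustering (ts) order; the latest value for
// each attribute wins, rebuilding the current view of the record.
UUID userId = UUID.fromString("e785e49a-996d-4df0-b378-4404798ce088");
ResultSet rs = session.execute(
        "SELECT attribute_name, attribute_value FROM my_keyspace.users WHERE user_id = ?",
        userId);
Map<String, String> currentState = new LinkedHashMap<>();
for (Row row : rs) {
    currentState.put(row.getString("attribute_name"), row.getString("attribute_value"));
}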
Caveats: