It contains a number of remarks about bank applications and their need for a strictly transactionally consistent RDBMS, like so:
The old guard who start reaching for their heart medication at the news of these new databases are usually bank programmers who want to make sure that the accounts balance at the end of the day.
None of them is right for everyone, and all of them are completely wrong for the bankers out there.
Last year, when I was doing the MySQL Professional Services gigs, I actually had a client who was a gigantic financial services company. I'm not going to name them here, and you've probably never heard of them, but the odds are pretty good that they manage some portion of your money, via two levels of outsourcing indirection. They manage quite literally hundreds of millions of accounts, with a combined value in the high 12 digits. Every account gets at least one change a month, often more, and it all has to balance to the penny. Every account they've ever had, and its full transaction history back to the beginning of time, must be kept and maintained. And there is very, very strict legal and accounting oversight, imposed by internal auditors, external auditors, their direct clients, and a whole alphabet soup of regulatory agencies.
And they do not use a transactional RDBMS. For many reasons:
- They've been around so long that the term "legacy" has real meaning.
- They have a LOT of specialized reporting applications, many of which have been signed off on and locked down by their auditors and regulators. It would cost effectively an infinite amount of money to port them over to using SQL.
- Their dataset is so large that it would take literally months just to copy it, let alone import it into a new RDBMS.
- Part of the reason they are so successful in their field is that their margins and costs are so low that they beat all their competition purely on price, in a very price-sensitive market.
- Buying enough IBM DB2 or Oracle RAC systems would cost them more in license fees than they currently charge their clients.
- Estimates of the scalable performance of DB2 or RAC indicate that applying just one day's worth of updates, with all the necessary interlocking transactional updates, would take more than a day.
But, in their entire history, they've never lost a penny, or an account. All without a traditional transactional RDBMS.
Instead they have a huge array of VMS ISAM datastores.
How do they do it? They run "transactions" intelligently, in their applications, piece-wise, and carefully.
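To make "piece-wise, application-managed transactions" concrete, here is a minimal sketch of one common pattern: journal the intent durably first, then apply each step idempotently, so a crash mid-transfer is recovered by replaying the journal rather than rolled back by the database. All names are invented, and the in-memory dicts stand in for ISAM files; this is an illustration of the general technique, not their actual system.

```python
# Hypothetical sketch of an application-managed, piece-wise "transaction".
# In a real system, journal and accounts would be durable ISAM files.

journal = {}                      # txn_id -> intent record with progress flags
accounts = {"A": 100, "B": 50}    # account -> balance

def begin_transfer(txn_id, src, dst, amount):
    # Step 1: durably record intent BEFORE touching any balance.
    journal[txn_id] = {"state": "pending", "from": src, "to": dst,
                       "amount": amount, "debited": False, "credited": False}

def apply_steps(txn_id):
    # Steps 2-3: apply each side exactly once, recording progress,
    # so a crash between steps is recovered forward, not rolled back.
    entry = journal[txn_id]
    if not entry["debited"]:
        accounts[entry["from"]] -= entry["amount"]
        entry["debited"] = True
    if not entry["credited"]:
        accounts[entry["to"]] += entry["amount"]
        entry["credited"] = True
    entry["state"] = "complete"

def recover():
    # On restart, finish any transfer that was journaled but never completed.
    for txn_id, entry in journal.items():
        if entry["state"] == "pending":
            apply_steps(txn_id)

begin_transfer("t1", "A", "B", 30)
apply_steps("t1")
recover()          # idempotent: re-running changes nothing
print(accounts)    # {'A': 70, 'B': 80}
```

The point of the pattern is that correctness lives in the application's discipline (journal first, idempotent steps, recovery on restart) rather than in a database's ACID machinery.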
Remember, keeping transactions in the DBMS is not a law handed down from On High. It's a hack born of the assumption that programmers could not be trusted to do it in their applications, and could not be trusted to implement good data structures or good enough access patterns.
Often it's a useful assumption, just like assuming that programmers could not be trusted to handle instruction sets, direct jmps, or memory allocation, thus the rise of compilers, while loops, and garbage collection.
But sometimes you have to step back, change your assumptions, and support average programmers in some other way. Or make sure you get better-than-average programmers, and give them the brain support they need.