As the client and I talked through and whiteboarded the design, it grew very complex: a lot of moving parts, and a real challenge to build.
As big and expensive as it was becoming, it was still much simpler and cheaper than what *r*cl* had quoted: they wanted mid seven figures, PER YEAR, in license fees, for their proposal.
Anyway, as this thing grew on the board, something started itching in my head. I sat back, looked at it again, and said, "we're recreating the binlogs and the replication threads".
The client was very familiar with replication in Oracle and MS SQL, but not at all with MySQL's, especially its robustness in the face of lag and link failure: just let the slaves lag, and let a disconnected one catch up from the binlog when it can reconnect.
With that, 90% of the past day and a half's design work got erased, and the whole thing collapsed into a completely bog-standard replication setup that any trained MySQL DBA can instantly recognize, understand, and maintain. One master, a handful of slaves, using InnoDB, replicating over geographic distances, with a needed average replication link bandwidth of less than 10 kilobits/sec. It doesn't even need hot slave promotion, since the system is still live and working, just not updating, if the master fails.
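For readers who haven't set this up before, the "bog standard" part is just a couple of lines of configuration per server. A minimal sketch of the classic binlog-based setup might look like this (server IDs and file names here are illustrative, not taken from the actual deployment):

```ini
# master my.cnf -- enable the binary log and give the server a unique ID
[mysqld]
server-id = 1
log-bin   = mysql-bin

# slave my.cnf -- each slave just needs its own unique server-id
[mysqld]
server-id = 2
```

Each slave is then pointed at the master with a `CHANGE MASTER TO MASTER_HOST=..., MASTER_USER=..., MASTER_LOG_FILE=..., MASTER_LOG_POS=...` statement followed by `START SLAVE`. That's essentially the whole moving-parts inventory: the master writes the binlog, each slave's I/O and SQL threads pull and replay it, and a slave that drops off the link simply resumes from its recorded position when it reconnects.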
They can go live in *days*, not months.
The most important skill to have regarding any advanced technique is being able to see when you don't have to use it.