SQL Server - Hub/Spoke disconnect model


Here's my situation - I know it's not the first time this has come up, but I can't find documentation that addresses all of my concerns. We have an environment that needs a centralized/roll-up web service and data warehouse for a group of stores, and each store will also have local resources (the same DB plus a local web service).

The idea is that everything works as a centralized solution - everything runs through the data warehouse as long as the network is up - but when the network link goes down we want the stores to stay up by relying on their local web service/DB instances. Of course, when the link comes back, each spoke needs to push its offline transactions back to the data warehouse.

Under normal circumstances, newly added rows should also exist in the store's local DB as soon as they're written to the data warehouse, so if the link is broken the store has its own data right up to the last second (it would be ironic not to have your own store's data when the link was out, but still - writes have to go through the DW for integrity).

I don't want offline replicas of all of the important tables in the spokes (though I guess that would solve the problem) - tables like "Offline_Transactions", "Offline_Orders", etc. - because the number of tables to manage would be huge.

It's the write-back (normal transactions, pushing DW rows back to the spokes so they're always current) that creates the problem, I think, because it could conceivably get the DBs out of sync: if a record gets written in the DW but the link or service goes down while it's being copied out to the spoke, even a compound key of StoreId + RowId can potentially contain duplicates.

So what I thought I would do is add a nullable GUID column, which is only used in 'offline' mode at the spokes. Under normal operations this column would be NULL (the DW creates the row first and it gets copied back to the store's spoke DB). When the link is down and the store's spoke DB is creating 'detached' records, they would contain GUIDs for guaranteed uniqueness; when the link is restored, a process would run that invokes the central DW's web service to merge those rows into the DW.
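A minimal sketch of the schema described above - the table and column names (Orders, OrderId, OfflineGuid, etc.) are illustrative, not from the question, and it assumes the ID is assigned by the DW, so at the spokes it is a plain nullable INT rather than an IDENTITY column:

```sql
-- Hypothetical shared schema for DW and spokes (names are illustrative).
-- Exactly one of OrderId / OfflineGuid is populated, depending on where
-- the row originated.
CREATE TABLE dbo.Orders
(
    OrderId     INT NULL,                -- assigned by the DW under normal operation
    StoreId     INT NOT NULL,
    OfflineGuid UNIQUEIDENTIFIER NULL,   -- NULL normally; populated only for rows
                                         -- created at a spoke while the link is down
    OrderDate   DATETIME2 NOT NULL,
    Amount      DECIMAL(18,2) NOT NULL
);
```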

The issue I keep running into with any solution I come up with is the clustered index. There must be one for performance, but if the data on either side can have NULLs in the important columns (spoke records would have GUIDs but no sequential identity values, and DW records would have ID values but no GUIDs), then either the schemas have to differ (which I want to avoid) or this solution fails.

What I'm considering is a compound clustered index using both columns: the GUID and the true incrementing ID. My thinking is that on the DW side it shouldn't have any impact, because there won't be any GUIDs to deal with - they'll be stripped out as data is pushed to the central repository from the spokes. It should act/perform just like a standard clustered index on an ID column, right?
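The compound index being described might look like this - again the names are illustrative, assuming a hypothetical dbo.Orders table with a nullable OrderId (DW-assigned) and a nullable OfflineGuid (spoke-assigned):

```sql
-- Non-unique compound clustered index: nullable key columns are allowed here.
-- On the DW side OfflineGuid is always NULL, so rows effectively order by
-- OrderId alone; at a spoke, detached rows (OrderId NULL) sort by their GUID
-- until the sync process assigns real IDs.
CREATE CLUSTERED INDEX CIX_Orders
    ON dbo.Orders (OrderId, OfflineGuid);
```

One design note: putting the ID first matters, so that normal rows behave like a plain ever-increasing clustered key; and because this index is not unique, SQL Server silently adds a 4-byte uniqueifier to rows with duplicate key values.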

The benefit - I hope - is that I can use the same schema at the spokes, and when 'detached' rows are added they'll maintain their uniqueness throughout. While the indexing may be dodgy while the link is down, we won't be dealing with millions or even thousands of rows using GUIDs - just a few until the link is restored - and once the two environments sync back up, all is well again.

So - please, tell me why this is the worst idea ever, and/or point out easier/better ways to manage this, or generally help me out.

TIA.

1 Answer

Answer by Hogan:

You are going to have crazy use cases if you don't have an authority for the data.

Here is what I mean. If the data the warehouse controls (that is, the data it will change when offline) is changeable by other parties (say, another warehouse), then when it comes online it is going to be really hard to resolve conflicts (without someone doing it by hand). Say I had 10 cases and now I have 20 cases, but someone else changed it from 10 cases to 5 cases (maybe they wanted to order them). How do you resolve this - should the value be 5, 10, 20, or 15?
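The conflict described above can be made concrete with a hypothetical Inventory table (table and column names are assumptions for illustration) - both updates happen while the parties are offline:

```sql
-- Starting state: ItemId 42 has Cases = 10.
UPDATE Inventory SET Cases = 20 WHERE ItemId = 42;  -- party A: 10 -> 20 (received stock)
UPDATE Inventory SET Cases = 5  WHERE ItemId = 42;  -- party B: 10 -> 5  (placed an order)
-- On reconnect the central DB sees two competing "latest" values for one row
-- and has no rule for picking one: last-writer-wins silently discards one
-- party's change, and reconciling to 15 (10 + 10 - 5) only works if the
-- deltas, not just the final values, were logged.
```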

This gets very complicated fast.

If you don't allow this, then you are safe: if each piece of data is owned by the one party that can modify it offline, you have a much simpler process - when that party comes back, it sends its updates, and those are the official correct values. Maybe another warehouse could make a request, but it could not change those values directly.
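Under that ownership rule, the reconnect step reduces to a one-way push of owned rows. A sketch, assuming a hypothetical Orders table carrying an owning StoreId and the nullable OfflineGuid from the question (DW/Spoke database names and @LocalStoreId are illustrative):

```sql
-- Each spoke pushes only the rows it owns and created while detached;
-- because no other party may modify them, these values win by definition.
INSERT INTO DW.dbo.Orders (StoreId, OfflineGuid, OrderDate, Amount)
SELECT StoreId, OfflineGuid, OrderDate, Amount
FROM   Spoke.dbo.Orders
WHERE  StoreId = @LocalStoreId       -- this store's own id (assumed parameter)
  AND  OfflineGuid IS NOT NULL;      -- rows created offline, pending merge
```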

I think you need to design these multi-user rules before you start thinking about your data model, because they will affect it.