Let’s say I have a team updating a database in the office, and another team that must work on the same database but, unfortunately, isn’t in the office: they are out in the field.
The concept is simple: make a copy of the database on the field team’s handheld/mobile devices and let them update it. When they come back to the office, we detect only the differences between the office database and the field database and import only those changes. These changes are usually called the delta records.
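The delta-detection step above can be sketched in a few lines. This is a minimal illustration, not how any particular DBMS implements it: I’m assuming each table is held as a dict of rows keyed by primary key, and the table contents are hypothetical.

```python
def compute_delta(office, field):
    """Compare the office copy against the field copy and return
    only the records that changed while the field team was away."""
    delta = {"inserted": [], "updated": [], "deleted": []}
    for pk, row in field.items():
        if pk not in office:
            delta["inserted"].append(row)        # new on the field device
        elif office[pk] != row:
            delta["updated"].append(row)         # modified in the field
    for pk, row in office.items():
        if pk not in field:
            delta["deleted"].append(row)         # removed in the field
    return delta

# Hypothetical "work orders" table, before and after a field trip.
office_db = {1: {"id": 1, "site": "A", "status": "open"},
             2: {"id": 2, "site": "B", "status": "open"}}
field_db  = {1: {"id": 1, "site": "A", "status": "closed"},  # updated
             3: {"id": 3, "site": "C", "status": "open"}}    # inserted; id 2 deleted

delta = compute_delta(office_db, field_db)
```

Only the three delta records travel back into the office database; the unchanged rows are never touched.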
This process is called replication, and it has historically caused its share of problems.
DBMSs allow you to replicate either part of the database or the entire database.
Back to the Roots
So replication was invented by database vendors mainly for this reason: keeping things in sync. Why don’t field users simply update the database directly over the Internet and save all this delta-changes hassle? Performance is one reason; security and consistency are others.
Let’s Enhance This Architecture
Let’s go back to the roots and try to re-invent the wheel here. Why is updating the office database from the field slow? Because whatever database software you are using is designed for high interactivity between client and server, which works perfectly on a LAN but becomes slow over a limited Internet connection.
If we redesigned this software, or at least created an interface for thin clients that relies on compressing heavy objects, sending named references instead of the heavy objects themselves, or serializing the objects, all of this combined would create a more convenient environment for field users and would also keep a centralized, up-to-date database.
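To make the serialize-and-compress idea concrete, here is a sketch of what such a thin-client payload could look like. The function names and record shapes are my own assumptions for illustration: the point is that instead of many chatty per-row round trips, the client serializes the whole batch of delta records, compresses it, and sends one small payload over the slow link.

```python
import json
import zlib

def pack_delta(records):
    """Serialize a batch of delta records and compress it for a slow link."""
    raw = json.dumps(records).encode("utf-8")  # serialize the objects
    return zlib.compress(raw)                  # compress the heavy payload

def unpack_delta(payload):
    """Reverse of pack_delta: decompress, then deserialize."""
    return json.loads(zlib.decompress(payload).decode("utf-8"))

# Hypothetical delta batch collected on a field device.
records = [{"id": i, "status": "closed", "note": "inspected on site"}
           for i in range(100)]

payload = pack_delta(records)
# One compressed upload instead of 100 interactive statements;
# repetitive rows compress well, so the payload shrinks considerably.
assert unpack_delta(payload) == records
```

The same idea underlies "sending named objects instead of heavy ones": the client transmits a short identifier, and the server resolves it to the full object on its side, so the heavy bytes never cross the slow connection at all.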