Hello everyone!

Can you please advise on the best approach to using DO with the scenario described below? We have to support the following requirements:

1.) There is one centralized database which contains about 50,000 records, each record containing a few simple-type fields, some references and one byte[] (GIS geometry) with an average size of 3,000 bytes

2.) There are 10-20 concurrent WPF client apps which connect to the centralized database over the web

3.) Client apps should download all of the records (including the prefetched byte[] geometry field) and use them in a disconnected manner for viewing purposes (read-only)

4.) The client app can edit/add/remove one record at a time and save changes immediately to the remote database

5.) The client app should be able to serialize the disconnected data and use it later (if that is not possible, it can alternatively download all the data each time the app starts)

6.) The client app should be able to synchronize only the changes (made by other users) from the remote database, without downloading all of the records each time. Synchronization should be performed very frequently (a 1-minute period)

The basic idea is to use DO and expose it through HTTP using WCF Data Services. Since there is no sync support, I would do this myself using an ad hoc approach with some tombstone tables and synchronization logic/queries.
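To make this concrete, here is a rough sketch of what I have in mind (the entity and field names are invented just for this example, and the mapping attributes are written the way the usual DO4 samples show them, so the exact API may differ):

```csharp
using System;
using Xtensive.Storage; // DO 4.x namespace; may differ in other versions

// Every synchronized record carries a "last modified" marker...
[HierarchyRoot]
public class MapFeature : Entity
{
    [Field, Key]
    public int Id { get; private set; }

    [Field]
    public byte[] Geometry { get; set; }          // ~3000 bytes of GIS data

    [Field]
    public DateTime LastModifiedUtc { get; set; } // bumped on every insert/update
}

// ...and deletions are recorded in a tombstone table, so a client can ask
// "what changed or disappeared since my last sync?" once a minute.
[HierarchyRoot]
public class Tombstone : Entity
{
    [Field, Key]
    public int Id { get; private set; }

    [Field]
    public int DeletedFeatureId { get; set; }     // key of the removed MapFeature

    [Field]
    public DateTime DeletedUtc { get; set; }
}
```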

Would that be possible? Is there any better way? Is there something specific about DO to deal with?

Regards, Sandi

This thread was imported from our support forum. The original discussion may contain more detailed answers. Original topic by Zgckula.

asked Mar 10 '10 at 11:00

One Answer:

Alex (Xtensive) wrote:

> Would that be possible?

Definitely - it all depends on you...

> Is there any better way?

I don't know what it could be compared to...

> Is there something specific about DO to deal with?

I can list the following specific features / ideas:

  1. Use automatic generic type registration, which is briefly touched on here. It allows you to associate a supplementary type with each of your own types, e.g. SyncInfo<T> (a rough sketch of this is shown after the list).

  2. Use the integrated services (e.g. DirectXxxAccessor) to study/modify the state of persistent objects on synchronization. See http://goo.gl/nHWN

  3. I'd recommend you limit your "sync precision" to entities rather than their fields. It should be much easier.

  4. The overall approach I'd use:

  5. Update the associated SyncInfo<T> on entity update. It's better to do this once per transaction, roughly as described in the post about audit logging. Or use [Version] to put a "last modified in transaction" marker there (this must be much faster; moreover, the "single update per transaction" rule already works for [Version]).

  6. Ensure it's easy to gather all the changes you need to propagate to a particular client. The query doing this should require ~O(N) time, where N is the count of objects whose changes must be propagated to that client (the sketch after this list shows one way to write such a query). See e.g. Microsoft Sync Framework - the whole idea is described there.

  7. Use serialization (there is no description yet, but there are tests for it - look for SerializationContext in the tests to identify them). Possibly we will have to do some tuning here, since the feature isn't officially released, but I suspect only minor changes might be necessary (so this won't be an issue). So basically, you would just serialize/deserialize the entities to propagate the changes to a particular client. An alternative is your own change propagation protocol.

  8. Think about conflict resolution.

  9. Think about the sync protocol in general. Sync implies distributed interaction, so you must plan this very carefully to ensure sync cannot go wrong in any case (the most important case is the failure of any peer during a sync action).

  10. Pay attention to Microsoft Sync Framework. Likely it's a good idea to try to integrate with it. As far as I remember, it is designed for P2P sync, but your case is a particular scenario of P2P sync (all-to-one), so on the one hand it is a more generic solution, and thus your particular case could be optimized further; on the other hand, the more generic solution might be interesting as well.
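A very rough sketch of how items 1, 5 and 6 could fit together is below. SyncInfo<T>, its fields and the query helper are hypothetical - this is not a ready-made DO API, and the exact query entry point (Query.All<T>() vs. session.Query.All<T>()) depends on the DO version:

```csharp
using System.Collections.Generic;
using System.Linq;
using Xtensive.Storage; // DO 4.x namespace; may differ in other versions

// A supplementary type registered for each synchronized entity type via
// automatic generic type registration (item 1).
[HierarchyRoot]
public class SyncInfo<T> : Entity
    where T : Entity
{
    [Field, Key]
    public int Id { get; private set; }

    [Field]
    public T Target { get; set; }           // the tracked entity; for deletions you'd store its key instead

    [Field]
    public long ChangeVersion { get; set; } // "last modified in transaction" marker, bumped once per transaction (item 5)
}

public static class SyncQueries
{
    // Gathering the changes for one client (item 6): with an index on ChangeVersion this is
    // roughly O(N) in the number of entities changed since the client's last sync,
    // not in the total table size. Must run inside an open session/transaction.
    public static List<SyncInfo<T>> GetChangesSince<T>(long lastSeenVersion)
        where T : Entity
    {
        return Query.All<SyncInfo<T>>()
            .Where(info => info.ChangeVersion > lastSeenVersion)
            .ToList();
    }
}
```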


Realis wrote:

Hi Alex!

DO is really great in the way it handles referential integrity, lazy loading, etc. So we have been trying hard these days to find a way to still use DO on the client side while supporting remote storage, based on the scenario described earlier in this post.

Here is the idea: we would like to use DO (with disconnected state) on the client side as usual. To support remote storage, we would implement our own WCF storage provider with SQL Server back-end storage. In fact this isn't really a new storage; what we want to do is put a communication layer in between. This should be done in a generic way: the client app should only differ in config settings when using local or remote storage, and it has to fully support all DO features (see the figure attached).
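Just to illustrate the intent, the communication layer would expose something like the hypothetical WCF contract below. The interface and its members are entirely made up by us - DO does not define such a contract, which is exactly the problem:

```csharp
using System.ServiceModel;

// Purely hypothetical contract for the "communication layer" described above.
[ServiceContract]
public interface IRemoteStorageProvider
{
    // Executes a query against the server-side SQL Server storage; the query itself
    // would have to be shipped in some serialized form the server can rebuild.
    [OperationContract]
    byte[] ExecuteQuery(byte[] serializedQuery);

    // Applies a batch of entity changes (inserts/updates/removes) collected
    // by the client-side DisconnectedState.
    [OperationContract]
    void ApplyChanges(byte[] serializedChangeSet);
}
```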

After a brief examination of the DO source code, it seems there is no simple contract to implement this.

Can you advise? Is there any explanation of the pattern for implementing a custom storage provider?

Regards, Sandi


Alex (Xtensive) wrote:

This case is really more complex; I can enumerate just a few of the issues:

  • You must be able to serialize RSE queries. Serializing LINQ queries can be a bad idea (e.g. anonymous types may not exist on the server at all - see the small example after these bullets). So you should know deeply how this stuff works.

  • To reduce chattiness, such a provider must intelligently queue e.g. future queries. Remember that currently this queuing is implemented at the SQL provider level (and is specific to SQL), so you won't be able to reuse it.
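As a small illustration of the first point, a typical client-side query like the hypothetical one below projects into an anonymous type that exists only in the client assembly, so a server receiving a serialized form of it would have nothing to materialize the result into (MapFeature is the made-up entity from the sketch in the question):

```csharp
using System;
using System.Linq;
using Xtensive.Storage; // DO 4.x namespace; may differ in other versions

public static class ClientQueries
{
    public static void LoadChangedFeatures(DateTime lastSync)
    {
        // The anonymous type { f.Id, f.Geometry } is generated by the client compiler
        // and is not present in any assembly deployed on the server.
        var changed = Query.All<MapFeature>()
            .Where(f => f.LastModifiedUtc > lastSync)
            .Select(f => new { f.Id, f.Geometry })
            .ToList();
    }
}
```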

So I'd advise avoiding this until such a provider is implemented by us (there are such plans - I remember I wrote about a "remote://" protocol, but I don't remember where ;) ). We understand this would be almost an ideal solution for N-tier development with DO4 ("almost" because it requires most of DO to run on the client, which isn't always possible - e.g. currently this won't work with Silverlight), and thus this will be done.

Btw, why do you dislike the original idea about sync? I.e. what issues are stopping you here? I'll try to help resolve them. As far as I can see, it should be really much easier to implement.

answered Mar 10 '10 at 15:35

I'd like to show some magic here, but it seems it will appear only when we implement the remote:// protocol or sync... Until that is done, the only available option is to take care of this on your own.

(Mar 10 '10 at 15:35) Alex Yakunin

Realis wrote: I expected your answer would be like this, but I hoped for some magic.

If that is the case, we'll move back to the MS Sync option. Before that, we need to study the Sync Framework first. I really hope it is more evolved than MS Entity Framework.

Regards, Sandi

(Mar 10 '10 at 15:35) Editor

Yep, such a solution is really nice in many cases... We'll add integrated support for it some day, but for now we must finish the more vital features first.

(Mar 10 '10 at 15:35) Alex Yakunin

Zgckula wrote: Thank you Alex!

What I don't like here is WCF Data Services, which somehow undermines the beauty of using DO directly. I will consider using DO on local storage and Microsoft Sync Framework for synchronization.

Best regards, Sandi

(Mar 10 '10 at 15:35) Editor