This is about the use of lookup data in WPF dialogs. For a dialog with person entities these are, for example, values for salutations and nationalities, or, if addresses are assigned, countries, regions, etc. So that the initial delay of the dialog stays as short as possible, all database activity for the initialization should take no more than 0.1 to 0.2 seconds. For quick access to these rarely changed values, a DisconnectedState is already filled with them at program start.
In the actual MDI WPF dialog, the cached values are merged into the dedicated session (ClientProfile) with Session.DisconnectedState.Merge(App.DSPersonDlgCache). Unfortunately, this merge alone already takes about 0.3 seconds. The lookup data are then provided via queries to the different controls.
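For context, the setup described above might look roughly like the following sketch. It assumes DataObjects.Net 4.4-style API; only DisconnectedState, Merge, and App.DSPersonDlgCache are taken from the question, the entity names and the Attach/Connect pattern are assumptions:

```csharp
// Sketch only - DO 4.4-style API assumed; entity names are illustrative.

// At application start: fill a DisconnectedState with the lookup data once.
var cache = new DisconnectedState();
using (var session = Session.Open(domain))
using (cache.Attach(session))      // bind the DS to this session
using (cache.Connect())            // allow it to fetch from the DB
using (var tx = Transaction.Open())
{
  session.Query.All<Country>().ToList();     // lookup tables are small and
  session.Query.All<Salutation>().ToList();  // rarely change, so load them all
  tx.Complete();
}
App.DSPersonDlgCache = cache;

// When a dialog opens: merge the cached state into the dialog's own DS.
// Per the question, this merge alone takes ~0.3 s.
Session.DisconnectedState.Merge(App.DSPersonDlgCache);
```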
1. Despite the data being effectively cached in the DisconnectedState, an unwanted DB access is generated for each query, e.g. "SELECT [a].[Id], 110 AS [TypeId], [a].[Code2], [a].[Code3], [a].[CodeInt], [a].[PhoneCode], [a].[CarCode] FROM [dbo].[Country] [a];". How can this DB access be avoided?
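One common way to avoid the repeated round trip (also suggested in the answers below) is to materialize each lookup query once into a plain list and bind the controls to the in-memory result. A minimal sketch; the cache class and member names are assumptions:

```csharp
// Hypothetical per-dialog lookup cache: each query runs once; subsequent
// reads are pure in-memory operations (LINQ to Enumerable).
public static class LookupCache
{
  public static List<Country> Countries { get; private set; }

  public static void Load(Session session)
  {
    // ToList() materializes the result; later filtering/sorting of
    // Countries happens in memory and issues no SQL.
    Countries = session.Query.All<Country>().ToList();
  }
}

// Binding example: no DB access after the initial Load.
comboCountry.ItemsSource = LookupCache.Countries
  .OrderBy(c => c.Code2)   // LINQ to Enumerable, not a storage query
  .ToList();
```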
2. Using Session.Query.ExecuteDelayed( qe => ... ) indeed needs only one DB access in total, but unfortunately it cannot be combined with .Prefetch(...) (an error message is raised).
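For reference, the delayed-query pattern from item 2 batches several lookup queries into a single round trip. A sketch of how this might look; the entity names and the exact lambda shape are assumptions based on the call quoted above:

```csharp
// All delayed queries are collected and sent as one batch when the first
// result is enumerated, so several lookups cost a single DB access.
var countries   = Session.Query.ExecuteDelayed(qe => qe.All<Country>());
var salutations = Session.Query.ExecuteDelayed(qe => qe.All<Salutation>());

// First enumeration triggers the single batched round trip.
CountryList    = countries.ToList();
SalutationList = salutations.ToList();

// Note: combining this with .Prefetch(...) fails in DO 4.4, as described above.
```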
3. With the query List1 = Session.DisconnectedState.All<t1>()...ToList(); there is no DB access anymore, but the performance is very poor! If the lookup data are read in a separate transaction, execution is, oddly enough, much faster, but still too slow!
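The "separate transaction" variant from item 3 might look like the following sketch; wrapping all reads in one explicit transaction avoids per-query automatic-transaction overhead (an assumption based on the observation above and on the transaction advice given later in the thread):

```csharp
// Sketch: read all cached lookup data inside one explicit transaction
// instead of letting each query open and commit an automatic transaction.
using (var tx = Transaction.Open())
{
  List1 = Session.DisconnectedState.All<t1>().ToList();  // in-memory, no SQL
  List2 = Session.DisconnectedState.All<t2>().ToList();  // t2 is illustrative
  tx.Complete();
}
```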
What further options are there in DO 4.4 (O2O mapping would only be a last resort) to increase the speed? The planned Caching API in DO 4.5 would be a great help.

4. Session.SaveChanges(): after the modification of only one person, for example LastName "Maier" -> "Meier", SaveChanges() also re-reads all lookup data from the DB, which is unwanted. Right now we use a workaround with Operations.Replay(...). How can this re-reading be prevented?
|
Your tips have helped a lot. The best results so far:
Now:
Thank you! |
On 1: To avoid this, you must either cache query results on your own (e.g. by caching lists) or use LINQ to Enumerable over the cached collections.

On 2:
It's also important that Prefetch optimizes the sequence of queries it plans to run by relying on delayed queries - i.e., in most cases there will be just one additional batch. Such an approach has its own pros and cons compared with the standard one (with JOINs):
Pros:
Cons:
We use a similar strategy for TPT inheritance as well - i.e., try to load less first without torturing the DB, and then load everything else using just index seeks.

On 3: Everything depends on the number of entities you cache in the DS. If the timings you show are produced on a few thousand entities, that's not OK. But if this result was produced on, e.g., 100K entities, I'd consider it fully acceptable. Also note that
Could you describe your specific case? Mainly, how many entities do you enumerate this way? Using transactions is also a necessary performance optimization here (or you should turn off auto-transactions in the session options).
Actually, it won't: the L2 cache will operate on

On 4: True, that's the default behavior. DO does this to refresh the versions of the entities to the actual ones. There are several ways to affect this behavior:
On 4: Just to make sure I understand the scope of the version-update functionality that occurs after an update: is only the Version updated in the DisconnectedState entities from the DB? Or all of the entity's properties as well? Or all of the properties when a difference is found in the Version? I ask because some business logic on the server side might modify some properties, and if only the Version gets updated, the entities in the DisconnectedState may become partially out of sync...

On 4: Does it make sense to have another option where only the entities modified by the update are refreshed afterwards? This would seem to cover most cases at minimum performance cost.

On 4: 1) Yes, only versions are updated on saving, not the data itself. If you need to update entity states, you should do this manually. 2) Currently it's recommended to completely refresh the whole part that might be affected after an update by queries (i.e., have a special routine for this). Dictionaries and any other "static" content can be cached in a separate DS and merged into the current one after or before such a refresh. |
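The refresh routine recommended in the last answer could be sketched as follows: after SaveChanges(), re-query only the part of the model that server-side logic may have changed, while the lookup data stays in its own cached DisconnectedState. The Person type, its Id, and the refresh scope are assumptions for illustration:

```csharp
// After saving, only entity versions in the DisconnectedState are updated,
// so re-read the data that server-side logic may have modified.
Session.SaveChanges();

using (var tx = Transaction.Open())
{
  // Refresh just the affected part, e.g. the edited person.
  currentPerson = Session.Query.All<Person>()
    .Single(p => p.Id == currentPerson.Id);
  tx.Complete();
}

// Lookup data stays in a separate DisconnectedState (App.DSPersonDlgCache)
// merged before/after such a refresh, so it is not re-read from the DB.
```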