I am using an SQL CE Express database with a ClientProfile and AutoActivation in .NET 3.5.

I have an Entity which contains a Start and an End point. From a collection of objects I need to get a range. (The collection contains non-persistent objects.)

public class Foo : Entity {
    public float Start { get; set; }
    public float End { get; set; }
}

foreach (var item in collection) {
    if ((item.distance >= Foo.Start) && (item.distance <= Foo.End)) {
        // Do something
    }
}

When benchmarked, the above operation takes about 200 ms, which is too slow for my liking. I changed it to the following:

float start = Foo.Start;
float end = Foo.End;
foreach (var item in collection) {
    if ((item.distance >= start) && (item.distance <= end)) {
        // Do something
    }
}

The above operation takes less than 5 ms.

Is there a reason using the Entity's properties directly takes so long? I retrieved the object from the database ages ago, and the properties are not marked with LazyLoad. How can I speed this up? It's not doable in the current situation to use a ServerProfile.

Using the second implementation throughout my software is not an option, and the 190 ms difference is noticeable when requesting the data in the application.

asked Jun 14 '11 at 15:10



3 Answers:

Is the collection fetched and iterated in the same transaction?

using (var transactionScope = session.OpenTransaction()) {
    var items = session.Query.All<...>(...);

    foreach (var item in items) { ... }
}


answered Jun 14 '11 at 15:32



No, the session is auto-activated, so I don't open any transactions of my own. The collection of objects being iterated does not belong to the database. (They don't extend Entity or Structure; they're regular objects.)

The Foo object is retrieved at the start of the application and held by a service. Then at some point I have a collection of objects ranging from 0 to 1000. Foo will for example have Start == 10 and End == 250. So that's the range I need from my collection.

All my tests indicate that it's the use of Foo.Start and Foo.End that makes the operation so slow.

(Jun 14 '11 at 16:38) jensen


The problem is that every time you access Foo.Start or Foo.End, there is a hit to the DB to verify that the data in Foo.Start/End is current, because you're relying on automatic transactions.

Code that manipulates entities should be wrapped in transactions, like the example above, even if you are using auto-activated sessions.

In your case, since you're caching the entity locally from the start of the application, I suggest you either use DisconnectedState or cache the values of Foo.Start/End instead of caching Foo itself.
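A minimal sketch of the value-caching approach: read the persistent properties once inside a single short transaction, then run the hot loop against plain locals. The names `session`, `foo`, and `collection` are placeholders for your own objects, and the `Complete()` call assumes the usual DataObjects.Net transaction-scope pattern.

```csharp
// Read the DB-backed values once, inside one transaction scope
float start, end;
using (var t = session.OpenTransaction()) {
    start = foo.Start;   // single validated read
    end = foo.End;
    t.Complete();
}

// The hot loop now touches only locals: no per-access transaction cost
foreach (var item in collection) {
    if (item.distance >= start && item.distance <= end) {
        // Do something
    }
}
```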

Or you can keep the approach from your question; the difference is that there will still be a DB hit every time that code runs.

The larger point: if you're making this mistake in this one example, you've likely made it throughout your entire app. Make sure to wrap client code in transactions, otherwise your app will perform terribly.

If you're using ASP.NET MVC, there is a SessionManager in Xtensive.Practices.Web that will wrap action methods in transactions for you.

answered Jun 14 '11 at 18:08



edited Jun 14 '11 at 18:12

I'm using WPF (MVVM) in a desktop application.

I guess DisconnectedState is the way to go for me, though I'm not entirely happy about it. The reason I chose AutoActivation was precisely to avoid using transaction scopes everywhere, because I simply don't know where a transaction should start or end.

(Jun 15 '11 at 02:39) jensen

No product can solve that problem for you. You have to know the boundaries of your transactions, and I can't see why you wouldn't be able to.

The idea behind most use cases is that you pull data from the DB, populate it onto the screen in DISCONNECTED form, let the user read/modify it, then push the data back to the DB once the user clicks Save.

In your use case, all you want to do is cache Start/End, either once when the app starts, or once every time your code is executed. Those are the boundaries of your transactions.

(Jun 15 '11 at 02:45) ara

I still don't get it. There is no proper example on how to use DisconnectedState, so it's hard for me to figure it out.

I don't want to cache Start and End. I want to cache the entire object. But I don't want to use another data layer. Then I might as well dump DO entirely. The reason I chose it was that it didn't require me to create an entire data layer and a database layer.

I created a small example of what I want and uploaded it to http://www.aimproductions.be/jensen/dss.zip

Can you please point me in the right direction, 'cause I'm simply not seeing it.

(Jun 15 '11 at 04:17) jensen

It's fine if you want to cache the whole object, even if all you want are some of its property values. You can continue using the approach you used in your question or use DisconnectedState. I'm not familiar with DS, so wait for X-tensive to weigh in on it.

You don't need to worry about a DAL or the DB; DO will take care of that. But this isn't about the DAL or the DB, and it is NOT specific to DO. All ORMs work this way.

(Jun 15 '11 at 04:26) ara

Just keep this in mind for OTHER parts of your app:

When you fetch entities, their values are only "valid" within a transaction scope. If you don't specify one, the transaction ends right after the entities are fetched. So when you later access their properties, the data is considered "invalid" and there is another trip to the DB to refresh the entity's property values.

To prevent that, wrap your fetch and property accesses in the same transaction scope, as in the example from my first post. Or use DisconnectedState.
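The point about sharing one scope can be sketched like this. It is only an illustration: `Foo`, `session`, and `collection` stand in for the poster's own types, and the query shape is assumed.

```csharp
// Fetch and property access share one transaction scope, so the entity's
// state stays valid for the whole block and no refresh round-trips occur.
using (var t = session.OpenTransaction()) {
    var foo = session.Query.All<Foo>().First();  // fetch inside the scope

    foreach (var item in collection) {
        // Still inside the same scope: no per-access DB hit
        if (item.distance >= foo.Start && item.distance <= foo.End) {
            // Do something
        }
    }
}
```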

(Jun 15 '11 at 04:29) ara

Again, this is not specific to DO. Any non-POCO-based ORM operates this way. And in my opinion this is not an inconvenience; it is a control mechanism that is very useful, even necessary, for writing performant code.

Even if one sees it as an inconvenience, it does NOT justify switching to a POCO-based ORM on this alone, because POCO has many other disadvantages (and advantages) that must be considered!

(Jun 15 '11 at 04:32) ara

DisconnectedState is the only option for me then. But the sample provided is not sufficient. It hardly uses the DisconnectedState. I might have done something wrong in my sample, but at the moment it still takes 200ms to iterate over a collection (of non-DB objects) and use a DB object property as a comparison.

And I can't cache those properties everywhere I want to use them.

(Jun 15 '11 at 04:54) jensen

I haven't used DS so I can't help much. But try:

Sample for DS: http://code.google.com/p/dataobjectsdotnet/source/browse/#hg/Xtensive.Storage.Samples/Xtensive.Storage.Samples.Wpf

Manual for DS: http://help.dataobjects.net/##DataObjects.Net%20v4.4/html/7b45407a-8824-4b32-a260-c1ac661f22a3.htm

Tests for DS: http://code.google.com/p/dataobjectsdotnet/source/browse/Xtensive.Storage/Xtensive.Storage.Tests/Storage/DisconnectedStateTest.cs#289

(Jun 15 '11 at 04:57) ara

Jensen, Ara,

Let me explain a bit the situation.

Firstly, starting from version 4.4, DisconnectedState is tightly integrated with Session and is used automatically when SessionOptions.ClientProfile is set, so you no longer need to employ it directly.

Secondly, Ara was absolutely correct when he described the importance of using transactions while operating on persistent entities. After adding a transaction scope around the whole batch of operations in the sample, it executes 10 times faster. Here is the code:

using (var t = service.ClientConnection.Session.OpenTransaction()) {
    DateTime start = DateTime.Now;
    for (int i = 0; i < 5000; i++) {
        if (i > current.Start && i < current.End) {
            // Do something
        }
    }
    DateTime end = DateTime.Now;
}

So, without the outer transaction, DataObjects.Net has to open an automatic transaction for every persistent property access, and this drastically affects overall performance.

P.S. Jensen, I could send you the updated sample if you wish. Just send an e-mail to support@x-tensive.com and I'll attach it to the answer.

answered Jun 15 '11 at 05:11


Dmitri Maximov

edited Jun 15 '11 at 05:17

I partially achieved the same result with the example provided by Ara on the Google Code page. Overall, the time dropped from 200 ms to 20 ms, which is indeed an improvement. Caching the Start and End properties in local variables makes it drop to 0 ms.

Can I safely combine this with AutoActivation? (It seems to work.) There are places in the software where I don't care about speed and where it is almost impossible to wrap the use of a property in a transaction scope. It just adds 3 extra lines of code each time, which is ugly and horrible to maintain.

(Jun 15 '11 at 05:19) jensen

The answer is Yes, they can be combined without any harm.

(Jun 15 '11 at 05:22) Dmitri Maximov