How does DO fit in with CQRS?

Specifically: 1) CQRS on the write side is about sending commands instead of loading DTOs, making changes, sending the entire DTO back, and comparing values. It seems DO works similarly when using DisconnectedState, with method calls being recorded and replayed server-side. In the MVC model, you instead query for the entity and then call its methods (or use a session service to execute the "command").
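For illustration, the "record method calls, replay them server-side" idea can be sketched as follows. This is a minimal, language-agnostic sketch in Python; `OperationLog`, `record` and `replay` are made-up names for illustration, not DO4's actual DisconnectedState API:

```python
# Record domain method calls on the client, replay them on the server,
# instead of shipping whole DTOs back and forth and diffing them.

class Product:
    def __init__(self, id, price):
        self.id, self.price = id, price

    def change_price(self, new_price):
        self.price = new_price

class OperationLog:
    """Records (entity id, method, args) tuples for later replay."""
    def __init__(self):
        self.ops = []

    def record(self, entity_id, method, *args):
        self.ops.append((entity_id, method, args))

    def replay(self, store):
        # Server side: load each entity and re-apply the recorded calls.
        for entity_id, method, args in self.ops:
            getattr(store[entity_id], method)(*args)

# Client side: work disconnected, recording operations.
log = OperationLog()
log.record(1, "change_price", 42.0)

# Server side: replay the sequence against the authoritative store.
server_store = {1: Product(1, 10.0)}
log.replay(server_store)
print(server_store[1].price)  # -> 42.0
```

The payload sent over the wire is just the operation sequence, which is what makes this shape resemble CQRS-style commands.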

2) CQRS on the read side is about projecting DTOs directly from the data store (actually, often from a thin "read layer"), instead of materializing entities and then projecting from the entities to the DTOs. In CQRS, the data store for the read side can be a different source than that of the write side - and this different data source can be denormalized to remove joins (similar to OLAP's benefits). DO, on the other hand, uses prefetch paths (implicit or explicit) to materialize a graph of entities, and then (if applying the MVC pattern) projects onto DTOs that are presented on the view for editing.

My questions are:

1) I know DO has made a ton of optimizations to make querying/materializing/projecting really fast. Do you think there would be a big performance improvement with the CQRS approach?

2) On the read-side, when projecting onto DTOs, does DO actually materialize entities and then project onto the DTO? Or does it project directly from the data store?

3) I forgot my other questions... but I'd love to hear any comments you may have about anything related to CQRS. =)

There are some great documents about CQRS and its benefits available online.


asked Sep 08 '10 at 08:37

ara


edited Sep 14 '10 at 14:21


Alex Yakunin

I don't know your experience with CQRS, but we have been working on CQRS projects for the last 5 years, and I must say working with CQRS is terrible. Maybe when you start a small project with a small number of tables, there are only a few read/write methods per service. But when your project (in our case a framework) grows up to 500+ tables, you can end up with maybe 10 services (each with both read and write methods, in our case) or 20 services (10 read / 10 write), where each service has ~15 methods. With such complexity it is hard to maintain CQRS, at least for us. Now we are switching to an ORM => DO4 - finally!

(Sep 08 '10 at 14:57) Peter Šulek

IMO, CQRS usage is inevitable in some cases - at least, its partial implementation. Billing and other near-real-time systems with high load require something like this.

But I agree, designing the whole app around this pattern ~= adding unnecessary complexity. If you know a particular form is opened quite rarely and no high load is expected there, it's a bad case for CQRS.

(Sep 08 '10 at 15:58) Alex Yakunin

P.S. Writing an exact answer ;)

(Sep 08 '10 at 15:59) Alex Yakunin

Yes, exactly, Alex! In some cases it is good (as with many other technologies and techniques), but correct me if I'm wrong: I can't (and won't) imagine DO4 + CQRS. Is this even possible?

(Sep 08 '10 at 16:02) Peter Šulek

This is certainly possible - I'm writing the full answer now.

(Sep 08 '10 at 16:18) Alex Yakunin

One Answer:

1) I know DO has made a ton of optimizations to make querying/materializing/projecting really fast. Do you think there would be a big performance improvement with the CQRS approach?

I wouldn't expect anything beyond what CQRS and DO can separately offer here. Some performance-related features of DO may help, if you try to optimize for them:

  • If you process a set of commands in each transaction (probably a good idea with CQRS), DO can noticeably reduce the chattiness between the application server and the RDBMS. This should be attractive, since such transactions are normally very short. I.e. if you design your processing pipeline so that each transaction runs a set of SQL commands rather than just one or two, DO will try to optimize this.
  • The same applies to the query part: DO will help you run a set of queries together, if you use future queries.

But note that all these advantages play nearly the same role in regular applications as well.
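DO's batching happens internally, but the underlying reason it helps can be sketched generically. This is a plain-sqlite3 Python sketch, not DO: several write commands grouped into one short transaction instead of one round trip per statement.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance REAL)")
conn.executemany("INSERT INTO account VALUES (?, ?)", [(1, 100.0), (2, 0.0)])

# A "command batch": several writes grouped into one short transaction,
# sent to the database together rather than chattily, one by one.
commands = [
    ("UPDATE account SET balance = balance - ? WHERE id = ?", (30.0, 1)),
    ("UPDATE account SET balance = balance + ? WHERE id = ?", (30.0, 2)),
]
with conn:  # one transaction covers the whole batch
    for sql, args in commands:
        conn.execute(sql, args)

balances = dict(conn.execute("SELECT id, balance FROM account"))
print(balances)  # -> {1: 70.0, 2: 30.0}
```

The win is the same whether a CQRS pipeline or a regular application produces the batch, which is the point made above.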

2) On the read-side, when projecting onto DTOs, does DO actually materialize entities and then project onto the DTO? Or does it project directly from the data store?

Currently DO does this only after materializing the entities (unless the objects you populate are pure DTOs). We implemented it this way because we were thinking about security (possible enforcement of security rules in property getters).

But we already decided (maybe 6 months ago) to switch to direct materialization here. It isn't implemented yet, but if any of our clients really needs it, we'll implement it within a few weeks.
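The difference between the two read paths can be sketched generically (plain Python + sqlite3, not DO's actual materialization pipeline; the table and class names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE product (id INTEGER, name TEXT, price REAL)")
conn.execute("INSERT INTO product VALUES (1, 'Widget', 9.99)")

# Entity-based read path: rows -> full entities -> DTOs (two hops).
class ProductEntity:
    def __init__(self, id, name, price):
        self.id, self.name, self.price = id, name, price

entities = [ProductEntity(*row)
            for row in conn.execute("SELECT id, name, price FROM product")]
dtos_via_entities = [{"name": e.name, "price": e.price} for e in entities]

# CQRS-style read path: project DTOs straight from the rows (one hop),
# fetching only the columns the view actually needs.
dtos_direct = [{"name": n, "price": p}
               for n, p in conn.execute("SELECT name, price FROM product")]

print(dtos_direct)  # -> [{'name': 'Widget', 'price': 9.99}]
```

The direct path skips constructing entity objects entirely - but, as noted above, it also skips any logic (such as security checks) living in the entity's getters.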

3) I forgot my other questions... but I'd love to hear any comments you may have about anything related to CQRS. =)

Likely, DO should be good at producing denormalized entities as well - its prefetch & batching capabilities may help here.

If you deal with such entities (I'd think of them more as documents), it doesn't matter much whether you materialize them into DTOs or not. Each would be a relatively heavy object containing e.g. a BLOB or a string aggregating the data for a whole page or form.

I'd prefer this (document-like) denormalization style. "Wide" tables are less attractive - you have to care about their structure, etc.; so I'd deal with document-like objects instead, and keep as real columns only the ones that matter for queries and indexes.
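The document-like style can be sketched as follows (generic sqlite3 + JSON, not a DO feature; the table and column names are made up): the whole denormalized document goes into one string/BLOB column, and only the queryable fields stay as real, indexable columns.

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
# Only the fields needed for queries/indexes get real columns;
# the rest of the denormalized "document" lives in one TEXT blob.
conn.execute("""CREATE TABLE order_doc (
    id INTEGER PRIMARY KEY,
    customer_id INTEGER,      -- queryable, indexed
    total REAL,               -- queryable
    body TEXT                 -- the whole denormalized document
)""")
conn.execute("CREATE INDEX ix_customer ON order_doc (customer_id)")

doc = {"customer": {"id": 7, "name": "ACME"},
       "lines": [{"product": "Widget", "qty": 3, "price": 9.99}],
       "total": 29.97}
conn.execute("INSERT INTO order_doc VALUES (?, ?, ?, ?)",
             (1, doc["customer"]["id"], doc["total"], json.dumps(doc)))

# Read side: one indexed lookup, no joins; the view gets the whole document.
row = conn.execute("SELECT body FROM order_doc WHERE customer_id = 7").fetchone()
loaded = json.loads(row[0])
print(loaded["lines"][0]["product"])  # -> Widget
```

Compared to a "wide" table, adding a field to the document here needs no schema change - only fields that participate in queries force you to care about table structure.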

I can also add a few notes:

  • I didn't understand why you thought about DisconnectedState with CQRS. DO4 uses operation sequences in DisconnectedState (and it's easy to produce such sequences on your own and serialize/deserialize them), but in general this isn't related to CQRS... Or do you mean you need CQRS in a WPF application, and query methods must return something like our MovableResult (a DisconnectedState hosting entity states + any graph built of these entities) from the WCF sample?
  • As I wrote, I'd avoid CQRS, if possible. If you aren't sure you need it, it's simply better to avoid it. The pattern is good only when the benefits of its usage are essential. Otherwise, it's just additional complexity - like implementing N-tier instead of 2-tier when N-tier actually isn't necessary.

Note about patterns in general

During the last N years I have constantly seen the following: there are lots of really nice patterns that work great in particular cases. But software design books focus mainly on exposing the shiniest edges of such patterns - i.e. they don't show how painful it can be if you choose the wrong one.

Let's take the repository pattern as an example. It's covered in almost any book related to DAL design. On the other hand, this pattern isn't a good choice if you're going to build a system of e.g. 1K types with pretty complex interactions - instead of reducing the complexity by at least persisting state automatically (DRY!), it makes you care about this yourself. It's obvious that big systems can't rely on explicit memory deallocation and need GC, but it's still not so obvious that a repository isn't good in this case, and that a more intelligent solution is necessary (DO, NH and EF offer them - each with its own differences).

Imagine how I react (at least inwardly) when someone asks me, "How do I implement the repository pattern over DO?"

The same can be said about virtually any other pattern. There is no "uber cool" solution for every case; moreover, any pattern implies some associated development cost. So it's better to be very careful when making decisions that affect the whole architecture of your solution, such as the "CQRS everywhere" option.

Or maybe that's because I've become pretty skeptical about ideal solutions ;) Our own experience shows that fighting for ideals in software architecture is a mistake in many, many cases: you should always balance it against the implementation cost. The cost of better clarity, maintainability or extensibility is frequently too high - especially within the scope of a particular project.

All I wrote here isn't much related to DO itself - it relates to other projects we've run recently. E.g. we have a client that wastes maybe twice as much money and time developing software exactly as described by Microsoft:

  • they always write 100% complete specs; step aside from the spec and you're shot;
  • 3-tier architecture (for a web site!);
  • WCF services exposing all the methods for the web part (actually, they need just a few methods for integration!);
  • classic ASP.NET (stuff like ASP.NET MVC isn't considered solid enough);
  • etc.

On the other hand, they don't really care about coding guidelines, XML documentation, unit tests, CI and so on. In short, they aren't agile at all, and they probably understand this. But they're very fanatical about this approach, and I'm really curious who (or what) is responsible for patching their brains this way.

And on the contrary, there are projects where our developers have almost complete freedom - recently I described one such project. And I should say, the projects where we're free to choose the tools and approach (we normally follow agile practices) are generally more successful from a cost point of view.

I've also noticed that lots of successful projects (maybe with only rare exceptions) were designed in the worst traditions :) - i.e. frequently they're written to be brought to market as fast as possible. Normally it's not a big problem to make them scale well later, if there are no completely crazy mistakes. The example I immediately remembered is a Russian site. It was obviously trash inside during its first years after launch. When they were trying to migrate it to a web farm, there were absolutely funny mistakes - frankly speaking, I didn't believe they'd succeed at all. But now it's shiny - they migrated it, and it seems they finally rewrote the initial codebase almost completely. It's full of AJAX now.

So, as a conclusion: think for yourself, and don't be too fanatical about patterns ;) The goal is to bring the software to market fast; how it works inside is actually a secondary question that's interesting mainly to you. Unusual design decisions can bring very good results (I immediately remembered "Beautiful Code" - the solutions shown there frequently look absolutely crazy from a design point of view, but they worked, and worked well).

Unfortunately, this is rarely said in software design books. Maybe because most of them target enterprise software development (which is what we deal with as well) plus large teams of code monkeys (fortunately, we don't have such a team) ready to violate DRY a thousand times - actually, I don't know.

answered Sep 08 '10 at 17:52


Alex Yakunin

edited Sep 08 '10 at 18:03


powered by OSQA