So a while back I set up a system for a customer. They are not a tech company, but rather a more traditional business constructed around “buy stuff for cheap and sell for more”.
The system (some aspects of its history and evolution are material for a few other blog posts) automates a lot of the pre-processing for incoming buy and sell requests, filtering a very noisy stream of incoming data into relevant pieces of information that are handed to the salespeople quickly, making the business far more productive and competitive than it would be without it.
Given the importance, the system needs to be pretty robust. Given the number of moving parts, that is not a trivial task.
The backend storage for the system’s internal state (it also coordinates with several other data sources) was MongoDB.
The setup – a single mongod process, running version 1.8.something (the latest at the time) with journaling on, and all write ops from the client requiring a full ack and a flush-to-disk (fsync) to complete. It is also running on a machine that already runs many other things, and that is not very beefy to begin with.
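(For reference, the "full ack plus fsync per write" bit translates to something like the following with the 1.x-era official C# driver. This is a sketch only – the real client code differs, and the database, collection and field names below are made up.)

using MongoDB.Bson;
using MongoDB.Driver;

MongoServer server = MongoServer.Create("mongodb://localhost");
MongoDatabase db = server.GetDatabase("pipeline");
MongoCollection<BsonDocument> requests = db.GetCollection("requests");

// every write waits for the server's acknowledgement plus an fsync to disk
requests.Insert(
    new BsonDocument { { "type", "buy" }, { "status", "new" } },
    SafeMode.FSyncTrue);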
Oh yeah, and nobody is watching over it (not a tech company – did I mention that?).
Single instance, you say? But sir, this is completely and utterly stupid! Sharing the machine, you say? But it would eat up all the memory and kill everything! No DB admin? No IT person who knows anything about it? It's doomed!
In over a year, the system suffered only one breakdown, and that one is attributable solely to my stupidity – I had installed the 32-bit version, and once the system needed to allocate a >2GB file it broke down.
The fix was very simple and super fast – download the 64-bit package, replace the binaries, restart the service. No data loss; the system picked up jobs from the queue and quickly restored full capacity.
The system has been running for well over a year now, completely unattended, and the only meltdown was avoidable, yet solved quickly and easily. MongoDB proved to be a robust piece of the puzzle. It also shows a rather small memory footprint (most queries and updates touch the newest data, and insertions usually go to the end of collections, so most of the data files stay paged out to disk).
So yeah it is not a “web-scale” system in terms of request/sec or data size, but it proved to be a fairly good solution for an internal system that is in charge of tons of money.
Given the design I did for the system (another time, another post), I was not very afraid of possible problems with the data store, knowing that once I solve whatever problem comes up, the system can quickly get back to work. What I did need was a solution that was cheap (low resources, running on existing hardware and OS), flexible to develop with, and with a super easy install and upgrade story (xcopy deployment ftw). MongoDB was a perfect fit.
In my consulting years I've seen quite a few very fragile systems, even though they relied on "proven stable" technologies such as top-of-the-line RDBMSs. Solid architecture and good design are far more crucial to a system's stability than specific tech choices. The question you need to ask yourself when you need to build a complex system (be it in the number of moving parts, dataset volume, system stress, data sensitivity, or a mix of the above) is not "Is tech X stable enough or good enough", but rather "Do I (or my people) know enough about building complex systems to build a stable one". If you lack the experience, bring in a person who can help.
Last week I went to IASA Israel meeting, where we got to listen to Johanna Rothman of the Pragmatic Programmer fame, talking about the role of architects in the Agile projects world.
I've been taking notes during the talk. Consider them a transcript plus stream-of-thoughts.
Here we begin:
So, should an architect role on an agile team be considered an oxymoron? A common mistake is to think that architecture is about doing frameworks and designs up-front!
Some people think that the fact that in agile-style managed projects the architecture evolves with time, means that there is no place for an architect on such projects.
Left alone, that evolution will be scattered, and a guiding hand is needed to make all of the small changes drive the overall design towards a meaningful direction that supports the growth of business value.
Feature-itis
Pushing more and more features at the expense of sparing time to determine long-term architecture goals will lead to a hole filled with technical and architectural debt, causing future features and adjustments to be more complicated than needed and hindering the project in the long term.
My observation – feature-itis is even worse because it usually comes with a lack of proper user testing. A feature, imo, is a hypothesis that has to be proven, so a proper foundation (A/B testing, explicit and auditable metrics, etc.) has to be in place, and developed alongside the features being cranked out.
Concurrent projects
A product (and at a higher level – a program) is often made of a few projects, each led by a different person/division/etc. A project is made of many features and feature-sets. Just like in software, you need to lower the coupling between projects and make feature-sets cohesive. This needs to happen both on the product level (the Principal PM's responsibility) and on the technology level, which is the architect's responsibility. Developers usually concentrate on a feature, sometimes on a feature-set. Testers have broader sight, but they usually work too close to the details, so they don't see the overall picture either. It is the responsibility of the architect(s) to see that the project makes sense technically across the whole product.
Some Scrum Master observations:
- Scrum is not the only agile FW
- Communication paths equal (N*N - N)/2 – hence with 10 people there are 45 communication paths!
Architect on an agile team? So what does the architect have to do that no one else on a Scrum team usually does?
Timing
Often managers ask: "What is the latest responsible moment for making architectural decisions?"
Johanna's view is that the correct question is: "What is the most responsible moment for making architectural decisions?"
You should not try to postpone important decisions, but instead make them at the most critical times.
My observation – for any project of non-trivial scope, the proper time for architectural decisions is always. That is, there should always be at least one person who stops and thinks about the larger picture, about future paths, etc. (the things listed a few paragraphs back), and it has to happen all the time as the project evolves, not at some predefined spots.
Another gem from Daniel Hölbling.
In short – it will allow you to write code like:
public void Browse([DefaultValue("beer")] string category, [DefaultValue(1)] int page)  { 
...
}
grab it here
The cool thing is that because MonoRail is so extremely flexible, one can easily add this type of functionality without touching the code-base, just by implementing a straightforward interface. That's what I call extensibility.
One more super kudos to Hammett for the overall architecture of MonoRail.
Hi guys and gals. I'm in need of a good SOA book, especially in relation to highly scalable web applications.
What it should not contain:
- How-to guides for a specific technology/vendor tool/etc.
What it should contain:
- Detailed stuff about best practices, pitfalls, etc.
Please drop a comment or send me an email.
10x, K
Has anyone done a detailed comparison of MassTransit/RSB/NSB and is willing to share?
explanation (before the wife kills me): I have some free time in the coming months, so I’m looking for interesting consulting gigs.
So, if you're in a company / dev team and looking for some help with Castle (Windsor, MonoRail), NHibernate, or general information-system design and architecture advice or training, setting up build and test environments, or any other of the things I rant about in this blog, then I'm your guy.
I also do web-client ninja work, dancing the crazy css/html/javascript tango (jQuery: yes, Ms-Ajax: no)
I currently live in Israel, but I’m fine with going abroad for short terms if you’re an off-shore client.
you can contact me at “ken@kenegozi.com”
I've just come across a comparison of IoC containers in the .NET world:
Haven't read it yet cuz I'm actually off-computer right now (the lappy is attached to the living room TV, and the break in the movie is almost over), but MAN it has COLOUR charts, so you can bet your arse I'm gonna read it later.
Not that I’m excited. I’m pretty sure that (INSERT WINNER HERE LATER) will prove to be the best IoC ever.
Assume you are building some kind of an information system. Say it’s an issue tracker (yeah I know, I have a blank spot in the creative-part of my brain).
Now say you want some visual customisation based on the current context - like a different look’n’feel for each customer on a multi-tenant application, or a slightly different menu for an Admin.
Kinda easy – right? You'd stick in some overriding CSS rules for the former (say in CUSTOMER_ID.css), and some kind of simple view logic for the latter (say <% if (isAdmin) { %> … <% } %>, or some type of CodeBehind crap if you're a WebForms lover).
But – what if you want to customise the behaviour according to context? Say that for some actions, for a given customer, an email should be sent, or a webservice called, or some default data set for a given form.
The first option would be to create an interface ICustomerActions, and a DefaultCustomerActions which will be in charge of the, well, default behaviour. Then for each customer you'd derive from DefaultCustomerActions (or directly from the interface if it's completely different).
Then you’d use some kind of a Factory (or your container) to resolve the needed ICustomerActions instance according to the context (say a customerId in the Session).
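Roughly, that first option could look something like this (every name here is made up for illustration):

public class Issue { /* id, title, customer, etc. */ }

public interface ICustomerActions
{
    void OnIssueCreated(Issue issue);
}

public class DefaultCustomerActions : ICustomerActions
{
    public virtual void OnIssueCreated(Issue issue)
    {
        // the default behaviour - usually nothing special
    }
}

public class AcmeCustomerActions : DefaultCustomerActions
{
    public override void OnIssueCreated(Issue issue)
    {
        base.OnIssueCreated(issue);
        // the Acme-specific bit: send the email, call the webservice, set defaults...
    }
}

public static class CustomerActionsFactory
{
    private const int AcmeCustomerId = 42;

    // resolved per request, according to the customerId kept in the current context
    public static ICustomerActions For(int customerId)
    {
        if (customerId == AcmeCustomerId)
            return new AcmeCustomerActions();
        return new DefaultCustomerActions();
    }
}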
There are two problems in this approach
So how do you think I solved this problem? How would you do that?
Situation:
Problem: A thread looking for an item in the cache and finding that it's not there would issue the HTTP request to fill the cache. A second thread might initiate another call if it needs the data before the first thread has updated the cache.
Solution 1: use locks on the cache object.
Problem with that: you lock the whole cache, so other threads looking for a different type of data will be blocked, even though it's okay for them to read from the cache, and even to insert data with a different key into it.
Solution 2: Keep a key per requested entry. Now you only lock what needs locking.
You'd keep a dictionary of lockers (new object() per key). The act of obtaining a locker causes a full cache lock, but that lock is short (the time it takes to retrieve an object from a Hashtable, or to new-up an object and put it in the Hashtable). The long out-of-process operation of loading the object then runs with a lock on the specific key only, while the rest of the cache is accessible for reads and writes by other threads.
Note - this is notepad (or rather WindowsLiveWriter) code. You’d need to fix syntax errors, and inspect the usage. License is MIT - Use at your own risk, and don’t forget to attribute it to the writer
using System;
using System.Collections;

class KeyLevelSafeCache
{
    readonly IDictionary lockers = new Hashtable();
    readonly IDictionary cache = new Hashtable();
    object ObtainLockerFor(string key)
    {
        // short full lock - just long enough to fetch or create the per-key locker
        lock (lockers)
        {
            if (lockers[key] == null) lockers[key] = new object();
            return lockers[key];
        }
    }
    public T Get<T>(string key, Func<T> load)
    {
        // the long load runs under the per-key lock only; other keys stay available
        lock (ObtainLockerFor(key))
        {
            object value = cache[key];
            if (value != null) return (T)value;
            value = load();
            lock (cache) cache[key] = value;
            return (T)value;
        }
    }
}
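Hypothetical usage – RateTable and FetchRatesOverHttp below are just stand-ins for whatever the real out-of-process load is:

KeyLevelSafeCache cache = new KeyLevelSafeCache();
// misses on different keys take different lockers, so they don't block each other
RateTable usd = cache.Get<RateTable>("rates/USD", () => FetchRatesOverHttp("USD"));
RateTable eur = cache.Get<RateTable>("rates/EUR", () => FetchRatesOverHttp("EUR"));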
Well, not that slow apparently.
The lesson:
Don’t be afraid of powerful tools.
You can use reflection right and gain the power, while not losing too much performance.
Quoting from nhusers mailing list:
How much you be scare about the use of reflection in NH if 1.000.000 of access to get & set to a field mean 0.2 seconds? – Fabio Maulo
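If you want a feel for that order of magnitude on your own machine, here is a rough sketch of that kind of measurement (the field name is made up; numbers will vary by runtime and hardware):

using System;
using System.Diagnostics;
using System.Reflection;

class Entity
{
    private int id;
}

class ReflectionTiming
{
    static void Main()
    {
        FieldInfo field = typeof(Entity).GetField("id", BindingFlags.Instance | BindingFlags.NonPublic);
        Entity entity = new Entity();

        Stopwatch watch = Stopwatch.StartNew();
        for (int i = 0; i < 1000000; i++)
        {
            field.SetValue(entity, i);              // reflective set
            object value = field.GetValue(entity);  // reflective get
        }
        watch.Stop();

        Console.WriteLine("1,000,000 reflective set+get pairs took {0}ms", watch.ElapsedMilliseconds);
    }
}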
A quick ripoff from NHibernate’s users group:
Fabio Maulo:
The base concepts to understand are (my opinion):
- The Cache is not the panacea of performance.
- Don't use the Cache like the base of your app; add the management of Cache at the end of your development process to increase the performance only where you really need do it.
- Implementing a method named GetAll is, in most cases, a bad idea; an acceptable mediation is PaginateAll(pageSize).
- InMemoryFilter can have less performance than filter trough RDBMS (especially when you intent to do it trough Cache with a large amount of entities).
- Take care on what happen to the memory usage of your app when you are using Cache.
Ayende:
The cache is not magic, and should not be treated in such a fashion. I refuse to use the 2nd level cache in my applications until I have a perf problem that can't be solved by creating smarter queries. Think about the cache as band aid, and good design as avoiding the need for that.
And I say Hallelujahs
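As for the PaginateAll(pageSize) idea Fabio mentions above – a rough sketch of what it might look like against an NHibernate session (the Customer entity and the page arguments are made-up names):

using System.Collections.Generic;
using NHibernate;

public class Customer { /* a mapped entity */ }

public class CustomerRepository
{
    public IList<Customer> PaginateAll(ISession session, int pageIndex, int pageSize)
    {
        // fetch one page at a time instead of a GetAll
        return session.CreateCriteria(typeof(Customer))
            .SetFirstResult(pageIndex * pageSize)
            .SetMaxResults(pageSize)
            .List<Customer>();
    }
}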
An innocent question raised by Ayende has started an interesting debate on the comments.
In short (read it all there - don’t be lazy)
Which interface name is better?
a. IRecognizeFilesThatNeedToBeIgnored
b. IIgnoreFilesSpecification
with a single method: ShouldBeIgnored(string file);
Some were in favour of a, some in favour of b.
The interesting thing is that many have offered a third option:
c. IFileFilter
Let’s group these things:
Personally, I couldn't care less which one of the first type is used. I'm slightly in favour of b, as I think funny names are good. The compiler cares nothing about names, but the human mind will remember the purpose well, and a newcomer will pick it up quickly.
The second group (IFileFilter) is not good. It might get filled with a lot of methods that do file filtering, and even if it doesn't, I think the name should reflect the intention of the implementing class. Since multiple interfaces per class are allowed, it's OK to have specialised ones.
I've just come back from my talk, given for The Developers Group at Microsoft's Victoria offices in London, UK.
It took me a bit to find the place, as the building does not say "Microsoft" on the outside (as opposed to the offices in Israel).
The presentation went pretty much OK, considering it was my first time actually presenting in English, in front of an English crowd, and considering I had a PC malfunction that forced me to recreate the demo project on the train today … I finished it up 5 seconds before connecting the laptop to the projector.
I didn't manage to squeeze in some of the parts that I wanted to, like the JSONReturnBinder and Windsor integration, and unit-testing controllers and views, but I do hope that I managed to do justice to this wonderful stack within the limited time and my horrible English …
Unfortunately, I missed the post-meeting pub thing, as I just happened to leave the place last and didn't see where everyone went. So if you were there and have some questions, please do not hesitate to leave them here as comments.
Anyway, as promised, here are the slides and the demo project.
If you are using git, and have a git-hub account, then you would be able to follow the demo project’s source at http://github.com/kenegozi/monorail-aspview-demo/tree/master
Have fun.
P.S
I'd like to thank Jason from The Developers Group, and Nina from Microsoft, who helped with the administrative part of things. Everything went smoothly despite my late arrival. I'd also like to thank the attendees for their patience and attention. I hope you've enjoyed it; I definitely have :)
There appears to be yet another XML API.
So, when you want to generate:
<?xml version="1.0" encoding="utf-8"?>
<root>
<result type="boolean">true</result>
</root>
instead of (using System.XML):
XmlDocument xd = new XmlDocument();
xd.AppendChild(xd.CreateXmlDeclaration("1.0", "utf-8", ""));
XmlNode root = xd.CreateElement("root");
xd.AppendChild(root);
XmlNode result = xd.CreateElement("result");
result.InnerText = "true";
XmlAttribute type = xd.CreateAttribute("type");
type.Value = "boolean";
result.Attributes.Append(type);
root.AppendChild(result);
one can (using the new API):
XmlOutput xo = new XmlOutput()
.XmlDeclaration()
.Node("root").Within()
.Node("result").Attribute("type", "boolean").InnerText("true");
Exciting.
Or is it?
Why not just (using your template-engine of choice):
<?xml version="1.0" encoding="utf-8"?>
<root>
<result type="<%=view.Type%>"><%=view.Value%></result>
</root>
works great for the “complex” scenarios on Mark S. Rasmussen’s blog:
<?xml version="1.0" encoding="utf-8"?>
<root>
<numbers>
<% foreach (Number number in view.Numbers) { %>
<number value="<%=number%>">This is the number: <%=number%></number>
<% } %>
</numbers>
</root>
and:
<?xml version="1.0" encoding="utf-8"?>
<root>
<user>
<username><%=view.User.Username%></username>
<realname><%=view.User.RealName%></realname>
<description><%#view.User.Username%></description>
<articles>
<% foreach (Article article in view.User.Articles) { %>
<article id="<%=article.Id%>"><%#article.Title%></article>
<% } %>
</articles>
<hobbies>
<% foreach (Hobby hobby in view.User.Hobbies) { %>
<hobby><%#hobby.Name%></hobby>
<% } %>
</hobbies>
</user>
</root>
Are Hobby and Article more complex? No probs. Break them down into sub-views:
<?xml version="1.0" encoding="utf-8"?>
<root>
<user>
<username><%=view.User.Username%></username>
<realname><%=view.User.RealName%></realname>
<description><%#view.User.Username%></description>
<articles>
<% foreach (Article article in view.User.Articles) { %>
<subview:Article article="<%=article%>"></subview:Article>
<% } %>
</articles>
<hobbies>
<% foreach (Hobby hobby in view.User.Hobbies) { %>
<subview:Hobby hobby="<%=hobby%>"></subview:Hobby>
<% } %>
</hobbies>
</user>
</root>
Can you get more expressive than that?
Look how easy it is to visualize what we’re rendering, and how easy it is to change.
I consider all those XML APIs (including ATOM/RSS writers) to be leaky and unneeded abstractions, just like WebForms. Do you?
Reading this post from Phil Haack made me jump a little. Oh no, I said, Please don’t let the clean IMvcFramework become clumsy.
Ayende has ranted about it better than I would.
Now I see that Phil has issues with ABC as well.
The answers for the ABC problems he shows there are cumbersome. In order to gain “flexibility”, you end up polluting your API with “CanSupportCrap” methods, etc.
So, to recap:
Please Please Please keep IHttpContext in place …
Given the following code:
public void UpdatePerson(int id, string name)
{
    Person p = peopleRepository.Get(id);
    p.Name = name;
    peopleRepository.Update(p);
}
One answer would be (using a pseudo mocking framework):
Person p = new Person();
Expect.Call(peopleRepository.Get(0)) .Returns(p);
Expect.Call(peopleRepository.Update(p));
...
service.UpdatePerson(0, "MyName");
The other approach would be (pseudo-code again):
Person p = CreateAndInsertToDB();
service.UpdatePerson(p.Id, "New Name");
FlushAndRecreateTheSession();
Person updated = GetFromDB(p.Id);
Assert.Equal("New Name", updated.Name);
What would you do, and why?
(I’m tagging that also under altnetuk as it has been inspired by a session around test-granularity, mocking frameworks, etc.)
It’s funny. At the end of the day, I didn’t use the tiny IoC in the StaticSiteMap for the testing.
It was fun however.
Last night I built a nice new tool called StaticMapGenerator, which generates a typed static-resources site-map for ASP.NET sites (works for MonoRail, ASP.NET MVC and even WebForms).
I'll blog about it in detail in a separate post.
Since I didn't want any dependency (other than the .NET 2.0 runtime) for the generator and the generated code, I couldn't use Windsor for IoC. That called for a hand-rolled, simple IoC implementation.
Ayende has already done it in 15 lines, but I also wanted to automagically set dependencies and have a simpler registration model.
So I quickly hacked together a configurable DI resolver (a.k.a. IoC container) in 15 Minutes and 22 Lines Of Code. Call me a sloppy-coder, call me whadever-ya-like. It just works.
Ok, I’ve cheated. You’d need using statements too, but you can see that I was generous enough with newlines …
Usage:
Given those:
You can do that:
You need not worry about supplying the BuildDirectoryStructureService with an implementation for the service it depends on; you only need to register an implementation for that service.
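The actual 22 lines aren't reproduced here, but the gist of such a resolver is roughly the following sketch: register an interface against an implementation, and let Resolve pick the greediest constructor and recursively supply its dependencies (no error handling, and the usage names at the bottom are made up):

using System;
using System.Collections.Generic;
using System.Reflection;

public class TinyContainer
{
    readonly IDictionary<Type, Type> types = new Dictionary<Type, Type>();

    public void Register<TService, TImpl>() where TImpl : TService
    {
        types[typeof(TService)] = typeof(TImpl);
    }

    public TService Resolve<TService>()
    {
        return (TService)Resolve(typeof(TService));
    }

    object Resolve(Type service)
    {
        // unregistered concrete types are resolved as themselves
        Type impl = types.ContainsKey(service) ? types[service] : service;

        // pick the constructor with the most parameters...
        ConstructorInfo best = null;
        foreach (ConstructorInfo ctor in impl.GetConstructors())
            if (best == null || ctor.GetParameters().Length > best.GetParameters().Length)
                best = ctor;

        // ...and recursively resolve each of its dependencies
        ParameterInfo[] parameters = best.GetParameters();
        object[] args = new object[parameters.Length];
        for (int i = 0; i < parameters.Length; i++)
            args[i] = Resolve(parameters[i].ParameterType);
        return best.Invoke(args);
    }
}

// hypothetical usage, along the lines of the post's example:
//   TinyContainer container = new TinyContainer();
//   container.Register<IFileSystemAdapter, FileSystemAdapter>();
//   BuildDirectoryStructureService service = container.Resolve<BuildDirectoryStructureService>();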
If you don't know what XSS is, or how easily you can expose your application to it, take a short read through the following posts:
AspView was written by me, for my (and my employer at the time) use. Therefore, I did not make it ‘secure by default’ in terms of HttpEncode.
However, seeing that the convention now leans toward outputting HtmlEncoded content by default, I'm adapting AspView to that.
The usage would be similar to the one suggested for Asp.NET MVC at http://blog.codeville.net/2007/12/19/aspnet-mvc-prevent-xss-with-automatic-html-encoding/
So,
<%="<tag>" %>
would output
&lt;tag&gt;
While
<%=RawHtml("<tag>") %>
would output
<tag>
The only exception here is ViewContents on layouts. Since the view contents are, 99% of the time, made of markup, in the layout you would still write:
<%=ViewContents %>
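Under the hood the idea is roughly the following (not AspView's actual implementation – just a sketch of encode-by-default with a RawHtml opt-out):

using System.Web;

public class RawString
{
    private readonly string value;
    public RawString(string value) { this.value = value; }
    public override string ToString() { return value; }
}

public static class OutputSketch
{
    // everything emitted through <%= %> gets funneled through here
    public static string Encode(object value)
    {
        if (value is RawString) return value.ToString(); // explicitly marked as safe markup
        return HttpUtility.HtmlEncode(value == null ? "" : value.ToString());
    }

    // the opt-out: wrap the markup so the encoder leaves it alone
    public static RawString RawHtml(string markup)
    {
        return new RawString(markup);
    }
}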
All of that stuff is being implemented with AspView trunk (versions 1.0.4.x) that works with Castle trunk.
If anyone wishes me to bubble it down to the 1.0.3.x branch (for Castle RC3), please leave a comment here. Unless I see that people actually want that, I will probably not make the effort.
I really do not know how I missed this thread.
So funny, so true.
http://discuss.joelonsoftware.com/default.asp?joel.3.219431.12
If ever you need to convince someone to KISS, that’s the source.
And if that’s not enough, you have a shorter version at http://ayende.com/Blog/archive/2007/12/18/Choices.aspx
Ayende has recently posted a walkthrough for building Web Apps using the Castle Project’s libraries.
He covers ActiveRecord and MonoRail basics, showing off some of the shiny and new abilities (AR scaffolding, ARSmartDispatchers, Generics integration and so on).
The only thing missing is IoC-ing using Windsor or even Binsor. Maybe to hook up some BL layer or something.
So as of now it comprises Part I, Part II, and source code.
If you wanna see a decent web development framework in action – tune in to those posts.
There is a great article on CodeProject, by Guenter Prossliner.
A simple class is presented there that makes duck typing possible for generics-enabled CLS languages (VB.NET 8 and C# 2.0, for instance).
I’ll present it here in short form:
let’s say we have two classes:
class Foo1
{
    public string Bar() { return "foo1"; }
}

class Foo2
{
    public string Bar() { return "foo2"; }
}
Now say you want a method that can work with instances of either one and invoke Bar on it. Without duck typing, you end up with two almost identical methods:
void SimpleMethodOnFoo1(Foo1 foo)
{
    foo.Bar();
}

void SimpleMethodOnFoo2(Foo2 foo)
{
    foo.Bar();
}
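The article's class itself isn't reproduced here; just to give the flavour of the trick, here is a hypothetical reflection-based stand-in (not Guenter's implementation):

using System;
using System.Reflection;

class Duck<T>
{
    private readonly T instance;
    public Duck(T instance) { this.instance = instance; }

    // invoke a parameterless method by name, regardless of the concrete type
    public TResult Call<TResult>(string methodName)
    {
        MethodInfo method = typeof(T).GetMethod(methodName, Type.EmptyTypes);
        if (method == null)
            throw new MissingMethodException(typeof(T).FullName, methodName);
        return (TResult)method.Invoke(instance, null);
    }
}

// now a single code path can accept "anything with a Bar()":
//   string a = new Duck<Foo1>(new Foo1()).Call<string>("Bar");
//   string b = new Duck<Foo2>(new Foo2()).Call<string>("Bar");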
Brad Abrams has published on his blog a short presentation about 5 rules for good framework design, a presentation he is giving at the Pattern and Practices Summit event.
The stuff there is taken from the "Framework Design Guidelines" book, written by Brad and Krzysztof Cwalina. I'd recommend that people who develop frameworks for other developers get acquainted with the ideas, and also read the book (from constructor to destructor), possibly leaving it on their shelf for future reference.
The presentation itself is very well designed – large and readable text (though slide 4 is off balance – maybe he's relying on a non-standard font), pictures and examples that are funny and to-the-point, and meaningful colors (as if a Do Not needs a red X nearby …). The story on slides 12 to 26 is cute, and you get the message simply enough.
I'd add a .Restart() method to the StopWatch example on slide 7, if I wanted the user to have an even simpler way to reuse the object.
I've evaluated some ways to "rail" and to do MVC in the .NET world without using WebForms.
The first method I tried was to treat aspx's kind'a like old ASP: no server controls, allowing multiple forms, no __VIEWSTATE / __EVENTVALIDATION / __UGLYHIDDENFIELD in the generated markup, and calling actions on the server, implemented as Controllers over ashx's, or directly linking to a new .aspx view (if no operation is required). It allowed me to create super clean HTML, but it has its limits, since I had to implement a mechanism for MasterPages and UserControls, and that sucks. There is also BooWebness. It seems like a great effort, and I like the natural .ashx approach, but I am not very into its whole framework.
Then I went after MonoRail. Cool. It has a lot out of the box, including MasterPages (Layouts), UserControls (ViewComponents), markup helpers, AjaxHelpers, and a large community. Being part of Castle is a big bonus. I believe in Castle. I've been using Castle's ActiveRecord for a while and I find it almost too good to be true. MonoRail fits well in Castle's world, so I'm into it.
Now I needed to choose a view engine. NVelocity was disqualified for its discontinued state and for the need to learn something new and narrow, not to mention the fact that it's interpreted. The WebForms hybrid just doesn't look too good. Brail from Ayende is very nice. Learning Boo isn't like learning a whole new thing, since I've had a little taste of it in the past, since it's .NET, and since it has a very readable syntax that any C#/VB.NET/Python/Perl/Java/you-name-it developer can learn in minutes. Brail is a lovely name, and I can count on Ayende to keep developing it as much as needed.
So Brail over MonoRail it is.
Posts about the matter will come shortly.
So after a lot of talking about the matter, I’m starting a little (but real) project with Castle’s ActiveRecord as an ORM service.
What I'm still not sure about is whether I should inherit everything from ActiveRecordBase, or have my own base class and use ActiveRecordMediator.
Sure, I can derive my base class from ActiveRecordBase and have common behaviour for my model, but I am still not sure that I'm fully into the ActiveRecord pattern as a whole. It's tempting to exploit Castle's implementation but to keep the methods in a separate class rather than in the model itself.
I also have a problem with the need to re-declare FindAll, Find, etc. on each class in order to expose the static methods in a typed way.
Well, I take back the last paragraph, since I could use ActiveRecordBase<> and it solves this problem.
To conclude: I tend to go with subclassing ActiveRecordBase<> as a base class for my model, and I'm starting to code (and test) that way, but I could still use the knowledge gained by people with real experience with this implementation…
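For illustration, the kind of model class I have in mind looks roughly like this (a sketch only – the entity and columns are made up):

using Castle.ActiveRecord;

[ActiveRecord("Posts")]
public class Post : ActiveRecordBase<Post>
{
    private int id;
    private string title;

    [PrimaryKey]
    public int Id
    {
        get { return id; }
        set { id = value; }
    }

    [Property]
    public string Title
    {
        get { return title; }
        set { title = value; }
    }
}

// the typed static finders come from the generic base class:
//   Post[] all = Post.FindAll();
//   Post one = Post.Find(42);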
I've looked for insights on the matter on the web, and have found nothing. If anyone reading this has an insight, please comment here, so that people taking their first steps with Castle's ActiveRecord implementation will have a better kick-start regarding this issue.
Last Tuesday (06/07/2006) I gave a presentation with Oren at our company, for the company's .NET forum. The forum is made up of all the employees from all the branches of the company who deal with Microsoft's development tools, at any level.
Oren talked about our in-house architecture and presented our code-generating tool – Code-Agent – which incorporates our architecture to generate a full blown n-tier application from a mere SQL Server database, in just a few clicks. Afterwards, I talked about the problems (as I see 'em) in the data querying world today, ran a quick overview of some methods and tools that help developers solve them, and showed a bit of the way that ADO.NET vNext (ADO.NET Entities, or ADO.NET 3.0) and LINQ come to the rescue.
After the presentations, we talked with some of the attendees. It was a delight to meet very capable, experienced and sharp-minded people, and we encouraged them to take a more active part in future meetings of the forum. I hope that this forum will evolve into a great place to share knowledge.
The presentation that accompanied my lecture can be found here (or here if your browser cannot handle it), and if you have any questions or ideas about the subjects discussed, you are more than welcome to send them to me.
The web application that we're working on now uses ASP.NET 2.0's ObjectDataSource model to bind to GridView and FormView in the front-end. Starting off, I made a hierarchy of base classes to manage the data-binding and visual behaviors for Entity Pages (used to view, edit or insert a single entity) and Master-Details Pages (used to manage Master-Details scenarios). The base classes looked like:
class BasePage : Page { /* ... */ }

class BaseEntityPage<T> : BasePage
    where T : Entity
{
    public BaseEntityPage()
    {
        // hook ObjectDataSource and FormView databind events
    }
    // ...
}

class BaseMasterDetailsPage<T> : BaseEntityPage<T>
    where T : Entity
{
    public BaseMasterDetailsPage() : base()
    {
        // hook GridView databind events
    }
    // ...
}
Somewhere along the road, we decided to change the first module being developed to use ascx controls in a single page, instead of multiple pages under a master page (due to the fact that it became a nested master page, and VS2005 doesn't like that). So, a developer working on the change from Pages to UserControls changed the base classes to something like this:
class BaseUserControl : UserControl { /* ... */ }

class BaseEntityUserControl<T> : BaseUserControl
    where T : Entity
{
    public BaseEntityUserControl()
    {
        // hook ObjectDataSource and FormView databind events
    }
    // ...
}

class BaseMasterDetailsUserControl<T> : BaseEntityUserControl<T>
    where T : Entity
{
    public BaseMasterDetailsUserControl() : base()
    {
        // hook GridView databind events
    }
    // ...
}
Later on, we needed some standalone pages (not "mastered") to have data capabilities. We already had the databinding for UserControls, but now we would need to create empty "dummy" pages and host UserControls in them, which means that instead of aspx + aspx.cs files per page, we'd have aspx + aspx.cs + ascx + ascx.cs files!!! We could always keep the BasePage hierarchy next to the BaseControl hierarchy, but that would create ugly duplication. If multiple inheritance were possible, we would have used something like:
class EntityController { /* ... */ }
class BaseEntityPage<T> : BasePage, EntityController { /* ... */ }
class BaseEntityUserControl<T> : BaseUserControl, EntityController { /* ... */ }
The solution is to use an external controller, which we'll call the Manager, to do all the recurrent logic that applies to both the Pages and the UserControls. We'll use an EntityManager that manages both BaseEntityPages and BaseEntityUserControls, and a MasterDetailsManager that manages BaseMasterDetailsPages and BaseMasterDetailsUserControls. Each and every page and UserControl registers itself with a manager. Now the base classes look like this:
class EntityManager<T> { /* ... */ }
class MasterDetailsManager<T> : EntityManager<T> { /* ... */ }

class BaseEntityPage<T> : BasePage
    where T : Entity
{
    protected EntityManager<T> manager;
    public BaseEntityPage()
    {
        manager = new EntityManager<T>(this);
    }
    // ...
}

class BaseMasterDetailsPage<T> : BaseEntityPage<T>
    where T : Entity
{
    public BaseMasterDetailsPage() // not calling base() cuz we need a different manager
    {
        manager = new MasterDetailsManager<T>(this);
    }
    // ...
}

class BaseEntityUserControl<T> : BaseUserControl
    where T : Entity
{
    protected EntityManager<T> manager;
    public BaseEntityUserControl()
    {
        manager = new EntityManager<T>(this);
    }
    // ...
}

class BaseMasterDetailsUserControl<T> : BaseEntityUserControl<T>
    where T : Entity
{
    public BaseMasterDetailsUserControl() // not calling base() cuz we need a different manager
    {
        manager = new MasterDetailsManager<T>(this);
    }
    // ...
}
All the databinding logic and visual behavior control is now placed in the manager classes, and the page or control itself deals only with the things it specifically needs. When creating a new page or UserControl, the developers on the team only need to register it with the appropriate manager, and not mind all the databinding and behavior stuff.
Yesterday I went to a lecture, given by my colleague Oren Ellenbogen, on the subject "Code Templating – Advanced use of Delegates and Generics in C# 2.0". The lecture took place at Raanana's Microsoft offices, as part of the C++/C# User Group.
He presented a way to refactor our recurring code blocks by "separating the recurrent from the unique". This technique gives us the ability to have the recurrent logic (say, opening a DB connection, applying logging and transaction management, etc.) in only one place, aka the code template, and then apply the unique logic (the actual select clause, or population of a Business Object from the database, etc.) where you need it, using the template as a wrapper (a rough sketch of the idea is at the end of this post). The technique can be achieved using interfaces, but thanks to the anonymous delegates and methods introduced in C# 2.0, the process can be more "code viewer friendly". All that (and a more technical explanation) will probably be published on Oren's blog during the next few days, so I'd advise keeping an eye (or RSS reader) on that one.
One note about the lecturing technique he took – he did let the audience ask questions anytime during the lecture, and actually answered them immediately. This caused some flaws in the flow of the lecture, so I'd advise being firm with the audience and postponing questions to a predetermined point (or points) in the lecture. The accompanying PowerPoint presentation was nicely made, with large and readable fonts and nice images to emphasize important points, and the Visual Studio demos were pre-written, well documented, and presented well, using some nice add-ons such as "demo font" and Windows Magnifier.
I hope, too, that I'll be able to give a lecture to the group during the next months, if I have time to compose one …
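To give a rough idea of the technique, here is my own sketch (not Oren's code – the delegate, the class names and the ADO.NET plumbing are made up for illustration):

using System;
using System.Data;
using System.Data.SqlClient;

delegate T UniqueWork<T>(IDbCommand command);

static class DbTemplate
{
    // the recurrent part: connection, transaction and cleanup live in one place
    public static T Execute<T>(string connectionString, UniqueWork<T> work)
    {
        using (SqlConnection connection = new SqlConnection(connectionString))
        {
            connection.Open();
            using (SqlTransaction transaction = connection.BeginTransaction())
            using (SqlCommand command = connection.CreateCommand())
            {
                command.Transaction = transaction;
                T result = work(command); // the unique part, passed in as a delegate
                transaction.Commit();
                return result;
            }
        }
    }
}

// the caller only writes the unique bit, as an anonymous method:
//   int count = DbTemplate.Execute<int>(connStr, delegate(IDbCommand cmd)
//   {
//       cmd.CommandText = "select count(*) from Customers";
//       return (int)cmd.ExecuteScalar();
//   });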
We're developing a new web application at our department with a rich client interface, and as part of the process we need to develop a few custom UI controls that can and will be reused in future projects.
I am undecided between two approaches:
1. Pack all the UI web controls into a single assembly (let's say SQLink.Web.UI), as MS did with System.Web (which includes all the UI namespaces, and all the other web namespaces too), and also with Microsoft.Web.UI.WebControls.dll.
2. Pack every single control into a different assembly.
Why 1? It's somewhat easier to maintain (a single project in VSS), it follows the path of MS (not always the best thing to do, but it often is), and it's easier to deploy.
Why 2? (a) It's easier for two developers to work on different controls without "fighting" over source-control privileges, and (b) if the assembly holds valuable controls not needed by a specific application, and the application is handed to a client who isn't licensed to use those controls, they could still lay their hands on them if we had packed all our controls into a single DLL.
After consultation with my colleague Oren Ellenbogen, we came to the conclusion that approach 1 is better, and that its downsides can be solved by (a) working properly (i.e., not allowing developers to keep project-wide objects checked out), and (b) relying on the legal discipline of our clients not to use our application DLLs in their own or third-party solutions without our explicit permission.
back to coding …