All posts by Geoff Cox

Using JSON payloads to improve modern ASP.NET MVC applications

Getting Started: An ASP.NET MVC site that lists customers

Download all the code here.

This section is vanilla MVC to set the stage for talking about AJAX and JSON payloads later on. This should all be very familiar to ASP.NET MVC developers.

1. First, we'll add a very simple customer model.

public class Customer
{
    public string Name { get; set; }
}

2. Then we add a very simple home page model.

public class HomePageModel
{
    public IEnumerable<Customer> Customers { get; set; }
}

3. We write an Index action in the Home controller.

public ActionResult Index()
{
    var model = new HomePageModel();
    model.Customers = LoadCustomers();
    return View(model);
}

4. We write a LoadCustomers method to get the data. For demo purposes, it just returns a static list of customers sorted by name. This would likely be factored into a data access layer in a real application.

private static Customer[] customers = new Customer[]
{
    new Customer() { Name = "John" },
    new Customer() { Name = "Bill" },
};

private static IEnumerable<Customer> LoadCustomers()
{
    return customers.OrderBy(x => x.Name);
}

5. We write an Index view in the Home folder to list the customers.

@model PayloadDemo.Models.HomePageModel
@{
    ViewBag.Title = "Payload Demo";
}
<table>
    <tr>
        <th style="text-align:left; margin:0">Name</th>
    </tr>
    @foreach (var item in Model.Customers)
    {
        <tr>
            <td>@Html.DisplayFor(modelItem => item.Name)</td>
        </tr>
    }
</table>

PART 1: Traditional Pagination

That customer list could get pretty long, so next we will want to add pagination. We can start by using a traditional request/response to page through the data.

1. We update the Index action in the Home controller to support pagination. Notice that it sets the previous and next page numbers in the ViewBag. Our view will use these later on.

private const int pageSize = 5;

public ActionResult Index(int pageNumber)
{
    ViewBag.PreviousPageNumber = Math.Max(1, pageNumber - 1);
    ViewBag.NextPageNumber = pageNumber + 1;

    var model = new HomePageModel();
    model.Customers = LoadCustomers((pageNumber - 1) * pageSize, pageSize);

    if (pageNumber != 1 && model.Customers.FirstOrDefault() == null)
    {
        ViewBag.NextPageNumber = -1;
    }

    return View(model);
}

2. We update the LoadCustomers method to support skip and top parameters.

private static IEnumerable<Customer> LoadCustomers(int skip, int top)
{
    return customers.OrderBy(x => x.Name).Skip(skip).Take(top);
}

3. We update the route entry in Global.asax.cs to support the pageNumber parameter.

"Default", // Route name
"{controller}/{action}/{pageNumber}", // URL with parameters
new { controller = "Home", action = "Index", pageNumber = 1 } // Parameter defaults

4. We add a div with action links to the Home/Index view. The action links use the ViewBag properties to navigate to the previous or next page.

<div style="margin:5px">
    @Html.ActionLink("Previous", "Index", new { pageNumber = ViewBag.PreviousPageNumber })
    @if (ViewBag.NextPageNumber != -1)
        @Html.ActionLink("Next", "Index", new { pageNumber = ViewBag.NextPageNumber })

Now the user can page through data.  Notice the URL reflects the current page number.


Part 2: Pagination using AJAX and JSON

The traditional MVC approach works, but the user is forced to refresh the entire page each time they navigate to the previous or next set of customers.  Modern web applications make an asynchronous AJAX request to populate the customer list with the paged data.

1. We keep the changes we made to the Index action, LoadCustomers method, and routing table to support pagination for requests to the server.

public ActionResult Index(int pageNumber)

2. We add a new method to support returning paged customer data as JSON. Here is the new GetCustomers action method in the Home controller.

public JsonResult GetCustomers(int pageNumber)
{
    return Json(LoadCustomers((pageNumber - 1) * pageSize, pageSize));
}
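For reference, Json() serializes the sequence into a JSON array of objects whose property names match the C# properties. With only the two demo customers, the page-1 response body would look like this (a sketch of the shape, not captured output):

```javascript
// Approximate body returned by GetCustomers for page 1,
// assuming only the two demo customers (sorted by name).
var body = '[{"Name":"Bill"},{"Name":"John"}]';

var customers = JSON.parse(body);
console.log(customers.length);   // 2
console.log(customers[0].Name);  // Bill
```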

3. We replace the links in the Home/Index view with buttons.

<div style="margin:5px">
    <button id="previous-page" style="width: 80px; height: 25px; display: inline;" >Previous</button>
    <button id="next-page" style="width: 80px; height: 25px; display: inline;" >Next</button>

4. We also add a data attribute and an ID to the Home/Index view. This will help us call back using AJAX.

<table id="customers" data-url="@Url.Action("GetCustomers", "Home")">

5. We add a new JavaScript file called payload-demo.js to the project.

Here is the payload-demo.js code.  This code tracks the current page number.  When the user clicks one of the buttons, we make an AJAX request and then repopulate the customer list with the data.

$(function () {
    var currentPageNumber = 1;

    var populateCustomers = function (data) {
        var $customerList = $('#customers tbody');
        $customerList.empty();
        $.each(data, function (i, customer) {
            $customerList.append('<tr><td>' + customer.Name + '</td></tr>');
        });
    };

    $("#next-page").click(function (event) {
        currentPageNumber += 1;
        var $customers = $('#customers');
        var url = $customers.data('url');
        url = url + '/' + currentPageNumber.toString();
        $.ajax({
            url: url,
            context: document.body,
            type: 'POST',
            success: function (data) {
                if ($(data).length == 0) {
                    currentPageNumber -= 1;
                    return;
                }
                populateCustomers(data);
            }
        });
    });

    $("#previous-page").click(function (event) {
        if (currentPageNumber == 1) {
            return;
        }
        currentPageNumber -= 1;
        var $customers = $('#customers');
        var url = $customers.data('url');
        url = url + '/' + currentPageNumber.toString();
        $.ajax({
            url: url,
            context: document.body,
            type: 'POST',
            success: function (data) {
                populateCustomers(data);
            }
        });
    });
});

6. We add a reference to payload-demo.js to _Layout.cshtml so that each page gets the script.

<script src="@Url.Content("~/Scripts/payload-demo.js")" type="text/javascript"></script>

Now the user can navigate paged customer data without leaving the page. Notice that we’re on page 2, but the URL hasn’t changed.


Part 3: Pagination with AJAX and a JSON payload

At this point, we’ve done the same work twice.

To produce the initial page, we created a model for the home page, an action that loaded the data into the model and invoked the view, and the Razor syntax in the view to produce the table with rows and columns.

To update the page, we created a JSON model, an action to return the JSON, and jQuery code to update the table with new rows and columns.

We could decide to do everything with AJAX and JSON and get rid of the initial page code.  We could leave the customer table empty on the initial page response, and as soon as the page loads make a request for the first page of customer data.

This works OK, but there can be a delay as the request is made.  The customer may see an empty customer list for a moment.  You could add a wait indicator, but this further delays getting data in front of the user and makes the application feel less responsive.

Note: I never liked those pages that load and then a hundred wait spinners go nuts while I wait and wait for the data to show up. I forgive the spinners on financial sites because they are asynchronously connecting to my bank accounts and they provide the most recent data.

You don’t have to make that extra web request! In this section, I’ll show you how to send down initial JSON data with the page as a payload, and let the same script that populates the table with the results of the AJAX request immediately populate the customer list.

1. We update our Index action in the Home controller to put the JSON data into the ViewBag.  This uses the JavaScriptSerializer to turn the JsonResult from GetCustomers into a string.  Warning: Be sure to remember the .Data or it won’t serialize the right data.

public ActionResult Index(int pageNumber)
{
    ViewBag.CustomersPayload = new JavaScriptSerializer().Serialize(GetCustomers(pageNumber).Data);
    return View();
}

2. We can delete the HomePageModel and simplify our Home/Index view. Notice the table is empty. We add a hidden div with a data attribute to the end of the page. We put it at the end of the page so that the browser doesn’t have to parse it while it is working to render the visible elements, and so that we can delete it after we’ve used the data.

@{
    ViewBag.Title = "Payload Demo";
}
<table id="customers" data-url="@Url.Action("GetCustomers", "Home")">
    <tr>
        <th style="text-align:left; margin:0">Name</th>
    </tr>
</table>
<div style="margin:5px">
    <button id="previous-page" style="width: 80px; height: 25px; display: inline;" >Prev</button>
    <button id="next-page" style="width: 80px; height: 25px; display: inline;" >Next</button>
</div>
<div id="payload" style="display:none" data-customers="@ViewBag.CustomersPayload"></div>

3. We update our payload-demo.js script to use the payload.  We delete the div once we have used it to ensure we won’t apply stale data later when the user clicks a navigation button.

var $payload = $('#payload');
if ($payload.length > 0) {
    populateCustomers($payload.data('customers'));
    $payload.remove();
}

Now we get the initial page populated and only have a single code path to write, debug, and maintain.



If you inspect the client/server traffic in Fiddler or IE F12, you will see that the DNS resolution, proxy routing, and connection negotiation are more expensive than the few extra KB of downloaded data. This is even more true if your users are in other countries or on lower-bandwidth connections.

By including JSON payloads in your page, you can get the best of both worlds: dynamic applications built using jQuery and AJAX, and immediate display of data with fewer asynchronous requests.

Effective StyleCop

StyleCop can have a significant detrimental effect on the quality, readability, and maintainability of code. Based on years of using StyleCop on a variety of projects, here are some ways to use StyleCop effectively.

Read More

When/Then Unit Test Naming Pattern

Giving meaningful names to unit tests is critically important to quality unit tests. Unit test classes often contain a large set of test methods, and it can be challenging to distinguish between tests because of the small permutations of the condition, the action, or the result.

I was clued in to the When/Then naming pattern when working with Patterns & Practices at Microsoft. I’ve seen this naming pattern work well across both client and web projects containing several hundred unit tests for all kinds of classes. I’ve adopted it for all my unit tests with success.

The pattern

Each test name takes the form WhenConditionThenResult: the condition and action come after the “When”, and the expected result comes after the “Then”.

For example, a unit test that verifies the PropertyChanged event is raised when the Name property is changed might read:  “WhenNameSetToDifferentValueThenPropertyChangedRaised”.
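As a sketch, the test behind that name might look like this (the Person class and the MSTest-style attributes are illustrative, not from the original post):

```csharp
// Illustrative model: raises PropertyChanged when Name actually changes.
public class Person : INotifyPropertyChanged
{
    private string name;

    public event PropertyChangedEventHandler PropertyChanged;

    public string Name
    {
        get { return this.name; }
        set
        {
            if (this.name != value)
            {
                this.name = value;
                if (this.PropertyChanged != null)
                {
                    this.PropertyChanged(this, new PropertyChangedEventArgs("Name"));
                }
            }
        }
    }
}

[TestClass]
public class PersonTests
{
    [TestMethod]
    public void WhenNameSetToDifferentValueThenPropertyChangedRaised()
    {
        // Arrange
        var person = new Person { Name = "Alice" };
        var raised = false;
        person.PropertyChanged += (s, e) => raised = (e.PropertyName == "Name");

        // Act: the "When" condition.
        person.Name = "Bob";

        // Assert: the "Then" result.
        Assert.IsTrue(raised);
    }
}
```

Note how the name alone tells you the condition (Name set to a different value) and the expected result (PropertyChanged raised), with no comment needed on the test method.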


The When/Then naming pattern works well because it includes the action, condition, and result. By putting the key semantic information in the name of the unit test, there is no longer a need for method or code comments explaining what is being tested.

The When/Then naming pattern has several other benefits.

•    It naturally focuses each unit test on verifying one result.  This encourages writing atomic tests and discourages integration tests posing as unit tests.
•    It helps evaluate the purpose of each member of a class by thinking about the conditions and results.  This also illuminates other unit test permutations.
•    It helps describe the test independent of the code.  This helps when the test code doesn’t make the condition or result as obvious as the reader would like.  Most often this is because mocking frameworks, complex data verification, or other aspects of the test obfuscate the test’s purpose.


Some tips to consider when applying the pattern:

•    To keep the names a little shorter, you can remove passive verbs.  “WhenNameIsSetToDifferentValueThenPropertyChangedIsRaised” becomes just WhenNameSetToDifferentValueThenPropertyChangedRaised.  At one point I was tempted to remove the “When” and the “Then” and separate them with an underscore, but I found that it made the test name harder to read for comprehension.
•    Use standard verbs in your test names.  I use “Set” for properties, “Called” for methods, and “Raised” for events. I avoid synonyms like Updated, Recalculated, Invoked, or Fired.
•    Often a set of tests is focused on a single part of your class (such as the Name property).  I use #region to group test cases that are related.  This helps me avoid injecting excessive class structure into my test names and makes it easy to evaluate the set of tests when working toward 100% code coverage.

Additional Examples

Here are some additional examples from my own code:

#region Name Tests



#region UpdateStatistics Tests



Flyweight M-V-VM: Decorator revision

Revision: Behavior problems

In my original post I provided a behavior (ViewModelBehavior) for providing a view model for the given DataContext.  The behavior saved away the original DataContext as the model, set the DataContext of the AssociatedObject to the view model, and passed the model to the view model.

This worked in 90% of the basic data binding scenarios.  However, it fails to work properly when the DataContext contains a relative data binding expression.  The root source of the problem is that the behavior is a child of the AssociatedObject (from a data binding perspective).

Changing the DataContext causes problems when the original DataContext binding expression is relative.  When the behavior saves the original DataContext to a dependency property it owns, two things break:

First, the relative position of the DataContext on the behavior is one level deeper than the DataContext at the AssociatedObject level.  This means any relative binding will be starting from the wrong point in the tree and won’t get the right value.

Second, when the AssociatedObject’s DataContext is set to the view model, this affects the binding evaluation from the original DataContext.  This corrupts the saved DataContext value and the view model never gets the model.

Attempt: Attached property

To solve the first problem, I attempted to use an attached property on the AssociatedObject to make a copy of the DataContext.  This would mean the data binding statement of the copy would start at the same point as the original DataContext.

I had to write some ugly code to clone the binding and to set the initial value of the attached property to force an attachment at run-time.  Unfortunately, the attached property was still affected by the change to the DataContext.  This is because WPF follows the ordering of attributes.  I attempted to clear the DataContext, set the original DataContext, and then re-apply the DataContext. This didn’t work either as WPF has some special affinity for the DataContext property and always gives it priority.

Attempt: Cloned Binding

To try and solve the second problem, I did a whole bunch of ugly code for making a smart clone of the data binding.  However, the inherited nature of the DataContext made this approach unworkable. The copy couldn’t participate in the same inheritance chain as the DataContext property.

Analogy: Variable swap

After the failed attempts to solve the problem, I realized it was like trying to swap two variables without a temp variable.  A good interview question, but not a great WPF behavior.  By adding another control (like a Grid, Border, etc.), I could add another level and separate the DataContext binding expression away from the AssociatedObject control.  This provides the temp variable for making the swap.

I considered not allowing (or restricting) the DataBinding expression on the AssociatedObject.  This isn’t a good choice though, because it would interfere with other behaviors, prevent normal XAML development, and trade a design-time error for a run-time exception.

Solution: Decorator

Given that I need to ensure a level in the XAML where the DataContext could be changed, I decided to use a Decorator as a base class rather than a behavior.  The decorator could have its DataContext set and could modify the DataContext of its only child.  I thought this would make more sense to the developer than the behavior: everything within the decorator would have the view model as its DataContext.

The improved simplicity of understanding the data binding scope comes at the cost that the developer has to add a new XAML element.  This sacrifices one of my goals for M-V-VM: “The library does not require you to change how you build your XAML hierarchy”.  I thought a lot about these tradeoffs, and after using it for a bit, the decorator approach seems worth it.

The ViewModelScope class inherits from Decorator.  It has the same ViewModelType and static ViewModelFactory properties as the behavior.  To use it, you just add it to your XAML like you would a Border:

<bellacode:ViewModelScope ViewModelType="{x:Type local:MyViewModel}">
    <!-- everything inside binds against MyViewModel -->
</bellacode:ViewModelScope>

Because ViewModelScope is a Decorator which derives from FrameworkElement, you can set the DataContext just like any other FrameworkElement.

I’ve included the updated Flyweight M-V-VM library here.  The HelloWorld sample application is updated, and it includes the POCO commands library from my previous post.

Flyweight M-V-VM: POCO commands

M-V-VM encourages POCO view models

M-V-VM is one of the design patterns that help separate concerns between presentation and data:

  • The model is concerned with data and knows nothing of the view or view model.
  • The  view is concerned with presentation.  It only knows about the view model or model through data binding.
  • The view model is concerned with data although it is designed to provide functionality for the view.  The view model knows about the model, but not about the view.

By keeping the view model unaware of the view, M-V-VM distinguishes itself from patterns like Model-View-Presenter (M-V-P) which allows the presenter to have knowledge of  the view.

Because the view model does not know about or rely on the view, it encourages the view model to be built as a Plain Old CLR object (POCO) class just like the model.  Many consider the view model an example of the adapter pattern as  it wraps the model to make it consumable by the view.

POCO view models are deceptively challenging

How hard could it be to write a POCO view model?  It is just a regular class, right?

Writing a view model that isn’t polluted by view concerns is harder than it first appears.  This is due to how WPF/Silverlight data binding interacts with the view model.

WPF/Silverlight data binding was designed to bind to dependency properties, not to call methods, or respond to events. To handle calling methods, WPF/Silverlight use the ICommand interface.  By encapsulating a method call in an object, WPF can data bind to it. Event and data triggers allow the view to respond to changes in the view model.

So lots of view model authors choose to expose ICommand, RoutedCommand, or RoutedUICommand properties.  Several WPF/SL libraries provide a class that can relay or delegate the CanExecute and Execute implementation from the command to methods in the view model.  This way, view model authors write some extra ICommand properties, wire up the delegation class, and then write methods to implement the commands.
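A typical delegation class from those libraries looks something like this (a minimal sketch of the common “relay command” idea, not code from any particular library):

```csharp
// Minimal relay command: wraps delegates so WPF can data bind a method call.
public class RelayCommand : ICommand
{
    private readonly Action<object> execute;
    private readonly Predicate<object> canExecute;

    public RelayCommand(Action<object> execute, Predicate<object> canExecute = null)
    {
        this.execute = execute;
        this.canExecute = canExecute;
    }

    public bool CanExecute(object parameter)
    {
        // No predicate means the command is always enabled.
        return this.canExecute == null || this.canExecute(parameter);
    }

    public void Execute(object parameter)
    {
        this.execute(parameter);
    }

    // Let the CommandManager drive re-evaluation of CanExecute.
    public event EventHandler CanExecuteChanged
    {
        add { CommandManager.RequerySuggested += value; }
        remove { CommandManager.RequerySuggested -= value; }
    }
}
```

The view model then exposes an ICommand property such as `public ICommand SaveCommand { get { return new RelayCommand(o => this.Save()); } }` — exactly the extra plumbing the next paragraphs argue against.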

This works, but it doesn’t get the full power WPF/SL offers with commanding.  Custom controls (like TextBox) register themselves for commands directly with the CommandManager.  This leverages the structure of the XAML tree to let the control intercept commands as they bubble up.  Controls like Button and MenuItem that support a Command property can bind to x:Static commands or use the built-in ApplicationCommands, ComponentCommands, and MediaCommands.  The standard commands can also be easily associated with input gestures (i.e. keyboard shortcuts) in XAML.


I’ve never liked exposing ICommand properties from my view models.  It feels like the first step to knowing too much about the view.  I also don’t like adding command delegation classes to my view model; it seemed like a lot of extra work.

I decided that a behavior could do a better job of delegating commands while still getting the full power of WPF/SL commanding.  You can download it here.

The download is the Flyweight M-V-VM solution from my previous post with the addition of the Poco library and the HelloPocoWorld sample application.  Check out the readme.txt in the sample app.

How to use it

Attach the BindCommandBehavior to any control in your XAML.  The control’s data context should point at a class that implements the method you want to call when the command is executed.  Just like any other behavior, you can drag-and-drop the BindCommandBehavior onto a control when using Expression Blend.

Once you do that, set the Command property just like you would on a Button control.

Here is a snippet from the HelloPocoWorld sample application.

<Window.Resources>
    <local:Stopwatch x:Key="theStopwatch" />
</Window.Resources>
<Grid DataContext="{Binding Source={StaticResource theStopwatch}}">
    <i:Interaction.Behaviors>
        <bellacode:BindCommandBehavior Command="{x:Static local:StopwatchCommands.Start}" />
    </i:Interaction.Behaviors>
</Grid>

  • The Stopwatch implements the Start method.
  • The Grid has the Stopwatch instance as its DataContext and has the BindCommandBehavior attached.
  • The Start command is a RoutedCommand static property defined in the StopwatchCommands static class.

That’s it!  By using the BindCommandBehavior, your view models don’t need to know anything about ICommand. You just write plain old methods in your view model.

How it works

Just like a button uses the Command property to determine the command to execute when the button is clicked, the BindCommandBehavior uses the Command property to determine the command to handle.  When the command is executed, the behavior delegates by calling a method on the data context.

You can also write a property or method that determines whether the command can execute.  If no “CanExecute” property or method is found, the behavior allows the command to execute.  When your class fires an INotifyPropertyChanged.PropertyChanged event for any of your CanExecute properties, CommandManager.InvalidateRequerySuggested is called for you.

If the properties or methods in your view model are named differently than your command, you can set the ExecuteMethodName and either the CanExecutePropertyName or CanExecuteMethodName attributes on the behavior.  By default, the behavior follows a convention that uses the command name to look for the appropriate properties and methods.

The BindCommandBehavior will try to pass the CommandParameter to your method if possible.  Because the CommandParameter arrives as an object, the behavior will  determine the parameter type of your method and try to convert the object to that type.  If it can’t convert it, it just passes the object and hopes for the best.
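Under that convention, the Stopwatch class from the earlier snippet might look like this (a sketch assuming the convention pairs the Start command with a Start method and a CanStart property; the real member names the behavior looks for may differ):

```csharp
// POCO view model: no ICommand properties, just plain members that the
// behavior finds by convention from the command name "Start".
public class Stopwatch : INotifyPropertyChanged
{
    private bool isRunning;

    public event PropertyChangedEventHandler PropertyChanged;

    // Matched by convention as the "can execute" check for Start.
    public bool CanStart
    {
        get { return !this.isRunning; }
    }

    // Matched by convention as the "execute" method for Start.
    public void Start()
    {
        this.isRunning = true;

        // Firing PropertyChanged for CanStart lets the behavior call
        // CommandManager.InvalidateRequerySuggested on our behalf.
        this.RaisePropertyChanged("CanStart");
    }

    private void RaisePropertyChanged(string propertyName)
    {
        var handler = this.PropertyChanged;
        if (handler != null)
        {
            handler(this, new PropertyChangedEventArgs(propertyName));
        }
    }
}
```

Nothing here references ICommand, WPF, or the view — which is the whole point of the behavior.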

There are a couple of limitations on your view model imposed by the behavior (for now):

  • You can’t have overloads of the method.  The behavior isn’t sure which one to call.
  • Your execute and can execute methods can’t take more than one parameter.


Flyweight M-V-VM

A better M-V-VM library

There are several existing libraries that support the Model-View-ViewModel (M-V-VM) pattern. They each have some nice features. However, I didn’t find any that were just M-V-VM. Some required me to fundamentally restructure my XAML. Others required me to use dependency injection or implement a view model locator pattern. A few required me to use a heavy base class full of complex helpers or depend on a whole set of additional infrastructure. While I appreciate the help writing Plain Old CLR Objects (POCO) view models, I kept wishing there was a library requiring less buy-in to all those “extras”.

So I started building a library that just does M-V-VM. I ended up with something that is easy to use, lightweight, and powerful.

I’ve revised the approach from using a Behavior to using a Decorator.  You can download the updated version and read about it in the “Flyweight M-V-VM: Decorator revision” post.

You can download it here. It contains the library project and a HelloWorld sample application. Both have complete ReadMe.txt files.


M-V-VM only

The library has no dependencies outside WPF/.NET. All the classes are related strictly to M-V-VM.

Normal XAML development

The library does not require you to change how you build your XAML hierarchy.

No POCO glue

The library does not provide commanding or property changed helpers or patterns. Another library might, but not this one.

No additional patterns required

The library can support dependency injection and a view model locator, but neither is required of the developer.

How to use it

These are the two classes you’ll use 99% of the time.


This is a base class for your view model classes.


This is the ViewModelBehavior: a behavior that you can drag and drop onto your controls (in Expression Blend). Just set the ViewModelType property to the type of your view model.

You can review the readme to learn about the other classes.

How it works

When the behavior attaches to the control:
  1. It instantiates a view model using the ViewModelType property.
  2. It sets the view model’s Model property = DataContext
  3. It sets the view model’s View property = control
  4. It sets the control’s DataContext = view model
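Those steps could be sketched roughly like this (hypothetical code; the ViewModel base class and its Model/View property names are assumptions, not the library’s actual members):

```csharp
// Rough sketch of the attach sequence described in steps 1-4.
protected override void OnAttached()
{
    base.OnAttached();

    // 1. Instantiate a view model using the ViewModelType property.
    var viewModel = (ViewModel)Activator.CreateInstance(this.ViewModelType);

    // 2. The original DataContext becomes the model.
    viewModel.Model = this.AssociatedObject.DataContext;

    // 3. The view model gets a reference to its view.
    viewModel.View = this.AssociatedObject;

    // 4. The control now data binds against the view model.
    this.AssociatedObject.DataContext = viewModel;
}
```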

Example 1: Basic ViewModel

This example shows the most basic creation of a view model.

In this example and the following examples, a Customer is the model. The Customer class has basic properties like FirstName and LastName. The view model provides additional properties like FullName.

<Grid DataContext="{Binding Customer}">
  <i:Interaction.Behaviors>
    <bellacode:ViewModelBehavior ViewModelType="{x:Type local:CustomerViewModel}"/>
  </i:Interaction.Behaviors>
  <TextBlock Text="{Binding FullName}" />
</Grid>

Yes, the example is contrived, as you could easily do this with a ValueConverter. Some consider that the M-V-VM pattern eliminates the need for ValueConverters and ValidationRules. Others find those classes continue to be helpful and are better encapsulated for re-use.
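The CustomerViewModel for this example could be as small as this (an illustrative sketch; the base class name and the behavior-populated Model property follow the description above and are assumptions):

```csharp
// Illustrative view model: wraps the Customer model and adds the
// presentation-only FullName property the view binds to.
public class CustomerViewModel : ViewModel
{
    // Set by the behavior from the original DataContext (step 2 above).
    public Customer Model { get; set; }

    public string FullName
    {
        get { return this.Model.FirstName + " " + this.Model.LastName; }
    }
}
```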

Example 2: Bind to ViewModel and Model

This example shows that you can also bind directly against the model.

It is up to you if your view model follows a strict Facade pattern when wrapping your model. There are some nice benefits to the Facade pattern because then your data-binding statements don’t have any knowledge that there is a view model containing a model. However, if your model already implements INotifyPropertyChanged, it can often be a massive time saver to bind directly to the model.

<Grid DataContext="{Binding Customer}">
  <i:Interaction.Behaviors>
    <bellacode:ViewModelBehavior ViewModelType="{x:Type local:CustomerAccountViewModel}"/>
  </i:Interaction.Behaviors>
  <StackPanel>
    <TextBlock Text="{Binding Model.CreditCard.AccountHolderName}" />
    <TextBlock Text="{Binding AccountStatus}" />
  </StackPanel>
</Grid>

Example 3: Hierarchy of ViewModels

This example shows how to create view models at different levels for a hierarchy of views.

Any part of the view model – the model, part of the model, or a property – can be passed as the data context for a nested control. The behavior can then use that data context as a model for the nested control’s view model. View models naturally follow the same hierarchy as your views. The behavior is easy to use because it leverages the natural XAML hierarchy.

<Grid DataContext="{Binding Customer}">
  <i:Interaction.Behaviors>
    <bellacode:ViewModelBehavior ViewModelType="{x:Type local:CustomerViewModel}"/>
  </i:Interaction.Behaviors>
  <Grid.ColumnDefinitions>
    <ColumnDefinition Width="Auto" />
    <ColumnDefinition />
  </Grid.ColumnDefinitions>
  <Grid.RowDefinitions>
    <RowDefinition Height="Auto" />
    <RowDefinition Height="Auto" />
    <RowDefinition Height="Auto" />
    <RowDefinition Height="Auto" />
  </Grid.RowDefinitions>
  <Label Content="Name" />
  <TextBlock Text="{Binding FullNameWithSalutation}" Grid.Column="1" />
  <Label Content="Shipping Address" Grid.Row="1"/>
  <Grid DataContext="{Binding Model.ShippingAddress}" Grid.Row="1" Grid.Column="1">
    <i:Interaction.Behaviors>
      <bellacode:ViewModelBehavior ViewModelType="{x:Type local:AddressViewModel}"/>
    </i:Interaction.Behaviors>
    <TextBlock Text="{Binding FullAddress}" />
  </Grid>
  <Label Content="Billing Address" Grid.Row="2"/>
  <Grid DataContext="{Binding Model.BillingAddress}" Grid.Row="2" Grid.Column="1">
    <i:Interaction.Behaviors>
      <bellacode:ViewModelBehavior ViewModelType="{x:Type local:AddressViewModel}"/>
    </i:Interaction.Behaviors>
    <TextBlock Text="{Binding FullAddress}" />
  </Grid>
  <Label Content="Account Status" Grid.Row="3"/>
  <Grid DataContext="{Binding Model}" Grid.Row="3" Grid.Column="1">
    <i:Interaction.Behaviors>
      <bellacode:ViewModelBehavior ViewModelType="{x:Type local:CustomerAccountViewModel}"/>
    </i:Interaction.Behaviors>
    <TextBlock Text="{Binding AccountStatus}" />
  </Grid>
</Grid>

Example 4: Lists and Data Templates

This example shows how a data template can easily create a view model for each item when used in a list control.

Your view models should not generally expose other view models as properties. This would mean that one view model is responsible for creating instances of other view models. You would lose the loose coupling the view model behavior provides and the flexible type-agnostic nature of data binding.

<Grid>
  <ListBox ItemsSource="{Binding Customers}">
    <ListBox.ItemTemplate>
      <DataTemplate>
        <Grid>
          <i:Interaction.Behaviors>
            <bellacode:ViewModelBehavior ViewModelType="{x:Type local:CustomerViewModel}"/>
          </i:Interaction.Behaviors>
          <Grid.ColumnDefinitions>
            <ColumnDefinition />
            <ColumnDefinition />
          </Grid.ColumnDefinitions>
          <TextBlock Text="{Binding Customer.Name}" />
          <TextBlock Text="{Binding MostRecentPurchaseDate}" Grid.Column="1" />
        </Grid>
      </DataTemplate>
    </ListBox.ItemTemplate>
  </ListBox>
</Grid>

Up Next: POCO

Plain Old CLR Objects (POCO) view models feel much less UI-bound than those that expose RoutedCommand, ICommand, RoutedEvents, or ICollectionView. I’m working on a behavior that makes command binding for a control just as easy as the view model. It will be a separate library so you can choose to use it if it works for you.

Modularity with Prism (Boise 2011)

As part of the talk I gave on Modularity with Prism today, I created a very small modular application.

The Simplest Modular Application

This WPF application allows you to type different commands into a text box and see the result in a text box below.  The app has just one module – the MathModule – which provides add (+), subtract (-), multiply (*), and divide (/) in RPN-style notation.

The application went through several stages:

  1. A non-modular application that just defines the ICommand interface and command-processing classes.
  2. A non-modular application that uses the most basic Prism bootstrapper possible.
  3. A modular application that uses the standard ModuleCatalog and adds the MathModule by direct assembly type reference.

I then walked through both Modularity Quickstarts (Desktop and Silverlight) to show the UnityBootstrapper.Run() method diving down into the ModuleManager, ModuleCatalog, ModuleInitializer, and ModuleTypeLoaders (configuration, directory, and XAP).

You can download the Cmdlet program here.

During the demo, I had used project references to Prism.Desktop and Prism.Desktop.UnityExtensions.  However, to make this download more consumable, I reverted to assembly references.

If you unzip so that the Examples folder is a sibling of Prism, you shouldn’t need to update any assembly references.


  • There isn’t any magic to Prism’s Modularity. While some aspects of loading assemblies into the application domain during directory sweep or XAP download are sophisticated, you can debug through the entire module discovery, loading, and initialization process.
  • Prism provides rich module management – don’t roll your own. At first glance, it seems easy enough to discover and load assemblies.  However, the ModuleManager provides a state machine for module loading, cyclic-graph and duplicate detection of dependencies, and support for events for both synchronous and asynchronous loading.
  • Modules are logical divisions.  You don’t have to partition your application only on assembly boundaries,  you can register a group of assemblies in a single module.  Partition your application into modules based on expected logical usages of the application.

A couple of things I forgot to mention

  • For Silverlight developers, you can leverage the asynchronous download of modules in XAPs (with progress) to keep your start-up time very short and load features on demand as they are needed.  This can make a big difference for applications where some expensive features are used rarely.
  • You can gather the ModuleInfo entries in configuration/XAML catalogs into groups and then specify dependencies between those groups.  This can make configuring dependencies simpler.

Unit Testing and MOQ (Boise 2011)

I got the chance to talk about Unit Testing and MOQ at the WPF/Prism conference in Boise, Idaho. This post provides a list of the topics I covered along with the sample source code.

Read More

WPF Behavior: Global Application Shortcut Keys

RegisterAppShortKeysBehavior is a standard System.Windows.Interactivity.Behavior that can be attached to any control or control template. The behavior encapsulates all functionality required to support global shortcut keys.

Read More

Effective MEF (Managed Extensibility Framework)

I’ve been working heavily with Managed Extensibility Framework (MEF) as part of Prism v4. Through this effort (and by regularly pestering Glenn Block with questions), I’ve compiled some guidelines that I find useful.

I’ve based my guidelines on a style I first saw in ‘Effective C++’ by Scott Meyers. I like his use of the word “judiciously”. :-)

A couple of things to know before we get started:

This post isn’t a getting started guide for MEF; go to the MEF CodePlex site instead.

If you use Prism, you may notice that the Prism library code violates almost every guideline I have listed below.  Prism is “container agnostic” and there are some additional complexities and compromises in order to be able to support multiple dependency injection containers.  I guess Prism is the exception that proves the rule. Working to make Prism support MEF certainly made me familiar with MEF in a hurry.

#1 Prefer new() within assemblies
Ultimately, MEF uses reflection to discover attributed types.  It also tracks containers, catalogs, and exports; and must walk the tree of imports to satisfy them.  Overuse of MEF complicates design and slows performance.

The new() operator is the right choice for strong cohesion.  If you still need logic for which types to create, use design patterns such as the abstract factory pattern.

Clarification: If you have a Customer class that uses an AccountId class where no one will ever replace the implementation of AccountId, then just use new().  There’s no need to use dependency injection here.
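The clarification above can be sketched like this. The Customer and AccountId names come from the guideline itself; the rest is a hypothetical illustration.

```csharp
// Within one assembly, plain construction is the simplest and fastest choice.
public class AccountId
{
    public string Value { get; private set; }

    public AccountId(string value)
    {
        Value = value;
    }
}

public class Customer
{
    private readonly AccountId accountId;

    public Customer(string id)
    {
        // No one will ever replace AccountId's implementation,
        // so there is no need for dependency injection here - just new() it.
        this.accountId = new AccountId(id);
    }
}
```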

#2 Prefer a mocking framework for unit testing
Yes, MEF can make unit testing easier by allowing run-time replacement, but your program pays the cost of MEF even when you are not running tests.  Mocking frameworks inject mock implementations only during testing, leaving your code optimal for the real world.
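As a sketch of this approach with Moq (the names CustomerService, ICustomerRepository, and the xUnit test framework are illustrative assumptions, not from the original post):

```csharp
using Moq;
using Xunit;

public interface ICustomerRepository
{
    int Count();
}

public class CustomerService
{
    private readonly ICustomerRepository repository;

    // Plain constructor injection - no MEF attributes needed in production code.
    public CustomerService(ICustomerRepository repository)
    {
        this.repository = repository;
    }

    public bool HasCustomers()
    {
        return repository.Count() > 0;
    }
}

public class CustomerServiceTests
{
    [Fact]
    public void HasCustomers_ReturnsTrue_WhenRepositoryNotEmpty()
    {
        // The mock exists only inside the test; production code never sees it.
        var mock = new Mock<ICustomerRepository>();
        mock.Setup(r => r.Count()).Returns(3);

        var service = new CustomerService(mock.Object);

        Assert.True(service.HasCustomers());
    }
}
```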

#3 Prefer MEF’s declarative attributes to imperative programming
MEF has a large collection of classes and rich services.  With the exception of setting up the CompositionContainer and some advanced scenarios, you should avoid calling them directly.  You won’t get the benefits of static analysis tools, you’ll spend a lot of time figuring out which to use when, and your code will be significantly more complex to manage.

MEF’s declarative attributes allow you to build your classes much like you do today.  When you ask the composition container for an instance of your class, MEF handles the walking of the exports to satisfy the imports you need.
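A minimal sketch of the declarative style, assuming a reference to System.ComponentModel.Composition (ILogger, ConsoleLogger, and OrderService are hypothetical names):

```csharp
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using System.Reflection;

public interface ILogger
{
    void Write(string message);
}

[Export(typeof(ILogger))]
public class ConsoleLogger : ILogger
{
    public void Write(string message)
    {
        System.Console.WriteLine(message);
    }
}

[Export]
public class OrderService
{
    // Declared with an attribute; MEF satisfies this import during composition.
    [Import]
    public ILogger Logger { get; set; }
}

public static class Program
{
    public static void Main()
    {
        // Setting up the container is the one imperative step you keep.
        var catalog = new AssemblyCatalog(Assembly.GetExecutingAssembly());
        using (var container = new CompositionContainer(catalog))
        {
            // MEF walks the exports to satisfy OrderService's imports.
            var service = container.GetExportedValue<OrderService>();
            service.Logger.Write("composed");
        }
    }
}
```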

#4 Prefer Export interfaces (avoid exporting concrete types)
In MEF an export is identified through a contract name.  When just [Export] is specified, the contract name is the namespace qualified type name.  Exporting and importing concrete types reduces MEF to a slower version of new().

The whole point of managed extensibility is being able to discover and replace implementation at run time, so always prefer to define and export an interface.  As well, moving your interfaces into a separate component goes a long way to being able to quickly change an assembly reference or deploy a different DLL to replace dependencies at run time.
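The contrast above can be sketched as follows (the repository names are hypothetical):

```csharp
using System.ComponentModel.Composition;

// Avoid: exporting the concrete type ties every importer to this implementation,
// reducing MEF to a slower version of new().
[Export]
public class SqlCustomerRepository { /* ... */ }

// Prefer: define and export an interface so the implementation
// can be discovered and replaced at run time.
public interface ICustomerRepository { /* ... */ }

[Export(typeof(ICustomerRepository))]
public class XmlCustomerRepository : ICustomerRepository { /* ... */ }

public class CustomerViewModel
{
    // Importers depend only on the contract, not on any concrete type.
    [Import]
    public ICustomerRepository Repository { get; set; }
}
```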

#5 Use strongly-typed metadata to prevent unnecessary instantiation.
Although MEF provides name/value pair metadata, its real power comes with defining a strongly-typed export attribute.  This way you get strongly-typed metadata that can be used before the type is instantiated.  It also allows [ImportMany] with Lazy<T, M> to filter to only the types that have that metadata.

Calling GetType() on a Lazy<T>.Value makes it very easy to end up instantiating the type before you need it. However, you may sometimes need to know the concrete type of the export.  By putting the concrete type name in the export metadata, you can inspect an export’s type without instantiating it.
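A sketch of a strongly-typed export attribute, assuming a reference to System.ComponentModel.Composition (IPlugin, IPluginMetadata, and PluginExportAttribute are hypothetical names):

```csharp
using System;
using System.ComponentModel.Composition;

public interface IPlugin
{
    void Run();
}

// The metadata view: MEF projects the export's metadata onto this interface.
public interface IPluginMetadata
{
    string Name { get; }
}

// A strongly-typed export attribute that carries the metadata.
[MetadataAttribute]
[AttributeUsage(AttributeTargets.Class, AllowMultiple = false)]
public class PluginExportAttribute : ExportAttribute
{
    public PluginExportAttribute(string name)
        : base(typeof(IPlugin))
    {
        Name = name;
    }

    public string Name { get; private set; }
}

[PluginExport("Reports")]
public class ReportsPlugin : IPlugin
{
    public void Run() { }
}

public class Host
{
    [ImportMany]
    public Lazy<IPlugin, IPluginMetadata>[] Plugins { get; set; }

    public void RunPlugin(string name)
    {
        foreach (var plugin in Plugins)
        {
            // Metadata is available without touching .Value,
            // so non-matching plugins are never instantiated.
            if (plugin.Metadata.Name == name)
            {
                plugin.Value.Run();
            }
        }
    }
}
```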

#6 Use Lazy<T> when importing types
Lazy<T> has the same annoying value de-reference as nullable types, but it is well worth it to instantiate a type only when someone actually needs it.

Using Lazy<T> during imports provides great benefits:

  • If the type isn’t used at run-time, it is never instantiated and its imports are not satisfied.  This improves performance and decreases the memory footprint.
  • Import metadata can be used to determine what to do with the type without instantiating it.
  • Lazy<T, M> (where M stands for metadata) can be used to filter out types that don’t have certain attributes.
  • The longer a program waits before instantiating a type, the more likely the imports it requires can be satisfied.  This is especially true for Silverlight programs broken into multiple XAP files that download in the background.
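A minimal sketch of the deferred instantiation described above (Shell and ISettingsEditor are hypothetical names):

```csharp
using System;
using System.ComponentModel.Composition;

public interface ISettingsEditor
{
    void Show();
}

[Export]
public class Shell
{
    // The settings editor is rarely used; Lazy<T> defers instantiating it
    // (and satisfying its own imports) until .Value is first read.
    [Import]
    public Lazy<ISettingsEditor> SettingsEditor { get; set; }

    public void ShowSettings()
    {
        // First access to .Value triggers creation.
        SettingsEditor.Value.Show();
    }
}
```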

#7 Judiciously restrict recomposition
MEF provides two very attractive ways to restrict recomposition: The [ImportingConstructor] attribute and setting ‘AllowRecomposition=false’ when using the [Import] attribute.

It often feels like a good design choice to use these because they help ensure a class is initialized properly. However, they have a very high impact on composition:

When you restrict recomposition of type ‘B’ within type ‘A’, recomposing ‘B’ isn’t just restricted within ‘A’ – you can no longer recompose ‘B’ within that container.  This is a core feature and tenet of MEF called “stable composition” – if the recomposition requirements cannot be satisfied, composition is rejected and the container is unchanged.
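The two restricting forms look like this in code (AuditService, ReportService, and ILogger are hypothetical names):

```csharp
using System.ComponentModel.Composition;

public interface ILogger
{
    void Write(string message);
}

[Export]
public class AuditService
{
    private readonly ILogger logger;

    // Constructor imports can never be recomposed after construction.
    [ImportingConstructor]
    public AuditService(ILogger logger)
    {
        this.logger = logger;
    }
}

[Export]
public class ReportService
{
    // An explicit opt-out of recomposition on a property import.
    [Import(AllowRecomposition = false)]
    public ILogger Logger { get; set; }
}
```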

#8 Allow for late composition when possible
The [Import] and even [ImportingConstructor] attributes have an option you can set: ‘AllowDefault=true’.  This allows the import to be set to default(T) (usually null for reference types) if the import cannot be satisfied.

If your type can be written such that an import is not required immediately, then setting ‘AllowDefault=true’ can make your class much more forgiving when an import is satisfied at a later time during recomposition.  Of course, you will need to add proper null checking before trying to use that import.

Note: When implementing IPartImportsSatisfiedNotification.OnImportsSatisfied, make sure you code it to handle being called multiple times (as recomposition can occur multiple times).
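Putting AllowDefault and the note together in one sketch (Editor and ISpellChecker are hypothetical names):

```csharp
using System.ComponentModel.Composition;

public interface ISpellChecker
{
    void Check(string text);
}

[Export]
public class Editor : IPartImportsSatisfiedNotification
{
    // If no spell checker is available yet, composition still succeeds and
    // this stays null; a later recomposition may fill it in.
    [Import(AllowDefault = true, AllowRecomposition = true)]
    public ISpellChecker SpellChecker { get; set; }

    public void CheckDocument(string text)
    {
        // Guard: the import may still be its default (null).
        if (SpellChecker != null)
        {
            SpellChecker.Check(text);
        }
    }

    public void OnImportsSatisfied()
    {
        // Recomposition can invoke this multiple times; keep it re-entrant
        // and avoid one-time-only initialization here.
    }
}
```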

#9 Don’t register the CompositionContainer with itself
Many dependency injection containers register themselves.  It is tempting to do the same with MEF, but if you use the declarative attributes, you shouldn’t need access to the container.

Registering the composition container with itself causes some circular references that make it harder for the container to be disposed correctly and may artificially extend the lifetime of the part exports in the catalog.  MEF’s container holds exports, and singleton exports hold references to types that have been instantiated.

If you do need access to the container, use a singleton pattern.  If you are using Silverlight, reference System.ComponentModel.Composition.Initialization and use the CompositionInitializer to bootstrap MEF or get imports satisfied on a newly constructed type.

#10 Learn to debug composition errors
I will always prefer compile-time errors to run-time errors (a la Scott Meyers).  If you use MEF, you are making an important trade-off: you dramatically increase the chance of run-time errors to gain discovery and extensibility.

Using MEF can entail some deeply frustrating debug sessions when composition mysteriously fails.  MEF is a run-time extensibility system and is lacking in static analysis tools.  Learning a few techniques can get you to the root cause of a composition error quicker.

  • Implement IPartImportsSatisfiedNotification on your type to be able to set breakpoints during the composition process.
  • Slow down and read the composition exception detail carefully.  It looks like a bunch of repetitive cascading errors – and it is – but don’t ignore it.  The cause of the composition error is often right in front of you.
  • With the .NET 4.0 RTM, Silverlight classes cannot compose on private fields.  You have to mark them ‘public’ to have imports work successfully.
  • Make sure to set ‘CopyLocal=false’ for shared assembly references.  ComposablePart has no notion of identity, and if an assembly exports a singleton and the DeploymentCatalog or DirectoryCatalog imports that DLL again, composition will fail.  This is especially important for Silverlight applications with multiple XAPs.
  • Build your application to recover from composition exceptions.  MEF provides stable composition so that the container is still OK to use because composition is rejected when imports can’t be properly satisfied.
  • Follow the KISS principle. If your types have so many dependencies that you can’t keep them straight, the problem may be your design.  Visual Studio has some nice analysis tools for cyclomatic complexity and maintainability indices that let you check how well your code is refactored.
  • Bad case: When things get tough, build a skeleton sample application that contains the types in the composition order you want, and debug it piecemeal there.
  • Worst case: Unfortunately, sometimes I have to put Debug.WriteLine in the constructors of types, or start replacing [Import] with new().

I really like MEF and have gotten pretty good at using it “judiciously”.  There are some incredibly clever things you can do with MEF and it can seriously change your thinking on solving some problems.  MEF is still a baby (although a darn cute one), and as it matures I think it will be an incredible tool that will have radically altered how programs are written and how they can adapt at run-time.

Give MEF a try!