Category Archives: Code

Using JSON payloads to improve modern ASP.NET MVC applications

Getting Started: An ASP.NET MVC site that lists customers

Download all the code here.

This section is the vanilla MVC setup that sets the stage for talking about AJAX and JSON payloads later on.  This should all be very familiar to ASP.NET MVC developers.

1. First, we add a very simple customer model.

public class Customer
{
    public string Name { get; set; }
}

2. Then we add a very simple home page model.

public class HomePageModel
{
    public IEnumerable<Customer> Customers { get; set; }
}

3. We write an Index action in the Home controller.

public ActionResult Index()
{
    var model = new HomePageModel();
    model.Customers = LoadCustomers();
    return View(model);
}

4. We write a LoadCustomers method to get the data. For demo purposes it just returns a static list of customers sorted by name. This would likely be factored into a data access layer in a real application.

private static Customer[] customers = new Customer[]
{
    new Customer() { Name = "John"},
    new Customer() { Name = "Bill"},
    …
};
private static IEnumerable<Customer> LoadCustomers()
{
    return customers.OrderBy(x => x.Name);
}

5. We write an Index view in the Home folder to list the customers.

@model PayloadDemo.Models.HomePageModel
@{
    ViewBag.Title = "Payload Demo";
}
<h2>@ViewBag.Title</h2>
<table>
    <thead>
        <tr>
            <th style="text-align:left; margin:0">
                Name
            </th>
        </tr>
    </thead>
    <tbody>
    @foreach (var item in Model.Customers)
    {
        <tr>
            <td>
                @Html.DisplayFor(modelItem => item.Name)
            </td>
        </tr>
    }
    </tbody>
</table>

PART 1: Traditional Pagination

That customer list could get pretty long, so next we will want to add pagination.  We can start by using a traditional request/response to page through the data.

1. We update the Index action in the Home controller to support pagination.  Notice that it sets the previous and next page numbers in the ViewBag.  Our view will use these later on.

private const int pageSize = 5;
public ActionResult Index(int pageNumber)
{
    ViewBag.PreviousPageNumber = Math.Max(1, pageNumber - 1);
    ViewBag.NextPageNumber = pageNumber + 1;
    var model = new HomePageModel();
    model.Customers = LoadCustomers((pageNumber - 1) * pageSize, pageSize);
    if (pageNumber != 1 && model.Customers.FirstOrDefault() == null)
    {
        ViewBag.NextPageNumber = -1;
    }
    return View(model);
}

2. We update the LoadCustomers method to support skip and top parameters.

private static IEnumerable<Customer> LoadCustomers(int skip, int top)
{
    return customers.OrderBy(x => x.Name).Skip(skip).Take(top);
}

3. We update the route entry in Global.asax.cs to support the pageNumber parameter.

routes.MapRoute(
"Default", // Route name
"{controller}/{action}/{pageNumber}", // URL with parameters
new { controller = "Home", action = "Index", pageNumber = 1 } // Parameter defaults
);

4. We add a div with action links to the Home/Index view.  The action links  use the ViewBag properties to navigate to the previous or next page.

<div style="margin:5px">
    @Html.ActionLink("Previous", "Index", new { pageNumber = ViewBag.PreviousPageNumber })
    @if (ViewBag.NextPageNumber != -1)
    {
        @Html.ActionLink("Next", "Index", new { pageNumber = ViewBag.NextPageNumber })
    }
</div>

Now the user can page through data.  Notice the URL reflects the current page number.

[Screenshot: Part 1]

Part 2: Pagination using AJAX and JSON

The traditional MVC approach works, but the user is forced to refresh the entire page each time they navigate to the previous or next set of customers.  Modern web applications make an asynchronous AJAX request to populate the customer list with the paged data.

1. We keep the changes we made to the Index action, LoadCustomers method, and routing table to support pagination for requests to the server.

public ActionResult Index(int pageNumber)

2. We'll add a new method that returns paged customer data as JSON. Here is the new GetCustomers action method in the Home controller.

[HttpPost]
public JsonResult GetCustomers(int pageNumber)
{
    return Json(LoadCustomers((pageNumber - 1) * pageSize, pageSize));
}

3. We replace the links in the Home/Index view with buttons.

<div style="margin:5px">
    <button id="previous-page" style="width: 80px; height: 25px; display: inline;" >Previous</button>
    <button id="next-page" style="width: 80px; height: 25px; display: inline;" >Next</button>
</div>

4. We also add a data attribute and an ID to the Home/Index view.  These will help us call back using AJAX.

<table id="customers" data-url="@Url.Action("GetCustomers", "Home")">

5. We add a new JavaScript file called payload-demo.js to the project.

Here is the payload-demo.js code.  This code tracks the current page number.  When the user clicks one of the buttons, we make an AJAX request and then repopulate the customer list with the data.

$(function () {
    var currentPageNumber = 1;

    var populateCustomers = function(data) {
        var $customerList = $('#customers tbody');
        $customerList.children().detach();
        $.each(data, function (i, customer) {
            $('<td></td>').text(customer.Name)
            .appendTo($('<tr></tr>'))
            .parent()
            .appendTo($customerList);
        });
    };

   $("#next-page").click(function (event) {
        currentPageNumber += 1;
        var $customers = $('#customers');
        var url = $customers.data('url');
        url = url + '/' + currentPageNumber.toString();
        $.ajax({
            url: url,
            context: document.body,
            type: 'POST',
            success: function (data) {
                if ($(data).length == 0)
                {
                    currentPageNumber -= 1;
                    return;
                }
                populateCustomers(data);
            }
        });
    });

    $("#previous-page").click(function (event) {
        if (currentPageNumber == 1)
        {
            return;
        }

        currentPageNumber -= 1;
        var $customers = $('#customers');
        var url = $customers.data('url');
        url = url + '/' + currentPageNumber.toString();
        $.ajax({
            url: url,
            context: document.body,
            type: 'POST',
            success: function (data) {
                populateCustomers(data);
            }
        });
    });
});

6. We add a reference to payload-demo.js to _Layout.cshtml so that each page gets the script.

<script src="@Url.Content("~/Scripts/payload-demo.js")" type="text/javascript"></script>

Now the user can navigate paged customer data without leaving the page. Notice that we’re on page 2, but the URL hasn’t changed.

[Screenshot: Part 2]

Part 3: Pagination with AJAX and a JSON payload

At this point, we’ve done the same work twice.

To produce the initial page, we created a model for the home page, an action that loaded the data into the model and invoked the view, and the Razor syntax in the view to produce the table with rows and columns.

To update the page, we created a JSON model, an action to return the JSON, and jQuery code to update the table with new rows and columns.

We could decide to do everything with AJAX and JSON and get rid of the initial page code.  We could leave the customer table empty on the initial page response, and as soon as the page loads make a request for the first page of customer data.

This works OK, but there can be a delay as the request is made.  The customer may see an empty customer list for a moment.  You could add a wait indicator, but this further delays getting data in front of the user and makes the application feel less responsive.

Note: I never liked those pages where it loads and then a hundred wait spinners go nuts as I wait and wait for the data to show up. I forgive the spinners on Mint.com because they are asynchronously connecting to my bank accounts and they provide the most recent data immediately.

You don’t have to make that extra web request! In this section, I’ll show you how to send down initial JSON data with the page as a payload and let the same script that populates the table with the results of the AJAX request immediately populate the customer list.

1. We update our Index action in the Home controller to put the JSON data into the ViewBag.  This uses the JavaScriptSerializer to turn the JsonResult from GetCustomers into a string.  Warning: Be sure to remember the .Data or it won’t serialize the right data.

public ActionResult Index(int pageNumber)
{
    ViewBag.CustomersPayload = new JavaScriptSerializer().Serialize(GetCustomers(pageNumber).Data);
    return View();
}

2. We can delete the HomePageModel and simplify our Home/Index view.  Notice the table is empty. We add a hidden div with a data attribute to the end of the page.  We put it at the end of the page so that the browser doesn’t have to parse it while it is working to render the visible elements, and so that we can delete it after we’ve used the data.

@{
  ViewBag.Title = "Payload Demo";
}
<h2>@ViewBag.Title</h2>
<table id="customers" data-url="@Url.Action("GetCustomers", "Home")">
  <thead>
    <tr>
      <th style="text-align:left; margin:0">Name</th>
    </tr>
  </thead>
  <tbody>
  </tbody>
</table>
<div style="margin:5px">
  <button id="previous-page" style="width: 80px; height: 25px; display: inline;" >Prev</button>
  <button id="next-page" style="width: 80px; height: 25px; display: inline;" >Next</button>
</div>
<div id="payload" style="display:none" data-customers="@ViewBag.CustomersPayload">

3. We update our payload-demo.js script to use the payload.  We delete the div once we have used it to ensure we won’t apply stale data later when the user clicks a navigation button.

var $payload = $('#payload');
if ($payload.length > 0) {
    populateCustomers($payload.data('customers'));
    $payload.detach();
}

Now we get the initial page populated and only have a single code path to write, debug, and maintain.

[Screenshot: Part 3]

Wrap-Up

If you inspect the client/server traffic in Fiddler or IE F12, you will see that the DNS resolution, proxy routing, and connection negotiation are more expensive than the few extra KB of downloaded data.  This is even more true if your users are in other countries or on lower-bandwidth connections.

By including JSON payloads in your page, you can get the best of both worlds: dynamic applications built using jQuery and AJAX, and immediate display of data with fewer asynchronous requests.

Flyweight M-V-VM: Decorator revision

Revision: Behavior problems

In my original post I provided a behavior (ViewModelBehavior) for providing a view model for the given DataContext.  The behavior saved away the original DataContext as the model, set the DataContext of the AssociatedObject to the view model, and passed the model to the view model.

This worked in 90% of the basic data binding scenarios.  However, it fails to work properly when the DataContext contains a relative data binding expression.  The root source of the problem is that the behavior is a child of the AssociatedObject (from a data binding perspective).

Changing the DataContext causes problems when the original DataContext binding expression is relative.  When the behavior saves the original DataContext to a dependency property it owns, two things break:

First, the relative position of the DataContext on the behavior is one level deeper than the DataContext at the AssociatedObject level.  This means any relative binding will be starting from the wrong point in the tree and won’t get the right value.

Second, when the AssociatedObject’s DataContext is set to the view model, this affects the binding evaluation from the original DataContext.  This corrupts the saved DataContext value and the view model never gets the model.

Attempt: Attached property

To solve the first problem, I attempted to use an attached property on the AssociatedObject to make a copy of the DataContext.  This would mean the data binding statement of the copy would start at the same point as the original DataContext.

I had to write some ugly code to clone the binding and to set the initial value of the attached property to force an attachment at run-time.  Unfortunately, the attached property was still affected by the change to the DataContext.  This is because WPF follows the ordering of attributes.  I attempted to clear the DataContext, set the original DataContext, and then re-apply the DataContext. This didn’t work either as WPF has some special affinity for the DataContext property and always gives it priority.

Attempt: Cloned Binding

To try and solve the second problem, I did a whole bunch of ugly code for making a smart clone of the data binding.  However, the inherited nature of the DataContext made this approach unworkable. The copy couldn’t participate in the same inheritance chain as the DataContext property.

Analogy: Variable swap

After the failed attempts to solve the problem, I realized it was like trying to swap two variables without having a temp variable.  A good interview question, but not a great WPF behavior.  By adding another control (like a Grid, Border, etc.), I could add another level and separate the DataContext binding expression away from the AssociatedObject control.  This provides the temp variable for making the swap.

I considered not allowing (or restricting) the DataBinding expression on the AssociatedObject.  This isn’t a good choice though because it would interfere with other behaviors, prevent normal XAML development, and turn a design-time error into a run-time exception.

Solution: Decorator

Given that I needed to ensure a level in the XAML where the DataContext could be changed, I decided to use a Decorator as a base class rather than a behavior.  The decorator can have its DataContext set and can modify the DataContext of its only child.  I thought this would make more sense to the developer than the behavior; everything within the decorator has the view model as its DataContext.

The improved clarity about the data binding scope comes at the cost that the developer has to add a new XAML element.  This sacrifices one of my goals for M-V-VM: “The library does not require you to change how you build your XAML hierarchy.”  I thought a lot about these tradeoffs, and after using it for a bit, the decorator approach seems worth it.

The ViewModelScope class inherits from Decorator.  It has the same ViewModelType and static ViewModelFactory properties as the behavior.  To use it, you just add it to your XAML like you would a Border:

<bellacode:ViewModelScope ViewModelType="{x:Type local:MyViewModel}" >
    ...
</bellacode:ViewModelScope>

Because ViewModelScope is a Decorator which derives from FrameworkElement, you can set the DataContext just like any other FrameworkElement.
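The download has the full implementation, but here is a minimal sketch of the idea, assuming a hypothetical IViewModel contract with Model and View properties (the shipped library uses ViewModel<TModel> and the ViewModelFactory instead):

using System;
using System.Windows;
using System.Windows.Controls;

// Sketch only: illustrates the DataContext swap the decorator enables.
public class ViewModelScopeSketch : Decorator
{
    public Type ViewModelType { get; set; }

    public ViewModelScopeSketch()
    {
        this.Loaded += this.OnLoaded;
    }

    private void OnLoaded(object sender, RoutedEventArgs e)
    {
        var child = this.Child as FrameworkElement;
        if (this.ViewModelType == null || child == null)
        {
            return;
        }

        // The decorator's own DataContext (the model) can use any binding expression;
        // only the child's DataContext is replaced with the view model.
        var viewModel = (IViewModel)Activator.CreateInstance(this.ViewModelType);
        viewModel.Model = this.DataContext;
        viewModel.View = child;
        child.DataContext = viewModel;
    }
}

// Hypothetical contract used by the sketch above.
public interface IViewModel
{
    object Model { get; set; }
    object View { get; set; }
}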

I’ve included the updated Flyweight M-V-VM library here.  The HelloWorld sample application is updated and it includes the POCO commands library from my previous post.

Flyweight M-V-VM: POCO commands

M-V-VM encourages POCO view models

M-V-VM is one of the design patterns that help separate concerns between presentation and data:

  • The model is concerned with data and knows nothing of the view or view model.
  • The  view is concerned with presentation.  It only knows about the view model or model through data binding.
  • The view model is concerned with data although it is designed to provide functionality for the view.  The view model knows about the model, but not about the view.

By keeping the view model unaware of the view, M-V-VM distinguishes itself from patterns like Model-View-Presenter (M-V-P) which allows the presenter to have knowledge of  the view.

Because the view model does not know about or rely on the view, it encourages the view model to be built as a Plain Old CLR object (POCO) class just like the model.  Many consider the view model an example of the adapter pattern as  it wraps the model to make it consumable by the view.

POCO view models are deceptively challenging

How hard could it be to write a POCO view model?  It is just a regular class right?

Writing a view model that isn’t polluted by view concerns is harder than it first appears.  This is due to how WPF/Silverlight data binding interacts with the view model.

WPF/Silverlight data binding was designed to bind to dependency properties, not to call methods, or respond to events. To handle calling methods, WPF/Silverlight use the ICommand interface.  By encapsulating a method call in an object, WPF can data bind to it. Event and data triggers allow the view to respond to changes in the view model.

So lots of view model authors choose to expose ICommand, RoutedCommand, or RoutedUICommand properties.  Several WPF/SL libraries provide a class that can relay or delegate the CanExecute and Execute implementation from the command to methods in the view model.  This way, view model authors write some extra ICommand properties, wire up the delegation class, and then write methods to implement the commands.
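For reference, the relay/delegate command those libraries provide looks something like this (a generic sketch, not part of this download; the CanExecuteChanged hookup shown is the WPF CommandManager flavor):

using System;
using System.Windows.Input;

// Typical relay command: the view model exposes an ICommand property whose
// Execute/CanExecute calls are forwarded to delegates pointing at view model methods.
public class RelayCommand : ICommand
{
    private readonly Action<object> execute;
    private readonly Predicate<object> canExecute;

    public RelayCommand(Action<object> execute, Predicate<object> canExecute)
    {
        if (execute == null)
        {
            throw new ArgumentNullException("execute");
        }

        this.execute = execute;
        this.canExecute = canExecute;
    }

    // Let the WPF CommandManager tell bound controls when to requery CanExecute.
    public event EventHandler CanExecuteChanged
    {
        add { CommandManager.RequerySuggested += value; }
        remove { CommandManager.RequerySuggested -= value; }
    }

    public bool CanExecute(object parameter)
    {
        return this.canExecute == null || this.canExecute(parameter);
    }

    public void Execute(object parameter)
    {
        this.execute(parameter);
    }
}

A view model then exposes an ICommand property that creates one of these, which is exactly the extra plumbing the behavior below avoids.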

This works, but it doesn’t get the full power WPF/SL offers with commanding.  Custom controls (like TextBox) register themselves for commands directly with the CommandManager.  This leverages the structure of the XAML tree to let the control intercept commands as they bubble up.  Controls like Button and MenuItem that support a Command property can bind to x:Static commands or use the built-in ApplicationCommands, ComponentCommands, and MediaCommands.  The standard commands can also be easily associated with input gestures (i.e., keyboard shortcuts) in XAML.

BindCommandBehavior

I’ve never liked exposing ICommand properties from my view models.  It feels like the first step to knowing too much about the view.  I also don’t like adding command delegation classes to my view model; it seemed like a lot of extra work.

I  decided that a behavior could do a better job of delegating commands while still getting the full power of WPF/SL commanding.  You can download it here.

The download is the Flyweight M-V-VM solution from my previous post with the addition of the Poco library and the HelloPocoWorld sample application.  Check out the readme.txt in the sample app.

How to use it

Attach the BindCommandBehavior to any control in your XAML.  The control’s data context should point at a class that implements the method you want to call when the command is executed.  Just like any other behavior, you can drag-and-drop the BindCommandBehavior onto a control when using Expression Blend.

Once you do that, just set the Command property just like you would on a Button control.

Here is a snippet from the HelloPocoWorld sample application.

    <Window.Resources>
        <local:Stopwatch x:Key="theStopwatch" />        
    </Window.Resources>
    <Grid DataContext="{Binding Source={StaticResource theStopwatch}}">
        <i:Interaction.Behaviors>
            <bellacode:BindCommandBehavior Command="{x:Static local:StopwatchCommands.Start}" />
            ...
  • The Stopwatch implements the Start method.
  • The Grid has the Stopwatch instance as its DataContext and has the BindCommandBehavior attached.
  • The Start command is a RoutedCommand static property defined in the StopwatchCommands static class (sketched below).
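The command class itself is tiny; roughly this (a sketch; see the download for the actual sample code):

using System.Windows.Input;

// Static command class referenced from XAML via {x:Static local:StopwatchCommands.Start}.
public static class StopwatchCommands
{
    private static readonly RoutedCommand start = new RoutedCommand("Start", typeof(StopwatchCommands));

    public static RoutedCommand Start
    {
        get { return start; }
    }
}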

That’s it!  By using the BindCommandBehavior, your view models don’t need to know anything about ICommand. You just write plain old methods in your view model.

How it works

Just like a button uses the Command property to determine the command to execute when the button is clicked, the BindCommandBehavior uses the Command property to determine the command to handle.  When the command is executed, the behavior delegates by calling a method on the data context.

You can also write a property or method that determines if the command can execute.  If no “CanExecute” property or method is found, the behavior allows the command to execute.  When your class fires an INotifyPropertyChanged.PropertyChanged event for any of your CanExecute properties, CommandManager.InvalidateRequerySuggested is called for you.

If the properties or methods in your view model are named differently than your command, you can set the ExecuteMethodName and either the CanExecutePropertyName or CanExecuteMethodName attributes on the behavior.  By default, the behavior follows a convention that uses the command name to look for the appropriate properties and methods.

The BindCommandBehavior will try to pass the CommandParameter to your method if possible.  Because the CommandParameter arrives as an object, the behavior will  determine the parameter type of your method and try to convert the object to that type.  If it can’t convert it, it just passes the object and hopes for the best.
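The shipped behavior is more involved than this (name overrides, CanExecute lookup, parameter conversion, and requery support), but the core delegation can be sketched roughly as follows; everything except the Command property name is illustrative:

using System.Reflection;
using System.Windows;
using System.Windows.Input;
using System.Windows.Interactivity;

// Rough sketch of the delegation idea, not the shipped BindCommandBehavior.
public class BindCommandBehaviorSketch : Behavior<FrameworkElement>
{
    // The real behavior exposes this as a dependency property.
    public ICommand Command { get; set; }

    protected override void OnAttached()
    {
        base.OnAttached();
        this.AssociatedObject.CommandBindings.Add(
            new CommandBinding(this.Command, this.OnExecuted, this.OnCanExecute));
    }

    private void OnExecuted(object sender, ExecutedRoutedEventArgs e)
    {
        object viewModel = this.AssociatedObject.DataContext;
        var routedCommand = this.Command as RoutedCommand;
        if (viewModel == null || routedCommand == null)
        {
            return;
        }

        // Convention: a command named "Start" maps to a Start method on the data context.
        MethodInfo method = viewModel.GetType().GetMethod(routedCommand.Name);
        if (method != null)
        {
            object[] arguments = method.GetParameters().Length == 1 ? new[] { e.Parameter } : null;
            method.Invoke(viewModel, arguments);
        }
    }

    private void OnCanExecute(object sender, CanExecuteRoutedEventArgs e)
    {
        // The real behavior looks for a "Can<CommandName>" property or method; default to allowed.
        e.CanExecute = true;
    }
}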

There are a couple of limitations on your view model imposed by the behavior (for now):

  • You can’t have overloads of the method.  The behavior isn’t sure which one to call.
  • Your execute and can execute methods can’t take more than one parameter.

Enjoy!

Flyweight M-V-VM

A better M-V-VM library

There are several existing libraries that support the Model-View-ViewModel (M-V-VM) pattern. They each have some nice features. However, I didn’t find any that were just M-V-VM. Some required me to fundamentally restructure my XAML. Others required me to use dependency injection or implement a view model locator pattern. A few required me to use a heavy base class full of complex helpers or depend on a whole set of additional infrastructure. While I appreciate the help writing Plain Old CLR Objects (POCO) view models, I kept wishing there was a library requiring less buy-in to all those “extras”.

So I started building a library that just does M-V-VM. I ended up with something that is easy to use, lightweight, and powerful.

I’ve revised the approach from using a Behavior to using a Decorator.  You can download the updated version and read about it at http://blogs.southworks.net/geoff/2011/10/19/mvvm-decorator/.

You can download it here. It contains the library project and a HelloWorld sample application. Both have complete ReadMe.txt files.

Tenets

M-V-VM only

The library has no dependencies outside WPF/.NET. All the classes are related strictly to M-V-VM.

Normal XAML development

The library does not require you to change how you build your XAML hierarchy.

No POCO glue

The library does not provide commanding or property changed helpers or patterns. Another library might, but not this one.

No additional patterns required

The library can support dependency injection and a view model locator, but nothing is required of the developer.

How to use it

These are the two classes you’ll use 99% of the time.

ViewModel<TModel>

This is a base class for your view model classes.

ViewModelBehavior

This is a behavior that you can drag and drop onto your controls (in Expression Blend). Just set the ViewModelType property to the type of your view model.

You can review the readme to learn about the other classes.

How it works

When the behavior attaches to the control, it does the following (a code sketch follows the list):
  1. It instantiates a view model using the ViewModelType property.
  2. It sets the view model’s Model property = DataContext
  3. It sets the view model’s View property = control
  4. It sets the control’s DataContext = view model
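In code, those four steps look roughly like this (a simplified sketch assuming a hypothetical IViewModel contract; the shipped behavior also honors the static ViewModelFactory and does more bookkeeping):

using System;
using System.Windows;
using System.Windows.Interactivity;

// Simplified sketch of the attach sequence, not the shipped ViewModelBehavior.
public class ViewModelBehaviorSketch : Behavior<FrameworkElement>
{
    public Type ViewModelType { get; set; }

    protected override void OnAttached()
    {
        base.OnAttached();

        var viewModel = (IViewModel)Activator.CreateInstance(this.ViewModelType); // 1. instantiate
        viewModel.Model = this.AssociatedObject.DataContext;                      // 2. Model = DataContext
        viewModel.View = this.AssociatedObject;                                   // 3. View = control
        this.AssociatedObject.DataContext = viewModel;                            // 4. DataContext = view model
    }
}

// Hypothetical contract standing in for the library's ViewModel<TModel> base class.
public interface IViewModel
{
    object Model { get; set; }
    object View { get; set; }
}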

Example 1: Basic ViewModel

This example shows the most basic creation of a view model.

In this example and the following examples, a Customer is the model. The Customer class has basic properties like FirstName and LastName. The view model provides additional properties like FullName.
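For context, the classes behind these examples would look something like this (a sketch based on the property names used in the examples; it assumes ViewModel<TModel> exposes the wrapped instance through its Model property):

public class Customer
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

// View model wrapping the Customer; the library's ViewModel<TModel> base class is
// assumed here to expose the model instance through a Model property.
public class CustomerViewModel : ViewModel<Customer>
{
    public string FullName
    {
        get { return this.Model.FirstName + " " + this.Model.LastName; }
    }
}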

<Grid DataContext="{Binding Customer}">
  <i:Interaction.Behaviors>
    <bellacode:ViewModelBehavior ViewModelType="{x:Type local:CustomerViewModel}"/>
  </i:Interaction.Behaviors>
  <TextBlock Text="{Binding FullName}" />
</Grid>

Yes, the example is contrived, as you could easily do this with a ValueConverter. Some consider that the M-V-VM pattern eliminates the need for ValueConverters and ValidationRules. Others find those classes continue to be helpful and better encapsulated for re-use.

Example 2: Bind to ViewModel and Model

This example shows that you can also bind directly against the model.

It is up to you if your view model follows a strict Facade pattern when wrapping your model. There are some nice benefits to the Facade pattern because then your data-binding statements don’t have any knowledge that there is a view model containing a model. However, if your model already implements INotifyPropertyChanged, it can often be a massive time saver to bind directly to the model.

<Grid DataContext="{Binding Customer}"> <i:Interaction.Behaviors> <bellacode:ViewModelBehavior ViewModelType="{x:Type local:CustomerAccountViewModel}"/> </i:Interaction.Behaviors> <StackPanel> <TextBlock Text="{Binding Model.CreditCard.AccountHolderName}" /> <TextBlock Text="{Binding AccountStatus}" /> </StackPanel> </Grid>

Example 3: Hierarchy of ViewModels

This example shows how to create view models at different levels for a hierarchy of views.

Any part of the view model – the model, part of the model, or a property – can be passed as the data context for a nested control. The behavior can then use that data context as a model for the nested control’s view model. View models naturally follow the same hierarchy as your views. The behavior is easy to use because it leverages the natural XAML hierarchy.

<Grid>
  <i:Interaction.Behaviors>
    <bellacode:ViewModelBehavior ViewModelType="{x:Type local:CustomerViewModel}"/>
  </i:Interaction.Behaviors>
  <Grid.ColumnDefinitions>
    <ColumnDefinition Width="Auto" />
    <ColumnDefinition />
  </Grid.ColumnDefinitions>
  <Grid.RowDefinitions>
    <RowDefinition Height="Auto" />
    <RowDefinition Height="Auto" />
    <RowDefinition Height="Auto" />
    <RowDefinition Height="Auto" />
  </Grid.RowDefinitions>
  <Label Content="Name" />
  <TextBlock Text="{Binding FullNameWithSalutation}" Grid.Column="1" />
  <Label Content="Shipping Address" Grid.Row="1"/>
  <Grid DataContext="{Binding Model.ShippingAddress}" Grid.Row="1" Grid.Column="1">
    <i:Interaction.Behaviors>
      <bellacode:ViewModelBehavior ViewModelType="{x:Type local:AddressViewModel}"/>
    </i:Interaction.Behaviors>
    <TextBlock Text="{Binding FullAddress}" />
  </Grid>
  <Label Content="Billing Address" Grid.Row="2"/>
  <Grid DataContext="{Binding Model.BillingAddress}" Grid.Row="2" Grid.Column="1">
    <i:Interaction.Behaviors>
      <bellacode:ViewModelBehavior ViewModelType="{x:Type local:AddressViewModel}"/>
    </i:Interaction.Behaviors>
    <TextBlock Text="{Binding FullAddress}" />
  </Grid>
  <Label Content="Account Status" Grid.Row="3"/>
  <Grid DataContext="{Binding Model}" Grid.Row="2" Grid.Column="3">
    <i:Interaction.Behaviors>
      <bellacode:ViewModelBehavior ViewModelType="{x:Type local:CustomerAccountViewModel}"/>
    </i:Interaction.Behaviors>
    <TextBlock Text="{Binding FullAddress}" />
  </Grid>
</Grid>

Example 4: Lists and Data Templates

This example shows how a data template can easily create a view model for each item when used in a list control.

Your view models should not generally expose other view models as properties. This would mean that one view model is responsible for creating instances of other view models. You would lose the loose coupling the view model behavior provides and the flexible type-agnostic nature of data binding.

<Grid>
  <ListBox ItemsSource="{Binding Customers}">
    <ListBox.ItemTemplate>
      <DataTemplate>
        <Grid>
          <i:Interaction.Behaviors>
            <bellacode:ViewModelBehavior ViewModelType="{x:Type local:CustomerViewModel}"/>
          </i:Interaction.Behaviors>
          <Grid.ColumnDefinitions>
            <ColumnDefinition />
            <ColumnDefinition />
          </Grid.ColumnDefinitions>
          <TextBlock Text="{Binding Customer.Name}" />
          <TextBlock Text="{Binding MostRecentPurchaseDate}" Grid.Column="1" />
        </Grid>
      </DataTemplate>
    </ListBox.ItemTemplate>
  </ListBox>
</Grid>

Up Next: POCO

Plain Old CLR Objects (POCO) view models feel much less UI-bound than those that expose RoutedCommand, ICommand, RoutedEvents, or ICollectionView. I’m working on a behavior that makes command binding for a control just as easy as binding to the view model. It will be a separate library so you can choose to use it if it works for you.

Modularity with Prism (Boise 2011)

As part of the talk I gave on Modularity with Prism today, I created a very small modular application.

The Simplest Modular Application

This WPF application allows you to type different commands into a text box and see the result in a text box below.  The app has just one module – the MathModule – which provides add (+), subtract (-), multiply (*), and divide (/) in RPN-style notation.

The application went through several stages:

  1. A non-modular application that just defines the ICommand interface and command processing classes.
  2. A non-modular application that uses the most basic Prism bootstrapper possible.
  3. A modular application that uses the standard ModuleCatalog and adds the MathModule by direct assembly type reference (sketched below).
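For stage 3, the direct type registration is a one-liner in the bootstrapper. A sketch assuming the standard Prism 4 UnityBootstrapper (the Shell and class names are illustrative):

using System.Windows;
using Microsoft.Practices.Prism.Modularity;
using Microsoft.Practices.Prism.UnityExtensions;

public class CmdletBootstrapper : UnityBootstrapper
{
    protected override DependencyObject CreateShell()
    {
        return new Shell(); // the application's main window
    }

    protected override void InitializeShell()
    {
        base.InitializeShell();
        Application.Current.MainWindow = (Window)this.Shell;
        Application.Current.MainWindow.Show();
    }

    protected override void ConfigureModuleCatalog()
    {
        base.ConfigureModuleCatalog();

        // Stage 3: register the MathModule by direct assembly type reference.
        var moduleCatalog = (ModuleCatalog)this.ModuleCatalog;
        moduleCatalog.AddModule(typeof(MathModule));
    }
}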

I then walked through both Modularity QuickStarts (Desktop and Silverlight) to show the UnityBootstrapper.Run() method diving down into the ModuleManager, ModuleCatalog, ModuleInitializer, and ModuleTypeLoaders (configuration, directory, and XAP).

You can download the Cmdlet program here.

During the demo, I had put project references to Prism.Desktop and Prism.Desktop.UnityExtensions.  However, to make this download more consumable, I reverted to assembly references.

If you unzip so that the Examples folder is a sibling to Prism, you shouldn’t need to update any assembly references.

Takeaways

  • There isn’t any magic to Prism’s Modularity. While some aspects of loading assemblies into the application domain during directory sweep or XAP download are sophisticated, you can debug through the entire module discovery, loading, and initialization process.
  • Prism provides rich module management – don’t roll your own. At first glance, it seems easy enough to discover and load assemblies.  However, the ModuleManager provides a state machine for module loading, cyclic-dependency and duplicate detection, and events for both synchronous and asynchronous loading.
  • Modules are logical divisions.  You don’t have to partition your application only on assembly boundaries; you can register a group of assemblies in a single module.  Partition your application into modules based on the expected logical usage of the application.

A couple of things I forgot to mention

  • For Silverlight developers, you can leverage the asynchronous download (with progress) of modules in XAPs to keep your start-up time very short and load features on demand as they are needed.  This can make a big difference for applications where some expensive features are rarely used.
  • You can organize ModuleInfo entries in configuration/XAML catalogs into groups and then specify dependencies between the groups.  This can make configuring dependencies simpler.

Unit Testing and MOQ (Boise 2011)

I got the chance to talk about Unit Testing and MOQ at the WPF/Prism conference in Boise, Idaho. This post provides a list of the topics I covered along with the sample source code.

Read More

WPF Behavior: Global Application Shortcut Keys

RegisterAppShortKeysBehavior is a standard System.Windows.Interactivity.Behavior that can be attached to any control or control template. The behavior encapsulates all functionality required to support global shortcut keys.

Read More

x264 encoding with IIS Transform Manager

I recently got the time to play with the IIS Transform Manager and the X264 encoder.  I’ve created a Transform Manager task that allows for using X264 to encode videos by dropping files into a watch folder.  I based my work off of Ezequiel’s excellent blog post about integrating the RCE with Transform Manager.

While I’ve used encoding tools before, this was my first attempt at integrating with Transform Manager and my first time working with X264.

I found working with IIS Transform Manager much easier than I expected. Adding a new task only requires authoring a single class and a well-defined XML file.  Deployment is just dropping the DLL and XML in folders, then a few clicks to configure.

The source code is included here.

About X264

X264 is a lightning fast, open source media encoder for creating H.264/MPEG-4 AVC format videos.  It has both a command line executable and an FFMPEG implementation.

I chose to work with the command-line executable because I could run it stand-alone to verify it worked outside of Transform Manager.  I may go back and build another task around the FFMPEG implementation later.

I found the following resources very helpful in understanding X264:

Preparation

These are the steps I went through to prepare my development machine.  I already had Visual Studio 2010 and IIS setup on a Windows 7 client machine.

Note: As I download files to the ‘My Downloads’ folder, I always right-click the files to display properties and click the ‘Unblock’ button.  Otherwise, some things won’t extract properly.

  1. I installed IIS Media Services 4.0 Beta 1 from Microsoft using the Web Platform Installer 2.0
  2. I installed IIS Transform Manager 1.0 Alpha 1 from Microsoft using the Web Platform Installer 2.0
  3. I downloaded the latest X264 from http://x264.nl/
  4. I downloaded a sample Y4M video file from: http://wiki.multimedia.cx/index.php?title=YUV4MPEG2.
  5. I used WinRAR to extract the example.y4m.bz2 file.
  6. I configured the IIS Transform Host service to log on using my credentials (which have administrator access) and started the service.

This seems like a lot of steps, but it only took me about 30 minutes.  The IIS extensions and web platform installer add features to IIS without any annoying reboot.

Getting Started

The first thing I did was run X264.exe manually.  I gave it the example.y4m file and asked it to produce example.mp4.

     x264.exe -o example.mp4 example.y4m

This produced the video pretty instantaneously because the video is short.  I then ran the video in Windows Media Player (WMP) to ensure it looked right.

Next I created a C# class library project in Visual Studio 2010.  I created a libs folder and copied in the required DLLs from the following location.  Note: I’m running a 64-bit machine.

C:\Program Files (x86)\IIS\Transform Manager

  • Microsoft.Web.Media.TransformManager.Common.dll
  • Microsoft.Web.Media.TransformManager.Core.dll
  • Microsoft.Web.Media.TransformManager.SDK.dll

I created 3 classes and 1 XML file:

  • X264.cs – Launches the X264.exe
  • X264TransformTask.cs – the ITransformTask implementation
  • X264Namespaces.cs – the set of XNamespaces to integrate with the task definition.
  • TransformX264TaskDefinition.xml – the XML file that defines the transform task.

I realized I would be copying the DLL and XML file and stopping and restarting the service repeatedly during development, so I wrote a small batch file.  Your paths will vary:

copy /Y "C:Southworksx264Transformx264Transformx264binDebugTransformX264.dll" "C:Program Files (x86)IISTransform ManagerTransformX264.dll" copy /Y "C:Southworksx264Transformx264Transformx264TransformX264TaskDefinition.xml" "C:ProgramDataMicrosoftIISTransform ManagerConfigurationTask DefinitionsTransformX264TaskDefinition.xml" net stop IISTMHost net start IISTMHost pause

For each of the following coding steps, I wrote the code, built the project, ran the batch file, copied a video into the drop folder, and inspected the task activity log in the IIS Management Console / Transform Manager.

Coding Part I – Running the task

I implemented the ITransformTask interface with Initialize() and Start().  I left Start() implemented as setting progress to 100.  I built and ran my batch file.  Then I configured the Transform Manager Task and created a Watch folder.

 The following screen shots show additional properties that were implemented later.

Create and edit the job template

Add and edit the task template

Add and edit the watch folder.

Coding Part II –  Calling X264.exe

I implemented the X264 class Run() method to use Process to launch the x264.exe.

I hard-coded the path to the x264.exe, the input file, and the output file to be the same as when I ran X264.exe from the command-line.

I then called X264.Run() from X264TransformTask.Start with a try-catch around it.
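A minimal sketch of what that first pass at Run() looked like, with everything hard-coded (the paths here are placeholders; later parts make them configurable):

using System.Diagnostics;

public class X264
{
    // First pass: launch x264.exe synchronously with hard-coded arguments.
    public void Run()
    {
        var startInfo = new ProcessStartInfo
        {
            FileName = @"C:\Tools\x264\x264.exe",        // placeholder path to the executable
            Arguments = "-o example.mp4 example.y4m",    // same arguments as the manual test
            UseShellExecute = false,
            CreateNoWindow = true
        };

        using (Process process = Process.Start(startInfo))
        {
            process.WaitForExit();
        }
    }
}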

Coding Part III – Parsing basic metadata

I needed to make the x264.exe path and output format configurable and to use the real path to the input file.

I added x264ExePath and outputFileExtension properties to the task definition file. I also wrote the following small helper method to get a property value.

private string GetMetadataProperty(string propertyName)
{
    IManifestProperty property = this.transformMetadata.GetProperty(X264Namespaces.X264 + propertyName);
    if (property != null)
    {
        return property.Value;
    }

    return null;
}

I then leveraged Ezequiel’s code to get the input file name(s):

IEnumerable<XElement> elements = this.transformMetadata.Manifest.Descendants("ref");
foreach (XElement element in elements)
{
    string fileName = (string)element.Attribute("src");
    this.transformLogger.WriteLine(LogLevel.Information, "Transforming file: " + fileName);
    this.Transform(fileName);
}

While the task definition schema is nice, I wanted to see the real manifest so I would understand what I was navigating.  I used the following code temporarily to write out the manifest:

using (StreamWriter writer = File.CreateText(@"C:\Southworks\x264\Manifest.xml"))
{
    using (XmlWriter xmlWriter = XmlWriter.Create(writer))
    {
        this.transformMetadata.Manifest.WriteTo(xmlWriter);     
    }     

    writer.Flush();
}

Now I had something that would do a basic transformation using X264 with arguments loaded from the Transform Manager task.  I also went ahead and logged the actual command line used to call X264 to the Transform Manager activity log.

Coding Part IV – Supporting X264 options (Take 1)

X264.exe has a lot of command line options and flags that I wanted to be able to handle.  Type x264 --fullhelp to see them all. To start I only needed a few: keyint, min-keyint, scenecut, and open-gop.

I didn’t want to have to write a lot of code each time I added support for a new option.   Given that the options are specified in the task definition XML file and would be passed to a command-line executable, I decided against having the task know too much about the options or try and enforce certain types/values for parameters.

As an example, I was tempted to convert the ‘--keyint’ option to an integer and throw an exception if it didn’t convert.  However, X264.exe will take “infinite” as a valid number.

I noticed X264.exe has two types of options: name-value pair and named flags.

I updated X264 with a dictionary of name-value pairs and the argument builder to pass them along to x264.exe.  I supported flags by outputting only the name if the value is string.Empty.

string optionValue;
foreach (string optionName in this.options.Keys)
{
    optionValue = options[optionName];
    arguments.Append("--");
    arguments.Append(optionName);
    arguments.Append(" ");

    if (!string.IsNullOrEmpty(optionValue))
    {
        arguments.Append(optionValue);
        arguments.Append(" ");
    }
}

I then wrote two small methods on top of my GetMetadataProperty.

Note: You won’t see these methods in my code because I found a more extensible way to handle options that I describe in the next section.

private void ProcessMetadataX264Option(string propertyName)
{
    string propertyValue = this.GetMetadataProperty(propertyName);
    if (!string.IsNullOrEmpty(propertyValue))
    {
        this.x264.Options[propertyName] = propertyValue;
    }
}

private void ProcessMetadataX264Flag(string propertyName)
{
    string propertyValue = this.GetMetadataProperty(propertyName);
    if (!string.IsNullOrEmpty(propertyValue) &&
       (string.Compare(propertyValue, "false", StringComparison.OrdinalIgnoreCase) != 0))
    {
        this.x264.Options[propertyName] = string.Empty;
    }
}

Finally, I separated initializing X264 into a separate method, added string array to list the option and flag names, and some foreach loops to process each one.

private static string[] x264Options = new string[] { "keyint", "min-keyint", "open-gop", "pulldown", "scenecut" };
private static string[] x264Flags = new string[] { "tff", "bff" };
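The foreach loops mentioned above are straightforward (the method name here is illustrative):

private void InitializeX264Options()
{
    // Pull any configured values for the known options and flags out of the task metadata.
    foreach (string optionName in x264Options)
    {
        this.ProcessMetadataX264Option(optionName);
    }

    foreach (string flagName in x264Flags)
    {
        this.ProcessMetadataX264Flag(flagName);
    }
}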

Now to add support for an X264 option, all that would be required is to add it to the task definition XML and update the array of option and/or flag names.

Coding Part V – Supporting X264 options (Take 2)

Although recompilation isn’t the worst thing, I didn’t like that I had to keep the array of option names and the task definition file synchronized.

Looking at the task definition and the manifest, I saw that all the properties are prefixed with my namespace prefix ‘MSX264’.  I also noticed that ITransformMetadata.TaskMetadata selects the task properties.  I updated the code to process all the task metadata.

x264ExePath and outputFileExtension are special properties, so I handle those directly. To distinguish name-value pairs from flags, I prefix flags with ‘flag-’.  Note: XName/XElement restrict the use of most punctuation like parentheses, exclamation points, or dollar signs.

private void ProcessTaskMetadata()
{            
    IEnumerable<XElement> elements = this.transformMetadata.TaskMetadata.Descendants();     
    foreach (XElement element in elements)
    {
        if (element.Name.Namespace == X264Namespaces.X264)
        {
            if (element.Name.LocalName == "x264ExePath")
            {
                if (!string.IsNullOrEmpty(element.Value))                 
                {                     
                    x264.ExePath = element.Value;                 
                }       
            }
            else if (element.Name.LocalName == "outputFileExtension")
            {
                if (!string.IsNullOrEmpty(element.Value))                 
                {                     
                    outputExtension = element.Value;                 
                }
            }             
            else if (element.Name.LocalName.StartsWith("flag-"StringComparison.OrdinalIgnoreCase))             
            {                 
                string flagName = element.Name.LocalName.Substring(5);                 
                if (string.Compare(element.Value, "true"StringComparison.OrdinalIgnoreCase) == 0)                 
                {                     
                    this.x264.Options[flagName] = string.Empty;
                }
            }
            else
            {
                if (!string.IsNullOrEmpty(element.Value))                 
                {                     
                    this.x264.Options[element.Name.LocalName] = element.Value;                 
                }             
            }
         }
     }
 }

Now to support a new X264 option, only the task definition file needs to be updated, re-dropped, and the service restarted.

Summary

In a little over 2 days I was able to integrate the X264 encoder with IIS Transform Manager.

This task supports:

  • Launching the X264 command line executable.
  • Configuring the location of the X264 executable and the output format in the task definition.
  • Adding properties to the task definition and passing those options along to X264.
  • Distinguishing between X264 name-value and flag options in the task definition.
  • Logging completion/failure back to the Transform Manager.

I’m pretty happy with this for a first working version, although I have some improvements that could be made to this code:

  • Replacing the magic strings and numbers with constants.
  • Redirecting stdout on the X264 exe to the activity log.
  • Catching and reporting the X264 exit code.
  • Generally better fault tolerance.

Transform Manager lets you easily integrate and chain tasks together, so the output of this task could be chained to MP4-to-Smooth-Streaming conversion and deployment out to other locations.  Transform Manager also provides a stable and efficient workflow for processing files that are dropped into watch folders.  The task definition is a nice blend of setting properties and defining a task schema.  Overall, I think Transform Manager puts video processing with IIS within reach of any developer.

Enjoy!

An improvement to INotifyPropertyChanged

en español

A while back, I needed to have one object calculate a total across a collection of items for a value that changed regularly.  Using a just-in-time calculation wasn’t efficient due to a large number of items and some complexity in the total calculation.

I first tried implementing and using INotifyPropertyChanging.  This caused some small performance problems because now I was firing 2x the number of events.  It also made subscribing to the events more complex, as the subscriber had to remember the value between the changing event handler and the changed event handler. Otherwise, I ended up with an intermediate incorrect state during the transition.
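To illustrate the bookkeeping, a subscriber pairing the two events ends up looking something like this (names are hypothetical):

private double rememberedValue;

private void Item_PropertyChanging(object sender, PropertyChangingEventArgs e)
{
    if (e.PropertyName == "MyValue")
    {
        // Stash the value before it changes...
        this.rememberedValue = ((MyItem)sender).MyValue;
    }
}

private void Item_PropertyChanged(object sender, PropertyChangedEventArgs e)
{
    if (e.PropertyName == "MyValue")
    {
        // ...and only apply the delta once the change has completed.
        this.MyTotal = this.MyTotal - this.rememberedValue + ((MyItem)sender).MyValue;
    }
}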

Download the Source Code.

PropertyChangedEventArgs<T>

I needed a PropertyChangedEventArgs that passed both the old and new value.  I wanted it to be type-safe as well. My solution was to write PropertyChangedEventArgs<T>.

public class PropertyChangedEventArgs<T> : PropertyChangedEventArgs
{
  public PropertyChangedEventArgs(string propertyName, T oldValue, T newValue)
    : base(propertyName)
  {
    this.OldValue = oldValue;
    this.NewValue = newValue;
  }
  public T NewValue { get; private set; }
  public T OldValue { get; private set; }
}

Raising the event

Raising the event just requires remembering the old and new values.  Not every property has to raise the new event arguments type since it would likely be inefficient for large objects or long strings.

public double MyValue
{
  get
  {
    return this.myValue;
  }
  set
  {
    if (this.myValue != value)
    {
      double oldValue = this.myValue;
      this.myValue = value;
      this.RaisePropertyChanged<double>("MyValue", oldValue, this.myValue);
    }
  }
}
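The RaisePropertyChanged<T> helper used above isn’t shown in the snippet; in the class that declares the INotifyPropertyChanged.PropertyChanged event it can be as simple as this (sketch):

protected void RaisePropertyChanged<T>(string propertyName, T oldValue, T newValue)
{
    PropertyChangedEventHandler handler = this.PropertyChanged;
    if (handler != null)
    {
        handler(this, new PropertyChangedEventArgs<T>(propertyName, oldValue, newValue));
    }
}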

Subscribing to the event

Existing subscribers do not have to be changed, and those that need PropertyChangedEventArgs<T> just cast to the derived arguments type:

private void MyClass_PropertyChanged(object sender, PropertyChangedEventArgs e)
{
  switch (e.PropertyName)
  {
    case "MyValue":
    {
      PropertyChangedEventArgs<double> e2 = (PropertyChangedEventArgs<double>)e;
      if (e2.OldValue != e2.NewValue)
      {
        this.MyTotal = this.MyTotal - e2.OldValue + e2.NewValue;
      }
    }
    break;
    //...
  }
}

Generic IPropertyAccessor (and implementation trick)

en español

Last week I needed a way to get at the property values in a class (instance) by name.  The class implementing the properties and the class accessing the properties needed to be independent. 

I also didn’t want to implement a class per property (way too much typing), and I didn’t want to have to instantiate anything per class instance that I was accessing.

I put my final solution into a small sample. Download the Source Code.

First attempt

I started with a weakly-typed approach that used object:

public interface IPropertyAccessor
{
  object GetPropertyValue(object @object, string propertyName);
  void SetPropertyValue(object @object, string propertyName, object value);
}

This worked well enough and was simple to implement using a switch statement on propertyName. I wasn’t very happy with this approach because it isn’t type-safe.  My caller knew the properties to access and their types.
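For a MyObject class with a Name property, that weakly-typed implementation is just a switch in each method (a sketch; the accessor class name is illustrative):

public class MyObjectAccessor : IPropertyAccessor
{
  public object GetPropertyValue(object @object, string propertyName)
  {
    var myObject = (MyObject)@object;
    switch (propertyName)
    {
      case "Name":
        return myObject.Name;
      default:
        throw new ArgumentException("Unrecognized property name.", "propertyName");
    }
  }

  public void SetPropertyValue(object @object, string propertyName, object value)
  {
    var myObject = (MyObject)@object;
    switch (propertyName)
    {
      case "Name":
        myObject.Name = (string)value;
        break;
      default:
        throw new ArgumentException("Unrecognized property name.", "propertyName");
    }
  }
}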

A type-safe attempt

I modified the interface to be generic:

public interface IPropertyAccessor<T>
{
  TProperty GetPropertyValue<TProperty>(T @object, string propertyName);
  void SetPropertyValue<TProperty>(T @object, string propertyName, TProperty value);
}

This appeared to be a great solution – until I tried to implement GetPropertyValue and SetPropertyValue and got a compiler error: “Cannot implicitly convert type ‘string’ to ‘TProperty’”.

public TProperty GetPropertyValue<TProperty>(MyObject @object, string propertyName)
{
  switch (propertyName)
  {
    case "Name":
      return @object.Name;  // ERROR here! (and casting won't help)
    default:
      throw new ArgumentException("Unrecognized property name.", "propertyName");                   
  }
}

Solution – Trick the compiler

I was stuck until Michael Puleio showed me some code that used the following to trick the compiler into not trying to statically type check.  This extension method to object makes the static type checker give up through the use of a generic method that does the casting.

public static T CastAs<T>(this object value)
{
  return (T)value;
}
public TProperty GetPropertyValue<TProperty>(MyObject @object, string propertyName)
{
  switch (propertyName)
  {
    case "Name":
      return @object.Name.CastAs<TProperty>();  // This works!
    default:
      throw new ArgumentException("Unrecognized property name.", "propertyName");                   
  }
}

 I’ve seen some examples using reflection, but I wanted a compiled approach for the case I had.