
Enabling SSL Client Certificates in ASP.NET Web API

You can get an excellent description of what client certificates are and how they work in this article – if you want to really understand this post, take a minute to read it. In a nutshell, client certificates allow a web app to authenticate users by having the client provide a certificate before the HTTP connection is established (during the SSL handshake). It’s an alternative to providing a username/password. As the article explicitly mentions, client certificates have nothing to do with HTTPS certificates – you can have HTTPS communication without client certificates. However, you cannot have client certificates work without enabling HTTPS on your site.

ASP.NET Web API can take advantage of client certificates to authenticate clients as well. In this post, I’ll walk you through the steps of configuring client certificates in IIS and testing them with a Web API. Please notice that these steps should only be executed in a development environment; a production environment might require a more rigorous approach.

1. Let’s first create the necessary certificates (as explained here). To do this, open the Visual Studio Developer Command Prompt and run the following command, typing the certificate password when prompted. This first command will create a ‘Root Certification Authority’ certificate. If you want more details, you can read about these commands in this MSDN article.

[code language="powershell"]
makecert.exe -n "CN=Development CA" -r -sv DevelopmentCA.pvk DevelopmentCA.cer
[/code]

2. Install the DevelopmentCA.cer certificate in your Trusted Root Certification Authorities for the Local Machine store using MMC (right-click over the Trusted Root Certification Authorities folder | All Tasks | Import).


Note: For production scenarios, this certificate will obviously not be valid. You will need to get an SSL certificate from a Certificate Authority (CA). Learn more about this here.

3. Now let’s create an SSL certificate in .pfx format, signed by the CA created above, using the following two commands. The first command creates the certificate and the second converts the .pvk file containing the private key to .pfx. This certificate will be used as the SSL certificate.

[code language="powershell"]
makecert -pe -n "CN=localhost" -a sha1 -sky exchange -eku 1.3.6.1.5.5.7.3.1 -ic DevelopmentCA.cer -iv DevelopmentCA.pvk -sv SSLCert.pvk SSLCert.cer

pvk2pfx -pvk SSLCert.pvk -spc SSLCert.cer -pfx SSLCert.pfx -po 123456
[/code]

4. Install the SSLCert.pfx certificate in the Personal store of Local Computer using MMC. Notice that the certificate shows it was issued by the Development CA.


5. Run this third command to create a private-key certificate signed by the CA certificate created above. The certificate will be automatically installed into the Personal store of the Current User, as shown in the figure below. Notice also that the Intended Purpose shows Client Authentication.

[code language="powershell"]
makecert.exe -pe -ss My -sr CurrentUser -a sha1 -sky exchange -n "CN=ClientCertificatesTest" -eku 1.3.6.1.5.5.7.3.2 -sk SignedByCA -ic DevelopmentCA.cer -iv DevelopmentCA.pvk
[/code]


6. Now let’s get into the code. A good place in ASP.NET Web API 2 to validate the client certificate is an ActionFilterAttribute, by calling GetClientCertificate on the request message (see some examples here). An action filter is an attribute that you can apply to a controller action — or an entire controller — that modifies the way in which the action is executed.

[code language="csharp" highlight="5"]
public override void OnActionExecuting(HttpActionContext actionContext)
{
    var request = actionContext.Request;

    if (!this.AuthorizeRequest(request.GetClientCertificate()))
    {
        throw new HttpResponseException(HttpStatusCode.Forbidden);
    }
}
[/code]
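The AuthorizeRequest method is left to the application. As a minimal development-only sketch (not the post’s original implementation – the issuer check assumes the "CN=Development CA" certificate from step 1), it could look like this:

```csharp
// Development-only sketch: accepts a certificate only if it is present,
// currently valid, and issued by the Development CA created in step 1.
// Requires System.Security.Cryptography.X509Certificates.
private bool AuthorizeRequest(X509Certificate2 certificate)
{
    if (certificate == null)
    {
        return false;
    }

    // A production implementation should validate the full chain (X509Chain)
    // and check revocation instead of just comparing the issuer name.
    return certificate.Issuer == "CN=Development CA"
        && DateTime.Now >= certificate.NotBefore
        && DateTime.Now <= certificate.NotAfter;
}
```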

7. Use your local IIS to host your Web API code. Under the site configuration, open the site bindings and configure HTTPS using the SSL certificate created above, selecting the ‘localhost’ certificate created in step 3.


8. Open the SSL Settings under your web site in IIS and select Accept. The options available are:

  • Ignore: Will not request any certificate (default)
  • Accept: IIS will accept a certificate from the client, but does not require one
  • Require: Require a client certificate – to enable this option, you must also select “Require SSL”


Changing this value will add the following section to ApplicationHost.config (by default located under C:\Windows\System32\inetsrv\config). The SslNegotiateCert value corresponds to the Accept option we selected before.

[code language="xml" highlight="6"]
<location path="subscriptions">
  <system.webServer>
    <security>
      <access sslFlags="SslNegotiateCert" />
    </security>
  </system.webServer>
</location>
[/code]

Note: If you want to enable this from Web.config instead of using ApplicationHost.config, notice that the <access> element is not allowed to be overridden from Web.config by default. To enable overriding the value from Web.config, change the overrideModeDefault value of the <access> section like this: <section name="access" overrideModeDefault="Allow" />. Please notice this is not recommended for production servers, as it changes the behavior for the entire IIS server.
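For reference, a sketch of the two pieces involved follows (the section nesting matches IIS defaults; values are illustrative):

```xml
<!-- In ApplicationHost.config: allow the <access> section to be
     overridden from Web.config (development only). -->
<section name="access" overrideModeDefault="Allow" />

<!-- Then, in the site's Web.config: negotiate (accept) client certificates. -->
<configuration>
  <system.webServer>
    <security>
      <access sslFlags="SslNegotiateCert" />
    </security>
  </system.webServer>
</configuration>
```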

9. Now when browsing to the site using HTTPS in a browser like Internet Explorer you should get prompted for a client certificate. Select the ClientCertificatesTest client certificate you’ve created. As we’ve only selected ‘Accept’ in IIS SSL Settings, if you click Cancel, you should be able to browse to the site all the same, even if you didn’t provide a client certificate.

Also, notice that you are not shown an untrusted certificate warning, because you’ve installed the Development CA cert as a Trusted Root Certification Authority.


Finally, if you want to know how to perform a request programmatically using client certificates, you can check this Gist.
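In the meantime, here is a minimal sketch of such a request using HttpClient with a WebRequestHandler (.NET 4.5); the URL and the certificate lookup are illustrative, not taken from the Gist:

```csharp
// Sketch: sends a request presenting the ClientCertificatesTest certificate
// created in step 5. Requires System.Net.Http, System.Net.Http.WebRequest,
// and System.Security.Cryptography.X509Certificates.
var store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
store.Open(OpenFlags.ReadOnly);
var certificate = store.Certificates
    .Find(X509FindType.FindBySubjectName, "ClientCertificatesTest", validOnly: false)[0];
store.Close();

var handler = new WebRequestHandler();
handler.ClientCertificates.Add(certificate);

using (var client = new HttpClient(handler))
{
    // The endpoint is hypothetical; replace it with your own API.
    var response = client.GetAsync("https://localhost/api/values").Result;
    Console.WriteLine(response.StatusCode);
}
```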

Note: I’m actually not an expert in security; this post is mostly the result of a couple of battles, some of them won, some of them lost – so feel free to provide feedback!

ASP.NET Dynamic Data in Spanish

I was developing an ASP.NET application with the help of Dynamic Data, with the requirement that its content had to be displayed entirely in Spanish.

The scaffolding functionality of ASP.NET Dynamic Data lets you generate many CRUD pages very quickly. Since Dynamic Data is highly customizable, you can change the text of many of the pages, which is created in English by default, by editing the contents of the Dynamic Data folder.

However, I ran into the problem that, since I was using a development environment entirely in English (Windows 7 + Visual Studio 2008 SP1 in English), the error messages and links automatically generated by Dynamic Data appeared in English.

To make all of Dynamic Data’s automatic messages appear in Spanish, do the following:

  1. Download and install the Spanish language pack for Windows 7:
  2. Add a globalization section to the Web.config, setting the language to Spanish (in this case, Spanish – Argentina 🙂 )
Now the Dynamic Data messages should be displayed in Spanish:
Keep in mind that when you configure the whole solution’s language to Spanish, the ASP.NET error messages will also appear in Spanish, which may sometimes be undesirable:
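As a sketch, that globalization section (assuming Spanish – Argentina, i.e. the es-AR culture) would look like the following:

```xml
<system.web>
  <!-- Sets the UI culture and culture for the whole application.
       "es-AR" is Spanish - Argentina; use "es-ES", etc., as appropriate. -->
  <globalization uiCulture="es-AR" culture="es-AR" />
</system.web>
```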


Working together with AntiForgeryToken and OutputCache on ASP.NET MVC

If you develop using ASP.NET MVC you’ll probably know the AntiForgeryToken helper method, used as a protection mechanism for cross-site request forgery.

This mechanism consists of two pieces: a cookie (1) and a hidden field included on the form to be submitted (2).


When submitting to a Controller’s Action annotated with the [ValidateAntiForgeryToken] attribute, the MVC framework will compare the values from the cookie and the hidden field (which may only partially match) and decide whether the request is valid or not.

The problem with this two-step mechanism is that it doesn’t get along very well with caching (at least not with caching’s default behavior). Imagine that the action that renders the page with the AntiForgeryToken helper method invocation is marked with the [OutputCache] attribute: unless you take cookie generation into account, you may be returning a cached version of the page (with the hidden field value set) even though the cookie has not been sent to the requesting user.

In order to avoid this issue you can use the VaryByCustom property on the OutputCache attribute:

[code language="csharp"]
[OutputCache(
  Location = OutputCacheLocation.ServerAndClient,
  Duration = 600,
  VaryByParam = "none",
  VaryByCustom = "RequestVerificationTokenCookie")]
public ActionResult Index()
{
  return View();
}
[/code]

And then implement the rule in the Global.asax‘s GetVaryByCustomString method:

[code language="csharp"]
public override string GetVaryByCustomString(HttpContext context, string custom)
{
  if (custom.Equals("RequestVerificationTokenCookie", StringComparison.OrdinalIgnoreCase))
  {
    string verificationTokenCookieName = context.Request.Cookies.AllKeys
        .FirstOrDefault(cn => cn.StartsWith("__requestverificationtoken", StringComparison.InvariantCultureIgnoreCase));
    if (!string.IsNullOrEmpty(verificationTokenCookieName))
    {
      return context.Request.Cookies[verificationTokenCookieName].Value;
    }
  }

  return base.GetVaryByCustomString(context, custom);
}
[/code]

This code checks whether the anti-forgery token cookie was sent and, in that case, returns its value, which acts as the key for the cached version of the page (this can be safely used since the cookie and the hidden field should have the same partial value); otherwise, the base class method is invoked, which ends up ignoring the cached versions of the page.

Please note that this will generate a lot of cached versions of the page (at least one per user session), which can greatly degrade the site’s performance. In this case I decided to keep the cache for that page, as it was the home page and I assumed it would be hit several times by the same user.

Progressive Enhancement

Progressive Enhancement (PE) is an approach for building web applications that starts from a baseline of minimum functionality that every browser is assumed to support, with hooks that allow functional enhancements when a browser can support them. PE benefits users by supporting older browsers, while also providing an improved experience to users with modern browsers and technologies.

Progressive enhancement and its counterpart, Graceful Degradation, are approaches that can help rich web applications support more browsers and have a wider reach.

Disclaimer: We are writing this documentation as part of the new Web Client Guidance that is being done in patterns & practices. We are close to finishing this project, so I would like to get more feedback or validation from everyone that gets the chance to read this before we release. Have in mind that the content of this topic can change both in content and in form (it can change because of YOUR feedback).

Progressive Enhancement vs. Graceful Degradation

Graceful degradation is the practice of building an application for modern browsers while ensuring it remains functional in older browsers and other user agents (for example, accessibility tooling and mobile devices).

Progressive enhancement is usually preferable to graceful degradation: it starts with the simple basics and improves usability and aesthetics on top of that. When developing with progressive enhancement you should define a development baseline; for example, when using MVC, the baseline could be a user that does not have JavaScript enabled. The expectation is that the basic functionality is available without JavaScript, but with JavaScript the experience is richer, including functionality such as client-side form validation, predictive fetching, and preview.

Progressive enhancement is strongly recommended when designing something from scratch; graceful degradation can be tedious and difficult, and requires more work to implement.

However, if you are maintaining an existing site, the easiest choice is to provide graceful degradation, unless you want to rewrite the whole site.

It is also possible to combine both approaches in the application. Even some features may end up being the same, independent of the approach used.

Note: For more information about both concepts, see Progressive Enhancements and Graceful Degradation: Making a Choice.

Core Principles

Progressive Enhancement consists of the following core principles:

  • The basic content should be accessible to all browsers
  • The basic functionality should be accessible to all browsers
  • Semantic markup contains all content
  • Enhanced layout is provided by externally linked CSS
  • Enhanced behavior is provided by unobtrusive, externally linked JavaScript
  • End user browser preferences are respected

There are several scenarios to consider for using these patterns:

  1. Browsers that have JavaScript turned off
  2. Different browsers that:
    1. implement JavaScript and DOM features differently
    2. implement CSS differently
  3. The need for SEO (Search Engine Optimization)

Browsers that Have JavaScript Turned Off

Browsers may have JavaScript disabled due to company policies, screen readers, or other accessibility issues. This is usually the biggest challenge of all, as it requires the most development effort. If it is not done correctly, it could prevent the user from interacting with the site at all.

To address this scenario, you should start developing the site using HTML for the basic content. You should use semantic markup and CSS to enhance the layout. Then you should start adding functionality for browsers that do support JavaScript and other client technologies.

The following are some advantages and disadvantages of creating applications without JavaScript.

Advantages:

  • Can support browsers without JavaScript
  • It is SEO friendly (as search engine spiders do not use JavaScript)
  • Easier to create an accessible site as a side effect, without too much effort
  • Links can be opened in new tabs and bookmarked (for example, middle-clicking opens the link in the href instead of executing the JavaScript defined for the click event)

Disadvantages:

  • There is usually a need to create a server-side version of a view, and a separate client version of the same view or portions of it.

The developer has to be aware of the scripted and non-scripted features of the application, instead of just assuming that JavaScript will always be present.

Different Browsers that Implement JavaScript and DOM Features Differently

This challenge is typically known as cross-browser incompatibility. When not handled successfully, it might prevent the user from interacting with parts of the site, causing frustration, or it might even break the JavaScript functionality completely, resulting in a situation similar to the previous scenario.

This challenge has been a major pain point for web developers over the years. It led to a reliance on server-side controls, built by expert web developers, that emit islands of JavaScript automatically so that application developers would not have to learn all the browser differences. This, in turn, kept application developers from learning any JavaScript at all, even for simple tasks.

In recent years, several JavaScript libraries have emerged that provide a cross-browser experience for the most common tasks. By writing your code on top of these base libraries, in most cases you can avoid branching logic for the different browsers, which has renewed the appeal of JavaScript as a development tool.

Different Browsers that Implement CSS Differently

CSS differences between browsers, though generally worth accounting for, are usually not a big problem if left unaddressed. CSS that does not work in all browsers might cause some browsers to render the page with inconsistencies in sizes, placement, or overlapping sections, but this will generally not prevent the user from interacting with the site.

The following are some tips for dealing with these CSS inconsistencies:

  • Use a CSS reset, which resolves many of the cross-browser inconsistencies in size, placement, and overlapping.
  • Develop the site with standards in mind (consider targeting XHTML 1.0 Transitional here), which almost always works in IE8, Firefox, Safari, Chrome, and usually Opera.
  • If the standards targeted HTML/CSS has IE6/7 issues, fix those issues by including CSS “hacks” in separate stylesheets referenced through conditional comments.

It is up to the business to decide the ROI for creating a site that looks identical across all the browsers. Because not supporting all versions of CSS implementations will not prevent users from interacting with your site, you could typically decide to support the CSS features in the browsers with the largest market shares, while ignoring older browsers.

How to Achieve Progressive Enhancement

To implement PE you should begin with the basic version, and then add enhancements for those browsers that can handle them.

First, you should start developing your application in plain HTML. Plain HTML is understood by all browsers. Furthermore, search spiders will be able to access and index your site content.

This also means that all anchors and forms must have a working target URL for navigating or posting data without the need for JavaScript.

Then, add styles using CSS in an external file to improve the look and feel of the site. Almost all browsers support CSS, and those that do not will simply ignore the styling.

Finally, add JavaScript support using unobtrusive JavaScript. Unobtrusive scripts are silently ignored by browsers that do not support JavaScript, but applied by those that do.

Unobtrusive JavaScript separates content from behavior. This means you should avoid having inline JavaScript as in the following example.

<form id="profile" action="">
  <input type="text" name="age" />
  <input type="submit" value="Save" onclick="SaveProfileWithAjax();" />
</form>

This is because the purpose of markup is to describe a document’s structure, not its programmatic behavior.

The unobtrusive solution is to register the necessary event handlers programmatically, rather than inline. This is commonly achieved by assigning a particular CSS selector to all elements which are affected by the script, to reduce the amount of script code. The JavaScript code should reside in a separate file. In the following code the id attribute is used for identifying a form:

<form id="profile" action="">
  <input type="text" name="age" />
  <input type="submit" value="Save" />
</form>

It is recommended that you use libraries that provide an abstraction of the DOM. The jQuery and ASP.NET Ajax libraries do a good job at this.

The following jQuery script binds the submit event of the form with id=”profile”, to the SaveProfileWithAjax function:

Note: jQuery simplifies this, by providing a CSS-like selector, instead of just getting the elements by ID.

JavaScript using jQuery
$(document).ready(function () { // Wait for the page to load.
    $('form#profile').bind('submit', SaveProfileWithAjax);
});

function SaveProfileWithAjax(event) {
    event.preventDefault(); // Prevent the browser from submitting the form in the default way, as we will handle the post programmatically using AJAX.
    // Post the data using an AJAX call
}

Note: The event.preventDefault JavaScript method cancels the event if it is cancelable, meaning any default action normally taken by the implementation as a result of the event will not occur.

The following code shows the implementation in ASP.NET Ajax Library.

JavaScript using ASP.NET Ajax Library
Sys.Application.add_init(function () {
    $addHandler($get('profile'), 'submit', SaveProfileWithAjax);
});

function SaveProfileWithAjax(event) {
    event.preventDefault(); // Cancel the default form submission.
    // Post the data using an AJAX call
}

Note: To attach events, modern browsers use the addEventListener function specified in the DOM Level 2 (Events) specification, while Internet Explorer will use its proprietary attachEvent function. For this reason, if you want to achieve cross-browser compatibility in a simple manner, you should always attach events using a library like jQuery or ASP.NET Ajax Library, which automatically deals with these compatibility issues.

Tips for Achieving Progressive Enhancement in ASP.NET MVC

Consider the following tips when implementing the progressive enhancement pattern in ASP.NET MVC.

First, build an HTML feature that works without JavaScript; for example, the song rating functionality of the Reference Implementation, which was created using radio buttons.

The following rules apply to every web application (not just ASP.NET MVC ones) to achieve PE.

  1. Use semantic markup to render the basic content.
  2. Use CSS to enhance the layout.
  3. Anchors should always have the href attribute set to return a working view from a controller.
  4. Forms should always have the action attribute set to post the data to a working action in a controller, based on the input elements in the form. Consider rendering hidden inputs for preset values that the users don’t need to update or see.

Once the basic functionality works without JavaScript, consider the following:

  1. Enhance the experience using Unobtrusive JavaScript. This typically includes:
    • Adding client-side validation to acquire immediate feedback without requiring a full post.
    • Converting full page POST/GET requests into AJAX calls that update portions of the page.
    • Adding animations / eye candy features.
  2. When converting a full page request into an AJAX call, hijack and prevent the default action of the anchor link or form submit, and replace it with the AJAX call. You typically do this by calling the preventDefault method of the arguments object received when handling the click/submit event using JavaScript.
  3. Avoid having different URLs for accomplishing the same business result for these cases. You can add branching code in your controller that checks the Request’s headers to see if it is an AJAX call as opposed to a typical GET or POST call, and return a different result in this case.
    • Typical approaches include returning partial HTML markup to be inserted without processing into the DOM, or data represented in JSON that requires some processing in the browser to display it.
    • ASP.NET MVC provides a standard way of checking if the request was initiated using XmlHttpRequest by calling the Request.IsAjaxRequest() extension method
      Note: If you use ASP.NET WebForms instead of ASP.NET MVC, you might need to create different endpoints, such as Web Services, ASP.NET Page Methods, or even expose MVC controllers for these actions, as there is no easy way of reusing the same endpoint, because a URL maps to a physical ASPX file.
  4. In the cases where you return JSON, you might need to use JavaScript to directly manipulate the DOM, or when possible, have a template view that renders data that was retrieved in JSON format. Having templates can help you better separate UI logic from the model in the JavaScript code. The ASP.NET Ajax Library has a very good templating engine that allows you to bind to a ViewModel in a somewhat similar way as WPF & Silverlight. For more information, see Isolating the Domain Model from the Presentation Layer (this is in the Web Client Guidance documentation).
  5. In some cases, you may have DOM elements that only make sense in a non-JavaScript version of the page. This is common, and you might want to remove these elements by using JavaScript if it is available in the browser.
    For example, you may have a link that redirects to a different page, but when you enhance it with JavaScript, you might hide those links entirely, and replace it with a much richer experience that does not require a redirect for example.
    Note: You can also use the noscript HTML element to render content when JavaScript is not enabled or not supported by your browser.
  6. Cascading drop-downs are another canonical example: if you have a State drop-down that, once selected, sets the available cities in another drop-down, the non-JavaScript version will have a visible submit button, so after setting the State the user can POST back to the server and get the same page with the available cities already populated. When JavaScript is enabled, after the page is loaded you might want to hide this button and, on selection change of the State drop-down, request the available cities with an AJAX call that returns JSON and update the dependent Cities drop-down. There is no need for the user to explicitly click the button to get the cities.
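The cascading drop-down enhancement above can be sketched as follows. The helper that builds the option markup is shown as runnable code; the data shape (a JSON array of city names), the element ids, and the `/cities` URL are assumptions for illustration:

```javascript
// Builds the <option> markup for the Cities drop-down from a JSON array of
// city names returned by the server (the data shape is an assumption).
function buildCityOptions(cities) {
  return cities.map(function (name) {
    return '<option value="' + name + '">' + name + '</option>';
  }).join('');
}

// In the browser, the enhancement (using jQuery; ids and URL illustrative):
// $('#submit-state').hide(); // hide the non-JavaScript submit button
// $('#state').change(function () {
//   $.getJSON('/cities', { state: $(this).val() }, function (cities) {
//     $('#city').html(buildCityOptions(cities));
//   });
// });
```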

Finally, move on to the next feature. Remember to always build a non-JavaScript version of a feature first, and enhance it afterwards. There might be secondary features in the application that you decide to implement only for browsers with JavaScript enabled, but you should make this decision consciously, identifying the risks of not supporting that scenario.


More Web Client Guidance

You can find this and many other topics and key decisions for creating web client applications (both in ASP.NET Web Forms and ASP.NET MVC) in our latest Web Client Guidance drops. We are close to finishing this guidance, so your prompt feedback is invaluable to us. Make sure you check it out and comment on this topic or in the CodePlex forums (there is a special tag to mark conversations for this new guidance).


The Single Page Interface Pattern


Typically, the user interface in web applications is composed of multiple pages. With the increasing popularity of AJAX, it is now common for people to want web applications that look, feel, and behave like desktop applications. One common problem in web applications is the constant page reloads and flickering when navigating the application.

Disclaimer: We are writing this documentation as part of the new Web Client Guidance that is being done in patterns & practices. We are close to finishing this project, so I would like to get more feedback or validation from everyone that gets the chance to read this before we release. Have in mind that the content of this topic can change both in content and in form (it can change because of YOUR feedback).


Any of the following conditions suggest using the solution described in this pattern:

  • You want to minimize the page reloads and flickering when navigating through the application.
  • You want to change only the content of the page and maintain the general layout (that is, the header, footer, and menus) when updating the page.
  • You want the user to keep the context of most of the page, while only manipulating the data on part of the page.
  • You want to have long running processes and/or avoid losing/refreshing dynamic content state while navigating (for example, uploading files or a chat window).
  • You want to improve the user experience of Web applications, by simulating the look and feel, and usability of desktop applications.


Have all of your page features, or at least most of them, in a single page. This is known as the Single-Page Interface (SPI) model. In the SPI model, all browser interactions with a Web application occur inside the boundaries of one page.

The SPI pattern improves the UI navigability of Web applications because it decreases the number of page reloads and eliminates flickering.

The SPI model calls for a number of highly interactive features, including in-place editing, a context-sensitive user interface, immediate user feedback prompts, and asynchronous operations.

The Single-page Interface pattern elements


SPI is an AJAX pattern that suggests you have only a main page in your Web Application. This page interface is rearranged as a result of user interaction with the application.

Having a single-page interface may result in your application using fewer distinct URLs. Therefore, this pattern may not provide good support for search engines, unless you implement explicit mechanisms that also allow navigating the site by using full redirects.


The Single-page Interface pattern has the following liabilities:

  • As all interactions occur within a page, there are fewer distinct URLs. Therefore, this pattern does not inherently provide good support for search engines.
  • You need to implement a mechanism to identify the different states of the application, both for history (Back/Forward) browsing support and for bookmarking; it can also be used to support permanent links. Unique URLs and deep linking can be supported in an SPI app, but at additional development cost; web applications built using more traditional full page refreshes support unique URLs and deep linking more easily.
  • You have to be careful with memory leaks, because as the page is not refreshed as often, the memory is not cleaned automatically by the browser. Therefore you have to manually dispose objects, handlers, and so on, when navigating through the page.
  • The SPI pattern requires a lot of JavaScript. If JavaScript is turned off, the user will not be able to use the application. This then requires down-level support in addition to the SPI pattern. It makes more sense to implement the SPI pattern if you have more control of your browsers, for example, if you have an intranet application and you know the corporate image does not restrict JavaScript, then SPI might work here. For the Internet case, it is more mixed because some users will enable JavaScript and some will not.
  • Consider using the Progressive Enhancement or Graceful Degradation pattern to address accessibility, browsers where JavaScript is disabled, and search engine optimization; otherwise older browsers or readers will not be able to use your application. If you choose to support accessing the website without JavaScript, the complexity of the application increases considerably, as you would typically need to create server-side and client-side versions of most of the views. This approach can also help add better support for search engines.

Identifying the State of an Application with Distinct URLs

When using AJAX and the SPI pattern, an application may perform different operations, and therefore pass through different states, while the URL in the browser stays constant. This creates the need for a mechanism to identify the different states of the application, for example, for moving back to a previous state.

Browsers implement the Back and Forward buttons functionality by caching the list of visited URLs.

In AJAX, the server communication is done through XMLHttpRequests, and these requests do not change the page URL. Therefore, the list of visited URLs is not modified.

The Unique URLs pattern helps address this problem by assigning a unique and expressive URL to each significant application state.

Scripting does not provide a mechanism to modify the list of visited URLs, but it does provide mechanisms to modify the URL itself.

You can manipulate the URL with JavaScript using the window.location.href property, but this will trigger a page reload; fortunately, you can use the window.location.hash property instead. The hash property holds the fragment identifier, an optional component of the URL that comes after the hash character (#). The string that comes after the hash character is not sent to the server; therefore, the browser is responsible for restoring the state through client scripts and retrieving the appropriate views. Because hash changes behave like normal links within a single page, no page reload occurs, and the browser behaves as if you had clicked a standard link, adding the URL to the list of visited URLs. This also enables history navigation, bookmarking, and sharing URLs with other people.

Note: Setting the hash property does not work reliably on all browsers, and you do not get change notifications on all browsers either. For this reason, it is recommended that you use a library that handles these differences in a cross-browser way, such as the Microsoft Ajax Library History control.

Therefore, when there is an important state change in your application, modify the URL hash property. This way, the changes will be tracked by the browser.

window.location.hash = stateData;

The resulting URL will look like the following.


Once you have identified each relevant state of the application, you need to implement code to parse the URL. You can choose the format used to represent a state based on your specific needs; you can even add parameters to the URL, as seen in the following example.


Finally, after parsing the URL, you should restore the application to the corresponding state based on the hash data. Keep in mind that the hash data is not sent to the server as part of the URL. In the previous example, when loading that URL, the application should go to the product details form, showing the details of the product with ID 5529.
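The parsing step can be sketched as follows. This is only an illustration: the hash format (a state name optionally followed by query-style parameters, such as #productDetails?id=5529) and the function name are assumptions, not part of the guidance.

```javascript
// Illustrative sketch: parse a hash fragment such as "#productDetails?id=5529"
// into a state name plus a map of parameters. The format is hypothetical.
function parseHashState(hash) {
    // Strip the leading "#" if present.
    var raw = hash.charAt(0) === "#" ? hash.substring(1) : hash;
    var parts = raw.split("?");
    var state = { name: parts[0], params: {} };
    if (parts.length > 1) {
        var pairs = parts[1].split("&");
        for (var i = 0; i < pairs.length; i++) {
            var pair = pairs[i].split("=");
            state.params[decodeURIComponent(pair[0])] =
                decodeURIComponent(pair[1] || "");
        }
    }
    return state;
}
```

On load (and whenever the hash changes), the application would call parseHashState(window.location.hash) and restore the corresponding view.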

More Information

For more information about the Single-Page Interface pattern, see the following:

For more information about software design patterns applied in the Web Client Guidance, see Patterns in the Web Client Guidance.

You can find this and many other topics and key decisions for creating web client applications (both in ASP.NET Web Forms and ASP.NET MVC) in our latest Web Client Guidance drops. We are close to finishing this guidance, so your prompt feedback is invaluable to us. Make sure you check it out and comment in this topic or in the codeplex forums (there is a special tag to mark conversations for this new guidance).


ASP.NET Ajax Library or jQuery?

I get this question very often. Should I use one or the other?
Well, my short answer in most circumstances is “you should use both“.
My long answer is “it depends on what you are trying to achieve”, and this blog post will try to cover the strengths of each library, and also why it’s OK to use both despite the overhead this “might” have.

ASP.NET Ajax Library vs jQuery functionality overlap

Disclaimer: We are writing this documentation as part of the new Web Client Guidance that is being developed at patterns & practices. We are close to finishing this project, so I would like to get feedback or validation from everyone who gets the chance to read this before we release. Keep in mind that this topic can change in both content and form (it can change because of YOUR feedback).
Also, this topic was created using ASP.NET Ajax Library 0911 beta and jQuery 1.3.2.

Guidelines for using ASP.NET Ajax Library and jQuery

There are different JavaScript libraries that can assist you in authoring JavaScript code that runs in the browser, and using at least one of them is an absolute must in today’s environment. Although ASP.NET WebForms is better suited to (but not limited to) using the ASP.NET Ajax Library, ASP.NET MVC has no preference for either the ASP.NET Ajax Library, jQuery, or other third-party libraries. Furthermore, the ASP.NET Ajax Library and jQuery work especially well together and complement each other, whether you render markup using WebForms or MVC.

The ASP.NET Ajax Library and jQuery can be used together in the same web application, and both are supported by Microsoft. These libraries do not conflict with each other; rather, they complement each other. The functionality overlap is not large, and since Microsoft began officially supporting jQuery, this overlap is being reduced with every release.

The ASP.NET Ajax Library has made important changes to its core to leverage jQuery functionality when it detects that jQuery is also loaded. For example, the ASP.NET Ajax Library provides some basic jQuery-like selector support for selecting DOM elements with its Sys.get(selector) method, which is used internally by many other components. Nevertheless, when jQuery is available, the selector passed into this method can be much more complex, because the method delegates the DOM selection to jQuery under the hood. Furthermore, all ASP.NET Ajax Library controls and plugins are also exposed automatically as jQuery plugins.
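The delegation idea can be sketched independently of both libraries. The names below are invented for illustration; Sys.get’s actual implementation is different.

```javascript
// Hypothetical sketch of the delegation pattern described above: use a rich
// selector engine when one is available, otherwise fall back to basic lookup.
function makeGet(richEngine, basicLookup) {
    return function get(selector) {
        if (richEngine) {
            // A full engine (like jQuery's) can handle complex selectors.
            return richEngine(selector);
        }
        // Basic support only: simple "#id" selectors.
        if (selector.charAt(0) === "#") {
            return basicLookup(selector.substring(1));
        }
        throw new Error("Complex selectors require a full selector engine");
    };
}
```

With no engine present only simple "#id" lookups succeed; once the engine is loaded, every selector is handed to it unchanged.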

We found that in most cases, although there is an additional cost for downloading two libraries instead of one, the productivity and the features covered by both libraries working together far outweigh that cost. Also keep in mind that if your application is hosted on the internet, as opposed to an intranet, it is strongly recommended that you use the Microsoft CDN, so the download time is reduced, and sometimes skipped entirely, as the files may already be cached by the browser.

The following sections describe some advantages we found of each library over the other. This list should not be taken as completely objective or exhaustive, as depending on which plugins you decide to use, there might be additional advantages or disadvantages for each. These guidelines are based on the experience gained by using both the ASP.NET Ajax Library and jQuery to build web applications over the last few months, as Microsoft has recently added support for jQuery within Visual Studio, and the ASP.NET Ajax Library has taken huge steps toward supporting both libraries side by side.

Why should you use jQuery

One of the more important reasons for using jQuery is DOM (Document Object Model) manipulation.

  • The jQuery library is extremely easy to use for finding elements using selectors, for moving elements to different locations inside the DOM, changing element classes, basic animations, and so on.
  • jQuery has a simple API for handling DOM events. However, this functionality is comparable to what can be accomplished with the ASP.NET Ajax Library. Because of this, in the Reference Implementations both approaches are used interchangeably, depending on what the handling logic was used for (that is, DOM manipulation, controller logic, and so on).

Another strong point of jQuery is the Developer community behind it.

  • Because jQuery supports only client-side functionality, it is server-side technology agnostic, and has a powerful extension mechanism.
  • For the previous reasons, it has been more widely used to develop plugins for a lot of different situations. These plugins are available through the official jQuery site, although each has its own license. Nevertheless, a large percentage of these plugins use very permissive open source licenses, and most of them use the same license as the jQuery core, to ease adoption for users who are already using jQuery.
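jQuery’s extension mechanism boils down to adding functions to a shared prototype object. A minimal stand-in (this is not jQuery itself, just the pattern) looks like this:

```javascript
// Minimal stand-in for jQuery's plugin mechanism. This is NOT jQuery: it wraps
// plain arrays instead of DOM nodes, purely to illustrate how $.fn extension works.
function $(items) {
    var wrapped = Object.create($.fn);
    wrapped.items = items;
    return wrapped;
}
$.fn = {
    // A "core" method, analogous to jQuery's each().
    each: function (callback) {
        for (var i = 0; i < this.items.length; i++) {
            callback(this.items[i], i);
        }
        return this; // returning the wrapper enables chaining
    }
};

// A plugin simply adds a method to $.fn, exactly as jQuery plugins do.
$.fn.count = function () {
    return this.items.length;
};
```

With real jQuery the wrapper holds DOM elements, but the extension mechanism, assigning a function to $.fn, is the same.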

Why should you use the ASP.NET Ajax Library

We found the ASP.NET Ajax Library to be especially useful for writing JavaScript logic whose objective is not just manipulating or animating the DOM. Also, the script loading capabilities were of great help both in the perceived performance of the application and in the development and organization of the JavaScript code.

Some other strong points of the ASP.NET Ajax Library are the following:

  • API and syntax similar to the .NET Framework. The library imitates the .NET Framework API, type system, and namespace hierarchies, so developers familiar with the .NET Framework can learn the library with ease.
  • The Ajax Script Loader that comes with the library is useful for loading scripts in parallel and for managing dependencies. Using the Script Loader gives you the following advantages.
    • Minimize an Ajax application’s render time
    • Handle loading script dependencies
    • Leverage script combining techniques
    • Perform lazy loading of scripts behind the scenes

Note: For more information about the ASP.NET Ajax Library Script Loader, there is a specialized Script Loading document in the Web Client Guidance; make sure to check it out.

  • It is especially useful for writing logic that does not only manipulate the DOM. It provides infrastructure for creating JavaScript controls and behaviors (extending the Sys.Component type), and also provides memory management and easy control creation and instantiation.
  • You can use Client Templates to bind the UI to an observable view model. This allows separating the concerns of UI specific code (or DOM manipulation) from the presentation logic. This also simplifies unit testing the JavaScript code by avoiding the need to test the DOM state and user interaction directly.
  • Browser history:  When the state of a web page changes by using Ajax calls to the server, the URL in the browser does not change automatically. The ASP.NET Ajax Library simplifies working with the browser history, with a cross-browser approach.
  • The Ajax Control Toolkit is a set of reusable controls that can be used from both the ASP.NET Ajax Library and jQuery using JavaScript code, and is also easy to use from server-side generated JavaScript, especially when using ASP.NET WebForms server controls.
  • It provides auto-generated proxy classes that simplify calling Web service methods from client scripts.
  • When using ASP.NET WebForms, the ASP.NET Ajax Library is the easiest to use in several scenarios, as many of the controls and functionality can be used from their server-side controls counterparts.
  • Working with ADO.NET Data Services is very easy with the ASP.NET Ajax Library. This, in combination with Client Templates, makes it very easy to create client-side views that consume data from the server.
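The dependency handling the Script Loader performs can be sketched as a topological resolution over named scripts. The API below is invented for illustration; the real Script Loader also downloads scripts in parallel and caches them.

```javascript
// Illustrative sketch of dependency-ordered script loading (names invented).
// A real loader also fetches scripts in parallel and caches results.
function resolveLoadOrder(scripts, name, order, visiting) {
    order = order || [];
    visiting = visiting || {};
    if (order.indexOf(name) !== -1) return order;      // already scheduled
    if (visiting[name]) throw new Error("Circular dependency: " + name);
    visiting[name] = true;
    var deps = scripts[name] || [];
    for (var i = 0; i < deps.length; i++) {
        resolveLoadOrder(scripts, deps[i], order, visiting);  // dependencies first
    }
    visiting[name] = false;
    order.push(name);
    return order;
}

// "app" depends on "templates" and "ajax"; "templates" also depends on "ajax".
var loadOrder = resolveLoadOrder(
    { app: ["templates", "ajax"], templates: ["ajax"], ajax: [] }, "app");
// → ["ajax", "templates", "app"]
```

Every script is scheduled only after all of its dependencies, and only once, which is the core guarantee a script loader provides.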

More Web Client Guidance

You can find this and many other topics and key decisions for creating web client applications (both in ASP.NET WebForms and ASP.NET MVC) in our latest Web Client Guidance drops. We are close to finishing this guidance, so your prompt feedback is invaluable to us. Make sure you check it out and comment in this topic or in the codeplex forums (there is a special tag to mark conversations for this new guidance).

p&p’s Web Client Guidance drop is out!

As some of you might probably know because of some posts by Blaine, the patterns & practices team that brought you Prism is working on a new Web guidance.

This project is currently under development, and as with all p&p assets, we opened up our biweekly drops to the community, so we can get early feedback from you.

Music Store Reference Implementation - Search Results

This all new Web guidance is not built on top of Web Client Software Factory, but can coexist with it. So if it’s not WCSF, and it is from the same group…

… what is this all about?

The anticipated benefits include (but might change in the final release, again, FEEDBACK is the magic keyword):

  • Provides infrastructure for developing and maintaining ASP.NET and AJAX applications
  • Provides guidance on MVC 2, jQuery, and the ASP.NET Ajax Library
  • Unit testing for ASP.NET and JavaScript client applications
  • Responsive applications
  • Flexible architecture that allows change
  • Separated presentation including unit testing view logic
  • Application modules are developed, tested and deployed separately
  • User Interface components are dynamically composed
  • Guidance on how to improve Web client security
  • Allows incremental adoption of the components

What is the current state?

This is the first public drop of the Web Client Developer Guidance. We are currently in the 5th iteration. The Reference Implementation (RI) shows several UI patterns (predictive fetch, preview, edit-in-line). Most recently, we’ve been working on implementing separated presentation patterns and composability within MVC. The RI has JavaScript unit tests using QUnit. Also included in the RI is guidance on minification and combining JavaScript files.

The 3 included QuickStarts are:

  • RI_WebForms. This is a port of the RI, developed using MVC, to Web Forms.
  • Validation QuickStart. This shows server and client-side validation.
  • WebFormsMVCHybrid. This shows MVC functionality within a Web Forms application.

The documentation includes draft guidance on cross-site scripting, cross-site request forgery, update panel, and more.

Bear with us: this is at a very early stage, so for example you might see a bad-looking site, as we haven’t hired a designer yet to enhance our CSS and so on (yup, us developers frequently pick colors from the combobox, and remember that the usual choices are Aqua, Magenta and so on :D).

How can I consume this?

There is a readme file included in the drop that tells you how to get the dependencies (there are just a few). It also tells you how to run the RI and the JavaScript unit tests.

Once you are running the RI, you’ll see on each page a “last-minute information” panel that tells you what we are demonstrating on that page from a technical perspective (expect improvements to this in future drops).

Now that this drop is out, I’ll be writing blog posts more frequently, to tell you about the decisions and hidden goodies we’ve found while developing.

Where is it?

Get it from codeplex! Current drop is the one from Nov 13th.

Remember, WE WANT FEEDBACK FROM YOU! (yes, we are not just kindly asking; you can make the project be what you need, we just need to know what you think about it).


Book: Programming Microsoft ASP.NET 2.0: Core Reference

During the last two months, I spent my travel time to and from home reading the book mentioned in the title. When I told this to Matias, he asked me if I was crazy, because it is a big book (700 pages), as thick as a bible.

I borrowed it from the Southworks library and read it because, although I have two years of experience developing on this platform, there are things and tools for building better ASP.NET web applications that I didn’t know until now. I also think this will be very useful in my current job on the UX Patterns & Practices (Sustained Engineering) team, providing better support to the community in the Web Client Software Factory (WCSF) forum.

My opinion about the book

This is a great book. A good way to consume it is to read all the content once, to learn what the book covers and refresh the concepts in your mind, and then to keep it in your library as a reference to check when you have doubts about a particular topic or about the best way to implement something in ASP.NET.

The different Chapters of the Book

The book contains 14 chapters going through different topics, some very basic, like the following:

  • What is ASP.NET & Web Development in VS (Chapters 1 & 2)
  • How pages are structured and how you can work with them (Chapters 3 & 5)
  • ASP.NET Controls (HTML & Web Controls) and UI Elements (Chapters 4 & 6)
  • ADO.NET (Chapters 7 & 8)

And others that are more interesting and useful, in my opinion:

  • The Data-Binding Model and how it is performed in ASP.NET (Chapter 9)
  • The different objects that compose the HTTP Request Context (Chapter 12)
  • The different ways to save data in your Web application using the Session, the Cache and/or the ViewState (Chapters 13 & 14)
  • The different ways to authenticate the User of your application, the Membership and Role Management APIs and the security web controls (Chapter 15)

Next Steps

As you may know, ASP.NET 4.0 will be released this year. So, this book is quite old, but I stand by my opinion: it is a good book. The idea is to continue reading books, articles, and blog posts about the new features in the newer versions, building on the knowledge I got from this book.


IssueTracker Azure Edition – a Cloud Application

Couple of weeks ago Ryan Dunn announced Azure Issue Tracker. From this post:

"This sample application is a simple issue tracking service and website that pulls together a couple of the Azure services:  SQL Data Services and .NET Access Control Service."image

I’ve been working with Ryan and other guys at DPE and Southworks to put together this sample before PDC. With all the back and forth (the .NET services were not working as reliably as they do now), we were not able to pull it through at that time. Well, it’s now live and you can download the source code. Some of its features:

  • [Identity] .NET Services Access Control as a relying party and claims transformation STS
  • [Identity] Federation against LiveID and claim mapping between email -> tasks. I hinted at the implementation in this post.
  • [Identity] Claims aware application and service layer (by doing identity delegation with ActAs)
  • [Data] Storage on SDS using the flexible schema to extend the data model of the issue
  • [General] Multi-tenancy at all levels (identity, data, programming model)
  • [General] Clean separation of concerns using ASP.NET MVC, Geneva Framework, WCF and WF.

This is the standard edition. The enterprise edition is coming with features related to manageability (Management API, Powershell CmdLets, MMC, SCOM, etc.) and identity federations against third party STS. Stay tuned!

Download the code

ASP.NET 2.0 Introduction

The following post gives an introduction to some of the main characteristics provided by ASP.NET 2.0 when developing web applications.

Component Model

It is important to remark that ASP.NET 2.0 uses a component model that maps all elements required by the website to server-side classes.

That is why pages are transformed by the ASP.NET runtime into classes that inherit from Page. To transform page elements into their server-side “pairs”, the runat attribute must be added to the element declaration.

The great advantage that comes with this is that, by “transforming” each page element into an object, you can use the entire .NET Framework class library, and the usual object-oriented programming syntax can be used to modify it.

For example:

<!-- Page Layout -->
<body runat="server" id="TheBody">

So in your code you could write the following to set the background color at runtime:

public partial class _Default : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // "TheBody" is the server-side object for the <body> element above.
        TheBody.Style[HtmlTextWriterStyle.BackgroundColor] = "#006600";
    }
}

Provider Model

The provider model is based on the Strategy pattern, which allows the application to select the most suitable algorithm for a specific behavior. This model enables pieces of base functionality to be replaced easily, following a similar approach each time.

The model is composed of base classes that define the methods the customizable implementations must implement, thus serving as guidance for developers. Initialization parameters and storage settings for each provider are defined in the configuration file.
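As a language-agnostic sketch (JavaScript here, with hypothetical names; the actual ASP.NET providers are .NET classes configured in web.config), the pattern boils down to interchangeable implementations of one contract, selected by configuration:

```javascript
// Sketch of the provider/Strategy idea in JavaScript (ASP.NET providers are
// .NET classes; this only illustrates the pattern). Names are hypothetical.
var providers = {
    // Two interchangeable implementations of the same "membership" contract.
    inMemory: {
        users: { alice: "secret" },
        validateUser: function (name, password) {
            return this.users[name] === password;
        }
    },
    alwaysDeny: {
        validateUser: function () { return false; }
    }
};

// "Configuration" selects which strategy the application uses, much like the
// defaultProvider attribute in a provider's configuration section.
function createMembership(config) {
    var provider = providers[config.defaultProvider];
    if (!provider) throw new Error("Unknown provider: " + config.defaultProvider);
    return provider;
}
```

Switching from one implementation to the other requires only a configuration change; the calling code keeps using the same validateUser contract.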

HTTP Runtime Environment

The following diagram shows the overview of the HTTP Runtime Environment:


Basically, when an ASP.NET request is received, it is directed to an application pool, within which a worker process executes (w3wp.exe in ASP.NET 2.0 with IIS 6.0). Each request is served by a different worker thread. The thread starts the HTTP runtime pipeline, which allows requests to be modified (by HTTP applications/modules) before the HTTP handler executes for the request. Once handled, the response goes back through the worker thread.

The HTTP context is obtained before any changes are done to the request. Since it is “raw”, you can run code against the request prior to its modification:

void ShowName()
{
    HttpContext currentContext = HttpContext.Current;
    string user = currentContext.User.Identity.Name;
    currentContext.Response.Write("Hi " + user);
}
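The modules-then-handler flow described above can be sketched language-agnostically (JavaScript below, with invented names; the real pipeline is implemented by the ASP.NET runtime in managed code):

```javascript
// Sketch of the HTTP pipeline idea: each module may inspect or modify the
// request before a single handler produces the response. Names are illustrative.
function runPipeline(modules, handler, request) {
    for (var i = 0; i < modules.length; i++) {
        request = modules[i](request);   // modules transform the request
    }
    return handler(request);             // the handler produces the response
}

// A module that attaches user information, like an authentication module would.
var modules = [function auth(req) { req.user = "alice"; return req; }];

// A handler analogous to the ShowName snippet above.
var response = runPipeline(modules, function (req) {
    return "Hi " + req.user;
}, {});
// → "Hi alice"
```

The key point matches the diagram's description: the handler runs last, after every module has had its chance to modify the request.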