All posts by Matias Woloski

Blog moved to woloski.com

NEW URL

http://woloski.com

NEW RSS

http://feeds.feedburner.com/woloski

DelegatingHandler vs HttpOperationHandler – WCF Web API

Yesterday Phil Haack wrote a post about Implementing an Authorization Attribute for WCF Web API. We’re doing something similar to handle auth using Simple Web Tokens handled by ACS, and I found a mix of approaches across Pedro Felix, Howard’s post, Lewis, and Johnny’s team, who are also working on something similar. However, I was too lazy to read the blogs and threw the question out on Twitter, knowing that @gblock would give me the answer I wanted in a matter of minutes :)

@woloski: when would you use a delegatingchannel vs httpoperationhandler? I’ve found different samples using both

And indeed he replied. I like things explained in plain English by someone who really knows the subject, so here are the tweets, with some color coding to tell one concept from the other.

NOTE: make sure to also read Glenn’s post which goes into much more detail.

@gblock: there are significant diffs. One is for pure http request / resp related concerns (message handlers) the other for app level

@gblock: one is global / knows nothing about the service, the other does know about the service.

@gblock: one is a Russian doll allowing pre-post handling, the other is a sequential pipeline.

@gblock: one handles model binding type scenarios (operation handlers) the other does not

@gblock: one is async (message handlers) the other is sync. So if you have something io bound use message handlers

@gblock: for cross cutting http concerns like etags, or if-none-match use message handlers.

@gblock: for validation / logging of app data use operation handlers. For security you might use both as Howard did for Oauth

@gblock: if it is truly cross cutting and doesn’t require details about the operation itself like parameter values.

@gblock: message handlers can handle requests dynamically ie they can handle a request to foo without an op foo

@gblock: architecturally I think they make sense even though there is some overlap. HTTP concerns vs app concerns is the line.

In our case we will use HttpOperationHandlers, because we want access to the operation to check whether it is decorated with an attribute.
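To make that concrete, here is a rough sketch of what such a setup could look like against the WCF Web API preview bits. This is not our actual code: the RequireAuthorizationAttribute and SimpleWebTokenValidationHandler names are hypothetical placeholders, and the exact preview API surface (HttpOperationHandlerFactory, HttpOperationDescription and its Attributes collection) changed between drops.

```csharp
// Sketch only: hook the operation-handler pipeline and add a token-validation
// handler just for operations decorated with a (hypothetical) attribute.
using System.Collections.ObjectModel;
using System.Linq;
using System.ServiceModel.Description;
using Microsoft.ApplicationServer.Http.Description;
using Microsoft.ApplicationServer.Http.Dispatcher;

public class RequireAuthorizationAttribute : System.Attribute { }

public class AuthOperationHandlerFactory : HttpOperationHandlerFactory
{
    protected override Collection<HttpOperationHandler> OnCreateRequestHandlers(
        ServiceEndpoint endpoint, HttpOperationDescription operation)
    {
        var handlers = base.OnCreateRequestHandlers(endpoint, operation);

        // An operation handler knows which operation it is attached to, so we
        // can check whether the service method carries our attribute and add
        // the token-validation handler only there (assumes the preview's
        // HttpOperationDescription exposes the operation's attributes).
        if (operation.Attributes.OfType<RequireAuthorizationAttribute>().Any())
        {
            handlers.Add(new SimpleWebTokenValidationHandler());
        }

        return handlers;
    }
}
```

This is exactly the per-operation knowledge that a DelegatingHandler (message handler) does not have, which is what settled the choice for us.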

Case closed!

WebCast: Explorando Windows Azure AppFabric – Service Bus Messaging y Access Control Service

A little over a month ago Edu Mangarelli invited me to give a webcast on Windows Azure AppFabric, which I gladly accepted. Finally, last Wednesday Hernan Meydac Jean and I delivered a one-hour presentation on the topic. In particular, we focused on explaining Service Bus Messaging (queues, topics, and subscriptions) and Access Control Service (version 2, which is now in production).

Here is the link to the webcast, which was recorded, for those who are interested.

image

https://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032494854&culture=es-AR

Not enough space on the disk – Windows Azure

No, it’s not because the Local Storage quota is low; that’s easy to fix by increasing the quota in the ServiceDefinition. I hit this nasty issue while working with WebDeploy, but since you might run into it in a different context as well, I wanted to share it and give you some hours of your life back, dear reader :)

Problem

WebDeploy throws an out of disk exception when creating a package

There is not enough space on the disk.
   at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
   at System.IO.FileStream.WriteCore(Byte[] buffer, Int32 offset, Int32 count)
   at System.IO.BinaryWriter.Write(Byte[] buffer, Int32 index, Int32 count)
   at Microsoft.Web.Deployment.ZipEntry.ReadFromFile(String path, Boolean shouldCompress, BinaryWriter tempWriter, Stream stream, FileAttributes attr, DateTime lastModifiedTime, String descriptor, Int64 size)
   at Microsoft.Web.Deployment.ZipEntry..ctor(String path, DeploymentObject source, ZipFile zipFile)

Analysis

WebDeploy uses a temp path to create temporary files during package creation. According to MSDN this folder has a 100 MB quota, so if the package is larger than that, the process throws an IO exception because the “disk is full” even though there is plenty of space. Below is a trace from Process Monitor running in the Azure Web Role, showing CreateFile returning DISK FULL.

image

Looking at the assembly with Reflector, we can confirm that WebDeploy uses Path.GetTempPath.

image
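If you want to confirm which directory your own role is actually using, a one-line trace does the job. This is just a sanity-check sketch, not part of the fix:

```csharp
// Sanity check: trace the effective temp path from role code (for example
// from OnStart or a diagnostics page). On an unmodified Azure role this
// points at the 100 MB local temp folder.
using System.Diagnostics;
using System.IO;

public static class TempPathCheck
{
    public static void TraceTempPath()
    {
        Trace.TraceInformation("Temp path: {0}", Path.GetTempPath());
    }
}
```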

Solution

Since we can’t change the WebDeploy code, nor parameterize it to use a different path, the solution is to change the TEMP/TMP environment variables, as suggested here: http://msdn.microsoft.com/en-us/library/gg465400.aspx#Y976. An excerpt…

Ensure That the TEMP/TMP Target Directory Has Sufficient Space

The standard Windows environment variables TEMP and TMP are available to code running in your application. Both TEMP and TMP point to a single directory that has a maximum size of 100 MB. Any data stored in this directory is not persisted across the lifecycle of the hosted service; if the role instances in a hosted service are recycled, the directory is cleaned.

If the temporary directory for the hosted service runs out of space, or if you need data to persist in the temporary directory across the lifecycle of the hosted service, you can implement one of the following alternatives:

  • You can configure a local storage resource, and access it directly instead of using TEMP or TMP. To access a local storage resource from code running within your application, call the RoleEnvironment.GetLocalResource method. For more information about setting up local storage resources, see How to Configure Local Storage Resources
  • You can configure a local storage resource, and point the TEMP and TMP directories to point to the path of the local storage resource. This modification should be performed within the RoleEntryPoint.OnStart method.
  • The following code example shows how to modify the target directories for TEMP and TMP from within the OnStart method:

    using System;
    using Microsoft.WindowsAzure.ServiceRuntime;
    
    namespace WorkerRole1
    {
       public class WorkerRole : RoleEntryPoint
       {
          public override bool OnStart()
          {
             string customTempLocalResourcePath = 
                RoleEnvironment.GetLocalResource("tempdir").RootPath;
             Environment.SetEnvironmentVariable("TMP", customTempLocalResourcePath);
             Environment.SetEnvironmentVariable("TEMP", customTempLocalResourcePath);
                
             // The rest of your startup code goes here…
             
             return base.OnStart();
          }
       }
    }
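For RoleEnvironment.GetLocalResource("tempdir") to succeed, the local storage resource also has to be declared in the service definition. A minimal sketch (the service name and size are illustrative; pick a sizeInMB large enough for your packages):

```xml
<ServiceDefinition name="MyService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WorkerRole name="WorkerRole1">
    <LocalResources>
      <!-- "tempdir" must match the name passed to RoleEnvironment.GetLocalResource -->
      <LocalStorage name="tempdir" sizeInMB="2048" cleanOnRoleRecycle="true" />
    </LocalResources>
  </WorkerRole>
</ServiceDefinition>
```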

Windows Azure Accelerator for Worker Role – Update your workers faster

During the last couple of years I have worked quite a lot with Windows Azure. There is no other choice if you work with the Microsoft DPE team, like we do at Southworks :)

The thing is that we usually have to deal with last-minute deployments to Azure that can take more than a minute <grin>. The good news is that some of that pain has started to ease lately.

• First, the Azure team enabled Web Deploy. For development this helped a lot.
• Then, we helped DPE build the Windows Azure Accelerators for Web Roles, announced by Nate last week. I explained how the accelerator works in a previous post. We actually used the Web Role Accelerator to deploy www.tankster.net (the social game announced on Wednesday). We have the game backend running in two small instances, and we had everything ready for the announcement last week, but of course there were tweaks to the game until the last minute. We did about 10 deployments in the last day before the release. At 15 minutes per deployment, that is two and a half hours; using the accelerator, each deployment took us 30 seconds instead. The dev team was happy to get those hours of our lives back.
• Now, to complete the picture, I thought it might be a good idea to have the same thing for Worker Roles.

So I teamed up with Alejandro Iglesias and Fernando Tubio from the Southworks crew, and together we created the Windows Azure Accelerator for Worker Roles.

How does it work?

You basically deploy the accelerator with your Windows Azure solution, and the “shell” worker polls a blob storage container to find and load the “real” worker roles. We made it easy, so you don’t have to change a single line of code in your actual worker role. Simply drop the entry point assembly and its dependencies into the storage container, set the name of the entry point assembly in a file (__entrypoint.txt), and the accelerator will pick it up, unload the previous AppDomain (if any), and create a new AppDomain with the latest version.
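The load/unload cycle can be sketched roughly as follows. This is a simplified illustration, not the accelerator’s actual code: the names are made up, and the real implementation also syncs the assemblies down from the blob container and detects changes before reloading.

```csharp
// Simplified sketch of the "shell" worker's reload loop (illustrative names;
// the real accelerator also downloads the assemblies from blob storage).
using System;
using System.IO;
using System.Threading;

public class WorkerShell
{
    private AppDomain workerDomain;
    private Thread workerThread;

    public void PollForever(string dropPath, TimeSpan interval)
    {
        while (true)
        {
            if (NewVersionAvailable(dropPath))
            {
                // Unload the previous version, if any.
                if (this.workerDomain != null)
                {
                    AppDomain.Unload(this.workerDomain);
                }

                // Host the new version in a fresh AppDomain so it can be
                // replaced later without recycling the role instance.
                this.workerDomain = AppDomain.CreateDomain(
                    "worker", null, new AppDomainSetup { ApplicationBase = dropPath });

                // __entrypoint.txt holds the entry point assembly's file name.
                var entryPoint = File.ReadAllText(
                    Path.Combine(dropPath, "__entrypoint.txt")).Trim();

                // Run the worker on a background thread so this loop keeps polling.
                var domain = this.workerDomain;
                this.workerThread = new Thread(
                    () => domain.ExecuteAssembly(Path.Combine(dropPath, entryPoint)))
                {
                    IsBackground = true
                };
                this.workerThread.Start();
            }

            Thread.Sleep(interval);
        }
    }

    private static bool NewVersionAvailable(string dropPath)
    {
        // Placeholder: the real implementation compares timestamps/ETags of
        // the blobs against what is currently deployed.
        return false;
    }
}
```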

    image

How to use it?

You can find the project on GitHub; the README on the home page explains the steps to use it. Download it and let us know what you think!

I would like to have a Visual Studio template to make it easier to integrate with existing solutions.

Enjoy!

Windows Azure Accelerators for Web Roles, or How to Convert Azure into a Dedicated, Elastic, Automated Hosting Solution

Yesterday Nathan announced the release of the Windows Azure Accelerators for Web Roles. If you are using Windows Azure today, this can be a pain reliever if you’ve gotten used to waiting 15 minutes (or more) every time you deploy to Windows Azure (and hoping nothing was wrong in the package, only to realize afterwards that you’ve lost 15 minutes of your life).

Also, as the title says, and as Maarten says in his blog, if you have lots of small websites you don’t want to pay for 100 different web roles, because that would be a lot of money. Since Azure SDK 1.4 you can use Full IIS support, but the experience is not optimal from a management perspective, because it requires redeploying each time you add a new website to the cscfg.

In short, the best way I can describe this accelerator is:

It transforms your Windows Azure web roles into a dedicated, elastic hosting solution with farm support and a very nice IIS web interface to manage the websites.

I won’t go into much more detail on the WHAT, since Nathan and Maarten already did a great job in their blogs. Instead I will focus on the HOW. We all love it when things work, but when they don’t, you want to know where to look. So below you can find the blueprints of the engine.

     

    image

     

    image

Below are some key code snippets that show how things work.

The snippet below is the WebRole entry point’s Run method. We spin up the synchronization service here, which blocks execution. Since this is a web role, the IIS process is launched as well and executes the code as usual.

    public override void Run()
    {
        Trace.TraceInformation("WebRole.Run");
    
        // Initialize SyncService
        var localSitesPath = GetLocalResourcePathAndSetAccess("Sites");
        var localTempPath = GetLocalResourcePathAndSetAccess("TempSites");
        var directoriesToExclude = RoleEnvironment.GetConfigurationSettingValue("DirectoriesToExclude").Split(';');
        var syncInterval = int.Parse(RoleEnvironment.GetConfigurationSettingValue("SyncIntervalInSeconds"), CultureInfo.InvariantCulture);
    
        this.syncService = new SyncService(localSitesPath, localTempPath, directoriesToExclude, "DataConnectionstring");
        this.syncService.SyncForever(TimeSpan.FromSeconds(syncInterval));
    }
    

Then the other important piece is the SyncForever method. This method does the following:

• Updates the IIS configuration using the IIS ServerManager API, reading from table storage
• Synchronizes the WebDeploy package from blob to local storage (point 4 in the diagram)
• Deploys the sites using the WebDeploy API, taking the package from local storage
• Creates and copies the WebDeploy package from IIS (if something changed)
    public void SyncForever(TimeSpan interval)
    {
        while (true)
        {
            Trace.TraceInformation("SyncService.Checking for synchronization");
    
            try
            {
                this.UpdateIISSitesFromTableStorage();
            }
            catch (Exception e)
            {
                Trace.TraceError("SyncService.UpdateIISSitesFromTableStorage{0}{1}", Environment.NewLine, e.TraceInformation());
            }
    
            try
            {
                this.SyncBlobToLocal();
            }
            catch (Exception e)
            {
                Trace.TraceError("SyncService.SyncBlobToLocal{0}{1}", Environment.NewLine, e.TraceInformation());
            }
    
            try
            {
                this.DeploySitesFromLocal();
            }
            catch (Exception e)
            {
                Trace.TraceError("SyncService.DeploySitesFromLocal{0}{1}", Environment.NewLine, e.TraceInformation());
            }
    
            try
            {
                this.PackageSitesToLocal();
            }
            catch (Exception e)
            {
                Trace.TraceError("SyncService.PackageSitesToLocal{0}{1}", Environment.NewLine, e.TraceInformation());
            }
    
            Trace.TraceInformation("SyncService.Synchronization completed");
    
            Thread.Sleep(interval);
        }
    }
    

My advice: if you are using Windows Azure today, don’t waste any more time on lengthy deployments :) Download the Windows Azure Accelerators for Web Roles.

    Enjoy!

Windows Azure AppFabric Access Control in Practice (Spanish)

As I wrote in a previous post, here is a video in which I demonstrate Windows Identity Foundation and Windows Azure Access Control Service.

In this example I show the following:

• I create a website from scratch and add a test identity provider (Add STS Reference)
• Something fails with certificates and I do a bit of troubleshooting
• I configure the same site in Windows Azure Access Control (ACS) and establish the trust relationship between my site and ACS
• I configure other identity providers in ACS
• I show how we can log in to an application using different identity providers. In particular I show Microsoft’s ADFS, Southworks’ ADFS, Google, LiveID, etc.

Toward the end I answer some questions…

image

I hope you find it useful!

Ajax, Cross Domain, jQuery, WCF Web API or MVC, Windows Azure

The title is SEO friendly, as you can see :) This week, while working on a cool project, we had to explore options for exposing a web API and making cross-domain calls from an HTML5 client. Our specific scenario: we develop the server API and another company builds the HTML5 client. Since we are in the development phase, we wanted to be agile and work independently from each other. The fact that we are using Azure and WCF Web API is an implementation detail; this approach can be applied to any server-side REST framework on any platform.

    image

We wanted a non-intrusive solution. This means being able to use the usual jQuery client (not learning a new client-side JS API) and keeping changes to the client and server code to a minimum. JSONP is an option, but it does not work for POST requests. CORS is another option that I would like to try, but I haven’t found a good jQuery plugin for it.

    So in the end this is what we decided to use:

    • WCF Web API to implement the REST backend (works also with MVC)
    • jQuery to query the REST backend
    • jQuery plugin (flXHR from flensed) that overrides the jQuery AJAX transport with a headless flash component
    • Windows Azure w/ WebDeploy enabled to host the API

Having a working solution requires the following steps:

1. Download the jQuery flXHR plugin and add it to your scripts folder
2. Download the latest flXHR library
3. Put the cross-domain policy XML file in the root of your server (change the allowed domains if you want)

      <?xml version="1.0"?>
      <!DOCTYPE cross-domain-policy SYSTEM "http://www.macromedia.com/xml/dtds/cross-domain-policy.dtd">
      <cross-domain-policy>
      
        <allow-access-from domain="*" />
        <allow-http-request-headers-from domain="*" headers="*" />
      
      </cross-domain-policy>
      

Here is some JavaScript code that registers flXHR as a jQuery AJAX transport and makes an AJAX call when a button is clicked:

    <script type="text/javascript">
        var baseUrl = "http://api.cloudapp.net/";
    
        $(function () {
            jQuery.flXHRproxy.registerOptions(baseUrl, { xmlResponseText: false, loadPolicyURL: baseUrl + "crossdomain.xml" });
        });
    
        $.ajaxSetup({ error: handleError });
    
        $("#btn").click(function () {
            $.ajax({
                url: baseUrl + "resources/1",
                success: function (data) {
                    alert(data);
                }
            });
        });
    
    function handleError(jqXHR, errtype, errObj) {
        var XHRobj = jqXHR.__flXHR__;
        alert("Error: " + errObj.number
            + "\nType: " + errObj.name
            + "\nDescription: " + errObj.description
            + "\nSource Object Id: " + XHRobj.instanceId);
    }
    </script>
    

It’s important to set the ajaxSetup error handler; otherwise POST requests will be converted to GET requests (this seems to be a bug in the library).

    Finally, make sure to include the following scripts

    <script src="/Scripts/jquery-1.6.2.js" type="text/javascript"></script>
    <script type="text/javascript" src="/Scripts/flensed/flXHR.js"></script>
    <script src="/Scripts/jquery.flXHRproxy.js" type="text/javascript"></script>
    

The nice thing about this solution is that you can set baseUrl to an empty string, remove the “registerOptions” call, and everything will keep working just fine from the same domain using the usual jQuery client.

This is the client (default.html):

    image

    This is the server implemented with WCF Web API running in Azure

    image

    Turning on the network monitoring on IE9, we can see what is going on behind the scenes.

    image

Notice the last two calls, initiated by Flash: the first downloads the cross-domain policy file, and the second is the actual call to the API.

    Some gotchas:

• I wasn’t able to send HTTP headers (via beforeSend). This means you can’t set the Accept header; it will always be */*
• There is no support for verbs other than GET/POST (this is a Flash limitation)

    I uploaded the small proof of concept here.

    Enjoy!

Windows Azure AppFabric Cache and Access Control in Spanish – Azure Bootcamp

A couple of months ago I had the pleasure of participating in the Windows Azure Bootcamp organized by Microsoft Argentina. It was a two-day event at which I presented Windows Azure AppFabric (Caching and Access Control Service). If you know my background, you can imagine that I dedicated 30% to Caching and 70% to Access Control :)

Thanks to Guada and Microsoft, who recorded the event and posted the material. I took the trouble to upload it to Vimeo so you don’t have to download a full 700 MB wmv.

Introduction to Windows Azure AppFabric Caching

image

Contents:

0:00 – 0:03: intro, agenda, and a bit of blah blah

0:03 – 0:25: Windows Azure AppFabric Caching theory

Introduction to Windows Azure Access Control Service 2.0 (Theory)

0:25 – 1:00: introduction to federated identity, protocols, claims, STS, FAQ, ADFSv2, and Windows Azure AppFabric Access Control Service v2.

image

In an upcoming post next week I will publish the second part of the talk, in which I use the Access Control Service to secure an application and use different identity providers.

UPDATE: the second part is published.

I hope you find it useful!

Troubleshooting WS-Federation and SAML2 Protocol

image

During the last couple of years we have helped companies deploy federated identity solutions using the WS-Fed and SAML2 protocols, with products like ADFS and SiteMinder, on various platforms. Claims-based identity has many benefits, but like every solution it has its downsides. One of them is the additional complexity of troubleshooting issues when something goes wrong, especially when things are distributed and in production. Since authentication is outsourced and is no longer part of the application logic, you need some way to see what is happening behind the scenes.

I’ve used Fiddler and HttpHook in the past to see what’s going on the wire. These are great tools, but they are developer-oriented. If the user who is having trouble logging in to an app is not a developer, things get more difficult.

• Either you have some kind of server-side log with all the tokens that have been issued and a nice way to query them by user
• Or you have some kind of tool that the user can run to intercept the token

Fred, one of the guys on my team, had the idea a couple of months ago to implement the latter. So together we coded the first (very rough) version of the token debugger. The code is really simple: we embed a WebBrowser control in a WinForms app and inspect the content in the Navigating event. If we detect a token being posted, we show it.
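The interception can be sketched like this, assuming WS-Federation, where the STS returns the token in a hidden wresult form field that is auto-POSTed to the app. This is an illustrative reconstruction, not the tool’s exact code:

```csharp
// Illustrative sketch of the token debugger's core: a WebBrowser control
// whose Navigating event checks the current page for a WS-Federation
// "wresult" field before the auto-post form submits it to the app.
using System.Windows.Forms;

public class TokenDebuggerForm : Form
{
    private readonly WebBrowser browser = new WebBrowser();

    public TokenDebuggerForm()
    {
        this.browser.Dock = DockStyle.Fill;
        this.browser.Navigating += this.OnNavigating;
        this.Controls.Add(this.browser);
    }

    public void Open(string url)
    {
        this.browser.Navigate(url);
    }

    private void OnNavigating(object sender, WebBrowserNavigatingEventArgs e)
    {
        // When the STS response page auto-posts to the app, the page that is
        // still loaded contains the token as a hidden input; grab it here.
        var document = this.browser.Document;
        if (document == null)
        {
            return;
        }

        foreach (HtmlElement input in document.GetElementsByTagName("input"))
        {
            if (input.GetAttribute("name") == "wresult")
            {
                var token = input.GetAttribute("value"); // the raw token XML
                MessageBox.Show(token, "Token intercepted");
            }
        }
    }
}
```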

Let’s see how it works. First you enter the URL of your app; in this case we are using wolof (the tool we use for our backlog), a Ruby app that speaks the WS-Fed protocol.

    image

After clicking the Southworks logo and entering my Active Directory account credentials, ADFS returns the token and it is POSTed to the app. At that moment we intercept it and show it.

    image

You can do two things with the token: send it via email (to someone who can read it :)) or continue with the usual flow. If there is another STS along the way, it will also show a second token.

    image

    image

Since I wanted to have this app handy, I enabled ClickOnce deployment and deployed it to AppHarbor (which works really well, btw).

If you want to use it, browse to and launch the ClickOnce app at http://miller.apphb.com/

If you want to download the source code or contribute: https://github.com/federicoboerr/token-requestor