All posts by Johnny Halife

HTTP 410 (Gone) – Johnny’s blog has moved….

Hi Folks,

I’ve just moved my blog to a new location. You can see how, why and when by reading my last post, “Jekyll is rocking my new blog”.

I won’t be posting here anymore, so if you want to keep reading me, point your RSS reader to the new feed or bookmark the new site.


Windows PowerShell Web Access now available!

In late October 2010, Tim, Geoff and I went to meet Sam (Sr. Dev Lead from Windows Manageability) to talk about a potential engagement on a Web UI project for Windows PowerShell 3.0. We got really excited about the opportunity, and after eleven (11) weeks of hard work we helped Windows Manageability deliver the first version of Windows PowerShell Web Access.


Imagine how hard it has been for me to keep the secret all this time. Today (September 15, 2011) I got really excited when I saw that Windows PowerShell Web Access is now public and available for you to use. I’m happier than ever: being the architect for the UX of a Windows product is a huge milestone in my professional career, and it’s really hard to express the satisfaction of seeing all of this become real.


I have loved PowerShell since the early days of DinnerNow with James Conard and David Aiken; we have always been advocates of Windows PowerShell, and you can find it somewhere in pretty much every other project we helped with in the past. With this new feature we helped the Windows team broaden the experience to new users, on any platform, anywhere: all it takes is a web browser, which can even be on a mobile device.


I won’t get into the details, as Jan Egil Ring has a great post on how to enable and play with the feature itself; I just wanted to share my happiness with everybody, and also to thank the whole team for this awesome delivery. These are the people who were involved in the project: Juan Pablo Garcia, the best HTML/CSS ninja, who can make you think you’re on the real console; Esteban Lopez, our test Jedi, who trapped all the bugs (before you, hopefully); and Pablo Marc, the JavaScript performance warrior, who incredibly increased its speed. These are the real people behind the product from Southworks, and it’s been more than a pleasure to work with people with such a level of professionalism, enthusiasm and technical knowledge.


Thanks and read you soon!

eBay shows some OData love!

Hot off the press! As Scott Gu just showed during the PDC 2010 keynote, you can now search eBay using a new OData API. The API is running on the Windows Azure platform and can be reached at

There’s also some interesting developer documentation to help you get started.

Beyond the great OData hype, it’s cool to mention that the API has been built using nothing but eBay’s existing APIs (if you want to learn how it’s built and what’s happening under the hood, I strongly recommend Pablo Castro’s talk about Building Real-World OData Services). You can search the catalog using the simple yet powerful search interface provided by the OData protocol, e.g. walking dead
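To make that search interface concrete: an OData query is just a URL with $-prefixed query options. Here is a small Ruby sketch of building one; the host and the Title property are hypothetical, not eBay’s actual schema:

```ruby
require 'cgi'

# Build an OData query URI: filtering, paging and response format are all
# expressed as $-prefixed query options defined by the protocol.
# (Host and property names are hypothetical.)
base = "http://example.cloudapp.net/Catalog/Items"
options = {
  "$filter" => "substringof('walking dead', Title)", # text match on Title
  "$top"    => "10",                                 # first 10 results
  "$format" => "json"                                # JSON instead of Atom
}
query = options.map { |k, v| "#{k}=#{CGI.escape(v)}" }.join("&")
url = "#{base}?#{query}"
```

Any HTTP client can then issue a plain GET against that URL, which is the whole point of the protocol.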

I’m also happy that this sees the light of day because there were great people behind it, and working with them is truly an inspiration, from the technical perspective and also from the human side. These people taught me a very important lesson about commitment, and about how you can always improve yourself by pushing harder and harder.

As part of this team there are three people I want to specially mention and thank, for all their contributions to my personal growth but also for letting me be part of a kick-ass team:

Juan Pablo & SebaRen. Words aren’t enough; thanks, thanks, thanks and more thanks. Never, ever in my whole life have I worked with two people like them. No matter the time, always running against the clock, these guys made all of this possible (most of this work is going public today, so stay tuned).

JC (Jonathan Carter). Lately we paired up on a bunch of stuff and became close friends, and even thousands of miles apart we kept a great communication channel. He taught me a bunch of valuable lessons while building the project. His ability to put himself on the customer/developer/whatever side is amazing, and it really adds value to the final result. I’ve never seen anyone who thinks that much (and focuses that much) on a developer’s reaction and feelings while using a library. This is one of the guys I want to work with no matter what we’re building, the language, or even the purpose.

#PDC10 just kicked off, and you’ve just seen a quick peek of what’s really going on. Stay tuned!


Iron Ruby @ Code Camp Buenos Aires, 2010

Hey folks, yesterday (Sept 3, 2010) I gave a talk at Code Camp Buenos Aires 2010; as the title of this post states, I spent some time talking about IronRuby. The primary focus of the talk was showing how to get the best of both worlds and how to work with them combined (C# / Ruby).

As I expected, the people who attended the talk were senior geeks who engaged with the topic and (hopefully) enjoyed a from-scratch writing of a testing framework =).

I want to thank everyone who attended, and Miguel Saez for the organization of the event, which as usual was flawless.

As promised here are the materials from the talk:

The presentation is also available on Code Camp 2010 – Iron Ruby “Paso a Paso”


Windows Azure Performance Gotchas #1, raising the throughput reducing the headache

Lately, we’ve been working on a Windows Azure project with huge load and really high peaks. During the project we had the following “gotcha moments”, which I’ll try to summarize throughout this post and a series of posts I expect to write.

If you are using WCF, you must tweak it

Some time ago there was a post about tweaking WCF that is still valid today. WCF is broken by default, and if you are planning to use it on Windows Azure (or even your own servers) you must apply all the performance optimizations, unless you are fine with a lousy 10 requests per second.

Gotcha #1: Follow the performance advice in “WCF Gotchas 3: Configuring Performance Options”
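For reference, the throttling knobs that post talks about live in the WCF service behavior configuration; a minimal sketch (the values here are illustrative, tune them per the post above):

```xml
<behaviors>
  <serviceBehaviors>
    <behavior name="highThroughput">
      <!-- WCF ships with low throttling defaults; raise them explicitly -->
      <serviceThrottling maxConcurrentCalls="256"
                         maxConcurrentSessions="256"
                         maxConcurrentInstances="256" />
    </behavior>
  </serviceBehaviors>
</behaviors>
```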

RetryPolicy can be evil

RetryPolicy is the mechanism used by the Windows Azure Storage Client to shield users from the service’s own failures. While the idea itself rocks, the implementation wasn’t necessarily designed for your scalability needs.

When writing high-traffic services you want to keep the number of threads to a minimum, or at least have all of them identified. The built-in RetryPolicy hides the underlying complexity of retrying when the service fails, but it also hides its thread usage from you, which at this scale is critical.

Gotcha #2: Disable the built-in RetryPolicy

By using RetryPolicies.NoRetry you can prevent your app from creating threads just to ensure that an action has been executed, and if you need your app to retry on an eventual service availability issue, write your own policy.

If you need inspiration, you can check these snippets for RetryPolicy, plus another thing you should consider when doing this kind of work: adding an extension method to identify whether an exception is “retry-able” or not.
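Those snippets target the .NET Storage Client, but the idea of an explicit, bounded retry translates to any stack. Here is a minimal Ruby sketch of the pattern; the helper name and the list of retry-able errors are hypothetical:

```ruby
require 'timeout'

# Errors we consider transient and therefore worth retrying; anything
# else propagates immediately. (Illustrative list, not exhaustive.)
RETRYABLE_ERRORS = [Timeout::Error, Errno::ECONNRESET]

# Re-runs the block synchronously on transient failures only, a bounded
# number of times: no hidden threads, no surprises under load.
def with_retries(max_attempts = 3)
  attempts = 0
  begin
    attempts += 1
    yield
  rescue *RETRYABLE_ERRORS
    retry if attempts < max_attempts
    raise
  end
end

result = with_retries(3) { "storage call result" }
```

The point of writing it yourself is exactly that everything is visible: how many attempts, which errors, and on which thread.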

Let Windows Azure Storage handle it

“Cloud Computing” brought computing to a whole new level: nowadays developers are able to tell how much a design decision costs. This is stunning; all those performance optimizations that you always wanted but never had the chance to implement now have a strong economic justification (or not).

Whenever you’re exposing data to a client through a (web) service in Windows Azure, you’re paying for I/O and compute hours. These costs are distributed as transfer from/to Storage to the service, and from/to the service to the client.

Now consider the following: the reference data of your application (data that is pretty much the same for all users) can be consumed by the client straight from Storage instead. The cost distribution then changes radically to just I/O from Windows Azure Storage to the client: no more compute time, and no scalability headaches.

Gotcha #3: Deflect load to Windows Azure Storage as much as you can

Redirecting the load to Windows Azure Storage not only saves you some bucks on computing power and I/O, but also takes away the pain of having to scale your services, since Microsoft is responsible for scaling it.

When doing this, remember that Windows Azure Storage is RESTful, so all the optimizations that can be performed at the transport level (like caching, expiration, gzip, etc.) fit perfectly here; and if it’s a JavaScript client, take a look at JSON(P) (JSON is much more efficient than XML).

Livin’ on the edge

Experience has proven that if you stay on the latest VM image shipped by the Windows Azure team, your performance and stability will increase as new images ship.

Gotcha #4: Configure your Hosted Service Deployment to use the latest-available Virtual Machine

Unless your code has compatibility issues with .NET 4, or issues with .NET 3.5 SP(x), you should always live on the latest VM image. This can be configured using the Windows Azure MMC by setting the Virtual Machine Image Version to “*” (star).
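If you manage deployments through the service configuration file rather than the MMC, the same setting can, to my knowledge, be expressed there directly; a sketch with hypothetical service and role names:

```xml
<!-- ServiceConfiguration.cscfg: osVersion="*" keeps the deployment on
     the latest available guest OS image automatically -->
<ServiceConfiguration serviceName="MyService" osVersion="*"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="MyWebRole">
    <Instances count="2" />
  </Role>
</ServiceConfiguration>
```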

I expect to continue writing about the different patterns, gotchas and stuff we figured out while working on Windows Azure, so stay tuned!


self rebase #01

Taken from the rebase concept in git, which is also used by GitHub to show their newsworthy and notable projects, I’m using this post to do the same with a bunch of open source, shared, hacking projects I’ve been doing lately.

Since 2010 started I haven’t blogged that often, but there are a couple of projects I’ve been working on lately. Throughout this post I’ll describe each one and its future.

Every piece of feedback will be welcome, as will every other contribution.

Enjoy the ride,

Rack::Auth::WRAP, the OAuth WRAP Middleware

Yesterday, Juan Pablo and I published the first version of Rack::Auth::WRAP, our Rack middleware for OAuth WRAP. If you are familiar with the protocol you can skip the next section; if not, take a look at it. Extracted from the readme:

What the heck is WRAP?

Web Resource Authorization Protocol (WRAP) is a profile of OAuth, also called OAuth WRAP. While similar in pattern to OAuth 1.0A, the WRAP profile(s) have a number of important capabilities that were not available previously in OAuth. This specification is being contributed to the IETF OAuth WG.

This same group also owns the specification for SWT (Simple Web Token); for more information, read the spec or visit the group.

The latest specification for the complete protocol can be found at the Google Group, as HTML (properly RFC-formatted).
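To make the token format concrete, here is a sketch of the check a WRAP-protected resource performs on an SWT: split off the HMACSHA256 value, recompute it over the rest of the token with the shared key, and compare. This is simplified (a real check also validates Issuer, Audience and ExpiresOn, and the key handling may differ), and all names and values are hypothetical:

```ruby
require 'openssl'
require 'base64'
require 'cgi'

# Simplified SWT signature check: the token is a form-encoded string whose
# last pair is HMACSHA256=<signature computed over everything before it>.
def valid_swt?(token, shared_secret)
  payload, signature = token.split("&HMACSHA256=")
  return false unless signature
  expected = Base64.encode64(
    OpenSSL::HMAC.digest(OpenSSL::Digest.new('sha256'), shared_secret, payload)).strip
  CGI.unescape(signature) == expected
end

# Build a demo token signed with a demo key (both values hypothetical)
key = "demo-secret"
token = "Issuer=urn%3Ademo-issuer&Audience=http%3A%2F%2Flocalhost%3A4567"
sig = Base64.encode64(OpenSSL::HMAC.digest(OpenSSL::Digest.new('sha256'), key, token)).strip
token += "&HMACSHA256=#{CGI.escape(sig)}"
```

With a valid key the check passes; with any other key the recomputed signature differs and the request is rejected.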

Creating your first protected resource

As you might be thinking, our first resource will be a Sinatra application.

First of all we need to install the gem:

[sudo] gem install rack-oauth-wrap

To make the sample easier, let’s create our own shared key; we can all share this one for demo purposes: NjkzNTczOTAtMDA2MC0wMTJkLTQ1M2YtMDAyMzMyYjFmYWY4

So let’s start by creating the protected resource

require 'rubygems'
require 'sinatra'
require 'rack/auth/wrap'

use Rack::Auth::WRAP, :shared_secret => "NjkzNTczOTAtMDA2MC0wMTJkLTQ1M2YtMDAyMzMyYjFmYWY4",
                      :audiences => "http://localhost:4567",
                      :trusted_issuers => "urn:demo-issuer"

get "/" do
    if @env["REMOTE_USER"]
        return "You are authenticated as #{@env["REMOTE_USER"]['Email']}"
    else
        return "You are an unauthenticated user"
    end
end

Now we can start this in a terminal (cmd, or whatever) and jump to the consumer. But first, if you try it without sending a token (that is, before using the client we are going to build), you will get:

?> curl http://localhost:4567
You are an unauthenticated user

Now let’s create a client that accesses the protected resource with a token in the header (requires restclient):

require 'rubygems'
require 'cgi'
require 'base64'
require 'restclient'
require 'hmac/sha2'


shared_secret = "NjkzNTczOTAtMDA2MC0wMTJkLTQ1M2YtMDAyMzMyYjFmYWY4"

simple_web_token = {'Audience' => "http://localhost:4567",
                    'Issuer' => "urn:demo-issuer",
                    'ExpiresOn' => (Time.now.to_i + 60).to_s,
                    'Email' => ''}.map{|k, v| "#{k}=#{CGI.escape(v)}"}.join("&")

# Sign the token with the shared key configured on the middleware
signature = Base64.encode64(HMAC::SHA256.digest(shared_secret, simple_web_token)).strip
simple_web_token += "&HMACSHA256=#{CGI.escape(signature)}"

puts RestClient.get("http://localhost:4567/", "Authorization" => "WRAP access_token=#{CGI.escape(simple_web_token)}")

Now let’s try our client, and see if there’s any difference with the curl request:

?> ruby client.rb
You are authenticated as

As you can see, we have our first end-to-end Rack::Auth::WRAP sample.

DISCLAIMER: In a real-world application you won’t generate your own token as we are doing in the client code. We are doing it for demo purposes, but in your app you will probably get a token from an authorization server.

Both snippets are available as gists on GitHub: Protected Resource / Client. We are assuming that this is running on localhost:4567.

TODO’s and futures

In the upcoming days/weeks/months we are going to add middleware support for the other ways of getting the token, like the query string and/or the method body. We would also like to implement the Web Profile of WRAP, so stay tuned.

You can read the freshly published documentation at

Source Code available at:

OAuth WRAP 0.9 for Tcl

First of all, if you aren’t familiar with Tcl: it is “a scripting language created by John Ousterhout”, whose name originally comes from “Tool Command Language” but is conventionally rendered as Tcl. I encourage you to try it, and if you are interested, also read Where’s Tcl hiding?.

This project was born after half an hour of spiking on how hard it would be to parse a token on a bare Linux distro that only has Tcl. After we noticed that Tcl is a really straightforward language to design and prototype with, and fun to write, we packed this lib and made it available for anyone interested.

Here’s a snippet of the intended usage of the lib:

package require ::oauth::wrap

# the raw token from the Identity Provider
set rawToken "access_token=something&other_parameters_to_ignore"

# creates a configuration dictionary for the values
dict set configuration signingKey {valid_key}    ;# signing key used by the Identity Provider
dict set configuration issuer {valid_issuer}     ;# the identity provider URI
dict set configuration audience {valid_audience} ;# my application audience URI

# this will return the token when it's valid, otherwise it will return false
set token [oauth::wrap::authenticate $configuration $rawToken]

# at this point, if the token is valid, you can mess around with its claims
# that are returned in dictionary form
set name [dict get $token name]

It’s fun to give it a shot, check out the source code at

Windows Azure Storage for Ruby v1.0

On February 4th, 2010 I published version 1.0 of the Ruby gem I wrote for Windows Azure Storage. This time it includes a great contribution from my friend Juan Pablo Garcia Dalolla, who implemented the Windows Azure Tables support.

This version of the gem also includes support for the 2009-09-19 version of the storage API.

Here are some code snippets from the Windows Azure Tables support:

require 'waz-storage'
require 'waz-tables'

# The same connection of Windows Azure Storage Core (Queues, Blobs) can be reused
WAZ::Storage::Base.establish_connection!(:account_name => account_name,
                                         :access_key => access_key)

# Grab the service instance
service = WAZ::Tables::Table.service_instance

# Query the customer table
service.query('customer_table', {:expression => "(PartitionKey eq 'customer') and (Age eq 23)", :top => 15} )

# Insert something into the customer table
service.insert('customer_table', {:partition_key => 'customer', :row_key => 'my_custom_id', :name => 'johnny'})

There’s also a DataMapper adapter effort going on for Windows Azure Storage Tables; I recommend you check out Juan Pablo’s post about the Windows Azure Tables Adapter for DataMapper.

Source Code available at:

RDoc available at:

Windows Azure Storage SDK for Ruby v0.5.6 now available!

Hey Folks, after PDC 09′ I’m back on track doing some work on the Windows Azure Storage SDK for Ruby. In my Channel9 interview I told you that Ray Ozzie announced a new version of the Windows Azure Storage API (a.k.a. 2009-09-19), which enables developers to perform new tasks on the storage components.

These are the enhancements for the 0.5.6 version of the SDK. This release only covers the 2009-09-19 version of the Windows Azure Queues service; Blobs support will be released by the end of this week or early next week.

What’s new on the 0.5.6 version?

  • Added new shared key authentication support for 2009-09-19 Version of the Storage API
  • Queues API has been migrated to the 2009-09-19 Version of the Storage API
  • Added a new parameter for Listing Queues with Metadata
  • Added support for DequeueCount on messages being retrieved from the Queue
  • Known Issue: Creating a queue multiple times with the same metadata throws a 409.

Using the latest version of the Windows Azure Storage SDK for Poison Message detection

  WAZ::Storage::Base.establish_connection!(:account_name => 'name', :access_key => 'key')

  # Let's start by selecting a queue
  queue = WAZ::Queues::Queue.find('my-queue')

  # While the queue has messages, let's check the content
  # While the queue has messages, let's check the content
  while (queue.size > 0) do
    # Now let's dequeue a message as we usually do with the API
    message = queue.lock

    # Before processing the message we can do a sanity check:
    # if the message has been dequeued more than 5 times we destroy it,
    # since it can be a poison message.
    if (message.dequeue_count > 5)
        message.destroy!
    else
        # Otherwise we process the message as we usually do
    end
  end
The key feature in this version of the API is the dequeue_count property on queue messages, which serves the primary goal, as shown in the sample above, of poison message detection.

Stay tuned for further updates on the SDK…


Windows Azure Storage SDK for Ruby on Channel9

Since the first release of the Windows Azure Storage SDK for Ruby, I’ve received retweets and comments from Jean-Christophe Cimetiere (Sr. Technical Evangelist, Interop Strategy Team at Microsoft), but we never met in person until Ezequiel Glinsky put us in contact.

I traveled to PDC 09′ to work on the keynote and other stuff for the event. Thanks to JC, I was able to record a short demo/interview with him for Channel9. The interview is for those who don’t know the work we have been doing, and also to give you an idea of where this work is going.

The SDK is still in beta, and we’ve already implemented a couple of features of the 2009-09-19 version of the API announced by Ray Ozzie at Microsoft PDC 09′. I hope you like it, and any comments are always welcome.

As usual, thanks to my team: Juan Pablo Garcia and Ezequiel Morito, major advisors and contributors to this project.


PDC 09′ – Tales from the Trenches

Convergence Demo

About a month ago, I left my home in Buenos Aires to start working with James Conard and his team on the “Cloud Convergence” demo and some other PDC 09′ stuff. Today, from the moment Ray Ozzie took the stage, we were waiting backstage for our demo to become real. Before ours there were a couple of other demos, especially (and I’ll talk about this later in this post) “Tailspin Travel”.

When we got to the Redmond area, we met Jonathan Carter (a.k.a. JC), who was working with PC (a fellow Southy) on the “Tailspin Travel” demo for the keynote, which was shown by Cameron Skinner. Tailspin’s demo represents most of the “best practices” you should apply when building an ASP.NET MVC web application with features like back-end support with Workflow and other WCF services, and integration with your enterprise identity using the WS-Federation protocol.

Imagine how good the demo was: we took that same application to do our work on the “Future of the Platform”. Additionally, they did something great with the application: since the keynote it has been available for you to download and learn the “goodies” of .NET 4/WIF/AppFabric development.

Tailspin Travel


The Tailspin Travel application covers a pretty substantial set of functionality, but ultimately seeks to provide a holistic perspective on how Visual Studio 2010, .NET Framework 4 and the server platform can be used together. Among other technologies, it leverages Visual Studio 2010 (with the new fancy diagrams) and .NET 4.0 (MVC2, WIF, WF, WCF, and so on), and shows the hosting layer of your services using the brand-new “AppFabric”.

I guess at this point of the post you should be rushing to get the bits, and if you haven’t, go now and get them at

Meet me at PDC


I’m at PDC with other fellow Southies; we are around all the time, and if you want to talk and share a coffee, just look for me or DM @johnnyhalife on Twitter and we will meet for sure.

Today I was interviewed by Jean-Christophe Cimetiere about the work I’m doing with Ruby and Azure. Stay tuned for more information that I’ll be announcing soon, like support for the 2009-09-19 version of the Azure API in the Ruby SDK.


Although Matías, PC and I were backstage, there were lots of Southies who made these PDC events work great. I sincerely appreciate the work all the Southies did, and how they helped us.

hope to see you around,


Issues running Windows Azure SDK (Ruby/C#) from Local Time when GMT -3 change didn’t happen

I was working on the Ruby Windows Azure SDK when I started receiving HTTP/1.1 403 errors. I figured out what the problem is, so here is a simple issue analysis.

While trying to connect to Windows Azure (Storage, at least) from Argentina, you get 403 errors even though the message seems to be formatted correctly.

Argentina was supposed to change the time by adjusting it to DST (Daylight Saving Time); one day before that was due to happen, the Argentinean government decided not to make the change, generating lots of implications for computer systems already adjusted to DST.

Since Windows Azure relies on a UTC timestamp when making an assertion on the signature, the current time derived from GMT-3 isn’t what the server side expects, causing the whole message to fail the signature assertion.
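To see why the hour matters: the same wall-clock time interpreted with the wrong UTC offset produces a UTC timestamp that is a full hour off, which is exactly the kind of skew the server-side signature check rejects. A small Ruby illustration (the date is arbitrary):

```ruby
require 'time'

# The same local wall-clock instant, interpreted with the correct offset
# (GMT-3) versus the DST offset that never actually happened (GMT-2).
correct = Time.parse("2009-10-18 12:00:00 -03:00")
wrong   = Time.parse("2009-10-18 12:00:00 -02:00")

# The derived UTC timestamps differ by a full hour, far beyond the
# clock skew a server-side signature assertion will tolerate.
skew_seconds = (correct.utc - wrong.utc).to_i
```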

There’s no apparent solution yet, but there are a couple of workarounds that let you properly develop against Windows Azure from a GMT-3 time zone that didn’t make the change.

  • Change your time zone to GMT-4. I switched my computer to Halifax, Canada, which is GMT-4, and Windows Azure started accepting my requests.
  • Keep your computer’s clock one hour ahead. Keep the current time zone without tweaking the date/time, but remember that you will be out of sync with the country.

Both of the workarounds listed above proved to work. I will stick with GMT-4, since I guide myself (eating, sleeping, and all that) by my computer’s clock.

Hope it helps,