All posts by Federico Boerr

Sliding sessions in SharePoint 2010

The scenario

In a SharePoint federated scenario, the user session has the same validity time as the SAML token.

If the user is inactive during a certain period of time, the session must expire.

Implementation in SharePoint

To achieve this behavior, SharePoint provides a configuration called LogonTokenCacheExpirationWindow.

The way it works is detailed in the chart below.



Re-issuing the token on every request to the server may carry a performance penalty, so the code below is optimized to re-issue the session token only after a certain period of time. Note that, with this approach, the inactivity time after which the user is signed out is half of the LogonTokenCacheExpirationWindow.

For example, if the LogonTokenCacheExpirationWindow is 40 minutes:

  • For the first 20 minutes the token is not reissued.
  • If the user interacts with the server during the last 20 minutes, a new session token is issued.
  • If the user is inactive during the last 20 minutes, he will be signed out.
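The timing rule above can be modeled outside SharePoint. This illustrative Python sketch (the names are mine, not SharePoint APIs) reproduces the halving behavior:

```python
from datetime import datetime, timedelta

def should_reissue(valid_from, valid_to, expiration_window, now):
    """Return True only in the second half of the token's effective validity window.

    valid_from / valid_to: the session token's validity bounds.
    expiration_window: the LogonTokenCacheExpirationWindow.
    """
    effective_to = valid_to - expiration_window
    half_way = valid_from + (effective_to - valid_from) / 2
    return half_way < now < effective_to

# 40-minute window, 80-minute token: the effective window is 40 minutes,
# and a new token is only issued during its second half (minutes 20-40).
start = datetime(2011, 1, 1, 12, 0)
window = timedelta(minutes=40)
token_to = start + timedelta(minutes=80)
print(should_reissue(start, token_to, window, start + timedelta(minutes=10)))  # False
print(should_reissue(start, token_to, window, start + timedelta(minutes=30)))  # True
```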

The Global.asax of the SharePoint website has to be replaced/updated with the following code:

<%@ Application Language="C#" Inherits="Microsoft.SharePoint.ApplicationRuntime.SPHttpApplication" %>
<%@ Import Namespace="System" %>
<%@ Import Namespace="Microsoft.IdentityModel.Web" %>
<%@ Import Namespace="Microsoft.SharePoint.IdentityModel" %>
<script Language="C#" RunAt="server">

public override void Init()
{
    base.Init();

    SessionAuthenticationModule sam = FederatedAuthentication.SessionAuthenticationModule;
    sam.SessionSecurityTokenReceived += SessionAuthenticationModule_SessionSecurityTokenReceived;
}

private void SessionAuthenticationModule_SessionSecurityTokenReceived(object sender, SessionSecurityTokenReceivedEventArgs e)
{
    double sessionLifetimeInMinutes = (e.SessionToken.ValidTo - e.SessionToken.ValidFrom).TotalMinutes;
    TimeSpan logonTokenCacheExpirationWindow =
        Microsoft.SharePoint.Administration.Claims.SPSecurityTokenServiceManager.Local.LogonTokenCacheExpirationWindow;

    DateTime now = DateTime.UtcNow;
    DateTime validTo = e.SessionToken.ValidTo - logonTokenCacheExpirationWindow;
    DateTime validFrom = e.SessionToken.ValidFrom;

    // Re-issue only during the second half of the effective validity window.
    if ((now < validTo) && (now > validFrom.AddMinutes((validTo - validFrom).TotalMinutes / 2)))
    {
        SPSessionAuthenticationModule spsam = sender as SPSessionAuthenticationModule;
        e.SessionToken = spsam.CreateSessionSecurityToken(
            e.SessionToken.ClaimsPrincipal,
            e.SessionToken.Context,
            now,
            now.AddMinutes(sessionLifetimeInMinutes),
            e.SessionToken.IsPersistent);
        e.ReissueCookie = true;
    }
}

</script>

Updating the LogonTokenCacheExpirationWindow in SharePoint using PowerShell

To update the LogonTokenCacheExpirationWindow, run the following PowerShell.

This example shows how to set the window time to 40 minutes:

$sts = Get-SPSecurityTokenServiceConfig
$sts.LogonTokenCacheExpirationWindow = (New-TimeSpan -Minutes 40)
# Persist the change
$sts.Update()



Create a WCF Data Service (OData) to share an Azure Table

The Open Data Protocol (OData) is a new protocol for querying and updating data, and its site lists the services that already support it. Windows Azure Table Storage is one of them, but using that endpoint directly requires the storage key.

Sharing an Azure Table is easy using WCF Data Service and Azure SDK.

1. First, create the class that the ADO.NET Data Service will use to build the service definition. Every IQueryable property in this class becomes a collection shared by the service.

Here is the class that we will use in our service, called AzureTableContext. The only collection that will be exposed is the Menu (public IQueryable<MenuItemRow> Menu).

namespace Sample.OData
{
    using System;
    using System.Configuration;
    using System.Data.Services.Common;
    using System.Linq;
    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;

    public class AzureTableContext
    {
        private readonly TableServiceContext tableContext;

        public AzureTableContext()
        {
            // Tell the SDK how to resolve configuration settings (here, from appSettings).
            CloudStorageAccount.SetConfigurationSettingPublisher(
                (configName, configSetter) =>
                    configSetter(ConfigurationManager.AppSettings[configName]));

            var account = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");

            this.tableContext = new TableServiceContext(account.TableEndpoint.ToString(), account.Credentials);
        }

        public IQueryable<MenuItemRow> Menu
        {
            get { return this.tableContext.CreateQuery<MenuItemRow>("Menu").AsTableServiceQuery(); }
        }
    }

    [EntityPropertyMapping("Name", SyndicationItemProperty.Title, SyndicationTextContentKind.Plaintext, true)]
    public class MenuItemRow : TableServiceEntity
    {
        public string Name { get; set; }
        public string Description { get; set; }
        public DateTime CreatedOn { get; set; }
    }
}

The EntityPropertyMapping attribute has been added so that the Name of the MenuItemRow is displayed as the Title when browsing the service from a web browser.

2.  The Cloud Storage Account configuration is read from the web.config. Make sure that the following setting is in your site’s configuration:

Web.config file configuration

<add key="DataConnectionString" value="UseDevelopmentStorage=true" />

3. Right-click an existing website and select "Add" > "New Item". In the textbox at the top-right corner, type: "WCF Data Service".


4. Add the following code to the class that has been auto-generated by the wizard.

namespace Sample.OData
{
    using System.Data.Services;
    using System.Data.Services.Common;

    [System.ServiceModel.ServiceBehavior(IncludeExceptionDetailInFaults = true)]
    public class WcfDataService1 : DataService<AzureTableContext>
    {
        // This method is called only once to initialize service-wide policies.
        public static void InitializeService(DataServiceConfiguration config)
        {
            // TODO: set rules to indicate which entity sets and service operations are visible, updatable, etc.
            config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
            config.SetServiceOperationAccessRule("*", ServiceOperationRights.All);
            config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
            config.UseVerboseErrors = true;
        }
    }
}

The ServiceBehavior attribute has been added to help with debugging. Setting UseVerboseErrors in the data service configuration is also very useful to get better error messages.

The “*” in the SetEntitySetAccessRule and SetServiceOperationAccessRule configurations will allow querying all the entities.

5. Run the solution and browse to the service’s web page. The service definition is displayed.


Given that we have exposed the Menu collection, we can browse it by appending Menu to the end of the service's URL.

All the rows in the Menu table are displayed as an atom feed.
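Once the feed is exposed, OData's system query options ($top, $filter, $orderby, …) can be appended to the collection URL. A small Python helper to compose such URLs; the service address below is illustrative:

```python
from urllib.parse import urlencode

def odata_url(service_root, entity_set, **options):
    """Compose an OData query URL from system query options ($top, $orderby, ...)."""
    # OData system query options are prefixed with '$'; keep it unescaped for readability.
    query = urlencode({"$" + name: value for name, value in options.items()}, safe="$")
    url = service_root.rstrip("/") + "/" + entity_set
    return url + "?" + query if query else url

root = "http://localhost/WcfDataService1.svc"
print(odata_url(root, "Menu"))                       # plain feed
print(odata_url(root, "Menu", top=2, orderby="Name"))  # first 2 items by Name
```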


Windows Azure Storage: TDD and mocks

During the last months, we have been working on a sample application for the Windows Azure Architecture Guide.

One of the challenges on the development side was to develop the majority of the sample application following TDD practices.

This post shows how we mocked Azure Storage tables by using an IAzureTable interface. Similar interfaces have been developed for queues (IAzureQueue) and blobs (IAzureBlobContainer).

Thanks Johnny Halife (@johnnyhalife), Juan Pablo Garcia (@jpgd) and Scott Densmore (@scottdensmore). These classes have been designed after long discussions with you guys so you also hold credit for them.

Directly using WindowsAzure.StorageClient (from Windows Azure SDK)

When developing applications for Windows Azure, the most commonly used library for accessing Azure Storage is the Storage Client (Microsoft.WindowsAzure.StorageClient) that comes as part of the Windows Azure SDK.

As an example, let's imagine we are developing the SurveyStore (this class is part of the Storage component in the diagram above). This class is called from the controllers and interacts with the persistence stores.


SurveyStore using WindowsAzure.StorageClient

public class SurveyStore : ISurveyStore
{
    private readonly CloudStorageAccount account;

    public SurveyStore(CloudStorageAccount account)
    {
        this.account = account;
    }

    public IEnumerable<Survey> GetSurveysByTenant(string tenant)
    {
        TableServiceContext context = this.CreateContext();

        var query = (from s in context.CreateQuery<SurveyRow>("SurveysTable")
                     where s.PartitionKey == tenant
                     select s).AsTableServiceQuery();

        return query.Execute().Select(surveyRow => new Survey(surveyRow.SlugName)
        {
            Tenant = surveyRow.PartitionKey,
            Title = surveyRow.Title,
            CreatedOn = surveyRow.CreatedOn
        });
    }

    private TableServiceContext CreateContext()
    {
        var cloudTableClient = new CloudTableClient(this.account.TableEndpoint.ToString(), this.account.Credentials);
        return cloudTableClient.GetDataServiceContext();
    }
}

Testing this implementation

If we want to write a test for this method, it has to be a functional test, because there is no way to mock the calls to CloudTableClient and TableServiceContext.

Every time we run the test, we have to:

1. Ensure the data we will query is exactly what we are expecting to get (2 calls to the real Surveys Table in the Azure storage)

2. Call GetSurveysByTenant (1 call to the real Surveys Table in the Azure storage)

3. Assert that we got what we were expecting from the store

public void GetSurveysByTenant()
{
    var expenseContext = AzureStorageHelper.GetContext();

    var expected = new Expense { Tenant = "Tenant" /* ... initialize other properties ... */ };
    expected.Details.Add(new ExpenseItem { /* ... initialize all properties ... */ });
    AzureStorageHelper.DeleteExpenseAndItemsById(expenseContext, expected.Id);
    AzureStorageHelper.SaveExpense(expenseContext, expected);

    var store = new ExpenseStore();
    var expenses = store.GetSurveysByTenant("Tenant");

    Assert.AreEqual(1, expenses.Count());
    var actual = expenses.Single(e => e.Id == expected.Id);
    Assert.AreEqual("Tenant", actual.Tenant);
    // (Assert other properties ...)
}

Wrapping WindowsAzure.StorageClient with an IAzureTable

If our development is driven by TDD, or we just want to write unit tests for any class that has to interact with Windows Azure Storage, the implementation shown above is a problem.

Working with CloudTableClient or TableServiceContext directly does not allow us to write *unit* tests. Testing the previous implementation implied testing not only the SurveyStore class code, but also the code that accesses the Windows Azure table itself. The only way to test just the SurveyStore code is to write stubs and mocks for the code that accesses the Windows Azure table.

The implementation below also follows good object-oriented design: "Program to an 'interface', not an 'implementation'", with all the advantages that brings.

SurveyStore using IAzureTable

public class SurveyStore : ISurveyStore
{
    private readonly IAzureTable<SurveyRow> surveyTable;

    public SurveyStore(IAzureTable<SurveyRow> surveyTable)
    {
        this.surveyTable = surveyTable;
    }

    public IEnumerable<Survey> GetSurveysByTenant(string tenant)
    {
        var query = from s in this.surveyTable.Query
                    where s.PartitionKey == tenant
                    select s;

        return query.ToList().Select(surveyRow => new Survey(surveyRow.SlugName)
        {
            Tenant = surveyRow.PartitionKey,
            Title = surveyRow.Title,
            CreatedOn = surveyRow.CreatedOn
        });
    }
}

Testing this implementation

Testing this implementation is easier and lets us focus on only one part at a time.

In this example, we are only testing that the method GetSurveysByTenant correctly copies the title from the row read from the IAzureTable (SurveysTable) to the returned survey.

By mocking the IAzureTable, we can set up what the Query property will return, so there is no need to interact with the Windows Azure storage itself. Remember that in the previous implementation we had to make 3 calls to the Windows Azure table; here, we make none.

public void GetSurveysByTenantReturnsTitle()
{
    var surveyRow = new SurveyRow { PartitionKey = "tenant", Title = "title" };
    var surveyRowsToReturn = new[] { surveyRow };
    var mock = new Mock<IAzureTable<SurveyRow>>();
    mock.SetupGet(t => t.Query).Returns(surveyRowsToReturn.AsQueryable());
    var store = new SurveyStore(mock.Object);

    var actualSurveys = store.GetSurveysByTenant("tenant");

    Assert.AreEqual("title", actualSurveys.First().Title);
}

Escaping illegal characters for Azure row key and partition key

It took some hours of troubleshooting to track down a Bad Request (400) response from Azure storage when trying to add an object (context.AddObject).

The cause was simple: the partition key and/or the row key contained characters disallowed in key fields.

In my case, I was trying to use the user name as the partition key: “ADATUM\Mary”.

Trying to find a generic solution, I URL-encoded the user name, so the partition key ended up being “ADATUM%5CMary”. This value could be inserted but could not be deleted (the percent sign is not listed as an illegal character, but it still causes trouble).

At this point, I had to decide whether a custom Escape method was needed or whether I would encode the partition key as Base64.

Decision table

  • Base64 encoding
    Pros: the solution is generic; easy and fast to implement.
    Cons: the output is not human-readable text.
  • Custom escaping method
    Pros: total control over the output.
    Cons: takes time and effort; will need maintenance.

The decision was to use Base64 encoding. (One caveat: the standard Base64 alphabet includes the forward slash, which is itself disallowed in key fields, so a URL-safe Base64 variant is the safer choice in general.)

The methods for encoding/decoding and the usage can be found below.

Encode/Decode methods

public static string EncodeKey(string key)
{
    if (key == null)
    {
        return null;
    }

    return Convert.ToBase64String(System.Text.Encoding.UTF8.GetBytes(key));
}

public static string DecodeKey(string encodedKey)
{
    if (encodedKey == null)
    {
        return null;
    }

    return System.Text.Encoding.UTF8.GetString(Convert.FromBase64String(encodedKey));
}
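For comparison outside .NET, the same round trip can be sketched in Python (illustrative, not part of the sample's code):

```python
import base64

def encode_key(key):
    """Base64-encode a key so disallowed characters (e.g. backslash) never reach Azure."""
    if key is None:
        return None
    return base64.b64encode(key.encode("utf-8")).decode("ascii")

def decode_key(encoded_key):
    """Reverse of encode_key."""
    if encoded_key is None:
        return None
    return base64.b64decode(encoded_key).decode("utf-8")

print(encode_key("ADATUM\\Mary"))               # QURBVFVNXE1hcnk=
print(decode_key(encode_key("ADATUM\\Mary")))   # ADATUM\Mary
```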

Testing the Encode/Decode methods

public void EncodeAndDecodeKey()
{
    var encodedPartitionKey = Repository.EncodeKey(@"ADATUM\Mary");

    var actualPartitionKey = Repository.DecodeKey(encodedPartitionKey);
    Assert.AreEqual(@"ADATUM\Mary", actualPartitionKey);
}

Using the Encode method for querying by partition key

var context = new ExpenseDataContext(this.account);

var query =
    (from expense in context.Expenses
     where expense.PartitionKey.CompareTo(EncodeKey(@"ADATUM\Mary")) == 0
     select expense).AsTableServiceQuery();

return query.Execute();

Composition in Windows Azure Table Storage: choosing the row key and simulating StartsWith

For a couple of months I've been working with Eugenio Pace, Scott Densmore and Matias Woloski, among others, on creating the Windows Azure Application Guidance.

The business stories of the project are detailed in Eugenio's blog, so if you want to get familiar with it, I'd recommend starting with this post.

The motivation for this post started when we left relational databases behind and started using Windows Azure Storage (replacing the data tier with Azure Table Storage).

The model

As you can see in this simple model, there is a composition relationship (1 to n) between Expense (1) and ExpenseItem (n). Note that the relationship is not aggregation because the expense items make no sense unless the expense referencing them exists.


In a relational database this is easy: foreign keys can be used. In Azure Table Storage you have to implement this yourself with the row keys in both tables.

Querying the repository

Now let’s take a look at how we implemented the repository method GetExpenseById:

  1. Get the expense by its Id
  2. Get the expense items with the expense Id

#1 is straightforward. Get the expense by its Id:

var expenseQuery =
    (from e in context.Expenses
     where e.RowKey == expenseId.ToString()
     select e).AsTableServiceQuery();

#2 is trickier. Get the expense items with the expense Id:

char charAfterSeparator = Convert.ToChar(Convert.ToInt32('_') + 1);
var nextId = expenseId.ToString() + charAfterSeparator;
var expenseItemQuery =
    (from expenseItem in context.ExpenseItem
     where
         expenseItem.RowKey.CompareTo(expenseId.ToString()) >= 0 &&
         expenseItem.RowKey.CompareTo(nextId) < 0 &&
         expenseItem.PartitionKey.CompareTo(expenseRow.PartitionKey) == 0
     select expenseItem).AsTableServiceQuery();

How the entities are stored

To understand how this works, we first need to understand how the expense and the expense items are stored.

  1. Storing the Expense row:
    1. Expense Id = Expense row key = new Guid (in the real implementation we are using an inverted timestamp)
  2. Storing the Expense item row
    1. Expense item Id = new Guid
    2. Expense item row key = (Expense Id + "_" + Expense item Id)

Simulating RowKey.StartsWith

Now we can understand how the nextId is calculated in the get expense item query with the lines:

char charAfterSeparator = Convert.ToChar(Convert.ToInt32('_') + 1);
var nextId = expenseId.ToString() + charAfterSeparator;

As you can see, the query simulates what would be a more natural approach, which is still not supported in Azure Table Storage: a RowKey.StartsWith filter on the expense Id.
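The whole trick (composing the row key and bounding the range) can be sketched generically; this Python model uses shortened, made-up ids for illustration:

```python
SEPARATOR = "_"
CHAR_AFTER_SEPARATOR = chr(ord(SEPARATOR) + 1)  # '`', the character that sorts right after '_'

def compose_row_key(expense_id, item_id):
    """Row key of an expense item: parent expense id + separator + item id."""
    return expense_id + SEPARATOR + item_id

def items_for_expense(expense_id, row_keys):
    """Simulate RowKey.StartsWith(expense_id) with two lexicographic comparisons,
    since Azure Table Storage only supports range comparisons on keys."""
    next_id = expense_id + CHAR_AFTER_SEPARATOR
    return [key for key in row_keys if expense_id <= key < next_id]

keys = [
    compose_row_key("A059", "item-1"),
    compose_row_key("A059", "item-2"),
    compose_row_key("B123", "item-1"),
]
print(items_for_expense("A059", keys))  # ['A059_item-1', 'A059_item-2']
```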


As an example, only the highlighted rows would be returned from the Expense Item table when calling GetExpenseById("A059D3C0-608A-45f7-B2CF-000000000000"):





The source code

Get the latest version of this code at:

The Manifesto for Software Craftsmanship

I’ve signed the Manifesto for Software Craftsmanship.

What is it about?

The answer provided by Micah Martin in the discussion group is the one I agree with most: "…a gentle push away from 'crap code' and toward craftsmanship."

This manifesto was created after the Agile Manifesto, but it is not intended to be an appendix or an addition to the former.

The principles in the Manifesto for Software Craftsmanship are stated as follows:

“Not only working software,

   but also well-crafted software

Not only responding to change,

   but also steadily adding value

Not only individuals and interactions,

   but also a community of professionals

Not only customer collaboration,

   but also productive partnerships”

Note that the manifesto states principles, not commandments. The difference between the two is clearly described in this post as follows:

“Rule based systems lend themselves to misinterpretation, misapplication and gaming, while systems based on values and principles empower individual choice and responsibility and lead to simple, elegant and appropriate solutions.”

Why did I sign?

I signed the Manifesto for Software Craftsmanship mainly for 2 reasons:

1. I agree with the principles

2. I encourage people in my environments (university, work) to follow them

I believe we have to think what we do, understand what we do, care about what we do and, in a broader sense, love what we do.

Agile by improvisation

Today I went to my car’s insurance company looking for a refund check (some issue with the car during holidays).

After the usual 10-minute wait, I got to a desk and presented my case, with the proper documents, to Mr. Payments (I don't know his real name, so this will be his name from now on). He was friendly, answered all my questions and handed me the refund check. Everything was running smoothly until I verified the amount on the check: they were paying me, by accident, only 20% of the expected refund. I was not happy with this, so he pointed me to the next desk, where I could settle my complaint.

At the complaints desk I found a queue of 5 people, attended by a single employee. As you probably know, complaints take over 15 minutes on average, so I forecast a minimum one-hour wait over the company's mistake. Twenty minutes in, I decided I had waited enough and went back to Mr. Payments' desk to fill in a customer dissatisfaction form against the company's bad policy of making me wait when the error was theirs.

Arriving at Mr. Payments' desk, I asked for a pen and paper. After providing both, he suggested that he could take care of passing my complaint along, and he wrote down my contact number to let me know when the new check was ready.

Instead of filling in a customer dissatisfaction form, I was glad to have my car insured with them.

In this situation, it was not Mr. Payments' direct responsibility to receive my complaint, but he understood that customer satisfaction is vital for his company, and by being agile (adapting to change and collaborating with the customer) he strengthened the relationship with a customer instead of losing one. He probably does not know what agile means, but he certainly practices it.

.Net AddIn Framework: Concurrency problem

Using the .NET System.AddIn framework in an environment that needs to handle concurrency may be a bad idea.

The problem is that the communication between the Host adapter and the AddIn adapter, which uses remoting, can handle only 2 concurrent connections.

This is because the implemented remoting communication opens only 2 channels, as shown in the picture below.

Remoting Channels

If more than 2 concurrent requests arrive at the Host adapter, the following exception is raised:

RemotingException: Port is busy (…) All pipe instances are busy.

Remoting Exception

A proposed approach to solve this issue would be to create an AddIn pool as shown below.

AddIn pool solution
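The pool idea itself is generic: pre-activate N add-in instances and lease them out so that no more than N requests ever reach one instance's remoting channels at once. A minimal, language-neutral sketch in Python (the add-in here is a stand-in object, not the .NET framework's types):

```python
import queue

class AddInPool:
    """Lease pre-created add-in instances; callers block while all are busy."""

    def __init__(self, factory, size):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(factory())  # activate instances up front

    def acquire(self):
        return self._pool.get()        # blocks until an instance is free

    def release(self, addin):
        self._pool.put(addin)          # return the instance for reuse

# Stand-in for an activated add-in instance.
pool = AddInPool(lambda: object(), size=2)
a = pool.acquire()
b = pool.acquire()
pool.release(a)
c = pool.acquire()  # reuses the released instance instead of opening a new channel
print(c is a)  # True
```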

.Net AddIn Framework structure

Addin Structure



Application component

The Application that will use the AddIn.

Host adapter component

Component that runs in the application's AppDomain.

It is the gateway between the application and the addIn.

AddIn adapter component

Component that runs in the AddIn's AppDomain.

It is the entry point of the addIn.

AddIn component

The AddIn that is executed.

The call to it is made by the AddIn Adapter.

For more information on the AddIn Framework:

Memory leaks: WPF application using VisualBrush

While researching on a memory leak created by WPF, I ran across this page:

A WPF application that uses a VisualBrush object in a RichTextBox control encounters a memory leak when you try to clean up the RichTextBox control

Hopefully, this post saves you some time. The solution is tricky, but it works.