All posts by Sebastian Durandeu

Tips for Running a TeamCity CI Server in Microsoft Azure

Note: Before reading this post, read this one, where the JetBrains team (creators of TeamCity and ReSharper, among other good stuff) explains how they’ve enhanced the scalability of their own server. Many of the tips below are based on those articles.

In this post I’m going to share some tips for running a TeamCity server end-to-end in Microsoft Azure. This includes the server virtual machine configuration, the agent virtual machines, the database, etc. I won’t cover instructions on how to install and configure TeamCity; instead I’ll focus mainly on specific advice on how to take advantage of Azure services to achieve better scalability and availability of the server. The key aspects will be:

Let’s get started! But let me first warn you that if you are ‘Penny Pinching in the cloud’, you’re not going to like this 🙂

The TeamCity Server

Virtual Machine

Not much to mention on this point. Notice that having the agents running in a separate virtual machine and the database on an external service makes it unnecessary to have super-fast hardware on the server virtual machine.


Database

With the release of SQL Database v12 in Azure, TeamCity can use Azure SQL Database as an external database. So we’ve created a SQL database server with v12 enabled and then created a database in the Premium P2 tier, which also provides ‘geo-replication’ out of the box.


Disks and Data

TeamCity Data Directory is the directory in the file system used by TeamCity server to store configuration settings, build results and current operation files. The directory is the primary storage for all the configuration settings and holds all the data that is critical to the TeamCity installation. The build history, users and their data are stored in the database.

The server should be configured to use different disks for the TeamCity binaries and the Data Directory, as follows:

  • The TeamCity binaries are placed in the C: drive of the virtual machine (OS Disk)
  • Since the Azure virtual machine D: drive is a solid-state drive (SSD) that provides temporary storage only, it’s not advisable to store the TeamCity data directory there. Instead, the data directory is placed on a separate, attached VHD disk (E:). For instructions on how to attach another disk to an Azure virtual machine, see this article.


Java Virtual Machine (JVM)

To avoid memory warnings it’s better to use the 64-bit JVM for the TeamCity server (see instructions here). TeamCity (both server and agent) requires JRE 1.6 (or later) to operate:


It’s also recommended to allocate the maximum possible memory: 4 GB (more information here). This mainly implies setting the ‘TEAMCITY_SERVER_MEM_OPTS’ Windows environment variable with the following values:

-Xmx3900m -XX:MaxPermSize=300m -XX:ReservedCodeCacheSize=350m


It’s always advisable to monitor the memory consumption from the Diagnostics page in the Server Administration section from time to time:


The TeamCity Agents

For better scalability of the server, it’s advisable to run the TeamCity agents in a different Azure virtual machine. The agents should only include the TeamCity agent software, plus all the software prerequisites to run builds (Visual Studio, Webdeploy, certificates, etc.).

To avoid configuring all the software prerequisites multiple times (once for each agent), you can do it once and then create a virtual machine image. You get the side benefit of being able to use this same image for the cloud agent configuration (more on this below). You can learn how to create/use an Azure virtual machine image in this article.

When you create the new agents based on the virtual machine image, you should remember to update the agent name in the ‘C:\BuildAgent\conf\’ file (see image below) and then restart the agent (from services.msc). Otherwise, all the agents will have the same name and will conflict when registering on the server machine. Also, make sure you open port 9090 in the Windows Firewall before creating the agent image.
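If you create many agents from the same image, the rename step can be scripted. Below is a quick Python sketch, illustrative only: on a default install the agent configuration file is typically buildAgent.properties under C:\BuildAgent\conf, but verify the name and path on your machine before using it.

```python
from pathlib import Path

def set_agent_name(properties_path, new_name):
    """Rewrite the 'name=' entry in a TeamCity agent properties file."""
    path = Path(properties_path)
    lines = path.read_text().splitlines()
    for i, line in enumerate(lines):
        if line.strip().startswith("name="):
            lines[i] = f"name={new_name}"
            break
    else:
        # No name entry yet: append one.
        lines.append(f"name={new_name}")
    path.write_text("\n".join(lines) + "\n")

# Usage (hypothetical path, run on each cloned agent VM):
# set_agent_name(r"C:\BuildAgent\conf\buildAgent.properties", "azure-agent-2")
```

After running this on each clone, restart the agent service so it registers under the new name.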


Using Cloud Agents

With the release of the Azure Integration plugin (included out of the box in TeamCity 9), you can use TeamCity cloud agents to provision agent virtual machines on demand based on the build queue state. You can have a set of ‘fixed’ agents (that are always running) and a set of cloud agents that start when more build power is needed.

TeamCity triggers the creation of a new agent virtual machine when it has more than one build in the queue and no available agents. It can also be triggered manually (see below). The maximum number of agents created can be configured, but TeamCity won’t create more than what the license allows. For example, to be able to scale from 3 to 6 agents using the cloud configuration, you need to have 6 TeamCity agent licenses available. At the same time, this can help you save some money on your cloud bill, as TeamCity will delete agents that sit idle for some time.
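TeamCity’s actual scheduling logic is internal to the plugin, but the policy described above can be sketched as a simple decision function (an illustration of the rules, not JetBrains’ code):

```python
def should_start_cloud_agent(queued_builds, idle_agents,
                             running_agents, max_agents, licensed_agents):
    """Illustrative sketch of the start policy described above:
    start a new cloud agent only when builds are waiting, no agent
    is idle, and both the configured maximum and the license limit
    still leave room for another agent."""
    limit = min(max_agents, licensed_agents)
    return queued_builds > 1 and idle_agents == 0 and running_agents < limit
```

For instance, with 6 licenses, a configured maximum of 6, and 3 agents all busy, two queued builds would trigger a new cloud agent; with only 3 licenses, they would not.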

The requirements to configure the plug-in include:

  • Downloading the publish settings file (browse here) and uploading the management certificate obtained from that file (text only) to the TeamCity virtual machine.
  • Configuring a cloud service that will be used to provision the virtual machines (you can create an empty cloud service)
  • Providing a virtual machine image name, the maximum number of agents to be created, the virtual machine size and a name prefix. TeamCity will use the prefix plus a number to set the virtual machine name, e.g. ‘tcbld-1’, ‘tcbld-2’.




For any questions, you can reach me on twitter @sebadurandeu.

Enabling SSL Client Certificates in ASP.NET Web API

You can get an excellent description of what client certificates are and how they work in this article – if you want to really understand this post, take a minute to read it. In a nutshell, client certificates allow a web app to authenticate users by having the client provide a certificate before the HTTP connection is established (during the SSL handshake). It’s an alternative to providing a username/password. As the article explicitly mentions, client certificates have nothing to do with HTTPS certificates – that means you can have HTTPS communication without client certificates. However, client certificates cannot work without enabling HTTPS on your site.

ASP.NET Web API can take advantage of client certificates to authenticate clients as well. In this post, I’ll walk you through the steps of configuring client certificates in IIS and testing them with a Web API. Please note that these steps should only be executed in a development environment; a production environment might require a more rigorous approach.

1. Let’s first create the necessary certificates (as explained here). To do this, open the Visual Studio Developer Command Prompt and run the following command. Type the certificate password when prompted. This first command creates a ‘Root Certification Authority’ certificate. If you want more details, you can read about these commands in this MSDN article.

[code language="powershell"]
makecert.exe -n "CN=Development CA" -r -sv DevelopmentCA.pvk DevelopmentCA.cer
[/code]

2. Install the DevelopmentCA.cer certificate in your Trusted Root Certification Authorities for the Local Machine store using MMC (right-click over the Trusted Root Certification Authorities folder | All Tasks | Import).


Note: For production scenarios, this certificate will obviously not be valid. You will need to get an SSL certificate from a Certificate Authority (CA). Learn more about this here.

3. Now let’s create an SSL certificate in .pfx format, signed by the CA created above, using the following two commands. The first command creates the certificate and the second converts the .pvk certificate containing the private key to .pfx. This certificate will be used as the SSL certificate.

[code language="powershell"]
makecert -pe -n "CN=localhost" -a sha1 -sky exchange -eku -ic DevelopmentCA.cer -iv DevelopmentCA.pvk -sv SSLCert.pvk SSLCert.cer

pvk2pfx -pvk SSLCert.pvk -spc SSLCert.cer -pfx SSLCert.pfx -po 123456
[/code]

4. Install the SSLCert.pfx certificate in the Personal store of Local Computer using MMC. Notice that the certificate shows it was issued by the Development CA.


5. Run this third command to create a private-key certificate signed by the CA certificate created above. The certificate will be automatically installed into the Personal store of Current User, as shown in the figure below. Notice also that the Intended Purpose shows Client Authentication.

[code language="powershell"]
makecert.exe -pe -ss My -sr CurrentUser -a sha1 -sky exchange -n "CN=ClientCertificatesTest" -eku -sk SignedByCA -ic DevelopmentCA.cer -iv DevelopmentCA.pvk
[/code]


6. Now let’s get into the code. A good place in ASP.NET Web API 2 to validate the client certificate is an ActionFilterAttribute, by calling GetClientCertificate on the request message (see some examples here). An action filter is an attribute that you can apply to a controller action — or an entire controller — that modifies the way in which the action is executed.

[code language="csharp" highlight="5"]
public override void OnActionExecuting(HttpActionContext actionContext)
{
    var request = actionContext.Request;

    if (!this.AuthorizeRequest(request.GetClientCertificate()))
    {
        throw new HttpResponseException(HttpStatusCode.Forbidden);
    }
}
[/code]

7. Use your local IIS to host your Web API code. Under the site configuration, open the site bindings and configure HTTPS using the SSL certificate created above. Select the ‘localhost’ certificate created in step 3.


8. Open the SSL Settings under your web site in IIS and select Accept. The options available are:

  • Ignore: Will not request any certificate (default)
  • Accept: IIS will accept a certificate from the client, but does not require one
  • Require: Require a client certificate – to enable this option, you must also select “Require SSL”


Changing this value will add the following section to ApplicationHost.config (by default located under C:\Windows\System32\inetsrv\config). The SslNegotiateCert value corresponds to the Accept option we selected before.

[code language="xml" highlight="4"]
<location path="subscriptions">
  <system.webServer>
    <security>
      <access sslFlags="SslNegotiateCert" />
    </security>
  </system.webServer>
</location>
[/code]

Note: If you want to enable this from Web.config instead of using ApplicationHost.config, notice that the <access> element is not allowed to be overridden from Web.config by default. To enable overriding the value from Web.config, change the overrideModeDefault value of the <access> section from "Deny" to "Allow", like this: <section name="access" overrideModeDefault="Allow" />. Please notice this is not recommended for production servers, as it changes the behavior for the entire IIS server.

9. Now when browsing to the site using HTTPS in a browser like Internet Explorer you should get prompted for a client certificate. Select the ClientCertificatesTest client certificate you’ve created. As we’ve only selected ‘Accept’ in IIS SSL Settings, if you click Cancel, you should be able to browse to the site all the same, even if you didn’t provide a client certificate.

Also, notice that you are not shown an untrusted certificate warning, because you’ve installed the Development CA cert as a Trusted Root Certification Authority.


Finally, if you want to know how to perform a request programmatically using client certificates, you can check this Gist.
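As a rough idea of what such a request looks like, here is a minimal sketch using Python’s standard library. The certificate paths are hypothetical, and the makecert output would first need converting to PEM (for example with OpenSSL), since Python’s ssl module does not load .pfx files directly.

```python
import http.client
import ssl

def client_cert_context(certfile=None, keyfile=None, cafile=None):
    """Build an SSL context that presents a client certificate.

    cafile lets us trust the Development CA created earlier;
    certfile/keyfile are PEM files (hypothetical names)."""
    ctx = ssl.create_default_context(cafile=cafile)
    if certfile:
        # The client certificate is presented during the SSL handshake.
        ctx.load_cert_chain(certfile, keyfile)
    return ctx

# Usage (hypothetical PEM paths converted from the makecert output):
# ctx = client_cert_context("ClientCertificatesTest.pem", cafile="DevelopmentCA.pem")
# conn = http.client.HTTPSConnection("localhost", context=ctx)
# conn.request("GET", "/api/values")
# print(conn.getresponse().status)
```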

Note: I’m actually not an expert in security; this post is mostly the result of a couple of battles, some of them won, some of them lost – so feel free to provide feedback!

Restoring a TeamCity 8.1 Backup

TeamCity has a backup feature that allows you to export all the build configurations, artifacts, build history, etc. to a zip file. This zip file is database independent, and can even be used to migrate data between different databases. For instance, you can restore data from an HSQL database to a SQL Express database, as well as restore a backup of a SQL Express database to a new SQL Express database. The backup can also be used to replicate a TeamCity server into a second failover server in case of any issues. In this post I will show you how to restore the backup in an existing TeamCity 8.1 server that uses SQL Express (for instructions on how to create the server, see this post). For more specific information, check the TeamCity Backup Docs.

Note: TeamCity does not provide specific support for replication/redundancy/high availability or clustering solutions. However, as this post will show, you can easily replicate and start a TeamCity server using the same data if the currently active server malfunctions. Also notice that if you are building a replicated environment, you can reuse the same TeamCity licenses (see this article).

There are several ways to trigger a backup in TeamCity: you can use the command-line tools, the web UI, or even the REST API to trigger backups automatically. Before starting, let’s review how to run a backup from the TeamCity web UI. After logging in with an administrator account, go to the Backup option in the left side menu. Select the Custom backup scope, select all the options and click Start Backup. Make a note of the backup file path (highlighted).
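To illustrate the REST option, the sketch below (Python) builds the URL for the backup request. The endpoint path follows the TeamCity REST API documentation of that era, but double-check it against your server version; the server and file names are placeholders.

```python
from urllib.parse import urlencode

def backup_request_url(server, file_name, include_configs=True,
                       include_database=True, include_build_logs=True):
    """Build the URL for triggering a server backup over REST.

    You would POST to this URL with administrator credentials."""
    params = urlencode({
        "fileName": file_name,
        "includeConfigs": str(include_configs).lower(),
        "includeDatabase": str(include_database).lower(),
        "includeBuildLogs": str(include_build_logs).lower(),
    })
    return f"{server}/httpAuth/app/rest/server/backup?{params}"

# Usage (hypothetical server name):
# import urllib.request
# req = urllib.request.Request(
#     backup_request_url("http://localhost", "TeamCity_Backup"), method="POST")
```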


Now, assuming that you have a second TeamCity server that you’ve just created, let’s see how to restore the backup on this new server. Please notice that for the build configurations to work as expected, the software installed on the second TeamCity VM needs to be identical to the primary’s, as does any asset required by the configurations.

1. Place the TeamCity zip backup file obtained above in the ‘C:\ProgramData\JetBrains\TeamCity\backup’ folder.

2. Create a new file with the following content. Update the {SQLServerUser}, {SQLServerPass} and {TeamCityDb} placeholders with the values of your current configuration.

Note: Remember that SQL Server should have Mixed mode authentication enabled and port 1433 opened.




3. Open a PowerShell console. Stop the TeamCity service using the following command.

Stop-Service TeamCity

4. Also, make sure of the following:

  • The {TeamCityDb} database specified is empty. You can try running this snippet using SQL Server Management Studio to clean up its content.
  • The ‘C:\ProgramData\JetBrains\TeamCity\config’ folder is empty.
  • The ‘C:\ProgramData\JetBrains\TeamCity\system’ folder is empty.

Note: These steps assume your TeamCity Data Directory is C:\ProgramData\JetBrains\TeamCity.

5. Open a console in the C:\TeamCity\bin folder and run the maintainDB command tool as follows. Make sure you update the full name of the backup file ({YOUR_DATE} placeholder).

C:\TeamCity\bin>maintainDB.cmd restore -F TeamCity_Backup_{YOUR_DATE}.zip -A C:\ProgramData\JetBrains\TeamCity -T


6. Start the TeamCity service again.

Start-Service TeamCity

7. Open a web browser and navigate to http://localhost.


8. Open the C:\TeamCity\logs\teamcity-server.log file and copy the token in the last line of the file into the text box. Click Confirm.

9. In the following page click Upgrade.


10. Make sure you copy any assets required by the build configurations from one machine to the other (for example, any SSH keys). And magic! All your build configurations should be there.

Note: Even if the configurations are displayed, it does not mean they will run successfully. Please *test* each one of them and fix any issues. This is when you will find out if anything is missing.


11. Make sure you update the Server URL in the TeamCity Global Settings.



Installing TeamCity 8.1 in a Windows Azure Virtual Machine

In this post I will show how to create a Continuous Integration environment using TeamCity 8.1 and Windows Azure Virtual Machines. You will require a Windows Azure Account to follow these steps, but you can create a Free Trial here.

TeamCity runs an internal database out of the box using the HSQLDB database engine. However, this engine is *not* recommended for production environments, so TeamCity also supports MySQL, PostgreSQL, Oracle and SQL Server databases. In the following instructions, I’ve also included steps showing how to set up SQL Server 2012 Express to serve as the TeamCity backend storage.

1. Let’s start by creating the Virtual Machine in Windows Azure. Using the Management Portal, create a new Windows Server 2012 R2 Datacenter VM in Windows Azure (more details on how to do this here).



2. Follow the usual creation process and make sure you open HTTP port 80.


3. Once the machine is up and running, connect to the VM using Remote Desktop. Log in using the administrator account defined when creating the VM.


4. Once within the VM, in the Server Manager that opens, start by turning off IE Enhanced Security Configuration. This will allow you to browse the web and download some required software from within the VM.

5. Install .NET Framework 3.5 using ‘Add Roles and Features’ wizard. This will be required by the SQL Server installation.


6. Download SQL Server 2012 Express, together with SQL Server Management Studio from this link.


7. Install SQL Server 2012 Express & SQL Server Management Studio. You can mostly use the default options of the stand-alone installation.

8. Once SQL Server is installed, open SQL Server Configuration Manager. Under SQL Server Network Configuration, enable the TCP/IP protocol.


9. Right-click the TCP/IP protocol and open its properties. Set the IPAll group, TCP Port value to 1433.


10. Open SQL Server Management Studio and connect to the local SQL Server Express instance (.\SQLEXPRESS). Open the Server Properties and, in the Security tab, enable Mixed Mode Authentication by selecting the SQL Server and Windows Authentication mode option. This will allow you to connect to the database from TeamCity using a SQL Server username and password.


11. Create a new database named TeamCity.

12. Once the database is in place, right-click the Security node for the server and create a new Login named TeamCityUser. In the User Mapping tab, give db_owner permissions to the user in the TeamCity database.


13. Right-click the server in Object Explorer and click Restart.


14. Open the Windows Firewall properties and add ‘Inbound’ and ‘Outbound’ rules for accepting HTTP connections through port 80.



15. Download and install the TeamCity 8.1 RC Windows Installer. You can leave all the default options, including the installation folder, except for the Service Account: select to run TeamCity under the SYSTEM account.

Note: Actually, I would recommend creating a new user account for TeamCity, but for simplicity purposes we leave it this way.


16. Open a web browser on the server and browse to http://localhost. Wait until the TeamCity First Start page is displayed, then click Proceed.


17. Download the Microsoft JDBC driver for SQL Server. Extract its content and place the sqljdbc4.jar file in the C:\ProgramData\JetBrains\TeamCity\lib\jdbc folder.

Note: If the C:\ProgramData folder is not displayed, make sure you select Show Hidden Items in Windows Explorer.


18. Open a PowerShell console. Set the Java environment variables using the code snippet below.

[Environment]::SetEnvironmentVariable("JAVA_HOME", "C:\TeamCity\jre\bin", "Machine")

[Environment]::SetEnvironmentVariable("JRE_HOME", "C:\TeamCity\jre", "Machine")

19. Go back to the browser and click Refresh JDBC drivers. Make sure the driver is loaded correctly. Fill in the server name, database name and user you’ve created in the steps above. Click Proceed.

  • Server: localhost\SQLEXPRESS
  • Database name: TeamCity
  • Login: TeamCityUser


20. Accept the license agreement displayed and continue with the creation of the TeamCity administrator account.


21. And voilà! TeamCity is running. Try connecting to the VM from the outside using its DNS name to make sure connectivity is working fine. For example:



Software Prerequisites for Builds

Once TeamCity is installed, you will probably want to install more software to the virtual machine to be able to build your solution, run tests, perform deployments, etc. Please notice that the required software will probably depend on the project you want to build. For example if you are building applications for Windows Azure, you might want to install the Windows Azure SDK.

If you want to avoid installing a full Visual Studio, which might require licensing, you can first try installing Visual Studio Express. All the same, if you still do not want to install Visual Studio *at all*, here are some links to most of the stand-alone components of Visual Studio required to perform a build. Install the ones that match your scenario.

  • Microsoft Build Tools 2013 (MSBuild)
    • Since Visual Studio 2013, MSBuild ships as a component of Visual Studio rather than the .NET Framework. For build servers, you can install this standalone package, which includes MSBuild and the VB/C# compilers (more on this here)
  • Test Agents for Microsoft Visual Studio 2013
    • This installer includes MSTest and allows you to run unit tests from TeamCity
  • Windows Software Development Kit (SDK) for Windows 8
    • This will install several tools you might require during build like the assembly linker (AL.exe)
  • Web Deploy
    • In case you want to deploy apps from TeamCity
  • StyleCop
    • For running source analysis from within your build configurations
  • FxCop (Code Analysis)
    • To be able to run Code Analysis during builds. Installing FxCop without Visual Studio is actually *not* supported. However, if you still want to give it a try, you can perform some manual steps following this Stack Overflow question, and this one. It worked for me, and I will probably write a post about this later.

Troubleshooting 405 Error when Deploying Apps to SharePoint 2013 Online from Visual Studio


You create a new SharePoint 2013 online tenant for development purposes. You want to deploy a ‘provider-hosted’ app from Visual Studio (more info on apps here). You receive the following error when running/deploying the app:

Error occurred in deployment step ‘Install app for SharePoint’: The remote server returned an unexpected response: (405) Method Not Allowed.


Proposed Workaround

To solve this issue, you first need to register your SharePoint app by browsing to the following page: http://{yourSharePointServerName}/sites/{yourSiteName}/_layouts/15/appregnew.aspx. You should fill in the Client Id and Secret with the values you are already using in your app, or generate new ones. Additionally, you need to fill in the Title, App Domain and Redirect URI. If you are running the SharePoint app in your dev environment (i.e. localhost), make sure the port is included in the App Domain. Also make sure that the Redirect URI is HTTPS.

You can look up the registration information for an app you have registered on this page: http://{yourSharePointServerName}/_layouts/15/appinv.aspx. To do a lookup, you have to use the client Id (also known as the app Id). You can learn more about this process in this article: Guidelines for registering apps for SharePoint 2013.


If you, by any chance, make a mistake when defining the values, and you want to change them after you clicked Create, then you are lost. At least, I couldn’t find a way to update the values.

I assumed the following procedure *would* work, but it didn’t.

  1. Open the http://{yourSharePointServerName}/_layouts/15/appregnew.aspx page.
  2. Complete the App Id and click Lookup.
  3. Complete the Permission Request XML with the <AppPermissionRequests> value from the AppManifest.xml of the SharePoint project. For example:

     <AppPermissionRequests AllowAppOnlyPolicy="true">
       <AppPermissionRequest Scope="http://sharepoint/content/sitecollection/web" Right="Manage" />
       <AppPermissionRequest Scope="http://sharepoint/search" Right="QueryAsUserIgnoreAppPrincipal" />
     </AppPermissionRequests>

  4. Click Create.

If you perform the lookup again, you will find that the values remain the same.



Shrinking a Dynamically Expanding VHDX using Hyper-V Manager

Windows Server 2012 introduces the VHDX format for virtual hard disks which, among its new features, allows resizing without the need for third-party tools. In this post I’ll show how to resize a disk using the Windows Disk Management utility and Hyper-V Manager.

In my case, I used this procedure to solve the VHD_BOOT_HOST_VOLUME_NOT_ENOUGH_SPACE error when booting Windows 8 from a virtual hard disk. Dynamically expanding disks can be tricky for booting because they require enough space on the drive where the VHDX is stored to allow it to expand to its full size. In these cases, following this procedure to shrink the VHDX so that its full size fits in your drive will solve the issue.

Note: A dynamically expanding virtual hard disk is one in which the size of the .vhdx file grows as data is written to the virtual hard disk. When you create a dynamically expanding virtual hard disk, you specify a maximum file size. This size restricts how large the disk can become.
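To make the constraint concrete, here is a small illustrative check (Python, with made-up sizes): booting from a dynamically expanding VHDX needs the host volume to have room for the disk to grow from its current size to its configured maximum, which is why shrinking that maximum fixes the error.

```python
def can_boot_from_vhdx(vhdx_max_bytes, current_vhdx_bytes, host_free_bytes):
    """Boot from a dynamically expanding VHDX needs headroom for the
    disk to expand to its full (maximum) size, not just its current size."""
    headroom_needed = vhdx_max_bytes - current_vhdx_bytes
    return host_free_bytes >= headroom_needed

GB = 1024 ** 3
# Example: a 120 GB-max disk currently using 40 GB needs 80 GB free on the host drive.
```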

To shrink a VHDX, the disk must have unpartitioned space, so first you need to resize one of the VHDX partitions.

1. Attach the VHD you want to resize.


2. Shrink the primary volume of the disk.



Tip: If you want to free more shrinkable space, try defragmenting the VHDX disk.

3. Detach the VHD.


4. Now open Hyper-V Manager and choose Edit Disk.



5. Locate the disk and select the Shrink option.




6. Wait until the operation completes. To validate that the shrink worked as expected, you can use the Inspect Disk option in Hyper-V Manager.


New Bing Developer Platform

Microsoft’s Bing platform is expanding rapidly and is now opening many of its services to developers, allowing them to build apps with more compelling, human interactions. As Tim mentioned, we had the great opportunity to help the Bing Evangelism team create its //BUILD/ conference keynote demo where this new rich development platform was presented. Some of the controls and APIs are now publicly available and others will be released soon – let’s take a quick glance at everything.

Bing OCR Control

During the demo, Gurdeep Singh Pall, vice president of Microsoft’s Online Services Division, scanned and translated a Spanish coupon for a dinner idea using a Windows Store app. This showed how the new Bing Optical Character Recognition (OCR) control can work together seamlessly with the Translation control.

The Bing Optical Character Recognition (OCR) control is a XAML control that you can add to any XAML-based Windows Store application; it works in Windows 8 or 8.1. The control works by taking a picture, sending the picture to a web service for analysis, and then returning the text and position data to the device. It can detect both letters and numbers, and works with 8 different languages (setting the language beforehand improves accuracy).

Using the OCR control mainly implies:

  1. Adding the control to an XAML page
  2. Setting the ClientId and ClientSecret values
  3. Creating a call to the StartPreviewAsync() method. You can put this call in the page load event handler or in another event handler such as a button click.
  4. Parsing the response from the OCR service by handling the OcrControl.Completed Event. This implies reading a collection of Line objects, which each contain a collection of Word objects. Each word object contains a text string. Along with text, the OCR result also contains positional information. The Line.Box and Word.Box properties give coordinates of rectangles to mark the text in the captured image.
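The Line/Word structure can be illustrated with plain data. The sketch below (Python, using dictionaries rather than the control’s actual .NET types, with made-up box coordinates) joins the recognized words back into text:

```python
def extract_text(lines):
    """Join OCR words back into text, line by line.

    'lines' mirrors the Line -> Word structure described above: each
    line holds words, and each word carries its text plus a bounding box."""
    return "\n".join(
        " ".join(word["text"] for word in line["words"])
        for line in lines
    )

# Hypothetical OCR result: two lines, boxes as (x, y, width, height).
sample = [
    {"words": [{"text": "Hello", "box": (10, 10, 60, 20)},
               {"text": "world", "box": (75, 10, 60, 20)}]},
    {"words": [{"text": "again", "box": (10, 40, 55, 20)}]},
]
```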

You can check out the control in action with this sample and the OCR Control documentation here.

Quick hint: If you want to remove punctuation from detected words, try this regular expression:

private string RemovePunctuation(string value)
{
    Regex rgx = new System.Text.RegularExpressions.Regex(@"[^a-zA-Z0-9áéíóúñÁÉÍÓÚÑÇç -$+]");
    return rgx.Replace(value, string.Empty).Trim();
}


Bing Translation Control

The new Bing Translator control was released for Windows 8 and 8.1. The control receives text and provides automatic machine translation into a specified language. Some of its features include auto-detection of the source language and translation available in 42 different languages.

You can check the translation control documentation here.

Quick code sample:

var translator = new Bing.Translator.TranslatorControl();
var result = await translator.TranslatorApi.TranslateAsync(string.Empty, "en", string.Empty, keyword);

Quick Hint: The first string.Empty (for the source language) instructs the control to auto-detect the source language of the provided text.


Bing 2D Maps SDK

Microsoft’s Maps platform has long been available to developers. However, it’s now being included as part of the Bing platform, and new Bing Maps SDKs have been released. These new SDKs provide mapping, routing, and traffic capabilities for Windows Store apps in both WinJS and WinRT controls. Additionally, the Bing Maps SDK for the Windows 8.1 Preview supports Visual Studio 2013.

Text-To-Speech API (only available in Windows 8.1)

Text to speech and speech recognition have been available in Windows Phone 8 apps since its release. However, the Bing team has helped enhance the text-to-speech capabilities of Windows. As a result, Windows 8.1 includes a new text-to-speech API that allows apps to speak out loud to make user interaction more natural and intuitive. The API supports reading text in 16 different languages and does not require an internet connection. Additionally, it supports SSML, the standard designed to provide a rich, XML-based markup language to assist the generation of synthetic speech in Web and other applications.

You can check out the API documentation here.

Quick code sample:

MediaElement media = new MediaElement();
SpeechSynthesizer synth = new Windows.Media.SpeechSynthesis.SpeechSynthesizer();
SpeechSynthesisStream stream = await synth.SynthesizeTextToStreamAsync("Hello World");
media.SetSource(stream, stream.ContentType);
media.Play();

Bing Speech Recognition Control (available soon)

The demo also showed some speech interaction where the Windows Store app recognized commands and search queries spoken by the presenter. That’s because a new Bing Speech Recognition Control will be released for Windows 8.1. The new control will allow developers to build voice-controlled apps.


Bing Map 3D SDK (available soon)

During the demo, Gurdeep took an aerial tour over Valencia using a new maps control that provided an immersive 3D experience. In the demo, we learned that Bing will soon be providing a new map SDK with 3D photorealistic imagery. The level of detail of the images shown so far is really amazing. You can zoom in so close that you might see who is hanging their clothes out to dry 🙂


Bing Entity API (available soon)

In the demo, within the context of the 3D map, Gurdeep performed a query using the speech recognition control to ask the app, “Who is the architect [of Valencia’s City of Arts and Sciences]?” A card showing Santiago Calatrava was displayed, including bio details and related work. This card was used to introduce the new Bing Entity API. This new service will allow developers to build apps that can better leverage Bing knowledge about People, Places, Things and Actions. The new API will be mainly centered on real world entities and their associated attributes. Entities will also have relationships, allowing navigation between them.

More details from the Bing Dev Team Blog: “The Bing Entity API allows developers to create applications that are aware of the things that surround us every day and build scenarios that augment users’ abilities to discover and interact with their world faster and more easily than they can do today”


Stay tuned to the new Bing Developer Platform home page to get updates on the controls still to be released.

Installing SharePoint 2013: Failed to Create Sample Data (NullReferenceException) Issue

I was getting the following error when installing SharePoint 2013 in the stand-alone configuration over a clean Windows Server 2012 virtual machine:

Failed to create sample data.

An exception of type System.NullReferenceException was thrown. Additional exception information: Object reference not set to an instance of an object.

After trying all of the approaches explained in these links without luck:

The one that finally solved it was installing all Windows updates and re-running the Configuration Wizard.


Another tip: do not start installing SharePoint 2013 without first installing .NET Framework 3.5 manually from Windows Features. If you are unable to install it because of this error:

“The source files could not be found”

This is because you need to specify the Windows Server installation disc – check this post or this one for more information.
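If you prefer the command line, the .NET Framework 3.5 feature can also be enabled with DISM, pointing it at the `sources\sxs` folder of the installation media. This is a sketch only – the `D:` drive letter below is an assumption; adjust it to wherever the disc or ISO is mounted:

```bat
REM Run from an elevated command prompt on Windows Server 2012.
REM D:\sources\sxs is assumed to be the mounted installation media.
dism /online /enable-feature /featurename:NetFx3 /all /source:D:\sources\sxs /limitaccess
```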

Happy SharePoint coding! 🙂


Diff Before Commit: A Best Practice

I wanted to write a post to put more emphasis on a practice that, while no silver bullet, consistently helps me improve the quality of my daily work and reduce the number of errors I make when writing code (yes, we all make mistakes).

‘Diff before commit’ is what I call reviewing each of the changes you’ve made to the code before actually committing the code to the repository. Most versioning tools provide the ability to do this seamlessly. In particular, if you use TortoiseGit or TortoiseSVN, when you open the commit dialog, you are shown the list of changed files (first picture below). Double-clicking on each one will open a merge tool (second picture below) where you can spot the difference between the previous version and your new version. It allows you to easily identify modified lines, new lines and deleted lines.

This practice of systematically reviewing the changes on each commit can help you to:

  • Make sure no files have been modified other than the ones you actually intended to update. For example, some IDEs can modify files without your awareness.
  • Make sure you did not leave behind any temporary changes made just for testing purposes, such as debug statements and the like.
  • Give the code you’ve written a quick second look – more often than not this will help you spot bugs early.
  • Write a better, more descriptive commit comment, as you’ve just reviewed all the changes performed.

It’s true that if you’ve made many changes, it can be quite tedious to go over each file reviewing the modifications, but believe me, it will pay off in the end. (And if you find yourself committing lots of changes at once, perhaps you should be making more frequent commits 🙂 )
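If you work from the command line rather than TortoiseGit, the same review can be done with `git diff`. A minimal sketch (the repository, file names and messages below are purely illustrative):

```shell
#!/bin/sh
set -e
# Set up a throwaway repository to demonstrate the review step.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

echo "real feature code" > app.txt
git add app.txt
git commit -qm "Add feature"

# A temporary change we forgot to remove...
echo "debug statement" >> app.txt
git add app.txt

# ...is caught by reviewing the staged diff before committing.
git status --short    # lists the files about to be committed
git diff --cached     # shows exactly what will go into the commit
```

Reviewing `git diff --cached` right before `git commit` gives you the same diff-before-commit checkpoint as the Tortoise commit dialog.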

Going a little further with reviewing your code before committing, if you use GitHub, you can try pull requests. In a nutshell, pull requests facilitate peer review and sign-off of the code. Each time you start working on a feature, you fork the repository, and when done you make a pull request to merge your changes. The request, including the diff of your modifications, needs to be reviewed by someone else on the team, accepted, and pulled into the source repository. Some benefits of this workflow include:

  • You add a second level of ‘diff before commit’, having someone else review your code and check for bugs (two pairs of eyes definitely see better than one)
  • You introduce a collaborative code review workflow, where another team member is responsible for ‘signing off’ on your code
  • You can learn from others’ feedback
  • It fosters discussion and communication

You can find a more detailed explanation by checking the GitHub team flow. Of course, it comes at some cost – you add the overhead of requiring someone else’s time to accept your changes. I’m quite sure (again) it pays off in the end.
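As a rough sketch of the mechanics, the branch–push–review cycle looks like this (a local bare repository stands in for the GitHub remote here, and the branch and file names are illustrative):

```shell
#!/bin/sh
set -e
work=$(mktemp -d)

# A bare repository standing in for the shared GitHub repo.
git init -q --bare "$work/upstream.git"

# Clone it, as you would clone your fork.
git clone -q "$work/upstream.git" "$work/fork"
cd "$work/fork"
git config user.email "demo@example.com"
git config user.name "Demo"

# Work on a feature branch instead of committing straight to the mainline.
git checkout -qb feature/my-change
echo "feature work" > feature.txt
git add feature.txt
git commit -qm "Add my change"

# Push the branch; on GitHub you would now open a pull request
# so a teammate can review the diff before it is merged.
git push -q origin feature/my-change
```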

Happy coding! And thanks to @litodam, @sballarati and @nbeni for their contributions to this post.

Visual Studio Load Tests: Troubleshooting Performance Counters Collection Issues

When performing load testing, Visual Studio allows you to collect performance counters from the machines under test, including the controller and agents (which are collected by default). However, when trying to add a counter from a remote machine in your Visual Studio Load Test, you might get the following error:

Can’t read performance counter categories for computer ‘…’


Or you might not get any values when running the load test, and then find the following error in the Test Results:

The performance counter category ‘Processor Information’ cannot be accessed on computer ‘I-NB160’ (Category does not exist.) ; check that the category and computer names are correct.


To troubleshoot these issues I’ve found the following tips really useful. Check them on the machine from which you are trying to collect the counters. My test rig was configured with one controller and two agents running Windows Server 2008 R2.

  • First, make sure that the user the Controller Service is running under is part of the Performance Monitor Users group. Additionally, make sure that the user under which Visual Studio is running is also part of this group.


  • Then make sure the Performance Logs and Alerts and the Remote Registry services are started.
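Both checks can also be applied from an elevated command prompt on the remote machine. This is a sketch only – `CONTOSO\testuser` is a placeholder for the actual account the controller service or Visual Studio runs under:

```bat
REM Add the account to the Performance Monitor Users group.
net localgroup "Performance Monitor Users" CONTOSO\testuser /add

REM Make sure Remote Registry and Performance Logs & Alerts (service name: pla)
REM are set to start automatically and are running.
sc config RemoteRegistry start= auto
net start RemoteRegistry
sc config pla start= auto
net start pla
```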



For more details, check these two blog posts I’ve found around this subject:

If you want to learn more on how to configure a Test Rig, check my previous post.