
Open Space Session: Working with Distributed Teams

During Agile Camp Bariloche 2015, there was an open space session on working in remote teams, where attendees shared their experiences and lessons learned from working with distributed teams. The discussion lasted 50 minutes, was attended by 12 people, and I had the chance to facilitate it. It was a great experience!

Below is a summary of what was discussed.


Concept of “Remote Work Culture”
To be part of a team that is spread out, it seems useful for all team members to understand and adopt a remote work culture. This refers to specific mindsets that, although they may sound familiar, become critical when working remotely. We identified the following:

  • Have a Plan B for connectivity issues. When you work in an office, there are usually ways to deal with connectivity issues because someone has already made sure solutions exist. But when you work from home, you alone are responsible. For example, you may be able to fall back on a relative who lives nearby, a café, or a second ISP. These backup plans should allow you, for example, to continue a conference call even if there is a connectivity failure. The important thing is that whoever is working remotely makes sure a Plan B is in place.
  • Personalization of people. When the members of a team do not know each other personally, it’s common to begin to “depersonalize” the other. That is, you start thinking of someone as just an email address or a robot, which leads to even greater distance between team members. One way to deal with this is to make it a priority to put names to faces, for example by using a profile picture or holding conference calls with video. Neither of these takes much effort, but they have a lot of impact on the human side of the relationship. Other ideas that were mentioned include a) having team members share a 60-second video introducing themselves, and b) keeping a Skype, Hangouts, etc. session permanently open, ideally with the camera on. It is important that, during a videoconference, you can clearly see the other person’s face and gestures, which are a fundamental part of communication.
  • Communication, frequent visibility. It becomes essential that communication be frequent so that team members are in sync. This is especially important in cases where, for example, 5 team members are in one office and 1 is remote. The whole team should make a conscious effort to ensure that decisions and discussions reach everyone. For example, if those 5 people meet for a last-minute discussion, it would be helpful to call the remote team member or make sure whatever was decided during the discussion is shared.
  • Transmission of the remote work culture. It is important that team members make an effort to spread the culture of remote work among all members, especially those who do not have experience working remotely. This matters both individually and collectively: better results are likely if the team views remote work as something they want to see succeed, and if that responsibility does not lie solely with those working remotely, but with each and every team member.
  • Discipline regarding space and time. This can be especially important for someone working from home, and requires the person to organize his space and time so that he can separate work from his other activities. This will allow the team member to feel more comfortable both personally and with the rest of the team. The following tips are suggested:
    1. Dedicate a space in your home for work. Use this space only for work.
    2. Define fixed start and end times of the workday, including a lunch break.
    3. Get ready in the morning as if you were going into the office.
    4. Let others (customers or team members) know what times you will be available and respect those times.
    5. Be punctual for meetings. When working remotely, arriving late has a bigger negative impact: the others are left waiting in an empty call, which can strain relationships with peers.
  • Don’t get blocked; always look for a “default action”. If a team member is at home and can’t quickly get ahold of other team members (perhaps due to a time difference or simply because someone is not answering), it is important to try to move on without getting blocked, because when working remotely, such blockages can cause the distance in the relationship to feel even greater (apart from the obvious waste of time). A practical way to handle this is to decide on a course of action on your own, communicate that decision, and move forward rather than waiting for a response from the other party.

A remote work culture is not something that is easy to adopt if one has never worked remotely. This is why the entire team should be involved in an adaptation process at the beginning where they will accept that certain things will change and where everyone can decide together to make it work.

To travel or not to travel?
We all agreed that trips help with team building and are crucial at the beginning of a project, or while team members have still not met in person. Once the project starts, the practicality of work trips depends on each project. Some variables identified include complexity, duration, turnover, and project value.

Don’t abandon pair programming
Consistently dedicating time to remote pair programming seems to be useful in improving the relationship between team members. For example, you can set a schedule for 30 minutes each day.

Recommendations for building a remote team
The following recommendations could be applied to all types of teams; however, they are likely to have greater impact on distributed teams:

  • It is easier to work in remote teams with fewer members. The more people there are, the more complex it gets.
  • Try to reduce turnover of team members.
  • Try to make sure remote team members have experience/seniority. It is very difficult for a trainee to get started on his own without team members next to him.
  • Work with people you trust.

Southworks at Microsoft //build/ 2015


Southworks recently partnered with Microsoft DX to create several keynote demos for //build/ 2015, Microsoft’s annual conference for developers.

Windows Containers & .NET on Linux

On Day 1, Mark Russinovich showed how the existing Docker tooling will be used with Windows Containers, and how you’ll be able to deploy (and debug) an ASP.NET application from Visual Studio to a Docker container hosted in Linux.

Building Awesome Web Apps (Fabrikam)

Scott Hanselman presented the Fabrikam Maker retail app for managing 3D printing orders. In this demo, Scott showed how Azure App Service can run a sophisticated retail scenario with both an online and a physical store presence. He walked the audience through the different Azure services involved in the solution: App Service Web Apps, App Service API Apps, App Service Logic Apps, Visual Studio Online and Application Insights. He also showed multiple clients integrated with the same services, including the website, a Universal App running on a Surface, and a mobile application created with the Cordova Tools. During the last segment of the demo, he revealed a new member of the Visual Studio family: Visual Studio Code, a free, lightweight and, most importantly, cross-platform smart code editor for Mac OS, Linux and Windows.


Higher Level Data Services

By leveraging Elastic Pools and Elastic Queries, new features coming to Azure Data Services, Lara Rubbelke showed how you can easily execute a simple T-SQL script across the 30 databases in the pool, and how you can query those 30 databases from an ASP.NET application as if they were a single database.


Machine Learning, Streaming Analytics, Power BI and Genomics with R

On Day 2, Joseph Sirosh took the stage to talk about data analysis and reporting, both as real-time operations and as predictive analysis based on historical data. As part of his presentation, he showed a Windows Phone application that displayed the results of his personal genome analysis.


ManifoldJS

John Shewchuk walked through the process of packaging an existing application into both an Android and an iOS app using manifoldJS, a Node.js command-line tool. manifoldJS takes the metadata about your site and generates hosted apps, packaging your web experience as either native or Cordova apps across Android, iOS, Windows, Chrome OS and Firefox OS.


Secondary pinning: Apply image background color to the tile

Windows Phone applications support secondary tiles, which enable a user to pin specific content or experiences. In a [WAT](http://wat.codeplex.com/) based application, it is often desirable to use an image from the pinned web page as the tile image.

For example, if the application allows the users to search for different products and the user generates a pin for a particular item, it looks great if the tile includes the product image.


The problem

However, sometimes the image background color is different from the color applied to the app tiles, which makes them look unattractive.


As one possible approach to solving this problem, this article describes how to take the background color of the image (sampling the color of the top-left corner pixel) and apply it to the secondary tile background.


Initial Code

Below is an initial version of the code that generates the tiles with a white background. The displayed code only covers the functionality to draw the image on the canvas; after drawing the image, the result needs to be saved to a file, and the tile is then created using that file.

// Get the canvas used to generate the tile image
var canvas = WAT.options.tileHandler.getElementsByTagName("canvas")[0];
var ctx = canvas.getContext('2d');

// Clear the canvas and set a white background
ctx.fillStyle = 'white';
ctx.fillRect(0, 0, canvas.width, canvas.height);

// Load the image onto the canvas
var img = new Image();
img.src = wideLogoUri;

img.onload = function () {
    // Get the image size
    var originalWidth = img.width;
    var originalHeight = img.height;
    var minPixelSize = originalHeight < originalWidth ? originalHeight : originalWidth;

    var tileNewHeight = originalHeight;
    var tileNewWidth = originalWidth;

    if (minPixelSize > canvas.height) {
        // The image needs to be scaled
        var scaleRatio = canvas.height / minPixelSize;
        tileNewHeight = originalHeight * scaleRatio;
        tileNewWidth = originalWidth * scaleRatio;
    }

    // Center the image in the canvas and draw it on top of the white background
    var offsetX = (canvas.width - tileNewWidth) / 2;
    var offsetY = (canvas.height - tileNewHeight) / 2;

    ctx.drawImage(img, offsetX, offsetY, tileNewWidth, tileNewHeight);

    // Apply transforms: add a grey overlay on top of the image so the tile text displays properly
    ctx.fillStyle = 'black';
    ctx.globalAlpha = 0.4;
    ctx.fillRect(0, 280, canvas.width, canvas.height);
    ctx.globalAlpha = 1;

    // Save the image back
    var imageUri = canvas.toDataURL();
    var encodingInfo = {
        pixelFormat: Windows.Graphics.Imaging.BitmapPixelFormat.rgba8,
        alphaMode: Windows.Graphics.Imaging.BitmapAlphaMode.premultiplied,
        width: canvas.width,
        height: canvas.height,
        dpiX: 150,
        dpiY: 150
    };

    var squareXOffset = (canvas.width / 2) - (canvas.height / 2),
        squareImageWidth = canvas.height;

    // ... (the remaining code, which encodes and saves the canvas content to a file, is not shown here)
};

Clearing the canvas

Considering that the same canvas is used to generate the different tiles, we need to clear it before starting to generate a new tile. The logic described in the previous section clears the canvas by filling the whole canvas area with a white rectangle.

// Clears and sets a white background
ctx.fillStyle = 'white';
ctx.fillRect(0, 0, canvas.width, canvas.height);

 

This works fine in the original implementation because the background color to apply is always white. However, in our case, the color to apply may vary depending on the image used for the tile.
In order to clear the canvas properly, we will use the clearRect function instead.

// Clear the canvas
ctx.clearRect(0, 0, canvas.width, canvas.height);

 

This call clears a rectangle that covers the whole canvas and, as a consequence, resets the entire canvas.

Taking the background color of the image

In order to take the background color of the image, first we need to obtain the pixel information of the image. To achieve this, we will use the getImageData function provided by the canvas context. This function returns an ImageData object which, through its data property, exposes the pixel array we need.

The pixel array contains RGBA values: each item in the array represents one component of an RGBA color. For example, in order to get the color of the top-left corner pixel of the image, we need to read the first 4 items of the array:

  • pixelsArray[0] will return the RED value
  • pixelsArray[1] will return the GREEN value
  • pixelsArray[2] will return the BLUE value
  • pixelsArray[3] will return the ALPHA value (used to apply transparency).
    • The ALPHA value contained in the array is a value between 0 and 255.
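
More generally, the components of any pixel can be located by computing its offset from its (x, y) coordinates. Here is a minimal sketch of such a lookup; the helper name getPixelColor is just illustrative and not part of the original code:

// Returns the CSS rgba() color of the pixel at (x, y) in an ImageData object.
// Each pixel occupies 4 consecutive array items (R, G, B, A).
function getPixelColor(imageData, x, y) {
    var index = (y * imageData.width + x) * 4;
    var data = imageData.data;

    // The alpha channel is stored as 0-255, while rgba() expects 0-1.
    return 'rgba(' + data[index] + ',' + data[index + 1] + ',' + data[index + 2] + ',' + (data[index + 3] / 255) + ')';
}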

Since we just need the color of the top-left corner pixel and want to apply it to the canvas background, we can read the first four elements of the array and set the canvas context fillStyle to the RGBA color built from those values.
The code would look something like this:

// Get the image data from the canvas context
var imageData = ctx.getImageData(offsetX, offsetY, width, height);

// Apply the RGBA color of the top-left pixel to the canvas fill style.
// The alpha value in the array is a number between 0 and 255, while rgba() expects a value
// between 0 and 1, so a conversion is needed.
ctx.fillStyle = 'rgba(' + imageData.data[0] + ',' + imageData.data[1] + ',' + imageData.data[2] + ',' + imageData.data[3] / 255 + ')';

// Draw a rectangle covering the whole canvas using the color defined above
ctx.fillRect(0, 0, canvas.width, canvas.height);

 

Applying the color to the background

The code displayed above takes the color of a pixel and paints a rectangle of the canvas size using the obtained color. However, this paints the rectangle over the image we want to use in the tile, so the tile would show nothing but a rectangle of the obtained color.

In order to paint the rectangle behind the image, we need to use the globalCompositeOperation property of the canvas context.
To paint the rectangle behind the tile image that is already on the canvas, we set globalCompositeOperation to "destination-over". This tells the canvas to paint new items behind the existing content.
Once this property is set, we can paint the rectangle with the obtained color. Finally, after painting the rectangle, we set globalCompositeOperation back to "source-over" so that any future elements are painted in front of the existing content. This step is only required if other elements need to be added to the tile.

Applying the described logic, the resulting code should look as follows:

// Get the image data from the canvas context
var imageData = ctx.getImageData(offsetX, offsetY, tileNewWidth, tileNewHeight);

// Set globalCompositeOperation to "destination-over" in order to start painting behind the tile image
ctx.globalCompositeOperation = "destination-over";

// Apply the RGBA color of the top-left pixel to the canvas fill style
ctx.fillStyle = 'rgba(' + imageData.data[0] + ',' + imageData.data[1] + ',' + imageData.data[2] + ',' + imageData.data[3] / 255 + ')';

// Draw a rectangle covering the whole canvas using the color defined above.
// This rectangle is painted behind the image already drawn on the canvas.
ctx.fillRect(0, 0, canvas.width, canvas.height);

// Set globalCompositeOperation back to "source-over" so new elements are painted on top of the existing ones
ctx.globalCompositeOperation = "source-over";

 

The code described above paints the tile image on the canvas with the color of its top-left corner pixel as the background color. In order to actually generate the tile, we just need to save the canvas content to a file and use that file as the tile image.
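
The save step itself is not shown in the original code. Below is a minimal sketch of one possible way to do it in a WinJS app; the file name secondary-tile.png, the use of toDataURL, and writing to the local app data folder are assumptions for illustration, not the WAT implementation:

// Extract the base64 payload from the canvas data URL
var dataUrl = canvas.toDataURL('image/png');
var base64 = dataUrl.substring(dataUrl.indexOf(',') + 1);
var buffer = Windows.Security.Cryptography.CryptographicBuffer.decodeFromBase64String(base64);

// Write the image to the local app data folder so it can be referenced with an ms-appdata:/// URI
var localFolder = Windows.Storage.ApplicationData.current.localFolder;
localFolder.createFileAsync('secondary-tile.png', Windows.Storage.CreationCollisionOption.replaceExisting)
    .then(function (file) {
        return Windows.Storage.FileIO.writeBufferAsync(file, buffer);
    })
    .then(function () {
        // The resulting URI can then be assigned to the secondary tile logo,
        // e.g. tile.visualElements.square150x150Logo (tile creation not shown here).
        var tileImageUri = new Windows.Foundation.Uri('ms-appdata:///local/secondary-tile.png');
    });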

Some considerations

  • The approach works better if the images have a solid background color.
    • If the images have gradients or different colors in the background, the color taken from the image might not look good as the tile background color. Sometimes, scaling the image to take up the whole tile space is a better solution (see the sketch below).
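
As an illustration of that alternative, here is a minimal sketch of scaling the image so it covers the whole tile; this is not part of the original code and simply replaces the centering logic shown earlier:

// Scale the image so it completely covers the canvas (parts of the image may be cropped)
var coverRatio = Math.max(canvas.width / img.width, canvas.height / img.height);
var coverWidth = img.width * coverRatio;
var coverHeight = img.height * coverRatio;

// Center the scaled image; the overflow is clipped by the canvas edges
var coverOffsetX = (canvas.width - coverWidth) / 2;
var coverOffsetY = (canvas.height - coverHeight) / 2;

ctx.drawImage(img, coverOffsetX, coverOffsetY, coverWidth, coverHeight);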

Reusing Azure Media Services Locators to avoid facing the "5 Shared Access Policy" limitation

If you have developed VOD or Live workflows with Azure Media Services, you might have faced the following error when creating multiple Asset Locators: “Server does not support setting more than 5 shared access policy identifiers on a single container”.

<?xml version="1.0" encoding="utf-8"?>
<m:error xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata">
  <m:code />
  <m:message xml:lang="en-US">Server does not support setting more than 5 shared access policy identifiers on a single container.</m:message>
</m:error>

To understand the reason behind this error (and how to avoid it), first let me clarify the differences among Azure Storage Stored Access Policies, Media Services Access Policies and Media Services Locators:

  • Stored Access Policies. This is an Azure Storage Services REST API feature that provides an additional level of control over shared access signatures (SAS) on the server-side for containers, queues, or tables. This feature has the limitation that you can include up to 5 Stored Access Policies for each container, queue or table.
  • Access Policies. This is an Azure Media Services REST API entity that is just used as a “template” for creating Locators. There is no mapping between a Media Services Access Policy and a Storage Services Stored Access Policy; therefore, there is no explicit limitation on the number of Access Policies you can create.
  • Locators. This is an Azure Media Services REST API entity that provides an entry point to access the files contained in an Asset container. An Access Policy is used to define the permissions and duration that a client has access to a given Asset. When a Locator is created, the Media Services REST API creates a Stored Access Policy in the container associated with the Asset. Therefore, the same Stored Access Policy limitation also applies for Locators: you can create up to 5 Locators for a given Asset.

As you can see, the error is thrown by an Azure Storage Services limitation on the number of Stored Access Policies for a container, and the same limitation is inherited by the number of Media Services Locators for an Asset.

There are different options to avoid getting this error (depending on your scenario):

  1. Delete the asset locators after you are done using them. For example, if you need to upload a new asset file, you have to create a SAS locator with Write permissions. Once the operation is complete, you no longer need the locator, so it is safe to delete it. Take into account that this approach does not apply to some scenarios: if you want to publish an asset for adaptive streaming (On-Demand Origin locator) or progressive download (SAS locator), the locator must persist; otherwise, deleting the locator will “unpublish” the asset.
  2. Reuse the locators that are available in the asset. Instead of creating a new locator every time, check if the asset already contains one that matches the type and access policy permissions you need. If you find one, make sure it is not expired (or near expiration) before using it; otherwise, create a new locator.
  3. Leverage the Azure Media Services Content Protection feature. If you are trying to get granular control over your content by creating a different Locator for each client, there is a better way now: you can dynamically encrypt your content with AES or PlayReady, set a token authorization policy for the content key or license, and make the token expire after a short period of time (long enough for the player to retrieve the content key or license and start the playback). This way, you will be using a single long-lived Locator for all your clients. For more details, you can check this blog post: Announcing public availability of Azure Media Services Content Protection Services.

 

In this post, I will focus on Option #2 and show you how to implement a helper extension method to let you reuse your Locators and also update the duration if it happens to be expired (or near expiration). Below, you can find a proposed implementation that takes care of this.

Note: This code uses the Windows Azure Media Services .NET SDK Extensions NuGet package.

namespace Microsoft.WindowsAzure.MediaServices.Client
{
    using System;
    using System.Linq;
    using System.Threading.Tasks;

    public static class LocatorCollectionExtensions
    {
        public static readonly TimeSpan DefaultExpirationTimeThreshold = TimeSpan.FromMinutes(5);

        public static async Task<ILocator> GetOrCreateAsync(this LocatorBaseCollection locators, LocatorType locatorType, IAsset asset, AccessPermissions permissions, TimeSpan duration, DateTime? startTime = null, TimeSpan? expirationThreshold = null)
        {
            MediaContextBase context = locators.MediaContext;

            ILocator assetLocator = context
                .Locators
                .Where(l => (l.AssetId == asset.Id) && (l.Type == locatorType))
                .OrderByDescending(l => l.ExpirationDateTime)
                .ToList()
                .Where(l => (l.AccessPolicy.Permissions & permissions) == permissions)
                .FirstOrDefault();

            if (assetLocator == null)
            {
                // If there is no locator in the asset matching the type and permissions, then a new locator is created.
                assetLocator = await context.Locators.CreateAsync(locatorType, asset, permissions, duration, startTime).ConfigureAwait(false);
            }
            else if (assetLocator.ExpirationDateTime <= DateTime.UtcNow.Add(expirationThreshold ?? DefaultExpirationTimeThreshold))
            {
                // If there is a locator in the asset matching the type and permissions but it is expired (or near expiration), then the locator is updated.
                await assetLocator.UpdateAsync(startTime, DateTime.UtcNow.Add(duration)).ConfigureAwait(false);
            }

            return assetLocator;
        }

        public static ILocator GetOrCreate(this LocatorBaseCollection locators, LocatorType locatorType, IAsset asset, AccessPermissions permissions, TimeSpan duration, DateTime? startTime = null, TimeSpan? expirationThreshold = null)
        {
            using (Task<ILocator> task = locators.GetOrCreateAsync(locatorType, asset, permissions, duration, startTime, expirationThreshold))
            {
                return task.Result;
            }
        }
    }
}

Every time you need a Locator for an Asset, you can use the GetOrCreate method as follows. Of course, if you call the GetOrCreate method multiple times using different parameter combinations (locator type and access policy permissions), you might end up facing the “5 shared access policy” limitation. That’s why it is also important to delete the locators that are not needed as explained in Option #1.

var myContext = new CloudMediaContext("%accountName%", "%accountKey%");
var myAsset = myContext.Assets.Where(a => a.Id == "%assetId%").First();
var myLocator = myContext.Locators.GetOrCreate(LocatorType.Sas, myAsset, AccessPermissions.Read, TimeSpan.FromDays(30));

 

Enjoy!


Grunt-manifoldjs: a grunt task to create hosted apps as part of your build process

After playing for a while with manifold.js, I created a grunt task that lets you run manifoldjs as a simple step of your build process. You can find it on npm.
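
A minimal Gruntfile.js sketch of how such a task could be wired into a build; the task name manifoldjs and the option names shown are assumptions for illustration only, so check the package documentation for the actual configuration:

module.exports = function (grunt) {
    grunt.initConfig({
        // Hypothetical configuration: the option names below are illustrative only
        manifoldjs: {
            main: {
                options: {
                    url: 'http://example.com',                  // site to package (assumed option)
                    dest: 'build/apps',                         // output directory (assumed option)
                    platforms: ['windows10', 'android', 'ios']  // target platforms (assumed option)
                }
            }
        }
    });

    // Load the plugin that provides the task
    grunt.loadNpmTasks('grunt-manifoldjs');

    // Run the task as part of the default build
    grunt.registerTask('default', ['manifoldjs']);
};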



Using manifoldjs from the command line to create site-based apps

In my last post, I briefly introduced manifold.js. In this post, I want to show you how you can use its command line tool to generate your apps.



Building hosted apps with W3C Manifest for web apps and manifoldjs

A few days ago, manifoldjs was released. This tool creates hosted web apps and some polyfill apps for Android, iOS, Windows 8.1, Windows Phone 8.1, Windows 10, FirefoxOS, Chrome, and the web, all based on the W3C Manifest for Web Apps.
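
For reference, here is a minimal example of the kind of W3C web app manifest the tool consumes; the app name, URLs and icon path are illustrative values, not taken from the original post:

{
  "name": "My Sample Web App",
  "short_name": "SampleApp",
  "start_url": "/index.html",
  "display": "standalone",
  "orientation": "portrait",
  "icons": [
    {
      "src": "/images/icon-144.png",
      "sizes": "144x144",
      "type": "image/png"
    }
  ]
}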



[Spanish] Construyendo aplicaciones Media con Microsoft Azure Media Services @ Global Azure Bootcamp 2015 Buenos Aires, Argentina

For those who don’t read Spanish, this blog post covers a Spanish-language session at the Global Azure Bootcamp 2015 Buenos Aires, Argentina.


Global Azure Bootcamp 2015 Buenos Aires, Argentina

Last Saturday, together with Mariano Vazquez, we presented Microsoft Azure Media Services at the Global Azure Bootcamp 2015 Buenos Aires, Argentina. The talk lasted approximately 90 minutes and covered the following topics:

  • Introduction to general media concepts, such as Progressive Download vs. Adaptive Streaming, available Adaptive Streaming protocols, transcoding vs. transmuxing, etc.
  • Microsoft Azure Media Services (PaaS) architecture
  • Main features of the platform:
    • Video-on-Demand (VOD)
    • Live Streaming
    • Dynamic Packaging
    • Dynamic Encryption (content protection)
  • Demos:
  • Recently announced new features:

We want to thank everyone who attended the event, and remind you that you can contact us (@marianodvazquez and @mconverti) if you have any questions about the topics presented. The material we used for the presentation is already available for download from our GitHub repository @ https://github.com/mconverti/bootcamp2015-aplicaciones-media-con-azure-media-services.

Construyendo aplicaciones Media con Microsoft Azure Media Services

Finally, here are some links with additional resources related to this talk:

 

Enjoy!


Global Azure Bootcamp 2015 Argentina – Content

Today we participated in the Global Azure Bootcamp 2015 at the Microsoft Argentina offices!


We had a full day of very interesting presentations, and all the content is already available at http://j.mp/GAB-Arg-2015.

Thanks to everyone who participated, and special thanks to our local sponsors (Microsoft Argentina, Autocosmos.com, TriggerDB, Southworks) and local speakers (Hernan Meydac Jean, Marcos Castany, Maximiliano Accotto, Matias Quaranta, Mariano Converti, Mariano Vazquez, Diego Poza, Nicolas Bello Camilletti).

For more information about the event, check out our site.

Group Photo

 


Global Azure Bootcamp 2015

On Saturday, April 25, 2015, we are going to be part of the Global Azure Bootcamp 2015 at the Microsoft Argentina offices!
