
Grunt-manifoldjs: a grunt task to create hosted apps as part of your build process

After playing for a while with manifold.js, I created a grunt task that runs manifoldjs as part of your build process. You can find it on npm.

Read More


Using manifoldjs from the command line to create site-based apps

In my last post, I briefly introduced manifold.js. In this post, I want to show you how to use its command-line interface to generate your apps.

Read More


Building hosted apps with W3C Manifest for web apps and manifoldjs

A few days ago, manifoldjs was released. This tool creates hosted web apps and some polyfill apps for Android, iOS, Windows 8.1, Windows Phone 8.1, Windows 10, FirefoxOS, Chrome, and the web, all based on the W3C Manifest for Web Apps.

manifoldjs

Read More

[Spanish] Construyendo aplicaciones Media con Microsoft Azure Media Services @ Global Azure Bootcamp 2015 Buenos Aires, Argentina

For those who don’t read Spanish: this blog post covers a Spanish-language session at the Global Azure Bootcamp 2015 Buenos Aires, Argentina.


Global Azure Bootcamp 2015 Buenos Aires, Argentina

Last Saturday, Mariano Vazquez and I presented Microsoft Azure Media Services at the Global Azure Bootcamp 2015 in Buenos Aires, Argentina. The talk lasted roughly 90 minutes and covered the following topics:

  • Introduction to general media concepts, such as Progressive Download vs. Adaptive Streaming, the available Adaptive Streaming protocols, transcoding vs. transmuxing, etc.
  • Architecture of Microsoft Azure Media Services (PaaS)
  • Main features of the platform:
    • Video-on-Demand (VOD)
    • Live Streaming
    • Dynamic Packaging
    • Dynamic Encryption (content protection)
  • Demos:
  • Recently announced new features:

We want to thank everyone who attended the event, and remind you that you can contact us (@marianodvazquez and @mconverti) with any questions about the topics presented. The material we used for the presentation is already available for download from our GitHub repository @ https://github.com/mconverti/bootcamp2015-aplicaciones-media-con-azure-media-services.

Construyendo aplicaciones Media con Microsoft Azure Media Services

Finally, here are some links to additional resources related to this talk:

 

Enjoy!


Global Azure Bootcamp 2015 Argentina – Content

Today we participated in the Global Azure Bootcamp 2015 at the Microsoft Argentina offices!

Global Azure Bootcamp 2015

We had a full day with very interesting presentations! And all the content is already available at http://j.mp/GAB-Arg-2015.

Thanks to everyone who participated, and special thanks to our local sponsors (Microsoft Argentina, Autocosmos.com, TriggerDB, Southworks) and local speakers (Hernan Meydac Jean, Marcos Castany, Maximiliano Accotto, Matias Quaranta, Mariano Converti, Mariano Vazquez, Diego Poza, Nicolas Bello Camilletti).

For more information about the event, check out our site.

Group Photo

 


Global Azure Bootcamp 2015

On Saturday, April 25, 2015 we are going to be part of the Global Azure Bootcamp 2015 at Microsoft Argentina offices!

Read More

More Azure Management Portal updates for Azure Media Services

Last week, the Azure Media Services team published more updates in the Azure Management Portal that include some new features. In this post you will find a quick summary of what’s new.

 

Added support for JSON Web Token (JWT) format

Last month, the Azure Media Services team added support for JSON Web Token (JWT) format to restrict delivery of content keys; before this update, only Simple Web Token (SWT) format was supported.

With the latest Azure Portal updates, there is a new Token Type option in the Content Protection tab for both the AES and PlayReady authorization policies to let you set the format: JSON Web Token (default) or Simple Web Token.

Please take into account that:

  • Changes to the authorization policies can take a few minutes to take effect.
  • The Content Protection tab only lets you configure the authorization policies that are used for the encryption features in the Azure Portal. If you have a custom authorization policy, you will need to configure it programmatically using the SDK/REST API.

To learn more about how to use the JWT format with dynamic encryption, you can read these blog posts by @GEOTRIF:

Token Type option in Content Protection tab

 

Disabled Play command when the asset has custom delivery policies

The Play command in the Content tab is now disabled for assets that have a custom delivery policy created programmatically using the SDK/REST API (i.e. not with the Encryption command).

The reason behind this update is that the Azure Portal relies on the delivery policies created with the Encryption command. If you create a custom one, it might not cover the delivery protocols used by the Azure Portal. Therefore, to avoid any potential playback issues that could be misleading (users might think that there is an issue in the dynamic encryption configuration), the Play command is now disabled in this scenario.

Play command disabled with custom delivery policies

 

Added support for Streaming Endpoints advanced configuration

The Configure sub-tab for streaming endpoints now has three new sections:

  • Cross Domain Policy: This section lets you specify the cross domain access policy for Adobe Flash clients. For more information, see Cross-domain policy file specification.
  • Client Access Policy: This section lets you specify the client access policy for Microsoft Silverlight clients. For more information, see Making a Service Available Across Domain Boundaries.
  • Custom Host Names: This section lets you configure a streaming endpoint to accept traffic directed to a custom host name. This allows for easier traffic management configuration through a Global Traffic Manager (GTM), and also for branded domain names to be used for streaming. To perform CRUD operations on the custom host names, click the Configure button to open the Manage Custom Host Names dialog.

Please take into account that these sections (along with the rest) are only enabled for editing when the streaming endpoint has at least one streaming unit.

Streaming Endpoint advanced configuration

Manage streaming endpoint custom host names 

 

Enjoy!

Connecting to Microsoft Azure Media Services REST API

If you are a .NET/C# developer, you can consume all the Azure Media Services features using the .NET SDK. It is available as a NuGet package which you can easily get from Visual Studio, and its source code can be found in GitHub. There is also an Extensions .NET SDK that you can use to simplify the implementation of common media workflows (also available as a NuGet package and in GitHub).

If you are working on a platform/language with no official SDK currently available, you can still leverage Azure Media Services by implementing your own client for the Azure Media Services REST API. Luckily, this is not a complex task: the REST API is straightforward in terms of design, supports JSON/XML formats, and is very well documented in MSDN, so you only need a standard HTTP client and you are good to go :)

In the latter case, you will need to perform a few connection steps before accessing the Azure Media Services REST API:

  1. Get the access token (SWT format) from the ACS instance by providing the Media Services account name and key. You can find more details here.
  2. Send an authenticated GET request (with the bearer access token) to https://media.windows.net/ and you will receive a 301 Moved Permanently response with the REST API endpoint in the Location header. You can find more details here.
  3. Use the bearer access token and the new endpoint for sending all your subsequent requests to the REST API.

A common mistake when performing step #2 is not disabling the auto-redirect logic in the HTTP client used to send the request. This feature is likely to be enabled by default for GET requests in a standard HTTP client, so it would follow the redirection automatically, send a second request to the new location (similar to https://wamsbayclus001rest-hs.cloudapp.net/api/) and get a second response without the Location header and a body as follows:

{  
   "d":{  
      "EntitySets":[  
         "AccessPolicies",
         "Locators",
         "ContentKeys",
         "ContentKeyAuthorizationPolicyOptions",
         "ContentKeyAuthorizationPolicies",
         "Files",
         "Assets",
         "AssetDeliveryPolicies",
         "IngestManifestFiles",
         "IngestManifestAssets",
         "IngestManifests",
         "StorageAccounts",
         "Tasks",
         "NotificationEndPoints",
         "Jobs",
         "TaskTemplates",
         "JobTemplates",
         "MediaProcessors",
         "EncodingReservedUnitTypes",
         "Operations",
         "StreamingEndpoints",
         "Channels",
         "Programs"
      ]
   }
}

Of course, this error will prevent you from continuing with step #3 to actually access the Azure Media Services resources you want. Therefore, as a summary, if you are building your own Azure Media Services client, do not use auto-redirection logic in your HTTP client configuration when connecting to the REST API.
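Under stated assumptions, the handshake above can be sketched with a few plain standard-library helpers. The ACS URL, the `x-ms-version` value and the account credentials below are illustrative placeholders, and the actual network calls are left to whatever HTTP client you use (with auto-redirects disabled, as discussed):

```python
# Sketch of the Azure Media Services REST connection handshake using only the
# Python standard library. URLs, header versions and account values are
# assumptions for illustration.
import urllib.parse

ACS_TOKEN_URL = "https://wamsprodglobal001acs.accesscontrol.windows.net/v2/OAuth2-13"
API_ENTRY_POINT = "https://media.windows.net/"

def build_acs_token_request(account_name, account_key):
    """Step 1: form-encoded body of the POST that obtains the ACS access token."""
    return urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": account_name,
        "client_secret": account_key,
        "scope": "urn:WindowsAzureMediaServices",
    })

def auth_headers(access_token):
    """Headers to send on every authenticated request to the REST API."""
    return {
        "Authorization": "Bearer " + access_token,
        "x-ms-version": "2.11",  # adjust to the API version you target
        "DataServiceVersion": "3.0",
        "MaxDataServiceVersion": "3.0",
        "Accept": "application/json",
    }

def endpoint_from_redirect(status_code, response_headers):
    """Step 2: read the real endpoint from the 301 response's Location header.
    This only works when auto-redirects are disabled on the HTTP client."""
    if status_code != 301:
        raise ValueError("expected 301 Moved Permanently from " + API_ENTRY_POINT)
    return response_headers["Location"]
```

All subsequent requests (step #3) then go to the returned endpoint, reusing the same bearer headers.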

Happy coding!

Microsoft Azure Management Portal updates for Azure Media Services

Last week, the Azure Media Services team published some updates in the Azure Management Portal that include several improvements and new features. In this post you will find a quick summary of what has changed and what is new.

 

Content Protection tab

As part of the General Availability announcement of the Content Protection (AES and DRM) service, the following changes were applied in the Content Protection tab:

  • The Preview tag next to the Content Protection tab was removed since the service is now in General Availability. This also means that you no longer need to sign up for the PlayReady license service via the Azure Preview features page; this feature is now active in all the Media Services accounts.
  • The Branding Reporting table at the bottom is now optional; this means that the PlayReady license service is always enabled.

Content Protection tab

 

Content tab

These are the improvements and new features that were added in the Content tab:

  • The Encode command was renamed Process, since you can now select another media processor (not just the Azure Media Encoder).
  • The Process dialog (formerly Encode) was refactored as follows:
    • The encoding configuration for Azure Media Encoder was reorganized into these categories to make it clearer for users:
      • Presets for Adaptive Streaming (dynamic packaging): produces multi-bitrate MP4 files ready for dynamic packaging
      • Presets for Progressive Download: produces a single MP4 file ready for progressive download
      • Legacy Presets for Adaptive Streaming: produces Smooth Streaming files ready for adaptive streaming
      • Other (common workflows).
    • Since the General Availability announcement of the Azure Media Indexer back in September 2014, you can now select the Azure Media Indexer processor to submit an indexing job. You can also enter the optional title and description metadata fields for the task configuration preset, and choose whether to copy the generated TTML caption file back to the input asset. For more information about the Azure Media Indexer, you can check this blog post.
  • The Publish and Play commands now notify the user if the Media Services account does not have any streaming units when the content is dynamically encrypted (AES or PlayReady). Remember that the dynamic packaging and dynamic encryption features will only work in streaming endpoints that have at least one streaming unit.
  • The Publish command now creates a SAS locator (for progressive download) instead of an Origin locator (for adaptive streaming) when the asset contains a single MP4 file. This is the case, for example, when you generate an asset by submitting a job using any of the H264 Broadband * encoding configuration presets.

Content tab

 

Channels and Programs tabs

With the AES encryption and PlayReady encryption announcements for the Live Streaming service back in November/December 2014, the Azure Media Services team added support for configuring these features from the portal:

  • The Start Streaming command in the Channels tab now includes three sub-commands:
    • Unencrypted: creates a channel program with default settings, publishes the associated asset with an Origin locator and starts the program.
    • AES Encrypted: same as Unencrypted command, but it also adds an asset delivery policy for AES dynamic encryption.
    • PlayReady Encrypted: same as Unencrypted command, but it also adds an asset delivery policy for PlayReady dynamic encryption.
  • The Create a new program dialog now includes the Streaming Encryption Type setting:
    • Unencrypted: creates a program with custom settings.
    • AES Encrypted: same as Unencrypted command, but it also adds an asset delivery policy for AES dynamic encryption.
    • PlayReady Encrypted: same as Unencrypted command, but it also adds an asset delivery policy for PlayReady dynamic encryption.
  • The Programs table now has a new Encryption column specifying the streaming encryption type.

Channels and Programs tabs

 

Enjoy!

Azure Event Hubs, the Thing and the Internet

I have things, and I have Internet. Now I want to Internet my things…

I recently started playing with Azure Event Hubs and wanted to build a basic IoT sample with a real device and learn lots of new stuff along the way. If you haven’t heard of it before, Event Hubs is a “highly scalable publish-subscribe ingestor that can intake millions of events per second so that you can process and analyze the massive amounts of data produced by your connected devices and applications.”

In this post, the objective is to analyze Kinect sensor readings in real time using Azure Event Hubs and monitor meaningful events in a website.

The scenario

I want to know in real time(ish) when a security breach is produced. In my fictitious scenario, a security breach occurs when someone gets too close to a person (have you seen The Island?) or a guarded object (if movies teach us anything, it’s that a laser grid alone won’t cut it; I don’t have access to a laser grid, either). OK, so basically, I need to walk around with a Kinect strapped to my chest, or strap a Kinect to a Fabergé egg.

I set out to do the following:

  • Dump the Kinect’s depth sensor readings in an event hub.
  • Analyze the stream of events using a worker role in real time (using the EventProcessorHost approach) and produce a proximity alert whenever I get too close to the sensor (say, 50cm).
  • Display the proximity alerts in a monitoring website as they occur.

You can check out the progress here. It requires a Kinect, the Kinect SDK (1.8) and a Microsoft Azure account.

The big picture

Architecture diagram

 

The Sample Solution

I’ll make a brief overview of the projects in the sample.

The DepthRecorder project: It is basically the Depth-Basics sample of the Kinect SDK with a little twist. Every time a frame is received, I obtain the distance of the closest pixel of the frame, prepare a new KinectEvent object and send it to the event hub. At roughly 30 fps, it’s fun to watch how events quickly start to accumulate. (Check out the Service Bus Explorer tool, too!)
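As a rough illustration (not the actual C# sample code), the per-frame work reduces to finding the closest valid depth reading and wrapping it in an event payload; the field names here are hypothetical:

```python
# Hypothetical sketch of the DepthRecorder's per-frame step. Kinect depth
# frames are arrays of millimeter distances, where 0 means "unknown".
def closest_distance_mm(depth_frame):
    """Return the smallest valid (non-zero) depth in the frame, or None."""
    valid = [d for d in depth_frame if d > 0]
    return min(valid) if valid else None

def make_kinect_event(device_id, depth_frame, timestamp):
    """Build the payload sent to the event hub for one frame."""
    return {
        "deviceId": device_id,
        "closestDepthMm": closest_distance_mm(depth_frame),
        "timestamp": timestamp,
    }
```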

The Cloud project: This project contains two worker roles. The EventsConsumer worker is responsible for analyzing the stream of events and producing a proximity alert whenever an event reports a depth reading of 50 cm or less. As alerts appear, the worker role dumps them in a Service Bus queue. The AlertsListener worker is responsible for reading the alerts queue and pushing alert messages down to the monitoring website (using SignalR).
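The consumer’s core rule is simple: flag anything at or under the threshold. A sketch, with hypothetical field names (the real worker reads KinectEvent objects from the event hub via EventProcessorHost):

```python
# Sketch of the EventsConsumer rule: flag any reading of 50 cm or less.
ALERT_THRESHOLD_MM = 500  # 50 cm, per the scenario

def proximity_alerts(events):
    """Yield an alert message for every event at or under the threshold."""
    for event in events:
        depth = event["closestDepthMm"]
        if depth is not None and depth <= ALERT_THRESHOLD_MM:
            yield {"deviceId": event["deviceId"], "depthMm": depth}
```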

The Web project: It’s an MVC project that receives the SignalR messages and displays a big red alert on screen as the alerts occur. For the time being, it won’t distinguish alert sources, but we’ll work on that :)

 

Work in progress

This is probably not the simplest approach, but I wanted to separate and decouple the components as much as possible, because I wanted to use queues and because it’s fun :). So, for simplicity’s sake, we could get rid of the alerts queue and the alerts listener worker, and push alerts directly to the website.

 

Further reading

Check out the following links for further reading:

Cloud Cover Episode 160

ScottGu’s blog post with links to more information

Clemens Vasters’ session at TechEd

 

So there you go. Happy Learning!