Azure Media Services updates in new Azure portal

Azure Portal

Last month, the Azure Media Services team announced the availability of Azure Media Services functionality through portal.azure.com (codename Ibiza) in public preview; you can check all the details in this blog post by @mingfeiy. Since that announcement, the new portal has received more features, enhancements and several bug fixes.

In this post you will find a quick summary of what is new and what has changed in the new portal for Azure Media Services.

 

Add FairPlay DRM protection onto both VOD and live stream

You can now configure the default content key authorization policy for FairPlay DRM from the Content Protection blade by providing the App Certificate (a .pfx file containing the private key along with its password) and the Application Secret Key (ASK) received when you generate the certificate through the Apple Developer portal.

Content Protection blades for FairPlay DRM

After configuring the default content key authorization policy for FairPlay DRM, two new encryption options become available for both VOD and live stream assets.

  • PlayReady and Widevine with MPEG-DASH + FairPlay with HLS
  • FairPlay only with HLS

Adding FairPlay delivery policy in a VOD asset

Adding FairPlay delivery policy in a live stream asset

For more details about FairPlay Streaming (FPS) support, you can check this blog post: Stream premium content to Apple TV with Azure Media Services.

 

Enable/Disable CDN in Streaming Endpoint

After creating a streaming endpoint, you can now enable/disable the CDN feature from the Streaming Endpoint Details blade. Remember that, in order to enable the CDN feature, the streaming endpoint must be in the Stopped state and have at least one streaming unit; a sketch of the equivalent SDK calls is shown after the screenshot below.

Streaming Endpoint Details blade
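If you prefer to script this change instead of clicking through the portal, a minimal sketch using the Media Services .NET SDK (windowsazure.mediaservices package) could look like the following. This only illustrates the same rule described above and is not the portal's implementation; the endpoint name "myendpoint" and the credential placeholders are assumptions.

using System.Linq;
using Microsoft.WindowsAzure.MediaServices.Client;

class EnableCdnSample
{
    static void Main()
    {
        // Replace the placeholders with your Media Services credentials
        var context = new CloudMediaContext(
            new MediaServicesCredentials("%account-name%", "%account-key%"));

        // "myendpoint" is a placeholder for your streaming endpoint name
        var streamingEndpoint = context.StreamingEndpoints
            .Where(e => e.Name == "myendpoint")
            .FirstOrDefault();

        // The CDN flag can only be changed while the endpoint is stopped,
        // and the endpoint must have at least one streaming unit
        if (streamingEndpoint.State == StreamingEndpointState.Running)
        {
            streamingEndpoint.Stop();
        }

        streamingEndpoint.CdnEnabled = true;
        streamingEndpoint.Update();

        streamingEndpoint.Start();
    }
}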

 

Improve delete Channel experience

In order to delete a channel, the Azure Media Services REST API validates that:

  • The channel is in Stopped state 
  • The channel does not contain any programs

Originally, the Delete command was only enabled for channels satisfying both conditions. To make things easier for Azure portal users, it is now always enabled and takes care of performing these operations when necessary (a sketch of the equivalent logic follows the list):

  • Stop all the programs in the channel
  • Delete all the programs in the channel
  • Stop the channel
  • Delete the channel
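For reference, this is roughly what the portal now does on your behalf, sketched with the Media Services .NET SDK. It is only an approximation under the assumption that your channel is named "mychannel" (a placeholder), not the portal's actual code.

using System.Linq;
using Microsoft.WindowsAzure.MediaServices.Client;

class DeleteChannelSample
{
    static void Main()
    {
        // Replace the placeholders with your Media Services credentials
        var context = new CloudMediaContext(
            new MediaServicesCredentials("%account-name%", "%account-key%"));

        var channel = context.Channels.Where(c => c.Name == "mychannel").FirstOrDefault();

        // Stop and delete all the programs in the channel
        foreach (var program in channel.Programs.ToList())
        {
            if (program.State == ProgramState.Running)
            {
                program.Stop();
            }

            program.Delete();
        }

        // Stop the channel and then delete it
        if (channel.State == ChannelState.Running)
        {
            channel.Stop();
        }

        channel.Delete();
    }
}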

Channel blade

 

Show ‘Account ID’ property in the Summary and Properties blades

The Media Services ‘Account ID’ is now available in both the Summary and Properties blades. This value is useful, for example, when you want to submit a support request through the Azure portal and uniquely identify your account.

Summary and Properties blades

 

Bug Fixes

  • Create Media Services Account blade: Account Name availability validation fails when the user has a disabled subscription
  • Create Media Services Account blade: Location dropdown is empty for some subscriptions
  • Asset blade: Encrypt command does not get automatically enabled when the asset type changes after an encoding job finishes
  • Publish the Asset blade: Notification error message when scaling a streaming endpoint while creating a streaming locator
  • Asset Media Analytics blade: Remove frame limit for Azure Media Hyperlapse Media Processor
  • Create a new Channel wizard (Custom Create): Channel creation fails when ingest streaming protocol is set to RTP/MPEG-2
  • Create a new Channel wizard (Custom Create): Wrong ingest streaming protocol set when creating multiple channels
  • Create a Live Event blade: Binding issue in Archive Window field
  • Streaming Endpoint Details blade: Streaming units max limit is 5 but it should be 10
  • Streaming Endpoint Settings blade: Setting an entry in the Streaming Allowed IP Addresses table breaks media streaming
  • Media Services blades fail to load in Safari for Mac
  • Delete operations do not work after performing an update on the same entity

 

Enjoy!

Microsoft Azure Media Services SDK for Java v0.9.1 released with support for Widevine dynamic encryption

microsoft-azure-java-sdk-for-media-services

Last Friday, the Azure SDK team published a new release of the Azure SDK for Java Maven packages; you can find the full list at http://search.maven.org/#search|ga|1|com.microsoft.azure. In particular, there was a minor new release (v0.9.1) of the Azure Media Services SDK for Java that adds support for Widevine (DRM) Dynamic Common Encryption and License Delivery Service; the change log is listed below.

If you want to use the Java SDK in your Maven project, you just need to add the “azure-media” dependency in your pom.xml file as follows:

<dependency>
    <groupId>com.microsoft.azure</groupId>
    <artifactId>azure-media</artifactId>
    <version>0.9.1</version>
</dependency>

 

To demonstrate the new Java SDK features, I created the azure-media-dynamic-encryption-playready-widevine sample console Java application that contains a VOD end-to-end workflow that uses PlayReady and Widevine (DRM) Dynamic Common Encryption and the License Delivery Service for playback. It is based on the .NET sample explained in this documentation article: https://azure.microsoft.com/documentation/articles/media-services-protect-with-drm/.

You can access the full source code of this sample at: https://github.com/southworkscom/azure-sdk-for-media-services-java-samples/tree/master/azure-media-dynamic-encryption-playready-widevine.

Media Services SDK for Java sample projects in Eclipse

 

v0.9.1 Change Log

 

Enjoy!

New Microsoft Azure Media Services SDK for PHP release available with New features and samples

Azure Media Services SDK for PHP

Last week the Azure SDK team published a new release of the Azure SDK for PHP package that contains updates and new features for Microsoft Azure Media Services. In particular, the Azure Media Services SDK for PHP now supports the latest Content Protection features (AES and DRM – both PlayReady and Widevine – dynamic encryption with and without Token restriction), and listing/scaling Encoding Units. This release also includes three new PHP samples that show how to use these new features; below you can find the full change log with all the details about these updates.

In this post, I’ll focus on explaining how to use one of these new features: implementing a VOD workflow that applies PlayReady and Widevine (DRM systems) with Dynamic Common Encryption (CENC) using a token restriction for license delivery.

  1. Make sure you have PEAR and Composer properly installed and configured (php.ini) in your local development box.
  2. Add the following dependencies in the composer.json file in the root of your project.
    "repositories": [
    {
    "type": "pear",
    "url": "http://pear.php.net",
    "vendor-alias": "pear-pear2.php.net"
    }
    ],
    "require": {
    "pear-pear.php.net/HTTP_Request2": "0.4.0",
    "pear-pear.php.net/mail_mime": "*",
    "pear-pear.php.net/mail_mimedecode": "*",
    "firebase/php-jwt": "^3.0",
    "microsoft/windowsazure": "dev-master"
    }

  3. In your index.php main file include the autoload.php file generated by Composer to load all the dependencies, and add the use statements for the required namespaces.
    require_once 'vendor/autoload.php';

    use WindowsAzure\Common\ServicesBuilder;
    use WindowsAzure\Common\Internal\MediaServicesSettings;
    use WindowsAzure\Common\Internal\Utilities;
    use WindowsAzure\MediaServices\Models\Asset;
    use WindowsAzure\MediaServices\Models\AccessPolicy;
    use WindowsAzure\MediaServices\Models\Locator;
    use WindowsAzure\MediaServices\Models\Task;
    use WindowsAzure\MediaServices\Models\Job;
    use WindowsAzure\MediaServices\Models\TaskOptions;
    use WindowsAzure\MediaServices\Models\ContentKey;
    use WindowsAzure\MediaServices\Models\ProtectionKeyTypes;
    use WindowsAzure\MediaServices\Models\ContentKeyTypes;
    use WindowsAzure\MediaServices\Models\ContentKeyAuthorizationPolicy;
    use WindowsAzure\MediaServices\Models\ContentKeyAuthorizationPolicyOption;
    use WindowsAzure\MediaServices\Models\ContentKeyAuthorizationPolicyRestriction;
    use WindowsAzure\MediaServices\Models\ContentKeyDeliveryType;
    use WindowsAzure\MediaServices\Models\ContentKeyRestrictionType;
    use WindowsAzure\MediaServices\Models\AssetDeliveryPolicy;
    use WindowsAzure\MediaServices\Models\AssetDeliveryProtocol;
    use WindowsAzure\MediaServices\Models\AssetDeliveryPolicyType;
    use WindowsAzure\MediaServices\Models\AssetDeliveryPolicyConfigurationKey;
    use WindowsAzure\MediaServices\Templates\PlayReadyLicenseResponseTemplate;
    use WindowsAzure\MediaServices\Templates\PlayReadyLicenseTemplate;
    use WindowsAzure\MediaServices\Templates\PlayReadyLicenseType;
    use WindowsAzure\MediaServices\Templates\MediaServicesLicenseTemplateSerializer;
    use WindowsAzure\MediaServices\Templates\WidevineMessage;
    use WindowsAzure\MediaServices\Templates\AllowedTrackTypes;
    use WindowsAzure\MediaServices\Templates\ContentKeySpecs;
    use WindowsAzure\MediaServices\Templates\RequiredOutputProtection;
    use WindowsAzure\MediaServices\Templates\Hdcp;
    use WindowsAzure\MediaServices\Templates\TokenRestrictionTemplateSerializer;
    use WindowsAzure\MediaServices\Templates\TokenRestrictionTemplate;
    use WindowsAzure\MediaServices\Templates\SymmetricVerificationKey;
    use WindowsAzure\MediaServices\Templates\TokenClaim;
    use WindowsAzure\MediaServices\Templates\TokenType;
    use WindowsAzure\MediaServices\Templates\WidevineMessageSerializer;

  4. Create a rest proxy instance for the Azure Media Services REST API.
    // Replace the placeholders with your Media Services credentials
    $restProxy = ServicesBuilder::getInstance()->createMediaServicesService(new MediaServicesSettings("%account-name%", "%account-key%"));

  5. Create a new asset using your mezzanine source file.
    // Replace the placeholder with your mezzanine file name and path
    $sourceAsset = uploadFileAndCreateAsset($restProxy, "%source-mezzanine-file.mp4%");

    function uploadFileAndCreateAsset($restProxy, $mezzanineFileName) {
        // Create an empty "Asset" by specifying the name
        $asset = new Asset(Asset::OPTIONS_NONE);
        $asset->setName("Mezzanine " . $mezzanineFileName);
        $asset = $restProxy->createAsset($asset);
        $assetId = $asset->getId();

        print "Asset created: name=" . $asset->getName() . " id=" . $assetId . "\r\n";

        // Create an Access Policy with Write permissions
        $accessPolicy = new AccessPolicy('UploadAccessPolicy');
        $accessPolicy->setDurationInMinutes(60.0);
        $accessPolicy->setPermissions(AccessPolicy::PERMISSIONS_WRITE);
        $accessPolicy = $restProxy->createAccessPolicy($accessPolicy);

        // Create a SAS Locator for the Asset
        $sasLocator = new Locator($asset, $accessPolicy, Locator::TYPE_SAS);
        $sasLocator->setStartTime(new \DateTime('now -5 minutes'));
        $sasLocator = $restProxy->createLocator($sasLocator);

        // Get the mezzanine file content
        $fileContent = file_get_contents($mezzanineFileName);

        print "Uploading...\r\n";

        // Perform a multi-part upload using the Block Blobs REST API storage operations
        $restProxy->uploadAssetFile($sasLocator, $mezzanineFileName, $fileContent);

        // Generate the asset files metadata
        $restProxy->createFileInfos($asset);

        print "File uploaded: size=" . strlen($fileContent) . "\r\n";

        // Delete the SAS Locator (and Access Policy) for the Asset
        $restProxy->deleteLocator($sasLocator);
        $restProxy->deleteAccessPolicy($accessPolicy);

        return $asset;
    }

  6. Submit a transcoding job for the source asset to generate a multi-bitrate output asset suitable for adaptive streaming.
    $encodedAsset = encodeToAdaptiveBitrateMP4Set($restProxy, $sourceAsset);

    function encodeToAdaptiveBitrateMP4Set($restProxy, $asset) {
        // Retrieve the latest 'Media Encoder Standard' processor version
        $mediaProcessor = $restProxy->getLatestMediaProcessor('Media Encoder Standard');

        print "Using Media Processor: {$mediaProcessor->getName()} version {$mediaProcessor->getVersion()}\r\n";

        // Create the Job; this automatically schedules and runs it
        $outputAssetName = "Encoded " . $asset->getName();
        $outputAssetCreationOption = Asset::OPTIONS_NONE;
        $taskBody = '<?xml version="1.0" encoding="utf-8"?><taskBody><inputAsset>JobInputAsset(0)</inputAsset><outputAsset assetCreationOptions="' . $outputAssetCreationOption . '" assetName="' . $outputAssetName . '">JobOutputAsset(0)</outputAsset></taskBody>';

        $task = new Task($taskBody, $mediaProcessor->getId(), TaskOptions::NONE);
        $task->setConfiguration('H264 Multiple Bitrate 720p');

        $job = new Job();
        $job->setName('Encoding Job');

        $job = $restProxy->createJob($job, array($asset), array($task));

        print "Created Job with Id: {$job->getId()}\r\n";

        // Check to see if the Job has completed
        $result = $restProxy->getJobStatus($job);

        $jobStatusMap = array('Queued', 'Scheduled', 'Processing', 'Finished', 'Error', 'Canceled', 'Canceling');

        while($result != Job::STATE_FINISHED && $result != Job::STATE_ERROR && $result != Job::STATE_CANCELED) {
            print "Job status: {$jobStatusMap[$result]}\r\n";
            sleep(5);
            $result = $restProxy->getJobStatus($job);
        }

        if ($result != Job::STATE_FINISHED) {
            print "The job has finished with a wrong status: {$jobStatusMap[$result]}\r\n";
            exit(-1);
        }

        print "Job Finished!\r\n";

        // Get output asset
        $outputAssets = $restProxy->getJobOutputMediaAssets($job);
        $encodedAsset = $outputAssets[0];

        print "Asset encoded: name={$encodedAsset->getName()} id={$encodedAsset->getId()}\r\n";

        return $encodedAsset;
    }

  7. Create a new Common Encryption content key and link it to the multi-bitrate output asset.
    $contentKey = createCommonTypeContentKey($restProxy, $encodedAsset);

    function createCommonTypeContentKey($restProxy, $encodedAsset) {
        // Generate a new content key
        $keyValue = Utilities::generateCryptoKey(16);

        // Get the protection key id for content key
        $protectionKeyId = $restProxy->getProtectionKeyId(ContentKeyTypes::COMMON_ENCRYPTION);
        $protectionKey = $restProxy->getProtectionKey($protectionKeyId);

        $contentKey = new ContentKey();
        $contentKey->setContentKey($keyValue, $protectionKey);
        $contentKey->setProtectionKeyId($protectionKeyId);
        $contentKey->setProtectionKeyType(ProtectionKeyTypes::X509_CERTIFICATE_THUMBPRINT);
        $contentKey->setContentKeyType(ContentKeyTypes::COMMON_ENCRYPTION);

        // Create the ContentKey
        $contentKey = $restProxy->createContentKey($contentKey);

        print "Content Key id={$contentKey->getId()}\r\n";

        // Associate the content key with the asset
        $restProxy->linkContentKeyToAsset($encodedAsset, $contentKey);

        return $contentKey;
    }

  8. Create a new content key authorization policy with PlayReady and Widevine options using Token restriction, and link it to the content key.
    // You can also use TokenType::SWT 
    $tokenTemplateString = addTokenRestrictedAuthorizationPolicy($restProxy, $contentKey, TokenType::JWT);

    function addTokenRestrictedAuthorizationPolicy($restProxy, $contentKey, $tokenType) {
        // Create content key authorization policy restriction (Token)
        $tokenRestriction = generateTokenRequirements($tokenType);
        $restriction = new ContentKeyAuthorizationPolicyRestriction();
        $restriction->setName('Content Key Authorization Policy Restriction');
        $restriction->setKeyRestrictionType(ContentKeyRestrictionType::TOKEN_RESTRICTED);
        $restriction->setRequirements($tokenRestriction);

        // Configure PlayReady and Widevine license templates.
        $playReadyLicenseTemplate = configurePlayReadyLicenseTemplate();
        $widevineLicenseTemplate = configureWidevineLicenseTemplate();

        // Create content key authorization policy option (PlayReady)
        $playReadyOption = new ContentKeyAuthorizationPolicyOption();
        $playReadyOption->setName('PlayReady Authorization Policy Option');
        $playReadyOption->setKeyDeliveryType(ContentKeyDeliveryType::PLAYREADY_LICENSE);
        $playReadyOption->setRestrictions(array($restriction));
        $playReadyOption->setKeyDeliveryConfiguration($playReadyLicenseTemplate);
        $playReadyOption = $restProxy->createContentKeyAuthorizationPolicyOption($playReadyOption);

        // Create content key authorization policy option (Widevine)
        $widevineOption = new ContentKeyAuthorizationPolicyOption();
        $widevineOption->setName('Widevine Authorization Policy Option');
        $widevineOption->setKeyDeliveryType(ContentKeyDeliveryType::WIDEVINE);
        $widevineOption->setRestrictions(array($restriction));
        $widevineOption->setKeyDeliveryConfiguration($widevineLicenseTemplate);
        $widevineOption = $restProxy->createContentKeyAuthorizationPolicyOption($widevineOption);

        // Create content key authorization policy
        $ckapolicy = new ContentKeyAuthorizationPolicy();
        $ckapolicy->setName('Content Key Authorization Policy');
        $ckapolicy = $restProxy->createContentKeyAuthorizationPolicy($ckapolicy);

        // Link the PlayReady and Widevine options to the content key authorization policy
        $restProxy->linkOptionToContentKeyAuthorizationPolicy($playReadyOption, $ckapolicy);
        $restProxy->linkOptionToContentKeyAuthorizationPolicy($widevineOption, $ckapolicy);

        // Associate the authorization policy with the content key
        $contentKey->setAuthorizationPolicyId($ckapolicy->getId());
        $restProxy->updateContentKey($contentKey);

        print "Added Content Key Authorization Policy: name={$ckapolicy->getName()} id={$ckapolicy->getId()}\r\n";

        return $tokenRestriction;
    }

    function generateTokenRequirements($tokenType) {
        $template = new TokenRestrictionTemplate($tokenType);

        $template->setPrimaryVerificationKey(new SymmetricVerificationKey());
        $template->setAudience("urn:contoso");
        $template->setIssuer("https://sts.contoso.com");
        $claims = array();
        $claims[] = new TokenClaim(TokenClaim::CONTENT_KEY_ID_CLAIM_TYPE);
        $template->setRequiredClaims($claims);

        return TokenRestrictionTemplateSerializer::serialize($template);
    }

    function configurePlayReadyLicenseTemplate() {
        $responseTemplate = new PlayReadyLicenseResponseTemplate();

        $licenseTemplate = new PlayReadyLicenseTemplate();
        $licenseTemplate->setLicenseType(PlayReadyLicenseType::NON_PERSISTENT);
        $licenseTemplate->setAllowTestDevices(true);
        $responseTemplate->setLicenseTemplates(array($licenseTemplate));

        return MediaServicesLicenseTemplateSerializer::serialize($responseTemplate);
    }

    function configureWidevineLicenseTemplate() {
        $template = new WidevineMessage();
        $template->allowed_track_types = AllowedTrackTypes::SD_HD;

        $contentKeySpecs = new ContentKeySpecs();
        $contentKeySpecs->required_output_protection = new RequiredOutputProtection();
        $contentKeySpecs->required_output_protection->hdcp = Hdcp::HDCP_NONE;
        $contentKeySpecs->security_level = 1;
        $contentKeySpecs->track_type = "SD";
        $template->content_key_specs = array($contentKeySpecs);

        $policyOverrides = new \stdClass();
        $policyOverrides->can_play = true;
        $policyOverrides->can_persist = true;
        $policyOverrides->can_renew = false;
        $template->policy_overrides = $policyOverrides;

        return WidevineMessageSerializer::serialize($template);
    }

  9. Create a new asset delivery policy for PlayReady and Widevine dynamic common encryption over the MPEG-DASH streaming protocol, and link it to the multi-bitrate output asset.
    createAssetDeliveryPolicy($restProxy, $encodedAsset, $contentKey);

    function createAssetDeliveryPolicy($restProxy, $encodedAsset, $contentKey) {
        // Get the license acquisition URLs
        $playReadyUrl = $restProxy->getKeyDeliveryUrl($contentKey, ContentKeyDeliveryType::PLAYREADY_LICENSE);
        $widevineUrl = $restProxy->getKeyDeliveryUrl($contentKey, ContentKeyDeliveryType::WIDEVINE);

        // Generate the asset delivery policy configuration
        $configuration = [
            AssetDeliveryPolicyConfigurationKey::PLAYREADY_LICENSE_ACQUISITION_URL => $playReadyUrl,
            AssetDeliveryPolicyConfigurationKey::WIDEVINE_LICENSE_ACQUISITION_URL => $widevineUrl
        ];
        $confJson = AssetDeliveryPolicyConfigurationKey::stringifyAssetDeliveryPolicyConfiguartionKey($configuration);

        // Create the asset delivery policy
        $adpolicy = new AssetDeliveryPolicy();
        $adpolicy->setName('Asset Delivery Policy');
        $adpolicy->setAssetDeliveryProtocol(AssetDeliveryProtocol::DASH);
        $adpolicy->setAssetDeliveryPolicyType(AssetDeliveryPolicyType::DYNAMIC_COMMON_ENCRYPTION);
        $adpolicy->setAssetDeliveryConfiguration($confJson);

        $adpolicy = $restProxy->createAssetDeliveryPolicy($adpolicy);

        // Link the delivery policy to the asset
        $restProxy->linkDeliveryPolicyToAsset($encodedAsset, $adpolicy->getId());

        print "Added Asset Delivery Policy: name={$adpolicy->getName()} id={$adpolicy->getId()}\r\n";
    }

  10. Publish the multi-bitrate output asset with an origin locator to generate the base streaming URL.
    publishEncodedAsset($restProxy, $encodedAsset);

    function publishEncodedAsset($restProxy, $encodedAsset) {
        // Get the .ism asset file (the streaming manifest)
        $files = $restProxy->getAssetAssetFileList($encodedAsset);
        $manifestFile = null;

        foreach($files as $file) {
            if (endsWith(strtolower($file->getName()), '.ism')) {
                $manifestFile = $file;
            }
        }

        if ($manifestFile == null) {
            print "Unable to find the manifest file\r\n";
            exit(-1);
        }

        // Create a 30-day read-only access policy
        $access = new AccessPolicy("Streaming Access Policy");
        $access->setDurationInMinutes(60 * 24 * 30);
        $access->setPermissions(AccessPolicy::PERMISSIONS_READ);
        $access = $restProxy->createAccessPolicy($access);

        // Create an origin locator for the asset
        $locator = new Locator($encodedAsset, $access, Locator::TYPE_ON_DEMAND_ORIGIN);
        $locator->setName("Streaming Locator");
        $locator = $restProxy->createLocator($locator);

        // Create the base streaming URL for dynamic packaging
        $streamingUrl = $locator->getPath() . $manifestFile->getName() . "/manifest";

        print "Base Streaming URL: {$streamingUrl}\r\n";
    }

    function endsWith($haystack, $needle) {
        $length = strlen($needle);
        if ($length == 0) {
            return true;
        }

        return (substr($haystack, -$length) === $needle);
    }

  11. Generate a test Token to retrieve the PlayReady/Widevine license and enable playback in Azure Media Player.
    generateTestToken($tokenTemplateString, $contentKey);

    function generateTestToken($tokenTemplateString, $contentKey) {
        $template = TokenRestrictionTemplateSerializer::deserialize($tokenTemplateString);
        $contentKeyUUID = substr($contentKey->getId(), strlen("nb:kid:UUID:"));
        $expiration = strtotime("+12 hour");
        $token = TokenRestrictionTemplateSerializer::generateTestToken($template, null, $contentKeyUUID, $expiration);

        print "Token Type {$template->getTokenType()}\r\nBearer={$token}\r\n";
    }

  12. Run the code using the following PHP command and make sure to copy the Base Streaming URL and Token values displayed in the console.
    php -d display_errors=1 index.php

  13. Try the Base Streaming URL and Token values in the Azure Media Player demo site: http://amsplayer.azurewebsites.net/. Make sure to use the Advanced Options form to set the Protection value to DRM (PlayReady and Widevine) and paste the token.

 

For more coding details about enabling PlayReady and Widevine dynamic common encryption, you can check the vodworkflow_drm_playready_widevine.php sample.

 

Change Log

 

Enjoy!

.NET Core: Cross Platform – Ubuntu Linux

In the previous posts we briefly described how .NET Core is making .NET cross-platform, available for Windows, OS X and now Linux!

If you search a little bit you’ll find different instructions to install .NET Core tooling (DNVM, DNX and DNU) on Ubuntu Linux. In this blog post you’ll find the bare minimum steps to get a Hello World sample running in Ubuntu 14.04.

For instance, the docs specify a Mono dependency to make DNU work on Linux. Apparently, this is no longer required :)

Remember, the .NET team is currently working on it and the CoreCLR is work in progress and open source!

.NET Version Manager (DNVM)

The .NET Version Manager retrieves versions of the DNX using NuGet and allows you to switch between versions when you have multiple on your machine. In short, DNVM is a set of command-line utilities to download, update and configure which .NET runtime to use.

We’ll use a little script to download and install DNVM on the machine. This script uses curl and unzip, which can be installed with the following command:

sudo apt-get install unzip curl

 

After installing the prerequisites we are ready to download and install DNVM:

curl -sSL https://raw.githubusercontent.com/aspnet/Home/dev/dnvminstall.sh | DNX_BRANCH=dev sh && source ~/.dnx/dnvm/dnvm.sh

 
DNVM_install
After the script is executed you should be ready to go and install DNX. In my case, I had to execute one additional command to source the dnvm.sh and make it available to the console. Helpfully, this hint was provided by the script itself:

Type 'source /home/user/.dnx/dnvm/dnvm.sh' to start using dnvm

 

DNVM

Once this step is complete you should be able to run DNVM and see some help text.

Again, as I said in the OS X post, from this point this post is mostly a copy of my previous .NET Core: Cross platform – Windows post. Which is nice :)

 

.NET Execution Environment (DNX)

The .NET Execution Environment (DNX) contains the code required to bootstrap and run an application. This includes things like the compilation system, SDK tools, and the native CLR hosts.

DNX provides a consistent development and execution environment across multiple platforms (Windows, Mac and Linux) and across different .NET flavors (.NET Framework, .NET Core and Mono).

It is easy to install the .NET Core version of DNX, using the DNVM install command:

dnvm install -r coreclr latest

 

You can then use dnvm to list and select the active DNX version in your system (In my case, the latest version of DNX CoreCLR is 1.0.0-beta8)

dnvm use 1.0.0-beta8 -r coreclr

dnvm list

 
DNVM_list
 

Important: This is not strictly required at this point, but before you can run your application with DNX you must install some general-purpose dependencies. You can install them using the following command:

sudo apt-get install libunwind8 libssl-dev

 

Hello world

A DNX project is simply a folder with a project.json file. The name of the project is the folder name.

Let’s first create a folder and set it as our current directory in the command line:

mkdir HelloWorld && cd HelloWorld

 

Now create a new C# file HelloWorld.cs, and paste in the code below:

using System;

public class Program {
    public static void Main(string[] args){
        Console.WriteLine("Hello World from Core CLR!");
        Console.Read();
    }
}

 

Next, we need to provide the project settings DNX will use. Create a new project.json file in the same folder, and edit it to match the listing shown here:

{
    "version": "1.0.0-*",
    "dependencies": {
    },
    "frameworks" : {
        "dnx451" : { },
        "dnxcore50" : {
            "dependencies": {
                "System.Console": "4.0.0-beta-*"
            }
        }
    }
}

 

.NET Development Utility (DNU)

DNU is a command line tool that helps with the development of applications using DNX. You can use DNU to build, package and publish DNX projects. Or, as in the following example, you can use DNU to install a new package into an existing project or to restore all package dependencies for an existing project.

 

Important: The DNU utility has some dependencies of its own. Before you can run dnu restore on a project’s packages you must install the dependency using the following command:

sudo apt-get install libcurl3-dev

 

The project.json file defines the app dependencies and target frameworks in addition to various metadata properties about the app. See Working with DNX Projects for more details.

Because .NET Core is completely factored, we need to explicitly pull in the libraries that our project depends on. We’ll run the following command to download and install all packages that are listed in the project.json:

dnu restore

 

You’ll notice that even though our project only required System.Console, several dependent libraries have been downloaded and installed as a result.

 
DNU_restore
 

DNX run the app

At this point, we’re ready to run the app. You can do this by simply entering dnx run from the command prompt. You should see a result like this one:

dnx run

 
DNX_run
 

Further reading

https://dotnet.github.io/core/getting-started/

http://docs.asp.net/en/latest/getting-started/installing-on-linux.html#installing-on-debian-ubuntu-and-derivatives

http://www.mono-project.com/docs/getting-started/install/linux/#debian-ubuntu-and-derivatives

Intro to .NET Core

.NET Core: Cross Platform – Windows

.NET Core: Cross Platform – OS X

 

Summary

IHaveNoIdeaWhatImDoing

.NET Core: Cross Platform – OS X

.NET Core is making .NET cross-platform, available for Windows, Ubuntu and OS X, with a little help from our friends: DNVM, DNU and DNX.

The .NET Execution Environment (DNX) is a software development kit (SDK) and runtime environment that has everything you need to build and run .NET applications for Windows, Mac and Linux.

.NET Version Manager (DNVM)

The .NET Version Manager retrieves versions of the DNX using NuGet and allows you to switch between versions when you have multiple on your machine. In short, DNVM is a set of command-line utilities to download, update and configure which .NET runtime to use.

While it is pretty easy to install DNVM in a Windows box, on OSX the easiest way to get DNVM is to use Homebrew.

Homebrew: Bootstrapping the Bootstrapper

Package managers are a key component that has completely changed the face of modern software development. Homebrew is an OS X package manager which makes it easy to install, upgrade and remove software packages.

In a nutshell, it uses GitHub repositories to download software (such as scripts), stores it in its own location and then creates symlinks providing easy access to it.

 

If you don’t have Homebrew installed then follow the Homebrew installation instructions. If you have the Developer tools installed you can use the following script in a terminal window:

ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

 

Once you have Homebrew up and running you can use the following commands to install DNVM:

brew tap aspnet/dnx
brew update
brew install dnvm

source dnvm.sh

 

Under the hood, Homebrew cloned the aspnet/homebrew-dnx repository and installed dnvm using the dnvm.rb script. The dnvm.sh file is a bag of functions that need to be sourced, hence the last command above. Also, it is recommended to check for and upgrade to the latest version of DNVM using the following command:

dnvm upgrade

 

DNVM

 

.NET Execution Environment (DNX)

From this point, this post is mostly a copy of my previous .NET Core: Cross platform – Windows post. Which is nice :)

DNX provides a consistent development and execution environment across multiple platforms (Windows, Mac and Linux) and across different .NET flavors (.NET Framework, .NET Core and Mono).

 

It is easy to install the .NET Core version of DNX, using the DNVM install command:

dnvm install -r coreclr latest

 

You can then use dnvm to list and select the active DNX version in your system (In my case, the latest version of DNX CoreCLR is 1.0.0-beta8)

dnvm use 1.0.0-beta8 -r coreclr

dnvm list

DNVM_use_list

 

Hello world

A DNX project is simply a folder with a project.json file. The name of the project is the folder name.

Let’s first create a folder and set it as our current directory in the command line:

mkdir HelloWorld && cd HelloWorld

 

Now create a new C# file HelloWorld.cs, and paste in the code below:

using System;

public class Program {
    public static void Main(string[] args){
        Console.WriteLine("Hello World from Core CLR!");
        Console.Read();
    }
}

 

Next, we need to provide the project settings DNX will use. Create a new project.json file in the same folder, and edit it to match the listing shown here:

{
    "version": "1.0.0-*",
    "dependencies": {
    },
    "frameworks" : {
        "dnx451" : { },
        "dnxcore50" : {
            "dependencies": {
                "System.Console": "4.0.0-beta-*"
            }
        }
    }
}

.NET Development Utility (DNU)

DNU is a command line tool that helps with the development of applications using DNX. You can use DNU to build, package and publish DNX projects. Or, as in the following example, you can use DNU to install a new package into an existing project or to restore all package dependencies for an existing project.

DNU

Important: If you are not seeing the output above you might be facing an issue with your OS X installation. At the time of writing, there is a known issue regarding a missing dependency on ICU4C. In my case, I fixed it by brewing the dependency and everything worked like a charm again.

brew install icu4c

 

The project.json file defines the app dependencies and target frameworks in addition to various metadata properties about the app. See Working with DNX Projects for more details.

Because .NET Core is completely factored, we need to explicitly pull in the libraries that our project depends on. We’ll run the following command to download and install all packages that are listed in the project.json:

dnu restore

 

You’ll notice that even though our project only required System.Console, several dependent libraries have been downloaded and installed as a result.
DNU_restore

 

DNX run the app

At this point, we’re ready to run the app. You can do this by simply entering dnx run from the command prompt. You should see a result like this one:

dnx run

DNX_run

 

Further reading

https://dotnet.github.io/core/getting-started/

http://brew.sh/

https://github.com/aspnet/homebrew-dnx

https://github.com/aspnet/dnx/issues/2875

Intro to .NET Core

.NET Core: Cross Platform – Windows

 

Summary

WhatIfIToldYouOSX

.NET Core: Cross Platform – Windows

The .NET Execution Environment (DNX) is a software development kit (SDK) and runtime environment that has everything you need to build and run .NET applications for Windows, Mac and Linux.

However, package managers are the key component that has completely changed the face of modern software development, and they are tightly tied to the main .NET Core tools: DNVM, DNU and DNX.

.NET Version Manager (DNVM)

The .NET Version Manager retrieves versions of the DNX using NuGet and allows you to switch between versions when you have multiple on your machine. In short, DNVM is a set of command-line utilities to download, update and configure which .NET runtime to use.
You can use the PowerShell script below to install DNVM. It just downloads and executes the dnvminstall.ps1 script from the aspnet/Home repo.

@powershell -NoProfile -ExecutionPolicy unrestricted -Command "&{$Branch='dev';iex ((new-object net.webclient).DownloadString('https://raw.githubusercontent.com/aspnet/Home/dev/dnvminstall.ps1'))}"

It just installs the dnvm.ps1 command line tool and adds it to the %PATH% environment variable. No version of DNX will be installed at this point. Also, it is recommended to check for and upgrade to the latest version of DNVM using the following command:

dnvm upgrade

 

DNVM

DNVM solves the bootstrapping problem of getting and selecting the correct version of the DNX to run. You’ll find that the NuGet gallery hosts cross platform versions of DNX:

.NET Execution Environment (DNX)

The .NET Execution Environment (DNX) contains the code required to bootstrap and run an application. This includes things like the compilation system, SDK tools, and the native CLR hosts.

DNX provides a consistent development and execution environment across multiple platforms (Windows, Mac and Linux) and across different .NET flavors (.NET Framework, .NET Core and Mono).

It is easy to install the .NET Core version of DNX, using the DNVM install command:

dnvm install -r coreclr latest

 
DNX

You can then use dnvm to list and select the active DNX version in your system (In my case, the latest version of DNX CoreCLR is 1.0.0-beta8)

dnvm use 1.0.0-beta8 -r coreclr

dnvm list

 

Hello world

A DNX project is simply a folder with a project.json file. The name of the project is the folder name.

Let’s first create a folder and set it as our current directory in the command line:

mkdir HelloWorld && cd HelloWorld

 

Now create a new C# file HelloWorld.cs, and paste in the code below:

using System;
 
public class Program {
    public static void Main(string[] args){
        Console.WriteLine("Hello World from Core CLR!");
        Console.Read();
    }
}

 

Next, we need to provide the project settings DNX will use. Create a new project.json file in the same folder, and edit it to match the listing shown here:

 

{
    "version": "1.0.0-*",
    "dependencies": {
    },
    "frameworks" : {
        "dnx451" : { },
        "dnxcore50" : {
            "dependencies": {
                "System.Console": "4.0.0-beta-*"
            }
        }
    }
}

.NET Development Utility (DNU)

DNU is a command line tool that helps with the development of applications using DNX. You can use DNU to build, package and publish DNX projects. Or, as in the following example, you can use DNU to install a new package into an existing project or to restore all package dependencies for an existing project.
DNU

 

The project.json file defines the app dependencies and target frameworks in addition to various metadata properties about the app. See Working with DNX Projects for more details.

Because .NET Core is completely factored, we need to explicitly pull in the libraries that our project depends on. We’ll run the following command to download and install all packages that are listed in the project.json:

dnu restore

 

You’ll notice that even though our project only required System.Console, several dependent libraries have been downloaded and installed as a result.
DNU_restore

DNX run the app

At this point, we’re ready to run the app. You can do this by simply entering dnx run from the command prompt. You should see a result like this one:

dnx run

Further reading

https://dotnet.github.io/core/getting-started/

https://github.com/aspnet/Home/wiki/Version-Manager

http://docs.asp.net/en/latest/dnx/overview.html

http://docs.asp.net/en/latest/dnx/console.html

Intro to .NET Core

.NET Core: Cross Platform – OS X

Intro to .NET Core

.NET is a general purpose development platform. It has several key features that are attractive to many developers, including automatic memory management and modern programming languages, that make it easier to efficiently build high-quality apps. Multiple implementations of .NET are available, based on open .NET Standards that specify the fundamentals of the platform.

.NET Implementations

There are various implementations of .NET, some coming from Microsoft, some coming from other companies and groups:

  • The .NET Framework is the premier implementation of the .NET Platform available for Windows server and client developers.

    There are additional stacks built on top of the .NET Framework, for example Windows Forms and Windows Presentation Foundation (WPF) for UI, Windows Communication Foundation (WCF) for middleware services and ASP.NET as a web framework.

  • Mono is an open source implementation of Microsoft’s .NET Framework based on the ECMA standards for C# and the Common Language Runtime.
  • .NET Native is the set of tools used to build .NET Universal Windows Platform (UWP) applications. .NET Native compiles C# to native machine code that performs like C++.

I’ll explain a little bit more about what .NET Core is below. But first, let’s take a look at the .NET ecosystem.

.NET Ecosystem

The .NET ecosystem is undergoing a major shift and restructuring in 2015. There are a lot of “moving pieces” that need to be tied together in order for this new ecosystem and all of the recommended scenarios to work. As you can see, this is a very vibrant and diverse ecosystem.

You might not know it, but most of these projects are currently open source and fostered by the .NET Foundation, an independent organization. Yes! These projects are open source!

 

10kft_view

A wild .NET implementation appeared!

.NET Core is a cross-platform implementation of .NET that is primarily being driven by ASP.NET 5 workloads, but also by the need and desire to have a modern runtime that is modular and whose features and libraries can be cherry picked based on the application’s needs.

It includes a small runtime that is built from the same codebase as the .NET Framework CLR. The .NET Core runtime includes the same GC and JIT (RyuJIT), but doesn’t include features like Application Domains or Code Access Security.

There are several characteristics of .NET Core:

  • Cross-platform support is the first important feature. For applications, it is important to use those platforms that will provide the best environment for their execution. Thus, having an application platform that enables the app to be run on different operating systems with minimal or no changes provides a significant boon.
  • Open Source because it is proven to be a great way to enable a larger set of platforms, supported by community contribution.
  • Better packaging story – the framework is distributed as a set of packages that developers can pick and choose from, rather than a single, monolithic platform. .NET Core is the first implementation of the .NET platform that is distributed via the NuGet package manager.
  • Better application isolation as one of the scenarios for .NET Core is to enable applications to “take” the needed runtime for their execution and deploy it with the application, not depending on shared components on the targeted machine. This plays well with the current trends of developing software and using container technologies like Docker for consolidation.
  • Modular – .NET Core is a set of runtime, library and compiler components. Microsoft uses these components in various configurations for device and cloud workloads.

NuGet as a 1st class delivery vehicle

In contrast to the .NET Framework, the .NET Core platform will be delivered as a set of NuGet packages.

Using NuGet allows for much more agile usage of the individual libraries that comprise .NET Core. It also means that an application can list a collection of NuGet packages (and associated version information) and this will comprise both system/framework as well as third-party dependencies required. Further, third-party dependencies can now also express their specific dependencies on framework features, making it much easier to ensure the proper packages and versions are pulled together during the development and build process.

If, for example, you need to use immutable collections, you can install the System.Collections.Immutable package via NuGet. The NuGet version will also align with the assembly version, and will use semantic versioning.
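For instance, here is a tiny, hypothetical console snippet showing what that looks like once the System.Collections.Immutable package has been added to your project's dependencies:

using System;
using System.Collections.Immutable;

public class ImmutableSample
{
    public static void Main()
    {
        // Every "mutation" of an immutable collection returns a new instance
        ImmutableList<string> platforms = ImmutableList.Create("Windows", "Linux", "OS X");
        ImmutableList<string> withMono = platforms.Add("Mono");

        Console.WriteLine(string.Join(", ", platforms)); // Windows, Linux, OS X
        Console.WriteLine(string.Join(", ", withMono));  // Windows, Linux, OS X, Mono
    }
}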

 

0841.Pic4

Open Source and Cross-Platform

Last year, the .NET Core main repositories were made open source: CoreFX (framework libraries) and CoreCLR (runtime) are public on GitHub. The main reasons for this are to leverage a stronger ecosystem and lay the foundation for a cross-platform .NET, and it is a natural progression of the .NET Foundation’s current open source efforts:

Moreover, as of only a few months ago (April), you can install .NET Core on Windows, Linux and OS X. Code written for it is thus also portable across application stacks, such as Mono, and across platforms, making it feasible to move applications between different environments with ease.

Windows_Linux_OSX

Further reading

http://blogs.msdn.com/b/dotnet/archive/2014/12/04/introducing-net-core.aspx

http://blogs.msdn.com/b/dotnet/archive/2014/11/12/net-core-is-open-source.aspx

http://dotnet.github.io/core/

http://dotnet.github.io/core/about/

http://dotnet.readthedocs.org/en/latest/getting-started/overview.html

http://dotnet.github.io/core/about/overview.html

.NET Core: Cross Platform – Windows

.NET Core: Cross Platform – OS X

Tips for Running a TeamCity CI Server in Microsoft Azure

Note: Before starting to read this post, read this one, where the JetBrains team (creators of TeamCity and ReSharper, among other good stuff) explains how they’ve enhanced the scalability of their own server. Many of the tips explained below are based on those articles.

In this post I’m going to share some tips for running a TeamCity server end-to-end in Microsoft Azure. This includes the server virtual machine configuration, the agent virtual machines, the database, etc. I won’t cover instructions on how to install and configure TeamCity; instead I’ll focus mainly on specific advice on how to take advantage of Azure services to achieve a better scalability and availability of the server. The key aspects will be:

Let’s get started! But let me first warn you that if you are ‘Penny Pinching in the cloud’, you’re not going to like this :)

The TeamCity Server

Virtual Machine

Not much to mention on this point. Notice that having the agents running in a separate virtual machine and the database on an external service makes it unnecessary to have super-fast hardware on the server virtual machine.

Database

With the release of SQL v12 in Azure, TeamCity can use Azure SQL database as an external database. So we’ve created a SQL Database Server with v12 enabled and then created a database in the Premium P2 tier, which also provides ‘geo-replication’ out of the box.

image

Disks and Data

TeamCity Data Directory is the directory in the file system used by TeamCity server to store configuration settings, build results and current operation files. The directory is the primary storage for all the configuration settings and holds all the data that is critical to the TeamCity installation. The build history, users and their data are stored in the database.

The server should be configured to use different disks for the TeamCity binaries and the Data Directory, as follows:

  • The TeamCity binaries are placed in the C: drive of the virtual machine (OS Disk)
  • Since the Azure virtual machine D: drive is a solid-state drive (SSD) that provides temporary storage only, it’s not advisable to store the TeamCity data directory there; instead, the data directory is placed on a separate, attached VHD disk (E:). For instructions on how to attach another disk to an Azure virtual machine see this article.

image

Java Virtual Machine (JVM)

To avoid memory warnings it’s better to use the 64-bit JVM for the TeamCity server (see instructions here). TeamCity (both server and agent) requires JRE 1.6 (or later) to operate:

image

It’s also recommended to allocate the maximum possible memory: 4 GB (more information here). This mainly implies setting the ‘TEAMCITY_SERVER_MEM_OPTS’ Windows environment variable with the following values:

-Xmx3900m -XX:MaxPermSize=300m -XX:ReservedCodeCacheSize=350m

image

It’s also advisable to monitor the memory consumption from time to time from the Diagnostics page in the Server Administration section:

image

The TeamCity Agents

For better scalability of the server, it’s advisable to run the TeamCity agents in a different Azure virtual machine. The agents should only include the TeamCity agent software, plus all the software prerequisites to run builds (Visual Studio, Webdeploy, certificates, etc.).

To avoid configuring all the software prerequisites multiple times (once for each agent), you can do it once and then create a virtual machine image. You get the side benefit of being able to use this same image for the cloud agent configuration (more on this below). You can learn how to create/use an Azure virtual machine image in this article.

When you create the new agents based on the virtual machine image, you should remember to update the agent name in the ‘C:\BuildAgent\conf\buildAgent.properties’ file (see image below) and then restart the agent (from services.msc). Otherwise, all the agents will have the same name and will conflict when registering on the server machine. Also, make sure you open port 9090 in the Windows Firewall before creating the agent image.

image

Using Cloud Agents

With the release of the Azure Integration plugin (included out-of-the-box in TeamCity 9), you can use TeamCity Cloud Agents to provision agent virtual machines on demand based on the build queue state. You can have a set of ‘fixed’ agents (that are always running) and a set of cloud agents that start when more build power is needed.

TeamCity triggers the creation of a new agent virtual machine when it has more than one build in the queue and no available agents. It can also be triggered manually (see below). The maximum number of agents created can also be configured, but it won’t create more than what the license allows. For example, to be able to scale from 3 to 6 agents using the cloud configuration, you need to have 6 TeamCity agent licenses available. At the same time, this can help you save some money in your Cloud bill, as TeamCity will delete the agents if they are idle for some time.

The requirements to configure the plug-in include:

  • Downloading the publish settings file (browse here) and uploading the management certificate you obtained from that file (text only) to the TeamCity virtual machine.
  • Configuring a cloud service that will be used to provision the virtual machines (you can create an empty cloud service)
  • Providing a virtual machine image name, the maximum number of agents to be created, the virtual machine size and a name prefix. TeamCity will use the prefix plus a number to set the virtual machine name, e.g. ‘tcbld-1’, ‘tcbld-2’.

image

image

image

For any questions, you can reach me on twitter @sebadurandeu.

Evaluating Netflix OSS tools using ZeroToDocker images in Azure

Introduction

ZeroToDocker is a project that allows anyone with a Docker host to run a single node of any Netflix OSS technology with a single command. The portability of Docker allows us to run the tools locally or in different cloud environments such as AWS or Azure. However, it is important to keep in mind that some of the Netflix OSS tools work only in AWS. In those cases, although we could start a Docker container running the application in other environments such as Azure, the tools won’t be able to provide the expected functionality.

If you are not familiar with Docker and you would like to read more about it, you can check out this blog post.

Available Docker Images

Netflix OSS provides Docker images for the following tools:

  • Genie
  • Inviso
  • Atlas
  • Asgard
  • Eureka
  • Edda
  • A Karyon based Hello World Example
  • A Zuul Proxy used to proxy to the Karyon service
  • Security Monkey
  • Exhibitor managed Zookeeper

It is important to keep in mind that these images are not intended to be used in production environments.

Additionally, as we mentioned before, some of the Netflix OSS services corresponding to the images offer functionalities associated exclusively with AWS:

  • Atlas: According to the Atlas wiki in the Zero to Docker repository, it appears that the Atlas image requires the AWS APIs in order to work.
  • Asgard: It offers a web interface for application deployments and cloud management in Amazon Web Services (AWS)
  • Edda: It polls AWS resources via AWS APIs and records the results.
  • Security Monkey: It monitors policy changes and alerts on insecure configurations in an AWS account.

Evaluating Netflix OSS

In this section we show how you can test Genie and the “Hello Netflix OSS” sample application in a Docker environment. This sample application is based on Karyon and interacts with Eureka and Zuul.

As we mentioned before, these images can be run in different environments. In our case, we will test them in Azure.

If you would like to know how to set up an Azure VM with Docker, please take a look at these posts:

Running Genie on Docker

This section describes the steps to set up Genie 2.2.1 using the Docker image. If you’re looking for a different version, please see the list of releases here. The steps described in this document are based on the instructions provided here.

Please keep in mind that the Docker image we will use is not considered production-ready.

Configuration

This section describes how to set up and configure the containers required to run the example.

Setup MySQL

The first step is to set up MySQL. In order to start a new container running the MySQL image, we need to run the following command:

docker run --name mysql-genie -e MYSQL_ROOT_PASSWORD=genie -e MYSQL_DATABASE=genie -d mysql:5.6.21

 

If you don’t have the MySQL image in your host, it will be downloaded. Otherwise, the container will start using the existing image:

clip_image001

The previous command will start a container named “mysql-genie”. We’ll use that name later to reference this container from Genie in order to establish a connection.

To verify if MySQL is running properly, we can do 2 things:

  • Run the “docker ps” command to check that the container is running

    clip_image002

  • Access the MySQL container
    • Run the following command to access the MySQL container.

      docker exec -it mysql-genie mysql -pgenie

      clip_image003

    • Additionally, we can execute the “show Databases” command to make sure the “genie” database was created:

      show Databases;

      clip_image004

      We can see that there is a database called “genie”. If we check the tables of that database, we’ll see that it is empty.

    • Run the “use genie” command to use the genie database.

      use genie;

      clip_image005

    • Execute the “show Tables;” command. No information will be displayed.
    • Finally, exit the MySQL container by running “exit”.

Set up Hadoop to Run Example

We just need to run the “sequenceiq/hadoop-docker” image. In this case, we will run the command in interactive mode to be able to configure our Hadoop container and verify that everything is working as expected.

docker run --name hadoop-genie -it -p 10020:10020 -p 19888:19888 -p 211:8088 sequenceiq/hadoop-docker:2.6.0 /etc/bootstrap.sh -bash

clip_image006

Since we already have an endpoint configured for port 211 in our Azure VM, we included the port mapping “211:8088” to be able to access the Hadoop Resource manager.

Once we have Hadoop running, we will modify the /etc/hosts file. We will add “hadoop-genie” (the container name) after the container id on the first line (space separated). This will allow the daemons to resolve each other by container name when a job is later submitted from the Genie node.

So, we will run the following command to start editing the hosts file.

vi /etc/hosts

 

After editing the file, it should look as follows:

172.17.0.7 83893db7d234 hadoop-genie
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

 

Finally, we will start the Job History Server:

/usr/local/hadoop/sbin/mr-jobhistory-daemon.sh start historyserver

clip_image007

Running the “jps” command, we should see something like this:

bash-4.1# jps
356 SecondaryNameNode
1001 Jps
190 DataNode
514 ResourceManager
112 NameNode
598 NodeManager
932 JobHistoryServer

 

We will leave Hadoop running in the current SSH client and open a new one to start working with Genie.

Run the Genie Container

Once we have opened a new SSH client, accessed our VM and configured Docker properly, we can run the Genie container.

In our case, we have an endpoint in our Azure VM configured for port 210, so we will run the Genie container mapping public port 210 to container port 8080, which is the default Genie port.

docker run -p 210:8080 --name genie --link mysql-genie:mysql-genie --link hadoop-genie:hadoop-genie -d netflixoss/genie:2.2.1

clip_image008

Once the Genie container is running, we can verify if everything is working properly.

First, we will check the connection with the MySQL container from Genie:

  • Access the MySQL container by running:

    docker exec -it mysql-genie mysql -pgenie

  • Run the “use genie” command

    use genie;


  • Finally, check the genie database tables by running:

    show tables;


To exit the MySQL container, we need to run “exit”.

Once we have verified the connection with the database, we can access the Genie UI from a browser by navigating to our VM URL on port 210. In our case, the URL is:

http://vm-docker-demo.cloudapp.net:210


As you can see, there are no clusters or jobs.
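
We can also confirm this from the command line through Genie’s REST API; the “/genie/v2/config/clusters” path below is based on the Genie 2.x API layout, so treat it as an assumption if you are running a different version:

curl http://vm-docker-demo.cloudapp.net:210/genie/v2/config/clusters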

In order to finish our verification, we will check that the connection with Hadoop is working. To do this, we will access the Genie container by running:

docker exec -it genie /bin/bash

 

Then, we will ping the Hadoop container. We should see information about packets being sent and received successfully.

ping hadoop-genie

 

To stop the ping command, we will press “Ctrl + C”.


Run the example

We have everything in place, and we are ready to run the example. The example configures Genie with the Hadoop configuration information for the Hadoop container mentioned earlier, as well as two commands (Hadoop and Pig). Then, it launches the MapReduce job that ships with Hadoop as an example: "hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar grep input output 'dfs[a-z.]+'".

We are already in the Genie container, so we can start to run the example.

First, we’ll execute the setup script to register the Hadoop cluster and Hadoop / Pig commands.

/apps/genie/example/setup.py


In order to verify that everything was registered as expected, we can go to the Genie UI and check the home page, the Clusters view, and the Commands view.

Additionally, we can verify that “excite.log.bz2” is in HDFS by using:

hadoop fs -ls


Finally, if everything is OK, we can run the example job.

/apps/genie/example/run_hadoop_job.py


Once the job is started, we will see it in the Genie UI.


If we access the Jobs section, we should see it there too.


We can also see the output of the job by accessing:

http://{azure-vm-dns-name}:210/genie-jobs/{job-id}


Here, we can check the different logs.
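
These logs can also be fetched from the command line. For example, assuming the job directory exposes the usual stdout.log file (the exact file names depend on the job), something like the following should download it:

curl http://{azure-vm-dns-name}:210/genie-jobs/{job-id}/stdout.log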

When the job finishes executing, the final status will be reflected in the console output.


That’s it! We ran Genie and submitted a job to a Hadoop cluster. Additionally, we were able to check the different logs corresponding to the job.

Running the “Hello Netflix OSS” sample on Docker

This section describes how to run the “Hello Netflix OSS” sample image in combination with Eureka and Zuul, running the corresponding Docker images in an Azure VM.

Please take into account that the Docker images we will use are not considered production-ready.

Run Eureka

The first thing we will do is start a container running the Eureka image. Since we have an endpoint for port 212 in our Azure VM, we will map it to container port 8080, which is the port Eureka listens on.

docker run -d -p 212:8080 --name eureka netflixoss/eureka:1.1.147


After running the image, we should be able to access the Eureka page through the following URL:

http://{azure-vm-dns-name}:212/eureka

In our case, the URL is:

http://vm-docker-demo.cloudapp.net:212/eureka/
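
You can also query the Eureka REST interface from the command line to list the registered applications. The “/eureka/v2/apps” path below assumes the standard layout of the standalone Eureka war used by this image:

curl http://vm-docker-demo.cloudapp.net:212/eureka/v2/apps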


Hello Netflix OSS sample application

Once Eureka is running, we can start the sample image:

docker run -d -p 213:8080 -p 214:8077 --name hello-netflix-oss --link eureka:eureka netflixoss/hello-netflix-oss:1.0.27


In this case, we configured the ports to be able to access the application through port 213 and the embedded Karyon admin services console through port 214.

Additionally, we defined a link with the Eureka container to allow Eureka to communicate with the sample application container.
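
As a quick sanity check, both ports can be probed with curl once the container is up (assuming the same VM DNS name used throughout this post); a 200 status code indicates that the corresponding service is responding:

curl -s -o /dev/null -w "%{http_code}\n" http://vm-docker-demo.cloudapp.net:213/
curl -s -o /dev/null -w "%{http_code}\n" http://vm-docker-demo.cloudapp.net:214/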

Zuul

Although the application is already accessible directly, we will start a container running Zuul and use it as a proxy in front of the sample application.

So, we need to start a container running the Zuul image and link it to Eureka.

docker run -e "origin.zuul.client.DeploymentContextBasedVipAddresses=HELLO-NETFLIX-OSS" -p 210:8080 -d --name zuul --link eureka:eureka netflixoss/zuul:1.0.28


Here, we are making Zuul accessible through port 210 of our Azure VM. If we access the Zuul port, we will see that the sample application is displayed.
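
The same check can be done from a terminal; a request to the Zuul port should return the sample application’s page (again assuming the DNS name used in this post):

curl -s http://vm-docker-demo.cloudapp.net:210/ | head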


At this point, if we check the Eureka application, it will show both applications: Zuul and the sample application.


In this scenario we were able to run an application based on Karyon, establish communication with Eureka and access the application through Zuul.


Netflix OSS – Security Tools

Netflix has released different security tools and solutions to the open source community. The security-related open source efforts generally fall into one of two categories:

  • Operational tools and systems to make security teams more efficient and effective when securing large and dynamic environments
  • Security infrastructure components that provide critical security services for modern distributed systems

Below you can find further information about some of the security tools released by Netflix.

Security Monkey

Security Monkey monitors policy changes and alerts on insecure configurations in an AWS account. While Security Monkey’s main purpose is security, it also proves to be a useful tool for tracking down potential problems, since it is essentially a change-tracking system.

A Docker image is available, but the functionality itself is tied to AWS.

Scumblr

Scumblr is a web application that allows you to perform periodic searches and store or take action on the identified results. Scumblr uses the Workflowable gem to set up flexible workflows for different types of results.

Workflowable is a gem that allows easy creation of workflows in Ruby on Rails applications. Workflows can contain any number of stages and transitions, and can trigger customizable automated actions as state transitions occur.

Scumblr searches utilize plugins called Search Providers. Each Search Provider knows how to perform a search via a certain site or API (Google, Bing, eBay, Pastebin, Twitter, etc.). Searches can be configured from within Scumblr based on the options made available by the Search Provider. Examples of things you might want to look for are:

  • Compromised credentials
  • Vulnerability / hacking discussion
  • Attack discussion
  • Security relevant social media discussion

Message Security Layer

Message Security Layer (MSL) is an extensible and flexible secure messaging framework that can be used to transport data between two or more communicating entities. Data may also be associated with specific users, and treated as confidential or non-replayable if so desired.

MSL does not attempt to solve any specific use case or communication scenario. Rather, it is capable of supporting a wide variety of applications and leveraging external cryptographic resources. There is no one-size-fits-all implementation or configuration; proper use of MSL requires the application designer to understand their specific security requirements.