
Using Facebook’s QuickReplies with PromptDialogs

In my last post I showed how easy it is to send Facebook's quick replies using Microsoft Bot Framework. Using what we've learned, I'm now going to show you how to use quick replies with a PromptDialog.

 

Some of the PromptDialogs that come out of the box with the BotBuilder library (like PromptChoice or PromptConfirm) expect a set of options (the choices to pick from) to be presented to the user. By default, these dialogs use a PromptStyler that, based on the PromptStyle selected when creating the dialog, renders the options in different ways: as cards with buttons, with all the options inline, or with one option per line, to name a few.

The good news is that you can create your own PromptStyler and change the way the options are rendered.

 

At this point, you might already have spotted where this post is going. The whole idea is to create a custom PromptStyler that takes the provided options and renders them as quick replies.

 

The code
The full sample is on GitHub. To run the sample, publish your bot (for example, to Azure) or use ngrok to expose your local bot to the channel.

The code is extremely simple and takes advantage of the models I put together in the previous post.

[Serializable]
public class FacebookQuickRepliesPromptStyler : PromptStyler
{
    public override void Apply<T>(ref IMessageActivity message, string prompt, IList<T> options)
    {
        if (message.ChannelId.Equals("facebook", StringComparison.InvariantCultureIgnoreCase) && this.PromptStyle == PromptStyle.Auto && options != null && options.Any())
        {
            var channelData = new FacebookChannelData();

            var quickReplies = new List<FacebookQuickReply>();

            foreach (var option in options)
            {
                var quickReply = option as FacebookQuickReply;

                if (quickReply == null)
                {
                    quickReply = new FacebookTextQuickReply(option.ToString(), option.ToString());
                }

                quickReplies.Add(quickReply);
            }

            channelData.QuickReplies = quickReplies.ToArray();

            message.Text = prompt;
            message.ChannelData = channelData;
        }
        else
        {
            base.Apply<T>(ref message, prompt, options);
        }
    }
}

 

Using it in your dialogs is a no-brainer. Just provide an instance of the new PromptStyler when defining the prompt options and that’s it.

var promptOptions = new PromptOptions<string>(
    "Please select your age range:",
    options: new[] { "20-35", "36-46", "47-57", "58-65", "65+" },
    promptStyler: new FacebookQuickRepliesPromptStyler());

PromptDialog.Choice(context, this.ResumeAfterSelection, promptOptions);
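
For completeness, here is a minimal sketch of what the ResumeAfterSelection handler referenced above could look like. It is not part of the original sample; it just echoes the selection back and waits for the next message:

private async Task ResumeAfterSelection(IDialogContext context, IAwaitable<string> result)
{
    // The selected option ("20-35", "36-46", etc.) comes back as a plain string.
    var ageRange = await result;

    await context.PostAsync($"Thanks, you selected: {ageRange}");

    context.Wait(this.MessageReceivedAsync);
}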

 

One caveat that I'm not addressing in this post is how to access the payload of the quick replies from the PromptDialog. In order to access the payload, you will likely have to extend the PromptDialog (PromptConfirm is sealed, but you can inherit from PromptChoice), override MessageReceivedAsync and/or TryParse, and include the logic for extracting the payload. Please refer to my previous post for the details on how to extract the payload.
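
To give an idea of the shape this could take, below is a rough, untested sketch of a PromptChoice subclass that prefers the quick reply payload when it is present. Treat it as an assumption: the exact base class (PromptDialog.PromptChoice<T>), constructor and TryParse signature depend on the BotBuilder version you are using, and incoming ChannelData is assumed to be a JObject.

[Serializable]
public class FacebookQuickReplyPromptChoice : PromptDialog.PromptChoice<string>
{
    public FacebookQuickReplyPromptChoice(PromptOptions<string> promptOptions)
        : base(promptOptions)
    {
    }

    protected override bool TryParse(IMessageActivity message, out string result)
    {
        // If the user tapped a quick reply, surface its developer-defined payload.
        dynamic channelData = message.ChannelData;

        if (channelData != null && channelData.message != null && channelData.message.quick_reply != null)
        {
            result = (string)channelData.message.quick_reply.payload;
            return true;
        }

        // Otherwise fall back to the regular option matching.
        return base.TryParse(message, out result);
    }
}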

 

The outcome

[Screenshots: the PromptDialog options rendered as Facebook quick replies in Messenger]

Enjoy!

Sending Facebook’s Quick replies using Microsoft Bot Framework

Facebook’s quick replies are a great way to present buttons to the users. Per Facebook’s Quick Replies documentation:

Quick Replies provide a new way to present buttons to the user. Quick Replies appear prominently above the composer, with the keyboard less prominent. When a quick reply is tapped, the message is sent in the conversation with developer-defined metadata in the callback. Also, the buttons are dismissed preventing the issue where users could tap on buttons attached to old messages in a conversation.

Taking advantage of this feature with Microsoft Bot Framework is really simple, even though this functionality is very specific to Facebook.

If you want to use special features or concepts for a channel, the Bot Framework provides a way to send native metadata to that channel giving you much deeper control over how your bot interacts on a channel. The way Bot Framework enables this is through the ChannelData property in the C# SDK and the sourceEvent property in Node.js.

Not every capability provided by a channel must go through the ChannelData property; that will basically depend on whether the functionality can be consistent across channels. When that’s the case, it’s very likely the functionality will be addressed in the core API, like with Rich card attachments.

 

The code

The full sample is on GitHub. To run the sample, publish your bot (for example, to Azure) or use ngrok to expose your local bot to the channel.

The quick replies schema is pretty straightforward. Below is the portion of the code that creates some text only quick replies and assigns them to the activity’s ChannelData.

if (reply.ChannelId.Equals("facebook", StringComparison.InvariantCultureIgnoreCase))
{
    var channelData = JObject.FromObject(new
    {
        quick_replies = new dynamic[]
        {
            new
            {
                content_type = "text",
                title = "Blue",
                payload = "DEFINED_PAYLOAD_FOR_PICKING_BLUE",
                image_url = "https://cdn3.iconfinder.com/data/icons/developperss/PNG/Blue%20Ball.png"
            },
            new
            {
                content_type = "text",
                title = "Green",
                payload = "DEFINED_PAYLOAD_FOR_PICKING_GREEN",
                image_url = "https://cdn3.iconfinder.com/data/icons/developperss/PNG/Green%20Ball.png"
            },
            new
            {
                content_type = "text",
                title = "Red",
                payload = "DEFINED_PAYLOAD_FOR_PICKING_RED"
            }
        }
    });

    reply.ChannelData = channelData;
}

Note that in the code I’m checking for the channel so quick replies are only sent in outgoing messages to a Facebook channel.


ProTip: If you are planning to use quick replies in many places or in many bots and you don't want to remember the required schema, I recommend creating some helper classes to ease the work. I included those as part of the sample, and now the code looks like this:

var channelData = new FacebookChannelData
{
    QuickReplies = new[]
    {
        new FacebookTextQuickReply("Blue", "DEFINED_PAYLOAD_FOR_PICKING_BLUE", "https://cdn3.iconfinder.com/data/icons/developperss/PNG/Blue%20Ball.png"),
        new FacebookTextQuickReply("Green", "DEFINED_PAYLOAD_FOR_PICKING_GREEN", "https://cdn3.iconfinder.com/data/icons/developperss/PNG/Green%20Ball.png"),
        new FacebookTextQuickReply("Red", "DEFINED_PAYLOAD_FOR_PICKING_RED")
    }
};
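
In case you are curious, here is a minimal sketch of what those helpers might look like. The real definitions live in the sample on GitHub; the property names and Json.NET attributes below are assumptions based on the Facebook schema shown earlier.

public class FacebookChannelData
{
    [JsonProperty("quick_replies")]
    public FacebookQuickReply[] QuickReplies { get; set; }
}

public class FacebookQuickReply
{
    [JsonProperty("content_type")]
    public string ContentType { get; set; }

    [JsonProperty("payload")]
    public string Payload { get; set; }
}

public class FacebookTextQuickReply : FacebookQuickReply
{
    public FacebookTextQuickReply(string title, string payload, string imageUrl = null)
    {
        this.ContentType = "text";
        this.Title = title;
        this.Payload = payload;
        this.ImageUrl = imageUrl;
    }

    [JsonProperty("title")]
    public string Title { get; set; }

    [JsonProperty("image_url", NullValueHandling = NullValueHandling.Ignore)]
    public string ImageUrl { get; set; }
}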

When a Quick Reply is tapped, a text message will be sent to your webhook Message Received Callback. The text of the message will correspond to the title of the Quick Reply. The message object will also contain the payload custom data defined on the Quick Reply.

Accessing the payload information only requires using the ChannelData of the message received after the user tapped the Quick Reply button. Optionally, you can create a typed model of the response and deserialize the channel data.

private async Task OnColorPicked(IDialogContext context, IAwaitable<IMessageActivity> result)
{
    var colorMessage = await result;

    var message = $"Color picked: {colorMessage.Text}.";

    if (colorMessage.ChannelId.Equals("facebook", StringComparison.InvariantCultureIgnoreCase))
    {
        // ChannelData is typed as object, so treat it as dynamic to walk the Facebook payload.
        dynamic channelData = colorMessage.ChannelData;
        var quickReplyResponse = channelData.message.quick_reply;

        if (quickReplyResponse != null)
        {
            message += $" The payload for the quick reply clicked is: {quickReplyResponse.payload}";
        }
        else
        {
            message += " It seems you didn't click on a quick reply and you just typed the color.";
        }
    }

    await context.PostAsync(message);

    context.Wait(this.MessageReceivedAsync);
}
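
If you prefer the typed approach mentioned above, a possible model and deserialization could look like the sketch below. The class names are mine (not part of the sample), and the snippet assumes ChannelData arrives as a Json.NET JObject.

public class FacebookQuickReplyResponse
{
    [JsonProperty("message")]
    public FacebookIncomingMessage Message { get; set; }
}

public class FacebookIncomingMessage
{
    [JsonProperty("quick_reply")]
    public FacebookQuickReplyPayload QuickReply { get; set; }
}

public class FacebookQuickReplyPayload
{
    [JsonProperty("payload")]
    public string Payload { get; set; }
}

// Usage:
var typed = ((JObject)colorMessage.ChannelData).ToObject<FacebookQuickReplyResponse>();
var payload = typed.Message != null && typed.Message.QuickReply != null
    ? typed.Message.QuickReply.Payload
    : null;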

 

The outcome

As soon as you send a message to the Bot, it will respond with a question and the three quick replies. Intentionally, I’m showing quick replies with and without images.


Once you tap a quick reply button, the Bot will respond with the selected color and will also show the payload defined for that quick reply button.


Quick replies are just hints; you can still type something different, in which case the sample will detect that you didn't tap a quick reply button and won't look for the defined payload.


 

As you can see, quick replies are powerful and really easy to use right from your Bot Framework based bot. If one of the channels you are supporting within your bot is Facebook, I would strongly recommend using them. To learn more about Facebook features with Bot Framework, please read here.

Introduction to Azure Machine Learning

Note: If you are already familiar with machine learning you can skip this post and jump directly to the Creating a Machine Learning Web Service post by Diego Poza, which explains how you can use Azure Machine Learning with a specific example.

Machine learning is a science that allows computer systems to independently learn and improve based on past experiences or human input. It might sound like a new technique, but the reality is that some of our most common interactions with our apps and the Internet are driven by automatic suggestions or recommendations, and some companies even make decisions using predictions based on past data and machine learning algorithms.

This technology comes in handy especially when handling Big Data. Today, companies collect and accumulate data at massive, unmanageable rates (website clicks, credit card transactions, GPS trails, social media interactions, etc.), and it's becoming a challenge to process all that valuable information and use it in a meaningful way. This is where rule-based algorithms fall short: machine learning algorithms use all the collected, "past" data to learn patterns and predict results (insights) that help make better business decisions.

Let’s take a look at these examples of machine learning. You may be familiar with some of them:

  • Online movie recommendation on Netflix, based on several indicators like recently watched, ratings, search results, movies similarities, etc. (see here)
  • Spam filtering, which uses text classification techniques to move potentially harmful emails to your Junk folder.
  • Credit scoring, which helps banks decide whether or not to grant loans to customers based on credit history, historical loan applications, customers’ data, etc.
  • Google’s self-driving cars, which use Computer vision, image processing and machine learning algorithms to learn from actual drivers’ behavior.

As seen in the examples above, machine learning is a useful technique to build models from historical (or current) data, in order to forecast future events with an acceptable level of reliability. This general concept is known as Predictive analytics, and to get more accuracy in the analysis you can also combine machine learning with other techniques such as data mining or statistical modeling.

In the next section, we will see how we can use machine learning in the real world, without the need to build a large infrastructure and to avoid reinventing the wheel.

What is Azure Machine Learning?

Azure Machine Learning is a cloud-based predictive analytics service for solving machine learning problems. It provides visual and collaborative tools to create predictive models that can be published as ready-to-consume web services, without worrying about the hardware or the VMs that perform the calculations.


Azure Machine Learning Studio

You can create predictive analytics models in Azure ML Studio, a collaborative, drag-and-drop tool for managing Experiments, which basically consist of datasets and the algorithms used to analyze the data, "train" the model, and evaluate how well the model predicts values. All of this can be done with no programming, because the Studio provides a large library of state-of-the-art machine learning algorithms and modules, as well as a gallery of experiments authored by the community and ready-to-consume web services that can be purchased from the Microsoft Azure Marketplace.


Next steps

  • What is Azure Machine Learning Studio?
    Understand more about the Azure Machine Learning Studio workspace and what you can do with it.
  • Machine learning algorithm cheat sheet
    Investigate some of the state-of-the-art machine learning algorithms to help you choose the right one for your predictive analytics solution. There are three main categories of machine learning algorithm: supervised learning, unsupervised learning, and reinforcement learning. The Azure Machine Learning library contains algorithms from the first two, so it might be worth a look.
  • Azure Machine Learning Studio site
    Get started, read additional documentation and watch webinars about how to create your first experiment in the Azure Machine Learning Studio tool.

fakewin8: Configuring valid parameters for fake methods

Yesterday I blogged about fakewin8, a set of components that leverage code generation to create fake classes, which can be used to simplify unit testing in environments where dynamic proxy generation is not a viable option. If you are developing Windows Store or Windows Phone apps, you should take it for a spin to see how it feels.

Today’s blog post explains how fakewin8 allows you to define valid parameters for fake methods.

A bit of context

Commonly, when creating unit tests you need to set up constraints for mock method invocations. This is usually done by providing predicates or specific values for the parameters with which a method must be invoked. If these are not met, your test should fail.
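
For contrast, this is what that typically looks like with a proxy-based framework such as Moq (which is precisely what you cannot use in Windows Store projects), using the INavigationService interface from the previous post:

// Only Navigate calls whose view name starts with "Main" are expected.
var navigationService = new Mock<INavigationService>();
navigationService.Setup(n => n.Navigate(It.Is<string>(view => view.StartsWith("Main"))));

// ... exercise the code under test ...

navigationService.Verify(n => n.Navigate(It.Is<string>(view => view.StartsWith("Main"))), Times.Once());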

Implementation

In fakewin8 you can configure fake methods to only accept invocations that match a certain set of predicates based on their parameters. If an invocation does not match a specified predicate, an InvalidInvocationException is thrown. To specify constraints for a parameter of type T, a predicate of type Func<T, bool> must be used. For example:

[gist id=04e749d3d85e0f6b2d71 file=Accept.cs]

For the common scenario where any possible value is acceptable for a particular parameter, you can use Any<T>.IsOK() which creates a Func<T, bool> that always returns true:

[gist id=04e749d3d85e0f6b2d71 file=Any.cs bump=1]
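
Conceptually, based on the description above, Any<T>.IsOK() boils down to something like this (a sketch; the actual implementation lives in the repository):

public static class Any<T>
{
    public static Func<T, bool> IsOK()
    {
        // Any value is acceptable for this parameter.
        return value => true;
    }
}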

fakewin8: Easy fakes for Windows Store apps unit tests

In case you are short on time, here's the GitHub link: https://github.com/dschenkelman/fakewin8. Otherwise, read on.

If you are reading this, you probably know that due to the changes in the reflection API, all unit testing libraries that depend on the creation of dynamic proxies do not work in Windows RT (for example Moq). As someone who does a lot of Windows 8 development and makes heavy use of unit tests, this was something that really changed the way I approach unit testing.

I've seen many different proposals to work around this issue, such as using linked files, putting the components under test in a portable class library (and then using any mocking framework), or using public properties that expose setters for the methods to fake. While those approaches do work fine (in fact, I've tried all of them), none of them fully suits my needs; I just did not feel completely comfortable with any of them. That's why I came up with the following.

Premises

  1. For each interface or base class, I want to create only one class that can be used as a stub/mock in any test method. This means that all methods must be easy to setup with different logic for different unit tests.
  2. No additional components should be required (i.e.: no portable class libraries, no linked files).
  3. Method invocations should be automatically tracked so assertions can be performed based on them.
  4. Fake class generation should be automatic. We want to focus on the tests development, not the fakes development.

Proposal

fakewin8 proposes the usage of classes like FakeAction and FakeFunc, which act as normal Action and Func delegates but keep track of the number of invocations and the parameters passed in each of them.

For example, for this interface:

public interface INavigationService
{
    void Navigate(string view);

    void GoBack();
}

The following fake class should be created (and only this class should be required):

public class FakeNavigationService : INavigationService
{
    public FakeAction<string> NavigateAction { get; set; }

    public FakeAction GoBackAction { get; set; }

    public void Navigate(string viewName)
    {
        this.NavigateAction.Invoke(viewName);
    }

    public void GoBack()
    {
        this.GoBackAction.Invoke();
    }
}

The FakeAction and FakeFunc classes (at the moment they support up to 3 parameters) can be leveraged like this in your unit tests:

// arrange
this.fakeNavigationService.NavigateAction = FakeMethod.CreateFor<string>(view => { });

// act

// assert
Assert.AreEqual(1, this.fakeNavigationService.NavigateAction.NumberOfInvocations);
Assert.AreEqual("ViewName", this.fakeNavigationService.NavigateAction.Invocations.ElementAt(0).FirstParameter);
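
To make those assertions concrete, here is a rough sketch of how a FakeAction<T> could be shaped internally. This is my approximation, not the actual fakewin8 source (the real classes are in the repository), and it assumes the usual System, System.Collections.Generic and System.Linq usings.

public class FakeAction<T>
{
    private readonly Action<T> action;
    private readonly List<Invocation<T>> invocations = new List<Invocation<T>>();

    public FakeAction(Action<T> action)
    {
        this.action = action;
    }

    public int NumberOfInvocations
    {
        get { return this.invocations.Count; }
    }

    public IEnumerable<Invocation<T>> Invocations
    {
        get { return this.invocations; }
    }

    public void Invoke(T parameter)
    {
        // Record the invocation and its parameter before running the configured logic.
        this.invocations.Add(new Invocation<T>(parameter));
        this.action(parameter);
    }
}

public class Invocation<T>
{
    public Invocation(T firstParameter)
    {
        this.FirstParameter = firstParameter;
    }

    public T FirstParameter { get; private set; }
}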

Additionally, given the path to an assembly and an output directory, you can automatically generate the fake classes.

FakeWin8.Generator.Console.exe <dllPath> <outputDir>

The generated code for the sample interface is the following one (no indentation yet):

public class FakeNavigationService : INavigationService
{
public FakeAction<string> NavigateAction { get; set; }

public FakeAction GoBackAction { get; set; }

public void Navigate(string view)
{
this.NavigateAction.Invoke(view);
}

public void GoBack()
{
this.GoBackAction.Invoke();
}
}

Intro to OWIN talk and a simple IP filtering middleware sample

Last week I introduced OWIN during a brown bag talk at Southworks. I’m by no means an expert on the subject, but the reason I decided to do it was that I felt it was very hard for me to find a single blog post/video that provided the information required to understand what OWIN was, why it exists and how it can be used.

That’s why while preparing the talk I decided to create a Github repository to store the source, slides, demo script and some links in a single, centralized place. Please, head over to the site if you would like to check out the slides or get more context on OWIN, as I won’t be including everything in this post.


I also realized that this provides an easy way to share the assets once the talk is over, so I'll probably continue doing this in the future.

IP Filtering Middleware

As you learned from checking out the GitHub project, OWIN allows you to set up a pipeline of middleware components that can inspect, route, or modify request and/or response messages. With the goal of providing a simple sample, I used Katana (Microsoft's implementation of OWIN) to create an IP filtering middleware component. It allows you to provide a simple Func to determine whether a request should be rejected based on the client's IP address. Of course, this is not something that you would use in production, but it gives an idea of the kind of things that can be achieved with OWIN and Katana.

namespace Owin.IpFilter
{
    using System;
    using System.Collections.Generic;
    using System.Globalization;
    using System.IO;
    using System.Net;
    using System.Text;
    using System.Threading.Tasks;

    using AppFunc = System.Func<System.Collections.Generic.IDictionary<string, object>, System.Threading.Tasks.Task>;

    public class IpFilterMiddleware
    {
        private const string Html404 = "<!doctype html><html><head><meta charset=\"utf-8\"><title>404 Not Found</title></head><body>The resource cannot be found.</body>" + "</html>";

        private readonly AppFunc nextMiddleware;

        private readonly Func<IPAddress, bool> rejectRequest;

        public IpFilterMiddleware(AppFunc nextMiddleware, Func<IPAddress, bool> rejectRequest)
        {
            if (nextMiddleware == null)
            {
                throw new ArgumentNullException("nextMiddleware");
            }

            this.nextMiddleware = nextMiddleware;
            this.rejectRequest = rejectRequest;
        }

        public Task Invoke(IDictionary<string, object> environment)
        {
            var remoteAddress = IPAddress.Parse((string)environment["server.RemoteIpAddress"]).MapToIPv4();

            if (this.rejectRequest(remoteAddress))
            {
                var responseStream = (Stream)environment["owin.ResponseBody"];
                var responseHeaders =
                    (IDictionary<string, string[]>)environment["owin.ResponseHeaders"];

                var responseBytes = Encoding.UTF8.GetBytes(Html404);

                responseHeaders["Content-Type"] = new[] { "text/html" };
                responseHeaders["Content-Length"] = new[] { responseBytes.Length.ToString(CultureInfo.InvariantCulture) };

                environment["owin.ResponseStatusCode"] = 404;

                return responseStream.WriteAsync(responseBytes, 0, responseBytes.Length);
            }

            return this.nextMiddleware(environment);
        }
    }
}

I also created an extension method that allows you to easily integrate the middleware component into the pipeline:

namespace Owin.IpFilter
{
    using System;
    using System.Net;

    public static class Extensions
    {
        public static IAppBuilder UseIpFiltering(this IAppBuilder appBuilder, Func<IPAddress, bool> rejectRequest)
        {
            appBuilder.Use(typeof(IpFilterMiddleware), rejectRequest);
            return appBuilder;
        }
    }
}

Integrating with the pipeline

For example, if you wanted to reject requests that are not from a private network nor the local machine you could set things up like this:

public class Startup
{
    public void Configuration(IAppBuilder appBuilder)
    {
        appBuilder.UseIpFiltering(
            remoteAddress =>
            {
                var bytes = remoteAddress.GetAddressBytes();
                return bytes[0] != 192 && bytes[0] != 172 && bytes[0] != 10 && bytes[0] != 127 && bytes[0] != 0;
            });
    }
}
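
If you want to try this end to end without deploying anywhere, you can self-host the pipeline with Katana's Microsoft.Owin.Hosting package. A minimal sketch (the port and the absence of any further middleware are just for illustration):

using System;
using Microsoft.Owin.Hosting;

public class Program
{
    public static void Main(string[] args)
    {
        // Startup.Configuration is discovered by convention and wires up the IP filter.
        using (WebApp.Start<Startup>("http://localhost:9000/"))
        {
            Console.WriteLine("Listening on http://localhost:9000/ - press Enter to quit.");
            Console.ReadLine();
        }
    }
}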

Wrapping up

This is by no means a full intro to OWIN (for that refer to the resources provided in Github), but I hope it provides an initial idea of the kind of things that you can do with OWIN and Katana.

If you have feedback please let me know.

Enjoy Windows Azure Media Services in the Comfort of your own Console

A few days ago Damian Schenkelman did a brown bag about scriptcs. Right after he explained Script Packs I asked if there was something already in place for Azure Media Services.

“Not yet”, he said. Thus I decided to start authoring one given two main motivations:

    • To learn more about scriptcs.
    • My past experiences in high profile events

There isn’t much to say regarding the first bullet (learning new stuff is part of my DNA); however I will expand on the second one.

During the last few years I have had the opportunity to be part of the streaming operations war room in high profile events (such as the London 2012 Olympics) where things happen really fast and troubleshooting can be stressful. Sometimes you need to quickly check the state of an asset or a job status and there is certainly no time to launch Visual Studio, start building a console application, etc. to look over everything.

At that time, we had a lot of tooling helping on a daily basis and a few handy tools to check things over. However…wouldn’t it have been great if you could have just started your console and started coding away to interact with Windows Azure Media Services?

Now it's possible.

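To give a flavor of what that looks like, this is roughly the kind of thing you can type straight into the scriptcs REPL. The snippet below uses the plain Media Services SDK types (CloudMediaContext) to keep it self-explanatory; the script pack takes care of referencing and bootstrapping this for you, so check its docs for the exact entry point.

// Assumes the Media Services SDK assembly is referenced and
// Microsoft.WindowsAzure.MediaServices.Client is in scope.
var context = new CloudMediaContext("yourAccountName", "yourAccountKey");

// List every job and its current state - handy in a war room situation.
foreach (var job in context.Jobs)
{
    Console.WriteLine("{0} - {1}", job.Name, job.State);
}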

Go ahead and take a look at the docs of the script pack to get started. I would like to thank Damian Schenkelman for his contributions to the script pack.

If you want to learn more about scriptcs, be sure to check out the project wiki and read the blog posts created by the community.

 

Happy scripting.

Ez.

Introduction to HCatalog, Pig scripts and heavy burdens

As an eager developer craving Big Data knowledge, you've probably come across many tutorials that explain how to create a simple Hive query or Pig script; I have. Sounds simple enough, right? Now imagine your solution has evolved from one ad-hoc query to a full library of Hive, Pig and MapReduce scripts, scheduled jobs, etc. Cool! The team has been growing too, so now there's an admin in charge of organizing and maintaining the cluster and the stored data.

The problem

One fine day, you find none of your jobs are working anymore.

Here’s a simple Pig script.  It’s only a few lines long, but there’s something wrong in almost every line of this script, considering the scenario I just described:

raw = load '/mydata/sourcedata/foo.bar' using CustomLoader()
    as (userId:int, firstName:chararray, lastName:chararray, age:int);

filtered = filter raw by age > 21;

store filtered into '/mydata/output/results.foo';

Problem 1: you've hardcoded the location of your data. Why would you assume the data will stay in the same place? The admin found a better way of organizing data and restructured the whole file system; didn't you get the memo?

Problem 2: you've been blissfully unaware of the fact that, by using a custom loader, you've also given your script the responsibility of knowing the format of the raw data. What if you are not in control of the data you consume? What if the provider found an awesome new format and wants to share it with their consumers?

Problem 3: your script is also responsible for handling the schema of the data it consumes. Now the data provider has decided to include a new field and remove a few others, and you wish you'd never gotten into this whole Big Data thing.

Problem 4: last but not least, several other scripts use the same data, so why would you have metadata-related logic duplicated everywhere?

The solution

So, the solution is to go back, change all your scripts and make them work with the new location, new format and new schema and look for a new job.  Thanks for watching.

OR, you can use HCatalog.

HCatalog is a table and storage management layer for Hadoop. Before, each tool talked directly to the stored files, and you had to write custom loaders for every new format you used; with HCatalog sitting in between, Pig, Hive and MapReduce no longer need to know about data storage or format. They only care about querying, transforming, etc. If we used HCatalog, the script would look something like this:

raw = load 'rawdatatable' using HCatLoader();

filtered = filter raw by age > 21;

store filtered into 'outputtable' using HCatStorer();

1. Your script no longer knows (or cares) where the data is. It just knows it wants to get data from the rawdatatable table.

2. Your script no longer cares what format the raw data is in; that is handled by HCatalog, and we use the HCatLoader for that.

3. No more defining the schema for your raw data in your script. No more schema duplication.

4. Other scripts, for example Hive scripts, can now share one common structure for the same data; sharing is good.

So, when you set up your HDInsight cluster, remember HCatalog, and let your script do what it was meant to do: perform awesome transformations on your data, and nothing more.

For more information, you can visit HCatalog’s page.

There is also another cool post on the subject by Alan Gates.

Happy learning!

Developing Big Data Solutions on Windows Azure, the blind and the elephant

What is a Big Data solution to you?  Whatever you are thinking of, I cannot think of a better example than the story of the blind and the elephant.  “I’m a dev, It’s about writing some Map/Reduce awesomeness”, or “I’m a business analyst, I just want to query data in Excel!”,  “I’m a passer-by, but whatever this is, it’s yellow”… and so on.

I’m currently working with Microsoft’s Patterns and Practices team, developing a new guide in the Windows Azure Guidance Series called “Developing Big Data Solutions on Windows Azure”.

The guide will focus on discussing a wide range of scenarios, all of which have HDInsight on Windows Azure as a key player, together with related technologies from the Hadoop ecosystem and Microsoft's current BI solution stack.

We just made our first early preview drop on codeplex, so you can take a peek and see how the guide is shaping up so far.

So go get it, have a read and tell us what you think, we appreciate your feedback!

To Arrow Function or not to Arrow Function

If you have used TypeScript you are probably aware of the compiled outcome of an Arrow Function (section 4.9.2 of the TypeScript Language Specification). The idea behind using () => {…}; instead of function(){…}; is to have access to the object represented by this in the outer scope, which is what you would expect when using a C# lambda. Of course, this feature is particularly useful in JavaScript because this holds a reference to the object on which a function was applied or called, and more often than not the this inside the function's execution context ends up being different from the one in the outer execution context.

When using an Arrow Function, TypeScript automatically assigns this to a separate variable in the outer execution context (called _this), which can be accessed through its closure by the function that results from compiling the Arrow Function. As the previous sentence was probably not the simplest one to digest, take the following TypeScript code snippet as an example:

class MyClass {
    constructor() {
    }

    private add(a, b) {
        return a + b;
    }

    public partialAdd(a) {
        return (b) => {
            return this.add(a, b);
        }
    }

    public partialAdd2(a) {
        return function(b) {
            return this.add(a, b);
        }
    }
}

The resulting JavaScript is:

var MyClass = (function () {
    function MyClass() {
    }
    MyClass.prototype.add = function (a, b) {
        return a + b;
    };
    MyClass.prototype.partialAdd = function (a) {
        var _this = this;
        return function (b) {
            return _this.add(a, b);
        }
    };
    MyClass.prototype.partialAdd2 = function (a) {
        return function (b) {
            return this.add(a, b);
        }
    };
    return MyClass;
})();

As you can see, partialAdd stores this in the local _this variable which is accessible from the function that is returned through its closure. On the other hand, partialAdd2 uses this as the variable, but as I mentioned before this might not be “an instance of MyClass” but something completely different instead.

Complex Cases

You could (wrongly) deduce that every time you need access to the outer this inside the function you should use an Arrow Function, but what if you need to use both the outer this and the object on which the function was applied? That is a common situation in an event handler, which applies the callback functions to the target DOM elements when handling the event (for the sake of the example, assume there is no other way to access the DOM element than through this).

In these cases, it is important to understand what the compiler is doing underneath the hood. The two common approaches that would lead to errors are:

  1. Using an Arrow Function: this would cause TypeScript's this to shadow the compiled JavaScript's this, preventing you from accessing the DOM element that was clicked inside the click handler.
  2. Using a function literal: this would cause the outer this to not be accessible, which is unexpected for developers used to C#.

The Answer

One possible (and simple) solution to this is the usual workaround (that = this):

class MyClass {
    constructor() {
    }

    private add(a, b) {
        return a + b;
    }

    public partialAdd(a) {
        var that = this;
        return function(b) {
            // this is accessible inside the function
            // and points to the object to which the function was applied
            var _this = this;
            // that is accessible inside the function through the closure
            // and points to the object that had partialAdd called
            return that.add(a, b);
        }
    }
}

TypeScript's type inference confirms the comments in the previous code snippet: inside the returned function, this can be an instance of any type, since it depends on the object on which the function is called, while that is inferred to be an instance of MyClass.

Conclusion

This is not a complex issue, but is one that I’ve seen a couple of times while working with TypeScript, so hopefully if you ever run into it you will be able to solve it in no time.