Angular Components New in Version 1.5 - Part 1

Wednesday, February 24, 2016

If you have a large Angular 1.x application or you are starting a new Angular application but you are not quite ready to use Angular 2.0, there are things you can do that will make your current application easier to upgrade should you want to do that in the future.

One of those things you can do to ease your future upgrade is to start using the Angular "component" function helper, which is new in version 1.5, instead of using "directives". An Angular component is basically a directive that is simpler to write and corresponds better to many of the concepts that are used today in Angular 2.0.

Sample App

Let's look at an example of how I can refactor an Angular directive into a component. I have a directive that shows an image, and the user can rotate the image forwards or backwards based on the button they click.

This is really basic but here is what it looks like:

Simple Image Rotator Directive

To use this directive I place the element on the HTML page.

<student-image-component images="vm.images"></student-image-component>

Here is the complete code for the directive:
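A minimal sketch of what that directive might look like (the module name, template markup, and the url property on each image are assumptions; the controllerAs, bindToController, and link wiring follow the description below):

function studentImageComponent() {
    return {
        restrict: 'E',
        scope: {
            images: '='
        },
        controller: function () { },
        controllerAs: 'vm',
        bindToController: true,
        template:
            '<img ng-src="{{vm.images[0].url}}" />' +
            '<button class="prev">Previous</button>' +
            '<button class="next">Next</button>',
        link: function (scope, element) {
            var vm = scope.vm;
            var index = 0;
            var image = element.find('img');
            var buttons = element.find('button');

            // previous button
            angular.element(buttons[0]).on('click', function () {
                index = (index - 1 + vm.images.length) % vm.images.length;
                image.attr('src', vm.images[index].url);
            });

            // next button
            angular.element(buttons[1]).on('click', function () {
                index = (index + 1) % vm.images.length;
                image.attr('src', vm.images[index].url);
            });
        }
    };
}

angular.module('app').directive('studentImageComponent', studentImageComponent);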

In this particular example, I am using the controllerAs setting to alias a controller to the directive and the "bindToController" setting to bind the scope of the directive to the controller. Doing this creates a "vm" binding on the template.

In the link function, I am handling all of the event listener logic for the previous and next buttons.

Convert to Component:

Okay so let's take the above code and convert it into a component and discuss some of the differences.

The first thing to notice, at the bottom of the code sample, is that I am now using the "component()" function helper instead of "directive()". The signature for this function requires an object instead of a function that returns an object, so right off the bat this makes for much simpler code (especially if you are writing it in TypeScript).

Secondly, the component defaults the "restrict" setting to "E" (for element), the "controllerAs" setting to "$ctrl", and the "bindToController" setting to true, so you no longer need any of those settings, making the code even more concise. In the template, since $ctrl is now the default binding, I need to change "vm" to "$ctrl".

Also, where in directives you defined a scope, in components you define bindings, and by default the scope is isolated. So there is no longer the ability to use inherited scopes, which is a good thing. Notice the "<" indicator for the images binding? That specifies that the images property is a one-way binding, so if the images value changes outside the component it will flow into the component; however, images values changed inside the component will not propagate back outside the component. The benefit to this approach is that it will make your application easier to debug, as you don't have to worry about your component changing the outside model inadvertently. It may also be a performance improvement, although I haven't seen any evidence of that.
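Expressed in the component definition object, that one-way binding looks like this:

bindings: {
    images: '<'
}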

Now the question is, the component no longer has a link function, so how do I get access to the DOM? Depending on what you are doing, such as listening to events on a DOM element, using components might be a deal breaker, but if you are not doing that, you can inject $element into the controller and change the DOM via the $element. In this case, I am only listening to the "ng-click" for the previous and next buttons, so I just need to add those callback functions in the controller and make the necessary code changes to get that working. Let's look at those changes.

Refactoring the link code

First, I want to make sure my prevClicked and nextClicked functions are wired up in the view, so I need to update my template and prefix those with the "$ctrl" object.

I then update my controller with the ng-click events:

I now need to decide what to do with the element object that used to be passed in through the link function, since I was using it to change the URL on the src attribute. Actually, it turns out in this case I don't need the element, since I can create a new property on the controller that has the current URL and use the new life-cycle $onInit() event to update the image.

The $onInit() life-cycle event

Angular components have a new life-cycle event called $onInit. This event is similar to the "componentDidMount" event in React applications in that it gets fired as soon as the component is mounted onto the DOM, and it only gets fired once.

So in the above code I can create a new property called "currentImageUrl" and set the "ng-src" to that in the $onInit() event.

Now if I take my code for the ng-click handlers and place them in my controller, the only thing I have to do is update the currentImageUrl property instead of updating the DOM attribute.

Here is the code now for the component:
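A sketch of what that finished component might look like (the module name, template markup, and the url property on each image are assumptions; the component name, one-way binding, handlers, and $onInit usage follow the description above):

function StudentImageController() {
    var $ctrl = this;
    var index = 0;

    // runs once, after the bindings have been initialized
    $ctrl.$onInit = function () {
        $ctrl.currentImageUrl = $ctrl.images[index].url;
    };

    $ctrl.prevClicked = function () {
        index = (index - 1 + $ctrl.images.length) % $ctrl.images.length;
        $ctrl.currentImageUrl = $ctrl.images[index].url;
    };

    $ctrl.nextClicked = function () {
        index = (index + 1) % $ctrl.images.length;
        $ctrl.currentImageUrl = $ctrl.images[index].url;
    };
}

angular.module('app').component('studentImageComponent', {
    bindings: {
        images: '<'
    },
    template:
        '<img ng-src="{{$ctrl.currentImageUrl}}" />' +
        '<button ng-click="$ctrl.prevClicked()">Previous</button>' +
        '<button ng-click="$ctrl.nextClicked()">Next</button>',
    controller: StudentImageController
});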

Looking Ahead

This is nice and the code is much cleaner than the earlier directive code, but there is still some more refactoring that could be done. For example, the prevClicked and nextClicked code are pretty similar. Perhaps I can create a child component for the buttons and reuse it for the previous and next events. For that, we will need to look at the new "require" property for retrieving values from the parent controller, which will be covered in a future blog post.

As usual, you can see the code at GitHub.

JavaScript ES6 Rest and Spread Operator

Wednesday, November 11, 2015

Working with arrays and parameters has become a bit easier now that spread and rest parameters have been released as part of the ECMAScript 6 upgrade.

The syntax for these operators is the same (...) with the difference being:

  • The rest operator will collect the parameters into an array
  • The spread operator will take an array and spread its items out over the parameters when calling a function.

The Rest Operator

So in this example, the function will take the parameters on the end of the call and add them together:
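A sketch of such a function (the function name and the leading description parameter are just for illustration):

function addAmounts(description, ...amounts) {
    // amounts is a real array of whatever extra arguments were passed
    var total = amounts.reduce(function (sum, amount) {
        return sum + amount;
    }, 0);
    return description + ': ' + total;
}

addAmounts('Lunch', 12.50, 8.25, 4.00);  // "Lunch: 24.75"
addAmounts('Empty');                     // "Empty: 0" (amounts is [])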

In the example above, "amounts" is a rest parameter because it has the (...) in front of it. This tells the function to expect a list of parameters to come in and combine them into an array. If the call does not contain any amounts then the amounts variable will equal an empty array.

When you are using the rest operator, there can only be one per function placed at the end of the function signature.  Also, when you use a rest operator, the "arguments" object in that function can no longer be used.

The Spread Operator

As stated before, the spread operator has the same syntax as the rest operator but works in reverse. Instead of passing one to many parameters in the function call, you pass an array designated with the (...) and it gets spread out over the parameters in the method signature.
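A sketch of what that looks like (names are just for illustration):

function addThreeAmounts(first, second, third) {
    return first + second + third;
}

var amounts = [10, 20, 30, 40];

// the array is spread out over the parameters; the extra 40 is ignored
addThreeAmounts(...amounts);  // 60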

In the above code, if the array is bigger than the amount of parameters, then the array fills in the parameters until there are no more parameters in the function signature. The rest of the items in the array are ignored.

One way the spread operator makes working with arrays so much nicer is the ability to use it to combine arrays.
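For example, something along these lines:

var firstQuarter = [10, 20, 30];
var secondQuarter = [40, 50, 60];

// spread both arrays into a new combined array
var firstHalf = [...firstQuarter, ...secondQuarter];
// [10, 20, 30, 40, 50, 60]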

You can also use the spread operator to "push" an array onto another array instead of just a single value or an object.
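For example:

var amounts = [10, 20, 30];
var moreAmounts = [40, 50];

// without the spread operator this would push the whole array as one item
amounts.push(...moreAmounts);
// amounts is now [10, 20, 30, 40, 50]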

Summary

These operators not only give you syntactical sugar when working with arrays, but I am noticing developers in the JavaScript community taking these operators and using them in ways that make their code more decoupled. A specific object does not need to be passed to a function anymore. A function could take an array of values, or a function call could spread values over a list of parameters.

I think it's a great addition to the language.

Promises in ES6

Friday, October 23, 2015

Overview

Promises have been around in JavaScript in one form or another for a while now, but they are typically provided by a framework. For example, if you ever used the jQuery ajax function to make an API call, behind the scenes you were using jQuery's promise mechanism.
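That pattern typically looks something like this (the URL is just for illustration):

$.ajax('/api/users')
    .done(function (users) {
        // fires when the request is fulfilled
        console.log(users);
    })
    .fail(function (error) {
        console.log('something went wrong', error);
    });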

In this example, the "ajax" function returns a promise that when it is fulfilled fires the "done" function.

Now, with the release of the new EcmaScript 6 standards, promises are now a part of the JavaScript language.

My Bad Promise Analogy

One of my favorite restaurants to stop at on long road trips is The Cracker Barrel. Often times, they are super busy, and while there I will have to check in at the hostess stand. Typically, they write my name down and give me a buzzer that will go off when a table is ready. In the meantime I can walk around their gift shop and if I see something cool (usually a ridiculous looking hat) I can buy it. Once the buzzer goes off, I hand the buzzer to the hostess and they then take my family and me to a table where I invariably eat too many biscuits.

In this case, that buzzer that was given to me represents a promise. It is a promise to fulfill something, and in the meantime, I can go about my business doing other things. The act of taking my family to the table at a later time is fulfilling that promise.

This is similar to how a promise works. You make an asynchronous call to do something and you are immediately returned a promise. At a later time, when that asynchronous task completes, it will fulfill the  promise, and execute the code to handle the returned response.

The Different States

  • When a promise is first created and before it has been "settled", it is in a pending state.
  • When a promise finishes its asynchronous task and has a successful resolution, the promise is changed to the fulfilled or resolved state.
  • When a promise finishes and the task has failed, the promise is changed to the rejected state.
  • Once a promise is fulfilled or rejected, it cannot be changed to any other state. The promise is "settled".

Basic Example

Let's look at a basic promise example.
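A sketch of such an example, using a hypothetical doSomething function that wraps an asynchronous task:

function doSomething() {
    return new Promise(function (resolve, reject) {
        // simulate an asynchronous task
        setTimeout(function () {
            var success = true;
            if (success) {
                resolve('the task worked');
            } else {
                reject('the task failed');
            }
        }, 1000);
    });
}

doSomething().then(function (result) {
    console.log(result);   // "the task worked"
}, function (error) {
    console.log(error);
});

console.log('this line runs before the promise settles');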

To create a promise, I instantiate a new Promise and pass in a function that receives two callbacks. The first callback (resolve) fires when the asynchronous task is successful; the second (reject) fires if the asynchronous task fails.

The doSomething function will execute and code execution will continue past this section. The "then" function executes when doSomething finishes its asynchronous task.

Using Promises with window.fetch

In a more realistic example, promises will mostly be used with API calls, and one way to make an API call is with the fetch function that has recently been added to the window object. The fetch function itself returns a native JavaScript promise, so the two work well together.
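A sketch of that combination (the URL is just for illustration):

window.fetch('/api/users')
    .then(function (response) {
        // json() also returns a promise
        return response.json();
    })
    .then(function (users) {
        console.log(users);
    });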

The fetch function returns a promise that, when resolved, returns a response. If that response has JSON data in it, then it needs to be deserialized, and that deserialization function, json(), also returns a promise.

Chaining a Promise

One of the nice features of promises is that the "then" function itself returns a new promise just like the first one. This makes it easy to run multiple asynchronous tasks one after another.
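A sketch of that chaining, using a hypothetical addYada function that resolves after a short delay:

function addYada(message) {
    return new Promise(function (resolve) {
        setTimeout(function () {
            resolve(message + ' yada');
        }, 500);
    });
}

addYada('hello')
    .then(function (result) {
        return addYada(result);
    })
    .then(function (result) {
        return addYada(result);
    })
    .then(function (result) {
        console.log(result);  // "hello yada yada yada"
    });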

So in the above example, I am calling the addYada function three times in sequential order, not all at the same time, by placing each subsequent call in the "then" function.

The "All" Static Function

There are circumstances where I need to make several API calls. I want to make them all at once, but I don't want to do anything until I get the last API response.

In this case I use the Promise.all() function to manage the promises.
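A sketch of that pattern (the endpoints are just for illustration):

var calls = [
    window.fetch('/api/users').then(function (r) { return r.json(); }),
    window.fetch('/api/orders').then(function (r) { return r.json(); }),
    window.fetch('/api/products').then(function (r) { return r.json(); })
];

Promise.all(calls).then(function (results) {
    // results is an array in the same order as the original promises
    var users = results[0];
    var orders = results[1];
    var products = results[2];
});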

The "All" promise returns an array of the resolved promises.

The "Race" static function

Say I have a load-balanced environment, and I want to make the same API call to several servers to get the fastest response possible; I only want to handle the first call to come back. In this scenario I could use the "race" static function. The race function takes several promises like the "all" function, but it settles as soon as the first promise resolves.
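A sketch of that scenario (the server URLs are hypothetical):

var servers = [
    window.fetch('https://server1.example.com/api/status'),
    window.fetch('https://server2.example.com/api/status')
];

Promise.race(servers).then(function (response) {
    // only the first response to come back is handled
    console.log('fastest server responded', response.url);
});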

The "Resolve" static function

I often find myself writing functions that first check a caching mechanism for a value before I actually make an API call. However, the problem I run into is that the value retrieved from cache is synchronous while the value retrieved from the API call is asynchronous. The cached version will not have a promise associated with it, so I have nothing to resolve at that point.

In this case, I use the Resolve function. The resolve function creates a promise and resolves it immediately. I can use this to pass the value from cache and return it to the client.
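A sketch of that caching pattern (the cache object, endpoint, and function name are just for illustration):

var cache = {};

function getUser(id) {
    if (cache[id]) {
        // the cached value is synchronous, so wrap it in an already-resolved promise
        return Promise.resolve(cache[id]);
    }

    return window.fetch('/api/users/' + id)
        .then(function (response) { return response.json(); })
        .then(function (user) {
            cache[id] = user;
            return user;
        });
}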

The Reject static function

Like the Resolve static function, there are times I want to handle an API call failure by logging the error first and then pass it back to the UI.
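A sketch of that pattern (the function name and endpoint are just for illustration):

function getOrders() {
    return window.fetch('/api/orders')
        .then(function (response) { return response.json(); })
        .catch(function (error) {
            // log the failure first, then hand a settled (rejected) promise back to the UI
            console.error('getOrders failed', error);
            return Promise.reject(error);
        });
}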

By returning the rejected promise, I am ensuring that the UI is getting a settled promise back and it can do something to let the user know.

Getting Webstorm to work with Angular2, and TypeScript

Wednesday, September 2, 2015

Update: I have updated this post with new information at the bottom about WebStorm 11 EAP

Well, I finally found the time to start looking into Angular 2, so naturally I went to the website to do the quick start demo. I got it working, but I did have some hangups getting my IDEs (yes, that's plural) to work with Angular 2. This is understandable considering that at this writing Angular 2 is in "Alpha".

The quickstart I am doing is located on the Angular 2 site.

Get the Correct Version

First off, by default, WebStorm runs TypeScript 1.4, so you are going to have to get the latest version of TypeScript via npm.

$ npm install -g typescript

The current version as of this writing is 1.5.3.

Once it is installed you will need to tell WebStorm to use this version.

TypeScript Languages and Preference settings - Set the bin folder location

Run the Correct Command Parameters

Once you have the correct version, Webstorm will still complain about the attributes. So you will need to tell it the following:

  • What module system you are using. In this case I am using "system".
  • You will need to tell it what version of JavaScript to compile to. I am compiling to "ES5".
  • The next two parameters take care of the decorators on the class statement.
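Putting those together, the command line parameters might look something like this (flag names from the 1.5.x compiler; adjust for your own setup):

-m system -t ES5 --experimentalDecorators --emitDecoratorMetadata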

Webstorm Typescript settings - setting the command parameters.

After that, following the directions on the quickstart example, everything worked fine.

 

Update:

With Webstorm 11 EAP you can now use the tsconfig.json file to tell Webstorm what TypeScript settings to use.

I just installed version 11 EAP:

WebStorm Version 11 EAP Logo

Notice, Webstorm now uses TypeScript 1.5.3.

TypeScript settings for WebStorm in version 11EAP

Also, you can now tell WebStorm to use your tsconfig.json file that is in the root of your project.
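A minimal tsconfig.json for this setup might look something like this (the options shown are assumptions based on the settings above):

{
  "compilerOptions": {
    "target": "ES5",
    "module": "system",
    "experimentalDecorators": true,
    "emitDecoratorMetadata": true,
    "sourceMap": true
  }
}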

 

 

Real Time Twitter Stream Component Using Node, Express, Socket.IO, and React on the client.

Friday, August 21, 2015

Introduction

So where I work, we have a large legacy app that we would like to modernize without having to rewrite the entire application from scratch. We have been looking at the React framework because, as their site states, "you can try it out on a small feature" and it won't impact the rest of your application. It also helps that it was developed and is supported by the team at Facebook.

If you are not familiar with React, it is another client side framework, developed by Facebook, but unlike Angular, Ember, etc, React only focuses on the "View" portion of your MVC application.

What's a Virtual DOM

React uses what they call a "virtual DOM" to interact with DOM elements, so when you as a developer are working with React, you actually never get a direct reference to a DOM element. Instead you write JSX code, which is very similar to HTML, and then React manages the interactions with the DOM for you.

This is where React really shines, because they have optimized that interaction to be really fast. So whatever changes you want to make to the DOM, you make them in the JSX code, and then React figures out the fastest possible way to make the changes to the DOM behind the scenes.

The Stream with Node, Twitter, and Socket.IO

I have discussed building a realtime stream before using Node, so I won't go into too much detail there, but I will say that this time I am using Socket.IO, and since Twitter has real time streaming APIs, I decided to use that for streaming purposes.

 
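A sketch of what that server code might look like (the 'tweet' event name, port, and file layout are assumptions; the Express, Socket.IO, and Twitter wiring follows the description below):

var express = require('express');
var app = express();
var http = require('http').Server(app);
var io = require('socket.io')(http);
var Twitter = require('twitter');
var credentials = require('./secret/twitter');

var client = new Twitter(credentials);

app.use(express.static(__dirname + '/public'));

// open the Twitter streaming API and track tweets containing "javascript"
client.stream('statuses/filter', { track: 'javascript' }, function (stream) {
    stream.on('data', function (tweet) {
        // publish each tweet on the socket so connected clients can render it
        io.emit('tweet', tweet);
    });

    stream.on('error', function (error) {
        console.log(error);
    });
});

http.listen(3000, function () {
    console.log('listening on port 3000');
});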

The above code is pretty straightforward. I am using Express to handle starting up the Node application, and then I have brought in two node packages (Socket.IO and Twitter). So once the application starts, I kick off Socket.IO to start publishing on a socket, and then I kick off Twitter to start listening for tweets that have the word "JavaScript" in them.

Note: for obvious reasons I did not check in the twitter.js file in the secret folder but basically it is the following. You just need to supply your own Twitter credentials. You can set that up on their application site.
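A sketch of that file (placeholder values, obviously):

// secret/twitter.js - not checked in; fill in with your own keys
module.exports = {
    consumer_key: 'YOUR_CONSUMER_KEY',
    consumer_secret: 'YOUR_CONSUMER_SECRET',
    access_token_key: 'YOUR_ACCESS_TOKEN_KEY',
    access_token_secret: 'YOUR_ACCESS_TOKEN_SECRET'
};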

The Client Side

On the client side I am then listening for the stream of tweets via Socket.IO and when a tweet comes in I am firing off a callback function that will update the list of tweets.
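That listener might look something like this (the 'tweet' event name is an assumption; addTweet is the callback described in the React code below, where this wiring ends up living in componentWillMount):

var socket = io();

socket.on('tweet', function (tweet) {
    // hand the incoming tweet to the callback that updates the list
    addTweet(tweet);
});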

The React Code

I have broken my React code into three different files for demo purposes.

Twitter-Feed Component

The first file is the parent file, and it contains the starting point where React is going to take over the DOM and inject my code.

Since in React you are building components, the basic way to do that is to call the React.createClass function. Inside that function you can set several properties as well as add your own properties to create the component.

Ultimately, once a component is built, I then call the React.render function, which takes two parameters: the parent component and, in this case, the DOM element I am attaching the component to.
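A sketch of that parent component (the element id, socket event name, and exact markup are assumptions; the property names match the list that follows):

var TwitterFeed = React.createClass({
    getInitialState: function () {
        return { tweets: [] };
    },

    addTweet: function (tweet) {
        var tweets = this.state.tweets;
        tweets.unshift(tweet);
        this.setState({ tweets: tweets });
    },

    componentWillMount: function () {
        var socket = io();
        socket.on('tweet', this.addTweet);
    },

    render: function () {
        return <TwitterList tweets={this.state.tweets} />;
    }
});

React.render(<TwitterFeed />, document.getElementById('twitter-feed'));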

The properties in my TwitterFeed class are as follows:

  • getInitialState: this property takes a function that allows you to set the initial state of the properties I am going to use. In my case I have a tweets array. It is recommended that this property only be set in your parent component.
  • addTweet: this is a function I created to add a tweet to the tweet list and then set the state. The state is what React uses to pass data from one component to the other. It is recommended that only the parent component change the state. All child components are immutable and only listen for changes made by the parent.
  • componentWillMount: this is a React event that fires right before the component is rendered. There are various events similar to this one, like componentDidMount, which fires immediately after the component is rendered.
  • render: this is the function that renders the final HTML to the DOM. Inside this render function I am calling the child component TwitterList and passing that component the tweets array from the state.

Twitter-List Component

This component takes the tweets from the parent Twitter-Feed component, creates the DOM container (which is a DIV), and then makes a call to its child to create a single tweet component.

This component gets the tweets from the parent via the "props" object.

Also note that I am setting a propTypes property. This is useful as it sets the expectation of what type of data the component expects. When this is set and you don't pass the correct data to the component, React will log a nice error to the console explaining what went wrong.
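A sketch of that list component:

var TwitterList = React.createClass({
    propTypes: {
        // React will log a helpful error if tweets is not an array
        tweets: React.PropTypes.array.isRequired
    },

    render: function () {
        var rows = this.props.tweets.map(function (tweet, index) {
            return <Tweet key={index} tweet={tweet} />;
        });

        return <div className="tweet-list">{rows}</div>;
    }
});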

Tweet Component

The final child component builds the single tweet row. It is passed the tweet from the parent Twitter-List component.
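A sketch of that component (the fields shown are the standard Twitter API text and user.screen_name properties):

var Tweet = React.createClass({
    propTypes: {
        tweet: React.PropTypes.object.isRequired
    },

    render: function () {
        var tweet = this.props.tweet;
        return (
            <div className="tweet">
                <strong>@{tweet.user.screen_name}</strong> {tweet.text}
            </div>
        );
    }
});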

The Final Result

Obviously I need to work on the presentation a little bit, but the tweet list is automatically updated in real time as the tweets are being made.

Live Twitter Stream

I have uploaded the code to GitHub, so take a look and let me know what you think. 

TypeScript, Angular and Factories - Another Gotcha (Classic Functions vs Arrow Function)

Wednesday, August 5, 2015

In my last post I talked about how AngularJs Factories need to be instantiated first before they are returned to the calling code.

I came across another problem while converting my JavaScript Angular code to TypeScript and that is how public functions are created in JavaScript depending on how you structure them in TypeScript.

Consider the following TypeScript code:
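A sketch of the kind of class in question (the service name and endpoint are stand-ins; the getUser method and injected $http match the discussion below):

class UserService {
    constructor(private $http: ng.IHttpService) { }

    getUser(id: number) {
        return this.$http.get('/api/users/' + id);
    }
}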

Which then generates this JavaScript code:
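The emitted ES5 is roughly along these lines:

var UserService = (function () {
    function UserService($http) {
        this.$http = $http;
    }
    // getUser ends up as a plain function on the prototype
    UserService.prototype.getUser = function (id) {
        return this.$http.get('/api/users/' + id);
    };
    return UserService;
})();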

The problem here is that the generated code builds the function on the prototype, and the way this code is structured, "this" ends up referring to the global scope and not the instance of the class. So this.$http is undefined.

Funny thing is I did not originally write the code this way but using Visual Studio and Resharper, the code suggestion asked me if I wanted to change it.  Hey! Why not?? Looks cleaner.

Here is the slightly different TypeScript example, this time using arrow function to implement the needed function:
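A sketch of that version:

class UserService {
    constructor(private $http: ng.IHttpService) { }

    getUser = (id: number) => {
        return this.$http.get('/api/users/' + id);
    };
}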

And again, here is the generated JavaScript
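The generated output is roughly:

var UserService = (function () {
    function UserService($http) {
        var _this = this;
        this.$http = $http;
        // getUser is now a closure created in the constructor,
        // and "this" is preserved via the _this variable
        this.getUser = function (id) {
            return _this.$http.get('/api/users/' + id);
        };
    }
    return UserService;
})();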

Now the generated getUser function is in my constructor as a closure function and the "this" has been preserved by defining it with the "_this" variable. Plus, I think this approach is easier to read.

I think the point here is that in TypeScript, you should opt for arrow functions as much as possible. As a matter of fact, if you go to the TypeScript Handbook site, they talk about this in detail.

TypeScript, Angular and Factories, Services and Providers

Monday, August 3, 2015

I have been refactoring an Angular project of mine to use TypeScript. I have been doing this for a few reasons:

  • To be able to start using the EcmaScript 6 and beyond coding features.
  • To make it easier to upgrade to Angular 2 in the future.
  • To take advantage of TypeScript's typing and other features.

One of the issues I have come across while doing this exercise is that all my services are actually Angular Factories.

Angular has three types of services you can use to reuse logic across your application, and they are services, factories, and providers.

Factories:

Factories are typically the most common way to create these components. A factory is basically a function call that returns an object and that object contains properties and functions.
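A minimal sketch of a factory (names and endpoint are just for illustration):

angular.module('app').factory('userFactory', function ($http) {
    return {
        getUser: function (id) {
            return $http.get('/api/users/' + id);
        }
    };
});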

Services:

Services are similar to factories except that they are instantiated when you call them. So for the functionality you want to reuse, you put the logic on the "this" keyword, which essentially attaches the functionality to the service itself.

When you use a service Angular will automatically call it using the "new" keyword.
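The same thing written as a service might look like this:

angular.module('app').service('userService', function ($http) {
    // Angular calls this function with "new", so the reusable logic hangs off "this"
    this.getUser = function (id) {
        return $http.get('/api/users/' + id);
    };
});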

Providers

Providers are a little more involved, but they are useful when you want to set up some configuration before you actually use the service. A provider is the only type of service that is available in the "config" section of a module, which is executed first, so you can use it at that point to set things up and then call the service later in a controller, for example.
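A sketch of a provider (the setApiRoot configuration function is just for illustration):

angular.module('app').provider('userService', function () {
    var apiRoot = '/api';

    // available in the config phase, before the service is created
    this.setApiRoot = function (value) {
        apiRoot = value;
    };

    this.$get = function ($http) {
        return {
            getUser: function (id) {
                return $http.get(apiRoot + '/users/' + id);
            }
        };
    };
});

angular.module('app').config(function (userServiceProvider) {
    userServiceProvider.setApiRoot('/api/v2');
});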

The problem in TypeScript

Because in TypeScript you create interfaces and then classes that implement those interfaces, factories will not new up your class. So the following will not work in TypeScript.
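A sketch of the kind of code that fails (the interface, class, and endpoint are stand-ins):

interface IUserService {
    getUser(id: number): ng.IHttpPromise<any>;
}

class UserService implements IUserService {
    constructor(private $http: ng.IHttpService) { }

    getUser(id: number) {
        return this.$http.get('/api/users/' + id);
    }
}

// this will not work: Angular invokes a factory as a plain function,
// so the class is never "newed up" and the injected service ends up undefined
angular.module('app').factory('userService', UserService);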

The code that is generated looks like this:
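Roughly (the interface is erased, the class compiles to an IIFE, and the factory registration just hands Angular the resulting constructor function):

var UserService = (function () {
    function UserService($http) {
        this.$http = $http;
    }
    UserService.prototype.getUser = function (id) {
        return this.$http.get('/api/users/' + id);
    };
    return UserService;
})();

angular.module('app').factory('userService', UserService);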

To get around this issue, I had to "new up" the classes myself before I returned them as a factory. 
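A sketch of that workaround:

function userServiceFactory($http: ng.IHttpService): IUserService {
    // instantiate the class myself and return the instance from the factory
    return new UserService($http);
}

angular.module('app').factory('userService', ['$http', userServiceFactory]);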

The generated JavaScript code now looks like this:
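Roughly:

function userServiceFactory($http) {
    return new UserService($http);
}

angular.module('app').factory('userService', ['$http', userServiceFactory]);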

So since this is the case, you might as well start using services rather than factories if you are going to use TypeScript, but if you are going to use factories, remember to instantiate them first before you inject them into your code.

AngularJs 2.0 - RIP Controller, $scope, and angular.module

Monday, October 27, 2014

Well, I just went through the slide deck of a presentation that Igor Minar gave at the ng-europe conference.

There weren't many details, but the deck says that controllers, $scope, and the angular.module command are going away. Those are some of the most commonly used pieces of the framework.

At first, I was a little taken aback by this revelation, but the more I think about it, the more it makes sense. Lately, I have tended to have just one line of code in a controller that basically attaches a service to the $scope, and the latest craze I've seen is the "Controller As" syntax.

On the down side, this seems like a totally different paradigm for Angular and a totally new approach to take in and get accustomed to. Then again, with ECMAScript 6 coming out soon as well, I'll be learning a lot of new syntax already, so I might as well throw Angular 2.0 in too. Two birds, one stone.

Anyhow, what do you think about these changes?

UPDATE:

Some more details have emerged about Angular 2.0 and were written about here: "AngularJS 2.0 Details Emerge"

UPDATE:

The keynote session on Angular 2.0 Core has been posted on YouTube.

 

Unit Testing a Service that Makes an API Call Using $httpbackend

Friday, August 22, 2014

Introduction

This is part of a series on Unit Testing in AngularJS. You can see my other posts:

Creating a Single Page Application will inevitably require making an API call to a service at some point, and AngularJS provides two mechanisms to accomplish this in its framework. One is the ngResource module, and the other approach is the $http service. Personally, I am somewhat torn as to which approach to use, but I find myself gravitating more towards the $http service just because it is more customizable. I think ngResource shines if you have a predictable set of RESTful API calls and you just want to make calls and get responses back, but if you want more control over things like exception management, or want to provide a chain of promises, I find the $http service more straightforward to set up.

$httpBackend

Another nice thing about the $http service is that it comes with the corresponding $httpBackend service, which can be used for mocking calls so your unit tests have no external dependencies. After all, the whole point of a unit test is that it is self contained and not reliant on any outside factors.


The Service

So in this example I have a service that makes an API call to get the friends for a current user:
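A sketch of that service (the getFriends call and its URL are assumptions; the refreshUsers POST endpoint matches the test below):

angular.module('app').factory('friendsApiService', function ($http) {
    return {
        // GET call - retrieve the friends for the current user
        getFriends: function () {
            return $http.get('/api/friends');
        },

        // POST call - the one the test below will mock
        refreshUsers: function () {
            return $http.post('/api/friends/refresh');
        }
    };
});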

 

This service has a GET call and a POST call and I want to write a test that will test a mock of the POST call.


The Test Setup

So let's look at the unit test and talk about what is going on in this code.
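A sketch of that setup (the module name, mocked response, and Jasmine wiring are assumptions based on the description that follows):

describe('friendsApiService', function () {
    var friendsApiService;
    var $httpBackend;

    // the mocked response the fake backend will return
    var refreshResponse = [{ id: 1, name: 'Joe' }, { id: 2, name: 'Mary' }];

    beforeEach(module('app'));

    beforeEach(inject(function (_friendsApiService_, _$httpBackend_) {
        friendsApiService = _friendsApiService_;
        $httpBackend = _$httpBackend_;
    }));

    afterEach(function () {
        $httpBackend.verifyNoOutstandingExpectation();
        $httpBackend.verifyNoOutstandingRequest();
    });
});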

So the first thing that is done is that we need to define our service and also define the $httpBackend service that will be mocking our calls.

I am also defining a mocked response that will be returned once the service is called.

In the beforeEach section, the service and $httpBackend are injected into our environment before each test.

And then after each test we want to ensure that the calls that were expected to be made were actually made and that none are still outstanding.

The Unit Test

Here is the first line of code from the test

$httpBackend.expectPOST('/api/friends/refresh').respond(refreshResponse);

 

This tells the $httpBackend service to expect a POST call to be made to a service and that it will return the refreshResponse object that was defined at the top of the page.

The next lines of code are the actual execution of the service.

var deferredResponse = friendsApiService.refreshUsers();
var users;

deferredResponse.then(function(response){
    users = response.data;
});

Since we want to preserve the asynchronous nature of the call while at the same time being able to test its response, we then need to flush the response. This explicitly flushes any pending requests so that we can evaluate the returned response within the unit test. If we did not do this, since our call is asynchronous, our test would finish before the call was actually completed.
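That flush call is simply:

$httpBackend.flush();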

Once the flush is complete, we can evaluate our response to see if our test is working.
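With the mocked response from the setup sketch above, that evaluation might be as simple as:

expect(users).toEqual(refreshResponse);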

 

 

Unit Testing a Simple Controller in AngularJs

Tuesday, August 12, 2014

Introduction

This is part of a series of posts related to how to set up and execute unit testing in AngularJS.

In this post, I will show an example of unit testing a simple controller.

In this case I have started building a controller, and one of the first things I want to test is a collapsible panel inside my view. The way the panel works is that when the user initially goes to a view that has a list of items, at the top of the view there is a button that opens and closes a panel containing a search text box so the user can filter the items on the page.


The Controller Code

So to start out, my new controller looks like this:
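A sketch of that controller (the controller and module names are stand-ins; the scope properties match the description below):

angular.module('app').controller('ItemsController', function ($scope) {
    // the search panel starts out closed
    $scope.isSearchCollapsed = true;

    $scope.toggleSearchCollapsed = function () {
        $scope.isSearchCollapsed = !$scope.isSearchCollapsed;
    };
});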

 

The $scope initially sets the isSearchCollapsed property to true, which the view uses to indicate the panel is closed.

Here is the Jade view that shows or hides the panel.
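A sketch of that view (assuming the Angular UI Bootstrap collapse directive; the markup and model names are stand-ins):

button.btn(ng-click='toggleSearchCollapsed()') Search
div(collapse='isSearchCollapsed')
  input(type='text', ng-model='searchText', placeholder='Filter items')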

 

Using the Angular UI directive, the panel slides open and closed based on the value of the isSearchCollapsed property. The value of that property is changed every time toggleSearchCollapsed is executed.


The Test Code

To test the controller code, I need to be able to replicate the controller and its scope in the test environment, and Angular gives us a way to do that with two built-in functions. Let's look at the code.
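A sketch of that setup (the module and controller names match the controller sketch above):

describe('ItemsController', function () {
    var scope;
    var controller;

    beforeEach(module('app'));

    beforeEach(inject(function ($rootScope, $controller) {
        // create a child scope from the root scope
        scope = $rootScope.$new();

        // instantiate the controller and hand it the newly created scope
        controller = $controller('ItemsController', { $scope: scope });
    }));
});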

First thing I need to do is to replicate the controller's scope so I can then attach it to the replicated controller. The $rootScope object has a function called $new which will do that for me. This function basically is used to create child scopes from the base scope.

The next thing I need to do is replicate the controller, assigning our newly created scope to that controller. Angular uses the $controller service internally to instantiate controllers, and we can use it to instantiate our controller in our test environment.

Once that is set up we are ready to start testing, and in this case, I have three tests.

  • Starting from scratch, my first test is there just to make sure I have successfully replicated the controller and its scope.
  • The second test makes sure the isSearchCollapsed property is initially set to true.
  • The third test validates that when toggleSearchCollapsed is triggered, the value of isSearchCollapsed is changed. Since my second test passed and I know the initial value will be true, when I call the function and test the value again it should be false.
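Inside the describe block sketched above, those three tests might look like this:

it('should create the controller and its scope', function () {
    expect(controller).toBeDefined();
    expect(scope).toBeDefined();
});

it('should initially set isSearchCollapsed to true', function () {
    expect(scope.isSearchCollapsed).toBe(true);
});

it('should toggle isSearchCollapsed when toggleSearchCollapsed is called', function () {
    scope.toggleSearchCollapsed();
    expect(scope.isSearchCollapsed).toBe(false);
});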