Category Archives: JavaScript

ECMAScript ES6 (ES2015) changes overview

I’ve been playing with ReactJS a bit recently, and was pleasantly surprised to see the great changes the JavaScript language has undergone over the last two years or so.

This made me realize that I need to study those changes in more detail, which is how this blog entry came to existence 🙂

According to Wikipedia, “ECMAScript (or ES) is a scripting-language specification, standardized by the European Computer Manufacturers Association. (…) JavaScript is the best-known implementation of ECMAScript since the standard was first published, with other well-known implementations including JScript and ActionScript” (does anyone remember the Flash platform authored by Macromedia?).

In June 2015, the sixth edition of ECMAScript (ES6) was published; it was later renamed to ECMAScript 2015 (ES2015).

Among the design objectives that the TC39 (Ecma Technical Committee 39) team defined for the new version of the language were:

  • Goal 1: Be a better language (for writing: complex applications, libraries (possibly including the DOM) shared by those applications, code generators)
  • Goal 2: Improve interoperation (i.e. adopt de facto standards where possible)
  • Goal 3: Versioning (keep versioning as simple and linear as possible)

Some of the new constructs that caught my attention:


1. let/const vs. var

In ES5, you declare variables via var. Such variables are function-scoped: their scopes are the innermost enclosing functions.

In ES6, you can additionally declare variables via let and const. Such variables are block-scoped: their scopes are the innermost enclosing blocks.

let is roughly a block-scoped version of var.

const works like let, but creates variables whose values can’t be changed.

var num = 0;

if (num === 0) {
  let localSpeed = 100;
  var globalSpeed = 200;

  for (let i = 0; i < 1; i++) {
    num += (localSpeed + globalSpeed) * 1;
  }
}

console.log(typeof i);           // undefined
console.log(typeof localSpeed);  // undefined
console.log(typeof num);         // number
console.log(typeof globalSpeed); // number
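To make the const rule concrete, here is a short sketch (hypothetical names): const prevents re-assignment of the binding, but it does not freeze the value the binding points to.

```javascript
const MAX_SPEED = 300;
// MAX_SPEED = 301; // would throw: TypeError: Assignment to constant variable.

// const protects the binding, not the value it points to,
// so the contents of a const-declared object remain mutable:
const config = { speed: 100 };
config.speed = 200; // fine
console.log(config.speed); // 200
```

If you need an immutable value as well, Object.freeze() is the separate, orthogonal tool for that.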

General advice by Dr. Axel Rauschmayer (author of Exploring ES6):

  • Prefer const. You can use it for all variables whose values never change.
  • Otherwise, use let – for variables whose values do change.
  • Avoid var.


2. IIFEs vs. blocks

In ES5, you had to use a pattern called IIFE (Immediately-Invoked Function Expression) if you wanted to restrict the scope of a variable tmp to a block:

(function () {  // open IIFE
  var tmp = ···;
}());  // close IIFE

console.log(tmp);  // ReferenceError

In ECMAScript 6, you can simply use a block and a let declaration (or a const declaration):

{  // open block
  let tmp = ···;
}  // close block

console.log(tmp);  // ReferenceError


3. concatenating strings vs. template literals

In ES5, you put values into strings by concatenating those values and string fragments:

function printCoord(x, y) {
  console.log('(' + x + ', ' + y + ')');
}

In ES6 you can use string interpolation via template literals:

function printCoord(x, y) {
  console.log(`(${x}, ${y})`);
}

Template literals also help with representing multi-line strings.
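As a quick illustration of the multi-line case (the HTML fragment below is a made-up example), line breaks inside the backticks become part of the string:

```javascript
// ES5 – escape sequences and concatenation:
var htmlOld = '<ul>\n' +
              '  <li>first</li>\n' +
              '</ul>';

// ES6 – line breaks inside the backticks are kept as-is:
const htmlNew = `<ul>
  <li>first</li>
</ul>`;

console.log(htmlOld === htmlNew); // true
```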


4. function expressions vs. arrow functions

In ES5, function expressions used as callbacks are relatively verbose:

var arr = [1, 2, 3];
var squares = (x) { return x * x; });

In ES6, arrow functions are much more concise:

const arr = [1, 2, 3];
const squares = => x * x);


5. for vs. forEach() vs. for-of

Prior to ES5, you iterated over Arrays as follows:

var arr = ['a', 'b', 'c'];
for (var i = 0; i < arr.length; i++) {
  var elem = arr[i];
  console.log(elem);
}

In ES5, you have the option of using the Array method forEach():

arr.forEach(function (elem) {
  console.log(elem);
});

A for loop has the advantage that you can break from it; forEach() has the advantage of conciseness.

In ES6, the for-of loop combines both advantages:

const arr = ['a', 'b', 'c'];
for (const elem of arr) {
  console.log(elem);
}

If you want both index and value of each array element, for-of has got you covered, too, via the new Array method entries() and destructuring:

for (const [index, elem] of arr.entries()) {
  console.log(index + '. ' + elem);
}
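To illustrate the “both advantages” point above: unlike forEach(), a for-of loop can be stopped early with break (illustrative snippet):

```javascript
const letters = ['a', 'b', 'c', 'd'];
const collected = [];

for (const elem of letters) {
  if (elem === 'c') break; // not possible inside forEach()
  collected.push(elem);
}

console.log(collected); // [ 'a', 'b' ]
```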


6. Handling multiple return values

A. via arrays

In ES5, you need an intermediate variable (matchObj in the example below), even if you are only interested in the groups:

var matchObj = /^(\d\d\d\d)-(\d\d)-(\d\d)$/.exec('2999-12-31');
var year = matchObj[1];
var month = matchObj[2];
var day = matchObj[3];

In ES6, destructuring makes this code simpler:

const [, year, month, day] = /^(\d\d\d\d)-(\d\d)-(\d\d)$/.exec('2999-12-31');

(The empty slot at the beginning of the Array pattern skips the Array element at index zero.)

B. via objects

In ES5, even if you are only interested in the properties of an object, you still need an intermediate variable (propDesc in the example below):

var obj = { foo: 123 };
var propDesc = Object.getOwnPropertyDescriptor(obj, 'foo');
var writable = propDesc.writable;
var configurable = propDesc.configurable;

console.log(writable, configurable);  // true true

In ES6, you can use destructuring:

const obj = { foo: 123 };
const {writable, configurable} = Object.getOwnPropertyDescriptor(obj, 'foo');
console.log(writable, configurable);  // true true


7. Handling parameter default values

In ES5, you specify default values for parameters like this:

function foo(x, y) {
  x = x || 0;
  y = y || 0;
  ···
}

ES6 has nicer syntax:

function foo(x=0, y=0) {
  ···
}
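One difference worth noting: the ES5 x = x || 0 idiom also replaces legitimate falsy arguments (0, '', false), while an ES6 default value only kicks in for undefined. A sketch with hypothetical functions:

```javascript
function countItemsOld(items, start) {
  start = start || 1; // 0 is falsy, so an explicit 0 gets overridden
  return start + items.length;
}

function countItemsNew(items, start = 1) { // default applies only to undefined
  return start + items.length;
}

console.log(countItemsOld(['a', 'b'], 0)); // 3 – the 0 was silently replaced by 1
console.log(countItemsNew(['a', 'b'], 0)); // 2 – the 0 is respected
```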


8. Handling named parameters

A common way of naming parameters in JavaScript is via object literals (the so-called options object pattern):

selectEntries({ start: 0, end: -1 });

Two advantages of this approach are: Code becomes more self-descriptive and it is easier to omit arbitrary parameters.

In ES5, you can implement selectEntries() as follows:

function selectEntries(options) {
  var start = options.start || 0;
  var end = options.end || -1;
  var step = options.step || 1;
  ···
}

In ES6, you can use destructuring in parameter definitions and the code becomes simpler:

function selectEntries({ start=0, end=-1, step=1 }) {
  ···
}


9. arguments vs. rest parameters

In ES5, if you want a function (or method) to accept an arbitrary number of arguments, you must use the special variable arguments:

function logAllArguments() {
  for (var i = 0; i < arguments.length; i++) {
    console.log(arguments[i]);
  }
}

In ES6, you can declare a rest parameter (args in the example below) via the … operator:

function logAllArguments(...args) {
  for (const arg of args) {
    console.log(arg);
  }
}

Rest parameters are even nicer if you are only interested in trailing parameters:

function format(pattern, ...args) {
  ···
}

Handling this case in ES5 is clumsy:

function format(pattern) {
  var args =, 1);
  ···
}


10. apply() vs. the spread operator (…)

In ES5, you turn arrays into parameters via apply().

ES6 has the spread operator for this purpose.

A. Math.max() example

ES5 – apply():

Math.max.apply(Math, [-1, 5, 11, 3])

ES6 – spread operator:

Math.max(...[-1, 5, 11, 3])

B. Array.prototype.push() example

ES5 – apply():

var arr1 = ['a', 'b'];
var arr2 = ['c', 'd'];

arr1.push.apply(arr1, arr2); // arr1 is now ['a', 'b', 'c', 'd']

ES6 – spread operator:

const arr1 = ['a', 'b'];
const arr2 = ['c', 'd'];

arr1.push(...arr2); // arr1 is now ['a', 'b', 'c', 'd']


11. concat() vs. the spread operator (…)

The spread operator can also (non-destructively) turn the contents of its operand into Array elements. That means that it becomes an alternative to the Array method concat().

ES5 – concat():

var arr1 = ['a', 'b'];
var arr2 = ['c'];
var arr3 = ['d', 'e'];

console.log(arr1.concat(arr2, arr3)); // [ 'a', 'b', 'c', 'd', 'e' ]

ES6 – spread operator:

const arr1 = ['a', 'b'];
const arr2 = ['c'];
const arr3 = ['d', 'e'];

console.log([...arr1, ...arr2, ...arr3]); // [ 'a', 'b', 'c', 'd', 'e' ]


12. function expressions in object literals vs. method definitions

In JavaScript, methods are properties whose values are functions.

In ES5 object literals, methods are created like other properties. The property values are provided via function expressions.

var obj = {
  foo: function () {
    ···
  },
  bar: function () {
    this.foo();
  }, // trailing comma is legal in ES5
};

ES6 has method definitions, special syntax for creating methods:

const obj = {
  foo() {
    ···
  },
  bar() {
    this.foo();
  }, // trailing comma is legal in ES6, too
};


13. constructors vs. classes

ES6 classes are mostly just more convenient syntax for constructor functions.

A. Base classes

In ES5, you implement constructor functions directly:

function Person(name) { = name;
}
Person.prototype.describe = function () {
  return 'Person called ' +;
};

In ES6, the same is expressed with a class:

class Person {
  constructor(name) { = name;
  }
  describe() {
    return 'Person called ' +;
  }
}

Note the compact syntax for method definitions – no keyword function needed.

Also note that there are no commas between the parts of a class.

B. Derived classes

Subclassing is complicated in ES5, especially referring to super-constructors and super-properties.

This is the canonical way of creating a sub-constructor Employee of Person:

function Employee(name, title) {, name); // super(name)
  this.title = title;
}

Employee.prototype = Object.create(Person.prototype);
Employee.prototype.constructor = Employee;
Employee.prototype.describe = function () {
  return // super.describe()
    + ' (' + this.title + ')';
};

ES6 has built-in support for subclassing, via the extends clause:

class Employee extends Person {
  constructor(name, title) {
    super(name);
    this.title = title;
  }
  describe() {
    return super.describe() + ' (' + this.title + ')';
  }
}


14. custom error constructors vs. subclasses of Error

In ES5, it is impossible to subclass the built-in constructor for exceptions, Error.

The following code shows a work-around that gives the constructor MyError important features such as a stack trace:

function MyError() {
  var superInstance = Error.apply(null, arguments); // use Error as a function
  copyOwnPropertiesFrom(this, superInstance);
}
MyError.prototype = Object.create(Error.prototype);
MyError.prototype.constructor = MyError;

function copyOwnPropertiesFrom(target, source) {
  Object.getOwnPropertyNames(source).forEach(function (propKey) {
    var desc = Object.getOwnPropertyDescriptor(source, propKey);
    Object.defineProperty(target, propKey, desc);
  });
  return target;
}

In ES6, all built-in constructors can be subclassed, which is why the following code achieves what the ES5 code can only simulate:

class MyError extends Error {
}


15. objects vs. Maps

Using the language construct object as a map from strings to arbitrary values (a data structure) has always been a makeshift solution in JavaScript. The safest way to do so is by creating an object whose prototype is null. Then you still have to ensure that no key is ever the string ‘__proto__’, because that property key triggers special functionality in many JavaScript engines.

The following ES5 code contains the function countWords that uses the object dict as a map:

var dict = Object.create(null);

function countWords(word) {
  var escapedWord = escapeKey(word);
  if (escapedWord in dict) {
    dict[escapedWord]++;
  } else {
    dict[escapedWord] = 1;
  }
}

function escapeKey(key) {
  if (key.indexOf('__proto__') === 0) {
    return key + '%';
  } else {
    return key;
  }
}

In ES6, you can use the built-in data structure Map and don’t have to escape keys. As a downside, incrementing values inside Maps is less convenient.

const map = new Map();

function countWords(word) {
  const count = map.get(word) || 0;
  map.set(word, count + 1);
}

Another benefit of Maps is that you can use arbitrary values as keys, not just strings.
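A small sketch of that (made-up data): any value, including an object or a number, can serve as a Map key, and lookup uses identity rather than string coercion.

```javascript
const visited = new Map();

const point = { x: 1, y: 2 }; // an object as a key
visited.set(point, true);
visited.set(42, 'a number key');

console.log(visited.get(point)); // true
console.log(visited.get(42));    // 'a number key'
console.log(visited.get('42'));  // undefined – no string coercion, unlike objects
```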


16. New string methods

A. indexOf vs. startsWith

if (str.indexOf('x') === 0) {} // ES5
if (str.startsWith('x')) {} // ES6

B. indexOf vs. endsWith

function endsWith(str, suffix) { // ES5
  var index = str.indexOf(suffix);
  return index >= 0 && index === str.length - suffix.length;
}

str.endsWith(suffix); // ES6

C. indexOf vs. includes

if (str.indexOf('x') >= 0) {} // ES5
if (str.includes('x')) {} // ES6

D. join vs. repeat (the ES5 way of repeating a string is more of a hack):

new Array(3+1).join('#') // ES5
'#'.repeat(3) // ES6


17. New Array methods

A. Array.prototype.indexOf vs. Array.prototype.findIndex

The latter can be used to find NaN, which the former can’t detect:

const arr = ['a', NaN];
arr.indexOf(NaN); // -1
arr.findIndex(x => Number.isNaN(x)); // 1

As an aside, the new Number.isNaN() provides a safe way to detect NaN (because it doesn’t coerce non-numbers to numbers):

isNaN('abc') // true
Number.isNaN('abc') // false

B. Array.prototype.slice() vs. Array.from() (or the spread operator)

In ES5, Array.prototype.slice() was used to convert Array-like objects to Arrays. In ES6, you have Array.from():

var arr1 =; // ES5
const arr2 = Array.from(arguments); // ES6

If a value is iterable (as all Array-like DOM data structures are by now), you can also use the spread operator (…) to convert it to an Array:

const arr1 = [...'abc']; // ['a', 'b', 'c']
const arr2 = [ Set().add('a').add('b')]; // ['a', 'b']

C. apply() vs. Array.prototype.fill()

In ES5, you can use apply(), as a hack, to create an Array of arbitrary length that is filled with undefined:

// Same as Array(undefined, undefined)
var arr1 = Array.apply(null, new Array(2)); // [undefined, undefined]

In ES6, fill() is a simpler alternative:

const arr2 = new Array(2).fill(undefined); // [undefined, undefined]

fill() is even more convenient if you want to create an Array that is filled with an arbitrary value:

// ES5
var arr3 = Array.apply(null, new Array(2)).map(function (x) { return 'x' }); // ['x', 'x']

// ES6
const arr4 = new Array(2).fill('x'); // ['x', 'x']

fill() replaces all Array elements with the given value. Holes are treated as if they were elements.
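fill() also accepts optional start (inclusive) and end (exclusive) indices; a short sketch of both behaviors:

```javascript
// Holes count as elements:
const filled = new Array(3).fill(7);
console.log(filled); // [ 7, 7, 7 ]

// Optional start (inclusive) index:
const fromIndex = ['a', 'b', 'c'].fill('x', 1);
console.log(fromIndex); // [ 'a', 'x', 'x' ]

// Optional end (exclusive) index:
const range = ['a', 'b', 'c'].fill('x', 1, 2);
console.log(range); // [ 'a', 'x', 'c' ]
```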


18. CommonJS modules vs. ES6 modules

Even in ES5, module systems based on either AMD syntax or CommonJS syntax have mostly replaced hand-written solutions such as the revealing module pattern.

ES6 has built-in support for modules. Alas, no JavaScript engine supports them natively, yet. But tools such as browserify, webpack or jspm let you use ES6 syntax to create modules, making the code you write future-proof.

A. Multiple exports in CommonJS

//------ lib.js ------
var sqrt = Math.sqrt;
function square(x) {
  return x * x;
}
function diag(x, y) {
  return sqrt(square(x) + square(y));
}
module.exports = {
  sqrt: sqrt,
  square: square,
  diag: diag,
};

//------ main1.js ------
var square = require('lib').square;
var diag = require('lib').diag;

console.log(square(11)); // 121
console.log(diag(4, 3)); // 5

Alternatively, you can import the whole module as an object and access square and diag via it:

//------ main2.js ------
var lib = require('lib');

console.log(lib.square(11)); // 121
console.log(lib.diag(4, 3)); // 5

B. Multiple exports in ES6

In ES6, multiple exports are called named exports and handled like this:

//------ lib.js ------
export const sqrt = Math.sqrt;
export function square(x) {
  return x * x;
}
export function diag(x, y) {
  return sqrt(square(x) + square(y));
}

//------ main1.js ------
import { square, diag } from 'lib';

console.log(square(11)); // 121
console.log(diag(4, 3)); // 5

The syntax for importing modules as objects looks as follows (line A):

//------ main2.js ------
import * as lib from 'lib'; // (A)

console.log(lib.square(11)); // 121
console.log(lib.diag(4, 3)); // 5

C. Single exports in CommonJS

Node.js extends CommonJS and lets you export single values from modules, via module.exports:

//------ myFunc.js ------
module.exports = function () { ··· };

//------ main1.js ------
var myFunc = require('myFunc');

D. Single exports in ES6

In ES6, the same thing is done via a so-called default export (declared via export default):

//------ myFunc.js ------
export default function () { ··· } // no semicolon!

//------ main1.js ------
import myFunc from 'myFunc';



That would be it,




Web Storage – client-side data storage

While investigating the best solution for client-side data storage, I came across the W3C Web Storage specification, which may be of interest to you as well.


The specification “…defines an API for persistent data storage of key-value pair data in Web clients”. It mentions two different types of storage:

  • Session storage – purpose of which is to remember all data in the current session, but forget it as soon as the browser tab or window gets closed
  • Local storage – which stores the data across multiple browser sessions (persistent storage) and as a result makes it possible to close the page (or window) and still preserve the data within the browser


Both mechanisms use the same Storage interface:

interface Storage {
  readonly attribute unsigned long length;
  DOMString? key(unsigned long index);
  getter DOMString getItem(DOMString key);
  setter creator void setItem(DOMString key, DOMString value);
  deleter void removeItem(DOMString key);
  void clear();
};


The storage facility is similar to traditional HTTP cookie storage but offers some benefits commonly understood as:

  • Storage capacity: Browsers have enabled a minimum of 5 MB of storage inside a web storage object (IE has allowed 10 MB, but it varies by storage type and browser).
  • Data transmission: Objects are not sent automatically with each request but must be requested.
  • Client side access: Servers cannot directly write to web storage which provides some additional controls from client-side scripting.
  • Data storage: Name/value pairs provide a more flexible data model


Basic operations on both Web Storage mechanisms, look like this:

// session storage
  sessionStorage.setItem('key', 'value');         // set
  var item = sessionStorage.getItem('key');       // retrieve
  var item = sessionStorage.removeItem('key');    // remove
  sessionStorage.clear();                         // clear all
  var no_of_items = sessionStorage.length;        // no. of current items

// local storage
  localStorage.setItem('key', 'value');           // set
  var item = localStorage.getItem('key');         // retrieve
  var item = localStorage.removeItem('key');      // remove
  localStorage.clear();                           // clear all
  var no_of_items = localStorage.length;          // no. of current items
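One caveat worth knowing: Web Storage stores values as strings only, so structured data needs to be serialized, e.g. with JSON. A minimal sketch (saveSettings/loadSettings are hypothetical helper names; the in-memory stand-in merely keeps the snippet runnable outside a browser):

```javascript
// Use the real localStorage in a browser, or a tiny in-memory stand-in elsewhere:
var storage = typeof localStorage !== 'undefined' ? localStorage : (function () {
  var data = {};
  return {
    setItem: function (key, value) { data[key] = String(value); },
    getItem: function (key) { return key in data ? data[key] : null; }
  };
}());

// Storage coerces values to strings, so serialize objects with JSON:
function saveSettings(settings) {
  storage.setItem('settings', JSON.stringify(settings));
}

function loadSettings() {
  var raw = storage.getItem('settings');
  return raw === null ? {} : JSON.parse(raw);
}

saveSettings({ theme: 'dark', fontSize: 14 });
var settings = loadSettings();
console.log(settings.theme); // 'dark'
```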


The specification also provides a StorageEvent interface to be fired whenever the storage area changes. It exposes the following attributes:

  • storageArea – the type of storage used (Session or Local)
  • key – the key that is being changed
  • oldValue – the old value of the key
  • newValue – the new value of the key
  • url – the URL of the page whose key is changed


Privacy Implications:

As has been discussed in the W3C spec and other forums, there are some considerations for privacy in place, both within the spec design and in the variable user-agent controls present in common web browsers today. Within the spec, there are options for user agents to:

  • Restrict access to local storage for “third-party domains”, i.e. domains that do not match the top-level domain (e.g. those that sit within iframes). Unlike with cookies, sub-domains are considered separate domains.
  • Set session- and time-based expirations to make data finite vs. permanent.
  • Use whitelisting and blacklisting features for access controls.


Key facts:

  • Storage per origin: All storage from the same origin shares the same storage space. An origin is a tuple of scheme/host/port (or a globally unique identifier), so two pages that differ in any of scheme, host or port belong to two separate origins.
  • Storage limit: As of now, most browsers that have implemented Web Storage have placed the storage limit at 5 MB per domain. You should be able to change this storage limit on a per-domain basis in the browser settings:
    • Chrome: Advanced>Privacy> Content>Cookies
    • Safari: Privacy>Cookies and Other Website Data; “Details”
    • Firefox: Tools> Clear Recent History > Cookies
    • IE: Internet Options> General> Browsing History>Delete> Cookies and Website Data
  • Security considerations: Storage is assigned on a per-origin basis. Someone might use DNS Spoofing to make themselves look like a particular domain when in fact they aren’t, thereby gaining access to the storage area of that domain on a user’s computer. SSL can be used in order to prevent this from happening, so users can be absolutely sure that the site they are viewing is from the same domain name.
  • Where not to use it: If two different users are using different pathnames on a single domain, they can access the storage area of the whole origin and therefore each other’s data. Hence, it is advisable for people on free hosts who have their sites in different directories of the same domain not to use Web Storage on their pages for the time being.
  • Web Storage is not part of the HTML5 spec: It is a whole specification in itself.



Cookies and Web Storage really serve different purposes. Cookies are primarily for reading server-side, whereas Web Storage can only be read client-side. So the question is, in your app, who needs the data — the client or the server?

  • If it’s your client (your JavaScript), then by all means use Web Storage – with cookies you’d be wasting bandwidth by sending all the data in the HTTP header on each request.
  • If it’s your server, Web Storage isn’t so useful because you’d have to forward the data along somehow (with Ajax or hidden form fields or something). This might be okay if the server only needs a small subset of the total data for each request.


Web Storage vs. Cookies:

  • Web Storage:
    • Pros
      • Supported by most modern browsers
      • Stored directly in the browser
      • Same-origin rules apply to local storage data
      • Is not sent with every HTTP request
      • ~5MB storage per domain (that’s 5120KB)
    • Cons
      • Not supported by anything before:
        • IE 8
        • Firefox 3.5
        • Safari 4
        • Chrome 4
        • Opera 10.5
        • iOS 2.0
        • Android 2.0
      • If the server needs the stored client information, you have to send it along purposefully.
  • Cookies:
    • Pros
      • Legacy support (it’s been around forever)
      • Persistent data
      • Expiration dates
    • Cons
      • Each domain stores all its cookies in a single string, which can make parsing data difficult
      • Data is not encrypted
      • Cookies are sent with every HTTP request
      • Limited size (4KB)
      • SQL injection can be performed from a cookie


If you’re interested in Cookies, you can read more here.


Finally, if you’re looking for a client-side data storage solution for AngularJS, you may want to take a look at angular-cache.




Take care!





Sharing data between controllers in AngularJS (PubSub/Event bus example)

Basically, there are two ways of handling the communication between controllers in AngularJS:

  • using a service which acts as a PubSub/Event bus when injected into controllers:
    • code example (John Lindquist’s fantastic webcast can be found here):
      'use strict';
      angular.module('myAppServices', [])
        .factory('EventBus', function () {
          return {message: "I'm data from EventBus service"};
        });

      'use strict';
      angular.module('myAppControllers', ['myAppServices'])
        .controller('FirstCtrl', function ($scope, EventBus) {
          $scope.data = EventBus;
        })
        .controller('SecondCtrl', function ($scope, EventBus) {
          $scope.data = EventBus;
        });


    • note:
      In case you don’t need a controller anymore on your page, there’s no way (other than a manual one) to automatically “unsubscribe” such controllers (as of today, AngularJS doesn’t support component life-cycle hooks, by which you could wire/un-wire components). This is because the closures used in controllers are not “de-allocated” (memory-wise) when the function returns. As a result, you’ll still be sending messages to such “unused” controllers.


  • depending on the parent/child relation between scopes, you can transmit events using either $broadcast or $emit methods:
    • if the scope of FirstCtrl is parent to the scope of SecondCtrl, you should use $broadcast method in the FirstCtrl:
      'use strict';
      angular.module('myAppControllers', [])
        .controller('FirstCtrl', function ($scope) {
          $scope.$broadcast('UPDATE_CHILD');
        })
        .controller('SecondCtrl', function ($scope) {
          $scope.$on('UPDATE_CHILD', function() {
            // do something useful here;
          });
        });


    • if there’s no parent/child relation between scopes, you should inject $rootScope into the FirstCtrl and broadcast the event into other controllers (including SecondCtrl) and their corresponding (child in this case) $scope’s:
      'use strict';
      angular.module('myAppControllers', [])
        .controller('FirstCtrl', function ($rootScope) {
          $rootScope.$broadcast('UPDATE_CHILD');
        });


    • finally, when you need to dispatch the event from a child controller (SecondCtrl) to $scope’s upwards, you should use the $emit method:
      'use strict';
      angular.module('myAppControllers', [])
        .controller('FirstCtrl', function ($scope) {
          $scope.$on('UPDATE_PARENT', function() {
            // do something useful here;
          });
        })
        .controller('SecondCtrl', function ($scope) {
          $scope.$emit('UPDATE_PARENT');
        });


    • note:
      because $broadcast dispatches events downwards through the (whole) scope hierarchy, it results in a slight performance hit (more details and performance test results, here).
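The service-as-a-bus idea is framework-independent; stripped of Angular, a minimal publish/subscribe bus (hypothetical names, plain JavaScript) looks like this:

```javascript
// Minimal pub/sub bus: handlers are stored per topic and invoked on publish.
function createEventBus() {
  var handlers = {};
  return {
    subscribe: function (topic, fn) {
      (handlers[topic] = handlers[topic] || []).push(fn);
      // Return an unsubscribe function, so listeners can be detached manually.
      return function () {
        handlers[topic] = handlers[topic].filter(function (h) { return h !== fn; });
      };
    },
    publish: function (topic, payload) {
      (handlers[topic] || []).forEach(function (fn) { fn(payload); });
    }
  };
}

var bus = createEventBus();
var received = [];
var unsubscribe = bus.subscribe('update', function (msg) { received.push(msg); });

bus.publish('update', 'first');
unsubscribe();                   // detach the listener
bus.publish('update', 'second'); // no longer delivered

console.log(received); // [ 'first' ]
```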






Tricky behavior of AngularJS $resource service.

When using the $resource service of AngularJS in one of my projects recently, I faced a tricky problem and thought it may be valuable to share the solution here.


Namely, one of the back-end services is returning an Array of String values like this, when making a GET call using a REST client:

[ "Value_1", "Value_2" ]

Having a standard AngularJS service defined like this:

angular.module('myAppBackendService', ['ngResource'])
  .factory('BackendApi', ['$resource', 'BackendHost', 'BackendPort', 'BackendVersion',
    function ($resource, BackendHost, BackendPort, BackendVersion) {
      var connString = BackendHost + ':' + BackendPort + '/' + BackendVersion;
      return {
        values: $resource(connString + '/values/:id', {
        }, {
          query: {method: 'GET', isArray: true},
          get: {method: 'GET', params: {id: '@id'}, isArray: true},
          save: {method: 'POST', isArray: true}
        })
      };
    }]);


and invoked like this:

$scope.values = BackendApi.values.get(
  function (value) {
    // do something interesting with returned values here
    $log.debug('Success: Calling the /values back-end service', value);
  },
  function (errResponse) {
    // do something else in case of error here
    $log.debug('Error: Calling the /values back-end service', errResponse);
  });


I was getting a successful response from the server; however, the data format I received was completely unexpected:

[
  {
    "0" : "V",
    "1" : "a",
    "2" : "l",
    "3" : "u",
    "4" : "e",
    "5" : "_",
    "6" : "1"
  },
  {
    "0" : "V",
    "1" : "a",
    "2" : "l",
    "3" : "u",
    "4" : "e",
    "5" : "_",
    "6" : "2"
  }
]

You can imagine my surprise when trying to figure out what the heck was wrong with it.


After spending some time trying to google out a solution, I finally found the reason for this behavior. Listen to this:

“…ngResource expects an object or an array of objects in your response”

“…When isArray is set to true in the list of actions, the ngResource module iterates over each item received in the response and it creates a new instance of a Resource. To do this Angular performs a deep copy between the item received and the Resource class, which gives us an object with special methods ($save, $delete and so on)”

“…Internally angular uses angular.copy to perform the deep copy, and this function only operates with objects and arrays; when we pass a string, it will treat it like an object.

Strings in JS can behave as arrays by providing sequential access to each character. angular.copy will produce the following when passed a string

angular.copy('hi',{}) => {0:'h', 1:'i'}

Each character becomes a value in an object, with its index set as the key. ngResource will provide a resource with properties 0 and 1.”
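The effect is easy to reproduce outside Angular: copying a string’s own (index-keyed) properties onto an object yields the same shape. Object.assign is used here purely for illustration; angular.copy behaves analogously when handed a string.

```javascript
// A string exposes its characters as index-keyed own properties,
// so an object-style copy turns 'Value_1' into { '0': 'V', '1': 'a', ... }:
const copied = Object.assign({}, 'Value_1');
console.log(copied);
// { '0': 'V', '1': 'a', '2': 'l', '3': 'u', '4': 'e', '5': '_', '6': '1' }
```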



So, what are the possible solutions then?

  1. Use the “transformResponse” action of $resource service (you can read more about this in the documentation of the service itself, here)
  2. Use the lower-level $http service:
      $http.get('/values').success(function (data) {
        $scope.test = data;
      });
  3. Return an array of objects in your json response:
      [
        {'data': "hello"},
        {'data': "world"}
      ]


Happy coding!






AngularJS custom HTTP headers in resource service

Recently I had to make an HTTP call from the browser (client-side) using JavaScript / AngularJS to a REST API (server-side) and retrieve data. Since the authentication mechanism of the API required a security token to be passed over with the request, I studied the AngularJS specs on how to do it best. Basically, there are two ways to do it, either as a:

  1. query parameter, or
  2. custom HTTP header


Because I didn’t want the security token to appear anywhere in the logs or the debugging console (like in the picture below, which shows option 1 just mentioned, i.e. the query parameter), I decided to pass the token as a custom (there’s no standard header for passing tokens) HTTP header.

AngularJS query API token


Since I use Yeoman (an app workflow/scaffolding tool), I noticed that through the standard angular template used for generating an application scaffold, you get a dependency on the Angular framework in version 1.0.7 (the last stable version as of writing this post). Although this is what you would generally expect (a stable version, not a snapshot), the problem is that the Angular documentation for the $resource service (which is what I prefer over the $http service) does not mention the possibility of sending HTTP headers (regarding $http – I think of it as a solution for rather “general purpose” AJAX calls).


One way to set HTTP headers is by accessing $httpProvider.defaults.headers configuration object, like this:

$httpProvider.defaults.headers.get['API-Token'] = 'vy4eUCqpQmGoeWsnHKwCQw'

(you’ll find more documentation about that here), but this way you’re modifying $httpProvider globally, which may not be exactly what you want.


A Google search came to the rescue and I found issue 736, which acknowledges that “$resource should support custom http headers”, but it is the (unstable) release 1.1.3 where this feature is supported for sure (maybe earlier “unstable” versions support it too – I haven’t actually checked – but definitely none of the stable versions do, as of today).


So, what is it that you have to do in order to introduce an unstable version of AngularJS into your project managed by Bower?

bower install angular-unstable
bower install angular-resource-unstable

(dependency on angular-resource.js is required in order for it to work).


Now, the only other thing left to do is to update your index.html file accordingly (to use the proper version of the libraries):

<script src="bower_components/angular-unstable/angular.js"></script>
<script src="bower_components/angular-resource-unstable/angular-resource.js"></script>


…and you can start adding custom HTTP headers in your code:

angular.module('usersService', ['ngResource'])
    .factory('User', function ($resource, apiToken) {
        var User = $resource('\\:8080/1.0/users', { }, {
            query: {
                method: 'GET',
                isArray: true,
                headers: { 'API-Token': apiToken }
            }
        });
        return User;
    });


Hope this short post will save some of your time 🙂 Cheers!




AngularJS, Karma and debugging unit tests in WebStorm or IntelliJ IDEA

Recently I had to debug a few JavaScript unit tests in the WebStorm IDE and was wondering if it would be as easy an experience as it is with Java and IntelliJ IDEA (where I originally come from).

WebStorm 6 doesn’t offer native Karma test runner support (version 7, which is currently in EAP, does – details here), but using the NodeJS plug-in you can execute any kind of NodeJS application (Karma included).


OK, what we’ll need in this exercise is the following:

  • One of two JetBrains IDE’s, either:
    • WebStorm (great for JavaScript code) or
    • IntelliJ IDEA (Java’s no. 1 IDE)
  • NodeJS plug-in installed in the IDE:
    • WebStorm comes with it pre-installed
    • in case of IDEA (Ultimate version, because Community Edition doesn’t have the required JavaScript plug-in for it to work, see here) the plug-in can be downloaded from here.
  • NodeJS environment which can be downloaded from here.
  • Karma (old name Testacular) test runner installed (“npm install -g karma”) that allows running unit (or E2E) tests in one of these browsers:
    • Chrome
    • ChromeCanary
    • Firefox
    • Opera
    • Safari (only Mac)
    • PhantomJS
    • IE (only Windows)
  • Chrome/Firefox “JetBrains IDE Support” extension (required for debugging) that can be downloaded from here.


Installing the NodeJS plug-in in IntelliJ IDEA:

  • Open “Settings” dialog (File -> Settings… in the menu bar)
  • Select “Plugins” (under “IDE Settings”)
  • Click “Browse repositories…”
  • Click “Download and Install” on the “NodeJS” plug-in
  • Press “Restart” when asked, to restart the IDE


Configuring the IDE to execute Karma tests in NodeJS using the plug-in:

  • Open the Run/Debug Configuration dialog by selecting “Edit Configurations” in the Run area of the main toolbar of WebStorm.
  • Add the following two configurations (picture below):
    • “Karma Run”: to perform a “single run” of your unit tests.
    • “Karma Server”: to start Karma in “Continuous Integration” mode (automatic re-runs of your tests whenever files change).
  • Configure the “Karma Run” configuration:
    • Press the “+” button in the top-left of the “Run/Debug Configurations” dialog.
    • Select “Node.js” in the list
    • Fill in the following fields:
      • Name: enter “Karma Run”
      • Path to Node: absolute path to the NodeJS executable (e.g. “c:\NodeJS\node.exe”)
      • Working Directory: absolute path of your AngularJS application (e.g. “C:\MyProjects\AngularApp”)
      • Path to Node App JS File: should point to the globally (i.e. -g) installed Karma NodeJS executable (e.g. “C:\Users\…\AppData\Roaming\npm\node_modules\karma\bin\karma”)
      • Application Parameters: run karma.conf.js --single-run --no-auto-watch --reporters dots
    • Press “Apply”
  • Configure the “Karma Server” configuration:
    • Essentially take the same steps as while configuring “Karma Run”, changing only the following:
      • Name: enter “Karma Server”
      • Application Parameters: start karma.conf.js --no-single-run --auto-watch --reporters dots
  • Configure the “Karma Debug” configuration to allow debugging of Karma unit tests
    • Press the “+” button in the top-left of the “Run/Debug Configurations” dialog.
    • Select “JavaScript Debug -> Remote” in the list
    • Fill in the following fields:
      • Name: enter “Karma Debug”
      • URL to open: http://localhost:8100/debug.html (the port number depends on your configuration in the karma.conf.js file, passed as an “Application Parameter” in the previous two configurations)
      • Browser: choose either Chrome or Firefox
      • Set the “Remote URL” field to point to “http://localhost:8100/base”

        Karma Debug configuration in IntelliJ WebStorm
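The port used in the debug URLs above comes from the Karma configuration file. As a point of reference, here is a minimal karma.conf.js fragment in the old (Karma 0.x, global-variables) format; the file paths are illustrative assumptions, only the port matters for the debug setup:

```javascript
// karma.conf.js – fragment (paths below are example assumptions)
basePath = '';

files = [
  JASMINE,
  JASMINE_ADAPTER,
  'app/lib/angular.js',
  'app/js/**/*.js',
  'test/unit/**/*.js'
];

browsers = ['Chrome'];

// the port the Karma web server listens on – must match the
// "URL to open" / "Remote URL" in the "Karma Debug" configuration
port = 8100;
```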


Finally, run your “Karma Server” configuration and, while it’s working in the background (--auto-watch mode), set debugging breakpoints in your code and fire the “Karma Debug” configuration.

That’s it. Hope this small guide turns out to be helpful.



Yeoman, Karma and e2e testing

Imagine my surprise when I first looked at the Gruntfile.js of my newly generated angular app and found out that Yeoman by default configures only the karma:unit section, leaving out the karma:e2e piece. I know the product is fairly new, but I would expect it to be a little more “mature” (I spent quite some time figuring out why my e2e tests weren’t working and how to solve the problem).


Anyway, in order to run your e2e tests, you have to do the following:

  1. Update the Gruntfile.js:
karma: {
    e2e: {
        configFile: 'karma-e2e.conf.js'
    },
    unit: { ... }
}
  2. Update the karma-e2e.conf.js:
urlRoot = '/e2e/';
proxies = {
    // port has to be the same your web server is running on
    '/': 'http://localhost:9000'
};


Also, if you want Grunt to run your e2e tests without the need to manually run the web server first, you can additionally define the following task:

grunt.registerTask('test:e2e', [


(and “test:unit” task accordingly, for consistency sake):

grunt.registerTask('test:unit', [
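The task bodies are truncated in the post; a plausible complete registration might look like the sketch below. The 'server' task name is an assumption based on Yeoman's generated Gruntfile, not taken from the original:

```javascript
// Hypothetical sketch: start the dev server, then run the matching Karma target
grunt.registerTask('test:e2e', [
    'server',    // assumed Yeoman task serving the app on port 9000
    'karma:e2e'  // the e2e target added to the karma config above
]);

grunt.registerTask('test:unit', [
    'karma:unit'
]);
```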