Web Storage – client-side data storage

While investigating the best solution for client-side data storage, I came across the W3C Web Storage specification, which may be of interest to you as well.

 

The specification “…defines an API for persistent data storage of key-value pair data in Web clients”. It mentions two different types of storage:

  • Session storage – the purpose of which is to remember all data in the current session, but forget it as soon as the browser tab or window is closed
  • Local storage – which stores the data across multiple browser sessions (persistent storage) and, as a result, makes it possible to close the page (or window) and still preserve the data within the browser

 

Both mechanisms use the same Storage interface:

interface Storage {
  readonly attribute unsigned long length;
  DOMString? key(unsigned long index);
  getter DOMString getItem(DOMString key);
  setter creator void setItem(DOMString key, DOMString value);
  deleter void removeItem(DOMString key);
  void clear();
};

 

The storage facility is similar to traditional HTTP cookie storage, but offers some benefits over it:

  • Storage capacity: Browsers provide a minimum of 5 MB of storage inside a web storage object (IE allows 10 MB, but it varies by storage type and browser).
  • Data transmission: Objects are not sent automatically with each request; they must be requested.
  • Client-side access: Servers cannot directly write to web storage, which gives client-side scripting some additional control.
  • Data storage: Array-level name/value pairs provide a more flexible data model

 

Basic operations on both Web Storage mechanisms look like this:

// session storage
  sessionStorage.setItem('key', 'value');         // set
  var item = sessionStorage.getItem('key');       // retrieve
  sessionStorage.removeItem('key');               // remove (returns no value)
  sessionStorage.clear();                         // clear all
  var no_of_items = sessionStorage.length;        // no. of current items

// local storage
  localStorage.setItem('key', 'value');           // set
  var item = localStorage.getItem('key');         // retrieve
  localStorage.removeItem('key');                 // remove (returns no value)
  localStorage.clear();                           // clear all
  var no_of_items = localStorage.length;          // no. of current items

 

The specification also provides a StorageEvent interface, fired whenever the storage area changes. It exposes the following attributes (a usage sketch follows the list):

  • storageArea – the storage area (session or local) that changed.
  • key – the key which is being changed.
  • oldValue – the old value of the key.
  • newValue – the new value of the key.
  • url – the URL of the page whose key changed.
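
For illustration, a minimal sketch of subscribing to these events; note that, per the spec, storage events fire in the other open tabs/windows of the same origin, not in the one making the change:

window.addEventListener('storage', function (event) {
  // fired in other tabs/windows of the same origin when Web Storage changes
  console.log(event.storageArea === localStorage ? 'local' : 'session');
  console.log(event.key);        // the key being changed
  console.log(event.oldValue);   // null if the key was just added
  console.log(event.newValue);   // null if the key was removed
  console.log(event.url);        // URL of the page that made the change
});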

 

Privacy Implications:

As has been discussed in the W3C spec and other forums, there are some considerations for privacy in place, both within the spec design and in the variable user-agent controls present today in common web browsers. Within the spec, there are options for user agents to:

  • restrict access to local storage for “third-party domains”, i.e. those domains that do not match the top-level domain (e.g., ones that sit within iframes); note that sub-domains are considered separate domains, unlike with cookies
  • set session- and time-based expirations, to make data finite rather than permanent
  • use whitelisting and blacklisting features for access controls

 

Key facts:

  • Storage per origin: All storage from the same origin shares the same storage space. An origin is a tuple of scheme/host/port (or a globally unique identifier). For example, http://www.example.org and http://abc.example.org are two separate origins, as are http://example.org and https://example.org, as well as http://example.org:80 and http://example.org:8000.
  • Storage limit: As of now, most browsers that have implemented Web Storage have placed the storage limit at 5 MB per domain. You should be able to change this storage limit on a per-domain basis in the browser settings:
    • Chrome: Advanced > Privacy > Content > Cookies
    • Safari: Privacy > Cookies and Other Website Data; “Details”
    • Firefox: Tools > Clear Recent History > Cookies
    • IE: Internet Options > General > Browsing History > Delete > Cookies and Website Data
  • Security considerations: Storage is assigned on a per-origin basis. Someone might use DNS spoofing to make themselves look like a particular domain when in fact they aren’t, thereby gaining access to that domain’s storage area on a user’s computer. SSL can be used to prevent this from happening, so users can be sure that the site they are viewing is from the domain name it claims.
  • Where not to use it: If two different users are using different pathnames on a single domain, they can access the storage area of the whole origin, and therefore each other’s data. Hence, it is advisable for people on free hosts who have their sites in different directories of the same domain (for example, freehostingspace.org/user1/ and freehostingspace.org/user2/) not to use Web Storage on their pages for the time being.
  • Web Storage is not part of the HTML5 spec: It is a whole specification in itself.

 

Cookies:

Cookies and Web Storage really serve different purposes. Cookies are primarily for reading server-side, whereas Web Storage can only be read client-side. So the question is, in your app, who needs the data — the client or the server?

  • If it’s your client (your JavaScript) that needs the data, then by all means use Web Storage; with cookies, you’d be wasting bandwidth by sending all that data in the HTTP headers on each request.
  • If it’s your server, Web Storage isn’t so useful, because you’d have to forward the data along somehow (with Ajax, hidden form fields, or something similar). This might be okay if the server only needs a small subset of the total data for each request (a sketch follows).
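
A minimal sketch of such forwarding (the endpoint and key names are made up for illustration; the point is only that Web Storage data must be sent explicitly):

// hypothetical example: forward a value kept in Web Storage to the server
var settings = localStorage.getItem('userSettings');

var xhr = new XMLHttpRequest();
xhr.open('POST', '/api/settings');  // hypothetical endpoint
xhr.setRequestHeader('Content-Type', 'application/json');
xhr.send(JSON.stringify({ settings: settings }));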

 

Web Storage vs. Cookies:

  • Web Storage:
    • Pros
      • Supported by most modern browsers
      • Stored directly in the browser
      • Same-origin rules apply to local storage data
      • Not sent with every HTTP request
      • ~5 MB storage per domain (that’s 5120 KB)
    • Cons
      • Not supported by anything before the versions below (see the feature-detection sketch after this list):
        • IE 8
        • Firefox 3.5
        • Safari 4
        • Chrome 4
        • Opera 10.5
        • iOS 2.0
        • Android 2.0
      • If the server needs stored client information, you have to send it explicitly.
  • Cookies:
    • Pros
      • Legacy support (it’s been around forever)
      • Persistent data
      • Expiration dates
    • Cons
      • Each domain stores all its cookies in a single string, which can make parsing data difficult
      • Data is not encrypted
      • Cookies are sent with every HTTP request
      • Limited size (4 KB)
      • SQL injection can be performed from a cookie
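
Regarding browser support, a common feature-detection sketch (the try/catch matters because some browsers throw when storage is disabled or full, e.g. in private browsing):

function webStorageAvailable() {
  try {
    var testKey = '__storage_test__';
    localStorage.setItem(testKey, testKey);
    localStorage.removeItem(testKey);
    return true;
  } catch (e) {
    return false;  // storage disabled, full, or not supported at all
  }
}

if (webStorageAvailable()) {
  localStorage.setItem('key', 'value');
}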

 

If you’re interested in Cookies, you can read more here.

 

Finally, if you’re looking for a client-side data storage solution for AngularJS, you may want to take a look at angular-cache.

Take care!

Elasticsearch custom tokenizers – nGram

If you’ve been trying to query an Elasticsearch index for partial string matches (similarly to SQL’s “LIKE” operator), like I did initially, you may be surprised to learn that the default ES setup does not offer such functionality.

 

Here’s an example using a “match” type query (read more about QueryDSL here):

curl -XGET 'http://search.my-server.com/blog/users/_search?pretty=true' -d '
{
  "query" : {
    "match" : {
      "username" : "mar"
    }
  }
}'

{
  "took" : 3,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
     "failed" : 0
  },
  "hits" : {
    "total" : 0,
    "max_score" : null,
    "hits" : [ ]
  }
}

whereas, when I search for the full username, the result is the following:

curl -XGET 'http://search.my-server.com/blog/users/_search?pretty=true' -d '
{
  "query" : {
    "match" : {
      "username" : "mariusz"
    }
  }
}'

{
  "took" : 9,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 1,
    "max_score" : 5.5108595,
    "hits" : [ {
      "_index" : "blog",
      "_type" : "users",
      "_id" : "835",
      "_score" : 5.5108595,
      "_source" : {
        "id" : 835,
        "version" : 5,
        "creationTime" : "2013/11/29 03:13:27 PM,UTC",
        "modificationTime" : "2014/01/03 01:50:17 PM,UTC",
        "username" : "mariusz",
        "firstName" : "Mariusz",
        "lastName" : "Przydatek",
        "homeAddress" : [],
        "email" : "me@mariuszprzydatek.com",
        "interests" : ["Start-ups", "VC", "Java", "Elasticsearch", "AngularJS"],
        "websiteUrl" : "http://mariuszprzydatek.com",
        "twitter" : "https://twitter.com/mprzydatek",
        "avatar" : "http://www.gravatar.com/avatar/8d8a9d08eddb126c3301070af22f9933.png"
      }
    } ]
  }
}

Wondering why that is? I’ll save you the trouble of studying the Elasticsearch specs and provide the explanation here and now.

 

It’s all about how your data (the “username” field, to be precise) is indexed by Elasticsearch; specifically, which built-in tokenizer (one of many) is used to create search tokens.

By default, ES uses the “standard” tokenizer (more details about it here). What we need instead is the nGram tokenizer (details).
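
(As a side note: you can also ask ES directly how a given tokenizer splits a piece of text, via the _analyze endpoint; a quick sketch, using the same host as in the examples above:)

curl -XGET 'http://search.my-server.com/_analyze?tokenizer=standard&pretty=true' -d 'mariusz'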

 

Here’s how you can check how your data has actually been “tokenized” by ES:

curl -XGET 'http://search.my-server.com/blog/users/_search?pretty=true' -d '
{
  "query" : {
    "match" : {
      "username" : "mariusz"
    }
  },
  "script_fields" : {
    "terms" : {
      "script" : "doc[field].values",
      "params" : {
        "field" : "username"
      }
    }
  }
}'

{
  "took" : 1191,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 1,
    "max_score" : 5.5053496,
    "hits" : [ {
      "_index" : "blog",
      "_type" : "users",
      "_id" : "835",
      "_score" : 5.5053496,
      "fields" : {
        "terms" : [ "mariusz" ]
      }
    } ]
  }
}

So, as you can see near the end of the JSON above, there’s only one token created for the field username: “mariusz”. No wonder querying for the partial string “mar” wasn’t working.

 

What you need to do in order to allow partial string search is the following:

  1. remove the whole current index (I know, sorry, there’s no other way: the data has to be re-tokenized, and that happens at indexing time)
  2. create a new custom tokenizer
  3. create a new custom analyzer
  4. create a new index that has the new tokenizer/analyzer set as defaults

 

Let’s start with removing the old index:

curl -XDELETE 'http://search.my-server.com/blog'

{
  "ok" : true,
  "acknowledged" : true
}

 

Now, we can combine steps 2, 3, and 4 within a single command:

curl -XPUT 'http://search.my-server.com/blog/' -d '
{
  "settings" : {
    "analysis" : {
      "analyzer" : {
        "default": {
          "type" : "custom",
          "tokenizer" : "my_ngram_tokenizer",
          "filter" : "lowercase"
        }
      },
      "tokenizer" : {
        "my_ngram_tokenizer" : {
          "type" : "nGram",
          "min_gram" : "3",
          "max_gram" : "20",
          "token_chars": [ "letter", "digit" ]
        }
      }
    }
  }
}'

{
 "ok" : true,
 "acknowledged" : true
}

 

Let’s now add the same data (the profile of user mariusz) back into the index and see how it got tokenized. The re-indexing step could look like this (a sketch; most of the profile fields are omitted for brevity):
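
curl -XPUT 'http://search.my-server.com/blog/users/835' -d '
{
  "id" : 835,
  "username" : "mariusz",
  "firstName" : "Mariusz",
  "lastName" : "Przydatek"
}'

Now, running the same tokenization check as before: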

curl -XGET 'http://search.my-server.com/blog/users/_search?pretty=true' -d '
{
  "query" : {
    "match" : {
      "username" : "mar"
    }
  },
  "script_fields" : {
    "terms" : {
      "script" : "doc[field].values",
      "params" : {
        "field" : "username"
      }
    }
  }
}'

{
  "took" : 309,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 1,
    "max_score" : 0.26711923,
    "hits" : [ {
      "_index" : "blog",
      "_type" : "users",
      "_id" : "22",
      "_score" : 0.26711923,
      "fields" : {
        "terms" : [ "ari", "ariu", "arius", "ariusz", "ius", "iusz",
                    "mar", "mari", "mariu", "marius", "mariusz", "riu",
                    "rius", "riusz", "sz", "usz" ]
      }
    } ]
  }
}

 

 

Wow, what a ride it was 🙂 Now you can see way more tokens created. I’ll leave it up to you to check whether querying for the partial string “mar” works now.

Take care!

Token-based Authentication Plugin for ActiveMQ

This post is a part of ActiveMQ Custom Security Plugins series.

 

Similarly to what we did in the case of the IP-based Authentication Plugin for ActiveMQ, in order to limit connectivity to the ActiveMQ server based on a token (assuming the connecting client, e.g. a browser running JavaScript over the STOMP protocol, provides such a token when trying to establish a connection with the broker), we’ll need to override the addConnection() method of the BrokerFilter class.

 

For the purpose of this example, I’ll be using Redis as the data store against which I’ll be checking the tokens of connecting clients, to decide whether a client is allowed to establish a connection with the broker (the token exists in Redis) or not (otherwise). To hit Redis from Java, I’ll be using the Jedis driver.
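
Tokens could, for instance, be stored ahead of time as a Redis hash, with the token as the hash field and the username as its value; a hypothetical entry added via redis-cli (the key name matches the REDIS_KEY constant used below, the token itself is made up):

redis-cli HSET authentication:activemq:tokens my-secret-token mariusz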

 

Step 1: Implementation of the plugin logic:

import org.apache.activemq.broker.Broker;
import org.apache.activemq.broker.BrokerFilter;
import org.apache.activemq.broker.ConnectionContext;
import org.apache.activemq.command.ConnectionInfo;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import redis.clients.jedis.Jedis;
import java.util.Map;

public class TokenAuthenticationBroker extends BrokerFilter {

  private final Logger logger = LoggerFactory.getLogger(getClass());
  public final static String REDIS_KEY = "authentication:activemq:tokens";

  Map<String, String> redisConfig;

  public TokenAuthenticationBroker(Broker next, Map<String, String> redisConfig) {
    super(next);
    this.redisConfig = redisConfig;
  }

  @Override
  public void addConnection(ConnectionContext context, ConnectionInfo info) throws Exception {
    String host = redisConfig.get("host");
    int port = Integer.parseInt(redisConfig.get("port"));

    logger.debug("Establishing Redis connection using [host={}, port={}] ", host, port);
    Jedis jedis = new Jedis(host, port);

    String token = context.getUserName();

    logger.debug("Querying Redis using [key={}, token={}] ", REDIS_KEY, token);
    String response = jedis.hget(REDIS_KEY, token);

    if (response == null) {
      throw new SecurityException("Token not found in the data store");
    } else {
      logger.debug("Found token [{}] belonging to user: {}. Allowing connection", token, response);
      super.addConnection(context, info);
    }
  }
}

 

As you can see in the example above, the token provided by the connecting client can be read in ActiveMQ directly from the context (using the getUserName() method; assuming the client sends the token as the connection parameter named “username”). Having the token, the next thing we need to do is query the Redis store (under the REDIS_KEY) and check whether the token exists (the hget() method invoked on the jedis object/driver). Depending on the value of the response, we make the decision whether to addConnection() or throw a SecurityException.
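
From the client’s perspective this could look as follows: a hypothetical sketch using the stomp.js library over WebSocket (the broker URL and token are made up), where the token travels in the connect frame’s login/username slot:

// hypothetical client-side sketch (stomp.js over WebSocket)
var client = Stomp.client('ws://activemq.my-server.com:61614/stomp');

// the token is sent as the "username"; the password is unused by our plugin
client.connect('my-secret-token', 'unused', function () {
  console.log('Connected - the token was accepted by the broker');
});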

 

Also, after the actual plug-in logic has been implemented, the plug-in must be configured and installed. For this purpose, we need an implementation of the BrokerPlugin.class, which is used to expose the configuration of a plug-in and to install the plug-in into the ActiveMQ broker.

 

Step 2: Implementation of the plugin “installer”:

import org.apache.activemq.broker.Broker;
import org.apache.activemq.broker.BrokerPlugin;
import java.util.Map;

public class TokenAuthenticationPlugin implements BrokerPlugin {

  Map<String, String> redisConfig;

  @Override
  public Broker installPlugin(Broker broker) throws Exception {
    return new TokenAuthenticationBroker(broker, redisConfig);
  }

  public Map<String, String> getRedisConfig() {
    return redisConfig;
  }

  public void setRedisConfig(Map<String, String> redisConfig) {
    this.redisConfig = redisConfig;
  }
}

 

The installPlugin() method above is used to instantiate the plug-in and return a new intercepted broker for the next plug-in in the chain. The TokenAuthenticationPlugin.class also contains getter and setter methods used to configure the TokenAuthenticationBroker. These setter and getter methods are available via a Spring beans–style XML configuration in the ActiveMQ XML configuration file (example below).

 

Step 3: Configuring the plugin in activemq.xml:

// "/apache-activemq/conf/activemq.xml"
<broker brokerName="localhost" dataDirectory="${activemq.base}/data" xmlns="http://activemq.apache.org/schema/core">
  <plugins>
    <bean id="tokenAuthenticationPlugin" class="com.mycompany.mysystem.activemq.TokenAuthenticationPlugin" xmlns="http://www.springframework.org/schema/beans">
      <property name="redisConfig">
        <map>
          <entry key="host" value="localhost" />
          <entry key="port" value="6379" />
        </map>
      </property>
    </bean>
  </plugins>
</broker>

 

That’s all there is to it 🙂

 

Happy Coding!

IP-based Authentication Plugin for ActiveMQ

To limit connectivity to the ActiveMQ server based on IP address, we’ll need to override the addConnection() method of the BrokerFilter class, mentioned in my initial post on ActiveMQ Custom Security Plugins.

 

Example implementation (from the book “ActiveMQ in Action”):

import org.apache.activemq.broker.Broker;
import org.apache.activemq.broker.BrokerFilter;
import org.apache.activemq.broker.ConnectionContext;
import org.apache.activemq.command.ConnectionInfo;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class IPAuthenticationBroker extends BrokerFilter {

  List<String> allowedIPAddresses;
  Pattern pattern = Pattern.compile("^/([0-9\\.]*):(.*)");

  public IPAuthenticationBroker(Broker next, List<String> allowedIPAddresses) {
    super(next);
    this.allowedIPAddresses = allowedIPAddresses;
  }

  public void addConnection(ConnectionContext context, ConnectionInfo info) throws Exception {
    String remoteAddress = context.getConnection().getRemoteAddress();
    Matcher matcher = pattern.matcher(remoteAddress);
    if (matcher.matches()) {
      String ip = matcher.group(1);
      if (!allowedIPAddresses.contains(ip)) {
        throw new SecurityException("Connecting from IP address " + ip + " is not allowed");
      }
    } else {
      throw new SecurityException("Invalid remote address " + remoteAddress);
    }
    super.addConnection(context, info);
  }
}

As you can see, the implementation above performs a simple check of the IP address, using a regular expression to determine whether the client may connect. If the IP address is allowed to connect, the call is delegated to BrokerFilter.addConnection(); if it isn’t, an exception is thrown.
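
For reference, the remote address exposed by ActiveMQ is typically a string like “/127.0.0.1:52431” (a leading slash, the IP, a colon, and the ephemeral port), which is the shape the regular expression above assumes; a quick standalone check of that pattern:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PatternCheck {
  public static void main(String[] args) {
    Pattern pattern = Pattern.compile("^/([0-9\\.]*):(.*)");
    Matcher matcher = pattern.matcher("/127.0.0.1:52431");
    if (matcher.matches()) {
      System.out.println("ip = " + matcher.group(1));    // prints: ip = 127.0.0.1
      System.out.println("port = " + matcher.group(2));  // prints: port = 52431
    }
  }
}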

 

After the actual plug-in logic has been implemented, the plug-in must be configured and installed. For this purpose, we need an implementation of the BrokerPlugin.class, which is used to expose the configuration of a plug-in and to install the plug-in into the ActiveMQ broker.

 

import org.apache.activemq.broker.Broker;
import org.apache.activemq.broker.BrokerPlugin;
import java.util.List;

public class IPAuthenticationPlugin implements BrokerPlugin {

  List<String> allowedIPAddresses;

  public Broker installPlugin(Broker broker) throws Exception {
    return new IPAuthenticationBroker(broker, allowedIPAddresses);
  }

  public List<String> getAllowedIPAddresses() {
    return allowedIPAddresses;
  }

  public void setAllowedIPAddresses(List<String> allowedIPAddresses) {
    this.allowedIPAddresses = allowedIPAddresses;
  }
}

The installPlugin() method above is used to instantiate the plug-in and return a new intercepted broker for the next plug-in in the chain. The IPAuthenticationPlugin.class also contains getter and setter methods used to configure the IPAuthenticationBroker. These setter and getter methods are available via a Spring beans–style XML configuration in the ActiveMQ XML configuration file (example below).

 

// "\apache-activemq\conf\activemq.xml"
<broker brokerName="localhost" dataDirectory="${activemq.base}/data" xmlns="http://activemq.apache.org/schema/core">
  <plugins>
    <bean id="ipAuthenticationPlugin" class="com.mycompany.mysystem.activemq.IPAuthenticationPlugin" xmlns="http://www.springframework.org/schema/beans">
      <property name="allowedIPAddresses">
        <list>
          <value>127.0.0.1</value>
        </list>
      </property>
    </bean>
  </plugins>
</broker>

To summarize, creating custom security plugins using the ActiveMQ plugin API consists of the following three steps:

  1. Implementing the plugin logic (overriding methods of the BrokerFilter.class – first code snippet above)
  2. Coding the plugin “installer” (implementing the BrokerPlugin.class – second code snippet)
  3. Configuring the plugin in activemq.xml file (Spring beans-style XML – third code snippet)

 

Happy coding!

ActiveMQ Custom Security Plugins

With this post I’m starting a short series of articles on creating custom security plugins for the ActiveMQ server (probably the most flexible MOM/messaging solution around, imho).

 

To get a quick overview of how powerful the ActiveMQ plugin API really is, let’s start with some basic background information:

  • The flexibility of the ActiveMQ plugin API comes from the BrokerFilter class
  • The BrokerFilter class provides the ability to intercept many of the available broker-level operations, such as:
    • adding consumers to the broker
    • adding producers to the broker
    • committing transactions in the broker
    • adding connections to the broker
    • removing connections from the broker
  • Custom functionality can be added by extending the BrokerFilter class and overriding a method for a given operation (see the minimal sketch after this list)
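
A minimal sketch of that pattern (the class name is a placeholder; concrete implementations are shown in the other posts of this series):

import org.apache.activemq.broker.Broker;
import org.apache.activemq.broker.BrokerFilter;
import org.apache.activemq.broker.ConnectionContext;
import org.apache.activemq.command.ConnectionInfo;

public class MySecurityBroker extends BrokerFilter {

  public MySecurityBroker(Broker next) {
    super(next);  // "next" is the next broker in the interceptor chain
  }

  @Override
  public void addConnection(ConnectionContext context, ConnectionInfo info) throws Exception {
    // custom checks go here; throw a SecurityException to reject the connection
    super.addConnection(context, info);  // otherwise, delegate down the chain
  }
}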

 

Using the ActiveMQ plugin API is one way to approach broker security; it is often used for requirements (security-related, among others) that can’t be met using either:

  • ActiveMQ’s native Simple Authentication Plugin (which handles credentials directly in the XML configuration file or in a properties file)
    or
  • JAAS-based pluggable security modules (JAAS stands for Java Authentication and Authorization Service). What is worth mentioning is that ActiveMQ comes with JAAS-based implementations of modules that can authenticate users using properties files, LDAP, and SSL certificates, which will be enough for many use cases.

 

OK, having said the above, let’s move on and study the example implementations covered in this series: the IP-based and the Token-based authentication plugins.

Sharing data between controllers in AngularJS (PubSub/Event bus example)

Basically, there are two ways of handling the communication between controllers in AngularJS:

  • using a service which acts as a PubSub/Event bus when injected into controllers:
    • code example (John Lindquist’s fantastic webcast can be found here):
      'use strict';
      angular.module('myAppServices', [])
        .factory('EventBus', function () {
          return {message: "I'm data from EventBus service"}
        });
      
      'use strict';
      angular.module('myAppControllers', ['myAppServices'])
        .controller('FirstCtrl', function ($scope, EventBus) {
          $scope.data = EventBus;
        })
        .controller('SecondCtrl', function ($scope, EventBus) {
          $scope.data = EventBus;
        });
      

 

    • note:
      In case you don’t need a controller anymore on your page, there’s no way (other than a manual one) to automatically “unsubscribe” such a controller (as of today, AngularJS doesn’t support component life-cycle hooks, by means of which you could wire/un-wire components). This is because the closures used in controllers are not “de-allocated” (memory-wise) when the function returns. As a result, you’ll still be sending messages to such “unused” controllers.

 

  • depending on the parent/child relation between scopes, you can transmit events using either the $broadcast or the $emit method:
    • if the scope of FirstCtrl is parent to the scope of SecondCtrl, you should use $broadcast method in the FirstCtrl:
      'use strict';
      angular.module('myAppControllers', [])
        .controller('FirstCtrl', function ($scope) {
          $scope.$broadcast('UPDATE_CHILD');
        })
        .controller('SecondCtrl', function ($scope) {
          $scope.$on('UPDATE_CHILD', function() {
            // do something useful here;
          });
        });
      

 

    • if there’s no parent/child relation between the scopes, you should inject $rootScope into FirstCtrl and broadcast the event; other controllers (including SecondCtrl) can then listen for it on their corresponding (child, in this case) $scopes:
      'use strict';
      angular.module('myAppControllers', [])
        .controller('FirstCtrl', function ($rootScope) {
          $rootScope.$broadcast('UPDATE_ALL');
        })
        .controller('SecondCtrl', function ($scope) {
          $scope.$on('UPDATE_ALL', function() {
            // do something useful here;
          });
        });
      

 

    • finally, when you need to dispatch an event from a child controller (SecondCtrl) to $scopes upwards, you should use the $emit method:
      'use strict';
      angular.module('myAppControllers', [])
        .controller('FirstCtrl', function ($scope) {
          $scope.$on('UPDATE_PARENT', function() {
            // do something useful here;
          });
        })
        .controller('SecondCtrl', function ($scope) {
          $scope.$emit('UPDATE_PARENT');
        });
      

 

    • note:
      because $broadcast will dispatch the event downwards through the (whole) scope hierarchy, it results in a slight performance hit (more details and performance test results here); see also the listener-deregistration sketch below.
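
One more practical detail: $scope.$on returns a deregistration function, so “manual” unsubscribing boils down to keeping a reference to it and calling it when the listener is no longer needed; a minimal sketch:

'use strict';
angular.module('myAppControllers', [])
  .controller('FirstCtrl', function ($scope) {
    // $scope.$on returns a function that removes the listener when called
    var unsubscribe = $scope.$on('UPDATE_CHILD', function () {
      // do something useful here;
    });

    // e.g. deregister when the controller's scope is destroyed:
    $scope.$on('$destroy', function () {
      unsubscribe();
    });
  });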

 

Cheers!

Tricky behavior of AngularJS $resource service

When using the $resource service of AngularJS in one of my projects recently, I faced a tricky problem and thought it might be valuable to share the solution here.

 

Namely, one of the back-end services returns an Array of String values like this when a GET call is made using a REST client:

[
  "Value_1",
  "Value_2",
  "Value_3",
  (...)
]

 

Having a standard AngularJS service defined like this:

angular.module('myAppBackendService', ['ngResource'])
  .factory('BackendApi', ['$resource', 'BackendHost', 'BackendPort', 'BackendVersion',
    function ($resource, BackendHost, BackendPort, BackendVersion) {
      var connString = BackendHost + ':' + BackendPort + '/' + BackendVersion;
      return {
        values: $resource(connString + '/values/:id',
        {
          id:'@id'
        }, {
          query: {method: 'GET', isArray: true},
          get: {method: 'GET', params:{id:'@id'}, isArray: true},
          save: {method: 'POST', isArray: true}
        })
      };
  }]);

 

and invoked like this:

$scope.values = BackendApi.values.get(
  function(value) {
    // do something interesting with returned values here
    $log.debug('Success: Calling the /values back-end service', value);
  },
  function(errResponse) {
    // do something else in case of error here
    $log.debug('Error: Calling the /values back-end service', errResponse);
  });

 

I was getting a successful response from the server; however, the data format I received was completely unexpected:

[
  {
    "0" : "V",
    "1" : "a",
    "2" : "l",
    "3" : "u",
    "4" : "e",
    "5" : "_",
    "6" : "1"
  },
  {
    "0" : "V",
    "1" : "a",
    "2" : "l",
    "3" : "u",
    "4" : "e",
    "5" : "_",
    "6" : "2"
  }
]

You can imagine my surprise when trying to figure out what the heck was wrong with it.

 

After spending some time trying to google a solution, I finally found the reason for this behavior. Listen to this:

“…ngResource expects an object or an array of objects in your response”

“…When isArray is set to true in the list of actions, the ngResource module iterates over each item received in the response and it creates a new instance of a Resource. To do this Angular performs a deep copy between the item received and the Resource class, which gives us an object with special methods ($save, $delete, and so on)”

“…Internally angular uses angular.copy to perform the deep copy and this function only operates with objects and arrays; when we pass a string, it will treat it like an object.

Strings in JS can behave as arrays by providing sequential access to each character. angular.copy will produce the following when passed a string

angular.copy('hi',{}) => {0:'h', 1:'i'}

Each character becomes a value in an object, with its index set as the key. ngResource will provide a resource with properties 0 and 1.”

 

 

So, what are the possible solutions then?

  1. Use the "transformResponse" action of the $resource service (you can read more about this in the documentation of the service itself, here) – a sketch follows this list
  2. Use the lower level $http service:
    $http.get('/res').success(function(data){
      $scope.test = data;
    });
    
  3. Return an array of objects in your json response:
    [
      {'data': "hello"},
      {'data': "world"}
    ]
    
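
Regarding option 1, a minimal sketch of what the transformResponse approach could look like (reusing the service from above, with the connection-string plumbing omitted): the idea is to wrap each plain string from the response in an object before ngResource performs its deep copy.

angular.module('myAppBackendService', ['ngResource'])
  .factory('BackendApi', ['$resource', function ($resource) {
    return {
      values: $resource('/values/:id', { id: '@id' }, {
        query: {
          method: 'GET',
          isArray: true,
          transformResponse: function (data) {
            // wrap each plain string in an object, so ngResource can copy it
            return angular.fromJson(data).map(function (value) {
              return { data: value };
            });
          }
        }
      })
    };
  }]);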

 

Happy coding!