Thursday, March 6, 2014

Testing your HAPI service

For the past few weeks I've been building a REST API with hapi. It's a bit of a mind shift from middleware-based frameworks like Express or Django, but so far I'm very happy with it. The design really shows that it was made for building serious services.

As far as testing goes, I couldn't find much material on how to go about testing my API with, say, something like Mocha. I suppose I could have browsed the hapi source code to see how it was tested internally, but what I ended up doing (after some googling) was actually starting the hapi server before each test case and stopping it after each test. In the tests I used mikeal's request library to query the API. This worked, but it felt a bit awkward compared to, for example, what Django provides.

Luckily, I ran into Eran's great talk about hapi where he briefly mentions the server.inject() method. I remembered seeing that method before but didn't really understand what it would be good for. It turns out that with it you can easily build your API tests. So, I created a generic function for querying my API:

/**
 * @param {hapi.Server} server - hapi server instance
 * @param {object} requestobj - options for making the request
 * @param {function} callback - called with the injected response
 */
exports.request = function (server, requestobj, callback) {
    server.inject(requestobj, function (res) {
        if (res.payload) {
            res.body = JSON.parse(res.payload);
        }
        callback(res);
    });
};



In the tests I would do something like this (note that the following test uses Mocha but any test framework should do just fine):


    beforeEach(function (done) {
        var api = require('../api/api.js');
        this.server = api.createServer();
        done();
    });


    it('should GET all players', function (done) {
        var request = { url: '/player/', method: 'GET' };
        test.request(this.server, request, function (res) {
            res.statusCode.should.be.equal(200);
            res.body.length.should.be.equal(6);
            done();
        });
    });

And that was it. No more starting and stopping the server all the time.
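The same helper works for writes too; a POST can be injected by adding a payload to the request object. Here's a sketch — the /player/ endpoint and its fields are made-up examples, not part of any real API:

```javascript
// Hypothetical POST request object for server.inject(); the
// /player/ endpoint and the "name" field are invented for illustration.
var createRequest = {
    url: '/player/',
    method: 'POST',
    payload: JSON.stringify({ name: 'Luke' })
};
// ...and then in a test, something like:
// test.request(this.server, createRequest, function (res) { ... });
```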


Wednesday, February 26, 2014

Ember, Leaflet and Leaflet.Draw

I spent some time trying to figure out how to hook all these components together and decided to document the process here in case someone else is struggling with this as well.

I had a minimalistic Ember project in which I wanted to use Leaflet maps. I quickly found the ember-leaflet project. I added the ember-leaflet.js dependency to my index.html and the following code to my Ember application:

App.TileLayer = EmberLeaflet.TileLayer.extend({
    tileUrl: 'http://localhost:7000/tiles/{z}/{x}/{y}.png'
});
App.MapView = EmberLeaflet.MapView.extend({
    childLayers: [App.TileLayer],
    center: [34.227, -118.55],
    zoom: 11
});
The only problem I had with this was that the Leaflet map was taking too much space: it filled the whole Ember view. So, when I had other content in my Handlebars template (besides the <div id="map"></div> placeholder), the map would fill the whole template, spilling over the other elements. This is why I ended up creating another Ember view, inside the outer view, that would contain only the map. In the parent template I kept all my HTML elements as before, but where I previously had the div placeholder for the map I put {{render "map"}}, and the map.hbs file then contains just the map div: <div id="map"></div>.

So far so good.

I also needed draw controls and, sure enough, Leaflet.Draw had everything I wanted. I added the css and js files into my project and added the following options to my map configuration:

App.MapView = EmberLeaflet.MapView.extend({
   childLayers: [App.TileLayer],
   center: [34.227, -118.55],
   zoom: 11,
   options: {
       drawControl: true
   }
});
That would show the controls on the map just fine, but I also wanted to get the edit controls working, and as the Leaflet.Draw documentation tells us, I needed to create the draw controls myself. I wasn't quite sure where to put that code, so I ended up overriding didCreateLayer and initialising the drawing controls there. But I still wasn't able to hook into all the events that Leaflet.Draw provides. I probably could have grabbed the Leaflet map handle from ember-leaflet and attached my event handlers directly to it, but that seemed like a hack. Going through the ember-leaflet code I noticed that it internally uses concatenatedProperties: ['events'] to gather all supported events. So I introduced that property in my derived view and added all the Leaflet.Draw events there. At that point the code looked like this:

App.MapView = EmberLeaflet.MapView.extend({
    childLayers: [App.TileLayer],
    center: [34.227, -118.55],
    zoom: 11,

    events: [
        'draw:created',
        'draw:edited',
        'draw:deleted',
        'draw:drawstart',
        'draw:drawstop',
        'draw:editstart',
        'draw:editstop',
        'draw:deletestart',
        'draw:deletestop'
    ],

    didCreateLayer: function () {
        this._super();

        var map = this.get('layer'),
            drawnItems = new L.FeatureGroup(),
            drawControl;

        this.set('drawnItems', drawnItems);
        map.addLayer(drawnItems);
        drawControl = new L.Control.Draw({
            edit: {
                featureGroup: drawnItems
            }
        });
        map.addControl(drawControl);
    },

    "draw:created": function (e) {
        var drawnItems = this.get('drawnItems');

        drawnItems.addLayer(e.layer);
        return this;
    }
});

That was it. Now I'm able to get all the Leaflet.Draw events to my view.



Monday, February 17, 2014

Doctor of Lego

So, I finally got my Ph.D.

Professor Cesare Pautasso from the University of Lugano was the opponent in my defence, and we had good discussions. The topic of my dissertation is "Engineering Web Applications: Architectural Principles for Web Software". In case you are interested in getting the pdf file, here's the link.



The whole process took a couple more years than I was hoping for because I was really caught up with the implementation side of these things at the company I work for. It would've been nice to publish this a few years earlier, but no harm done. If you're looking for a TL;DR version of my dissertation, this post is a very brief summary of the subject.

Friday, January 20, 2012

Single page web apps and REST

I was inspired by the Trello tech stack blog post. I've been talking and writing about something similar during the last year or so but never blogged about it. I agree with most of the points made in the post, but they could -- and should -- go even further.

Traditionally, web applications are burdened with the spaghetti of too many technologies. The application logic is fragmented over many of them: HTML/templates, JavaScript and a server side language. This kind of fragmentation makes it very difficult to maintain a consistent code base and to apply proven software patterns like information hiding and responsibility assignment. Web frameworks alleviate these problems to some extent, but they don't take away the underlying problem. In the end, it's up to the developer to maintain the codebase and the responsibilities of the different components. On the other hand, these frameworks can make things even worse because developers become dependent on their features and start using them blindly, perhaps without fully understanding what goes on behind the scenes. The code they produce and the skillset they end up with are very difficult to reuse.

We have all been there: we start with a clean design and set up our MVC model as provided by the framework. We implement the application logic on the server, and the view becomes a more or less static web page that is presented to the user. Then we realize that our application is too slow and too error prone: too many page reloads, and too many milliseconds of blank screen before a new page comes up. Now the optimization starts; we add stuff to the HTML templates for more convenient rendering, and especially we add JavaScript to make the pages more dynamic. Now we have a web application that is implemented in HTML templates, Java, and JavaScript. Moreover, the client and the server are very tightly coupled, and any kind of reusability is seriously compromised.

This is not good.

Highly interactive web applications should be written more like the Trello application: do full MVC on the client side and build an API on the server that the client app consumes directly. The key point here is that you should completely decouple the client and the server. Implement the client as a single-page web application that you can either host on a server or wrap as an application distributed through one (or several) of the web stores. The server, on the other hand, should be implemented in terms of REST. We are already dealing with HTTP, so it makes sense to make the best use of it. Another thing that speaks for REST is its clear separation of responsibilities between the client and the server: the application is responsible for maintaining its own state, and the server is responsible for maintaining the state of the resources it exposes. There's no more guesswork like "should I implement this functionality in JavaScript, HTML templates or on the server?". You implement as much as possible in the client and use the server API only when necessary.

Compared to the traditional way of implementing web applications this approach brings with it many advantages (these points are discussed in further detail in my paper presented at USENIX WebApps '11, slides):
  • Reusable service interface: the server API becomes reusable. You can use it from another application (possibly a mobile app) or you can publish it for others to use.
  • Reusable client application: if you need the application in another environment, you can build the same REST API on top of that system and be able to use the same client application.
  • Responsibilities are easier to assign and enforce: REST sets a strict fence between the client and the server. This makes things like data validation, error handling and localization more explicit and easier to implement. Or should I say, more difficult to mess up.
  • Easier development model: the application logic is not fragmented over different technologies. The client and the server can be developed and especially tested individually and they both have their own internal design and architecture. Implementation of the client application becomes more like implementing a traditional desktop application.
  • Better user experience: the application is not reloaded all the time, so it's snappier. Moreover, the network traffic is minimized because only the payload data is transferred. After the initial page load (bootstrapping), no more CSS, HTML or JavaScript is transferred.

If you look at the Trello tech stack, you can see that they did half of what I just described. They have the single-page web app with full MVC, and they have the server API too. The only thing missing is the REST API. Actually, it just so happens that they are now building a REST API as well, but it's a different one. What I would like to see is them eating their own dog food and using that API directly from the web application. For some reason this seems to be fairly common these days: many web applications (that even have an API to begin with) have one API for their own use and another for 3rd party access. If this is because they don't want to allow 3rd parties access to all of the same resources they use themselves, they should just implement a proper authorization scheme. But that's another topic.

We have successfully utilized this approach on several of our (closed source / private) products so far. Not only have we been able to reuse the REST API but also the applications. We have implemented the same API on top of another system to be able to use some of the same applications in that setting. Compared to what we were doing before (traditional model 2 web applications) we have seen a serious boost in productivity and reusability.

Friday, September 30, 2011

Access Control for Your RESTful API

Problem

When your RESTful API gains popularity and different types of client applications (browsers, mobile kits, desktop UIs, other programs) start consuming it, it becomes apparent that they all use your interface differently. Moreover, they all should have different sets of resources available to them. Some applications only need to read a couple of resources while others need read/write access to most of them. At that point, coming up with a flexible access control scheme becomes crucial. You could try to identify the application that is consuming your interface and, based on that, only allow access to certain resources, but that would be too easy to break. The only secure way of doing access control is to check the authorization of each request. This means tying the access control to the logged-in user. However, even then there are usually different types of users (groups with different permissions) accessing the interface.

Let's take an online book store as an example. The API supports three different user groups:

  • Administrator has access to every bit of information supported by the API 
  • Merchant may add and remove books from the store
  • Buyer can place orders, cancel them and modify her account details
We also have to deal with more fine-grained access control situations when, for example, clients fetch a list of orders from the system. What should be requested and what should be returned? Both can do a GET request on the /order/ resource, but most likely the result will look different for Merchant and Buyer. So, even if the request looks the same, Merchant will see all orders made to her shop whereas Buyer will only see her own orders. Granted, we could alleviate this by providing different URLs for each list (e.g. /shop/123/order/ and /user/jedi/order/), but usually you end up in a situation where you need to filter the results based on the user's authorization anyway.

Even more complicated access control rules are required when you actually need to filter out some of the properties of representations. An example of this might be when a client requests user information (GET on /user/jedi/) and some of that information -- such as SSN or credit card number -- should be hidden from Merchant but visible to the user herself.

Solution

What I've come up with so far is what I call 4-tier filtering. This access control scheme has the following tiers:
  1. Filter resources -- possibility to hide resources from a user
  2. Filter methods -- possibility to define which methods (post/get/put/delete) are allowed for a user
  3. Filter resultset -- possibility to filter the result of a collection type* resource
  4. Filter properties -- possibility to leave out some of the properties (of the representation)
* by collection type resource I mean resources that return list of elements (e.g. /order/).

Implementation

Not many REST frameworks take special interest in fine-grained and flexible configuration of access control (please, post links in comments if you can suggest some). Of the 4 tiers defined above, the first two are fairly easy to implement generically while tiers 3 and 4 are much more difficult. What I've been using for the first two tiers so far is a Django middleware -- codenamed Bulldog -- that I implemented. I should mention that there's nothing Python or Django specific in this approach; it should be applicable in any environment (I'm actually in the process of doing the same thing in node.js).

Bulldog combines the first and second tiers of access control in a way we have come to know from relational databases. This requires having users, groups and permissions. The implementation utilizes Django's built-in support for these entities. Bulldog defines four permissions (CRUD) for each resource that the given interface supports. These permissions are then assigned to users and groups. A user automatically receives all permissions from the groups she belongs to. So, the permission table has permissions like (these are automatically populated by the middleware):

resource_order_post
resource_order_get
resource_order_put
resource_order_delete
resource_user_post
....

The format is resource_<resource name>_<method>. The resource_ prefix distinguishes permissions dealing with access control tiers one and two from all other permissions. A user may have any combination of these permissions. That is, she may be able to GET an order but not PUT (update) it, and she may be able to DELETE a user but not POST (create) one.

By default, Bulldog denies all access to every resource. Only if the user has been granted some of the resource_* permissions is she allowed to access those resources. Since this is a Django middleware, a request without proper authorization never reaches the REST framework, let alone the API implementation.
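The tier 1 and 2 check itself is tiny. Since I'm also porting this idea to node.js, here is a minimal JavaScript sketch of it -- the function name and permission list shape are invented for illustration, not Bulldog's actual code -- with deny-by-default falling out naturally:

```javascript
// Sketch of the tier 1/2 check: a request is allowed only if the user
// holds the matching "resource_<name>_<method>" permission.
// Anything not explicitly granted is denied.
function isAllowed(userPermissions, resource, method) {
    var needed = 'resource_' + resource + '_' + method.toLowerCase();
    return userPermissions.indexOf(needed) !== -1;
}
```

A user holding only resource_order_get would pass isAllowed(perms, 'order', 'GET') but fail the same check for PUT or DELETE.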

Next

I haven't yet figured out what's the best way to implement tiers 3 and 4 generically but obviously I'll need a different set of permissions and the implementation will probably need to grant everything by default (as opposed to denying everything).

In theory, tier 3 could also be implemented in a middleware by filtering the resultset before returning it to the client. In practice, however, this would be a terrible waste of CPU and memory because the server could end up doing a full table scan into memory and then filtering out most of the records. More likely, access control for tier 3 will have to be expressed using the specification pattern, with the specification instance passed along with the request. Later on, a repository or another data source can use the specification to filter the data.
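As a rough illustration of that idea (the names and shapes are made up, and an in-memory array stands in for the real data source, which would instead translate the specification into a WHERE clause):

```javascript
// Hypothetical specification built from the user's authorization:
// a merchant sees orders made to her shop, a buyer only her own.
function orderSpecFor(user) {
    if (user.role === 'merchant') {
        return function (order) { return order.shopId === user.shopId; };
    }
    return function (order) { return order.userId === user.id; };
}

// A repository receives the specification and applies it to the data;
// filtering an array here stands in for a database query.
function findOrders(orders, spec) {
    return orders.filter(spec);
}
```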

Now, to implement tier 4, the REST framework should provide a means to define representations using, for example, JSON Schema or similar. In that definition, I could add annotations (required permissions) for the properties that are subject to access control.
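To make the idea concrete, here is a hypothetical sketch where a "schema" maps each property to the permission required to see it (null meaning public); the schema format and the view_sensitive permission name are invented for illustration:

```javascript
// Hypothetical annotated representation: sensitive properties carry
// the permission a caller must hold in order to see them.
var userSchema = {
    name: null,                  // public
    email: null,                 // public
    ssn: 'view_sensitive',       // requires permission
    creditCard: 'view_sensitive' // requires permission
};

// Keep only the properties the caller is allowed to see.
function filterRepresentation(schema, entity, permissions) {
    var out = {};
    Object.keys(schema).forEach(function (key) {
        var required = schema[key];
        if (required === null || permissions.indexOf(required) !== -1) {
            out[key] = entity[key];
        }
    });
    return out;
}
```

With this in place, a GET on /user/jedi/ could run the representation through filterRepresentation with the caller's permissions before serializing it.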

I'll give this some more thought and write a followup with (hopefully) some code...

Wednesday, September 21, 2011

Oracle instant client and cx_Oracle on OS X Lion

In case you are doing Python development on a Mac and connecting to an Oracle database, there's a good chance you've already run into the segfault (Segmentation fault: 11) screen. At first I thought it had something to do with the cx_Oracle I had just updated, but it turns out the 64-bit version of the Oracle instant client is busted on OS X Lion.

The only way around this is to use the 32-bit version of the instant client instead: download and install the 32-bit instant client (basic-lite and sdk) from Oracle, run Python in 32-bit mode, and install cx_Oracle.

  1. instant client comes with pretty decent installation instructions, so just follow them (set three env vars and create the symlink)
  2. to run python in 32-bit mode you have two options (this is all explained in Python's man page):
    1. % defaults write com.apple.versioner.python Prefer-32-Bit -bool yes
    2. % export VERSIONER_PYTHON_PREFER_32_BIT=yes
  3. remove the old cx_Oracle by simply removing the .egg under /Library/Python/2.7/site-packages/. So, for example: % sudo rm /Library/Python/2.7/site-packages/cx_Oracle-5.1-py2.7-macosx-10.7-intel.egg
  4. lastly say: sudo -E easy_install cx_Oracle

These instructions assume you're installing cx_Oracle globally to your system, hence the sudo. I actually tried going through this process in a virtualenv, but it didn't work. However, I only tried once, so maybe I missed something; and since I had to do a global install anyway, I haven't bothered with the virtualenv for now. Will try again later.

(In case you run into problems, you might want to reboot after the installation; I think OS X keeps some libraries in memory, and when you switch back and forth between different versions of a library, you may end up using a different one than you think.)

Wednesday, April 13, 2011

RGB support for google.maps.Polyline strokeColor gone?

Suddenly our sites stopped showing some of the polylines drawn on Google Maps components. I spent half a day debugging and then noticed that the missing ones used an rgb value to set the stroke color of the polyline, like so:


  var color = "rgb(" + rgb[0] + "," + rgb[1] + "," + rgb[2] + ")";
  return new google.maps.Polyline({
      path: path,
      geodesic: true,
      strokeColor: color,
      strokeOpacity: 1.0,
      strokeWeight: 2
  });

However, as of now (4/13/2011) this no longer works. I couldn't find anything official about this in Google's docs. If somebody has any details, comments are welcome.
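One possible workaround (assuming hex color strings are still accepted, which is what worked for me) is to convert the rgb components to a "#rrggbb" string before handing it to google.maps.Polyline:

```javascript
// Convert rgb components (0-255) to a "#rrggbb" hex color string.
function rgbToHex(r, g, b) {
    function hex(n) {
        var s = n.toString(16);
        return s.length === 1 ? '0' + s : s;
    }
    return '#' + hex(r) + hex(g) + hex(b);
}

// then: strokeColor: rgbToHex(rgb[0], rgb[1], rgb[2])
```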

For the curious: the reason for using rgb values was that the polyline had a fading color, so that the end of the polyline was darker than the beginning.