12 Jun 2013

I've been a huge fan of LISP and related programming languages (Scheme, Racket, etc.) for a very long time. I've been evangelizing the utility of learning how the language works and understanding functional concepts among my friends for almost as long, and recently a friend of mine finally caved and read McCarthy's original paper. Once he realized how important it was, he decided to learn by solving the L-99, a list of 99 problems for LISP (based on Haskell's list of 99 problems), except that we use Scheme/Racket for our implementations, because of the excellent DrRacket environment on both Windows and Linux.

The purpose of the L-99 was to provide a series of challenges that would force someone to learn about the language, its syntax and structure, standard paradigms, and so on, which is something I think it accomplishes very well. But I think it also provides a great backdrop for discussing algorithms regardless of the language they are written in. The L-99 therefore provides two opportunities: the first is learning how to do something in LISP, and the second is learning how to do it "the best way", meaning the solution with the lowest time and/or memory complexity. An example I'd like to mention is the list reversal problem, #5, because it is easy to understand and has two simple solutions with very different characteristics. My friend's solution (in Racket), which works correctly, is given below:

(define (list-reverse x)
  (cond
    [(null? (cdr x)) x]
    [else (append (list-reverse (cdr x)) (list (car x)))]))

As I mentioned above, his solution works correctly (aside from a little bit of parameter validation that is missing), but a close reading reveals that its time complexity is O(n^2). This is because the append function is O(n), and it is called once per item in the list to reverse. In another language, with another library (say, C++ with STL containers), appending to a list is a constant time operation, which is likely why he didn't think much of using append here. That said, an opaque list structure can be made (in LISP) that offers constant time append operations, but this isn't how a chain of cons cells works. For comparison, here is my solution to the problem, which uses a helper function to reverse the list in O(n) time:

(define (list-reverse-linear x)
  (define (lr-inner xa xr)
    (cond
      [(empty? (cdr xa)) (cons (car xa) xr)]
      [else (lr-inner (cdr xa) (cons (car xa) xr))]))
  (cond
    [(not (pair? x)) x]
    [(empty? (cdr x)) x]
    [else (lr-inner (cdr x) (cons (car x) null))]))

I suppose it is obvious to most people that lower asymptotic complexity is always better, but just for the sake of argument, let's look at the time required to reverse a list with 10,000 elements in it:

; Input
(define (gen-list n)
  (cond
    [(= n 0) null]
    [else (cons n (gen-list (- n 1)))]))
(define tl (gen-list 10000))
(time-apply list-reverse (cons tl null))
(time-apply list-reverse-linear (cons tl null))

; Result is, in milliseconds (cpu real gc)
(3432 3427 2293)
(0 1 0)

A stark contrast: based on a measurement with time-apply, it takes nearly 3.5 seconds to reverse the list using the O(n^2) algorithm, but less than one millisecond to do so with the O(n) algorithm. A big part of this, of course, comes from garbage collection, which accounts for nearly 2/3 of the run time of the slower algorithm, due to how many intermediate lists are generated and discarded, while the linear algorithm does not allocate any unused items. Another important difference is that my solution is tail-recursive, which enables significant optimizations (ones that Racket performs), compared to my friend's solution.

I think that the L-99 has a wonderful secondary purpose. Not only can it be used to introduce someone to the standard idioms, tools, and techniques of functional programming, but it can also be used to introduce and discuss the analysis of algorithms, since the problems can generally be solved in multiple ways, with different time and space complexity requirements. It even provides a platform to discuss different implementation strategies (such as tail recursion versus not) and experiment with their effects on performance, all with problems and solutions that are very simple to state and understand. This is a great aspect of LISP as a learning tool in general: it frees you as a programmer from worrying about necessary but conceptually irrelevant implementation details, so you can focus on understanding what you're doing, why you're doing it, and how you're doing it, on an algorithmic level. Mastering the relevant skills translates readily into other languages, where it helps you separate the problems (algorithms, memory management, program structure, inheritance organization, the list goes on) in your mind, allowing you to solve each individually.

Normally I'd write another 3+ pages, at minimum, trying to beat this to death, but I think this is a pretty self-defending position. Instead, I'd like to ask you a few questions to think about and maybe take with you as you do or learn about programming. Have you ever looked at the L-99? Can you solve all the problems, even if you don't take the time to write out complete solutions? Are your solutions optimal in their asymptotic complexities? I'd also love to hear about any particularly clever solutions people have created.

Ever since I wrote my first MySQL query, nearly a decade ago, I have wanted nothing more than to stop writing database queries in SQL for my web development work. I wouldn't even say that I dislike the idea of SQL; I just don't like writing queries. SQL is pretty flexible and extremely powerful, but it's tedious. Queries read almost like English sentences, and despite the repetition of writing nearly identical structures many times, SQL syntax errors (magically) happen at a rate far higher than they do in my PHP or JavaScript source code, likely due to the extensive escaping and quoting involved in writing queries by hand. Within a couple of years, I grew tremendously as a PHP developer and learned about the concept of database abstraction, but sadly it didn't solve my problem.

The Problem with Database Abstraction

Database abstraction seems like a great idea—it hides the details of the database API from the user application. And, for this purpose, it is excellent. Perhaps it was my naivety, but I expected more. I expected database abstraction to abstract the syntax of SQL, not just hide the API for talking to MySQL. During those first few years, I worked on several different projects, and in almost every case, we used ad hoc solutions for hiding some of the details of constructing queries, such as a select() method that accepts a WHERE clause and fills in the rest of the query automatically. In the time since, these types of solutions have been included in abstraction layers such as the one provided by the Zend Framework, but they suffer from many problems.

Chief among these problems is incomplete SQL abstraction. This was a common aspect of our ad hoc solutions in 2005, as we really just wanted to hide part of the query syntax, but even modern solutions require the user to fill in some portions of the SQL, although this is improving. The problem is that without complete SQL abstraction, the results aren't really portable. The entire objective of abstracting the API is to provide backend portability without requiring significant changes, yet every piece of hand-written SQL, even a fragment, is an opportunity to use non-portable features of the particular DBMS at hand. The other consequence of incomplete abstraction is that input escaping must be done by the consuming code. This is easy, but it requires extra code and provides extra opportunities to make mistakes.

The next problem is that a lot of solutions are verbose. Sometimes this is due to the language syntax, like PHP's array syntax, but the library structure is almost always a contributing factor. Granted, because we are doing abstraction, some overhead is not unreasonable, but when you combine the source code overhead with incomplete abstraction, it raises a valid question about the utility: you are now writing more code than if you wrote SQL directly, and if you change backends, you will still need to inspect all of your code and possibly make changes despite using an adapter.

Finally, most abstraction layers do not provide a means for simplifying and standardizing the generation of frequently used queries or query fragments. They may provide abstraction and query generation, but the generation logic must be repeated wherever a query is built. You can ease the pain by defining additional functions that modify query objects to perform the repeated tasks or add groups of conditions to WHERE clauses, but this doesn't hook into the rest of the abstraction in a clean way. Recently, I've come to learn of an approach known as Object-Relational Mapping (ORM), which provides some of this capability, as it translates key/value maps into database objects and vice versa, but it appears to be limited in the complexity of the queries that can be created through the object-mapping approach.

In 2006/2007, I began developing my own solution designed specifically to address these problems, in PHP. I've actually written about the approach before, on a blog long gone, where I posted some proto-code that formed the core of what I called "database filtering" and used for about 3 years on all of my PHP work. The premise is simple: the user constructs a map, in the form of a PHP array, which specifies a filter—that is, a set of matching conditions—for obtaining data from the database, and then passes it to a function that decodes the filter, escapes all of the input, formats it according to the database field types, and creates a complete query to obtain the data. This combined features of ORM with a more general query generation technique, because the filters did not actually reflect the structure of the table: they reflected the structure of an abstract data interface, which could be extended to include conversion rules for many different types of objects. An important aspect of my approach was automatically doing "the right thing" by converting array parameters to IN() conditions, but leaving scalars as simple equality. It greatly simplified my code because I no longer had to worry about formatting my data every time that I hit the database. Effectively, it formed an abstraction layer for data retrieval, freeing me from thinking about SQL entirely. In fact, aside from writing the primitives used by the filtering code, I didn't write any SQL in my application logic at all.

Now that I've made the switch from PHP to Node.js, the first thing I needed was a suitable database abstraction layer. I decided to use the excellent node-mysql package for handling the connection to MySQL, and I decided to port my old database filtering library from PHP to Node.js, with many improvements, and release it on github. If you hadn't guessed, the last six paragraphs were just background and motivation for my actual purpose here: to introduce db-filters for Node.js, explain why it is a good solution to these problems, and ideally motivate you, the reader, to use it for all of your SQL needs, and then convince your friends to use it as well. I mentioned the Zend Framework earlier, but since I am developing a Node.js solution, it is more useful to compare against Node.js Database Drivers, a database abstraction layer that provides query-generating functionality for MySQL and Drizzle (which is really just MySQL as well, but the use of different libraries makes it a good example of how current solutions are effective for API abstraction, but not language abstraction).

How I'd Like to Solve the Problem

The primary goals of db-filters are specifically to address the three problems outlined above, with many eventual stretch-type goals that may or may not be feasible. I want to provide a solution that completely abstracts SQL, is simple to use, has a succinct yet flexible API, escapes parameters appropriately without programmer action, and supports pseudo-fields (I call them special keys) in table definitions, which can be used to implement frequently repeated object-property decoding in a central location with proper hooks into the SQL abstraction and filter decoding process. Additionally, db-filters provides all of these capabilities for INSERT, UPDATE, and DELETE queries as well, which is uncommon in other query-building abstraction layers. Importantly, db-filters is not an API abstraction (although it effectively acts as one), because it wraps around an underlying driver for communication. You use it with a (presumably simpler) library that provides a means for sending queries and retrieving results. With that in mind, let's take a look at some example SQL and see how to generate it using a filter. All of the examples assume that appropriate filter definitions exist for the tables being used. Normally, filter definitions are initially created using the generation script included with db-filters, and then any necessary special handlers are implemented manually, so I will not include the definitions themselves.
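
For orientation only, the shape of such a definition is small; a minimal sketch, assuming the module exports the filter constructor directly and with the generated column details elided, might look like this:

// Sketch: a filter wraps a table name, its column definitions, and (optionally)
// a map of special handlers, shown later in this post. The require name and the
// constructor usage follow the examples below; the columns themselves normally
// come from the generation script.
var db = require('db-filters');
var columns = { /* ... generated column definitions ... */ };
var users = new db('users', columns);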

We'll start with a pair of basic queries to illustrate the simplicity of the API, as well as the inferred formatting. We have a table of users, and we wish to retrieve all of the information for a specific user, and then we wish to retrieve information for a group of users, using two queries:

SELECT * FROM `users` WHERE `id` = 1
SELECT * FROM `users` WHERE `id` IN (1, 2, 3, 4, 5)

Side note: all of the "expected SQL" I am providing is the exact output of the filtering code given afterwards, obtained with the buildQuery() method, which returns the SQL without sending it to the server. I provide db-mysql comparison code in a few places, but its installation actually failed on my system, so I cannot guarantee that the comparison code is 100% correct; it is based on the (limited) documentation available on their website, but it should give a good approximation of how much code must be written, including how much SQL is hand-written. Assuming that we have a filter for the users table stored in the users variable, the code would look like this:

// Success and failure are callback functions invoked asynchronously with the results
// they forward the results of mysql.query exactly as given
users.select({id : 1}).exec(success, failure)
users.select({id : [1, 2, 3, 4, 5]}).exec(success, failure)

// Comparison code snippet using Node.js Database Drivers' db-mysql
conn.query().select('*')
    .from('users')
    .where('`id` = ?', [1])
    .execute(callback);

this.query().select('*')
    .from('users')
    .where('`id` IN ?', [[1, 2, 3, 4, 5]])
    .execute(callback)
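
For reference, here is roughly how the "expected SQL" strings in this post were captured; a minimal sketch, assuming buildQuery() is exposed on the query object returned by select(), as described in the side note above:

// Sketch: obtain the generated SQL as a string instead of executing it
var sql = users.select({id : 1}).buildQuery();
console.log(sql); // SELECT * FROM `users` WHERE `id` = 1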

At this point, you should probably not be impressed, but it should already be apparent that db-filters provides a more effective way of creating queries. db-mysql requires an extra method call, query(); it requires specifying the * field manually, which is SQL-specific and could be inferred from the absence of other field definitions; and it requires partial SQL in the call to where(). Although it doesn't matter in this example, requiring you to write out the contents of the WHERE clause defeats the point of abstracting the backend, because you will be tempted to use MySQL extensions, and suddenly your code isn't portable anyway. With the filtering approach, you write no SQL, you write less code, you write simpler code, and the result is completely portable to any database engine supported by the library. With that in mind, let's look at a more complicated query, to see whether these advantages carry over. This time I have to write a query that retrieves all of the posts on a specific page in a thread, along with the user information for the authors of those posts, ideally using a join. First, the SQL we expect to see:

SELECT `p`.*, `u`.* FROM posts AS p LEFT JOIN users AS u ON `p`.`userId` = `u`.`id` WHERE `p`.`threadId` = 1 ORDER BY `p`.`posted` ASC LIMIT 60, 20

This is generated with another very straightforward Node.js snippet. It is important to note that there are no SQL literals or directly-embedded strings in this code, only function calls, table/field names, and parameters. Comparable solutions often require ordering clauses such as posted ASC or ON clauses such as `p`.`userId` = `u`.`id`, which seem fairly innocuous but still serve to break the abstraction by requiring hand-written pieces of SQL.

// Using db-filters with users and posts as predefined filters
var page = 3;
var tId = 1;
var postsPerPage = 20;
posts.select({threadId : tId}, 'p')
    .order(db.$asc('posted'))
    .limit(page*postsPerPage, postsPerPage)
    .left_join(users, 'u')
    .on(['userId', 'id'])
    .exec(success, failure);

// What db-mysql code might look like
this.query().select(['`p`.*', '`u`.*'])
    .from('posts AS p')
    .where('`p`.`threadId` = ?', [tId])
    .join({type : 'LEFT', table : 'users', alias : 'u', conditions : '`p`.`userId` = `u`.`id`'})
    .order({'`p`.`posted`' : true})
    .limit(page*postsPerPage, postsPerPage)
    .execute(callback)

This is where the difference is really apparent. db-mysql does not know how to combine fields with tables, so all table names/aliases must be prepended onto field names manually whenever they are used in the query. The syntax is also very verbose, requiring 33% more typing despite a weaker abstraction. By contrast, the db-filters code has much more readability to it, contains exactly zero back ticks (`), and can in principle emit a valid query for any RDBMS, because there are again no keywords or operators. These differences only become more significant as we consider different situations. Let's look at one last SELECT query, which will demonstrate how the function abstraction and special fields work in a filter-based approach:

SELECT * FROM users WHERE `name` = 'greg' AND `password` = MD5(CONCAT(`salt`, 'pass')) LIMIT 1

This might be part of a simple salted, hashed login system. It isn't exactly a robust login solution (use bcrypt instead, if you're building one at home), but I'm not presenting a debate, I'm demonstrating how to use the library, and this was the easiest kind of demonstration to come up with for calling SQL functions.

var name = 'greg';
var pass = 'pass';
users.select({name : name, password : db.$md5(db.$concat(db.$field('salt'), pass))})
    .limit(1)
    .exec(success, failure);

// And again in db-mysql
this.query().select('*')
    .from('users')
    .where('`name` = ? AND `password` = MD5(CONCAT(`salt`, ?))', [name, pass])
    .limit(1)
    .execute(callback);

Using db-mysql, we are forced to write out the entire body of the WHERE clause ourselves, including directly calling MySQL functions. If we had been targeting PostgreSQL instead, string concatenation would be written with the || operator, which MySQL treats as logical OR rather than concatenation (unless the PIPES_AS_CONCAT mode is enabled). Or, if we were using SHA1 hashing instead of MD5, our solution would not be portable to PostgreSQL, as it does not provide a SHA1() function, although it does provide the digest() function, which can be used for SHA1 hashing. But, by using the abstraction layer for db.$md5, db.$sha1, or db.$concat, the query building engine that implements these functions for each targeted database can produce the appropriate syntax for the desired effect, emitting SHA1('text') on MySQL, but digest('text', 'sha1') on PostgreSQL.

You may find yourself saying "well, it's nice that you've abstracted the entire function list, but typing out all of that db.$md5 and db.$concat stuff is quite tedious in its own right, so I'm not sure I am willing to do that everywhere that I want to include code like this." That is where the final key feature of db-filters becomes relevant: special fields. We can update our filter definition to include a map of special handlers in addition to column names by supplying a third argument to the constructor:

// Earlier column definitions omitted
var columns = {  ...  };
var special = {
    'salt_password' : function(key, value, terms, options) {
        value = db.$eq(db.$md5(db.$concat(db.$field('salt'), value)));
        terms.push(value.get('password', this, options));
    }
};

var users = new db('users', columns, special);

// Now, use the salt_password special key to invoke the handler, which can automatically apply rules
users.select({name : name, salt_password : pass})
    .limit(1)
    .exec(success, failure);

This generates identical SQL to the "long method," but it simplifies the calling code. The special handler is defined once, using somewhat uglier syntax, but it can then be reused many times, providing consistency in generating queries that use that condition, and it does not require sacrificing any abstraction. This may seem contrived, as password logins generally happen in one place, but if you require a user to supply a password to change his settings as well as when logging in, then you can define your password transformation once and use it in both places. Because special handlers push terms onto the array that is used to build WHERE clauses, they can emit zero or more terms, which allows them to perform ORM-style mapping as well, or any other complicated decoding and parameter preparation.

My last two examples will be to demonstrate how db-filters handles INSERT and UPDATE queries, along with a comparison to db-mysql. In contrast to other libraries, db-filters uses the same filter decoding logic to specify parameters for insert and update queries (and delete), allowing you to leverage all of the function abstraction and the use of special key handlers in these queries as well, automatically.

-- Create a new post in our thread
INSERT INTO posts SET `threadId` = 1, `userId` = 1, `posted` = NOW(), `post` = 'Sample post'
-- Update the user's post count
UPDATE users SET `postCount` = `postCount` + 1 WHERE `id` = 1

The code will also introduce a useful feature of the helper functions: for all of them, if one of the arguments is missing, it is assumed to be the column that keys the value. This compresses the syntax for the most common situation, computing a field as a function of itself.

var user = {id : 1};
var post_text = 'Sample post';

// With db-filters
posts.insert({threadId : tId, userId : user.id, posted : db.$now(), post : post_text})
    .exec(success, failure);
users.update({postCount : db.$add(1)}, {id : user.id})
    .exec(success, failure);

// With db-mysql
this.query().insert('posts',
    ['threadId', 'userId', 'posted', 'post'],
    [tId, user.id, {value : 'NOW', escape : false}, post_text])
    .execute(callback);
this.query().update('users')
    .set({postCount : {value : '`postCount` + 1', escape : false}})
    .where('id = ?', [user.id])
    .execute(callback);

By now I sound like a broken record, but again, with the approach used in db-filters, SQL is completely abstracted, whereas db-mysql involves writing raw SQL, in addition to the tedious syntax used to mark literals that should not be escaped. The db-filters code could also be improved by adding a special key for 'user' that understands how to decode the id property from the user object, which would simplify the database "client" code that creates and updates posts.
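
As a sketch of that idea, following the pattern of the salt_password handler above, such a special key might look roughly like this (the 'user' key and the 'userId' field name are assumptions for illustration):

// Hypothetical special map for the posts filter
var post_special = {
    'user' : function(key, value, terms, options) {
        // Decode the id property of a user object into a single userId term
        terms.push(db.$eq(value.id).get('userId', this, options));
    }
};

// Calling code could then pass the user object directly, for example:
// posts.insert({threadId : tId, user : user, posted : db.$now(), post : post_text})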

Final Thoughts and the Future

One thing I want to make a note of is that it seems like I'm panning db-mysql pretty hard with all of my comparisons, but that isn't really my goal. Realistically, the purpose of db-mysql is to provide an abstraction layer for the API to interact with the database, not to generate portable queries, and it seems that it handles that task well. But, it is always necessary to draw a comparison when advocating a different way to do things, and so it helps that db-mysql also attempts to provide a query abstraction interface, something that I don't think it should be doing (and something that node-mysql doesn't do). I think separating the two tasks makes a lot of sense: one library to perform low level communication, and another library to perform query generation, allowing either to be replaced if a better alternative is developed.

Next, I'd like to briefly address performance. It might seem that a powerful abstraction layer would sacrifice a lot of performance, but a quick benchmark of running the example code in this post 10,000 times, generating 70,000 queries in total, took only approximately 3330 ms on average on my Amazon EC2 micro instance. This works out to less than 50 µs per query, which is far less time than you will spend waiting for and interacting with the database afterwards. The performance thus seems "acceptable" to me, but I do think it could be faster. This won't really be improved until some undetermined time in the future, as there are more important things to work on.
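
For the curious, the benchmark had roughly the following shape; this is a sketch rather than the exact script, and it assumes the queries are only generated (via buildQuery()) rather than sent to a server:

// Rough benchmark sketch: generate the example queries from this post 10,000
// times and measure the total elapsed time
var start = Date.now();
for (var i = 0; i < 10000; ++i) {
    users.select({id : 1}).buildQuery();
    users.select({id : [1, 2, 3, 4, 5]}).buildQuery();
    // ... the remaining example queries from this post ...
}
console.log('Elapsed: ' + (Date.now() - start) + ' ms');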

Finally, db-filters still has a long way to go. At this point, it is possible to create a lot of queries with complex relationships, perform multi-way joins, and use most of the built-in MySQL functions, all without writing any actual SQL. But there are gaps in the function list, mostly in the date/time area, as those functions can be used in the most diverse ways, and there are limitations to the kinds of structures that can be generated in WHERE clauses. For instance, you cannot yet specify relationships between fields on two tables in a join using a WHERE clause; the only cross-table relationships can be expressed in the ON parameter, which, in the current implementation, restricts them to equality. Perhaps most importantly, db-filters currently only targets MySQL. In theory, the abstraction presented should allow it to be used with any RDBMS, but I am currently only developing the one translation engine, because I only use MySQL, so support for other engines is limited to those that are compatible with MySQL's extensions to SQL. I'd really like to see it ported to support a NoSQL solution with the same interface, but that would be a tremendous undertaking and will definitely wait until the core API and features are frozen.

If you've made it this far, I hope I've convinced you that db-filters is the best solution for database interaction. It is a very specific solution that attacks only the problem of query generation, and it is specifically designed to address all of the problems that I've identified with existing solutions. The project is available on github as well as through npm. There is more complete API documentation, along with a function reference in the project README file, which I will (thankfully) omit here—I'm not trying to explain the details of using the library; I'm only trying to demonstrate its capability and advantages. Please post any bugs, feature requests, missing functions, or other suggestions on the github page.

In Node.js, a single process handles many concurrent clients by relying on the assumption that the majority of the time spent responding to server requests is spent waiting for I/O, including waiting for other networked resources to become available. Node.js and programs written for it are essentially designed for cooperative multitasking, allowing a central loop to manage tasks and call them when their resources are available. However, because JavaScript does not provide true continuations, tasks cannot be written as a single, uniform control flow that suspends on I/O and resumes in place when data is available (although libraries exist for this purpose).

Instead, an asynchronous approach is used: when an I/O request is made, a callback is also supplied, to be invoked when the data is available. This technique is frequently generalized to cover all or most library interfaces, because it is more consistent to assume that all results are delivered asynchronously than to burden the programmer with remembering which approach a specific function uses. Sometimes this is referred to as continuation passing style, because data is never passed back to the calling context via return; instead it is forwarded to the next function as an argument.
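
As a minimal illustration of the difference, using Node's fs module purely as a familiar example:

var fs = require('fs');

// Synchronous style: the result comes back to the caller via return
function load_config_sync() {
    return fs.readFileSync('config.json', 'utf8');
}

// Continuation passing style: nothing useful is returned; the result is
// forwarded to the supplied callback as arguments when the data is available
function load_config(callback) {
    fs.readFile('config.json', 'utf8', callback);
}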

Despite the advantages of an asynchronous structure for multitasking, code that makes frequent use of callbacks, including nesting callbacks, can quickly become spaghetti code and create a maintenance nightmare. Consider this snippet, which uses express and node-mysql to handle a web request by pulling data from a database and returning it in JSON format:

// Note that error handling is omitted for brevity and simplicity
server.get("/:id", function(req, res) {
    var id = parseInt(req.params.id) || 1;
    pool.getConnection(function(err, conn) {
        conn.query('SELECT * FROM `users` WHERE `id` = ?', id, function(err2, rows) {
            res.send(rows[0]);
        });
    });
});

All of the functions are defined locally and anonymously, growing inward as each callback is added. Of course, we can suspend our instinct for good style just this once and not add additional indentation for callbacks, because there will be no code in the parent context, but this is only a small stopgap. Really, the problem is that we're defining callbacks exactly where they are used, which limits our ability to reuse them. An improvement, then, would be to restructure our code as a series of complete functions, and then pass them by name as callbacks, which permits better reuse and avoids the infinite tower of indentation, but it is not without its own problems:

// underscore provides _.partial, used below to bind leading arguments
var _ = require('underscore');

function send_user_info(res, err, rows) {
    // ... error checking here
    res.send(rows[0]);
}

function get_user_info(id, res, err, conn) {
    // ... error checking here
    conn.query('SELECT * FROM `users` WHERE `id` = ?', id, _.partial(send_user_info, res));
}

function handle_id(req, res) {
    var id = parseInt(req.params.id) || 1;
    pool.getConnection(_.partial(get_user_info, id, res));
}

server.get('/:id', handle_id); 

One immediate problem with this style is that we now have to pass parameters into the other functions explicitly, since we can no longer rely on closures created by anonymous functions. The solution I've chosen here uses the underscore library to perform partial application, specifying some of the parameters to be passed to the function ahead of time. Effectively, this just wraps the function in an anonymous function that forwards the given arguments, but it is much cleaner (and shorter) to write out.
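
To make that concrete, the wrapper that _.partial(send_user_info, res) produces is roughly equivalent to writing this by hand:

// res is already filled in; the remaining arguments are forwarded as given
var send_user_info_with_res = function(err, rows) {
    return send_user_info(res, err, rows);
};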

This solution is really not ideal for more complex code, even if it looks acceptable for the example given here. Many attempts have been made to deal with this problem. I could spend the next week going over existing solutions like promises, deferreds, and Q, but I would instead like to introduce a different solution that I have been working on for a while: a control flow library called flux-link. The library aims to mimic an overall synchronous code flow while providing a shared local state object that can be used to avoid both partial application and (very dangerous) global variables. Let's start by reworking the above example to use flux-link:

var fl = require('flux-link');

function get_db(env, after) {
    pool.getConnection(after);
}

function get_user_info(env, after, err, conn) {
    // ... error checking
    conn.query('SELECT * FROM `users` WHERE `id` = ?', env.id, after);
}

function send_user_info(env, after, err, rows) {
    // ... error checking
    env.res.send(rows[0]);
    after();
}

var handle_id = new fl.Chain(get_db, get_user_info, send_user_info);

server.get('/:id', function(req, res) {
    var init_env = {
        id : parseInt(req.params.id) || 1,
        req : req,
        res : res
    };
    var env = new fl.Environment(init_env, console.log);
    handle_id.call(null, env, null);
});

This code looks very similar to the second example above (intentionally so; it does not use all of the library's features), except that we no longer need partial application to set up any of our callbacks. Instead, data is passed through the environment variable, which is made available to each callback in turn as private, local state. Each callback used in a chain must accept two specific arguments, env and after, which hold the environment and a reference to the next callback in the chain, but thanks to partial application inside the library, these are already bound into each after parameter, so user code does not need to be aware of them. This means that chained callbacks can be passed to other library code (such as when after is passed to pool.getConnection()) and will behave as expected, receiving all of the parameters that are normally supplied by the library, in addition to the environment and next callback.
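
Conceptually, each after value behaves as if the chain had built a wrapper like the following around the next step; this is an illustration of the idea only, not the library's actual internals:

// Illustration: env and the next step's own continuation are pre-bound, and any
// arguments supplied by the calling library are forwarded after them
function make_after(env, next_step, next_after) {
    return function() {
        var args = [env, next_after].concat(Array.prototype.slice.call(arguments));
        next_step.apply(null, args);
    };
}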

But when you look closely at it, this code can be improved even further, with better modularity: the argument passing introduces unnecessary coupling between function signatures, and we can even write a generic function to set up routes in express and automatically invoke callback chains. To demonstrate this, let's add another route and another chain to retrieve information about another type of entity tracked on our website, groups, which will have a very similar implementation to users:

var fl = require('flux-link');

function init_db(env, after) {
    pool.getConnection(function(err, conn) {
        // ... Check for connection error
        env.conn = conn;
        after();
    });
}

function get_user_info(env, after) {
    var id = parseInt(env.req.params.id) || 1;
    env.conn.query('SELECT * FROM `users` WHERE `id` = ?', id, after);
}

function get_group_info(env, after) {
    var id = parseInt(env.req.params.id) || 1;
    env.conn.query('SELECT * FROM `groups` WHERE `id` = ?', id, after);
}

function print_result0(env, after, err, rows) {
    // ... error check
    env.res.send(rows[0]);
    after();
}

function create_route_handler(chain) {
    return function(req, res) {
        var init_env = {req : req, res : res};
        var env = new fl.Environment(init_env, console.log);
        chain.call(null, env, null);
    }
}

var handle_user = new fl.Chain(init_db, get_user_info, print_result0);
var handle_group = new fl.Chain(init_db, get_group_info, print_result0);

server.get('/users/:id', create_route_handler(handle_user));
server.get('/groups/:id', create_route_handler(handle_group));

Importantly, init_db, create_route_handler, and even print_result0 all can be implemented once as part of an internal library and then used to construct websites with only minimal project-specific code. Furthermore, chains can be constructed out of other chains. So, it is easily possible to create a "pre-routing" type chain that performs mandatory tasks such as establishing a database connection, parsing session data, or whatever else you need, and a "post-routing" chain that might fetch additional data required on every page (say, users online count, for a forum, or notification information). By storing data in the environment, you can designate an output object, write to it during your per-route chain, and then have the final step of the post-route handler be to pass the output object through a template renderer and then send the result to the browser. Although it is much more sophisticated, this is the basic premise of a system I use internally to implement projects.
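
Since chains can themselves be used as links in other chains, that pre-/post-routing idea might be sketched like this; load_session, render_template, and send_output are hypothetical placeholders for the sort of steps described above:

// Hypothetical site-wide chains composed around per-route logic
var pre_route = new fl.Chain(init_db, load_session);
var post_route = new fl.Chain(render_template, send_output);

var user_page = new fl.Chain(pre_route, get_user_info, post_route);
server.get('/users/:id', create_route_handler(user_page));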

The flux-link library itself has many additional features, such as proper exception handling, call trace generation (to aid in debugging), and utility functions that replace common tasks (such as checking for errors and throwing exceptions). The project is MIT licensed, hosted on github, and is also available in (hopefully) stable releases on npm. I have plans to expand its capabilities in the future, including a proper looping construct (loops inside chains can already be implemented, but it is a little tricky to do so), as well as solutions to any other control flow problems I come across that fit the theme of creating callback chains. Comments, bug reports, suggestions, and any other feedback are welcome, here, on github, or anywhere else you can reach me.

Recently, I wrote a post about using iptables to block certain IPs from accessing my server, making it drop their traffic without acknowledgement so that I would appear to no longer exist on the internet. My intent was to mitigate a small brute force login problem I was seeing on my server from one IP address. Since then, however, I've learned from a variety of sources (including Ars Technica and pretty much every other electronic news site) that this was likely just a small window into a much larger botnet attack that is attempting to compromise WordPress installs using many machines at once. Obviously, in this case, manually blacklisting each IP address is infeasible. I previously recommended installing Limit Login Attempts as a good plugin to prevent botnets from brute forcing one's admin page, but now it is time to go a step further. I would like to expand the blacklisting to simply drop traffic from anyone who exceeds the login limit, in order to save my server the processing time spent handling illegitimate requests.

In order to do this, I'm going to use a program called fail2ban, which monitors log files for pre-specified regular expressions that identify undesirable behavior, and then blocks traffic from the IP addresses generating it. The bans are timed and are added and removed automatically by the fail2ban daemon, with configuration options for the ban duration, the number of failures needed to trigger a ban, and so on. By combining fail2ban with Limit Login Attempts, we can leverage both a WordPress-level solution to easily identify problem users and a kernel firewall-level solution to remove them from the server with minimal impact.

Fail2ban operates on log file parsing, which means it is incompatible with currently released versions of Limit Login Attempts. I will talk to the author and see if I can get him to incorporate writing to an actual log file in addition to logging in the MySQL database running wordpress, but for now, we'll simply hack in the functionality we need, because it only takes a few lines. In limit-login-attempts.php, in the function limit_login_notify_log, which starts around line 563, we'll append the following lines to the end of the function, making it write a simple message with the user's IP address to a pre-specified and hardcoded log file name whenever a lockout is also written to the database:

$file = fopen(LIMIT_LOGIN_FILE_PATH, 'at');
if ($file) {
    // Note that the 't' flag will offer \n to \r\n translation on windows
    fwrite($file, date('M j h:i:s ') . sprintf(__('IP locked out by wp-limit-login: %s', 'limit-login-attempts'), $ip) . "\n");
    fclose($file);
}

Then, we also need to add a line defining the constant LIMIT_LOGIN_FILE_PATH, so I've added define('LIMIT_LOGIN_FILE_PATH', '/var/log/nginx/wp-limit-login.log'); at the top of the file. Perfect: now whenever logging is enabled (and why would you disable it?), the offending user's information will be written both to the MySQL database, for display in the Limit Login Attempts dashboard, and to the specified log file. Make sure the file path you choose for the log file is writeable by whichever process runs your PHP instance (be it apache, php-fpm, etc.). This log file is very simple, which suits my purposes, but you may want to augment yours with other information, such as the name of the blog if you're running a multi-site installation, or the number of times an IP has been locked out. It's also entirely possible to put this file inside your WordPress install, if you don't have access to /var/log, for instance. If you put it inside the plugin directory itself, the standard WordPress rewrite rules should prevent clients from accessing it, but I haven't verified this, so I'd recommend keeping it outside of your webroot to avoid disclosing the IP addresses (we're responsible administrators, even if the people we're banning aren't).

Next, we add a filter to the fail2ban configuration. Filters are defined in /etc/fail2ban/filter.d, one per file. I suggest a simple name, like wplogin.conf. Inside this filter, we configure the regular expression to capture a failure and an optional regular expression to ignore, which trumps a failure line. Based on the output above, our filter is very simple:

# Capture failed login attempts through wordpress limit logins module

[Definition]

# default message is "IP locked out by wp-limit-login: <ip address>"
failregex = [^:]*: <HOST>

ignoreregex =

With the filter in place, we must add a jail configuration to fail2ban. For that, we edit /etc/fail2ban/jail.conf and add a new jail that references the filter we just created. I would strongly recommend adding your own IP address to the ignoreip line, in CIDR notation, because if you ever trip the login limit yourself, all of your traffic to the server will be dropped for the ban duration, making it impossible for you to access it.

[wplogin]
enabled = true
filter = wplogin
action = iptables-allports[name=wplogin]
logpath = /var/log/nginx/wp-limit-login.log
maxretry = 1
bantime = 600
ignoreip = 127.0.0.1/32

Because Limit Login Attempts already counts how many times someone has failed to log in, an entry in this log means they should be banned, so we set the retry count to just one and the first lockout entry triggers a ban. We also use the iptables-allports action to ban the user's IP address, which does almost exactly the same thing as the script I presented before, except that it uses a separate fail2ban chain and names the rules, so that it can go back and remove them later with minimal fuss (i.e., it is more sophisticated). I would also recommend increasing the ban time, as the default is only 10 minutes. Unless you run into problems with actual readers being banned (unlikely, as the botnet appears to run on other servers, not on infected clients), you can easily set this to an hour, or a day, at your discretion.

After that, simply launch fail2ban by executing:

# fail2ban-client start

and fail2ban will begin monitoring the log file. You may notice that fail2ban gives a warning on startup saying that the log file doesn't exist. If this happens, create the file, change its owner to match the account that will be writing to it, and restart fail2ban:

# touch /var/log/nginx/wp-limit-login.log
# chown nginx /var/log/nginx/wp-limit-login.log
# fail2ban-client reload

This setup combines Limit Login Attempts, which protects your installation from brute force password attacks, with fail2ban, which drops the offenders' traffic at the firewall, improving the overall effectiveness of both. The combination reduces the server resources spent dealing with illegitimate requests, which can turn an almost-DoS caused by brute force password hammering into a non-issue.

Yesterday, I ran across the original eyes.js library for Node.js, which provides improved functionality to replace util.inspect(). It seemed nice and did a good job color-coding the output, but it was missing a few features that I wanted, such as the ability to hide certain types (for example, null or undefined properties) and the ability to specify stacked color attributes for the output. Initially, I opted to modify the original project, but as work went on I realized I didn't really like the way it was structured, and I decided to rewrite the entire module, using colors and underscore to simplify the implementation.

Therefore, I forked the repository and rewrote the entire core library file. My branch is available at https://github.com/gmalysa/spyglass. It is also available to be installed through npm:

npm install spyglass

The README on github has a very in-depth explanation of how to use spyglass, so I won't repeat all of it here, but the basic idea is that it allows for colored, pretty-printed object inspection, optionally writing the output to a stream (such as the console) or simply saving it as a string. It has a built-in type identification system (which offers better resolution than the typeof operator) with user-customizable hooks that allow special display routines to be provided for any user-defined class. Reasonable defaults are provided for all of the built-in JavaScript classes; more might come for some built-in Node.js objects as well. Object properties can be hidden from the output based on exact name matches, regular expressions, or value types (i.e., null properties, function properties, etc.) to avoid unnecessary clutter.

If you're currently using eyes.js or are just looking for a tool to add powerful inspection/var_dump()-like capabilities to your Node.js project, please give it a try and let me know about any bugs or changes that might be useful!