Once a year, I have to pay my renter's insurance bill. I get an email from Progressive, go to the website and find that I can't remember the password. I pull up the password manager and find Progressive-Homesite in the list, but the password is incorrect. Oh, I forgot, 4 months ago I needed to print proof of insurance while I wasn't home and had to reset the password to log in on the office computer. I reset the password again and pay my bill, saving the new password. The same basic scenario plays out every year for every infrequent account I have: insurance, internet, electricity.

So, if I never know what my password is, how am I really being authenticated? It comes down entirely to having access to my email address, but the process is tedious:

  • Click lost password/forgot password/trouble logging in

  • Enter email address

  • Wait for email to arrive, check spam, etc.

  • Click a link, where I create a new password, sometimes with confusing results:

    [Screenshot: Homesite's password reset]

  • Log in with the newly created password

These are all effectively low-security websites. Contrast this with my bank: if I forgot my password there, I would need to make a phone call and verify my identity another way, or visit a branch in person. Granted, security questions and common personal details are pretty bad ways to verify someone's identity, because they get reused and cannot be changed, so the actual security of those processes may still be quite low. For the sake of argument, though, let's call a website low security when its password reset is fully automated and never gives another human the opportunity to notice that something is amiss--in contrast with a process where a person might stop and wonder why someone requesting access to my account can't pronounce my last name consistently in conversation.

This process sucks. We've already established that anyone with access to my email can get into any account that uses that address for password recovery, so we should cut out the middleman and use email directly to authenticate. Google already has a solution for this in the Google Identity Platform, which is much faster to use, doesn't require me to store additional credentials, and is easier to build and use correctly than a homemade authentication system (cf. Homesite requiring special characters in the password and then tripping over the ones my random generator produced above). Google isn't the only option, either: OpenID has been developed and deployed widely for authenticating users through third parties. If you don't like Google, you could use Facebook, or Twitter, or Steam--just stop creating password-based logins. For the reboot of my blog I've used Google Identity Platform, and for the Dota 2 league I run, players sign in through Steam. In both cases the login security is handled by companies with better security teams than mine, and a data breach on my end reveals no private information--the account names and IDs are public in both cases, and I store no passwords or security questions.

At the end of the day, though, OpenID is implemented somewhat inconsistently, so most stable providers--the big names we can reasonably expect to still exist in five years--have unique-ish APIs for it, or in some cases (like Steam) don't really support OpenID at all and instead use OAuth with a basic permission to read account details. So currently you end up supporting each authentication provider separately, and the identity details they provide are not linked: you cannot programmatically tell that the same person operates both a Twitter and a Facebook account, even if they wish to make that information public, without having them log in to both during the same session on your site.

Ideally, users could register themselves with a public key that they provide. Both RSA and ECC are standardized and have robust library support, and they are already available on every client and server. Digital signature algorithms give a user a way to prove that they control the private key associated with a public key. TLS already supports client certificate authentication, although I don't remember whether self-signed certificates are permitted or whether that varies by implementation. Here it wouldn't matter: because the user provides their public key at registration, the issuer doesn't need to be verified--we just need to see the same key again, plus proof of ownership, to authenticate.

Specialized applications, like the DoD's Common Access Card, have already deployed this strategy very successfully, but widespread deployment of a hardware security token is challenging, because people are very good at losing their tokens. For the CAC and other truly sensitive situations, the recovery procedure makes sense--you go in person and answer a lot of questions--but there's no need to travel to California if the key for my reddit account is lost or stolen. Still, it would be nice if some keys could be recovered or replaced, which requires that the keys take advantage of existing public key infrastructure techniques, but without requiring a central authority.

In particular, users should be able to federate their keys: use a master key to sign certificates for secondary keys. Authentication can then be done by presenting a signed certificate, with no external certificate authority involved: I sign my own secondary key for use on a mobile device, so that the master key for an identity never needs to be risked. Key revocation is more complicated, but a few options come to mind:

  1. Since we are the masters of our own destiny, signed registration certificates could include a URL to check for revocation status whenever the matching key is presented in the future. However, an attacker could disable this single service in order to keep using a stolen key along with its identity chain.

  2. Use a distributed revocation network that allows a master key to publish both the creation and (later, if necessary) the revocation of a secondary key. With a blockchain, the history of a key is immutably documented, easy for identity consumers to consult, and not controlled by a central authority. There are, however, a lot of technical details to figure out there.

  3. Require users to log in with a "more authoritative" key of some kind (for instance, in a three-tiered scheme, the intermediate key between the master and the daily key) in order to discredit a lost or stolen key at the affected services, with no networked or third-party checks. This may well be the best option.

Something like that is pretty far away, but for now, I'd just be happier if most services switched to identity providers rather than storing usernames and passwords. It's easier for end users, increases security, and reduces the damage of data breaches.

Since its release last week, I've been playing quite a bit of Fallout 4. There's an interesting mini-game (which appeared in previous iterations as well) for "hacking" computer terminals, where you must guess the passcode from a list of possibilities with a limited number of guesses. Each failed guess reveals the number of correct letters (correct in both value and position) in that particular word, but not which letters were correct, letting you deduce the passcode much like in the game "Mastermind." A natural question is: what is the best strategy for identifying the correct passcode? We'll ignore dud removal and guess resets (which exist to simplify things a bit in game) for the analysis.

Reformulating this as a probability question offers a framework to design the best strategy. First, some definitions: N denotes the number of words, z denotes the correct word, and x_i denotes a word on the list (in some consistent order). A simple approach suggests that we want to use the maximum likelihood (ML) estimate of z to choose the next word based on all the words guessed so far and their results:

 \hat{z} = \underset{x_i}{\mathrm{argmax}}~~\mathrm{Pr}(z=x_i)

However, for the first word, the prior is uniform--each word is equally likely. This might seem like the end of the line (just pick a word at random, or always pick the first one on the list), but future guesses depend on what this first guess tells us, so we'd be better off with an estimate that maximizes the mutual information between the guess and the unknown password. Using the concept of entropy (which I've discussed briefly before), we can formalize the notion of "mutual information" into a mathematical definition: I(z, x) = H(z) - H(z|x). In this sense, "information" is what you gain by making an observation, measured by how it affects the possible states of a latent variable. For more compact notation, let's define F_i=f(x_i) as the "result" random variable for a particular word, telling us how many letters matched, taking values  \{0,1,...,M\} , where M is the length of words in the current puzzle. Since H(z) does not depend on which word we guess, maximizing the mutual information I(z, F_i) is equivalent to minimizing the conditional entropy, so our selection criterion becomes:

 \hat{z} = \underset{x_i}{\mathrm{argmin}}~~H(z|F_i)

But, we haven't talked about what "conditional entropy" might mean, so it's not yet clear how to calculate H(z | F_i), apart from it being the entropy after observing F_i's value. Conditional entropy is distinct from conditional probability in a subtle way: conditional probability is based on a specific observation, such as F_i=1, but conditional entropy is based on all possible observations and reflects how many possible system configurations there are after making an observation, regardless of what its value is. It's a sum of the resulting entropy after each possible observation, weighted by the probability of that observation happening:

 H(Z | X) = \sum_{x\in X} p(x)H(Z | X = x)

As an example, let's consider a puzzle with M=5 and N=10. We know that  \forall x_i,\mathrm{Pr}(F_i=5)=p_{F_i}(5)=0.1 . If we define the similarity function L(x_i, x_j) to be the number of letters that match in place and value for two words, and we define the group of sets  S^{k}_{i}=\{x_j:L(x_i,x_j)=k\} as the candidate sets, then we can find the probability distribution for F_i by counting,

 p_{F_i}(k)=\frac{\vert{S^k_i}\vert}{N}

As a sanity check, we know that  \vert{S^5_i}\vert=1 because there are no duplicates, and therefore this equation matches our intuition for the probability of each word being an exact match. With the definition of p_{F_i}(k) in hand, all that remains is finding H(z | F_i=k), but luckily our definition for S^k_i has already solved this problem! If F_i=k, then we know that the true solution is uniformly distributed in S^k_i, so

 H(z | F_i=k) = \log_2\vert{S^k_i}\vert

Finding the best guess is as simple as enumerating S^k_i and then finding the x_i which produces the minimum conditional entropy. For subsequent guesses, we simply augment the definition for the candidate sets by further stipulating that set members x_j must also be in the observed set for all previous iterations. This is equivalent to taking the set intersection, but the notation gets even messier than we have so far, so I won't list all the details here.
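The counting above translates almost directly into code. This is just a sketch, and the ten-word list is invented for the example:

```javascript
// L(x_i, x_j): number of letters matching in both value and position
function matches(a, b) {
  let m = 0;
  for (let k = 0; k < a.length; ++k) if (a[k] === b[k]) ++m;
  return m;
}

// H(z | F_i): sum over k of p(F_i = k) * log2 |S^k_i|
function conditionalEntropy(guess, words) {
  const counts = new Map();  // |S^k_i|, indexed by k
  for (const w of words) {
    const k = matches(guess, w);
    counts.set(k, (counts.get(k) || 0) + 1);
  }
  let h = 0;
  for (const c of counts.values()) h += (c / words.length) * Math.log2(c);
  return h;
}

// Pick the guess that minimizes the conditional entropy
function bestGuess(words) {
  let best = null, bestH = Infinity;
  for (const w of words) {
    const h = conditionalEntropy(w, words);
    if (h < bestH) { best = w; bestH = h; }
  }
  return best;
}

const words = ['WHICH', 'THERE', 'THEIR', 'ABOUT', 'WOULD',
               'THESE', 'OTHER', 'WORDS', 'COULD', 'WRITE'];
console.log(bestGuess(words));
```

After a guess comes back with k matches, the surviving candidates are exactly the words w with matches(guess, w) === k, so filtering the list and repeating implements the set intersection described above.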

All that said, this is more of an interesting theoretical observation than a practical one. Counting all of the sets by hand generally takes longer than a simpler strategy, so it is not well suited for human use (I believe it is O(n^2) operations for each guess), although a computer can do it effectively. Personally, I just go through and find all the emoticons to remove duds and then find a word that has one or two overlaps with others for my first guess, and the field narrows down very quickly.

Beyond its appearance in a Fallout 4 mini-game, the concept of "maximum mutual information" estimation has broad scientific applications. The most notable in my mind is in machine learning, where MMI is used for training classifiers, in particular Hidden Markov Models (HMMs) such as those used in speech recognition. Given reasonable probability distributions, MMI estimates can handle situations where ML estimates appear ambiguous, which makes them suitable for "discriminative training." Typically, an HMM training algorithm receives labeled examples of each case and learns only their statistics. A discriminative trainer, however, can also consider the labeled examples of other cases in order to improve classification when categories are very similar but semantically distinct.

One of the benefits of an object oriented programming language is that functionality can be built into the object hierarchy in layers, deferring some details to a descendant while providing common functionality in a base class and avoiding repetition. However, when a group of classes all need to implement nearly the same method but with minor, class specific differences (more than just a constant), a curious anti-pattern tends to arise. In this situation, the base class implementation contains different behaviors based on the type identification of the true object even though we desire to push those implementations to the descendants.

In the Star Wars Combine's (SWC) codebase there are several examples of this, but today we'll use one from the actions system to illustrate the point. In this game, there are many different forms of travel, each of which works mostly the same way. Each works in steps, where the system establishes a deadline for the next travel update and then changes an entity's position after it expires. There are also common behaviors that need to be included, such as awarding experience points and sending events when travel expires.

The core functionality is captured in the abstract class StepTravelAction, which defines several key methods:

abstract class StepTravelAction extends BaseAction {

    public function setup(Entity $leader, $x, $y) {
        // ...
    }

    public function timerInterrupt(TimerInterrupt $payload) {
        // ...
    }

    public function abort() {
        // ...
    }

    public function finishTravelling($reason = null) {
        $leader = $this->leader;
        $travelType = $this->getTravelType();
        $party = $this->getParty();

        XPUtil::giveTravelXP($this->leader, $party, $this->XP, 'Finished ' . $this->getEventText());
        RewardUtil::testUniqueRewards($this->leader, enumRewardType::GROUND_TRAVEL);

        foreach ($this->getParty() as $member) {
            $member->travelID = null;
        }
        EntityLocationUtil::saveEntityLocation($leader);

        $this->delete();

        if ($reason == null) {
            $reason = TravelEndInterrupt::FINISH_NORMAL;
        }

        InterruptUtil::fire($leader, new TravelEndInterrupt($leader, $travelType, $reason));
    }

    abstract public function setupTravel();
    abstract public function applyStep(Entity $leader, Point $next);
    abstract public function getTravelType();
    // ...

}

Although the function bodies for some functions have been removed in the interest of space, we can see the core functionality, like aborting or finishing travel, is handled here in the base class. The concrete, travel type-specific details of things, like updating the entity's position, are farmed out to the derived classes.

However, at some point, it was decided that ground travel was awarding too much experience and could be abused by players (mostly to power level NPCs who can be set to patrol a location indefinitely, which takes months otherwise). Experience is awarded inside finishTravelling() as we can see, which is "common" to all of the travel action classes. At this point, there are several options for modifying the implementation, and the "simplest" in a language with dynamic type identification produces a design antipattern.

To reduce the XP awarded in ground travel only, an earlier programmer elected to add three lines to StepTravelAction::finishTravelling(), quickly resolving the issue:

    public function finishTravelling($reason = null) {
        // ...

        if ($this instanceof GroundTravelAction) {
            $this->XP = ceil($this->XP / 5);
        }

        // ...
    }

This ignores the benefits of object inheritance, produces less elegant code, and reduces the sustainability of the code in the future. Behavior specific to GroundTravelAction is now no longer contained within GroundTravelAction itself, so if we wanted to further modify the XP for this case, a lot of code walking would be needed to figure out where to do it. If multiple such exceptions are added, you might as well not have support for polymorphism at all and do everything in a single struct that stores type IDs and uses switch statements, taking us back to the early 80s. Exceptions like this were added to several other methods (abort(), etc.) for the same change as well.

The correct approach here is to refactor this method into three different components:

  1. a concrete version of finishTravelling that follows the original implementation
  2. a virtual method implemented in StepTravelAction that simply forwards to the concrete implementation
  3. a virtual method in the descending class that layers additional functionality (such as changing the XP to be awarded) and then calls the concrete version from its parent

We need all three components because the default implementation is correct in most cases, so it should be available as the default behavior, but when we want to modify it we still need to have the original version available to avoid copying and pasting it into the derived class. I think it might be even worse if someone were to simply use a virtual method and copy it entirely into the derived class in order to add three lines.

For completeness, a preferable implementation would look like this and preserve the same behavior for all other derived classes:

abstract class StepTravelAction extends BaseAction {

    protected function _finishTravelling($reason = null) {
        $leader = $this->leader;
        $travelType = $this->getTravelType();
        $party = $this->getParty();

        XPUtil::giveTravelXP($this->leader, $party, $this->XP, 'Finished ' . $this->getEventText());
        RewardUtil::testUniqueRewards($this->leader, enumRewardType::GROUND_TRAVEL);

        foreach ($this->getParty() as $member) {
            $member->travelID = null;
        }
        EntityLocationUtil::saveEntityLocation($leader);

        $this->delete();

        if ($reason == null) {
            $reason = TravelEndInterrupt::FINISH_NORMAL;
        }

        InterruptUtil::fire($leader, new TravelEndInterrupt($leader, $travelType, $reason));
    }

    public function finishTravelling($reason = null) {
        $this->_finishTravelling($reason);
    }

}

class GroundTravelAction extends StepTravelAction {

    public function finishTravelling($reason = null) {
        $this->XP = ceil($this->XP / 5);
        $this->_finishTravelling($reason);
    }

}

The reason I want to highlight this is that the problem arose despite the developer being familiar with inheritance and with deferring implementation details to derived classes where meaningful. There are abstract methods here, and they are used to plug into common functionality implemented within StepTravelAction, but while targeting a change in behavior it is easy to lose sight of the overall design and simply insert a change where the implementation is visible. This kind of polymorphism antipattern is one of the most common implementation issues in SWC, likely due to the easy availability of the instanceof operator in PHP. In C++ it is much harder to do this (RTTI exists, but I never use it), and people are often forced to learn the correct approach as a result.

Everything that happens in the world can be described in some way. Our descriptions range from informal and casual to precise and scientific, yet ultimately they all share one underlying characteristic: they carry an abstract idea known as "information" about what is being described. In building complex systems, whether out of people or machines, information sharing is central to building cooperative solutions. However, in any system, the rate at which information can be shared is limited. For example, on Twitter, you're limited to 140 characters per message. With 802.11g you're limited to 54 Mbps in ideal conditions. In mobile devices, the constraints go even further: transmitting data on the network consumes some of our limited bandwidth and some of our limited energy from the battery.

Obviously this means that we want to transmit our information as efficiently as possible, or, in other words, we want to transmit a representation of the information that consumes the smallest amount of resources, such that the recipient can convert this representation back into a useful or meaningful form without losing any of the information. Luckily, the problem has been studied pretty extensively over the past 60-70 years and the solution is well known.

First, it's important to realize that compression only matters if we don't know exactly what we're sending or receiving beforehand. If I knew exactly what was going to be broadcast on the news, I wouldn't need to watch it to find out what happened around the world today, so nothing would need to be transmitted in the first place. This means that in some sense, information is a property of things we don't know or can't predict fully, and it represents the portion that is unknown. In order to quantify it, we're going to need some math.

Let's say I want to tell you what color my car is, in a world where there are only four colors: red, blue, yellow, and green. I could send you the color as an English word at one byte per letter, which would require 3, 4, 5, or 6 bytes, or we could be cleverer about it. Using a pre-arranged scheme for all situations where colors need to be shared, we agree on the convention that the binary values 00, 01, 10, and 11 map to our four possible colors. Suddenly, I can use only two bits to tell you what color my car is--a huge improvement. Generalizing, this suggests that for any set  \chi of abstract symbols (colors, names, numbers, whatever), by assigning each a unique binary value, we can transmit a description of some value from the set using  \log_2(|\chi|) bits, if we have a pre-shared mapping. As long as we use the mapping many times, the initial cost of sharing it is amortized away, so we'll ignore it from here on out. It's also worthwhile to keep this limit in mind as a maximum threshold for "reasonable": we could easily create an encoding that does worse, which would mean we'd failed quite spectacularly at our job.

But if there are additional constraints on which symbols appear, we should be able to do better. Consider the extreme situation where 95% of cars produced are red, 3% blue, and only 1% each yellow and green. If I needed to transmit color descriptions for my factory's production of 10,000 vehicles, the earlier scheme would need exactly 20,000 bits, stringing all of the colors together in a single sequence. But by the law of large numbers, I can expect roughly 9,500 of those cars to be red, so what if I use a different code, where red is assigned the bit string 0, blue 10, yellow 110, and green 111? Even though the representations of two of the colors are a bit longer in this scheme, the average total encoding length for a lot of 10,000 cars drops to 10,700 bits (1*9500 + 2*300 + 3*100 + 3*100), an improvement of almost 50%! This suggests that the symbol probabilities should shape the compression mapping: if some symbols are more common than others, we can make them shorter in exchange for making less common symbols longer, and expect the average length of a message made from many symbols to decrease.
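The arithmetic is easy to check; this snippet just weights each code length by its probability:

```javascript
const probs = { red: 0.95, blue: 0.03, yellow: 0.01, green: 0.01 };
const code = { red: '0', blue: '10', yellow: '110', green: '111' };

// Expected bits per car, then for the whole lot of 10,000
let avgBits = 0;
for (const color in probs) avgBits += probs[color] * code[color].length;
console.log(avgBits);          // ≈ 1.07 bits per car
console.log(avgBits * 10000);  // ≈ 10,700 bits total, versus 20,000 fixed-length
```

Note that the code is prefix-free (no codeword is a prefix of another), so the concatenated sequence can be decoded unambiguously without separators.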

So, with that in mind, the next logical question is, how well can we do by adapting our compression scheme to the probability distribution for our set of symbols? And how do we find the mapping that achieves this best amount of compression? Consider a sequence of  n independent, identically distributed symbols taken from some source with known probability mass function  p(X=x) , with  S total symbols for which the PMF is nonzero. If  n_i is the number of times that the  i th symbol in the alphabet appears in the sequence, then by the law of large numbers we know that for large  n it converges almost surely to a specific value:  \Pr(n_i=np_i)\xrightarrow{n\to \infty}1 .

In order to obtain an estimate of the best possible compression rate, we will use the threshold for reasonable compression identified earlier: it should, on average, take no more than approximately  \log_2(|\chi|) bits to represent a value from a set  \chi , so by finding the number of possible sequences, we can bound how many bits it would take to describe them. A further consequence of the law of large numbers is that because  Pr(n_i=np_i)\xrightarrow{n\to \infty}1 we also have  Pr(n_i\neq np_i)\xrightarrow{n\to \infty}0 . This means that we can expect the set of possible sequences to contain only the possible permutations of a sequence containing  n_i realizations of each symbol. The probability of a specific sequence  X^n=x_1 x_2 \ldots x_{n-1} x_n can be expanded using the independence of each position and simplified by grouping like symbols in the resulting product:

P(x^n)=\prod_{k=1}^{n}p(x_k)=\prod_{i=1}^{S} p_i^{n_i}=\prod_{i=1}^{S} p_i^{np_i}

We still need to find the size of the set  \chi in order to find out how many bits we need. However, the probability we found above doesn't depend on the specific permutation, so it is the same for every element of the set and thus the distribution of sequences within the set is uniform. For a uniform distribution over a set of size  |\chi| , the probability of a specific element is  \frac{1}{|\chi|} , so we can substitute the above probability for any element and expand in order to find out how many bits we need for a string of length  n :

 B(n)=-\log_2(\prod_{i=1}^Sp_i^{np_i})=-n\sum_{i=1}^Sp_i\log_2(p_i)

Frequently, we're concerned with the number of bits required per symbol in the source sequence, so we divide  B(n) by  n to find  H(X) , a quantity known as the entropy of the source  X , which has PMF  P(X=x_i)=p_i :

 H(X) = -\sum_{i=1}^Sp_i\log_2(p_i)

 The entropy,  H(X) , is important because it establishes the lower bound on the number of bits required, on average, to accurately represent a symbol taken from the corresponding source  X when encoding a large number of symbols.  H(X) is non-negative, but it is not restricted to integers; however, achieving less than one bit per symbol requires multiple neighboring symbols to be combined and encoded in groups, similarly to the method used above to obtain the expected bit rate. That process cannot be used directly in practice for compression, because it requires enumerating an exponential number of strings (as a function of a variable tending towards infinity) in order to assign each sequence a bit representation. Luckily, two common, practical methods exist that come very close to this bound: Huffman coding, which is optimal among codes that assign a whole number of bits per symbol, and arithmetic coding, which can approach the entropy arbitrarily closely.
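The definition drops straight into code. As a sanity check, a uniform distribution over four symbols should give exactly the two bits of the fixed-length mapping from earlier:

```javascript
// H(X) = -sum over i of p_i * log2(p_i), in bits per symbol
function entropy(probs) {
  let h = 0;
  for (const p of probs) if (p > 0) h -= p * Math.log2(p);
  return h;
}

console.log(entropy([0.25, 0.25, 0.25, 0.25]));  // 2 bits: uniform over 4 symbols
console.log(entropy([0.95, 0.03, 0.01, 0.01]));  // ≈ 0.355 bits: the skewed car colors
```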

For the car example mentioned earlier, the entropy works out to about 0.35 bits, which means there is significant room for improvement over the symbol-wise mapping I suggested, which only achieved a rate of 1.07 bits per symbol, but it would require grouping multiple car colors into a compound symbol, which quickly becomes tedious when working by hand. It is kind of amazing that using only ~3,500 bits, we could communicate the car colors that naively required 246,400 bits (=30,800 bytes) by encoding each letter of the English word with a single byte.

 H(X) also has other applications, including gambling, investing, lossy compression, communications systems, and signal processing, where it is generally used to establish the bounds for best- or worst-case performance. If you're interested in a more rigorous definition of entropy and a more formal derivation of the bounds on lossless compression, plus some applications, I'd recommend reading Claude Shannon's original paper on the subject, which effectively created the field of information theory.

For a client's website, I needed to enumerate the 12 months preceding a given date to create links to archived content. The site uses a javascript templating engine to create HTML, offloading the process from the server, so generating the list of months on the client side in javascript seemed like a reasonable choice. For the past week, everything looked great, but suddenly today I noticed that it was repeating the current month.

The code itself is pretty simple. It's written as a dustjs helper function that uses the Date.setMonth function to handle wrapping from January of one year to December of the previous year.

dust.helpers.month_list = function(chunk, ctx, bodies, params) {
    var count = dust.helpers.tap(params.count, chunk, ctx) | 0;
    var curDate = new Date();

    for (var i = 0; i < count; ++i) {
        var dateStr = (1900 + curDate.getYear()) + '-' + (curDate.getMonth()+1);
        chunk = chunk.render(bodies.block, ctx.push({date : dateStr}));
        curDate.setMonth(curDate.getMonth()-1);
    }

    return chunk;
}

Some quick printf debugging, adding console.log(curDate) at the start of the loop, shows this surprising result:

Tue Dec 31 2013 20:47:14 GMT-0800 (Pacific Standard Time)
Sun Dec 01 2013 20:47:14 GMT-0800 (Pacific Standard Time)
Fri Nov 01 2013 20:47:14 GMT-0700 (Pacific Daylight Time)
Tue Oct 01 2013 20:47:14 GMT-0700 (Pacific Daylight Time)

Apparently, on the 31st of the month, subtracting one month from the current date does not have the desired effect, in Chrome (on Windows 8.1). I ran the test again in IE 11 and observed the same behavior, and also tried manually setting the date to the 31st of October and subtracting a month, with the same result. It turns out this is specified behavior rather than a browser bug: setMonth keeps the day-of-month, and an out-of-range date like November 31 rolls forward into the next month, producing December 1. My "clean" code using setMonth(getMonth()-1), instead of an if statement to detect the start of the year and wrap correctly, now contains a cryptic loop that detects whether the month actually changed and subtracts again if not, all to deal with a problem that only shows up at the end of the month.

Post-migration update: The original comments for this post discussed the solution a bit and concluded that the best approach is to use curDate.setDate(0) to get the previous month, per the MDN documentation. The comments are lost to the internet now but I felt the correct solution was important enough to include in an update here.
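For reference, this is roughly what the fixed loop looks like using setDate(0); the function name is mine, but the output format matches the original helper's date strings:

```javascript
// Enumerate the current month and the months preceding it.
// setDate(0) moves to the last day of the previous month, which sidesteps
// the day-31 overflow that setMonth(getMonth() - 1) runs into.
function previousMonths(from, count) {
  const months = [];
  const d = new Date(from.getTime());
  for (let i = 0; i < count; ++i) {
    months.push(d.getFullYear() + '-' + (d.getMonth() + 1));
    d.setDate(0);
  }
  return months;
}

console.log(previousMonths(new Date(2013, 11, 31), 3));
// [ '2013-12', '2013-11', '2013-10' ]
```

Because setDate(0) always lands on a real date, no "did the month actually change" loop is needed, and year wrapping from January to December falls out for free.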