Sun 06 March 2011

Hacking the Human Brain

Back in 2008 I was riding a train twice a day for a ridiculous ~3 hour (each way) commute that nobody on this planet should ever have to do. Needless to say, I did a lot of reading, particularly issues of Wired Magazine. To this day, one article still stands fresh in my mind; it dealt with the concept of surrendering your brain to an algorithmic approach to memorization. The man behind the core of the theory is Piotr Wozniak, a gentleman out of Poland whose work still somewhat astounds me.

I won't reproduce the theory in full here, as the Wired article does a much better job of writing it up, but the core takeaways are that the human brain has times when it memorizes better or worse, and that it's possible to capitalize on those moments to increase your odds of solidly committing something to memory. SuperMemo is an effort to implement this in software. It's cool and all, but I'm not in total agreement.
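For flavor, here's roughly what the scheduling rule popularized by SuperMemo (the published SM-2 algorithm) looks like in code. This is a minimal Java sketch of the public description, not SuperMemo's actual source, and the class and field names are mine:

    // A minimal sketch of the published SM-2 rule. "quality" is a
    // self-graded recall score from 0 (total blackout) to 5 (perfect).
    public class Sm2Card {
        private double easiness = 2.5;  // per-card "easiness factor", floored at 1.3
        private int repetitions = 0;    // consecutive successful reviews
        private int intervalDays = 1;   // days until the card should re-appear

        // Returns how many days to wait before showing this card again.
        public int review(int quality) {
            if (quality >= 3) {
                if (repetitions == 0) {
                    intervalDays = 1;
                } else if (repetitions == 1) {
                    intervalDays = 6;
                } else {
                    intervalDays = (int) Math.round(intervalDays * easiness);
                }
                repetitions++;
            } else {
                repetitions = 0;    // recall failed; restart the cycle
                intervalDays = 1;
            }
            easiness += 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02);
            if (easiness < 1.3) {
                easiness = 1.3;
            }
            return intervalDays;
        }
    }

Answer well and the review interval stretches out (1 day, 6 days, then multiplying by the easiness factor); answer poorly and the card cycles back to the start. Notice the unit, though: days. That's the part I want to poke at.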

Hack Faster, Please.

You see, the thing about the theory is that your core memory might work on a two-week cycle: learn something today, see it again in two weeks, and if everything holds true you'll probably never forget it. However, I disagree with the notion that short-term memory commitment can't be stronger for certain concepts.

Take something like teaching yourself a new language. If it's truly foreign to you, the characters won't make sense, the pronunciations will sound totally off, and there's a good chance that anyone who's not forced through it will give up in a week or two. Long-term memory won't have a shot in that case; maybe not due to any particular flaw in the theory, but merely due to the lack of willpower some people have. In addition, you have to factor in age: as we get older, our memory and the way it works through concepts changes. Short-term memory is nowhere near as handicapped when up against these two foes; are we certain there's no good middle ground to hit?

Can It Be Proven?

So, a short bit of backstory. This past week (the beginning of March 2011), I got the awesome opportunity to work with the folks at myGengo, a company that builds tools to ease translation efforts. This required heading to Tokyo - for the astute, I had visited Tokyo some months prior, so I wasn't a total stranger to what I'd experience upon arrival. I do want to learn the language, though.

A typical approach to character memorization would be to make flash cards, then sit down and repeatedly run through them. I won't lie: this bores the hell out of me. I'd much rather have something portable that I can use on the train in the mornings. To that end, I went ahead and built an Android application (app) to do just that.

Katakana on the Android Market

Now, since I was already creating an app for this, I figured I could take some liberties. With the theory still lingering in the back of my head, I began to muse: what's my own learning pattern like? Well, for the past (roughly) seven years I've learned things incredibly quickly. In some cases this was by design, in other cases... well, you get the idea.

The thing is, it's worked for me so far, and it's the same for many other programmers I know. Programmers far and wide can attest that while there are no doubt benefits to long-term memorization, we consistently rely on short-term memory to do our jobs. We accrue a sometimes ridiculous amount of information in a short period of time, and it instantly comes back to us when we need it. The key is, of course, when we need it: recall generally happens through a trigger (a linked piece of information, for example).

The Theory Itself

So this theory started formulating in my mind: what if I could apply elements of Wozniak's theory to short-term memory, and then rely on the trigger to pick up the rest? Even in short-term memory I found that I, personally, had a window of a few minutes where, if I reviewed the same concept again, I'd commit it pretty quickly. The triggers, in this case, would come as I walked down the streets of Tokyo or read a menu.

I got down to building the app. The core details of building an Android app are outside the scope of this article; the algorithm I threw in is worth detailing a bit, though. In my mind, when you use an app on your phone, you're going to use it for five minutes at most. The app concept just lends itself to this: a bunch of miniature worlds that you can hop in and out of at will. So with that in mind, I set the upper bound for this experiment at five minutes - roughly the maximum amount of time I expect someone to stay engaged.

I'm assuming, based on my own use and the trials of a few friends, that on average people get through roughly three cards a minute. At five minutes, that's fifteen cards - not too bad. The question then became where to 're-appear' content. For this, I settled on throwing a card back in the user's face every couple of minutes. The number of minutes is variable: it starts at one minute, but adapts based on whether or not you answer correctly as things re-appear. If you memorize things at the four-minute mark, for instance, the interval will edge towards that - never exactly four minutes, mind you, given the relative inaccuracy of Android's timers, but it gets the job done.
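To make that concrete, here's a sketch of the re-appearance loop. This isn't the app's actual source - the one-minute floor and five-minute cap come from the design above, but the 1.5x stretch, the halving, and the names are all illustrative:

    import java.util.PriorityQueue;

    // Sketch: each card carries its own re-appearance interval, starting at
    // one minute. A correct answer stretches the interval toward the point
    // where recall sticks; a miss pulls it back toward the floor.
    public class ShortTermScheduler {
        static final long MIN_INTERVAL_MS = 60 * 1000;      // floor: one minute
        static final long MAX_INTERVAL_MS = 5 * 60 * 1000;  // cap: the five-minute session

        static class Card implements Comparable<Card> {
            final String front, back;
            long intervalMs = MIN_INTERVAL_MS;
            long dueAtMs;

            Card(String front, String back) { this.front = front; this.back = back; }

            public int compareTo(Card other) {
                return Long.compare(dueAtMs, other.dueAtMs);
            }
        }

        private final PriorityQueue<Card> queue = new PriorityQueue<Card>();

        // Queue a card to re-appear after its current interval.
        public void add(Card card) {
            card.dueAtMs = System.currentTimeMillis() + card.intervalMs;
            queue.add(card);
        }

        // Called after the user answers a card that just re-appeared.
        public void answered(Card card, boolean correct) {
            if (correct) {
                // Recall held; edge the interval outward (towards, say, four minutes).
                card.intervalMs = Math.min((long) (card.intervalMs * 1.5), MAX_INTERVAL_MS);
            } else {
                // Recall failed; fall back toward the one-minute floor.
                card.intervalMs = Math.max(card.intervalMs / 2, MIN_INTERVAL_MS);
            }
            add(card);  // re-queue for its next appearance
        }

        // The next card whose timer has expired, if any.
        public Card nextDue() {
            Card head = queue.peek();
            if (head != null && head.dueAtMs <= System.currentTimeMillis()) {
                return queue.poll();
            }
            return null;
        }
    }

The nice property of per-card intervals is that easy cards drift out toward the five-minute cap while troublesome ones keep cycling back near the one-minute floor, which is exactly the adaptive behavior I was after.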

I've been using the application myself for roughly two days now, and it has easily topped any effort I put in with books or traditional methods over the past two months. It's worth noting that I still can't write Japanese to save my life, but that's a twofold issue: characters can be quite complex (kanji, in particular), and writing doesn't lend itself well to a trigger-based scenario for recall. However, if I'm looking at a screen of characters, I can at least make some sense of what I'm seeing now.

Taking This Further

My theories aren't proven, but then again, it's the human brain we're dealing with. I released the Android app as a test of my take on Wozniak's theory, with a bit of my own magic thrown in; based on how well it does, I'll release apps for Hiragana, Kanji, and anything else applicable. I personally believe that memory commitment can be strengthened by optimizing how we use short-term memory, and this is a pretty great and open way to give it a whirl.
