Mon 15 August 2011
The Nintendo Wii was released around the end of 2006. That's a solid
four years now; an amazing amount of time in the lifespan of a
technological device these days. Often overlooked is the fact that the
Wii has a web browser, which is in fact a build of Opera, offering
support for canvas, CSS3, and more advanced aspects of HTML5. This
should be incredible, so why doesn't anybody develop more for it?
The Ugly Truth
The chief issue is the target market; with so many internet-enabled
devices lying around these days, the Wii's browsing experience is one
that tends to fall a little short. This was further compounded by a
small incident: once the Wii's browser was released, an article
went up on Opera's official website about responding to Wii Remote
commands in JavaScript. Nintendo later demanded that they take it down,
and to this day I've never seen any official reasoning put forth by
either company.
With that said, I don't think the Wii (and the browser therein) is 100%
lost potential. One of my goals in life is to examine and improve the
methods with which we teach programming to children, and I believe the
Wii can work very well for these purposes. Typically, young children
don't have their own computers, and from what I've found, the recurring
issue is that when they're using their parents' computers, they don't
have the creative freedom to do something that might be seen as
"destructive".
The Wii, on the other hand, is generally thought of as the "kids'"
device: it has a limited set of functionality that kids grasp pretty well
right off the bat, and coupled with the sense of "ownership" they'd
get out of this device, it stands to reason they're more likely to
experiment on it.
There used to be various Wii JavaScript libraries/SDKs lying around,
but most of them are woefully incomplete or no longer available. So with
that all in mind, I set out to build a library that allows simple, easy,
event-based interaction with the Wii's browser, hiding away the somewhat
insane complexities of reading the remote codes.
Enter: wii-js
You can check out wii-js
over on GitHub; it's entirely open source and released under an
MIT-style license. While the repository has in-depth documentation of
library usage, check out the example below and see how simple this has
become:
/**
 * Two wii remote instances; first one tracks the first
 * controller (held horizontally), the second one tracks
 * the second controller (held vertically).
 */
var wiimote = new Wii.Remote(1, {horizontal: true}),
    wiimote2 = new Wii.Remote(2, {horizontal: false});

/**
 * Listen for the "A button pressed" event on each wiimote.
 */
wiimote.when('pressed_a', function() {
    alert('Wii Remote #1 pressed A!');
});

wiimote2.when('pressed_a', function() {
    alert('Wii Remote #2 pressed A!');
});

/**
 * Start the system!
 */
Wii.listen();
This example showcases how to set up basic controller instances and
respond to the events they fire. Most button combinations are supported
(check the docs on GitHub for up-to-date event listings), but sadly this
library can only work with the actual Wii Remote. Nintendo (or Opera,
it's unknown who) opted not to provide controller mappings for anything
else, be it Classic Controllers or GameCube controllers. All Wii Remote
instances except the first one can also receive events from an attached
nunchuk; there doesn't appear to be a way around that limitation on the
first remote, which is nothing short of a shame.
That said, even with these limitations, it remains a pretty versatile
library. The next steps for it are bundling in some basic sound/game
engine support to make it even easier for kids to jump in and create.
Follow the project on GitHub to see updates!
Sadly, the Wii isn't the most performant device. It has significantly
less memory than most devices on the market today, so it's easy to run
into performance problems pretty quickly. While it does support
the canvas element, it appears you can't force a repaint any faster than
at 100ms intervals - anything shorter and the Wii begins to choke
repeatedly. This isn't really fast enough for games; canvas may be
useful for other techniques, but for the most part any game engine
built outside of Flash needs to use HTML or SVG. SVG seems
promising in recent tests, and it has the side benefit of reacting to DOM
events the same way HTML-based scenarios do.
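To make that constraint concrete, here's a minimal sketch of the kind of
repaint loop it implies; the draw() function and the canvas id are
hypothetical, and 100ms is simply the floor observed above:

// Hypothetical sketch: cap canvas repaints at ~100ms for the Wii's browser.
var canvas = document.getElementById('stage'), // assumes a <canvas id="stage"> element
    ctx = canvas.getContext('2d');

function draw() {
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    // ...draw the current frame here...
}

// Anything faster than ~100ms makes the Wii's browser choke, so stay at or above it.
setInterval(draw, 100);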
Opera on the Wii also appears to have some level of support for
Server-Sent Events, which could possibly prove useful for enabling
lightweight two-player interaction. The performance implications here
are still unknown.
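As a rough illustration, here's a minimal sketch of consuming a
Server-Sent Events stream. It assumes the standard EventSource interface
is available (the Wii's Opera build may only implement an earlier draft
of the spec), and '/moves' is a hypothetical endpoint on your own server:

// Hypothetical sketch: receive pushed events (e.g. an opponent's moves) over SSE.
var source = new EventSource('/moves'); // '/moves' is an assumed endpoint on your server

source.onmessage = function(e) {
    // e.data carries whatever the server pushed down, e.g. the other player's last move.
    alert('Received: ' + e.data);
};

source.onerror = function() {
    // The connection dropped; implementations generally attempt to reconnect on their own.
};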
The Future
Nintendo recently announced their new console, the Wii U. Whether it
will keep the Opera web browser is anyone's guess; it's worth noting
that the new 3DS replaced the Opera browser used on previous DS
incarnations with an engine similar to that of the PSP. There aren't
many usage metrics we can draw predictions from, either, so at the
moment it's a bit of "wait and see".
I'm going to continue developing the library and concepts surrounding it
during my free time, and ideally want to try teaching a small class or
two once I've further refined it. If you're interested in helping out,
fork away on GitHub!
Tue 31 May 2011
For those who haven't heard the news, Google has deprecated a slew of
their APIs,
leaving many developers and services in a bit of a pinch. While there's
admittedly still time for developers to transition, it's a good time to
start considering alternatives. In my opinion, it's probably a good idea
to choose an alternative that has the technology in question as a core
competency, otherwise you're much more liable to have your provider pull
the rug out from underneath you.
With that said, many engineers are hit particularly hard by the
deprecation of the Translation API that Google has so generously offered
up to this point, and desire a solid alternative. While there are other
machine translation APIs out there, I wanted to take a moment to show
more developers how integrating with the myGengo
API can get them the
best of both worlds.
A Polite Heads Up
As of May 31, 2011 I am currently working with myGengo to further
develop their translation services. However, this post represents my
own thoughts and opinions, and in no way represents myGengo as a
company. myGengo offers both free machine translation and paid
human translation under one API. I simply
want to show other developers that this is very easy to use.
Getting Started with the myGengo API
This takes all of 5 minutes to do, but it's required before you can
start getting things translated. A full
rundown is available,
which includes details on using the API Sandbox for extensive testing.
For the code below, we're going to work on the normal account.
A Basic Example
The myGengo API is pretty simple to use, but the authentication and
request signing can be annoying at first (as with many other APIs). To
ease this, there are a few client libraries you can use - the one I
advocate using (and, to be fair, I also wrote it) is the
mygengo-python library;
install it before continuing. With it, it becomes incredibly easy to start
making calls and submitting things for translation:
from mygengo import MyGengo

gengo = MyGengo(
    public_key = 'your_public_key',
    private_key = 'your_private_key',
    sandbox = False, # possibly True, depending on your dev needs
)

print gengo.getAccountBalance()['response']['credits']
The above script should print out your current account credits.
Actually Translating Text
Extending the above bit of code to actually translate some text is very
simple - the thing to realize up front is that myGengo works on a system
of tiers, with said tiers being machine, standard, pro, and
ultra. These dictate the type of translation you'll get back. Machine
translations are the fastest and free, but least accurate; the latter
three are all tiers of human translation, and their rates vary
accordingly (see the website for current rates).
For the example below, we're going to just use machine translation,
since it's an effective 1:1 replacement for Google's APIs. A great
feature of the myGengo API is that you can upgrade to a human
translation whenever you want; while you're waiting for a human to
translate your job, myGengo still returns the machine translation for
any possible intermediary needs.
Note: It's your responsibility to determine what level you need - if you're
translating something to be published in another country, for instance,
human translation will inevitably work better since a native translator
understands the cultural aspects that a machine won't.
# -*- coding: utf-8 -*-
from mygengo import MyGengo

gengo = MyGengo(
    public_key = 'your_mygengo_api_key',
    private_key = 'your_mygengo_private_key',
    sandbox = False, # possibly True, depending on your dev needs
)

translation = gengo.postTranslationJob(job = {
    'type': 'text', # REQUIRED. Type to translate, you'll probably always put 'text' here (for now ;)
    'slug': 'Translating English to Japanese with the myGengo API', # REQUIRED. For storing on the myGengo side
    'body_src': 'I love this music!', # REQUIRED. The text you're translating. ;P
    'lc_src': 'en', # REQUIRED. source_language_code (see getServiceLanguages() for a list of codes)
    'lc_tgt': 'ja', # REQUIRED. target_language_code (see getServiceLanguages() for a list of codes)
    'tier': 'machine', # REQUIRED. tier type ("machine", "standard", "pro", or "ultra")
})

# This will print out 私はこの音楽が大好き!
print translation['response']['job']['body_tgt']
This really couldn't be more straightforward. We've just requested that
our text be translated from English to Japanese by a machine, and gotten
our results instantly. This is only the tip of the iceberg, too - if you
have multiple things you need translated, you can actually bundle them
all up and post them all at once (see this
example
in the mygengo-python repository).
Taking it One Step Further!
Remember the "human translation is more accurate" point I noted above?
Well, it hasn't changed in the last paragraph or two, so let's see how
we could integrate this into a web application. The problem with human
translation has historically been the human factor itself; it's slower
because it has to pass through a person or two. myGengo has gone a long
way in alleviating this pain point, and their API is no exception: you
can register a callback URL to have the job POSTed back to you when it's
been completed by a human translator.
This adds another field or two to the translation API call above, but
it's overall nothing too new:
# -*- coding: utf-8 -*-
from mygengo import MyGengo

gengo = MyGengo(
    public_key = '',
    private_key = '',
    sandbox = False, # possibly True, depending on your dev needs
)

translation = gengo.postTranslationJob(job = {
    'type': 'text', # REQUIRED. Type to translate, you'll probably always put 'text' here (for now ;)
    'slug': 'Translating English to Japanese with Python and myGengo API', # REQUIRED. Slug for internally storing, can be generic.
    'body_src': 'I love this music!', # REQUIRED. The text you're translating. ;P
    'lc_src': 'en', # REQUIRED. source_language_code (see getServiceLanguages() for a list of codes)
    'lc_tgt': 'ja', # REQUIRED. target_language_code (see getServiceLanguages() for a list of codes)
    'tier': 'standard', # REQUIRED. tier type ("machine", "standard", "pro", or "ultra")

    # New pieces...
    'auto_approve': 0,
    'comment': 'This is an optional comment for a translator to see!',
    'callback_url': 'http://yoursite.com/your/callback/view'
})

# This will print out a machine translation (私はこの音楽が大好き!), and you can
# set up a callback URL (see below) to get the translated text back when it's been
# completed by a human. You can alternatively poll at intervals to check.
print translation['response']['job']['body_tgt']

# Credit for the note about machine translation goes to https://github.com/aehlke, who
# pointed out what I'd forgotten to note. ;)
All we've done here is change the tier to "standard" (human translation)
and supply a callback URL for the job to be posted to once it's
completed. As you can see, the response from our submission includes a
free machine translation to use in the interim, so you're not left
completely high and dry. You can also specify a comment for the
translator (e.g., if there's some context that should be taken into
account).
Now we need a view to handle the job being sent back to us when it's
completed. Since this is a Python-focused article, we'll use Django as
our framework of choice below, but the approach should be fairly portable
to any framework. I leave the URL routing up to the reader, as it's
largely basic Django knowledge anyway:
import json

from django.http import HttpResponse

def update_job(request):
    """Handles parsing and storing a POSTed completed job from myGengo."""
    if request.method == "POST":
        # Load the POSTed payload; it's JSON data. Depending on how myGengo
        # sends it, the JSON may instead arrive as a form parameter (e.g. request.POST['job']).
        resp = json.loads(request.raw_post_data)
        # Your translated text is now available in resp['body_tgt'].
        # Save it, process it, whatever! ;D
        return HttpResponse(status=200)
    else:
        return HttpResponse(status=400)
Now, wasn't that easy? Human translations with myGengo are pretty fast,
and you get the machine translation for free - it makes for a fairly
bulletproof approach if you decide to use it.
Room for Improvement?
mygengo-python is open
source and fork-able over on GitHub. I'm the chief maintainer, and love
seeing pull requests and ideas for new features. If you think something
could be made better (or is lacking completely), don't hesitate to get
in touch!
Sat 07 May 2011
Please Excuse the Tone. :(
I wrote this when I was younger and, arguably, an asshole (pardon my French). There may still be technical content of note in here, so I'm keeping it up, but please ignore the harsh and unnecessary tone.
When playing "contract engineer", you sometimes have to jump in and work
with a less than ideal codebase. This was the case on a recent project I
helped out on; the codebase is an install of ExpressionEngine 2
(EE2), a publishing system (CMS)
developed by the fine people at EllisLab,
favored by web designers all over the place. While I personally find it
too limiting for my tastes (I suspect this is due to my doing less
design based work these days), I can understand why people choose to
work with it - highly sensible defaults with a pleasing overall control
panel design that you can sell to customers. We can all agree that not
reinventing the wheel is a good thing.
That said, I would be lying if I didn't note that there are a few things
about EE2 that bug me. I'll write about them each in-depth in their own
articles; the focus of this one is on the somewhat limiting URL
structure that EE2 enforces on you, as well as how to get around this
and obtain a much higher degree of flexibility while still re-using your
same EE2 essentials (templates, session handling, etc).
The Scenario to Fix
The way that EE2 handles URL
routing is
pretty simple, and works for a large majority of use cases. The short
and sweet of it is this:
http://example.com/index.php/template_group/template/
That URL will render a template named "template" that resides inside
"template_group", taking care of appropriate contextual data and such.
Let's imagine, though, that for SEO-related purposes you want a little
more dynamism in that URL - the template_group should act as more of
a controller, where it can be re-used based on a given data set. What to
do about this...
Wait! EE2 is CodeIgniter!
This is where things get interesting. EE2 is actually built on top of
CodeIgniter, an open source PHP framework
maintained by EllisLab. It's similar to Ruby
on Rails in many regards.
That said, if you're new to web development and reading this, please go
learn to use a real framework.
Learning PHP (and associated frameworks) first will only set you up for
hardships later.
Now, since we have a framework, we have to ask ourselves... why doesn't
EE2's setup look like a CodeIgniter setup? Well, EE2 swaps some
functionality into the CI build it runs on, so things are a bit
different. This is done (presumably) to maintain some level of backwards
compatibility with older ExpressionEngine installations.
Exposing the Underlying Components
The first thing we need to address is the fact that the CodeIgniter
router functions are being overridden. If you open up the main index.php
file used by EE2 and go to line 94-ish, you'll find something like the
following:
<?php
// ...

/*
 * ---------------------------------------------------------------
 *  Disable all routing, send everything to the frontend
 * ---------------------------------------------------------------
 */
$routing['directory'] = '';
$routing['controller'] = 'ee';
$routing['function'] = 'index';

// ...
You're gonna want to just comment those lines out. What's basically
going on there is that this is saying "hey, let's just have every
request go through this controller and function", but we really don't
want this. By commenting these out, the full routing capabilities of
CodeIgniter return to us.
One thing to note here is that if our desired route isn't found,
ExpressionEngine will still continue to work absolutely fine. This
is due to a line in the config/routes.php file:
<?php
// ...
$route['default_controller'] = "ee/index";
$route['404_override'] = "ee/index";
// An example of a route we'll watch for
$route['example/(:any)'] = "example_controller/test/$1";
// ...
The default controller, if no route is found matching the one we've
specified, is the EE controller, so nothing will break.
Controllers and Re-using Assets
So now that we've got a sane controller-based setup rolling, there's one
more problem to tackle: layouts and/or views. Presumably all your view
code is built to use the EE2 templating engine; it'd be insane to have
to keep a separate set of view files around that are non-EE2 compatible,
so let's see if we can't re-use this stuff.
A basic controller example is below:
<?php
if(!defined('BASEPATH')) exit('This cannot be hit directly.');

class Example_controller extends Controller {

    function __construct() {
        parent::Controller();

        /* Need to initialize the EE2 core for this stuff to work! */
        $this->core->_initialize_core();
        $this->EE = $this->core->EE;

        /* This is required to initialize template rendering */
        require APPPATH.'libraries/Template'.EXT;
    }

    function test($ext) {
        echo $ext;
    }
}

/* End of file */
/* End of file */
Now, viewing "/example/12345" in your browser should bring up a page
that simply prints "12345". The noteworthy pieces of this happen inside
the construct method; there's a few pieces that we need to establish in
there so we have a reference to the EE2 components.
Now, to use our template structures once more, we need to add in a
little magic...
<?php
// ...

    private function _render($template_group, $template, $opts = array()) {
        /* Create a new EE Template instance */
        $this->EE->TMPL = new EE_Template();

        /* Run through the initial parsing phase, set output type */
        $this->EE->TMPL->fetch_and_parse($template_group, $template, FALSE);
        $this->EE->output->out_type = $this->EE->TMPL->template_type;

        /* Return source. If we were given opts to do template replacement, parse them in */
        if(count($opts) > 0) {
            $this->EE->output->set_output(
                $this->EE->TMPL->parse_variables(
                    $this->EE->TMPL->final_template, array($opts)
                )
            );
        } else {
            $this->EE->output->set_output($this->EE->TMPL->final_template);
        }
    }

// ...
// ...
This _render method should be added to the controller example above; it
accepts three parameters - a template group, a template name, and an
optional multi-dimensional array to use as a context for template
rendering (i.e., your own tags). If the last argument confuses you, it's
probably best to read the EE2 third_party documentation on parsing
variables,
as this is actually just using that API. There's less black magic here
than it appears.
With that done, our final controller looks something like this...
<?php
if(!defined('BASEPATH')) exit('This cannot be hit directly.');

class Example_controller extends Controller {

    function __construct() {
        parent::Controller();

        /* Need to initialize the EE2 core for this stuff to work! */
        $this->core->_initialize_core();
        $this->EE = $this->core->EE;

        /* This is required to initialize template rendering */
        require APPPATH.'libraries/Template'.EXT;
    }

    private function _render($template_group, $template, $opts = array()) {
        /* Create a new EE Template instance */
        $this->EE->TMPL = new EE_Template();

        /* Run through the initial parsing phase, set output type */
        $this->EE->TMPL->fetch_and_parse($template_group, $template, FALSE);
        $this->EE->output->out_type = $this->EE->TMPL->template_type;

        /* Return source. If we were given opts to do template replacement, parse them in */
        if(count($opts) > 0) {
            $this->EE->output->set_output(
                $this->EE->TMPL->parse_variables(
                    $this->EE->TMPL->final_template, array($opts)
                )
            );
        } else {
            $this->EE->output->set_output($this->EE->TMPL->final_template);
        }
    }

    function test($ext) {
        return $this->_render('my_template_group', 'my_template', array(
            'template_variable_one' => 'SuperBus',
            'repeatable_stuff' => array(
                array('id' => 1, 'text' => 'This'),
                array('id' => 2, 'text' => 'Will'),
                array('id' => 3, 'text' => 'Be'),
                array('id' => 4, 'text' => 'Repeatable'),
            )
        ));
    }
}

/* End of file */
Awesome! Now what?
Please go use a more reasonable programming
language that enforces better practices.
While you're at it, check out one of the best web frameworks
around, conveniently written in said
reasonable programming language.
Of course, if you're stuck using PHP, then make the most of it I
suppose. If this article was useful to you, I'd love to hear so!
Sat 16 April 2011
Note the Following!
This is an article I wrote for the March 2011 issue of (the now defunct)
JSMag. It was a great monthly publication
and a great way to keep up to date on the latest news
in the JavaScript community. Sad to see it go!
Node isn’t the first approach to event-based programming, and with its
explosion of interest it probably won’t be the last. Typical JavaScript
patterns for callback functions involve passing around references to
functions and managing odd scope levels. In many cases this is less than
ideal; that said, there’s another option when you’re in Node: emit your
own events, and let functions attach and respond to those. EventEmitter
makes this incredibly easy!
The Typical Approach...
If you’ve written or even worked with JavaScript libraries before, you
probably understand the callback function scenario – that is, functions
that execute once a certain task is completed. A typical use might be
something like what you see in the following example:
var x = 1;

var foo = function(callbackfn) {
    return callbackfn(x * 2);
};

foo(function(x) {
    console.log(x);
});
Here, we’ve defined a function that accepts another function as its main
argument and passes the callback function a doubled version of x. Pretty
simple, and many libraries use this technique for Ajax calls. Let’s take
a minute and spin the globe, though – what if, instead of arbitrarily
accepting a function and having to worry about possible scoping issues,
we could just announce when an event of interest has occurred, and fire
an attached function at that point? This would be so much cleaner than
passing around function references everywhere.
Enter: events.EventEmitter
The great thing about all this? We can actually do this in Node through
use of the events library. This, in many ways, is core to how things in
Node work. Everything is event based, so why shouldn’t we be able to
fire off our own events? To showcase what’s possible with this, let’s
build a basic library to connect to Twitter's Streaming API, which we
can then filter results from as we see fit.
The Basics: exporting an EventEmitter instance
Before we get into anything Twitter-specific, we’ll demonstrate basic
usage of EventEmitter. The code below shows how simple this can really
be – it’s a contrived example that constantly increases numbers by one,
and emits an event called “even” every time the number becomes even.
var events = require('events'),
    util = require('util');

var Foo = function(initial_no) { this.count = initial_no; };

Foo.prototype = new events.EventEmitter;

Foo.prototype.increment = function() {
    var self = this;
    setInterval(function() {
        if(self.count % 2 === 0) self.emit('even');
        self.count++;
    }, 300);
};

var lol = new Foo(1);

lol.on('even', function() {
    util.puts('Number is even! :: ' + this.count);
}).increment();
Usage of EventEmitter is pretty simple – you basically want to inherit
all the properties from EventEmitter itself into your object, giving it
all the properties it needs to emit events on its own. Events are sent
off as keywords (‘even’, ‘error’, etc.), called directly on the object.
You can extend the prototype chain further, and EventEmitter should work
fine and dandy.
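As an aside, a common alternative to assigning a new EventEmitter to the
prototype (as the examples here do) is Node’s util.inherits helper. Here’s
a minimal sketch of the same counter wired up that way; the Counter name
is purely for illustration:

// Sketch: the same inheritance done with Node's util.inherits helper.
var events = require('events'),
    util = require('util');

function Counter(initial_no) {
    events.EventEmitter.call(this); // set up the emitter's internals on this instance
    this.count = initial_no;
}
util.inherits(Counter, events.EventEmitter);

Counter.prototype.increment = function() {
    var self = this;
    setInterval(function() {
        if (self.count % 2 === 0) self.emit('even');
        self.count++;
    }, 300);
};

Either approach gives your object .on(), .emit(), and friends; util.inherits
just keeps the prototype chain a little cleaner.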
Changing Tracks for a Moment...
Now that we’ve shown how EventEmitter works, we want to go ahead and use
it for Twitter's Streaming API. For the unfamiliar, the Streaming API is
essentially a never-ending flood of tweets. You open a connection, and
you keep it open; data is pushed to you, reversing the typical model of
“request/response” a bit in that you only really make one request.
EventEmitter is perfect for this task, but to satisfy some basic needs
for interacting with Twitter's API, we’ll need a base library, like
what’s shown in the example below:
var util = require('util'),
    http = require('http'),
    events = require('events');

var TwitterStream = function(opts) {
    this.username = opts.username;
    this.password = opts.password;
    this.track = opts.track;
    this.data = '';
};

TwitterStream.prototype = new events.EventEmitter;

module.exports = TwitterStream;
Here we require the three main resources we’ll need (util, http and
events), and set up a new Function Object that’s essentially an instance
of EventEmitter. We’ll throw it over to exports, too, so it plays nicely
when relying on it in outside code. Creating instances of our Twitter
object requires a few things – ‘track’, which is a keyword to filter
tweets by, and a ‘username’/‘password’ combination which should be
self-explanatory (in terms of what they are).
Why ‘username’/‘password’, though? Twitter's Streaming API requires some
form of authentication; for the sake of brevity in this article, we’re
going to rely on Basic Authentication, but moving forward it’s
recommended that you use OAuth for authenticating with Twitter, as it
relies on the user granting you privileges instead of actually handing
over their password. The OAuth ritual is much longer and more intricate
to pull off, though, and would push the length and scope of this article
far beyond its intentions.
Now that we’ve got the basic scaffolding for our library set up, let’s
throw in a function to actually connect, receive tweets, and emit an
event or two that other code can catch. Check out the following for a
prime example of how we can do this:
TwitterStream.prototype.getTweets = function() {
    var opts = {
            host: 'stream.twitter.com',
            port: 80,
            path: '/1/statuses/filter.json?track=' + this.track,
            method: 'POST',
            headers: {
                'Connection': 'keep-alive',
                'Accept': '*/*',
                'User-Agent': 'Example Twitter Streaming Client',
                'Authorization': 'Basic ' + new Buffer(this.username + ':' + this.password).toString('base64')
            }
        },
        self = this;

    this.connection = http.request(opts, function(response) {
        response.setEncoding('utf8');

        response.on('data', function(chunk) {
            self.data += chunk.toString('utf8');

            var index, json;

            while((index = self.data.indexOf('\r\n')) > -1) {
                json = self.data.slice(0, index);
                self.data = self.data.slice(index + 2);

                if(json.length > 0) {
                    try {
                        self.emit('tweet', JSON.parse(json));
                    } catch(e) {
                        self.emit('error', e);
                    }
                }
            }
        });
    });

    this.connection.write('track=' + this.track);
    this.connection.end();
};
If you’ve worked with Node before, this code shouldn’t be too daunting,
but we’ll summarize it just in case. We’re extending the prototype of
our Twitter object that we created before, and adding a method to start
the stream of tweets coming in. We set up an object detailing the host,
port, path and method, as well as some custom headers (notably, setting
‘keep-alive’ and Basic Authentication headers). This is passed to an
http.request() call, and we then write our tracking data and end the
connection.
The response function has some logic to handle putting together tweets
that are sent in by Twitter. The API dictates that a tweet object will
end on the two characters ‘\r’ and ‘\n’, so we basically walk the
built-up JSON strings as they come in and separate them out. If a JSON
string is successfully pulled out, we emit a ‘tweet’ event and pass it
the parsed JSON data. If something went horribly wrong, we emit an
‘error’ event and pass it the associated object.
Usage and Application
Alright, so now we should have a pretty functional library once we put
those two together. The code below shows how we can now use this library
in a simple script.
var TwitterStream = require('./twitterstream'),
    util = require('util');

var twitter = new TwitterStream({
    username: 'username',
    password: 'password',
    track: 'JavaScript'
});

twitter.on('tweet', function(tweet) {
    util.puts(util.inspect(tweet));
});

twitter.on('error', function(e) {
    util.puts(e);
});

twitter.getTweets();
Wrapping Things Up
EventEmitter is an excellent, easy-to-implement option for dealing with
cases where you might want to defer an action until data is ready.
Readers with further questions should check out the Node.js
documentation on EventEmitter.
Sun 06 March 2011
Back in 2008 I was frequently riding a train twice a day for a
ridiculous ~3 hour (each way) commute that nobody on this planet should
ever have to do. Needless to say, I did a lot of reading, particularly
issues of Wired Magazine. To this day, one article still stands fresh in
my mind, which essentially dealt with the concept of surrendering
your brain to an algorithmic approach to
memorization.
The man behind the core of the theory is Piotr Wozniak, a gentleman out
of Poland who still somewhat astounds me to this day.
I won't reproduce the theory in full here, as the Wired article does a
much better job of writing it up, but the core takeaways are
that the human brain tends to have times when it memorizes better or
worse, and that it's possible to capitalize on those moments to increase
your potential for solidly committing something to memory.
SuperMemo is an effort to implement this
in software. It's cool and all, but I'm not sure I'm in total
agreement.
Hack Faster, Please.
You see, the thing about the theory is that your core memory might work
on a two-week cycle: learn something today, see it again in two weeks,
and if everything holds true you'll probably never forget it. However, I
disagree with the idea that short-term memory commitment can't be
stronger for certain concepts.
Take something like teaching yourself a new language. If it's something
truly foreign to you, the characters won't make sense, the
pronunciations will sound totally off, and there's a good chance that
anyone who's not forced through it will give up in about a week or two.
Long term memory won't have a shot in that case; maybe not due to any
particular flaw in the theory, but merely due to the lack of willpower
some people have. In addition you have to factor in the concept of age:
as we get older, our memory and the way it works through concepts
changes. Short term memory is nowhere near as slighted when up against
these two conceptual foes; are we certain there's no good middle ground
to hit?
Can It Be Proven?
So let's provide a short bit of backstory here. This past week
(beginning of March, 2011), I got the awesome opportunity to work with
the folks at myGengo, a company that builds
tools to help ease translation efforts. This required heading to Tokyo -
for the astute, I had visited Tokyo
some months prior, so I wasn't a total stranger to what I'd experience
upon arrival. I do want to learn the language, though.
A typical approach for character memorization would be to make flash
cards, sit down and repeatedly run through them. I won't lie, this
bores the hell out of me. I'd much rather have something portable that
I can use on the train when I'm traveling in the mornings. To that end,
I just went ahead and built an Android application (app) to do this.
Katakana on the Android
Market
Now, since I was already creating an app for this, I figured I could
take some liberties. With the theory still lingering in the back of my
head, I began to muse: what's my own learning pattern like? Well, for
the past (roughly) seven years I've learned things incredibly quickly.
In some cases this was by design, in other cases... well, you get the
idea.
The thing is that it's worked for me so far, and it's the same for many
other programmers I know. Programmers far and wide can attest that while
there are no doubt benefits to long-term memorization, we consistently
rely on short-term memory to do our jobs. We accrue a sometimes
ridiculous amount of information in a short period of time
that will instantly come back to us when we need it. The key is, of
course, when we need it, generally through a trigger (a linked piece
of information, for example).
The Theory Itself
So this theory started formulating in my mind. What if I could apply
elements of Wozniak's theory to short-term memory, and then rely on the
trigger to pick up the rest? Even in short-term memory I found that I,
personally, had a few-minute window where, if I reviewed the same
concept, I'd commit it pretty quickly. The triggers, in this case, will
be things I encounter as I walk down the streets of Tokyo or read a menu.
I got down to building the app. The core details of building an Android
app are outside the scope of this article; the algorithm I threw in is
worth detailing a bit, though. In my mind, when you use an app on your
phone, you're going to use it, at most, for five minutes. The app
concept just lends itself to this, a bunch of miniature worlds that you
can hop in and out of at will. So with that in mind, I set the high
barrier for this experiment at five minutes - roughly the maximum amount
of time I expect someone to stay engaged.
I'm assuming, based on my own use cases and the trials of a few friends,
that on average I can expect people to get through roughly three cards
in a minute. At five minutes, fifteen cards, not too bad. The question
of where to 're-appear' content then came up; for this, I first settled
on throwing it back in the user's face every couple of minutes. The
number of minutes is variable; it starts off at one minute, but adapts
based on whether or not you answer correctly as things re-appear.
If you memorize things at the four-minute mark, for instance, it'll edge
towards that - never exactly four minutes, due to the relative inaccuracy
of Android's timing, mind you, but it gets the job done.
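Purely to illustrate the idea (the actual app is an Android application;
this sketch is in JavaScript, and every name and the exact adjustment
step in it are hypothetical), the adaptive re-appearance interval might
look roughly like this:

// Hypothetical sketch of the adaptive re-appearance interval described above.
var MIN_INTERVAL_MS = 30 * 1000;      // don't re-show a card more often than every 30 seconds
var MAX_INTERVAL_MS = 4 * 60 * 1000;  // edge toward, but never quite hit, the four-minute mark
var reappearMs = 60 * 1000;           // start off by re-showing a card after one minute

// Called whenever a re-appearing card is answered; nudge the interval toward
// the point where the user actually demonstrates recall.
function onCardAnswered(correct) {
    var step = 30 * 1000; // hypothetical adjustment step
    if (correct) {
        reappearMs = Math.min(reappearMs + step, MAX_INTERVAL_MS);
    } else {
        reappearMs = Math.max(reappearMs - step, MIN_INTERVAL_MS);
    }
}

Whatever the exact step, the point is simply that the interval drifts
toward wherever recall actually happens within the five-minute session.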
I've been using the application myself for roughly two days now, and it
has easily topped any effort I put in with books or traditional methods
over the past two months. It's worth noting that I still can't write
Japanese to save my life, but that's a twofold issue: characters
can be quite complex (kanji), and they don't lend themselves well to a
trigger-based scenario for recall. However, if I'm looking at a screen of
characters, I can at least make some sense of what I'm seeing now.
Taking This Further
My theories aren't proven, but then again, it's the human brain we're
dealing with. I released the Android app as a test of my take on
Wozniak's theory with a bit of my own magic; based on how well it does,
I'll release apps for Hiragana, Kanji, and anything else applicable. I
personally believe that committing things to memory can be improved by
optimizing how we use short-term memory, and this is a pretty great
and open way to give it a whirl.
Mon 07 February 2011
Well, time certainly flies by quickly. Since the last entry in this
little mini-series, I've globe-trotted some more (London, New Jersey,
New York City, DC, San Francisco, Seattle... San Francisco again...) and
released some new projects that've been in the pipeline for some time.
What's next?
On Traveling
London was a very, very interesting experience. I had the fun of being
stuck there well past my intended departure date due to a
massive snow storm that shut most of Europe down; London Heathrow, why
you refused the help of the Army to clear away snow is simply beyond me.
That said, the city of London itself is a nice place, one that I could
see myself spending more time in. The surrounding area is equally cool
and worth checking out! Yet again, this was a country where public
transportation is pretty slick. Notice a recurring theme here?
The rest of my travels have been pretty US-centric; nothing noteworthy,
sans shooting up to Seattle for a week to visit with my younger brother.
Now, enough of all this personal drivel, there's work to discuss.
ProgProfessor
I think kids should be taught programming at a young age, but with
absolutely no initial focus on mathematics. People can fight it all
they want, but math doesn't interest kids, and a direct attempt to make
it more interesting just to draw more of them into the field won't
work. Programming, if taught with a creative and artistic edge, is well
suited to fix this problem.
At least, that's my theory, and the entire line of reason behind my
efforts with ProgProfessor. This'll
be followed up soon with a few other new projects, stay tuned!
FeedBackBar
When I got back into San Francisco, I met up with my good friend
Brandon Leonardo. A while back he had conceived
of this pretty cool idea to distribute a "feedback bar" type widget,
where any site could sign up, throw some code on, and get immediate
feedback from users. It's an idea somewhat in the same realm as
UserVoice or Get
Satisfaction, but much more stripped
down and to the point. I thought it was pretty cool, and we managed to
hack it out in a night.
FeedBackBar is free and quick to
implement. Check it out and let us know what you think!
pyGengo - Pythonic myGengo Translation API Library
The other notable release I've thrown out in the past month is
pyGengo, a library that
wraps the myGengo API.
myGengo is a cool service that offers an easy, reliable way to get text
translated into other languages by other humans. Machine translation
alone can be notoriously incorrect, so having a human back it up is
quite the awesome technique to have up your sleeve.
pyGengo is fully documented and has a test suite to boot. Issues can be
filed on the GitHub Issue
Tracker; give it a
shot and let me know what you think!
So... What's Next?
I've got a few projects coming up that should be pretty significant
releases, so at the moment I'm working towards those. You should follow
me on Twitter to know when they're
released!