Computing has moved beyond data entry and data processing, yet we still think of computers from that perspective. It's what we grew up with; it's what we know. What we also know is that computers are now primarily communication devices. We communicate with each other via computers: not just "a" computer, but many of them. Optimizing the efficiency of data processing (data entry, storage, retrieval and calculation) is no longer the focus. We are instead trying to optimize the conveyance and construction of ideas between people. Therefore we need to be sensitive to the idiosyncrasies of people in relation to each other, and to the effects of our tools on those relations.
“It took a long time, but as computers became more powerful and we learned more about how people used them, eventually, we started catching on to the idea that, instead of designing software to work in the fashion that works best for the machine, we could design software to work in the fashion that works best for the people who use it…” (Jesse James Garrett). We are still learning how to do this. We have many bad habits to unlearn.
I see the iPhone, iPod Touch and the iPad as the first significant steps in this direction. We are finally moving beyond the desktop/file cabinet metaphors and developing new ones. Naturally, there will be resistance from those with vested interests in the old paradigms and those who have trouble managing change. Physical tools have an important characteristic that computer applications don’t have: you can see them. That is, the tool and its interface can be seen as separate entities. A computer application has only its interface to present to its user. There is no material object a person can look at to gain clues as to the application’s purpose or operation. This means an application’s interface carries a heavy burden. It needs to convey its purpose and operation in a language its users understand without the benefit of a physical presence.
The purpose of any tool, whether it is a car, a can opener or a computer application, is to facilitate the goals of its users. As much as possible, the tool should “stay out of the way” while it mediates the task at hand. Put another way, tool users should be looking at and thinking about what they are trying to accomplish; they should not be looking at or thinking about the tool they are using for the job. A well designed tool directs cognitive effort through itself, not into itself: the effort flows into an object (the thing on which the person is acting) via the tool. The tool, then, is the vector that propagates the operator’s effort into the desired result. A poorly designed tool, in contrast, attracts attention to itself and so distracts the person from their goal. By letting people focus their attention on what they are doing, a good tool seems to disappear. This raises the question: what causes that illusion to break?
Focussing specifically on learning, I want the ability to annotate whatever object I see on the screen. That is, just like a book, magazine, newspaper, journal article or any other physical object, I want to be able to write on it, have those writings saved automatically and re-presented automatically to me when I return to that object. The “getting ready” step should be very small, just like on paper, where one need only pick up a pencil or pen and start writing. And just like on paper these writings should be private until I decide to share them. They should also be searchable and backed up as part of any syncing process.
Where my interest comes from: http://zpd.stormlantern.ca/papers/VirtualMargin_Thesis.pdf
Teaching, and eLearning in particular, suffers from a transmission/presentation model, and so far the vast majority of digital interfaces perpetuate this flaw. As I see it, we need to move towards an interaction model, where students can work more directly with their study materials. Even though PDFs can carry annotations, I’d wager most people who use them don’t know that; the annotation interface does not readily lend itself to use. People who want to annotate such documents print them first. What I want is a device where people don’t feel the need to print PDFs in order to make annotations: an interface so good that writing annotations on the device is the obvious and natural thing to do.
I’m not so interested in specifications or hardware accoutrements; what I’m interested in is functionality that supports learners’ interactions with ideas, that enables them to ‘talk it out’ and develop their own thinking. We aren’t there yet. On the bright side, students are starting to make the case for it: