Tuesday, September 27, 2011

What I want to build next

Earlier this summer I finally finished the new Moodle question engine, which was released as part of Moodle 2.1. As you might expect with such a large change, a number of minor bugs were not spotted until after the release, but I (and others) have fixed quite a lot of them, and we will continue to fix more. I want to say "thank you" to everyone who has taken the time to report the problems they encountered. Pleasingly, some people, including Henning Bostelmann, Tony Levi, Pierre Pichet, Jamie Pratt, Joseph Rézeau and Jean-Michel Vedrine have not only been sending in bug reports, but also submitting bug fixes. I would like to thank them in particular. I don't know whether this means that the new Moodle development processes are working well and encouraging more contributors, or that I released the new question engine full of trivial bugs.

At the moment, apart from fixing bugs, we are about two months away from the end of the OU's one-year project to move from Moodle 1.9 to 2.x and implement a lot of new features at the same time. In the eAssessment area, we had about 30 work-packages to do, of which finishing the question engine was by far the biggest, and we have about 6 left to go. Most of the remaining tasks are at least started, but finishing them is what I, and the developers on my team, will be doing in the near future.

I have, however, been thinking ahead a bit, and I have an idea for what I would like to build, should I be given the opportunity. Honesty compels me to say these are not my ideas. I stole them from other people, and there are proper acknowledgements at the end of this post. I wanted to post about this because: 1. in my experience, if you post about your half-baked ideas, people will be able to suggest ways to make them better; and 2. I am hoping that at least one course-team at the OU will see this and say "we would love to use this in our teaching" because that might persuade the powers that be to let me build this.

Rationale

The Moodle quiz is a highly structured, teacher-controlled tool for building activities where students attempt questions. What I want to create is a more open activity where students can take charge of their learning, using a bank of questions to practise some skill where the computer can mark their efforts and give feedback. For the sake of argument, I have been calling this the "Question practice" activity module.

The entry page

When a student goes into a Question practice activity, they see a front screen that lists all the categories in the question bank for this activity.

Next to each category, there are statistics for how the student has performed on that category so far. For example, it might say "recently you scored 19/21 (90%); all time you scored 66/77 (86%)". The categories are nested, and there is a subtotal for each category.

At the bottom of the page is an Attempt some questions… button. This takes the student to the …

Start a session form

… where they set up what practice they would like to do. Students can select which categories they want to attempt questions from. They may also be able to choose how many questions they want. For example "Give me 10 questions", "As many as possible in 20 minutes", or "Keep going until I say stop". The teacher will probably be able to constrain the range of options available here.

Once they are satisfied, they click the "Start session" button. This takes them to the …

Attempt page

… which shows the student the first question, chosen according to the criteria they set. There will probably be a display of running statistics: "In this session you have attempted 0 questions so far". The question will contain the controls necessary for attempting it. There will also probably be a "Please stop, I'm bored" button, so the student can leave at any time.

When they get back to the front page, the statistics will have been updated.

If the student crashes out of a session, then when they go back in, the front page will have a "Continue current session" button.

Overall activity log

One batch of question attempts will be called a 'practice session'. The system will keep track of all the sessions that the student has done, and what they achieved during each session.

The front page will have a link to a page that lists all of the student's sessions, showing what they achieved in each. This provides more detail than is visible on the front page.

Possible extensions

That is the key idea. Here are some further things that could be added to the basic concept.

Milestones

The system could recognise targets, goals, or achievements (I'm not sure of the best name). That would be something like "Attempt more than 10 questions from the Hard category, and score more than 90%". If the student achieves that target at any time, the system would notice, and the achievement would be recorded on the front page and in the session log in an ego-boosting way (e.g. a medal icon).
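
To make that concrete, a target might boil down to something like this minimal sketch (none of these class or property names exist yet; a real implementation would load the student's statistics from the database):

    // Hypothetical representation of a target; all of the names are invented.
    class practice_target {
        public $categoryname;   // e.g. 'Hard'
        public $minquestions;   // e.g. 10
        public $minpercent;     // e.g. 90

        // $numattempted and $percentscore would come from the same
        // statistics that are displayed on the front page.
        public function is_achieved($numattempted, $percentscore) {
            return $numattempted > $this->minquestions &&
                    $percentscore > $this->minpercent;
        }
    }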

The whole point of this activity is to be as student-driven as possible, so should students be able to define their own targets or goals? Should students be able to set goals for each other?

Locks / Conditional access

The activity could also have locks, so that the student cannot access the questions in the Multiplication category until after they have scored more than 80% in the Hard addition category. Of course, unlocking a new category could be an achievement. We appear to be flirting with the gamification buzz-word here, so I will stop.

Performance comparison

Should there be any way for students to compare their performance, or achievements, with their peers? We are definitely getting to features that should be left until version 2.0. Let's get a basic system working first, but make sure it is extensible.

How hard would this be to build?

I think this would not require too much work because a lot of the necessary building blocks already exist in Moodle. The question bank already handles questions organised into categories, and we would just use that. Similarly, the attempt page and practice sessions are very easy to manage with the new question engine.

The real work is in two places. First, building the start attempt form, and then writing the code that randomly selects questions based on the options chosen. Second, deciding what statistics to compute, and then writing the code to compute them.
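
For the first of those, the heart of the matter might look something like this minimal sketch, assuming the Moodle 2.x database API (the function name and the simplified query are mine, not real code):

    // Pick the next question at random from the chosen categories,
    // avoiding anything already attempted in this session.
    function choose_next_question(array $categoryids, array $alreadyseen) {
        global $DB;
        list($insql, $params) = $DB->get_in_or_equal($categoryids);
        $questionids = $DB->get_fieldset_select('question', 'id',
                "category $insql", $params);
        $available = array_diff($questionids, $alreadyseen);
        if (!$available) {
            return null; // Nothing left to ask.
        }
        return $available[array_rand($available)];
    }

The statistics are a similar story: mostly counting attempts and summing grades per category, with the interesting design decision being what counts as "recent".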

Of course, before we can start writing any code, there are still a lot of details of the design to decide. Also, one must not forget things like backup and restore, creating the database tables, and all the usual Moodle plumbing.

Overall, I think it would take a few months' work to get a really useful activity built.

Credit where credit is due

I said earlier that I got most of these ideas from other people. To start with, things like this have been mooted in the Moodle quiz forum over the years. The discussions there usually start from Computerised Adaptive Testing, whereas this idea is about student-driven use of questions. I think the latter is more interesting. (As a mathematician, I think CAT is an interesting concept. I just don't think it would make a useful Moodle activity.)

The real inspiration for this came at a meeting in London at the start of 2011. That meeting was at UCL with Tony Gardiner-Medwin, who has already built a system something like this, but stand-alone, not in Moodle; and David Emmett from the University of Queensland, Brisbane (who was giving a seminar). David had been hoping to get a grant to build something like this proposal (in Moodle) but that did not pan out. We did, however, have a very interesting discussion, and that is where I got the key idea that this sort of question practice is most interesting if you give the student as much control of their own learning as possible.

We have also discussed ideas like this on-and-off for a long time at the OU. There have, however, been a lot of other things we needed to deal with first. We had to do a lot of work getting the quiz system working to our satisfaction (a strand of work that eventually led to the new question engine). We had to sort out the reporting of grades, including working with Moodle HQ on the new gradebook in Moodle 1.9, and integrating Moodle with our student information system. We had to make new question types that our users wanted. Only now can we start to think seriously about the last piece of the jigsaw: more activities that use all the question infrastructure we have built. I hope this post is a useful starting point for discussing what one of those activities might be.

Saturday, August 6, 2011

The Good, the Bad and the Ugly

It was the best of times, it was the worst of times, ... It has certainly been a mixed week.

The good

... was that I helped three OU developers submit their first bug fix through the new Moodle development process: MDL-27631, MDL-28517 and MDL-28620. Hopefully those fixes all get through integration review next week.

The Bad

... was that the time had finally come to deal with a hot potato that we have been tossing around for some months; and, to mix metaphors, when the buck stopped, I was in the wrong place at the wrong time.

As part of some new question types we are developing, we want students to be able to type responses that include superscripts and subscripts. For example 3×10⁸ m s⁻¹ or SO₄²⁻. We have an old implementation of this, done six years ago for OpenMark (for example this or this), but that never worked in Safari, and is a bit dodgy generally. We want a new, reliable implementation that works in IE, Firefox, Chrome and Safari.

Plan A: back near the start of spring, I quickly knocked up a partial solution using the YUI 2 Rich Text Editor library. It mostly worked, but there were issues. It did not work consistently across browsers, and it let you nest superscripts inside subscripts inside superscripts, which just gets confusing, so we wanted to prevent that.

I had a sneaking suspicion how hard it would be to get from my quick partial solution to a robust implementation. Therefore I moved on to other things, and tried to unload this job onto three other people in turn. There were plenty of other more urgent tasks on our todo list.

Time passed, and many of the other things got done, so at the start of the week I realised that creating this input widget could not be put off any longer. I also felt it was unfair to expect other developers to deal with a crappy job that I was not prepared to do myself, so I decided to have another go.

The other thing that had changed was that, while attempting to implement this, Colin and Wale had both eliminated some blind alleys and suggested some promising ideas. Therefore, I was continuing from a far better place than where I left off. Even so, it was a long week.

The Ugly

Plan B: Although we had a partial implementation in YUI 2, I did not want to continue with that. Moodle is trying to move from YUI 2 to YUI 3 as soon as possible. So, my first attempt was to use the YUI 3(.3) Rich Text Editor. As the docs make clear, that is not finished yet. It is also not terribly well documented. With hindsight, I now realise that it provides only a very thin wrapper around the native editing facilities provided by web browsers. Therefore it does not really help with browser inconsistencies.

Plan C: Since the Rich Text Editor is only beta, I decided to have a look at what was new in YUI 3.4, which is due for release soon. The answer is that they have made quite a lot of progress - in the sense that if you are trying to walk from London to Edinburgh, you have made quite a lot of progress by the time you reach Milton Keynes. Compounded with the fact that it is hard to find any documentation for pre-release versions of YUI, this approach also failed.

At this point, I decided to do a bit of reading. I found two excellent articles from Opera that explained exactly how contentEditable works in web browsers. I also found a good cross-browser compatibility table. That made me realise that YUI was hardly doing anything to help with cross-browser differences.

Plan D: Now that I knew roughly what the browsers were doing, I briefly toyed with the idea of implementing the widget entirely myself in plain JavaScript. Once again, getting something basic working was not too hard, but I had not even started to tackle the cross-browser differences.

Then I realised that TinyMCE, which Moodle uses, tends to work really well across browsers. It is a bit slow to load, because it is a huge mass of code, but perhaps all that code is there for a reason. A quick play with superscript and subscript in the Moodle HTML editor in various browsers confirmed that TinyMCE must be working around most of the problems. So I dived into the TinyMCE code with the original idea of stealing the bits I needed to make Plan D work. It did not take much looking for me to develop a new-found respect for how hard TinyMCE is working to keep different web browsers in line. I did not want to have to replicate all that.

Plan E: So I finally concluded I should just use TinyMCE directly. That is using a very large sledge-hammer to crack a nut, but at least it should work. Indeed, it was mostly just a matter of setting the right configuration options. What made it particularly good is that there is an option to limit which tags can be nested inside other tags. That robustly prevents people from nesting superscript inside subscript, etc.
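
The sort of configuration involved looks something like this (my sketch, not our actual settings; the option names come from the TinyMCE 3 documentation, and in Moodle they would be passed through to tinyMCE.init when the page loads):

    // Options for a superscript/subscript-only TinyMCE editor. The option
    // names are real TinyMCE 3 ones; the values here are illustrative.
    $editoroptions = array(
        'theme' => 'advanced',
        'theme_advanced_buttons1' => 'sup,sub', // just the two buttons we need
        'valid_elements' => 'sub,sup',          // strip any other markup
        // This is the option that stops superscript nesting inside
        // subscript, and vice versa.
        'valid_children' => '-sub[sub|sup],-sup[sup|sub]',
        'forced_root_block' => '',              // single line: no wrapping <p> tags
    );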

I was very nearly there, but there were two more requirements. We did not want pressing enter to insert a line-break, and because we were only dealing with a single line of input, we wanted to use the up and down arrow keys as shortcuts for superscript and subscript. The only way I could find to do that was to write a simple TinyMCE plugin. Fortunately, that is well documented.

The end

I got there eventually. The code needs to be cleaned up, tested some more, and integrated into the question types we are building, but I don't foresee any problems doing that.

I would like to thank Moxiecode, who make TinyMCE, even though they have completely ignored the patch I sent them some time ago in relation to MDL-27890. I would also like to thank Olav Junker Kjær, who wrote the Opera articles, which were the most useful thing I read. Also, the team behind Firebug: I can't imagine doing JavaScript development without that debugging tool. Finally Colin, Wale and Jamie, who I tried to dump this on, and who in return gave me helpful ideas.

Thursday, July 7, 2011

Keeping the discipline of not changing Moodle core

We have said in the past that at the OU we made too many changes to core code in our Moodle 1.9 system, and that as we moved to Moodle 2, we would make far fewer. The release of Moodle 2.1 provides a good opportunity to stop and reflect on how we are doing.

Exactly how many core changes we had made in 1.9 seems to depend on who you ask. It was something of the order of one or two thousand, depending on how you count. As a result, every time there is a new Moodle 1.9.x release, someone (Derek) has to do a couple of days' painstaking merging to upgrade to the new version.

Moodle 2.1 was released on Friday. On Monday afternoon we decided to try upgrading our development branch to it. The merge (literally git merge MOODLE_21_STABLE) only took a few hours, and that was mostly a matter of thinking before typing git checkout --theirs to resolve most of the conflicts in favour of the Moodle 2.1 code. Then we had to test install, upgrade, and basic functionality before pushing the merge to our central repository.

But, how many OU-specific changes do we have in core code right now? Well, the answer appears to be eight. Let me explain that number.

To control the core code changes, we use a simple approval procedure. Each change must be proposed by one of the leading developers. They do this by opening a special sort of ticket in our issue tracking system. That serves as a permanent record of the change, and is also a place to log any discussion. The other leading developers then review the proposal. For the change to be approved, at least one other leading developer must endorse it with a +1 vote, and there must not be any -1 votes. Votes are normally accompanied by an explanation of why that developer is voting that way.

After a suitable time for votes, the issue is declared either accepted or rejected. OU-specific changes can be rejected for two reasons. We may decide that it is not acceptable to change core to implement this feature, so we drop the feature; or we think of some devious way to achieve the feature without changing core code.

If a change is approved, then the code is written. Well, in some cases the code will already have been written, because you can have a much more informed debate about whether a certain change is a good idea if you can see exactly what the proposed change is. Once the code is written and approved, it is committed to our git repository and the issue moves into state 'Code committed'. Finally, we may find a way to get rid of the OU-specific change in future. The most common way that happens is if we contribute the change upstream to moodle.org. For example, the new Moodle question engine was an OU-specific change as long as we were using it in Moodle 2.0, but now we have upgraded to Moodle 2.1, it is standard code. Therefore, that issue has now changed status to 'No longer required'.

Overall, we have 22 OU-specific change issues in our bug tracker. The break-down is:

Rejected: 2
New (under discussion): 4
Approved (but not yet implemented): 1
Code committed: 8
No longer required: 7

Most of the 'Code committed' changes are pretty boring. For example, three of them are bug-fixes to the questionnaire module that we have submitted upstream, but which have not been reviewed and accepted by the questionnaire maintainers yet. Therefore, those three will almost certainly end up as 'No longer required' in due course. Another example is that we want to customise the "Database connection failed / It is possible that the database is overloaded or otherwise not running properly" page that you get when Moodle fails to connect to the database. If Moodle can't connect to the database, then it cannot load the configuration, and so cannot determine which theme to use to display the error. Therefore, the only way to customise that page is to edit lib/setup.php.

The one 'serious' ou-specific change we have is some hacking around in course/lib.php to support one of our custom modules called 'subpage' (not released yet, but we hope to share it eventually). Given more time, we might be able to find a more elegant way to handle these changes, but we don't have that sort of time at the moment.

While we have controlled the core code changes, we have written a lot of custom plugins. Those range from big things like forumng and ouwiki, to small things like a local plugin that just implements a single web-service function. I'm afraid I don't have a complete list, but we must have more than 50 plugins by now. As far as I am aware, the upgrade from 2.0 to 2.1 did not break any of them.

Tuesday, April 12, 2011

Performance and Scalability

When you set up a web application, often you will start small with everything running on one server. Everything, in this case, typically means the application and the database and data files. That is nice and simple. It has the advantage that everything is fast because it is all on the one server.

The capacity is, however, limited. Suppose the load on your application increases. You can get some way just by upgrading the one server, adding more memory and faster processors, but that will only get you so far.

Eventually, you will have to scale out. You will get a number of separate web servers, with a load-balancer to distribute the incoming requests between them. All the web servers will connect to a shared database server, or cluster of database servers. The files will probably go on a separate file server.

While this increases the total load that the whole system can support, it means, paradoxically, that processing a single request is slower. For example, if you switch from one server to three servers (application server, database and files) your site will not support three times as many users. The scalability will not be linear. That is because every connection to the database, and every request for a file, now has to travel over the network. Accessing something across a network tends to be an order of magnitude slower than accessing something on the same server.

The above is all standard knowledge about scaling web applications. I have been thinking about it recently because it explains the way my working life has been evolving. Just over six months ago I was working essentially on my own, re-developing the Moodle question engine. I had been working away like that for a year, and I had got a lot done.

Since then, things have changed, and I am now managing a team including three other developers, and two out-sourced development contracts. It has been particularly 'bad' this last couple of weeks as one development period of the project came to an end and I had to review a lot of code, and then I had to sort out everything we were supposed to be doing for the next three months. I am starting to wonder if I will ever get any of my own development work done at all!

That is, however, just some exasperation showing. I know really that this has just been a brief spell with an excessive amount of administration. Overall I am happy that the OU is investing so much in its eAssessment systems (and the other parts of its VLE); as a team we are achieving more than I could on my own; but right now my inner geek would really like to go and hide in a cave for a while and just write code undisturbed.

Wednesday, March 9, 2011

Moodle bug tracker

Today, between fixing bugs and reviewing code, I spent a bit of time tinkering with my dashboard in the Moodle bug tracker. I was trying to make it as clear as possible which issues need my attention. I am quite pleased with the result:

tracker screen grab

The issue statistics widget does not just show you the pretty graphs, it also makes it easy to get at those issues. For example, if I click on 1.9.12 in the My targetted issues box, then I am taken to a list of those 11 issues. That particular widget I have used for a while; the new parts are the boxes just under it.

I added the My: Ongoing pull requests box to make it easy to find the things I have submitted for inclusion in next week's weekly build (hopefully). Thanks to Eloy, that filter is now available to everyone in the jira-developers group.

The next two boxes let me quickly get to issues with patches attached. There is an emerging convention of adding the label patch to such issues, where the attached code needs to be reviewed. This makes finding such issues very much easier. The whole point of the new development processes is to encourage more people to contribute patches, and then ensure those patches get looked at, rather than just sitting there for years. (Here is an example I found yesterday of what used to happen: MDL-13983). Therefore, as quiz maintainer, I need to be able to see easily if anyone has submitted any relevant patches. I also want easy access to bugs with patches that I created or commented on.

Having brought it up, can I say that I am quite happy with how the new processes are working so far. My impression is that since they were introduced, I have received more usable bug fixes for the quiz than in the past. I am not sure how much causality one can claim there, however, since as well as the new processes, we also had the Moodle 2.0 release. Moodle 2.0 has plenty of minor bugs that are ripe for fixing. So, it may just be that we are seeing lots of bug fixes because there are lots of bugs.

At the other end, it has made it a bit easier to get my code reviewed. Well, finished code where I have created a PULL request certainly gets reviewed. It is still sometimes a problem to get comments on work-in-progress, because everyone is so busy.

Wednesday, February 23, 2011

Etiquette for questions

I have been working hard at converting the new Moodle question engine to work in Moodle 2.0, aiming at a deadline this Friday (25th February). On Friday we should have a first OU version of Moodle 2.0 with all the key features so that we can start testing, even though students won't get onto the new system before July. I have basically finished the question engine, give or take a few features that are not needed for testing, and this week I am just doing some final tidying up of the code.

Hopefully, next week I can start the process of getting it reviewed for inclusion in Moodle 2.1. As I say, there are some gaps in the functionality that will need to be filled in before it can actually be committed, but there is a lot of code to be reviewed (lucky Eloy!) and so I hope we can kick off the process.

So, my excuse for not blogging about the new question engine recently is that I have been too busy working on it to write about it. In the last few days, however, I encountered a couple of nice ideas that would be easy to implement using the flexibility the new question engine gives, and I want to describe them. First, I need to remind you of one key point about the new system:

Question behaviours

As I explained before, a key idea in the new question engine is that of question behaviours. Whereas a question type lets you have either a multiple-choice, a drag and drop, or a short-answer question, a behaviour controls how the student interacts with the questions, of whatever type. For example, the student may have to type in answers to each question in the quiz, then submit everything, and only then are the questions marked. This is known as the "Deferred feedback" behaviour. Alternatively, the student may answer one question, and have their answer marked immediately. If they are wrong, they get a hint and can then immediately have another go. If they get it right on the second or third try, they get fewer marks. This is called the "Interactive with multiple tries" behaviour.

When I was first working on this, I did wonder whether it was perhaps over-kill to make behaviours fully-fledged Moodle plugins. It seemed to me that I had already implemented all the types of behaviour anyone was likely to want. It turns out I was wrong. Here are three ideas for new behaviours that I have come across since I had that naive thought.

Explain your thinking behaviour

The concept here is that, in addition to presenting the question to the student for them to answer in the usual way, you also give them a text-area with the prompt "Explain your answer". When they submit, the question is graded as usual. Moodle does not do anything with the explanation, other than to store it, and re-display it later when the student or their teacher reviews their attempt. The point is that the student should reflect upon and articulate their thought processes, and the teacher can then see what they wrote, which might be useful for diagnosing what problems the students are having.

I'm not sure that this would really work. Would the students really bother to write thoughtful comments if there were no marks to be had? However, this would be relatively easy to implement, so we should build it and see what happens in practice. The teacher could always manually adjust the marks based on the quality of the reflection, if that was necessary to incentivise students.
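
As a rough indication of how little is involved, the behaviour might start out as a skeleton like this (not working code; it extends the standard deferred feedback behaviour, and the field name is invented):

    // Rough skeleton of an 'Explain your thinking' behaviour plugin.
    class qbehaviour_explainanswer extends qbehaviour_deferredfeedback {
        public function get_expected_data() {
            // Accept one extra response variable, the student's explanation,
            // alongside whatever the question type expects.
            $expected = parent::get_expected_data();
            $expected['explanation'] = PARAM_RAW;
            return $expected;
        }

        // The matching renderer (not shown) would add the 'Explain your
        // answer' text-area to the question, and re-display what was typed
        // when the attempt is reviewed. Grading is left to the parent.
    }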

I'm afraid I cannot remember who suggested this idea. It was a post in the Moodle quiz forum some time ago, just after I had implemented the behaviour concept and was thinking that my initial list of behaviours was all anyone could possibly want.

gnikram desab-ytniatreC

This idea I only came across yesterday evening, in a blog post from people in the OU's technology faculty. It is a slightly strange twist on certainty-based marking.

With classic CBM, the student answers the question, and also says how certain they are that they got it right (for example, on a three-point scale). The student will only get full marks if they get the question right, and are very certain that they were right. If, however, they express high certainty and are wrong, they are penalised heavily with a big negative mark. To maximise their score, the student must accurately gauge their level of knowledge. This hopefully promotes reflection, and self-awareness by the student of their level of knowledge.
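
To make that concrete, here is the sort of scoring scheme classic CBM uses. The numbers below are, as I understand it, the ones Tony Gardiner-Medwin's system uses; treat them as illustrative:

    // Classic certainty-based marking on a three-point certainty scale.
    function cbm_mark($iscorrect, $certainty) {
        $markifright = array(1 => 1, 2 => 2, 3 => 3);
        $markifwrong = array(1 => 0, 2 => -2, 3 => -6);
        return $iscorrect ? $markifright[$certainty] : $markifwrong[$certainty];
    }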

The idea from the OU technology faculty is to do this backwards, for multiple choice questions. Rather than getting the student to answer the question and then select a certainty, you first show them just the question stem without the choices, and get them to express a certainty. Only then do you show them the choices and let them choose what they think is the right answer.

Again, I am not sure if this would work, but it is sufficiently easy to do by creating a new behaviour plug-in (and a small change to the multiple-choice question type so that you can output just the question, without the choices) that it has to be worth a try.

Free text responses with a chance to act on feedback

This last idea I only heard about this morning. There was a session of the OU's "eLearning community" all about eAssessment, which naturally I attended. This is a monthly gathering with a number of different presentations on some eLearning topic. The first three talks were about specific courses that have recently adopted eAssessment, how students had engaged with that, what the effect had been on retention and pass rates, and so on. That was interesting, but not what I want to talk about here. The final talk was by Denise Whitelock from the OU's Institute of Educational Technology, who has just completed a review of recent research into technology-enhanced assessment for the HEA that should be published soon. Here, I just want to pick up on one specific idea from her talk.

I'm afraid that again, I don't recall who deserves credit for this idea. (Once Denise's review is published, I will have a proper reference, but I did not take notes this morning.) It was another UK university that had done this, in the context of language teaching. The student had to type a sentence in answer to the question, then the computer graded that attempt and gave some feedback. Then, the student was immediately allowed to revise their sentence in light of the feedback, and get it re-marked. The final grade for the question is then a weighted sum of the first mark and the second mark. You need to get the weights right. The weight for the first try has to be big enough that the student tries hard to get the question right on their own before seeing the hints, and the weight for the second try, though smaller, also has to be big enough that the student bothers to revise their response.
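
In code, the grading rule itself is a one-liner; all the difficulty is in choosing the weights (the 0.7/0.3 split below is purely illustrative):

    // Final grade as a weighted sum of the marks for the two tries.
    function weighted_grade($firstmark, $secondmark, $firstweight = 0.7) {
        return $firstweight * $firstmark + (1 - $firstweight) * $secondmark;
    }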

Now, the OU is currently creating a Moodle question type that can automatically grade sentence-length answers using an algorithm that my colleague Phil Butcher implemented the first version of in 1978! (When I say we are creating this, what I actually mean is that we have contracted Jamie Pratt, a freelance Moodle developer, to implement it to our specification.) Anyway, once you have that, the idea of allowing two tries, with feedback after the first try, and a final grade that is a weighted sum of the marks for the two tries, is just another behaviour.

So, my initial thought that people would not have many ideas for interesting new behaviours seems to have been wrong. The flexibility I built into the system is worth having.

Wednesday, February 9, 2011

Should you listen to futurologists?

Educause just published their annual survey describing "six areas of emerging technology that will have significant impact on higher education and creative expression over the next one to five years".

This got circulated round the team at work and I rather cynically asked "so, what did they predict last year then?" My colleague Pete Mitton took that question and ran with it to produce the following analysis:
OK, as I have a full set of Horizon Reports on my hard disk, here's a summary of their predictions for the years 2004-11.

I've pushed some titles together where the wording is different but the intent is the same (for example they've used mobile computing/mobiles/mobile phones in the past with the same meaning).

The numbers in the table are the time-to-adoption horizon in years.
Successive predictions, from the 2004 report through to the 2011 one:
User-created content: 1, 1, 1
Social Networking: 4-5, 1, 1
Mobiles: 2-3, 2-3, 1, 1, 1
Virtual Worlds: 2-3
New Scholarship and Emerging forms of publication: 4-5
Massively Multiplayer Educational Gaming: 4-5
Collaboration Webs: 1
Mobile Broadband: 2-3
Data Mashups: 2-3
Collective Intelligence: 4-5
Social Operating Systems: 4-5
Cloud Computing: 1
The Personal Web: 2-3
Semantic-Aware Applications: 4-5
Smart Objects: 4-5
Open Content: 1
Electronic Books: 2-3, 1
Simple Augmented Reality: 4-5, 4-5, 2-3, 2-3
Gesture-based computing: 4-5, 4-5
Visual Data Analysis: 4-5
Game-based learning: 2-3, 2-3, 2-3
Learning analytics: 4-5
Learning Objects: 1
Scalable Vector Graphics: 1
Rapid Prototyping: 2-3
Multimodal Interfaces: 2-3
Context Aware Computing (aka Geostuff): 4-5, 4-5, 2-3
Knowledge Webs: 4-5
Extended Learning: 1
Ubiquitous Wireless: 1
Intelligent Searching: 2-3

Of course, the purpose of a report like this is not to accurately predict the future. The aim is rather to stimulate informed debate about the technologies that are coming up. Within our team, at least, they seem to have succeeded.

I thought, however, that this analysis was interesting enough to share. It provides some context for this year's predictions. More generally, it shows how difficult it is to predict future technology trends.