In one way, these MOOCs (massive open online courses) are a positive step toward making learning, especially in technical subjects, available to more people: those who couldn't attend, get accepted into, or afford MIT. Students will earn the certificates this way: "They'll watch videos, answer questions, practice exercises, visit online labs, and take quizzes and tests. They'll also connect with others working on the material." As open courses, these could be hugely popular: 94,000 people enrolled in just one course (yes, one course) offered by Stanford last fall. And the courses will be as rigorous as regular MIT courses, we're told.
As always, the sticking point is assessment: how will the learning in the course be evaluated, and by whom?
Short answer: "It's unclear exactly how the assessment will work."
Longer answer: Technology and teaching assistants will be our saviors.
But how much will outside individuals get to interact with MIT professors? That's unclear.

This sounds as though it might work in technical fields, where I'm assuming there's some fixed, highly complex content that has to be mastered; I don't have enough content knowledge about those fields to say. The model has one advantage: we're all used to participating in online forums, answering questions, and rating good answers highly. It's satisfying to help someone online, and this model would take advantage of that impulse.
One way to promote such contact will be software that handles many questions, said Anant Agarwal, director of MIT's Computer Science and Artificial Intelligence Laboratory.
"Through voting and other mechanisms, you can create a funnel of requests so that the requests that come off the funnel at the very top can actually be answered by MIT professors and MIT TA's," he said. "A large number of questions at the lower parts of the funnel can actually be answered by other learners who may be slightly ahead."
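The funnel Agarwal describes could be sketched as a simple vote-ranked queue, with only the most-upvoted questions reaching professors and TAs and the rest routed to peers. This is my own toy illustration of the idea; the names (`Question`, `route_questions`, `staff_capacity`) are invented, not anything MIT has published:

```python
# Toy sketch of a vote-driven question funnel (my invention, not MIT's system).
# Learners upvote questions; only the top few go to professors and TAs,
# while the rest are handled by other learners "slightly ahead."

class Question:
    def __init__(self, text, votes=0):
        self.text = text
        self.votes = votes

def route_questions(questions, staff_capacity):
    """Send the most-upvoted questions to staff; route the rest to peers."""
    ranked = sorted(questions, key=lambda q: q.votes, reverse=True)
    return ranked[:staff_capacity], ranked[staff_capacity:]

qs = [Question("Why does my proof fail?", votes=42),
      Question("What is a pointer?", votes=3),
      Question("Is P = NP?", votes=17)]
staff, peers = route_questions(qs, staff_capacity=1)
# staff gets the single most-upvoted question; peers answer the other two.
```

Whether a mechanism this crude scales to 94,000 students is exactly the open question the article raises.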
MIT faculty members have also developed technology that can automatically grade essays. Other technologies that could come into play here include automatic transcription, online tutors, and crowdsourced grading.
But automated essay grading? Crowdsourced grading, and even the question of whether assigning essays is pointless at all, have already made their way into the conversation. Possibly MIT is thinking of anonymous grading along the lines of "the grading factory," or of outsourcing grading as business school professors are doing. Certainly some science instructors are enthusiastic about programs like SAGrader.
An essay grading program may not have the emotional kick of having a student come up at the end of the semester to thank you for helping her improve her writing, as happened to me and other bloggers recently, but MIT seems to say that the efficiency tradeoff is worth more than the emotional connection.
And if teaching assistants and adjunct tutors are the solution: does the profession really need to find MORE ways to exploit TAs and adjuncts? I'm guessing that only an Einstein in training will make it to the top of the question pyramid that MIT describes, and that overworked, underpaid temporary faculty will do the bulk of the answering, without ever getting the satisfaction of seeing individual students improve, unless they have a better memory for 94,000 names than I do.
I'm not saying this isn't the wave of the future; it might be. I'm not saying this can't work; for technical fields, it might. I don't know enough to say.
But if it's the wave of the future, why is MIT so careful to "distance" this "brand" from its own brand of education?