Meanwhile, I have read several stories about Coursera (including this one), which seems to operate under the assumption that the key to success in online education will be Ivy League instructors, rapid feedback, and peer interaction.

I have my doubts about the Ivy League instructors. I assume that they are devoting very little of their time and energy to this. Eventually, folks who are willing to put close to 100 percent of their bandwidth into online education should be better at it than the folks who put close to 0 percent.

http://econlog.econlib.org/archives/2012/04/why_take_yoga_c.html

asked 21 Apr '12, 17:53

robrambusch ♦

edited 21 Apr '12, 17:57


In my opinion it is not necessary to have Ivy League instructors in order to provide quality education. However, I think it is the easiest way to get started. Let us be honest: do you think that KnowItLab/Udacity or Coursera would have had the same publicity and attracted as many students without the reputation of their teachers? I honestly doubt it. Leaving Khan Academy aside, just go to YouTube, search for educational videos, and look at their view counts if you need proof.

Now that the experiments of last fall are over, both Udacity and Coursera are hard at work figuring out the best way to implement online education. Already we are seeing differences in their respective approaches: whereas Coursera relies on the reputation of its professors to attract students, Udacity recently placed an open call to recruit teachers with more diverse backgrounds, not necessarily Ivy League ones. I can see both strategies succeeding: given the quality of Udacity's courses so far, I am inclined to trust them, and believe that they will pick the right people for future courses.

As for teachers' involvement, I think that Udacity has the upper hand right now because it is more consistent in that respect: all classes have office hours, and TAs/teachers have more presence in the forums. Involvement at Coursera seems to exhibit higher variance, but pretty much all professors either hold screenside chats or post in the forums from time to time. To be honest, if a teacher did not want to devote their time to a MOOC, I doubt they would propose an online course in the first place.


answered 22 Apr '12, 08:18

-iv-

Udacity have put me off a lot with their approach. Their courses are simply too dumbed down to be comparable with traditional college classes. It might help them pull in large numbers of students, but they're going to struggle to gain credibility with their own certification programme if people don't think the rigour is there. The only reason I would take a Udacity course now is if I really wanted to take a course in that subject and nothing comparable was offered by Coursera/MITx.
(22 Apr '12, 09:57) jholyhead
@jholyhead I tend to agree with you, but I wonder how much people are learning in the Udacity classes (I sincerely wonder; I think measuring this would be helpful, which is why I like the MIT Mechanics Online pretest idea). Different approaches work for different people, and I am glad there are a variety of approaches being tried (for this reason I think the differences between the Coursera classes have a positive aspect as well as a negative one). I'm taking CS212 (with Peter Norvig) from Udacity and liking it so far. He is taking a somewhat different approach from CS101 and CS373, and I think he is challenging students more. It is interesting to watch the reaction in the forums. I think the first office hours (and some of Peter's posts in the forums) give an idea of what he is trying to accomplish. I like his teaching style. Can anyone here offer thoughts on other classes in the current round of Udacity offerings?
(22 Apr '12, 10:42) rseiter ♦
What do you mean by that? Do you mean that the level is too low, or that it is too easy to get the maximum score because of the new grading system?
(22 Apr '12, 11:02) -iv-
A bit of both. At times the level dropped below what I would expect in a college-level class, but the main thing was that the assignments were too easy and too brief. Most of the programming assignments involved taking the answer to a quiz question and tweaking a couple of lines of code; understanding the material was entirely optional. And the exam was absurdly easy. I think they had the level about right with AI-Class, but 373 was actually a lot easier.
(22 Apr '12, 11:19) jholyhead
My only experience with Udacity was cs373 and I also tend to agree with @jholyhead: while it was interesting and a nice complement to ai-class, I would not call it a CS3xx-level course.
(22 Apr '12, 11:52) Ale
@jholyhead: By that standard, most of the courses so far have been extremely easy, at least from a theoretical point of view. The only courses that could be remotely considered challenging are: SaaS, unless you have had previous exposure to Ruby on Rails; NLP, which is not conceptually hard, but whose programming assignments are sometimes absurd to the point of looking more like witchcraft than actual problem solving; and Cryptography, where some programming assignments can be challenging when all you have is an old computer, the abundance of acronyms sometimes makes it hard to follow the teacher, and one question from problem set 4 was both genuinely subtle and interesting. I do not care about certificates; I just take courses to refresh or expand my knowledge. In my opinion, both Coursera and Udacity made a mistake by offering them: they are a bit pointless since they carry no value. For the time being, both should focus only on finding the best way to implement online teaching; certification can come later, when they have the means to implement it properly. I also thought that the initial AI class was fine, but there were so many complaints about the grading policies and so-called "ambiguities" that I can understand their turnabout on it.
(22 Apr '12, 11:56) -iv-
@rseiter: I think they are learning, if my professional experience is anything to go by. The random shuffle is a question I used to ask during job interviews. I must have asked it of dozens of PhDs and MScs in maths, physics, and computer science, and the number of people who managed to come up with a linear algorithm is in practice surprisingly low. Most candidates have such a hard time on this simple question that I never even bothered to ask whether their algorithm was biased or not.
(22 Apr '12, 12:17) -iv-
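
The comment above does not name the linear shuffle it has in mind; the standard linear-time, unbiased answer is the Fisher-Yates shuffle. A minimal Python sketch, assuming that is the intended algorithm:

    import random

    def fisher_yates_shuffle(items):
        # Shuffle in place in O(n) time; every permutation is equally likely.
        for i in range(len(items) - 1, 0, -1):
            # Draw j from 0..i inclusive; excluding i would bias the result.
            j = random.randint(0, i)
            items[i], items[j] = items[j], items[i]
        return items

    print(fisher_yates_shuffle(list(range(10))))

The bias question at the end of the comment is the classic pitfall: drawing j from the whole list on every step yields n^n equally likely outcomes, which cannot map uniformly onto n! permutations.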
@rseiter: (contd.) The current Udacity offering looks fairly solid to me. CS212: I really like what Peter Norvig is trying to accomplish, even though his approach may be difficult to grasp for beginners. I have had to deal with enough bad code in my life that seeing someone trying to teach problem solving instead of blind coding is a breath of fresh air. CS253: I cannot really give any feedback on this one yet; unit 1 was too basic to make any kind of meaningful assessment. CS262: I like Wes' style, plus he has an incredible voice (seriously). Unit 1 is a gentle introduction to the regular expressions and finite state machines that will be used in lexical analysis. He does not cover the subject extensively, but this course should be a good introduction to Coursera's compiler course for those with no previous knowledge. CS387: It is much more accessible than Dan Boneh's offering on Coursera. Dave does not cover exactly the same topics as Dan, and his approach is more pragmatic. I learned some interesting tidbits about the Lorenz cipher during unit 1, and it looks like the two courses will complement each other very well.
(22 Apr '12, 12:17) -iv-
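
As a rough illustration of the regular-expression-to-lexer connection mentioned above, here is a minimal tokenizer sketch in Python. The token set is hypothetical and far simpler than anything CS262 covers:

    import re

    # Hypothetical token set for a toy language; order matters, since the
    # alternation tries the patterns left to right.
    TOKEN_SPEC = [
        ("NUMBER", r"\d+"),
        ("IDENT",  r"[A-Za-z_]\w*"),
        ("OP",     r"[+\-*/=]"),
        ("SKIP",   r"\s+"),
    ]
    MASTER_RE = re.compile("|".join(f"(?P<{name}>{pattern})"
                                    for name, pattern in TOKEN_SPEC))

    def tokenize(text):
        # Yield (token_type, lexeme) pairs, dropping whitespace.
        for match in MASTER_RE.finditer(text):
            if match.lastgroup != "SKIP":
                yield match.lastgroup, match.group()

    print(list(tokenize("x = 40 + 2")))
    # [('IDENT', 'x'), ('OP', '='), ('NUMBER', '40'), ('OP', '+'), ('NUMBER', '2')]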
Ultimately, a solution to the certification problem is necessary if they want to change the education landscape, which, in Udacity's case, is their stated aim. Udacity want to offer a 'degree' at some point for completing all the courses in a defined syllabus, but if the courses are easy to pass, then that degree has no real value. I think Coursera would have a much easier time offering a qualification, as their courses are fairly rigorous in their content and non-trivial in their assessments. If you came through PGM with a good grade, I'd believe you if you told me in a job interview that you understand PGMs.
(22 Apr '12, 12:41) jholyhead
@-iv- thanks for the details on the current Udacity classes! I would add PGM to your list of challenging courses. I also would note that I think DAA is a good introduction to algorithms (whether or not it is challenging depends on how much programming experience you have IMHO). Thanks for the interesting comments about the random shuffle. What did everyone think about the level of granularity in the Udacity certificates? (for CS101: Certificate of Completion, Certificate of Accomplishment, Certificate of Accomplishment with High Distinction, Certificate of Accomplishment with Highest Distinction)
(22 Apr '12, 13:23) rseiter ♦
@rseiter: I do not think that PGM is hard, but I understand why some people would think that. Instead, I would qualify it as "demanding" due to the time required to do the homework assignments (which are very interesting, btw). I really enjoyed DAA. Algorithmics is one of my favourite subjects, and I was expecting this course to be a simple refresher on basic stuff like sorting. That is how it started, but it quickly switched to problems that I had never encountered before: randomized selection, graph contraction, counting minimum cuts, decomposition into strongly connected components, etc. Excellent stuff, and very well done. I would recommend this course without any reservation.
(22 Apr '12, 13:57) -iv-
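
Of the DAA topics listed above, randomized selection is the easiest to show compactly. A minimal Python sketch of the usual quickselect formulation (expected linear time), offered as an illustration rather than the course's implementation:

    import random

    def randomized_select(items, k):
        # Return the k-th smallest element (0-indexed) in expected O(n) time.
        pivot = random.choice(items)
        less    = [x for x in items if x < pivot]
        equal   = [x for x in items if x == pivot]
        greater = [x for x in items if x > pivot]
        if k < len(less):
            return randomized_select(less, k)
        if k < len(less) + len(equal):
            return pivot
        return randomized_select(greater, k - len(less) - len(equal))

    print(randomized_select([9, 1, 7, 3, 5], 2))  # 5, the median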
I am curious as to whether DAA's early inclusion of graph algorithms had anything to do with PGM being offered. @-iv- if I understand you correctly, you are saying PGM is similar to my take on NLP (not that conceptually hard, but demanding assignments, perhaps without the witchcraft ;-) (I'll note that I think NLP is getting more conceptually interesting as it goes on). Do you have a heavy-duty math background (or similar)? I do find PGM conceptually hard (particularly if I delve into the book), but I think much of that can be explained by a strange aversion I have to certain types of abstract math (I can usually do it if I try, but find it neither easy nor pleasant). I would be interested in any other thoughts you could offer on your PGM experience.
(22 Apr '12, 14:25) rseiter ♦
@rseiter: I have an MSc in applied probabilities, but I have never really studied what is being taught in PGM. The basis, however, is the same: Bayes, conditional probabilities, and the like are typically what you see at the beginning of an MSc. It is not particularly difficult: you just need to be careful and make sure that what you are calculating makes sense. Right now, I have only done three weeks of PGM (I am lagging behind because of my schedule). The first two weeks were just a deluge of definitions and were rather boring, but things seem to be picking up by week 3. The first homework was challenging because I had forgotten all about Octave. The second and third ones were interesting, as they introduced a way of representing and manipulating factors, and of computing something out of them. The only piece that is missing is the optimization algorithm (i.e. the function that takes those factors and finds the optimal parameters), but we are supposed to implement one by the end of the class. If so, that is awesome. I also like the fact that the programming assignments are not spoonfed as they were for the ML class. One factor that may contribute to the difficulty of the PGM class is that the videos sometimes lack depth on some topics. Instead, those points are the subject of quizzes, and they require you to spend time understanding the videos. An illustration of that was the question about I-maps during week 1.
(22 Apr '12, 15:40) -iv-
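
For readers who have not seen PGM, the factor representation mentioned above is easy to picture. The course's assignments are in Octave; the following is a hypothetical Python rendering for binary variables, not the class's actual data structure:

    from itertools import product

    # A factor is (variables, table): a tuple of variable names plus a dict
    # mapping each tuple of values (one per variable, 0 or 1 here) to a real.
    def factor_product(f, g):
        f_vars, f_table = f
        g_vars, g_table = g
        out_vars = f_vars + tuple(v for v in g_vars if v not in f_vars)
        out_table = {}
        for vals in product([0, 1], repeat=len(out_vars)):
            assignment = dict(zip(out_vars, vals))
            f_vals = tuple(assignment[v] for v in f_vars)
            g_vals = tuple(assignment[v] for v in g_vars)
            out_table[vals] = f_table[f_vals] * g_table[g_vals]
        return out_vars, out_table

    # P(A) * P(B|A) for binary A and B gives the joint P(A, B):
    pa  = (("A",), {(0,): 0.6, (1,): 0.4})
    pba = (("A", "B"), {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.8})
    print(factor_product(pa, pba)[1])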
@rseiter: (contd.) What I dislike about NLP is that the videos largely consist of presenting one model after the other without going into too much detail, and some issues are not even discussed. For instance, the second programming assignment asked you to implement Kneser-Ney smoothing to improve the performance of autocorrection. The problem is that I had no idea how to deal with out-of-vocabulary words when using Kneser-Ney: this topic is not discussed beyond Good-Turing during the course, and the clarification given in the forum was really confusing. I ended up implementing a hack which gave good results but seemed inconsistent. That left a bad taste in my mouth. The fourth programming assignment was even worse, asking you to find features for named entity recognition. No methodology was provided beforehand for finding good features or combining them. Often, adding what seemed like an interesting feature would ruin the results. I ended up submitting an average solution out of frustration; there was no point spending time on this exercise anyway, as I was not learning anything. Note that I am NOT asking to be spoonfed (I am very happy with the assignments from SaaS, DAA, Crypto, or PGM), but I think that asking students to find features out of the blue has no pedagogical value whatsoever.
(22 Apr '12, 15:59) -iv-
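
For context on the smoothing being discussed, here is a minimal sketch of interpolated bigram Kneser-Ney in Python. It is not the assignment's reference solution, and its handling of unseen contexts (falling back to the continuation probability) is just one possible choice; the out-of-vocabulary question raised above has no single canonical answer:

    from collections import Counter, defaultdict

    def kneser_ney_bigram(tokens, d=0.75):
        # Return a function prob(w, v) ~ P(w | v) with absolute discount d.
        bigram_counts = Counter(zip(tokens, tokens[1:]))
        context_counts = Counter(tokens[:-1])        # c(v) as a bigram context
        followers = defaultdict(set)                 # distinct w seen after v
        preceders = defaultdict(set)                 # distinct v seen before w
        for v, w in bigram_counts:
            followers[v].add(w)
            preceders[w].add(v)
        n_bigram_types = len(bigram_counts)

        def prob(w, v):
            p_cont = len(preceders[w]) / n_bigram_types  # continuation prob.
            if context_counts[v] == 0:
                return p_cont    # assumed OOV fallback; other choices exist
            discounted = max(bigram_counts[(v, w)] - d, 0) / context_counts[v]
            lam = d * len(followers[v]) / context_counts[v]
            return discounted + lam * p_cont

        return prob

    prob = kneser_ney_bigram("the cat sat on the mat".split())
    print(prob("cat", "the"))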
@-iv- I have similar feelings about NLP. I have found each of the PAs very frustrating. For the most part, at the end I felt like I had learned something, but it seemed like there should be a better way to get there. The NER assignment was a good example: I had multiple head-banging false starts where I made little progress, but it finally did come together for me and after that point I made good progress (I did stop at 11/12). In that case, the first few lectures of the following week would have been helpful, as would some references (I actually posted some paper links in the forum while I was trying to sort that out and got an amusing response from Chris Manning). There were a couple of really cool tools posted in the forums (visualization and optimization) for that PA. But I am still left feeling it should not be necessary to suffer through that much frustration to learn (I think they are trying to replicate the process of solving problems in the real world). My experience of the NLP lectures has been mixed; I found the week 6 lectures (lexicalized and dependency parsing) pretty good.
(22 Apr '12, 16:19) rseiter ♦
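
The feature-hunting frustration in the two comments above is easier to picture with an example. A minimal Python sketch of the kind of hand-crafted token features an NER assignment asks for; the specific features here are hypothetical illustrations, not the assignment's actual feature set:

    def word_features(words, i):
        # A few typical hand-crafted features for token words[i]; deciding
        # which features actually help is the open-ended part of the task.
        w = words[i]
        return {
            "word": w.lower(),
            "is_capitalized": w[:1].isupper(),
            "has_digit": any(c.isdigit() for c in w),
            "suffix3": w[-3:].lower(),
            "prev_word": words[i - 1].lower() if i > 0 else "<s>",
        }

    print(word_features(["John", "lives", "in", "Paris"], 3))
    # {'word': 'paris', 'is_capitalized': True, 'has_digit': False,
    #  'suffix3': 'ris', 'prev_word': 'in'}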
@rseiter: Yeah, well, I was in a rush, since these days I start doing the homework only hours before the hard deadline. I had a quick look at the forum to see if I could find any useful tips, but there was too much noise there for them to be of any help. If you are to believe what is written there, you should implement everything and its contrary. Some people clearly had no clue as to what they were doing, and were posting nonsensical advice, or at least recommending feature sets that would clearly overfit the training data. I plan on going back to this homework and looking at what others have done once the dust settles, but wasting my time on it last week would have been an exercise in futility. A more structured exercise, such as a preparatory exercise giving intuition about feature selection and combination, would have been a better approach.
(22 Apr '12, 16:38) -iv-
@-iv- time pressure definitely changes things. I agree a more structured exercise would have helped. I really hope Coursera does some work on their forums. They are noisy, and the lack of effective searching, the lack of support for bubbling good answers to the top, and the lack of comment/answer separation make the noise hard to work through. For getting intuition, I highly recommend checking out the tools in https://class.coursera.org/nlp/forum/thread?thread_id=1154 (104 upvotes for the thread!). I just wish I had found that thread before I was almost done...
(22 Apr '12, 16:54) rseiter ♦
@rseiter: Thank you! I wish I had found that thread last week; I somehow completely missed it. Not surprising, considering the poor quality of Coursera's forum software. The link you posted looks like it points to a really helpful tool. I do not mind the time pressure. I spent a lot of time on the first assignment, and I ended up losing points because some instructions were not entirely clear. Since then, I have been trying to spend as little time as possible on NLP assignments.
(22 Apr '12, 17:07) -iv-

There is one issue that isn't covered here but is implicit in the "Eventually, folks who are willing to put close to 100 percent of their bandwidth into online education should be better at it than the folks who put close to 0 percent." remark.

That issue is how little of a professor's advancement has to do with the quality of his/her teaching. There may well be equivalents to publication, generating research dollars, and committee participation in the online universe, but they aren't the most visible part of the job. I accept that Daphne Koller is certified to me by Stanford, but Peter Norvig is certified to me by Google (and Stanford). Once I see their classes (which I'm attracted to by their reputation and connections to institutions), I judge for myself.

In the case where I don't care about certificates of achievement (which I don't) the affiliation of the teacher loses its meaning once I've enrolled. In the case where there is some kind of external certification via test available it doesn't matter who taught me, it matters how I did on the test.

Unless you believe that a D+ after hearing Daphne Koller lecture is somehow more intrinsically valuable than a D+ after hearing Generic Professor lecture. ;-)


answered 22 Apr '12, 13:31

robrambusch ♦

I think that getting a D+ on a Daphne Koller test may be more valuable than getting a D+ on a Generic Professor test ;-) But I do realize you are thinking about equal tests. I think "Eventually, folks who are willing to put close to 100 percent of their bandwidth into online education should be better at it than the folks who put close to 0 percent." is an oversimplification. I would place my bets on someone who has a successful track record of teaching college students and then decides to put some portion (25%, say) of their time into online education over someone who does not have that track record and puts 100% of their time into online education. Some of the latter group will succeed (excel even, like Sal Khan), but not all. For me, the affiliation of a teacher does have relevance (when the online class is one they have taught live at college) because it validates their ability on that topic (I like Peter Norvig, but he may not be the best poetry teacher). The affiliation/experience also correlates well with the availability of reusable resources (autograders, existing programming assignments and infrastructure, etc.). Note that the existing class connection makes available a range of existing evaluation mechanisms (like http://www.ratemyprofessors.com/). I definitely agree about advancement vs. teaching. One of my best undergrad math professors was in a battle for tenure when I took his class, so I got a good look at that phenomenon.
(22 Apr '12, 13:46) rseiter ♦
People like Daphne Koller, Peter Norvig and Tim Roughgarden gain initial credibility based on their ties to famous, well-respected institutions. That is important at the moment. Over time, however, assuming that everything goes well, instructors from lesser organisations will be loaned credibility as a result of being selected by Coursera or Udacity, because we will trust Coursera to pick good instructors. Then the quality of instruction will go up, because teaching ability will matter more than non-teaching credentials.
(22 Apr '12, 13:47) jholyhead
@jholyhead - "People like Daphne Koller, Peter Norvig and Tim Roughgarden gain initial credibility based on their ties to famous, well respected institutions. That is important at the moment." In other words we are outsourcing our judgement of teachers to the Stanford faculty committees on tenure in much the same way that the employers we disparage do when they interview people based upon a four-year-old decision by the Stanford Admissions Office. ;-)
(22 Apr '12, 18:51) robrambusch ♦
@robrambusch an interesting observation. I've always assumed that employers' reliance on college admissions decisions in that way is at least partially caused by the limitations placed on the means they are permitted to use to make hiring decisions (e.g. from my limited understanding of employment law, there are issues with using test scores for hiring). I would scorn employer use of college of attendance (vs. demonstrated results there) less if the employers were able to filter out admissions criteria not related to job capabilities (e.g. athletic/legacy admissions).
(22 Apr '12, 19:45) rseiter ♦
At the moment that's the best system we have, but as I say, eventually, we will come to trust Coursera's judgement.
(23 Apr '12, 02:53) jholyhead
@rseiter - "I would scorn employer use of college of attendance (vs. demonstrated results there) less if the employers were able to filter out admissions criteria not related to job capabilities (e.g. athletic/legacy admissions)." Well one could always open the interview by asking, "Were you among the approximately one third of your class admitted on actual merit without preferential adjustments?". Perhaps that will work as people are unfailingly honest and forthright. ;-) Or universities could start being transparent by coding their degrees with preference scores in the same way that US civil service exams do, so many points for being a veteran, etc. Think of how annoying it must be for a Navajo-lesbian-one-armed-violinist who is really brilliant. She goes through life with people making assumptions both for and against her when her intelligence is high enough that she needs no preferential treatment to be chosen in any competitive environment.
(23 Apr '12, 09:35) robrambusch ♦

I find it intriguing to read all the naysaying (both in articles and comments) that seems to have little factual basis (there are of course valid concerns and criticisms, but much of it isn't well founded, IMHO). I think "I have my doubts about the Ivy League instructors. I assume that they are devoting very little of their time and energy to this." falls into that category. Did the person who wrote that even take a look at an online course?

I thought this article linked from yours was another good one: http://www.insidehighered.com/news/2012/04/18/princeton-penn-and-michigan-join-mooc-party


answered 21 Apr '12, 20:42

rseiter ♦

Yep, it seems to me the "time and energy" required to prepare the kind of problem sets and programming assignments we see in courses such as ML/PGM/NLP/SaaS is too high to be compatible with the author's assumption...
(22 Apr '12, 11:35) Ale