
View Full Version : Program Rankings Benchmark



PhDPolEcon
12-24-2014, 10:11 AM
There are so many references here to top 5, top 10, top 20, etc. My question is: what ranking list is used to make that determination?

The 2013 US News ranking?

http://grad-schools.usnews.rankingsandreviews.com/best-graduate-schools/top-humanities-schools/economics-rankings

Or is some other benchmark used?

Since rankings are so key to program selection, at the very least we should agree on which ranking list to use.

Feel free to add in the thread your own ranking of the top 10 schools.

Food4Thought
12-24-2014, 04:59 PM
You are correct: the US News ranking is usually what one is referring to when mentioning a top-X ranking.

I think it is silly to rank the top 10, though, because every single school there will provide excellent training and will land you an excellent job afterwards. If someone gets accepted to a top-10 program, they are best off just going where their geographical preferences and research interests are most closely matched. The only time one should really care about ranking school choices is in the 20-50 range.

publicaffairsny
12-24-2014, 07:23 PM
You are correct: the US News ranking is usually what one is referring to when mentioning a top-X ranking.

I think it is silly to rank the top 10, though, because every single school there will provide excellent training and will land you an excellent job afterwards. If someone gets accepted to a top-10 program, they are best off just going where their geographical preferences and research interests are most closely matched. The only time one should really care about ranking school choices is in the 20-50 range.

I don't know, I think there is a difference between those ranked 1-5 and those ranked 6-10.

lbox
12-24-2014, 07:39 PM
More like a difference between 1-2 and 3-10.

publicaffairsny
12-24-2014, 09:38 PM
More like a difference between 1-2 and 3-10.

Yes, there are clearly three tiers: 1-2, 3-6, and 7-10. (The schools at 5 and 6 are both ranked #5.)

Food4Thought
12-24-2014, 09:44 PM
More like a difference between 1-2 and 3-10.

You are correct. However, the term "ranking", in my mind, gauges the quality of the program. The real difference between Harvard/MIT and the rest of the top 10 is job placement, which is definitely worth considering. But in terms of quality of training, I hold to my position that there is no significant difference between going to Harvard and going to any of the other top-10 schools.

Plus, for someone not coming from an Ivy League school (or a school with specific connections to certain top schools, like BYU), the probability of getting into any of the top-10 schools is roughly the same, namely incredibly low. So, if one is going to invest money in applications to these programs, it is better not to worry about relative ranking but rather to apply where you are most competitive. This is where personal interests and comparative skills come into play.

publicaffairsny
12-24-2014, 10:01 PM
You are correct. However, the term "ranking", in my mind, gauges the quality of the program. The real difference between Harvard/MIT and the rest of the top 10 is job placement, which is definitely worth considering. But in terms of quality of training, I hold to my position that there is no significant difference between going to Harvard and going to any of the other top-10 schools.

Plus, for someone not coming from an Ivy League school (or a school with specific connections to certain top schools, like BYU), the probability of getting into any of the top-10 schools is roughly the same, namely incredibly low. So, if one is going to invest money in applications to these programs, it is better not to worry about relative ranking but rather to apply where you are most competitive. This is where personal interests and comparative skills come into play.

I agree generally, but I actually think rankings probably say more within the top 10 than outside it. If you want to talk about quality of training, all programs teach the same material. The difference is who you are being taught by and whom you are learning alongside, along with differences in culture. Within the top 10, the difference is the value of the program's brand on the market. Harvard, MIT, Chicago, Princeton, and Berkeley have distinct brands, and the value of these brands is reflected in the rankings. I think in 7-10 this branding is less pronounced, and some lower-ranked programs actually have more distinct brands.

Food4Thought
12-25-2014, 05:50 AM
I agree generally, but I actually think rankings probably say more within the top 10 than outside it. If you want to talk about quality of training, all programs teach the same material. The difference is who you are being taught by and whom you are learning alongside, along with differences in culture. Within the top 10, the difference is the value of the program's brand on the market. Harvard, MIT, Chicago, Princeton, and Berkeley have distinct brands, and the value of these brands is reflected in the rankings. I think in 7-10 this branding is less pronounced, and some lower-ranked programs actually have more distinct brands.

That is exactly what I am saying. The reason the top-10 programs are ranked the way they are is prestige or job placement, not quality of training, unlike most other schools.

This is why I maintain that the marginal difference in quality, in all respects, diminishes as you go up the rankings. The drop-off in professor quality as one goes down the rankings is much slower than the drop-off in student quality, so that doesn't really support informative top-10 rankings. Similarly, student quality does not differ much across the top 10. Are you willing to argue that the difference between a student at Harvard and one at UPenn or Columbia is of a similar magnitude to the difference between a student at Boston University and one at Arizona State?

I don't see any reason to believe that rankings become more informative as one goes up the rankings; rather, there seems to be good reason to believe that they become more arbitrary.

publicaffairsny
12-25-2014, 06:26 AM
That is exactly what I am saying. The reason the top-10 programs are ranked the way they are is prestige or job placement, not quality of training, unlike most other schools.

This is why I maintain that the marginal difference in quality, in all respects, diminishes as you go up the rankings. The drop-off in professor quality as one goes down the rankings is much slower than the drop-off in student quality, so that doesn't really support informative top-10 rankings. Similarly, student quality does not differ much across the top 10. Are you willing to argue that the difference between a student at Harvard and one at UPenn or Columbia is of a similar magnitude to the difference between a student at Boston University and one at Arizona State?

I don't see any reason to believe that rankings become more informative as one goes up the rankings; rather, there seems to be good reason to believe that they become more arbitrary.

I think tiers probably mean more than individual rankings throughout. Within tiers, rankings take on a second meaning, which is brand positioning. And I think this effect is most pronounced in the top 10.

As for the comparison between students, I think the gap between a Harvard and a UPenn admit is probably similar to that between a BU and an Arizona State admit, though the latter two schools are separated by a wider gulf and the top-30 divide. A better comparison might be Rochester and Michigan State. I think applicants might be quite interchangeable in terms of observables, but there is clear sorting based on intangibles.

Econhead
12-25-2014, 12:51 PM
In terms of what applicants should be paying attention to, average placement doesn't mean anything if you know what field you want to go into. However, if you are open to a wide range of fields, average placement is certainly more important. Simply put, if you specialize in macro and you have a choice between a T15 and a T25, and both schools match your interests equally (as do all other qualifications), you need to look at where the macro guys are placing people. It is not inherently true that your placement will be better at a better-ranked school; it depends on who is there to sell people well. Connections are worth more than rank or university prestige.

PhDPolEcon
12-25-2014, 03:02 PM
How can one parse the rankings in terms of training, not placement? Personally, I am more interested in training than placement. Does anyone have a ranking list based on "training"?

publicaffairsny
12-25-2014, 05:13 PM
How can one parse the rankings in terms of training, not placement? Personally, I am more interested in training than placement. Does anyone have a ranking list based on "training"?

All programs teach the same material. Your advisor will be a tenured prof, someone who has (most likely) completed a PhD at a good school and proven competent enough as a prof or researcher to get tenure. You're gonna get trained well anywhere you go. However, the higher you get up the rankings, the more likely your profs will be from top-5 schools (though probably 50% are as it is at most top-50 schools; the rest are just plain good (see revision below)), and the better prepared your peers will be. If you think these measures are important to your definition of training, then you want to shoot higher. If you believe that the student will become the teacher, then it may not matter who the teacher is, as long as they know the fundamentals.

Of course, all of this is probably meaningless to anyone but yourself as a recent grad on the market. The rank of your program is gonna have an effect on your placement.

Food4Thought
12-25-2014, 09:13 PM
How can one parse the rankings in terms of training, not placement? Personally, I am more interested in training than placement. Does anyone have a ranking list based on "training"?

I guess publicaffairsny and I have fundamentally different views on this issue.

I don't think there is any set list of schools ranked by training, but if you ask a professor, there will certainly be schools with a better reputation for teaching than others.

Since you are interested in training more than placement, here is my advice: your number-one priority is getting into the best possible program. After you have taken that into consideration, you should do research on the faculty. Look to see how many senior faculty there are. If there are senior faculty covering all of the fields, that is a good sign. If there is a field (let's say micro theory) where most of the faculty are junior, then your micro courses will most likely be taught by people who are relatively inexperienced at teaching. For instance, it is not uncommon for fresh PhDs from Berkeley to have no teaching experience other than being a graduate TA.

This will not hold in general, obviously. But for instructional purposes, I would prefer to be taught by senior faculty. In my micro class this past semester, we had a relatively new professor teach us, and it was very painful.

publicaffairsny
12-25-2014, 11:05 PM
That's a good point. Programs will have different strengths in terms of field. One school I am looking at advertises certain strengths but when I talked to a professor there he told me they have trouble retaining faculty in one of the fields. Of course you aren't gonna get that info without a personal connection.

ColonelForbin
12-26-2014, 12:05 AM
You are all missing a huge part of the equation: peer effects. You're going to spend much more time learning from and teaching your peers than being mentored by professors. Having great peers matters a lot, especially at your first placement, since you'll already have a great research network.

publicaffairsny
12-26-2014, 12:10 AM
You are all missing a huge part of the equation: peer effects. You're going to spend much more time learning from and teaching your peers than being mentored by professors. Having great peers matters a lot, especially at your first placement, since you'll already have a great research network.

I said that here:

The difference is who you are being taught by and whom you are learning alongside, along with differences in culture.

and here:

However, the higher you get up the rankings, the more likely your profs will be from top-5 schools (though probably 50% are as it is at most top-50 schools; I would revise this and say the number of top-5 PhDs drops off as you go farther down, but more than 50% are still top-30, and every program will have people from the top 5; the rest are just plain good), and the better prepared your peers will be.

ColonelForbin
12-26-2014, 12:10 AM
Also, PANY, placement is a very close proxy for training quality. The academic job market is fairly efficient. If you want good training, go to a school with good placement. If you want to teach, go to a school with good teaching placements.

publicaffairsny
12-26-2014, 12:22 AM
Also, PANY, placement is a very close proxy for training quality. The academic job market is fairly efficient. If you want good training, go to a school with good placement. If you want to teach, go to a school with good teaching placements.

There is clear selection bias going on. I would argue that placement has nothing to do with training and everything to do with the quality of inputs, the higher up the rankings you go.

ColonelForbin
12-27-2014, 01:09 AM
I guess you win. I've just given up reading your posts mostly.

publicaffairsny
12-27-2014, 01:17 AM
Don't take it personally. The point is to debate.

Blanket
12-27-2014, 04:55 AM
Don't take it personally. The point is to debate.

It really isn't. The point of this forum is to help prospective economics graduate students, and making declarative, comparative statements about the training received in different Ph.D. programs is nothing more than silly. There are very few here who have taken Ph.D. courses at one institution, let alone two.

publicaffairsny
12-27-2014, 05:55 AM
In some threads the point is advice. In others it's to further our understanding of the nuances associated with admissions and institutions. It's generally impossible to further knowledge without debate. I suggest you get used to it.

Food4Thought
12-27-2014, 06:25 AM
In some threads the point is advice. In others it's to further our understanding of the nuances associated with admissions and institutions. It's generally impossible to further knowledge without debate. I suggest you get used to it.

You are on an economics forum. So, you should understand that truth about the subtleties of graduate programs will not be achieved a priori. Debates are worthless if they are based on opinions.

From my personal experience interacting with PhD students from these institutions, the people who get into Harvard/MIT, by and large, are the kinds of people who went to prestigious undergrads and had amazing connections. Yes, they are brilliant, but so is the top student at the University of Alabama. A lot of non-academic factors go into deciding where someone goes to undergrad, and some people draw the short straw.

UPenn, NYU, and other top 5-10 schools are often where the best and brightest of lower-ranked schools peak, admissions-wise, because the top-5 schools are the ones who sweep up many of the best students who had the connections early. This is my point: you seem to think that admissions are not path-dependent in any way, that Harvard is better because students at Harvard are vastly superior. You will need to substantiate your claim, because I guarantee you that among the only differences between students at Harvard, MIT, Yale, and Princeton and ones at UPenn, NYU, and Columbia will be their letters of recommendation and the prestige of their undergrad.

If you have facts on which to base your opinion, please let them be known. If you don't, then let's agree that your opinion is unsubstantiated and better described as rhetoric.

publicaffairsny
12-27-2014, 06:40 AM
You are on an economics forum. So, you should understand that truth about the subtleties of graduate programs will not be achieved a priori. Debates are worthless if they are based on opinions.

From my personal experience interacting with PhD students from these institutions, the people who get into Harvard/MIT, by and large, are the kinds of people who went to prestigious undergrads and had amazing connections. Yes, they are brilliant, but so is the top student at the University of Alabama. A lot of non-academic factors go into deciding where someone goes to undergrad, and some people draw the short straw.

UPenn, NYU, and other top 5-10 schools are often where the best and brightest of lower-ranked schools peak, admissions-wise, because the top-5 schools are the ones who sweep up many of the best students who had the connections early. This is my point: you seem to think that admissions are not path-dependent in any way, that Harvard is better because students at Harvard are vastly superior. You will need to substantiate your claim, because I guarantee you that among the only differences between students at Harvard, MIT, Yale, and Princeton and ones at UPenn, NYU, and Columbia will be their letters of recommendation and the prestige of their undergrad.

If you have facts on which to base your opinion, please let them be known. If you don't, then let's agree that your opinion is unsubstantiated and better described as rhetoric.

There's some contradiction here. On one hand, folks are saying that ranking is a proxy for training. On the other, I'm hearing that students are equally distributed throughout programs and their outcomes depend on their own skills. This is more in line with what I'm saying: that training is basically the same at Harvard and NYU, but Harvard may place better because of rankings. Over a wider range, there are clear selection reasons why you can't compare the placement of a Harvard grad with someone from the number-35 school. The students are just different on average. You can't definitively say it has to do with training.

And I don't have to have facts about admissions to disprove your facts. I can point out that your model is flawed, and that's an equally valid positive statement. Not rhetoric.

fakeo
12-27-2014, 05:42 PM
Regarding the original question in the thread: besides US News, the two other most commonly used rankings (AFAIK) are the RePEc (https://ideas.repec.org/top/) and Tilburg (https://econtop.uvt.nl/rankinglist.php) rankings. Both are international but can be used to compile country-specific rankings as well.

RePEc is pretty amazing because it gives lots and lots of "special" rankings, such as rankings by field and the like. If I understand correctly, its disadvantage is that it heavily depends on whether authors' papers are uploaded to RePEc. But then again, most papers are, so that's not such a huge issue.

Tilburg only measures (https://econtop.uvt.nl/methodology.php) actual publications in various (top) journals, which is as objective as you can get. The downside is that there is no correction for the size of the department. Since larger departments with more faculty will have more publications on average, ceteris paribus, the ranking is biased towards them.

Still, I personally prefer Tilburg because its methodology is very clear and uses a single objective measure. By contrast, the methodology of US News (http://www.usnews.com/education/best-graduate-schools/articles/2014/03/10/methodology-best-social-sciences-and-humanities-schools-rankings) is surveys (conducted in 2012 with a 25% response rate), and RePEc can be manipulated somewhat by flooding it with working papers (e.g., the ranks of BGSE, PSE, and TSE seem inflated compared to other rankings).

Quite reassuringly, though, the three rankings largely agree with each other.
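
To make the size-bias point concrete, here is a minimal sketch (in Python, with made-up department names and numbers, not actual Tilburg data) of how a per-faculty adjustment can reshuffle a raw publication-count ranking:

```python
# Hypothetical illustration: re-rank departments by publications per
# faculty member, since a raw publication count favors larger departments.
departments = {
    "Dept A": {"pubs": 120, "faculty": 60},
    "Dept B": {"pubs": 90,  "faculty": 30},
    "Dept C": {"pubs": 100, "faculty": 50},
}

# Raw ranking: sort by total publications, descending.
raw = sorted(departments, key=lambda d: departments[d]["pubs"], reverse=True)

# Size-adjusted ranking: sort by publications per faculty member.
per_capita = sorted(
    departments,
    key=lambda d: departments[d]["pubs"] / departments[d]["faculty"],
    reverse=True,
)

print(raw)         # Dept A leads on raw output
print(per_capita)  # Dept B leads once size is accounted for
```

In the toy numbers, the largest department tops the raw count while a smaller, more productive one tops the per-capita list, which is exactly the bias described above.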

behavingmyself
12-29-2014, 11:10 PM
Quality of training varies substantially from school to school, and even across fields within the same school, in ways which are not always well-captured by faculty research rankings. The difference is not as pronounced in the first year or two of grad school when everyone is learning the same things, but it becomes quite important during the research phase of the PhD. This is why it's important to get a sense of how much interaction students have with faculty when you go on visit days.

As empirical support for my claim, observe the large differences in the distributions of research productivity of alumni of various programs, and notice that output is only modestly correlated with faculty research ranking:

http://pubs.aeaweb.org/doi/pdfplus/10.1257/jep.28.3.205

publicaffairsny
12-29-2014, 11:51 PM
I don't interpret that study as demonstrating what you claim. It suggests to me that top students will perform no matter where they are trained: variation between programs is more reasonably explained by the initial sorting process, which concentrates the most prepared applicants in top-40 programs and produces outliers like Carnegie Mellon and Rochester. The article provides no discussion of a link to training other than the hypothesis that top students are the only ones actually getting trained.

Food4Thought
12-30-2014, 01:22 AM
I don't interpret that study as demonstrating what you claim. It suggests to me that top students will perform no matter where they are trained: variation between programs is more reasonably explained by the initial sorting process, which concentrates the most prepared applicants in top-40 programs and produces outliers like Carnegie Mellon and Rochester. The article provides no discussion of a link to training other than the hypothesis that top students are the only ones actually getting trained.

I agree. From my sporadic exploration of the PhD Ranking --> Research Productivity literature, it is better to be the top student at a lower ranked program (within reason) than to be the median or sub-median student at a relatively higher ranked program.

That being said, is being a mediocre student at somewhere like Stanford really something to complain about?

PhDPlease
12-30-2014, 01:24 AM
I am under the impression that while program ranking is correlated with placement on average, some programs have abnormally good placement relative to rank and some have abnormally bad placement relative to rank (at least for some period of time). I would think at least part of that could be due to training (not necessarily that the professors are smarter, but that there is more of an atmosphere of interaction between faculty and students, the faculty are expected to take their interaction with grad students seriously, etc.), although I guess you cannot prove that schools with abnormally good or bad placement relative to their rank aren't endogenously attracting better students than their rank would suggest, or that it isn't random noise.

Food4Thought
12-30-2014, 01:35 AM
*speculation warning*

I think faculty size could play a huge role in explaining that phenomenon. If two departments are equally ranked and one has 60% fewer faculty, the faculty at the smaller department must be that much more productive in research than their counterparts at the larger one. These qualities probably rub off on the students, hence better placement.

publicaffairsny
12-30-2014, 01:48 AM
Other factors:

Underdog effect at Carnegie Mellon and Rochester.

Higher-paying private-sector opportunities that lead graduates from higher-ranked programs to forgo research.

PhDPlease
12-30-2014, 01:54 AM
*speculation warning*

I think faculty size could play a huge role in explaining that phenomenon. If two departments are equally ranked and one has 60% fewer faculty, the faculty at the smaller department must be that much more productive in research than their counterparts at the larger one. These qualities probably rub off on the students, hence better placement.

I think that seems true for a non-size-adjusted ranking. On the other hand, if two schools are equally ranked in a size-adjusted ranking, the larger school has more potential advisers, and at a large school, if a few faculty do not like to work with students or are a bad fit, it is less of a concern, as there are more other faculty to potentially work with. (Speculation alert as well, of course.)

fakeo
12-30-2014, 08:09 AM
Quality of training varies substantially from school to school, and even across fields within the same school, in ways which are not always well-captured by faculty research rankings. The difference is not as pronounced in the first year or two of grad school when everyone is learning the same things, but it becomes quite important during the research phase of the PhD. This is why it's important to get a sense of how much interaction students have with faculty when you go on visit days.

As empirical support for my claim, observe the large differences in the distributions of research productivity of alumni of various programs, and notice that output is only modestly correlated with faculty research ranking:

http://pubs.aeaweb.org/doi/pdfplus/10.1257/jep.28.3.205

This research is nice, but it has its caveats. I've done my own analysis using the numbers in the paper and identified three departments that are pretty good relative to their usual rank: Rochester, UCSD, and CMU. Now, the average cohort sizes at these programs, according to the paper, are 8.7, 6.1, and 2.0, respectively, while the sample average is 15.1 (median 15.9). So these programs are so successful because they don't have a lot of students. Seriously, a cohort size of 2 at CMU?! I wouldn't necessarily say that their training quality is so high. It's more about only admitting very good students and not bothering with anyone who has even a slight risk of being mediocre.

Also, the data are for 1986-2000. Lots could have changed since then. So I will agree with @publicaffairsny: what this paper really tells me is that it's better to be a top student at a lower-ranked program than a mediocre one at H/M. Which is very encouraging in and of itself.

By the way, you do know that RePEc also has a ranking by the quality of the graduate program (https://ideas.repec.org/top/top.inst.students.html) (as proxied by the strength of students)?

publicaffairsny
12-30-2014, 03:19 PM
The data also suggest that if 80% publish nothing, a lot of people are doing things besides research. Most folks on here are going in with the intent to research. But the 80% that don't succeed must be exercising various exit options. Besides teaching positions, what are they, and to what extent do people take them?

behavingmyself
12-30-2014, 05:13 PM
My point was that department research rank is not a sufficient statistic for a department's value-added to PhD students. If you are willing to believe that the composition of entering classes is not wildly dissimilar at schools with similar research rank (I find this reasonable), then this claim is demonstrated by the data in the paper I linked. The mechanism for this difference in value-added is a matter of speculation, though I think it mostly stems from advising and peer effects.

I further claim (based on personal experience, without supporting evidence) that there is heterogeneity by field and by advisor in value-added even within departments, and that the stronger fields and advisors within a school are not necessarily the most research productive. This is why it's important to pay attention to placement records, and to go to visit days to find out about the environment within the departments you are choosing among.

behavingmyself
12-30-2014, 05:19 PM
The data also suggest that if 80% publish nothing, a lot of people are doing things besides research. Most folks on here are going in with the intent to research. But the 80% that don't succeed must be exercising various exit options. Besides teaching positions, what are they, and to what extent do people take them?

1) Only 50% of the sample had exactly 0 publications, and only 40% of students from top-30 schools. The very low numbers at most quantiles are because these researchers are publishing in extremely low-value journals, where their contribution to economics as a discipline is quite small.

2) You can look at placements pages to see what people do after their PhD. Some fraction of people work in the private sector and some people work in government positions which don't require publication. But much of what's happening is that publishing is hard and many people find it hard to publish even though their jobs require publication. If you get an academic job at a good school, you can drop down to a lower-ranked school even without publications. People with initial placements at low-ranked schools often have to leave academia if they aren't able to publish.

behavingmyself
12-30-2014, 05:38 PM
This research is nice but it has its caveats. I've done my analyses using the numbers in the paper and identified three departments that are pretty good relative to their usual rank: Rochester, UCSD and CMU. Now, the average cohort size according to the paper at these programs is 8.7, 6.1 and 2.0, respectively. While the sample average is 15.1 (median 15.9). So these programs are so successful because they don't have a lot of students. Seriously, a cohort size of 2 at CMU?! I wouldn't necessarily say that their training quality is so high. It's more about only admitting very good students, and not bothering with anyone who has a slight risk of being mediocre.

I regressed a measure of publications (ln of AER equivalents, substituting -6 for those with 0 pubs) on cohort size. Cohort size has a negligible relationship. Controlling for rank and rank^2 didn't strengthen the relationship, and, including a school fixed effect, cohort size was insignificant. Obviously, we don't know the true data-generating process, so I don't want to claim that there is no causal connection, but at least the correlation you're describing doesn't seem to be systematic.
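
For anyone who wants to replicate that exercise, here is a sketch of the basic regression in Python. The paper's microdata isn't reproduced here, so the numbers below are simulated under the null of no cohort-size effect; only the outcome construction (ln of AER equivalents, with -6 substituted for zero-publication graduates) follows the description above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated program cohort sizes for n graduates (hypothetical values).
cohort_size = rng.integers(2, 30, size=n).astype(float)

# Simulate AER-equivalent publication counts unrelated to cohort size;
# roughly half the sample publishes nothing, matching the paper's pattern.
pubs = rng.exponential(scale=0.5, size=n) * rng.binomial(1, 0.5, size=n)

# Outcome: ln(AER equivalents), substituting -6 for zero-publication grads.
y = np.where(pubs > 0, np.log(pubs), -6.0)

# OLS of y on cohort size with an intercept, via least squares.
X = np.column_stack([np.ones(n), cohort_size])
(intercept, slope), *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"slope on cohort size: {slope:.3f}")
```

Rank and rank-squared controls or school fixed effects would just be extra columns in `X`; the point of the sketch is only the outcome transformation and the bivariate fit.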

Team3
12-30-2014, 08:19 PM
The data also suggest that if 80% publish nothing, a lot of people are doing things besides research. Most folks on here are going in with the intent to research. But the 80% that don't succeed must be exercising various exit options. Besides teaching positions, what are they, and to what extent do people take them?

Here's a critique of the first couple of paragraphs (which is all the time you get to convince someone to skim the rest of your article): Conley and Onder define "research" too narrowly to be of any help.

Op-Ed: it's pretty arrogant to define "research" as the thing academics do and everything else as "the needs of business or industry." Hint: departments aren't training people to __fill_in_the_blank__ (litigation, policy, portfolio choice, etc.); they're training people in the research craft that serves as the foundation of __fill_in_the_blank__.

Harsher Op-Ed: actually, nevermind.

publicaffairsny
12-31-2014, 01:32 AM
Here's a critique of the first couple of paragraphs (which is all the time you get to convince someone to skim the rest of your article): Conley and Onder define "research" too narrowly to be of any help.

Op-Ed: it's pretty arrogant to define "research" as the thing academics do, and everything else as "the needs of business or industry." Hint: departments aren't training people to __fill_in_the_blank___(litigation, policy, portfolio choice, etc.), they're training people in the research craft that serves as the foundation of __fill_in_the_blank__.

Harsher Op-Ed: actually, nevermind.

No judgment intended. I was just trying to define those exit options better for myself.

Team3
12-31-2014, 01:39 PM
No judgment intended. I was just trying to define those exit options better for myself.

Exit options are one of the major things that change with program rankings: in particular, access to certain private firms and prestigious quasi-governmental institutions for the median student tapers with program ranking. One reason this occurs is that "known-commodity-ness" picks up fatter tails as one moves down the rankings, and employers focus recruiting efforts where they're likely to get the best match for their offers.

publicaffairsny
12-31-2014, 03:02 PM
One connection I didn't immediately make when talking about this data is that we have no indication from the study of where the top research producers ranked within their programs relative to their classmates, by objective measures. For all we know, the best research prospects could be the lowest-ranking students with the weakest job placements.

Food4Thought
12-31-2014, 05:34 PM
One connection I didn't immediately make when talking about this data is that we have no indication from the study of where the top research producers ranked within their programs relative to their classmates, by objective measures. For all we know, the best research prospects could be the lowest-ranking students with the weakest job placements.

Personal experience suggests the obvious answer: the most successful and well-published were publishing very early in their careers, often having several publications or many papers in the pipeline when on the job market. This would indicate they placed well relative to their cohort.

I don't think I've ever seen a CV of anyone successful who has taken many years after PhD to really hit their stride.

publicaffairsny
12-31-2014, 05:50 PM
Personal experience suggests the obvious answer: the most successful and well-published were publishing very early on in their career, often having a solo publication in the pipeline when they are on the job market. This would indicate they were placed well relative to their cohort.

I don't think I've ever seen a CV of anyone successful who has taken many years after PhD to really hit their stride.

Is it unusual to be attempting a solo publication during your candidacy? I would think it's a given.