Drugmonkey Grants, research and drugs

WebSite: http://drugmonkey.scientopia.org

As you know, the end of the Federal fiscal year can be a fun time for hopeful NIH grant applicants. This is when your favorite ICs are counting up the beans and making sure they use all of their appropriated money. This means that they often pick up some grants with scores that otherwise looked like they were not going to fund. This is great if you are one of the lucky ones. I have had 9/30 grant starts in the past. It feels awesome.

I am thinking about the Hoppe finding and the original Ginther report. I am thinking, as always, about the NIH's complete and utter failure to address this issue. And I am hopeful. Always. That IC Directors will take it upon themselves to follow the advice I had right from the start. Just FIX this. It won't take much. Just a few extra grant pickups that happen to have Black PIs. End of year is a great time to slip one or three or five into the portfolio. Nobody can complain about these decisions. So pull up RePORTER, click the start date of 9/15/2020, enter the two-letter code for your favorite ICs and start searching.

As you will recall, the Hoppe et al. 2019 report [blogpost] replicated Ginther et al. 2011 with a subsequent slice of grant applications, demonstrating that after the news of Ginther, with a change in scoring procedures and changes in permissible revisions, applications with Black PIs still suffered a huge funding disparity. Applications with white PIs are 1.7 times more likely to be funded. Hoppe et al. also identified a new culprit for the funding disparity to applications with African-American / Black PIs. TOPIC! "Aha," they crowed, "it isn't that applications with Black PIs are discriminated against on that basis, no. It's that the applications with Black PIs just so happen to be disproportionately focused on topics that just so happen to have lower funding / success rates."
Of course it also was admitted very quietly by Hoppe et al. that: "WH applicants also experienced lower award rates in these clusters, but the disparate outcomes between AA/B and WH applicants remained, regardless of whether the topic was among the higher- or lower-success clusters (fig. S6)." (Hoppe et al., Science Advances, 2019 Oct 9;5(10):eaaw7238. doi: 10.1126/sciadv.aaw7238)

If you go to Supplement Figure S6 you can see that for each of the five quintiles of topic clusters (ranked by award rates), applications with Black PIs fare worse than applications with white PIs. In fact, in the least-awarded quintile, which has the highest proportion of the applications with Black PIs, the white PI apps enjoy a 1.87-fold advantage, higher than the overall mean of the 1.65-fold advantage.

Record scratch: as usual, I find something new every time I go back to one of these reports on the NIH funding disparity. The overall award rate disparity was 10.7% for applications with Black PIs versus 17.7% for those with white PIs. The takeaway from Hoppe et al. 2019 is reflected in the left side of Figure S6, which shows that the percentage of applications with Black PIs is lowest (~10%) in the topic domains with the highest award rates and highest (~28%) in the domains with the lowest award rates. The percentages are more similar for apps with white PIs, approximately 20% per quintile. But the right side lists the award rates by quintile. And here we see that in the second-highest award-rate topic quintile the disparity is similar to the mean (12.6% vs 18.9%), but in the top quintile it is greater (13.4% vs 24.2%, a 10.8 percentage-point gap versus the 7 percentage-point gap overall). So if Black PIs followed Director Collins's suggestion that they work on the right topics with the right methodologies, they would fare even worse, due to the 1.81-fold advantage for applications with white PIs in the top, most-awarded topic quintile!
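If you want to check the fold-advantage figures above yourself, a quick sketch (the rates are the ones quoted from Hoppe et al.; the function and variable names are mine):

```python
# Recompute the fold advantages quoted above from the award rates
# reported in Hoppe et al. 2019.

def fold_advantage(white_rate, black_rate):
    """How many times more likely a white-PI app was to be awarded."""
    return white_rate / black_rate

overall = fold_advantage(17.7, 10.7)       # overall award rates
top_quintile = fold_advantage(24.2, 13.4)  # most-awarded topic quintile

print(f"overall: {overall:.2f}-fold")        # 1.65-fold
print(f"top quintile: {top_quintile:.2f}-fold")  # 1.81-fold
```

Which is the arithmetic behind the claim that chasing the "right" topics would make the disparity worse, not better.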
Okay, but what I really started out to discuss today was a new tiny tidbit provided by a post on the Open Mike blog. It reports the topic clusters by IC. This is cool to see, since the word clusters presented in Hoppe (Figure 4) don't map cleanly onto any sort of IC assumptions. All we are really concerned with here is the ranking along the X axis. From the blog post: "17 topics (out of 148), representing 40,307 R01 applications, accounted for 50% of the submissions from African American and Black (AAB) PIs. We refer to these topics as 'AAB disproportionate' as these are topics to which AAB PIs disproportionately apply."

Note the extreme outliers. One (MD) is the National Institute on Minority Health and Health Disparities. I mean, seriously. The other (NR) is the National Institute of Nursing Research, which is also really interesting. Did I mention that these two ICs get 0.8% and 0.4% of the NIH budget, respectively? The NIH mission statement reads: "NIH's mission is to seek fundamental knowledge about the nature and behavior of living systems and the application of that knowledge to enhance health, lengthen life, and reduce illness and disability." Emphasis added. The next one (TW) is the Fogarty International Center, which focuses on global health issues (hello, global pandemics!) and gets 0.2% of the NIH budget.

Then we get into the real meat. At numbers 4-6 on the AAB Disproportionate list of ICs we reach the National Institute of Child Health and Human Development (HD, 3.7% of the budget), NIDA (DA, 3.5%) and NIAAA (AA, 1.3%). And clocking in at 7 and 9 we have the National Institute on Aging (AG, 8.5%) and the NIMH (MH, 4.9%). These are a lot of NIH dollars being expended in ICs of central interest to me and a lot of my audience. We could have made some guesses based on the word clusters in Hoppe et al. 2019, but this gets us closer. Yes, we now need to get deeper and more specific. What is the award disparity for applications with Black vs white PIs within each of these ICs?
How much of that disparity, if it exists, is accounted for by the topic choices within each IC? And let's consider the upside. If, by some miracle, a given IC is doing particularly well with respect to funding applications with Black PIs fairly… how are they accomplishing this variance from the NIH average? What can the NIH adopt from such an IC to improve things? Oh, and NINR and NIMHD really need a boost to their budgets. Maybe NIH Director Collins could put a 10% cut prior to award on the other ICs to improve investment in the applying-knowledge-to-enhance-health goals of the mission statement?

A certain someone has taken it upon himself to lampoon certain types of solicitations issued by a lab head for postdocs, and occasionally for graduate students, when they appear on Twitter. The triggering material in such solicitations includes terms such as "independent", "energetic", "brilliant", "highly motivated", "creative" and the like. Sometimes the trigger for this certain someone is merely a comment that the applicant should be experienced in some particular scientific technique. Seemingly inoffensive and very traditional, right? I mean, every lab head wants the lab to be as successful as possible, and that means they want good rather than bad employees. Whoops. But we're talking about trainees, right? Graduate students and postdoctoral trainees. They are supposed to be getting something from the lab, not the other way around. Correct? So this over-emphasis on how the PI only wants to hire the most talented, rather than the most needy, individuals pulls back the curtain to reveal the seamy truth. Trainees in biomedical science are in large part the workforce. Which is obtained for less money due to the training misdirection.

This is one that set me off recently, thanks to our beloved aforementioned trollerpants. Chit-chat amongst the Professor class that they "need" a postdoc now.
Or general announcements that they will soon be looking to hire a graduate student in their new appointment, whee! But "need". And of course coupling this to the above focus on the very best, most motivated, well-trained, energetic, self-starting individuals? The notion of actually competing for the best of the available postdocs raises an ugly head.

You will be entirely unsurprised that I couple all of this to my views on labor in academic research labs and, in particular, the way we go along deluding ourselves that we are not part of any sort of labor market. I couple this to my thinking about ways to make academic careers slightly less hellish on the factors which are usually rubbing points. Thinking more about the labor aspects of what is now academic training lets us think, I believe, more creatively about making things better for all of us. No, it does not magically invent more Professor jobs. It does not restore State-level commitment of funds to public Universities and thereby relieve the pressure for extramural funds. It does not make the NIH budget double overnight and therefore reduce pressure for the grant seekers. But creating stable, long-term job categories for those who are now some thin rebranding of "postdoc" could advance us. Creating stable career jobs to do the pure work part of the graduate student job could advance us. Yes, this means we will train fewer graduate students and replace that labor with technicians. Who will be more or less expected to journey through their career as a career. Benefits. Increasing salaries with experience and longevity.

One of the reasons NIH put the modular budget in place was to get reviewers to stop with the ticky-tack over costs. Costs that vary all across the country from place to place. Costs that a certain species of reviewer just could not get through their head would vary.
Costs that a certain species of reviewer delighted in using to spike a grant because those outrageous cage costs at Big U were higher than they were paying at their LessBig U. A certain species of reviewer is very concerned about salaries paid, if they can just get their beady little eyes on the information. A related species is very concerned about how many individuals are being paid off the grant, if they can just get their eyes on that information. It is very hard to get their eyes on contributions by graduate students or postdocs who are on a fellowship or Program-paid stipend. It is inevitable that they get their eyes on technician salaries when looking at an itemized budget. I recently received a grant review comment that clearly I was paying my technical staff too much, coupled with an obviously grudging admission that the person had long experience as a technician.

I have related more than once on these pages that over time I have generally relied more on tech labor than on the trainee scam. This, as our second President of the USA John Adams famously remarked about his refusal to use enslaved labor, costs me. It costs my grants, and therefore I get less productivity per dollar compared with someone who is willing to fully exploit cut-rate labor under guise of "trainee" job categories. I do not turn my techs over willy-nilly every several years to reset salaries, either. And the way things work in these here United States, people get paid more over time. Those with more experience get paid more than those with less, even if the less experienced person could do the same job. So when my peers who review my grants say that the merit of my proposal is diminished because I make these labor choices in my lab, and suggest that what I should be doing is exploiting the heck out of labor by using less experienced and cheaper techs… For every reviewer who is dumb enough to actually write this in a critique, there are ten who are thinking it.
They are taking a less positive cant on my proposal as a consequence. And possibly looking for other ways to express their disapproval. I myself have occasionally fallen into the "too many staff for the work described" review space. I've done it super rarely, so I think I'm probably on solid ground. The only cases I can recall were really, really egregious. But I need to watch myself, as do you. How often are you thinking that a major grant will receive the supplemental help of undergraduate "interns"? Graduate students or postdocs on their own fellowships? How many times have you questioned the role of a staff scientist when surely a postdoc would do?

One of my favorite species of manuscript reviewer comment is that the data we are presenting are "uninterpretable". Favorite as in the sort of reaction I get where I can't believe my colleagues in science are this unbelievably stupid and are not completely embarrassed to say any such thing, ever. "Uninterpretable" is supposed to be some sort of easy-out Stock Critique, I do understand that. But it reveals either flagrant hypocrisy (i.e., the reviewer themselves would fall afoul of such a criticism with frequency) or a serious, serious misunderstanding of how to do science.

"When you see a plot like this, what do you assume the whiskers are showing? Super extra curious! pic.twitter.com/3Ax4jeGyDm" Zen Faulkes (@DoctorZen) August 5, 2020

"True. Just puzzled that people care enough about showing data to make a plot like this but then make it uninterpretable." Zen Faulkes (@DoctorZen) August 5, 2020

Now, generally when I am laughing at a reviewer comment, it is not that they are using "uninterpretable" to complain about graphical design (although this occasionally comes into the mix). They usually mean they don't like the design of the experiment(s) in some way and want the experiment conducted in some other way.
Or the data analyzed in some other way (including graphical design issues here), OR, most frequently, a whole bunch of additional experiments. "If the authors don't do this then the data they are presenting are uninterpretable." Reviewer #3. It's always Reviewer #3.

Let me address Zen's comment first. It's ridiculous. Of COURSE the graph he presented is interpretable. It's just that we have a few unknowns and some trust. A whole lot of trust. And if we've lost that, science doesn't work. It just doesn't. So it's ridiculous to talk about the case where we can't trust that the authors aren't trying to flagrantly disregard norms and lie to us with fake data. There's just no point. Oh, and don't forget that Zen construed this in the context of a slide presentation. There just isn't time for minutiae and proving beyond any doubt that the presenter/authors aren't trying to mislead with fakery. Scientific communication assumes some reasonable common ground, particularly within a subfield. This is okay. When there is cross-talk between fields with really, really different practices, okay, maybe a little extra effort is needed.

But this is a graph using the box-and-whiskers plot. This is familiar to the audience, and indeed Zen does not seem to take issue with it. He is complaining about the exact nature of the descriptive statistic conventions in this particular box-and-whiskers plot. He is claiming that if this is not specified, the data are "uninterpretable". NONSENSE! These plots feature an indicator of central tendency of a distribution of observations, and an indicator of variability in that distribution. Actually, most descriptive illustrations in science tackle this task. So… it's familiar. This particular type of chart gives two indications of the variability: a big one and a small one. This is baseline knowledge about the chart type and, again, is not the subject of Zen's apparent ire. The line is the central tendency.
The box outlines the small indicator and the whiskers outline the big indicator. From this we move into interpretation that is based on expectations. Which are totally valid to deploy within a subfield. So if I saw this chart, I'd assume the central tendency was most likely a median or mean. Most likely the median, particularly if the little dot indicates the mean. The box therefore outlines the interquartile range, i.e., the 25th and 75th percentile values. If the central tendency is the mean, then it is most likely that the box outlines plus or minus one standard error of the mean or one standard deviation. Then we come to the whiskers. I'd assume they were either the 95% Confidence Interval or the range of values.

I do NOT need to know which of these minor variants is involved to interpret the data. Because scientific interpretation functions along a spectrum of confidence in the interpretation. And if differences between distributions (aha, another ready assumption about this chart) cannot be approximated from the presentation then, well, it's okay to delve deeper. To turn to the inferential statistics. In terms of whether the small indicator is SD or SEM? Meh, we can get a pretty fair idea. If it isn't the SD or SEM around a mean, or the 25th/75th percentile around a median, but something else like 3 SEM or 35/65? Well, someone is doing some weird stuff trying to mislead the audience, or is from an entirely disparate field. The latter should be clear. Now, of COURSE, different fields might have different practices and expectations. Maybe it is common to use 5 standard deviations as one of the indicators of variability. Maybe it is common to depict the mode as the indicator of central tendency.
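To make the point concrete, here is a small sketch (entirely mine, not from Zen's chart; function and sample data are hypothetical) computing the pieces of a box-and-whiskers plot under two common whisker conventions. Either way, the reader still gets a center plus a small and a big indicator of spread:

```python
# Two common whisker conventions for the same data: whiskers at the
# full range of values, versus the Tukey 1.5*IQR convention.
import statistics

def box_stats(values, whisker="range"):
    xs = sorted(values)
    q1, med, q3 = statistics.quantiles(xs, n=4)  # 25th/50th/75th %iles
    if whisker == "range":                       # whiskers at min/max
        lo, hi = xs[0], xs[-1]
    elif whisker == "tukey":                     # 1.5 * IQR fences
        iqr = q3 - q1
        lo = min(x for x in xs if x >= q1 - 1.5 * iqr)
        hi = max(x for x in xs if x <= q3 + 1.5 * iqr)
    else:
        raise ValueError(whisker)
    return {"median": med, "box": (q1, q3), "whiskers": (lo, hi)}

sample = [4, 5, 5, 6, 6, 6, 7, 7, 8, 30]  # one extreme value
print(box_stats(sample, "range"))   # whiskers reach the outlier (30)
print(box_stats(sample, "tukey"))   # whiskers stop at 8
```

The median and box are identical in both versions; only the big indicator moves. Which is the point: the chart stays interpretable as center-plus-spread even before you know which convention was used.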
But again, the audience and the presenter are presumably operating in approximately the same space, and any minor variations in what is being depicted do not render the chart completely uninterpretable! This is not really any different when a manuscript is being reviewed and the reviewers cry "Uninterpretable!". Any scientific paper can only say, in essence, "Under these conditions, this is what happened." And as long as it was clear what was done and the nature of the data, the report can be interpreted. We may have more or fewer caveats. We may have a greater or smaller space of uncertainty. But we can most certainly interpret.

It sometimes gets even worse and more hilarious. I have this common area where we present data where the error bars are smaller than the (reasonably sized) symbols for some (but not all) of the groups. And we may have cases where the not-different (by inferential stats *and* by any rational eyeball and consideration of the data at hand) samples cannot be readily distinguished from each other (think: overlapping longitudinal or dose curves). "You need to use color or something else so that we can see the overlapping details or else it is all uninterpretable!" Reviewer 3. My position is that if the eye cannot distinguish any differences, this is the best depiction of the data. What is an error is presenting data in a way that gives some sort of artificial credence to a difference that is not actually there, based on the stats, the effect size and a rational understanding of the data being collected.

Time for the Acknowledgements sections of academic papers to report on a source of funding that is all too often forgotten. In fact, I cannot remember once seeing a paper or manuscript I have received to review mention it. It's not weird. Most academic journals I am familiar with do demand that authors report the source of funding. Sometimes there is an extra declaration that we have reported all sources. It's traditional.
Grants, for certain sure. Gifts in kind from companies are supposed to be included as well (although I don't know if people include special discounts on key equipment or reagents, tut, tut). In recent times we've seen the NIH get all astir precisely because some individuals were not reporting funding to them that did appear in manuscripts and publications. The statements about funding often come with some sort of comment that the funding agency or entity had no input on the content of the study or the decisions to publish, or not publish, data. The uses of these declarations are several. Readers want to know where there are potential sources of bias, even if the authors have just asserted no such thing exists. Funding bodies rightfully want credit for what they have paid hard cash to create. Grant peer reviewers want to know how productive a given award has been, for better or worse, and whether they are being asked to review that information or not.

We put in both the grants that paid for the research costs and any individual fellowships or traineeships that supported any postdocs or graduate students. We assume, of course, that any technicians have been paid a salary and are not donating their time. We assume the professor types likewise had their salary covered during the time they were working on the paper. There can be small variances, but these assumptions are, for the most part, valid. What we cannot assume is the compensation, if any, provided to any undergraduate or secondary school authors. That is because this is a much more varied reality, in my experience. Undergraduates could be on traineeships or fellowships, just like graduate students and postdocs. Summer research programs are often compensated with a stipend and housing. There are other fellowships active during the academic year. Some students are on work-study and are paid a salary as school-related financial aid; in a good lab this can be something more advanced than mere dishwasher or cage changer.
Some students receive course credit, as their lab work is considered a part of the education that they are paying the University to receive. Sometimes this course credit is an optional choice: something that someone can choose to do but is not absolutely required. Other times this lab work is a requirement of a Major course of study and is therefore something other than optional. Sometimes that lab work is compensated with only the work experience itself. Perhaps with a letter or a verbal recommendation from a lab head.

I believe journals should extend their requirement to Acknowledge all sources of funding to the participation of any trainees who are not being compensated from a traditionally cited source, such as a traineeship. There should be lines such as: "Author JOB participated in this research as an undergraduate course in fulfilling obligations for a Major in Psychology." "Author KRN volunteered in the lab for ~10 h a week during the 2020-2021 academic year to gain research experience." "Author TAD volunteered in the lab as part of a high school science fair project supported by his dad's colleague." I'm not going to go into a long song and dance as to why; I think when you consider what we do traditionally include, the onus is quickly upon us to explain why we do NOT already do this. Can anyone think of an objection to stating the nature of the participation of students prior to the graduate school level?

Hoppe et al. 2019 reported that R01 applications submitted by Black PIs for possible funding in Fiscal Years 2011-2015 were awarded at a rate of 10.7%. At the same time, R01 applications submitted by white PIs enjoyed an award rate of 17.7%. There were 2,403 R01 applications submitted by Black PIs in total, while 18,315 applications submitted by white PIs were funded.
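A hedged sanity check on these figures (variable names are mine; the total number of white-PI applications is backed out from the funded count and the 17.7% rate, since only those two numbers are quoted). This is the swap arithmetic: fund every unawarded Black-PI application in place of an equal number of funded white-PI applications and see what happens to the white-PI rate:

```python
# Back-of-envelope using the Hoppe et al. 2019 figures quoted above.
black_total, black_funded = 2403, 256
white_funded, white_rate = 18315, 0.177

white_total = round(white_funded / white_rate)  # ~103,475 applications
black_unawarded = black_total - black_funded    # 2,147

# Fund the unawarded Black-PI apps in place of funded white-PI apps:
new_white_rate = (white_funded - black_unawarded) / white_total
print(f"white-PI rate after swap: {new_white_rate:.1%}")        # 15.6%
print(f"still {new_white_rate / 0.107 - 1:.0%} above the Black-PI rate")  # 46%
```

That 15.6% is still well above the 10.7% that applications from Black PIs actually achieved.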
If you take the unawarded applications submitted by Black PIs (2,147 of them) and swap these out for applications funded to white PIs, this would reduce the funding rate of the applications submitted by white PIs to about 15.6%. Which is still 46% higher than the award rate the applications from Black PIs actually achieved.

The NIH rules on empaneling reviewers on study sections say right at the top, under General Requirements: "There must be diversity with respect to the geographic distribution, gender, race, and ethnicity of the membership." You will notice that it does not specify any particular diversity targets. One handy older report that I had long ago, lost and then found again, is called the CSR Data Book, FY 2004, and it is dated 5/23/2005. Among other details, Table 16 shows that from 2000-2004 the percent of female reviewers appointed to panels went 27.0%, 25.8%, 28.2%, 31.1%, 32.9%. The percent of non-standing reviewers (ad hocs and SEP participation) went 24.5%, 25.7%, 25.2%, 24.4%, 24.9%. That's good enough for now; feel free to chase down any more recent stats, I'm sure they are on the NIH site somewhere.

"If women make up just over half of the population, and about a third of the biomedical research faculty, the proper representation of women on NIH grant review panels is:" Drug Monkey (@drugmonkeyblog) June 13, 2020

My dumb little twitter poll showed that 35.3% of people who had an opinion thought that the NIH's apparent female reviewer target was about right. I assert that they probably arrive at their target based on what they think is the fraction of their target population (STEM profs? Biomed profs? NIH applicants?). Who knows, but I bet whatever it is, it is below the population representation. Some 59.9% of those who offered an opinion thought that the ~population target was about right. It isn't in that older document, but Hoppe et al. do report in Table S10 that 2.4% of reviewers for all study sections that evaluated R01s were African-American, while 77.8% were white.
As a reminder, about 14% of Americans are Black if you include those who check other boxes as multi-racial, 12.4% if you do not.

"If African-Americans make up 5% of all faculty, less than 1% of Biology Profs and 14% of the US population the correct percentage of African-American reviewers for NIH grants is:" Drug Monkey (@drugmonkeyblog) June 13, 2020

We can see from this that, of the responses offered, 12.7% thought there should be fewer Black reviewers than there are (or roughly the same), some 19% thought it should be about the proportion of Black Professors in STEM fields, and 68.3% thought it should more or less match the population level. There is a serious disconnect between the opinion of the dumb little twitter poll of those who follow me on Twitter and what CSR is targeting as being "diverse with respect to gender, race". Now, admittedly, I have been preparing the field of battle for two weeks at this point, years by some reckonings. Softening them up. Carpet bombing with Ginther napalm and Hoppe munitions. So this is by no means a random sample. This is a sample groomed to be at least aware of the NIH funding disparity and a sample subjected to an awful lot of my viewpoint that this is a massive failure of the NIH that needs to be corrected. But still, I think some direct questions are in order. So next time you are talking to your favorite SRO, maybe ask them about this.

One interesting little point. I posted these polls only an hour apart and flogged both of them a couple of times later in the day. I actually pinned the second one, which should give it slightly more visibility, if anything. 405 people offered an opinion on the question about African-American reviewers and 689 on the second one. The gender one got 4 RTs (which might boost reach) and the racial one got 2. The no-opinion vote was 98 for the racial question and 107 for the gender poll, so apparently the looky-loo portion of the samples is ~the same number of people.
I find this pertinent to the miasma of institutional injustice that we are discussing of late. I've had quite a bit to say about the original Ginther and the dismal NIH response to it. I was particularly unhappy with the NIH's (okay, Director Francis Collins's) response to the Hoppe et al. paper. These papers and findings are tops on my mind, especially as I fielded reactions, both direct and indirect, from my colleagues. Everybody is really dismayed by the George Floyd murder. Everybody is taking a moment, maybe because they are home with the coronavirus restrictions, but taking a moment to be really bothered. And really keen to UNDERSTAND. And to DO something. Well, doing things kinda starts in our own house, eh?

The Hoppe paper is mostly about topic words and the way that the types of research interests that Black PIs have may set them at a disadvantage. Never mind the fact that even within topic word clusters Black PIs are still at a disadvantage; the NIH is really keen to discuss the glass being half not-racist instead of the fact it's also still half racist. But for me this was an opportunity to grapple with the numbers and revisit my old topics about how few grants it would actually take to even up the hit rate for Black PIs. This is because Black applicants are only in the low single digits in terms of percentages. The Hoppe data look at R01 applications submitted for FY2011-2015, taking only the ones with identified Black or white PIs. We're going to jump into the middle a bit here so that I can download my recent tweet storm into a post. First, a poll I put up.

"We're nearing the endgame. The NIH should fund every scored application submitted by African-American PIs, reaching back 2 years and into the future for 5 years." Drug Monkey (@drugmonkeyblog) June 14, 2020

The question came from my thinking about Hoppe, but I waited to see the votes before returning to a theme I'd been on before.
It will help you to open both Hoppe and the Supplement and look at Figure 1 from the former and Table S1 of the latter. Figure 1 confuses applicants (left side) with applications (top right), so it can be good to refer to Table S1. There were 2,403 R01 applications from Black PIs. 1,346 (or 56%) were triaged and 1,057 (44%) were discussed. Of the discussed applications, 256 (10.7% of the total) were funded and 801 of the discussed apps were not funded. (Note there's some rounding error here, so don't hold me to one app one way or the other. That 10.7% was rounded up, because 10.7% of 2,403 is 257, not 256.)

This was for applications submitted across five Fiscal Years, so we're talking ~269 apps triaged (not discussed) per year and ~160 discussed but not funded per year. There are 25 NIH ICs that fund grants, if I have it right. (I'm pulling the relative allocation per IC below from a spreadsheet that lists 25.) So that's 11 (triaged) and 6 (discussed) Black PI applications per year per IC that do not get funded. For reference, NIMH (which is the 9th biggest IC by budget) has 256 new R01s and 37 Type 2 renewal R01s on the books right now.

That's right, you say, ICs differ in size, and so we need to adjust the unfunded applications from Black PIs to the size of the IC. Yes, I realize we probably have large differences in the percentage of Black PIs seeking funding across the ICs, but it's all we have to go on without better information. OK, so let's look at the unfunded apps by IC share. The analysis to follow covers selected ICs. The biggest NIH institute, NCI, receives 15.5% of the entire NIH allocation (which is $41.64 Billion). If we allocate the unfunded applications from Black PIs proportionally, then NCI applications account for 42 NDs and 25 discussed-unfunded. But that institute is so large it is hard to really grasp. Let's look at NIGMS (5th by $): 19 NDs and 11 unfunded. MH? 13/8. DA? 10/6. AA? 4/2. And I'm rounding up for the last two ICs.
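The per-IC allocation above can be sketched as follows (the counts and the 15.5% NCI budget share are the figures quoted in the post; the function name is mine):

```python
# Spread the unfunded Black-PI R01 apps (Hoppe data, 5 fiscal years)
# across ICs in proportion to each IC's share of the NIH budget.
TRIAGED, DISCUSSED_UNFUNDED, YEARS = 1346, 801, 5

def unfunded_per_year(budget_share):
    """(triaged, discussed-but-unfunded) Black-PI apps per IC per year."""
    return (round(TRIAGED * budget_share / YEARS),
            round(DISCUSSED_UNFUNDED * budget_share / YEARS))

print(unfunded_per_year(0.155))  # NCI at 15.5% of budget -> (42, 25)
```

The same function with the other ICs' budget shares reproduces the NIGMS/NIMH/NIDA/NIAAA figures, give or take the rounding noted above.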
What percentage of their funded (Type 1, Type 2) grants would this be? I'm basing this off the current FY Type 1 and Type 2 numbers because we're talking forward policy. If these ICs picked up the discussed-not-funded apps by their share of NIH dollars? NIGMS: 2%. NIMH: 2.7%. NIDA: 5.2%. NIAAA: 2.5%. For completeness, the share for the triaged/ND apps would be: NIGMS: 3.3%, NIMH: 4.5%, NIDA: 8.7%, NIAAA: 4.2%. Again, as a fraction of their current new grants. I mention this because one of the consistent findings of Ginther et al. 2011 and Hoppe et al. 2019 is that applications from Black PIs are more likely to be triaged. The difference in the Hoppe data set was 56% of applications from Black PIs went un-discussed versus only 42.6% of white PI applications.

So. Those numbers of discussed-but-unfunded applications from Black PIs are low, but they seem high enough to be relevant. A couple to five percent of the portfolio for a year? This is not unimportant to the IC portfolio. But to YOU, my friend, remember the population size. If we took those 801 apps from the Hoppe data set and funded them, while subtracting 801 apps funded to white PIs (remember, they ignored all other categories of PI race), this would make the success rate for white PI applications go from 17.7% to, wait for it, 16.9%. Recall, the funding rate for Black PI applications was 10.7%. So yes, funding all of the discussed applications would push the success rate for Black PI applications to 44%. Which sounds totally unfair.

But before you get too amped about that, recall your history. Those people we think of as the current luminaries spent a good chunk of the middle of their careers enjoying >30% success rates. Look at those rates in the 1980s. You may not be aware of this, but the early 80s was a time remembered as simply terrible for grant-getting. Oh, the older folks would tell me tales of their woes even in the mid 2000s. Well, I eventually realized why. Some of them had a few years in there, prior to the 1980s, of 40% or better.
And this particular data set (it's RPG, not just R01, btw) isn't even broken out by established/new PI or continuation/new grant! So I'm sure the hit rate for established PI applications was higher, as was the rate for competing renewal applications. Why yes, we ARE coming back to the establishment of generational accumulated wealth. From a certain point of view. But not right now. We're not ready to talk about the R word.

Instead, let's come at this the other way. We kinda got into this a few days ago, talking about the white PI grants that were funded at lower scores than *any* funded app with a Black PI (this is in Table 1 of Hoppe et al.). There were 2,403 Black PI applications in the dataset used in Hoppe et al. 17.7% of this is 425. Subtract the 256 that were funded and we are 169 applications short (as a reminder, this is NIH-wide, over 5 years) of parity with the white PI rate. Of course, subtracting those 169 from the white PI pool would plunge their success. *Plunge*, I tell you. From 17.7% to... 17.5%. Which would obviously be totally unfair, so I'll let you do the math to get them to meet in the middle. Just remember, NIH prefers if the Black PI apps are juuuuuust under. Statistically indistinguishable, tho. Like for gender.

Getting this to meet in the middle means that something less than a 0.2 percentage point change in the success rate of grants submitted by white PIs would fix the 7.0 point deficit in success rates that applications from Black PIs suffer. If instead of just matching success rates, NIH were to fund every single discussed application submitted by Black PIs, this would only change white PI success rates by 0.8 points, down from 17.7% to 16.9%, as outlined above. Again, we need to compare that 0.8% drop to the 7% deficit suffered by applications with Black PIs that is currently NBD according to the NIH. And many of our science peers.
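To make the parity arithmetic explicit, here is a minimal sketch. The application counts are from Table S1 of Hoppe et al.; the funded white-PI count is my estimate from the 17.7% rate (the paper reports rates, not this exact count):

```python
# Counts from Hoppe et al. 2019 (Table S1); FY2011-2015, NIH-wide
black_apps, black_funded = 2403, 256
white_apps = 103_620
white_rate = 0.177

# Estimated funded white-PI count (derived from the rate, not reported directly)
white_funded = round(white_apps * white_rate)  # ~18,341

# How many Black-PI awards would it take to match the white success rate?
parity_target = round(black_apps * white_rate)  # 425
shortfall = parity_target - black_funded        # 169

# Effect on the white rate of moving that many awards across
print(f"{(white_funded - shortfall) / white_apps:.1%}")  # 17.5%, down from 17.7%

# Funding every discussed-but-unfunded Black-PI app (801 awards) instead
print(f"{(white_funded - 801) / white_apps:.1%}")        # 16.9%
```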
I feel confident there are many who are contemplating these analyses and the implied questions thinking "wait, I'm not exchanging my grant for their grant". But that's not the right way to think about this. You would be exchanging your current 17.7% success rate for a 17.5% success rate.

I was just noticing something that I hadn't really focused on before in the Hoppe et al. 2019 report on the success of grant applications based on topic choices. This is on me, because I'd done an entire blog post on a similar feature of this situation back when Ginther et al. 2011 emerged. The earlier blog post focused on the quite well established reality that almost all apps are funded up to a payline (or virtual payline, for ICs that claim, disingenuously, that they don't have one) and that the further a score falls below that payline, the lower the odds of funding. Supplemental Figure S1 in Ginther showed that these general trends were true for all racial groups. My blog post was essentially focused on the idea that some apps from African-American PIs were not being funded at a given near-miss score while some apps from white PIs were being funded at worse scores.

It's worth taking a look at this in Hoppe et al. because it is a more recent dataset, from applications scored using the new nine point scale. I was alerted to Table 1 of Hoppe et al., which shows the percentage of the total funded pool of applications from Black and white PIs by voted percentile rank, binned into 5-percentile ranges (0-4 is good, 85-89%ile bad). As you would expect, almost all applications in the top two bins (0-9%ile) were funded, regardless of PI race. And the chances of an app being funded at a given percentile bin decrease the further it is from the very top scores. Where it gets interesting is after the 34%ile mark, where no Black PI apps were funded. In any score bin.
And there was at least one application in each bin, save for 65-69, 75-79 and 80-84, which are not worth talking about anyway. The point is that at least some applications of white PIs were funded from the 35th to 59th percentile. I.e., at scores worse than the score of any funded app with a Black PI.

On Twitter I originally screwed up the count because I stupidly applied the bin percentages to the entire population of funded awards. Not so. In fact I need to calculate it per bin. Now if my current thinking is right, and it may not be, those bonus bins for white PIs represent 25% of the distribution (5 bins, 5 percentile points per bin). The supplemental Table S1 tells us there were 103,620 applications submitted by white PIs, so that leaves us with 25,905 applications, 5,181 in each bin. (Percentiling of applications is within a rolling three rounds of each standing study section. Special Emphasis Panels are variously percentiled, sometimes against an associated parent study section, sometimes against the total CSR pool.) Multiplying each bin's success rate by that count, I end up with a total of 119 applications of white PIs funded from the 35th to 59th percentile. A score range at which ZERO applications with Black PIs were funded. So, in essence, you could replace all of those applications funded to white PIs with more meritorious (well? that's how they use the rankings. percentile = merit) unfunded applications submitted by Black PIs. Even by some distance, as only 74% of 10-14%ile scoring applications with Black PIs were funded, for example.

I was curious why Hoppe et al. included the Table and what use they made of it. I could find only one mention of Table 1, and it was in the section titled "IC decisions do not contribute to funding gap":
"However, below the 15th percentile, there was no difference in the average rate at which ICs funded each group (Table 1); applications from AA/B and WH scientists that scored in the 15th to 24th percentile range, which was just above the nominal payline for FY 2011–2015, were funded at similar rates (AA/B 25.2% versus WH 26.6%, P = 0.76; Table 1). The differences we observe at narrower percentile ranges (15 to 19, 20 to 24, 25 to 29, and 30 to 34) slightly favored either AA/B or WH applicants alternately but were in no case statistically significant (P ≥ 0.13 for all ranges). These results suggest that final funding decisions by ICs, whether based on impact scores or discretionary funding decisions, do not contribute to the funding gap."

This is more than a little annoying. Sure, they sliced and diced the analysis down to where it is not statistically resolvable as a difference. But in the real world? Is it not a matter of constant anger for any PI who has a near-miss score and gets wind of anyone being funded at a worse score? Sure it is. And that last statement is just plain false. The 119 white PI applications funded at worse scores amount to 46.5% of the total number of applications funded with Black PIs. If all of those discretionary funding decisions had gone to Black PIs, that would raise the hit rate for Black PIs from 10.7% to 15.6%, whereas the white PI hit rate would plunge from 17.7% to about 17.6%. So the analysis they are referring to supports quite the opposite conclusion. Discretionary funding decisions, i.e. those outside the percentile ranks where nearly every application is funded, do in fact contribute substantially to the disparity. And correcting this to give Black PIs a fair hit rate, by selecting applications of HIGHER MERIT, would cause an entirely imperceptible change in the chances for white PIs.

There was a thread on the Twitters today complaining about graduate students being called trainees.
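Backing up to the Table 1 arithmetic: a sketch of the redirection numbers, where the 119 is my per-bin estimate from Table 1 (not a figure Hoppe et al. report) and the funded white-PI count is again estimated from the 17.7% rate:

```python
# Counts from Hoppe et al. 2019 (Table S1); the 119 is my per-bin estimate
black_apps, black_funded = 2403, 256
white_apps = 103_620
white_funded = round(white_apps * 0.177)  # estimated from the 17.7% rate

redirected = 119  # white-PI awards at 35-59th %ile, where zero Black-PI apps were funded

print(f"{redirected / black_funded:.1%}")                 # 46.5% of all Black-PI awards
print(f"{(black_funded + redirected) / black_apps:.1%}")  # 15.6%, up from 10.7%
print(f"{(white_funded - redirected) / white_apps:.1%}")  # 17.6%, down from 17.7%
```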
"Hot take: PhD students are called trainees to justify underpaying us for doing the job we're training for." — PhD Diaries (@thoughtsofaphd) May 19, 2020

Because, of course, the hot take is correct. We have increased the number of post-graduate trainees in doctoral-granting programs so as to obtain cut-rate labor to service our biomedical science research laboratory work. Yes. Absolutely. To service the work that our federal government is asking us to do, and paying us to do, via the NIH, NSF and a few other major grant-making entities. Grants to not-for-profit Universities and Research Institutes are, of course, a way for the US federal government to try to get cut-rate labor to service its goals. By leveraging the power of calling middle management "Professors" to justify underpaying us for the job we are doing. ("Underpaying" is a concept I have on good authority from practically every academic I've spoken with about their satisfaction with their compensation.)

Getting back to the pre-doctoral exploitation, however, there is this notion of a valuable credential being dangled as the additional compensation. The award of the PhD (and the presumed training that comes with it) is supposed to make up for any perceived deficiencies in month-to-month paychecks. And it does have value. This credential is necessary for many subsequent job categories that are perceived as desirable. Or at least more desirable than the jobs, or the compensation, available to those without this particular credential.

My question for today is: would things be better in academic science if, instead of the credential model, we operated on the performance-based, resume-building model? Everyone enters this pipeline as a fresh-faced bachelor's degree recipient and gets paid as a real employee on technician wages. Just like our current tech class.
From there on, advancement to the first supervisory step (like the current postdoc stage) depends merely on performance, opportunity and drive. If you just put in your time, you stay a tech. And move up on that trajectory. If you take an interest in the broader science issues and do more than just put in your hours under direction of the higher-ups, more like what we expect out of current graduate students, well, at some point you are competitive for the entry-level manager position. And you get some techs to direct. Then again, if you want to move up to the next level, junior faculty-ish we can say, you have to produce. You have to produce and show you can run a team and act in all ways like a PI save name and... boom. You get to be PI. From there, if you take the extra time to also teach classes, since we're going to have the adjunctification of traditional teaching duties rolled into this re-alignment of course, maybe you eventually earn the title of Professor. If we still have that.

At every stage, the key is that you are more or less expected to be able to make a career at that stage, if that is what fits you. Techs can remain techs. Job longevity. Steady raises. Benefits. Low-level managers ditto. Look, you still have to perform. Every workplace has turnover for competence and for fit. But then again, I see checkout folks at my local Costco that I've seen there for well over two decades. Same job, presumably with incremental raises. No need to constantly run upward merely to stay in your job. And I assume there are those who I saw two decades ago who have moved up in managerial tracks, either within Costco or in some other retail business.