Citation
Proposal Pressure in the 1980s: An Indicator of Stress on the Federal Research System

Material Information

Title:
Proposal Pressure in the 1980s: An Indicator of Stress on the Federal Research System
Added title page title:
Staff paper
Creator:
United States. Congress. Office of Technology Assessment.
Publisher:
U.S. Congress. Office of Technology Assessment
Publication Date:
April 1990
Language:
English
Physical Description:
24 p. : ill. ; 28 cm.

Subjects

Subjects / Keywords:
Research -- United States ( LCSH )
Genre:
federal government publication ( marcgt )

Notes

General Note:
For this report OTA compiled time-series data provided by the National Science Foundation (NSF) and the Federal mission agencies on proposals for competitively awarded research projects, and by the American Association for the Advancement of Science's (AAAS) annual R&D budget series. These funding trends, presented below, suggest changes over the decade of the 1980s and raise some issues for future policy study.

Record Information

Source Institution:
University of North Texas
Holding Location:
University of North Texas
Rights Management:
This item is a work of the U.S. federal government and not subject to copyright pursuant to 17 U.S.C. §105.
Classification:
Y 3.T 22/2:2/2003012517 ( sudocs )

Aggregation Information

IUF:
University of Florida
OTA:
Office of Technology Assessment


Full Text

PAGE 1

Proposal Pressure in the 1980s: An Indicator of Stress on the Federal Research System
April 1990
NTIS order #PB90-253147

PAGE 2

PROPOSAL PRESSURE IN THE 1980s: AN INDICATOR OF STRESS ON THE FEDERAL RESEARCH SYSTEM

Project Staff

John Andelin, Assistant Director, OTA Science, Information, and Natural Resources Division
Nancy Carson, Science, Education, and Transportation Program Manager
Daryl E. Chubin, Senior Analyst
Willie Pearson, Jr., Analyst (OTA Fellow)
Beth Robinson, Analyst
Robert Garfinkle, Research Analyst
Marsha Fenn, Office Administrator
Madeline Gross, Administrative Secretary
Gay Jackson, Administrative Secretary
Kimberley Gilchrist, Secretary

PAGE 3

Reviewers and Contributors

Robert Abel, National Science Foundation
Ted Berlincourt, U.S. Department of Defense
Norman Braveman, National Institutes of Health
Michael Davey, Congressional Research Service
Alan Fechter, National Research Council
Katherine A. Forsberg, U.S. Department of Energy
Jim Goodwin, Defense Advanced Research Projects Agency
Kathi Hanna, Institute of Medicine
Jack Harless, Army Research Office
Lisa Heinz, Office of Technology Assessment
Christopher Hill, National Academy of Sciences
Peggy Kates, National Aeronautics and Space Administration
Carlos Kruytbosch, National Science Foundation
John Linter, Air Force Office of Scientific Research
James McCullough, National Science Foundation
Steve Nelson, American Association for the Advancement of Science
Terry J. Pacovsky, U.S. Department of Agriculture
Melissa Saul, U.S. Department of Veterans Affairs
Mark Schaefer, Carnegie Commission on Science, Technology, and Government

NOTE: OTA is grateful for the valuable assistance provided by the reviewers and contributors. The reviewers and contributors do not, however, necessarily approve, disapprove, or endorse this report. OTA assumes full responsibility for the report and the accuracy of its contents.

PAGE 4

PROPOSAL PRESSURE IN THE 1980s: AN INDICATOR OF STRESS ON THE FEDERAL RESEARCH SYSTEM

INTRODUCTION

The launch of Sputnik marked the beginning of a golden age of Federal support to science. However, those who look to the 1960s as a model for sustaining science in the 1990s yearn for what is unlikely to return.1 As explored below, the pattern of continued growth in R&D budgets slowed in the 1970s, and the future gives anything but assurance of renewed growth. Given this uncertainty, it is fair to say that the funding environment is not well understood.2

The Federal Government will spend $66 billion on research and development (R&D) in fiscal year 1990. Roughly 15 percent will support basic research.3 Although basic research rarely has immediate applications, history leads to the expectation that an important part of it eventually will. The Federal Government funds basic research precisely because it may render important insights and benefits, and lead to an enhanced quality of life for most of the citizenry.

The research system consists chiefly of the Federal agencies that fund basic research, researchers (e.g., in universities, national laboratories, industry, and nonprofit organizations) who seek agency funding, and the research that results. Interactions among funders, managers, performers, and consumers of basic research endow the system with a dynamic quality. Indeed, that quality is

1. John Ziman, "Bounded Science" (review of Smith and Karlesky's The State of Academic Science: The Universities in the Nation's Research Effort), Minerva, vol. 16, 1978, p. 327. The once explosive growth pattern of science (as measured by, e.g., funding, publications, and patents) yielded to a more incremental rate of increase in scientific outputs. For a discussion of this transition, see Derek J. de Solla Price, Little Science, Big Science (New York, NY: Columbia University Press, 1963); for a retrospective on Price's predictions, see Susan E. Cozzens, "Derek Price and the Paradigm of Science Policy," Science, Technology, & Human Values, vol. 13, summer and autumn 1988, pp. 361-372.

2. For commentaries, see U.S. Congress, Office of Technology Assessment, Higher Education for Science and Engineering, OTA-BP-SET-52 (Washington, DC: U.S. Government Printing Office, March 1989); and Roger L. Geiger, Research Perspectives on Research Universities, a report to the National Science Foundation on the workshop held at Pennsylvania State University (University Park, PA: Institute for Policy Research and Evaluation, and Center for the Study of Higher Education, 1989).

3. Albert H. Teich et al., Congressional Action on Research and Development in the FY 1990 Budget (Washington, DC: American Association for the Advancement of Science, January 1990). For a retrospective, see Mark V. Nadel, "The Rise of Political Science," The GAO Journal, winter 1988/89, pp. 47-53; for a look ahead, see Janice R. Long and Pamela S. Zurer, "President Proposes 7 Percent Increases in Federal R&D Funding for 1991," Chemical & Engineering News, Feb. 12, 1990, pp. 7-13.

PAGE 5

reflected in agency programs with changing goals, competition among members of the research work force for funding, and the mechanisms used to determine research emphases and allocate available monies.

While we hear much today about the benefits accruing from basic research (of civilian or military origin),4 we also hear much about a system under stress: tight budgets, deteriorating facilities, and bleak prospects (especially for young researchers) of gaining or sustaining support for research programs.5

To examine stress in the system, OTA documents in this paper changes in the 1980s in the phenomenon of proposal pressure, the number of research proposals submitted v. the number funded, at each Federal agency that operates a competitive grants program. In addition to establishing a baseline on proposal pressure, these data will suggest issues for further study in OTA's ongoing assessment of Basic Research for the 1990s.6 This larger study will examine both the policies and mechanisms for awarding research monies and achieving an array of national research goals.

4. Various policy tools are used to inform Federal decisionmaking on basic research investments. This is reflected in empirical studies of research teams, facilities, and institutions. For a review, see Daryl E. Chubin, "Research Evaluation and the Generation of Big Science Policy," Knowledge: Creation, Diffusion, Utilization, vol. 9, December 1987, pp. 254-277. As the competitive pressures for the funding of basic research have grown in other countries, so have the advocates of techniques that identify hot and emerging areas of leading-edge research. The chief exponents of this view have been John Irvine and Ben Martin, Foresight in Science: Picking the Winners (London: Frances Pinter, 1984); and the sequel, B.R. Martin and J. Irvine, Research Foresight: Priority-Setting in Science (London: Frances Pinter, 1989). Also see Organisation for Economic Cooperation and Development, Evaluation of Research: A Selection of Current Practice (Paris, France: 1987); and Ciba Foundation, The Evaluation of Scientific Research (Chichester, England: Wiley-Interscience, 1989).

5. Debate tends to emphasize trade-offs: big v. little science, industry-university research centers v. individual investigators, and peer (merit) reviewed v. pork barrel projects. The policy discourse, in turn, shifts toward how to fund, organize, and optimize U.S. investments in scientific research. Congressional interest in national research investments yielded an exploratory OTA document. See U.S. Congress, Office of Technology Assessment, Research Funding as an Investment: Can We Measure the Returns?, OTA-TM-SET-36 (Washington, DC: U.S. Government Printing Office, April 1986).

6. Requested by the Committee on Science, Space, and Technology, U.S. House of Representatives, this assessment began in December 1989. It is scheduled for completion in February 1991. A copy of the request letter from Robert A. Roe, Chairman, and Robert S. Walker, Ranking Republican Member, and the study proposal written in response are available from OTA's Science, Education, and Transportation Program.

PAGE 6

THE RELATION OF PROPOSAL PRESSURE TO RESEARCH BUDGETS

For this paper, OTA compiled time-series data provided by the National Science Foundation (NSF) and the Federal mission agencies on proposals for competitively awarded research projects, and by the American Association for the Advancement of Science's (AAAS) annual R&D budget series.7 These funding trends, presented below, suggest changes over the decade of the 1980s and raise some issues for future policy study.8

Research applications (proposals) determine the demand for Federal funding. How many investigators compete for funds? Is the number stable or increasing? The total number of applications received by an agency constitutes proposal pressure. The extent of pressure is a crude indicator that researchers perceive agency support as vital for making research progress. The competition among researchers who apply for funding can be measured as, for example, the proportion that succeed in winning awards, and the average and total award amount and duration per researcher.

7. One problem an analyst immediately faces in correlating budget data and other indicators is how to separate for analysis the R from the D. Agency labels, as well as the activities performed under each, lack consistency. A common assumption has been that increases in R&D funding observed in the Reagan years were due to development activities rather than basic research. But difficulties of record-keeping and agency categorization, as well as inherent difficulties in distinguishing basic from applied research, create problems of interpretation. Most of the R&D budget figures presented here derive from two series published annually by the American Association for the Advancement of Science: Research & Development: AAAS Report V through Report XIV (Washington, DC: 1980-1989); and Congressional Action on Research and Development in the FY 1979 through FY 1988 Budget (Washington, DC: January 1979-January 1989). Current dollars were converted to constant 1982 dollars by OTA. In this paper, when we talk about research proposals, we are confident that development has not crept in; when we discuss funding, the referent is research, but the numbers include much more (hence, R&D).

8. For example, the Department of Defense's budget for fundamental and exploratory research (budget lines 6.1 and 6.2, respectively) is a perennial funding issue. Only category 6.1 is a research line (with a current budget just under $1 billion), but not all projects funded there support basic research. About one-half of the 6.1 budget is awarded as grants; the rest goes to national laboratories and outside contractors. See U.S. Congress, Office of Technology Assessment, Holding the Edge: Maintaining the Defense Technology Base, OTA-ISC-420 (Washington, DC: U.S. Government Printing Office, April 1989). The basis for calculating Federal obligations for basic research is complex; comparable budget information is not available at selected agencies.
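Footnote 7 notes that OTA converted current-dollar figures to constant 1982 dollars. A minimal sketch of that kind of conversion follows; the deflator values here are illustrative placeholders, not the actual GNP deflators OTA used.

    # A minimal sketch of the constant-dollar conversion described in
    # footnote 7. The deflator values are illustrative placeholders, not
    # the actual GNP deflators OTA used.

    GNP_DEFLATOR = {1979: 78.9, 1982: 100.0, 1988: 121.3}  # hypothetical, 1982 = 100

    def to_constant_1982_dollars(amount_current, year):
        """Deflate a current-dollar amount to constant 1982 dollars."""
        return amount_current * (GNP_DEFLATOR[1982] / GNP_DEFLATOR[year])

    # Example: $100 million in 1988 current dollars is about $82 million
    # in constant 1982 dollars under these placeholder deflators.
    print(round(to_constant_1982_dollars(100.0, 1988), 1))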

PAGE 7

To answer such questions, OTA solicited information from several Federal funding agencies. At the Department of Defense (DoD), these were the Army Research Office, Air Force Office of Scientific Research, and Defense Advanced Research Projects Agency (DARPA). For the Department of Energy (DOE), the Office of Energy Research responded to the request. At NSF, the Comptroller's Office responded. At the National Institutes of Health (NIH), data were provided by the Division of Research Grants. The Office of Grants and Program Systems supplied information on the U.S. Department of Agriculture (USDA). Finally, at the Veterans Administration (VA),9 the Office of Research and Development provided data on its Medical Research Program. (Because the VA lacks statutory authority to grant funds, awards are processed through NSF and NIH.)

The agencies furnished the following information for the 10-year period ending in 1987 (or a longer period if the data were available): 1) number of research proposals reviewed; 2) number of proposals funded; 3) average dollar amount and duration of award; and 4) any other data that would help characterize application and award trends at their agency.

The results show considerable variability across agencies in the type of data systematically collected and, therefore, available for analysis. The status of data on the competitive grants process in Federal agencies is summarized in table 1. The most complete data are available for the Air Force, the VA, NSF, and NIH. As can be gleaned from the table, the data are not strictly comparable for all years. For example, data on average duration of award were incomplete and, therefore, not included for presentation.

Table 2 summarizes trends in funded proposals for competitive grants at selected agencies. What is most striking about the data is the keen competition for funding. For example, the number of proposals reviewed by NSF increased from 14,499 to 26,802 between 1977 and 1988; the proportion actually funded of the total number of proposals reviewed declined appreciably over the same period (from about 46 to 28 percent).

9. The name of this agency has been changed to the Department of Veterans Affairs; we retain VA here as the familiar identifier.
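The success-rate arithmetic behind the NSF figures above is straightforward. The sketch below uses only the reviewed counts and funded percentages quoted in the text; the award counts it prints are derived from those figures, not taken from table 2.

    # Success-rate arithmetic behind the NSF figures quoted above.
    # Reviewed counts and funded percentages come from the text; the
    # implied award counts are derived, not taken from table 2.

    nsf = {
        1977: {"reviewed": 14_499, "pct_funded": 0.46},
        1988: {"reviewed": 26_802, "pct_funded": 0.28},
    }

    for year in sorted(nsf):
        d = nsf[year]
        implied_awards = d["reviewed"] * d["pct_funded"]
        print(f"{year}: {d['reviewed']:,} reviewed, "
              f"~{implied_awards:,.0f} funded ({d['pct_funded']:.0%})")

    # Proposals reviewed rose about 85 percent, while the implied number
    # of awards rose only about 13 percent: that gap is proposal pressure.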

PAGE 8

A similar pattern prevails for the VA, where proposals increased from 881 in 1980 to 1,310 in 1988. Noticeable fluctuations occurred in the 3-year period from 1982 to 1984. Between 1980 and 1988, the proportion of successful proposals declined dramatically, from about 64 to 40 percent.

At the Air Force, the number of proposals reviewed climbed from 1,440 to 2,103 between 1980 and 1988. Data on proposal success, while available only for the years 1985 to 1988, point to sharp increases in competition. For example, the proportion of proposals funded plunged from 25 to 13 percent over this 4-year period.

Finally, the trends at NIH generally mirrored those at the other agencies. Between 1977 and 1988, the number of proposals reviewed by the agency rose from 13,304 to 19,205. It should be noted that NIH has an "approved but not funded" category. This category refers to the proposals recommended for funding, but receiving no support due to budgetary limitations. The data presented in table 2 refer only to proposals that were funded. The pattern of funded proposals, expressed as a proportion of those reviewed, varied considerably between 1977 and 1988. For example, the success rate increased from about 29 percent in 1977 to 41 percent in 1979, but declined thereafter.10

In 1987, NIH funded 6,446 grants; in 1990, it plans to fund only 4,719 grants.11 By cutting the number of grants by over 25 percent, NIH intends to boost the size of each award. This apparently reflects a decision that many biomedical and life scientists have been working with insufficient funding, which has modified the form and content of the projects proposed and ultimately approved.12

10. In a June 1989 personal communication to OTA, an NIH staff member explains that the strategy of reviewers changed during this period. He claims that there was a decline in the disapproval rate because reviewers realized that an approved application with a priority score of 250 or better could not be funded. The staff member argues that a more stable measure of success is the proportion of applications recommended for approval that are funded. It is interesting that NIH identifies this as its "award rate," which is higher than the actual proportion of proposals funded as defined above. Award and funded rates display a similar trend. For example, compare the following with NIH's "percent funded" column in table 2: the award rates increased from 39 to 52 percent between 1977 and 1979, but declined thereafter. Also see National Institutes of Health, Statistics and Analysis Branch, Division of Research Grants, Extramural Trends FY 1976-1985 (Washington, DC: 1986).

11. As a cost-cutting move, the Congressional Budget Office (CBO) suggested that NIH reduce the number of grants awarded or shave the size of awards by 10 percent. We have not ascertained a connection between CBO's proposal and NIH's decision. See Mark Crawford, "CBO Lists Options for Cutting R&D," Science, vol. 243, Feb. 24, 1989, p. 1001.

12. See Liane Reif-Lehrer, "Going for the Gold: Some Do's and Don'ts for Grant Seekers," The Scientist, Apr. 3, 1989, p. 15.

PAGE 9

Agency Trends

While proposal pressure determines the demand side of the competition for research grants, the budget determines the supply side. Thus, agency expenditures limit the amount of funding in any given fiscal year. It is also very difficult to retrieve disaggregated data on competitive research grants; thus we must rely on total R&D funding as a crude indicator of research activity at an agency.

Total R&D budgets (in constant 1982 dollars) of the Federal agencies for fiscal years 1979 to 1989 (written without the prefix FY below) are shown in table 3. Compared with their 1979 budgets, NIH, other HHS, NSF, Agriculture, and the VA had higher budgets in 1988, with the greatest increase occurring at DoD, where the 1988 budget was nearly double that of 1979. Overall, R&D budgets increased over the last decade. AAAS reports that federally funded basic research increased 37 percent (in constant 1982 dollars) from 1980 to 1988.13

Data on average award amount (in constant 1982 dollars) are presented in table 4.14 Overall, NIH's average award increased from about $115,000 in 1978 to a high of $154,000 in 1987. Although the data are less complete for NSF, the trend shows that the average award (not annualized) increased slightly from 1980 to 1987. Nevertheless, the most recent awards, when annualized, are considerably below the annual $78,000 figure in 1978. At the VA, the average award grew steadily from 1982 to 1987 ($47,000 to $52,000), then jumped to $61,000 a year later. The trend for the Air

13. At the six major agencies that fund basic research, the increases ranged from 11 percent at DoD to 59 percent at NIH. In between were USDA (14 percent), NSF (17 percent), NASA (30 percent), and DOE (52 percent). See Albert H. Teich and Kathleen Gramp, R&D in the 1980s: A Special Report (Washington, DC: American Association for the Advancement of Science, September 1988), p. 8. Also see National Science Foundation, Federal Funds for Research and Development: Fiscal Years 1987, 1988, and 1989, vol. 37, NSF 89-304 (Washington, DC: 1989), pp. 81-83.

14. Additional pressures on resource allocation are reflected in the amount of funding that proposers request to support their research. Over the period 1977-88, the average amount of funding requested per proposal at NIH more than doubled (in constant 1982 dollars), from $96,450 to $194,150. During the 7 years of James Wyngaarden's tenure as Director of NIH, the duration of research grants also rose from 3.1 to 4.2 years. See Science & Government Report, "Money Flowed to Bethesda," May 15, 1989, p. 3. To understand the full meaning of this growth, it is essential to know the number of proposals per investigator over time, and the proportion of overhead costs in funded proposals that applied to investigator and student salaries.

PAGE 10

Force is difficult to discern due to a switch from single-year to multiyear funding in 1984. The pattern at DARPA was one of decline from 1980 to 1983, a peak in 1984 followed by a 2-year decline, then another peak in 1987 and a noticeable drop in 1988.15

Data on average award size coupled with average duration would present a more complete picture of when funds actually would be available to researchers to support their work. These data are not available. However, information in tables 2-4 helps to clarify the picture. We can estimate the cost of funded proposals for the period 1985-1989 at selected agencies by multiplying the average award amount (table 4) by the total number of proposals funded (table 2). This dollar amount represents the cost of new starts, in contrast to the research funds committed to continuing projects (out-year obligations). If we divide this amount by the total R&D budget of the agency (table 3), we develop a very rough idea of the proportion of resources awarded to new research projects. (A sketch of this arithmetic appears below.)

For NSF and NIH, there is little change in these proportions. Over the 4-year period, a low of 29 percent and a high of about 35 percent of NSF's budget accounted for the funding of new projects, while at NIH new starts represented a low of 17 percent and a high of about 19 percent of budget obligations. (For comparison, in 1979, NIH devoted 17 percent to new starts.) We might think of these proportions as a measure of flexibility. In a lean budget year, continuing obligations to multiyear projects cut into any agency's ability to fund new ones.

We hesitate to draw conclusions about research monies appropriated and spent in the absence of information about agency goals, strategies, and mechanisms. OTA is currently collecting such information through extensive interviews with staff at the major Federal agencies that support basic research.

15. For example, information provided by the Air Force gives some insight into the value of data on funding duration. While the average award size (in constant 1982 dollars) increased substantially between 1982 and 1988, so did the duration of award. Prior to 1984, grants were awarded for 12-month periods; beginning in 1984, grants were awarded for up to 36 months.
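As a rough sketch of the new-starts estimate described above: the average award and budget figures below are NSF's 1985 values from tables 4 and 3; the count of funded proposals is a hypothetical stand-in, since the table 2 data did not survive reproduction here.

    # A rough sketch of OTA's new-starts estimate: average award (table 4)
    # times proposals funded (table 2), divided by total R&D budget (table 3).
    # The proposal count below is a hypothetical stand-in for the table 2 value.

    avg_award = 125_382              # table 4, NSF, FY 1985, constant 1982 dollars
    proposals_funded = 3_000         # hypothetical count standing in for table 2
    total_rd_budget = 1_281e6        # table 3, NSF, FY 1985 ($1,281 million)

    cost_of_new_starts = avg_award * proposals_funded
    share = cost_of_new_starts / total_rd_budget
    print(f"New starts: ${cost_of_new_starts / 1e6:,.0f} million "
          f"({share:.0%} of the total R&D budget)")
    # With these inputs the share is about 29 percent, at the low end of
    # the 29-35 percent range reported for NSF above.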

PAGE 11

Nagging Issues

Based on these data, inferences about project selection criteria and increasingly rugged competition for Federal resources raise some nagging issues. For instance, the relation of the quantity of funded v. the quality of unfunded proposals remains difficult to ascertain. At NIH, proposals can be rated on a scale of 100 to 500; the lower the number, the better the priority score. The trend in priority scores has produced downward creep, i.e., lower scores are required for funding. Whereas a score of 250 earned approval with funding a decade ago, a score of 150 today may not suffice. This form of grade inflation tells us nothing about quality. (A stylized sketch of the payline mechanism appears below.) The argument most often heard is that there is an excess of quality proposals relative to the carrying capacity of NIH. As researchers enter the research system, more are applying for grants.16

At NSF and NIH over the last decade, the proportion of funded proposals declined and then plateaued at about 30 percent. Is this a desirable proportion, or is it artificially depressed? Some researchers claim that funding more proposals would attenuate quality. But funding any proposal other than the single best one attenuates quality. If there are deserving projects and if they receive funding, the yield will be greater than if they are not funded.17 (The research system might ultimately derive more, of course, by funding infrastructure or education rather than research proposals; that, too, is a policy choice.)

To be sure, quality is an elusive property; objective measures are hard to find. Program managers are expected to exercise judgment when proposal ratings on technical merit are too close to differentiate, when the decision to fund is tantamount to tossing the dice. Discussions of this process, of course, bog down in the formality of review. Even the language used to discuss the criteria applied to proposal selection is stultifying. Merit is the central criterion of choice; it is not the

16. For the period 1978-88, 17,000 to 20,000 new Ph.D.s in science and engineering were awarded annually. See National Science Foundation, Division of Science Resources Studies, "Early Release of Summary Statistics on Science and Engineering Doctorates, 1988," unpublished manuscript, April 1989, p. 7. Add these to the number of non-Ph.D.s who apply for Federal funds for the first time in any given year and the roster of potential principal investigators is growing (despite increases in retirements). In terms of human resources, the research system is not contracting.

17. This would seem to be the motivation for a Cystic Fibrosis Foundation advertisement last year announcing the availability of funding for highly meritorious applications that have been submitted to and approved by the National Institutes of Health but remain unfunded. See Science, "Cystic Fibrosis Research Grants for NIH-Approved but Unfunded Applications," vol. 243, Feb. 10, 1989.
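The payline mechanism described above can be made concrete with a stylized sketch. The scores, award amounts, and budget here are invented for illustration and do not reproduce NIH's actual procedures.

    # Stylized sketch of a payline: applications scored 100-500 (lower is
    # better) are funded in score order until the budget is exhausted.
    # All scores, amounts, and the budget are invented for illustration.

    def fund_by_priority(applications, budget):
        """applications: list of (priority_score, requested_amount) tuples."""
        funded, spent = [], 0.0
        for score, amount in sorted(applications):
            if spent + amount > budget:
                break  # the payline: everything scoring worse goes unfunded
            funded.append((score, amount))
            spent += amount
        return funded

    apps = [(140, 200e3), (150, 250e3), (180, 300e3), (240, 220e3), (250, 260e3)]
    print(fund_by_priority(apps, budget=500e3))
    # Only the two best-scored applications clear this tight budget; a 250
    # that was fundable a decade ago now falls well below the cutoff.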

PAGE 12

only criterion. Proposal review is supposed to consider such factors as institutional location and reputation, as well as track record (or, conversely, relative youth) of the proposer. Sometimes these factors, all else being equal, are decisive.18 When we label factors other than the technical content of the proposed research as nonmerit factors, we confound the dialogue on systems of proposal review currently in force and impede the search for possible alternatives. What is needed instead is a fresh look at the effect of adding criteria, i.e., changing the weights of all factors considered, on the operation of already beleaguered review systems.

While policymakers rely on informed judgments of trusted, seasoned scientists to render project funding recommendations, today these judgments can be augmented by (semi-)quantitative indicators of research and researcher quality, such as those contained in NSF's Science and Engineering Indicators series.19 These indicators include various indirect measures, such as publications resulting from projects and citations to this work in the scientific literature. Such ex post peer evaluations can be construed as votes on the importance of the research. The assumption is that these votes validate expert judgment, so that Federal grants go to researchers with the best ideas.20 In short, quality is a characteristic inferred from public use; predicting quality from proposals is riskier business.

Inferences

The incomplete agency data that OTA has presented here indicate three related trends: increased R&D budgets, mounting proposal submissions, and a declining proportion of proposals funded at most agencies. Various interpretations of these trends might be offered.

18. In general, the younger investigator from a nonresearch university is not likely to be funded. This is the inherent conservatism of grants peer review. For recent evidence and commentaries, see Jim McCullough, "First Comprehensive Survey of NSF Applicants Focuses on Their Concerns About Proposal Review," Science, Technology, & Human Values, vol. 14, winter 1989, pp. 78-88. Commentaries appear on pp. 89-102.

19. For the latest in this series, see National Science Board, Science & Engineering Indicators, 1989 (Washington, DC: U.S. Government Printing Office, 1989).

20. Not surprisingly, the correspondence between what peers judge to be the best and what researchers themselves regard as important contributions to scientific progress is far less than perfect. Quality resides in this gray area of differing perceptions. See A.L. Porter et al., "Citations and Scientific Progress: Comparing Bibliometric Measures with Scientist Judgments," Scientometrics, vol. 13, Nos. 3-4, 1988, pp. 103-124; and William N. Dunn et al., "Science Impact Assessment and Public Policy," Policy Studies Review, vol. 8, autumn 1988, pp. 146-154.

PAGE 13

For example, in the transformation of funding that began in the 1960s, perhaps the sheer number of researchers and the academic reward system, operating within universities and without, overwhelmed proposal review. A kind of lottery mentality appears to have taken hold in the 1980s: the more grant proposals submitted, the greater the probability that one would be funded. As competition grows, resources for rewarding quality seem to shrink. In addition, some researchers, both those who often succeed in winning Federal funds and those who do not, argue that agency decisions tend to grow conservative and the chief attribute of basic research, risk-taking, declines.21

Proposal Review and Peer Review

There are many mechanisms for proposal review in the Federal Government, and most include some form of peer review. But not all proposal review involves expert judgments of peers external to the agencies that award research monies. The Office of Naval Research, for example, is known to rely primarily on in-house review and a large measure of program manager discretion. Yet the central assumption (shared at least rhetorically by Federal decisionmakers and scientists alike) of the key role of peer review in the evaluation of researchers and research performance remains largely unchallenged.22 Whether peer reviews are binding or advisory, they represent the key determinant of how priorities are set and how scarce Federal resources are allocated for so-called university-based,

21. For a discussion, see Daryl E. Chubin and Edward J. Hackett, Peerless Science: Peer Review and U.S. Science Policy (Albany, NY: SUNY Press, forthcoming 1990), ch. 3. Last year NSF instituted a quick-turnaround research set-aside program for high-risk proposals. See Eliot Marshall, "A Fast Track for High-Risk Science," Science, vol. 244, May 19, 1989, p. 764.

22. One of the first challenges to NSF's peer review system occurred in 1975 hearings held before the House Subcommittee on Science, Research, and Technology. The system was declared fundamentally sound. See James W. Symington and Thomas R. Kramer, "Does Peer Review Work?" American Scientist, vol. 65, January-February 1977, pp. 17-20. In July 1989, the public interest organization Public Citizen petitioned NSF to change "the way that the Foundation protects the rights of its grant applicants" (Eric R. Glitzenstein, personal communication, July 13, 1989). For details on the case that spurred the petition, see Eliot Marshall, "NSF Peer Review Under Fire from Nader Group," Science, vol. 245, July 21, 1989, p. 250. NSF's belated but thorough response is contained in a letter from Charles H. Herz, NSF General Counsel (personal communication to Eric R. Glitzenstein, Mar. 12, 1990).

PAGE 14

investigator-initiated research.23 In general, research communities favor peer review because it delegates a significant degree of authority and quality control to representatives of those communities.24

Peer review, along with other forms of proposal review, can only set priorities within a research community; it cannot assist in decisions across communities.25 This means that peer review can assist in, but not alone determine, resource allocation decisions regarding which fields or research problems, many of them multidisciplinary, should be supported.26

A framework for weighing alternatives, making research choices, and plugging them into the political process has been lacking. The Office of Management and Budget has been the surrogate for such a framework. But in an earlier era, the discretionary budget for supporting research was relatively unconstrained. Basic research was a modest slice of the pie. Intellectually and economically promising work could be accommodated by Federal funding on merit or mission grounds alone. The much-ballyhooed flexibility of the system more accurately meant that no critical choices were required.

23. See Daryl Chubin and Sheila Jasanoff, "Peer Review and Public Policy," Science, Technology, & Human Values, vol. 10, summer 1985, pp. 3-5. For an historical critique, see Deborah Shapley and Rustum Roy, Lost at the Frontier: U.S. Science and Technology Policy Adrift (Philadelphia, PA: ISI Press, 1985).

24. For a recent statement of the peer review ideology, see George T. Mazuzan, "Proposal Review: The Heart of the Project Grant Enterprise," NSF Directions, vol. 1, Nov.-Dec. 1988, p. 3. Recognition that quality and originality may not be sufficient project selection criteria led NSF to reconsider relevance to areas of opportunity as an explicit criterion in its review process, which was renamed in 1986 "merit review." See Carlos E. Kruytbosch, "The Role and Effectiveness of Peer Review," in Ciba Foundation, op. cit., footnote 4, pp. 69-85. Reactions to the presentation of recent proposal pressure data on NIH were striking in their misgivings about peer review outcomes. See Joseph Palca, "Hard Times at NIH," Science, vol. 246, Nov. 24, 1989, pp. 988-990; and letters in Science, Jan. 26, 1990, pp. 393-394, under the title "NIH Budget Crisis."

25. This intrinsic deficiency in peer review was recognized a quarter-century ago. It is discussed as a policy dilemma in Alvin Weinberg, Reflections on Big Science (Cambridge, MA: MIT Press, 1966).

26. Since new research problems do not distribute neatly into disciplinary niches, peers are not readily transformed into coalitions that can exert their will on the political process. Novel science and the fledgling research community may lack the political resources to compete with established disciplines. For such multidisciplinary research, there are no obvious peers. So in the absence of a path-breaking discovery, they may well suffer in the setting of funding priorities across research fields. See Alan L. Porter and Frederick A. Rossini, "Peer Review of Interdisciplinary Research Proposals," Science, Technology, & Human Values, vol. 10, summer 1985, pp. 33-38.

PAGE 15

Much of the recent flurry of activity to construct a priority-setting framework27 was anticipated in the mid-1980s by the congressional Task Force on Science Policy.28 Today, the issue of basic research choices signals the perennial difficulty of justifying national investments whose returns cannot be readily anticipated and are long in coming. The difference is a cramped discretionary budget and the escalating costs, magnified by academic infrastructure and instructional needs, of doing basic research. Thus, as the Nation contemplates the scientific opportunities of the 1990s, calls for priority-setting underscore an important tension: while peer scrutiny honors the intellectual rigor and traditions of disciplines, the allocation of scarce resources is a political rank ordering that simultaneously draws on peer judgments and ignores them as other values come into play. Some agencies readily recognize this and rely on in-house judgments, rather than outside experts, in proposal review. The symbolism of peer review, in short, may be greater than its actual use.

THE FEDERAL ROLE IN RESEARCH FOR THE 1990s

Federal funding for basic research during the last quarter century has gone from a short period of decided growth to a sustained period of variable and slower growth. Funding patterns of the last decade represent internal shifts in research budgets among the Federal agencies.29 These shifts, in turn, have heightened competition for support within the research community and led to demands

27. In 1989, the National Academy of Sciences proposed a framework for soliciting research policy advice and the empirical basis for making tough decisions about basic research. National Academy of Sciences, Federal Science and Technology Budget Priorities: New Perspectives and Procedures (Washington, DC: National Academy Press, 1989). Also see Frank H.T. Rhodes, "A System to Set Science Priorities," Technology Review, November-December 1988, pp. 21-22, 25; and John A. Dutton and Lawson Crowe, "Setting Priorities Among Scientific Initiatives," American Scientist, vol. 76, November-December 1988, pp. 599-603.

28. U.S. Congress, House Committee on Science and Technology, Task Force on Science Policy, An Agenda for a Study of Government Science Policy (Washington, DC: U.S. Government Printing Office, December 1984). This was, in turn, an effort to revise the post-World War II blueprint for Federal support of basic research, Vannevar Bush's Science, the Endless Frontier, which led to the creation of the National Science Foundation.

29. The most salient shift, however, was between defense and nondefense R&D. For fiscal years 1980-1988, total defense R&D increased 83 percent and, within that category, basic research increased 11 percent. Nondefense R&D declined by 24 percent during this period, while basic research in this category rose by 40 percent. See Teich and Gramp, op. cit., footnote 13, pp. 6-7. As much as three-quarters of the Federal R&D budget was devoted to defense R&D in the Reagan years; the proposed Bush budgets have reduced that fraction to about 60 percent.

PAGE 16

for new ways to set priorities for the Nation's conduct of basic research. All research performers, in all sectors, must accommodate to new realities.30 There is a need to analyze how the funders and performers of basic research have accommodated, and how this system might adapt more in the next decade.31

A multiplicity of Federal sources of support for basic research has always been hailed as a strength of the U.S. system.32 But such pluralism and decentralization are now also seen as problematic. Analysis can tell us which Federal agencies are underwriting which research domains.33 What remains for study is how the criteria used by different agencies affect the selection of institutions, projects, and researchers to support.34

30. In the words of NSF senior science adviser Luther Williams: "A lot of the work that needs to be done in the 1990s simply cannot be done by individuals or a small laboratory team. The U.S. research enterprise should not be held hostage to the 1960s model of R01 grants [an NIH category of individual investigator-initiated awards] that is outdated." See Jeffrey Mervis, "NIH Official, Ex-Chief of Black College, Named Adviser to NSF's Erich Bloch," The Scientist, June 12, 1989, p. 3.

31. For the researcher's perspective on the impact of Federal funding policies on the conduct of academic research, see Daniel Alpert, "Performance and Paralysis: The Organizational Context of the American Research University," Journal of Higher Education, vol. 56, May/June 1985, pp. 241-281; and David Baltimore, "The Worsening Climate for Biological Research," Technology Review, May-June 1989, p. 22.

32. Six agencies (NIH, NSF, DoD, DOE, NASA, and USDA) account for about 95 percent of all federally funded basic research. Some would say this underscores the diversity in available sources of Federal support; others would argue the opposite, i.e., there are only six major sources and, in many research areas, a single agency monopolizes the Federal funding. For further analysis, see Susan L. Sauer (ed.), R&D in FY 1989: Looking Ahead in an Election Year (Washington, DC: American Association for the Advancement of Science, 1988), esp. pp. 11-16.

33. For example, a recent OTA contractor report explored the quantitative analysis of literature as an indicator of research activity. That analysis provides measures of research performance under federally funded research projects, and yields another perspective on the research system. See Henry Small and David Pendlebury, Institute for Scientific Information, "Federal Support of Leading-Edge Research: A Report on a Method for Identifying Innovative Areas of Scientific Research and Their Extent of Federal Support," OTA contractor report, February 1989. The report shows that funding acknowledgments can provide estimates of the presence of Federal agency support relative to corporate, foundation, national laboratory, State, and foreign government support. These estimates, of course, say nothing about the magnitude, i.e., dollar amount, of the support. To illustrate the patterns that emerge from literature-based analysis, the report presents case study details for high-temperature superconductivity (HTS) and four other research areas. HTS was an area of rapid advance in 1987, fueling expectations that superconducting materials of commercial importance were soon to follow.

34. Two examples of agency efforts to support research in novel ways can be cited. NSF's programs to promote Engineering Research Centers and Science & Technology Centers are attempts to centralize research at universities, while emphasizing teamwork and multidisciplinary collaboration across department and corporate sectors. See National Science Foundation, Program Evaluation Staff, Engineering Research Centers: Status of Finalist Proposals Declined by NSF, Report 89-34 (Washington, DC: April 1989). At USDA, historically seen by many as isolationist, there are plans to revitalize competitiveness in research, especially in global climate change and biotechnology. See Science & Government Report, "Q&A: Charles Hess, New Head of USDA Research," July 15, 1989, pp. 44.

PAGE 17

Embedded in the issue of project selection is sensitivity to the geographical distribution of resources. Appeals to the academic pork barrel occur because certain regions and States receive disproportionately small allocations of research funds. This is due to the concentration of researchers (and, some argue, reviewers) on the east and west coasts.35 It is presumed that the geographical distribution of research institutions results in the observed concentration of awards, but this is unknown. (Some, of course, point to an "old boys' network" that preserves the distinction between "have" and "have not" institutions.)

Were geography or other selection criteria36 to become larger factors in Federal research policy, would the importance of technical merit necessarily change in proposal review? Most agencies today employ criteria in addition to merit in the selection of research projects and centers. Has the quality of research produced at those institutions suffered? Nobody knows. One thing, however, is likely: reliance on the present review system is unlikely to result in any substantial change in the geographic distribution of research funds. Furthermore, if competition continues to grow, creative energies will flow into proposal writing, rewriting, and review rather than the conduct of research. Many scientists argue that this is already the case.

35. See Colleen Cordes, "Congressional Practice of Earmarking Research Funds Does Not Broaden Allocation of Funds, Study Finds," The Chronicle of Higher Education, Mar. 1, 1989, pp. A17, A22; Kin Ha and David Lipin, California Institute of Technology, "Pork-Barreling of Science Funds," unpublished manuscript, 1989; and Robert M. Rosenzweig, "Graduate Education and Its Patrons," presented at the 28th Annual Meeting, Council of Graduate Schools, Colorado Springs, CO, Nov. 30, 1988. For historical data, see U.S. General Accounting Office, University Funding: Patterns of Distribution of Federal Research Funds to Universities, GAO/RCED-87-67BR (Washington, DC: U.S. Government Printing Office, February 1987).

36. Geographic distribution of Federal funds is not the only congressional concern. Others include the need a) for research that, for whatever reason, is not being done anywhere; b) for equipment or facilities, be it a telescope or a synchrotron, to pursue a particular line of research; or c) for expansion of a national research-based mission, such as the campaign to cure AIDS.

PAGE 18

Prospects

National security, medical advance against dread disease, and preparation of the work force have been U.S. priorities since the 1960s, but economic competitiveness stands as a relatively new national goal. Events of the last decade have shown that new demands on the Federal agencies may accumulate as problems with an identifiable research base, such as global warming and acid rain, ascend the political and public agenda.37 The Federal Government, industry, and other patrons have often pressured the research system to produce certain results, and since the 1970s this pressure has increased. Basic research, however, is intellectually driven, not problem-driven; it builds a knowledge base. What kind of obstacles will the Federal Government and the scientific community face as outside requirements on the research system grow in an era of tight funding?

Simultaneously, there is congressional concern, and convincing evidence in certain fields and industries, that the Nation is losing its competitive edge in the outcomes associated with basic research investments. Those investments are commonly measured in terms of national R&D expenditures as a percent of gross national product (GNP). The five leading industrialized nations (the United States, France, West Germany, Japan, and the United Kingdom) all currently spend between 2.3 and 2.8 percent of their GNP on R&D. In the 1960s, the U.S. R&D-GNP ratio was significantly higher than the others (except the United Kingdom); today, West Germany and Japan are the pacesetters.38

In an era of heightened accountability of public funds, pressures mount as to how to cope with the conflicting demands posed by research opportunities. Congress has articulated these pressures in considering the role of the Science Adviser (and the Office of Science and Technology

37. See U.S. Congress, House Committee on Science, Space, and Technology, Subcommittee on Science, Research, and Technology, Hearings on Adequacy, Direction and Priorities for the American Science and Technology Effort, Feb. 28-Mar. 1, 1989 (Washington, DC: U.S. Government Printing Office, 1989).

38. See National Science Foundation, International Science and Technology Data Update: 1988, NSF 89-307 (Washington, DC: 1988), pp. 7, 23; and Erich Bloch, Basic Research: The Key to Economic Competitiveness, NSF 86-21 (Washington, DC: National Science Foundation, 1986).

PAGE 19

Policy over which he presides), as well as in debating appropriations for the Superconducting Supercollider.39 But how might Federal agencies experiment with the management of resources while retaining their mission focus? What data would prompt rethinking the management of the research system? What, after all, should be done if the mechanisms and models of surging research growth in the 1960s are no longer sufficient or appropriate for the 1990s?

The system that appeared flexible and diverse is now being tested. Scarce resources require choices. And choices attenuate the flexibility and diversity that have undergirded the unparalleled creativity of U.S. science. This paper finds evidence of an overburdened research system. How to cope in the 1990s may be as much an organizational as a fiscal challenge to the Federal Government.

39. Barbara J. Culliton, "Science Adviser Gets First Formal Look," Science, vol. 245, July 21, 1989, pp. 247-248; Janice Long, "Super Collider Monies To Remain, Science Funding Debate Continues," C&E News, July 17, 1989, pp. 21-22; and Colleen Cordes, "Calls for Setting Science-Spending Priorities Are Renewed as Supercollider Gets Go-Ahead, NSF Faces Pinch," The Chronicle of Higher Education, July 26, 1989, pp. A19, A23.

PAGE 20

Table 1. Status of Proposal Data

DoD, Air Force Office of Scientific Research: total reviewed = available; percent funded = available; average award amount = available; years covered = 1985-88; quality = 3. Comment: New starts include grants and contracts.

DoD, Army Research Office: total reviewed = N/A; percent funded = N/A; average award amount = N/A; years covered = N/A; quality = 0. Comment: Only raw data provided; relevant information must be hand calculated, too labor intensive for the deadline.

DoD, Office of Naval Research: total reviewed = N/A; percent funded = N/A; average award amount = N/A; years covered = N/A; quality = N/A. Comment: Request delayed due to a higher agency priority request.

DoD, Defense Advanced Research Projects Agency: total reviewed = N/A; percent funded = N/A; average award amount = available; years covered = 1980-88; quality = 1.

DOE, Office of Energy Research: total reviewed = available; percent funded = available; average award amount = N/A; years covered = 1982-88; quality = 1. Comment: Data are not disaggregated.

NIH, Division of Research Grants: total reviewed = available; percent funded = available; average award amount = available; years covered = 1977-88; quality = 3.

VA, Office of Research & Development: total reviewed = available; percent funded = available; average award amount = available; years covered = 1980-88; quality = 3.

Quality index: 3 = all categories of data; 2 = 2 of 3 categories of data; 1 = 1 of 3 categories of data; 0 = unusable in present form.

PAGE 21

Table 1 (cont'd.)

NASA, Office of Procurement: total reviewed = (see comment); percent funded = (see comment); average award amount = (see comment); years covered = (see comment); quality = N/A. Comment: Data provided on microfiche; relevant information must be hand calculated, too labor intensive.

NSF, Comptroller's Office: total reviewed = available; percent funded = available; average award amount = available; years covered = 1985-88; quality = 3.

NSF, National Science Board, Status of Science 1980 (Nov. 1979): total reviewed = available; percent funded = available; average award amount = available; years covered = 1977-82; quality = 3.

USDA, Office of Grants and Program Systems: total reviewed = N/A; percent funded = N/A; average award amount = available (see comment); years covered = 1980-88; quality = 1. Comment: Incomplete data and/or a new division created in some years; relevant information must be hand calculated, too labor intensive.

Quality index: 3 = all categories of data; 2 = 2 of 3 categories of data; 1 = 1 of 3 categories of data; 0 = unusable in present form.

PAGE 22

[Table 2. Trends in Funded Proposals for Competitive Grants at Selected Agencies. The scanned page is illegible; none of the table's data could be recovered.]

PAGE 23

Table 3. R&D Budgets of Federal Agencies, 1979-1989 (constant 1982 dollars, in millions)

Each row lists values for fiscal years 1979 through 1989, in order; "--" marks a value that is illegible in the source scan.

Major R&D agencies:
Defense: 16,872; 16,228; 18,295; 20,848; 22,739; 25,867; 28,861; 31,447; 32,951; 32,896; 32,375
NASA: 5,936; 6,068; 5,820; 4,589; 2,627; 2,772; 3,214; 3,350; 3,844; 4,044; 4,910
Energy: 7,086; 6,677; 6,487; 5,406; 4,962; 5,206; 5,233; 4,978; 4,811; 5,023; 5,178
NIH: 3,914; 3,715; 3,532; 3,450; 3,664; 3,964; 4,407; 4,505; 5,229; 5,487; 5,699
Other HHS: 682; 690; 695; 489; 535; 549; 559; 574; --; 741; 903
NSF: 1,091; 1,056; 1,008; 974; 1,017; 1,156; 1,281; 1,258; 1,355; 1,369; 1,469
Agriculture: 932; 828; 861; 830; 850; 905; 904; 862; --; --; 944
Transportation: 456; 459; 439; 312; 356; 453; 396; 340; 274; 271; 275
Interior: 534; 475; 434; 385; 372; 336; 351; 344; 362; 367; 397
EPA: 500; 396; 395; 285; 225; 241; 276; 291; 306; 323; 326
AID: 139; 141; 168; 171; 177; 184; 190; 198; 206; 94; 97
Commerce: 432; 414; 367; 308; 312; 336; 353; 355; 370; 353; 356
VA: 160; 157; 161; 139; 155; 188; 193; 166; 198; 170; 180
Education: 198; 155; 96; 129; 98; 94; 128; 118; 114; 119; 127

Other agencies:
Nuc. Reg. Comm.: 204; 221; 239; 221; 199; 177; 135; 111; 109; 173; 159
Smithsonian: 49; 49; 48; 56; 53; 57; 67; 63; 67; 69; 68
TVA: 51; 35; 34; 30; 20; 21; 25; 9; 69; 75; 49
Labor: 174; 288; 105; 23; 13; 15; 12; -13; 22; 21; 21
HUD: 73; 51; 40; 19; 17; 17; 15; 14; 14; 13; 13
Corps of Engineers: 34; 35; 32; 29; 28; 26; 30; 30; 27; 29; 29
Justice: 55; 46; 32; 28; 33; 22; 40; 34; 37; 36; 32
Treasury: 17; 14; 8; 9; 9; 10; 18; 16; 21; 23; 34
All other agencies: 59; 53; 34; 10; 3; 3; 4; 0; 0; 0; 0
TOTAL: 39,648; 38,251; 39,331; 38,740; 38,463; 42,598; 46,692; 49,075; 52,044; 52,660; 53,644

SOURCE: American Association for the Advancement of Science, Research & Development: AAAS Report V through Report XIV (Washington, DC: 1980-1989); and Albert H. Teich et al., Congressional Action on Research and Development in the FY 1990 Budget (Washington, DC: January 1990). Budget amounts converted to constant dollars by OTA.

PAGE 24

Table 4. Average Award Amount for Selected Agencies: New Starts (constant 1982 dollars)

FY      NSF(a)     VA           Air Force      NIH        DARPA
1978    77,765     N/A          N/A            114,620    N/A
1979    N/A        N/A          N/A            115,168    N/A
1980    90,364     N/A          59,028         114,130    314,638
1981    96,692     N/A          62,171         112,507    247,714
1982    107,816    47,285(b)    68,000         112,340    275,152
1983    113,287    47,833       73,967         114,714    248,299
1984    123,396    49,092       174,074(c)     123,408    355,843
1985    125,382    50,216       120,939        132,683    292,807
1986    123,833    52,394       154,--         133,572    282,152
1987    128,274    51,898       124,224        154,121    440,289
1988    124,203    61,251(d)    126,833        151,940    365,642

Note: the 1986 Air Force entry is only partly legible ("154,") in the source scan.

a. For fiscal years 1980-88, the amounts listed here are for average total grant size. (On average, NSF awards are for 2 years.)
b. Summer cycle only.
c. Abrupt increase from 1983 to 1985 reflects a change from single-year to multiyear awards.
d. Spring cycle only.

