Ranking of Doctoral Programs of Health Education


Ranking of Doctoral Programs of Health Education
By: Stephen J. Notaro, Thomas W. O'Rourke, and James M. Eddy

Notaro, S.J., O'Rourke, T.W., & Eddy, J.M. (2000). Ranking doctoral programs in health education. American Journal of Health Education, 31(2), 81-89.

Made available courtesy of AAHPERD: http://www.aahperd.org/

*** Note: Figures may be missing from this format of the document

Abstract:

This study ranked doctoral programs of health education based on the productivity of the faculty and the scholarly activity of doctoral students. The methodology, unique to ranking studies, uses a multiple set of variables weighted by scholars and leaders in the field of health education. Variables were articles published, citations received, journal editorships, external funding for research, student activity, student/faculty ratio, mentoring and placement, and student support. An overall ranking is provided as well as the ranking for each of the eight individual variables. Twenty-eight of the 44 doctoral programs of health education participated in this study (a response rate of 64%). Twenty-six programs had at least one variable ranked in the top 10 programs, and all programs had at least two variables ranked in the top 20. Correlation analysis of the eight variables provided additional insights. Interestingly, the four variables related to the faculty were not related to the four student variables. Implications of the ranking for administrators, faculty, students, and the health education profession are provided.

Article:

Introduction

Academic quality rankings are not available for doctoral programs of health education. The present study appears to be the first effort in the field of health education to rank programs based on the productivity of the faculty and indices on the activity of program doctoral students. The ever-increasing popularity and use of academic rankings make this study timely, as it establishes a weighted ranking system for doctoral programs of health education, with the respective weights being established by identified scholars and leaders in the field.

Rankings are available for a number of other disciplines or fields of academic study. Currently, there are two rankings that garner the most attention. U.S. News and World Report publishes an annual ranking of "America's best graduate schools" (Zuckerman, 1997). Its main focus is the fields of law, business, medicine, engineering, and education, which are ranked by a set of variables. The report also ranks a number of other disciplines simply by reputation. The second major work is from the National Research Council, which ranked over 3,000 departments in more than 40 different fields. Also, the literature contains a multitude of articles providing rankings in individual disciplines.

Rankings are an effective device used to compare items in any subject area that are difficult to measure. Certainly, academic programs are difficult to compare. Vast numbers of universities with an array of similar programs warrant a ranking system. Therefore, academic quality rankings are an important part of the literature. Rankings appear to be useful to university administrators, faculty, and students. Administrators may find academic rankings useful in the allocation and acquisition of resources (Miller, Tien, & Peebler, 1996; Scott & Mitias, 1996; Webster, 1992). Faculty have an interest in their program's or institution's standing against similar programs (Goodwin, 1995; Katz & Eagles, 1996; Lowry & Silver, 1996; Scott & Mitias, 1996). Many students use rankings to determine where to continue their academic study (Miller et al., 1996; Morrison, 1987; Scott & Mitias, 1996; Webster, 1992).

There appear to be four major types of contemporary methods to rank academic programs. First is ranking academic quality by reputation (Roush, 1995). In this method, individuals knowledgeable in a discipline or esteemed in their field are asked to rank programs. Second are academic quality rankings based on faculty productivity. Popular measurements of faculty productivity include the number of articles published and the number of citations received in the literature (Taubes, 1993; Tauer & Tauer, 1984; Webster, 1986; West & Rhee, 1995). Third are rankings based on student achievements. These include such measures as SAT and GRE (standardized) test scores. Fourth are rankings based on institutional resources. These include measures such as the number of volumes in the library, student-faculty ratio, and the amount of resources expended per student (Webster, 1986; West & Rhee, 1995).

The present study included three methods: faculty productivity, student achievement, and institutional resources, as all three can be employed to measure faculty productivity and doctoral student activity, which are the measures of academic quality in this article. Reputation, although an often-used measure of academic quality and a valuable indicator of perceived performance, was not the focus of this study because, by nature, reputation is highly subjective.

Significance of this Study

The study built and expanded on previous ranking methodologies for other disciplines and adds to the literature in at least four ways. First, although rankings exist for many disciplines and programs, there is no ranking of doctoral programs of health education. Second, many ranking systems in the literature are based either on one variable, which usually is reputation, or on a combination of articles and citations received in a restricted set of journals. This study expanded the number of variables used to rank departments and obtained a more accurate and detailed assessment of quality. Third, most multiple-standard rankings weight each variable equally or appear to assign weights arbitrarily. This study apparently is the first to establish a weighting system for multiple variables based on the input of scholars and leaders in the field. Fourth, this research will provide institutions and departments, as well as administrators, faculty, and students, with information regarding the quality of programs in health education.

Purpose of the Study

The purpose of this study was to rank doctoral programs of health education based on the academic productivity of the faculty and indices on the activity of doctoral students within the departments. The ranking builds on the work of Charles West in the field of education. West used a set of six variables to rank departments of education (West & Rhee, 1995). This study also differentiates the importance of the individual variables in a set of variables through a weighting system determined by scholars and leaders in the field of health education. The variables used to rank programs measure academic productivity by articles and citations received in key selected journals; editorships in the same selected journals; external funding for research and contracts; activity of doctoral students; doctoral student/faculty ratio; faculty mentoring and placement of doctoral students; and doctoral student support, including assistantships for teaching and research. Previous research identified leading journals of health education that would be appropriate for this study (Everett, Casler, & Summers, 1994; Laflin, Horowitz, & Nimms, 1999; Price & Robinson, 1999).

Methods

Step 1: Survey One Design

A literature review of previous ranking studies was conducted to identify valid measures to rank doctoral programs. The Step 1 survey was developed with eight identified variables to be weighted and to collect other pertinent information that could be used to rank doctoral programs of health education.

Step 2: Establishment of a Board of Reviewers

A board of reviewers was established to complete the Step 1 survey. Scholars and leaders were selected from the field of health education during the 10 years (1987-1997) who also were employed at an institution that has a doctoral program in health education. The board's knowledge of the field of health education enabled them to determine the weights for the individual variables that were used to rank the programs.
In addition to current chairs or heads of doctoral programs of health education, the subjects selected were from three prominent health education organizations: the American Association for Health Education (AAHE), the Society for Public Health Education (SOPHE), and the Public Health Education/Health Promotion Section of the American Public Health Association (APHA). The groups are not mutually exclusive, as many individuals belong to a number of the categories. The nine groups surveyed were (1) department chairs in doctoral programs of health education (from the AAHE Directory of Institutions Offering Undergraduate and Graduate Degree Programs in Health Education, 1997 edition); (2) the AAHE scholar recipients (AAHE recognizes one health education scholar annually); (3) AAHE presidents (elected every 2 years by the membership); (4) SOPHE distinguished fellows (SOPHE recognizes one distinguished fellow annually); (5) SOPHE presidents (elected annually by the membership); (6) Public Health Education and Health Promotion Section of APHA Public Health Education Distinguished Career Award recipients (the section selects a recipient annually); (7) APHA Public Health Education/Health Promotion section chairpersons (individuals elected annually by the members); (8) APHA School Health Education section chairpersons (individuals elected annually by the members); and (9) AAHE/SOPHE committee members who drafted the graduate standards for health education (elected by the leadership of national professional organizations).

A questionnaire was sent to members in these nine categories with a return envelope provided. A follow-up survey for nonrespondents was conducted about 1 month after the initial contact. The following information was collected.

Respondents were asked to place themselves in one or more of the nine categories that were applicable to their personal academic or professional status.

Only respondents identified as having been employed at an institution of higher learning in a program granting a doctorate in health education were asked to complete all the questions.

To establish a weighting system, respondents were asked to review eight proposed variables to rank doctoral programs of health education for the years 1993-1997. Respondents were asked to provide a number rating from 0 to 100 for each of the eight variables so that the total equaled 100%. The variables were as follows: (a) articles published by faculty in preeminent health education journals; (b) citations received by faculty in preeminent health education journals; (c) faculty editorships or journal board memberships in preeminent health education journals; (d) external funding for research, grants, and contracts (not training or service) in health education; (e) activity of doctoral students in health education in research, teaching, and service; (f) student/faculty ratio in doctoral programs of health education; (g) faculty mentoring of students and quality of student placement on graduation; and (h) student support (assistantships and research).

Respondents were asked to state the critical mass of faculty with health education degrees and total faculty needed to offer a quality doctoral program in health education.

Other information was solicited from respondents, including (1) how many years of past data the respondents felt should be used to rank doctoral programs and (2) whether the recently developed (1997) AAHE/SOPHE graduate standards should be used to rank the quality of programs for possible use in future studies.

Finally, the respondents were asked whether a ranking would be used, if one existed, for hiring faculty, staff consultants, or for other purposes.

Step 3: Development of Weighted Variables

The data from the Step 1 survey were analyzed utilizing the Statistical Package for the Social Sciences (SPSS). Descriptive statistics were employed to establish the weighting of variables used in this study. Correlation analyses were used to assess the degree of relationship between the weighted variables.
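
The weighting step can be reproduced with simple descriptive statistics. The following is a minimal sketch, not the study's actual code: the response values are invented, and it assumes the published weight for each variable is the mean allocation across respondents (the article reports only that descriptive statistics were used).

    import numpy as np

    # Each row is one board member's 0-100 point allocation across the
    # eight variables; the values here are hypothetical.
    variables = ["articles", "citations", "editorships", "funding",
                 "student_activity", "student_faculty_ratio",
                 "mentoring_placement", "student_support"]
    responses = np.array([
        [20, 10, 10, 25, 10, 9, 8, 8],
        [15, 12, 9, 20, 12, 10, 12, 10],
        [18, 11, 10, 22, 11, 9, 10, 9],
    ], dtype=float)

    # Each response should already total 100; renormalize defensively.
    responses = responses / responses.sum(axis=1, keepdims=True) * 100

    # The mean allocation per variable serves as that variable's weight.
    weights = responses.mean(axis=0)
    for name, weight in zip(variables, weights):
        print(f"{name}: {weight:.1f}%")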

Step 4: Survey Two Design and Implementation

The Survey Two design operationalized the weighted variables so that the survey would collect the data needed to establish a ranking of doctoral programs of health education. The time frame from which data were collected was limited to 5 years to control for distant works that may have had a substantial impact on the rankings.

Selection of Subjects and Methods

Survey Two was administered to current chairs/heads or coordinators of all 44 doctoral programs of health education in the United States. Follow-up of nonrespondents over the following 3-month period consisted of telephone contact, mailing of an additional copy of the survey, and e-mail correspondence. The chairs were asked to supply the following information.

Whether their departments had doctoral programs in health education. Those departments without a doctoral program in health education were not included in the ranking.

Descriptions of their current positions in the department or unit. Respondents could report one of the following categories: (a) department head or chair, (b) department coordinator/curriculum coordinator, (c) health education program coordinator, or (d) other (respondents were asked to specify).

A list of up to five (full- or part-time) most academically productive health education faculty or staff. To assess academic productivity, 81 refereed preeminent health education journals identified by Laflin et al. (1999) were reviewed for the 5-year period January 1993 through December 1997 pertaining to health education in terms of (a) the number of faculty articles published, (b) the number of faculty citations in other publications, and (c) the number of faculty journal and/or editorial board editorships.

The approximate total dollar value (direct and indirect costs) of health education grants and contracts for which the faculty member served only as a principal investigator or co-principal investigator (not as a subcontractor), for the 5-year period January 1993 through December 1997, that contributed directly to the following: (a) faculty and graduate student health education research; (b) health education community intervention activities or demonstration projects; (c) health education innovations, training, continuing education, and related activities; or (d) other (respondents were asked to explain). If one grant/contract contributed to more than one category, the funds were to be either divided or allocated to the most related category. Funds were not to be double counted. Grants and contracts whose primary purpose was not related to health education were not to be included.

On average, for the 5-year period January 1993 through December 1997, the approximate annual number of full-time equivalent (FTE) faculty in the administrative unit dedicated to the doctoral program of health education.

On average, for the 5-year period January 1993 through December 1997, the approximate annual number of doctoral students of health education who, as defined by the institution, were (a) full-time health education doctoral students or (b) part-time health education doctoral students.

On average, for the 5-year period January 1993 through December 1997, the approximate annual percentage of full-time doctoral students of health education who received the following: (a) a 50% or more teaching or research assistantship, (b) a 24-49% teaching or research assistantship, and (c) internal or external financial dissertation support. Only (a) and (b) are mutually exclusive.

On average, for the 5-year period January 1993 through December 1997, the approximate annual percentage of full-time health education doctoral students who, while a graduate student, (a) had sole responsibility for teaching two or more health education classes or sections; (b) had sole responsibility for teaching one health education class; (c) were appointed by the health education department with a paid assistantship to assist health education faculty with teaching a class, research projects, or service projects; or (d) served on a university, college, school, or departmental committee as a representative of the department or program.

A list of up to 10 health education students who received doctorates during the 5-year period January 1993 through December 1997, including their names and current employment with place of employment and professional title or rank.

Step 5: Ranking the Programs

Step 5 of this study analyzed the data and developed the ranking for doctoral programs of health education.

Collection of Data

Survey Two data were collected from department heads or coordinators of doctoral programs of health education. The data were tabulated and analyzed using SPSS. Descriptive statistics and correlation analyses were utilized. The following steps were performed for each of the eight variables to obtain the ranking.

Determining the Raw Score

Data for the eight variables used to rank doctoral programs of health education were totaled to obtain an overall doctoral program raw score. The raw scores for each of the variables from 1993-1997 were determined as follows.

The data for the three variables involving articles published by faculty, citations of faculty, and faculty editorships in the 81 preeminent health education journals identified by Laflin et al. (1999) were based on the sum of the five faculty members provided by each program. The numbers of articles published were collected from the 81 journals by the use of automated library indexes. Books were not used because they are not subject to peer review to the same extent as peer-reviewed articles. However, to assess the effect of books on rankings, a separate analysis was conducted correlating academic publishing productivity of articles alone with articles and books combined. The variables were nearly perfectly correlated (.98), indicating that excluding books from the analysis would not significantly affect the rankings. The indexes used were ERIC, MEDLINE, PsycINFO, and Web of Science. Citations were obtained from the Social Science Citation Index in the Web of Science index, which contained 68 of the 81 journals (84% of the journals). Citations from the remaining 13 journals were not included. Editorships were obtained for 92% of the 81 journals by reviewing the appropriate journal editions. Each raw score was the total number of faculty articles, citations, and editorships.

Information on external funding was from self-reported data from each program, with the highest amount of total dollars being the highest raw score.

Information on the student/faculty ratio was self-reported data, and the raw score was determined by dividing the approximate annual number of total doctoral students (not just graduates) by the approximate annual number of total FTE faculty per year. The lowest student/faculty ratio was considered the highest raw score. To determine the total number of doctoral students, full-time students were given a weighted value of 1.0 and part-time students were given a weighted value of 0.5, and the two weighted values were combined. To determine the number of FTE faculty, the amounts of time that full-time and part-time faculty dedicated to the health education program were combined. For example, six total faculty with two at 100% time, two at 50% time, and two at 25% time in the health education program would equal 3.5 FTE faculty.
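
A minimal sketch of this calculation follows; the function names and student figures are hypothetical, while the student weights (1.0 for full-time, 0.5 for part-time) and the faculty example come from the text above.

    # Weighted student headcount: full-time counts as 1.0, part-time as 0.5.
    def weighted_students(full_time: float, part_time: float) -> float:
        return full_time * 1.0 + part_time * 0.5

    # FTE faculty: sum each faculty member's fraction of time in the program.
    def fte_faculty(time_fractions: list[float]) -> float:
        return sum(time_fractions)

    # The article's example: two faculty at 100%, two at 50%, two at 25%.
    faculty = fte_faculty([1.0, 1.0, 0.5, 0.5, 0.25, 0.25])  # 3.5 FTE

    # A hypothetical program with 12 full-time and 6 part-time students.
    students = weighted_students(12, 6)  # 15.0

    # Lower ratios earn higher raw scores, so programs are credited on the
    # inverse ordering of this value.
    ratio = students / faculty
    print(f"student/faculty ratio: {ratio:.2f}")  # 4.29
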
To measure the activity of doctoral students in health education in research, teaching, and service, the self-reported survey data included two mutually exclusive responses: (1) the percentage of students who had sole responsibility for teaching two or more health education classes, and (2) the percentage of students who had sole responsibility for teaching one health education class. Students teaching two or more classes received a weight of 0.50, while students responsible for one class received a weight of 0.25. The remaining two items were (3) the percentage of students appointed by the health education department with a paid assistantship to assist health education faculty with teaching a class, research projects, or service projects, which received a weight of 0.25, and (4) the percentage of students who served on a university, college, school, or departmental committee as a representative of the department or program, which also received a weight of 0.25. The weighted scores were combined, with the greatest program total being considered the highest raw score.
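
The same percentage-times-weight pattern yields the student activity raw score. In this sketch only the four weights come from the article; the percentages and key names are hypothetical.

    # Weights from the article; the reported percentages are invented.
    ACTIVITY_WEIGHTS = {
        "taught_two_or_more_classes": 0.50,
        "taught_one_class": 0.25,
        "paid_assistantship": 0.25,
        "committee_service": 0.25,
    }

    def activity_raw_score(percentages: dict[str, float]) -> float:
        # Combine each self-reported percentage with its weight.
        return sum(ACTIVITY_WEIGHTS[item] * pct
                   for item, pct in percentages.items())

    example = {
        "taught_two_or_more_classes": 30.0,  # percent of students
        "taught_one_class": 20.0,
        "paid_assistantship": 60.0,
        "committee_service": 10.0,
    }
    # 0.5*30 + 0.25*20 + 0.25*60 + 0.25*10 = 37.5
    print(activity_raw_score(example))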

To measure student support for assistantships and research, three items of self-reported information were used, with the first two responses being mutually exclusive: (1) the percentage of doctoral students receiving a 50% or more teaching or research assistantship received a weight of 0.50, (2) the percentage of doctoral students receiving a 24-49% teaching or research assistantship received a weight of 0.25, and (3) the percentage receiving internal or external financial dissertation support received a weight of 0.50. The greatest total weighted score was considered the highest raw score.

To measure faculty mentoring of students and quality of placement, the department heads were asked to provide a list of up to 10 of the top students who received their doctorates during the 5-year period January 1993 through December 1997. The number of articles published and citations received was determined in the same fashion as were the faculty counts derived from the indexed journals of the set of 81 health education-related journals identified by Laflin et al. (1999). In addition, placement was determined by the number of the 10 doctoral students who were employed at Carnegie Research I institutions or at national-level health institutions/organizations such as the National Institutes of Health, the Centers for Disease Control and Prevention, and the National Arthritis Foundation. Articles and citations received a weight of 0.50, in the same proportion relative to each other as the weights applied in Survey One. The number of doctoral students employed at Carnegie Research I institutions or at national-level health institutions also received a weight of 0.50. The weighted scores were combined, with the greatest total being the highest raw score.

Determining Proportional and Weighted Scores

For each of the eight variables, the program with the highest raw score was assigned a value of 1.0, and each of the remaining scores was a proportion of 1.0. The proportion was determined by dividing each raw score by the highest raw score. For example, if 10 were the greatest number of articles published, that program would receive a value of 1.0; a second program with 5 articles would have a proportional score of 0.50. Each proportion for the eight variables was multiplied by the weighting assigned to that variable from Question 4 of Survey One, which was developed by scholars and leaders in the field of health education, to obtain a weighted score for each variable.
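
A minimal sketch of the proportional scoring step, using the article's 10-versus-5 articles example plus one invented program; the variable weight shown is illustrative, not the study's published Table 1 value.

    # Raw article counts for hypothetical programs.
    raw_articles = {"Program A": 10, "Program B": 5, "Program C": 8}

    # The highest raw score maps to 1.0; the rest become proportions of it.
    top = max(raw_articles.values())
    proportions = {p: score / top for p, score in raw_articles.items()}
    # Program A -> 1.0, Program B -> 0.5, Program C -> 0.8

    # Multiply each proportion by the variable's Survey One weight
    # (18 points out of 100 is assumed here for illustration).
    ARTICLE_WEIGHT = 18.0
    weighted = {p: prop * ARTICLE_WEIGHT for p, prop in proportions.items()}
    print(weighted)  # {'Program A': 18.0, 'Program B': 9.0, 'Program C': 14.4}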

The weighted scores for each of the eight variables were summed for a total weighted score. The total weighted scores were placed in rank order, from the highest total weighted score, ranked first, to the lowest total weighted score, ranked last. The end result is the ranking of doctoral programs of health education.
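
Putting the steps together, the final ranking reduces to summing each program's eight weighted scores and sorting in descending order, as in this sketch with invented totals:

    # Hypothetical weighted scores for the eight variables per program.
    totals = {
        "Program A": sum([18.0, 9.5, 7.0, 20.0, 8.0, 6.5, 9.0, 7.5]),
        "Program B": sum([9.0, 11.0, 10.0, 12.0, 10.0, 9.0, 6.0, 8.0]),
        "Program C": sum([14.4, 8.0, 5.0, 16.0, 7.0, 8.5, 7.0, 6.0]),
    }

    # The highest total weighted score is ranked first.
    ranking = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
    for rank, (program, total) in enumerate(ranking, start=1):
        print(f"{rank}. {program}: {total:.1f}")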

Limitations of the Study

To control for size, each program was limited to only five faculty and 10 doctoral students as reported by the program or department heads. The journals used in this study were a set of 81 journals identified by Laflin et al. (1999) as preeminent journals in health education. Citation indexes containing 68 of the 81 journals were searched by the authors' names as provided by the programs or departments. All authors listed in an index for an article received full credit for a publication or citation because some sources list authors alphabetically, making it difficult to determine which author had the greatest contribution to an article.

Results and Discussion

Rankings

The initial questionnaire was sent to 102 scholars and leaders in the field of health education. Of this group, 79 responded, giving a response rate of more than 75%. Fifty respondents met the study criteria, having been employed at an institution of higher learning in a program granting a doctorate in health education, to weight the variables in Survey One. The 50 scholars and leaders from the field of health education assigned weights to the eight variables based on a 100-point scale. The results of the weighting are presented in Table 1. External funding and articles published in journals received the highest weighting. Each variable received a weight of at least 9.0%. It is interesting to note that citations received a weight about 40% less than articles. The four variables related to faculty activity accounted for about 55% of the total weight, whereas the four variables related to doctoral students received about 45% of the total weight. This may indicate that although the faculty received the greater weighting, a high level of importance also was placed on the activity of program doctoral students.

Twenty-eight programs were represented and ranked in this study. Results of the overall composite weighted score for the top 20 doctoral programs in health education, the overall ranking based on that score, and the rankings for the eight variables based on the weighted score for each variable are presented in Table 2. Other ranked programs are listed alphabetically and not in rank order. Programs having a rank not in the top 20 on any variable are indicated by a dash. Of the eight ranked variables, four are related to faculty and four to students.

Review of the composite weighted score indicates programs are grouped into several tiers. The top tier includes the 9 highest ranked programs, headed by Indiana University and continuing through the University of Toledo. The University of South Florida stands between the top tier and the next tier of 10 programs, ranging from Southern Illinois University to Loma Linda University. A review of the rankings indicates that programs in schools of public health appear generally to rank highly, having 6 of the top 10 programs. Those six schools were the University of Texas at Houston, the University of North Carolina at Chapel Hill, the University of Illinois at Chicago, the University of South Carolina, the University of Michigan, and the University of South Florida.

Results of the rankings for the eight variables reveal several findings. Six programs had a majority of the eight variables ranked in the top 10. Indiana appears the most consistent, with a ranking in the top 10 for all eight variables, followed by the University of Toledo with seven, and the University of North Carolina at Chapel Hill, the University of South Carolina, the University of Alabama/University of Alabama at Birmingham, and the University of Maryland having five variables ranked in the top 20. All 28 programs had at least two variables ranked in the top 20. Twenty-six schools had at least one variable ranked in the top 10. The frequency of programs having a variable in the top 10 and top 20 is presented in Table 3.

Variability of the rankings for the eight variables within programs was noted. Several programs ranked in the top 10 (University of Texas at Houston, University of Illinois at Chicago, University of South Carolina, and the University of Toledo) had at least one variable not ranked in the top 20. This was offset by high rankings for other variables. For example, the second highest ranked program, the University of Texas at Houston, was not ranked in the top 20 for student activity but had the highest ranking for external funding, was tied for first in faculty article publications, and was second for citations in the literature. Similarly, the University of Illinois at Chicago did not rank in the top 20 for either student activity or student/faculty ratio but was first in citations, tied for first with the University of Texas at Houston in article publications, and second in external research funding. In contrast, the University of Toledo, which was not ranked in the top 20 for external funding, exhibited a consistently high ranking on all other variables.
These results suggest that some programs may choose to focus on specific activities, such as publishing journal articles, while others are more broadly focused.

Correlation Findings

To gain additional insight, Pearson correlation coefficients were calculated for the weighted scores for each of the eight variables and the total composite score. Results of the correlation matrix are presented in Table 4. Several findings emerged. There was a correlation of .80 between faculty journal article publications and citations in the professional literature. This indicates that faculty who publish more frequently are cited more often in the literature. Funded research was correlated with both article publication (.61) and citations (.67).

This may reflect that program faculty having greater external research funding may have greater opportunities to conduct research, are often expected by their funding sources to publish their findings, and may simply devote more time to scholarly research. Review of the rankings for funded research indicates that schools of public health generally tend to have higher rankings for both external funding and number of publications. Some of these programs may be more dependent on "soft" funding than other programs. Thus, their faculty may devote more effort to securing external funding, in contrast to other programs funded to a greater extent by "hard" money, which may require more effort devoted to a teaching mission.

Journal editorships/editorial board involvement was modestly and not significantly related to either articles (.31) or citations (.17). One might have expected faculty who are more often published and cited in the literature to be more involved in terms of editorships/editorial boards, but this does not appear to be the case.
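
The Table 4 matrix can be reproduced with standard tools. A minimal sketch, assuming a hypothetical array of weighted scores with one row per program and one column per variable; only the shape (28 programs, eight variables) comes from the article, and the random values stand in for the study's data.

    import numpy as np

    # Hypothetical weighted scores: 28 programs by 8 variables.
    rng = np.random.default_rng(0)
    scores = rng.random((28, 8))

    # Pearson correlation across variables; rowvar=False treats each
    # column as a variable, mirroring the study's correlation matrix.
    corr = np.corrcoef(scores, rowvar=False)
    print(np.round(corr, 2))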
