

ADMISSION FACTORS RELATED TO SUCCESS IN DOCTORAL PROGRAMS IN VOCATIONAL-TECHNICAL EDUCATION IN TEXAS AND OKLAHOMA

DISSERTATION

Presented to the Graduate Council of the University of North Texas in Partial Fulfillment of the Requirements For the Degree of

DOCTOR OF PHILOSOPHY

By

Ross O'Neal Roberts, B.S., M.Ed.

Denton, Texas

August, 1989

Roberts, Ross O'Neal, Admission Factors Related to Success in Doctoral Programs in Vocational-Technical Education in Texas and Oklahoma. Doctor of Philosophy (Vocational-Technical Education), August, 1989, 114 pp., 18 tables, bibliography, 26 titles.

This study identified the admissions criteria for selected doctoral programs in vocational-technical education in Oklahoma and Texas and investigated the relationship of these criteria to success in the doctoral programs. Success in the doctoral programs was identified in terms of cumulative doctoral grade point average. Data were obtained through a questionnaire designed both to elicit general information concerning admissions criteria for vocational-technical doctoral programs at the selected institutions and to collect specific information on a random sample of twenty doctoral candidates from each of the four selected institutions. Factors considered included birthdates, gender, scores on admissions tests, grade point average in the master's program, the year the latest master's was completed, number of colleges attended, and cumulative doctoral grade point average.

A statistical analysis using nine separate one-way analyses of variance determined that four of the nine factors considered proved to be statistically significant at the .05 level or better when correlated with the

criterion variable (cumulative doctoral grade point average). Those factors were gender, Graduate Record Examination verbal and composite scores, and master's grade point average.

The results of the study basically parallel findings of research concerning admissions criteria and success in graduate programs in other areas. Additional research efforts should address the issue of determining the most appropriate decision logic model for making admissions decisions in programs at the graduate level.
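As a concrete illustration of the procedure summarized above, the following is a minimal sketch of a single one-way analysis of variance testing whether mean cumulative doctoral grade point average differs by one admissions factor (gender), judged at the .05 level used in the study. The GPA values and group sizes are hypothetical placeholders, not data from this study.

    # Sketch of one of the nine one-way analyses of variance:
    # does mean cumulative doctoral GPA differ across the levels of a
    # single admissions factor (here, gender)?
    # All GPA values below are hypothetical, not the study's data.
    from scipy import stats

    gpa_group_a = [3.92, 3.85, 3.78, 3.95, 3.88, 3.70]  # e.g., female candidates
    gpa_group_b = [3.65, 3.80, 3.55, 3.72, 3.60, 3.68]  # e.g., male candidates

    f_statistic, p_value = stats.f_oneway(gpa_group_a, gpa_group_b)
    print(f"F = {f_statistic:.2f}, p = {p_value:.4f}")

    # The study treated a factor as significant at the .05 level or better.
    if p_value < 0.05:
        print("Factor is statistically significant at the .05 level")

In the study itself, a test of this kind would be repeated once for each of the nine admissions factors against the same criterion variable.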

TABLE OF CONTENTS

                                                             Page

LIST OF TABLES . . . . . . . . . . . . . . . . . . . . . . . . iv

Chapter

I.   INTRODUCTION AND SIGNIFICANCE . . . . . . . . . . . . . . . 1

         Statement of the Problem
         Purposes of the Study
         Hypothesis
         Background and Significance of the Study
         Limitations of the Study
         Delimitations of the Study
         Definition of Terms

II.  REVIEW OF RELATED LITERATURE . . . . . . . . . . . . . . . 15

         Admissions Tests as Predictive Criteria
         Studies of Multidimensional Assessment Strategies for Graduate Programs
         Perceptions of Applicants Concerning Admissions Criteria and Procedures
         Directions for Further Research

III. METHODOLOGY . . . . . . . . . . . . . . . . . . . . . . . 32

         Population Selection
         Development of the Instrument
         Treatment of the Data

IV.  PRESENTATION AND ANALYSIS OF DATA . . . . . . . . . . . . 36

         Presentation of the Data
         Summary of Findings

V.   SUMMARY, FINDINGS, CONCLUSIONS, AND RECOMMENDATIONS . . . 55

         Summary
         Findings
         Conclusions and Recommendations

APPENDICES . . . . . . . . . . . . . . . . . . . . . . . . . . 63

BIBLIOGRAPHY . . . . . . . . . . . . . . . . . . . . . . . . .112

iii

LIST OF TABLES

Table                                                           Page

 1. Mean Grade Point Average by Age Groups . . . . . . . . . . . 39
 2. Analysis of Variance Summary by Age Groups . . . . . . . . . 40
 3. Mean Grade Point Average by Gender . . . . . . . . . . . . . 41
 4. Analysis of Variance Summary by Gender . . . . . . . . . . . 42
 5. Mean Grade Point Average by GRE-Verbal Scores . . . . . . . 43
 6. Analysis of Variance Summary by GRE-Verbal Scores . . . . . 44
 7. Mean Grade Point Average by GRE-Quantitative Scores . . . . 45
 8. Analysis of Variance by GRE-Quantitative Scores . . . . . . 45
 9. Mean Grade Point Average by GRE-Composite Scores . . . . . . 46
10. Analysis of Variance Summary by GRE-Composite Scores . . . . 47
11. Mean Grade Point Average by MAT Scores . . . . . . . . . . . 48
12. Analysis of Variance Summary by MAT Scores . . . . . . . . . 48
13. Mean Grade Point Average by Master's GPA . . . . . . . . . . 50
14. Analysis of Variance Summary by Master's GPA . . . . . . . . 51
15. Mean Grade Point Average by Year of Master's Degree . . . . 52
16. Analysis of Variance Summary by Year of Master's Degree . . 52
17. Mean Grade Point Average by Number of Colleges . . . . . . . 53
18. Analysis of Variance Summary by Number of Colleges . . . . . 54

iv

CHAPTER I

INTRODUCTION AND SIGNIFICANCE

One of the primary concerns of any institution of higher learning, and of any department within any of those institutions of higher learning, is developing a set of admissions criteria which can aid in the selection of candidates for admission who are most likely to fulfill the degree requirements of the program. A variety of approaches has been attempted, but none of them has been completely accurate in predicting success of students in academic programs.

Determining an appropriate set of admissions criteria becomes increasingly important as the academic programs become more stringent at the higher levels of the master's and doctoral degrees. The costs in terms of time, dollars (both personal and institutional), and personal commitment involved in pursuing a higher degree make it imperative that admissions criteria enable decision makers to select candidates who have the potential to succeed.

One of the primary admissions criteria used in graduate programs is a test score or a combination of test scores. For a large percentage of graduate programs, these scores may come from the Graduate Record Examination (GRE).

Other programs may use the Miller Analogies Test (MAT) or a subject-matter-oriented test. If survival time (1939 to the present) and high usage were the basic criteria for success in predicting likelihood of success in graduate programs, the GRE would indeed be a superb instrument (10). In fact, the GRE-T score (a combination of the verbal and quantitative components of the GRE) is the single most frequently applied admissions requirement for graduate schools (1).

However, there is widespread concern over the inappropriate use of test scores in admissions. For example, when Marston (5) and Thacker and Williams (9) reviewed various predictive studies, they questioned the desirability of the widespread employment of the GRE for predicting performance in graduate school (5, 9). Furst and Roelfs concluded in 1979 that evidence of the predictive validity of the various forms of the GRE had indeed been mixed (3).

In addition to questions raised concerning predictive validity of admissions tests, particularly the GRE, there are additional ethical issues which must be addressed. Individual accounts of denial of admissions based on erroneous reporting of test scores, for example, lend additional credence to the concern. Claims of ethnic, sexual, or socio-economic bias from individuals and groups bring ethical issues to the forefront during examinations

of admissions and/or graduation criteria for graduate programs.

In Texas, this concern is reflected in a series of proposed bills filed in the State Legislature of Texas. Many of these bills are designed to try to insure that institutions of higher education do not rely solely or unfairly on test scores in their admission policies. Examples include Senate Bill 29 authored by Senator Truan (6), House Bill 325 authored by Representative Luna (7), and Senate Bill 993 authored by Senator Parker (8). Copies of these bills are included in Appendix A.

Dr. Frederick H. Dietrich, Vice-President of the Program Division, College Entrance Examination Board, testified before the Select Committee on Higher Education of the State of Texas on March 13, 1986, where he stressed the importance of assuring access to higher education at the undergraduate level. He further emphasized that failure to recognize differences between common and compulsory education (which includes high school, and which, he says, undergraduate education is becoming) and higher education poses a very real threat to higher education. One particular danger he identifies is the common practice of using graduate level admissions tests as indicators of the quality of undergraduate programs or as assessments of individual achievement rather than as predictors of success in graduate programs. He further

cautions against overreliance on these test scores as either the sole or the primary criterion for determining appropriateness of admission to graduate programs. He concludes that other measures of academic success and promise are available, especially courses taken and the grades achieved (2). A copy of Dr. Dietrich's address is included in Appendix B.

Concern over identifying admissions criteria that are predictive rather than reflective is certainly not the only consideration for those involved in the issue of admissions to graduate programs. Inherent in the philosophy of those institutions which rely solely or heavily on admissions test scores, academic achievement, and/or assessment of ability is the assumption that such factors measure what is essential for success in an academic environment, excluding such factors as motivation, creativity, personal honesty, intuition, and characteristics of social responsibility and sensitivity. Each of these factors has been determined to play a key role in life success, yet each is often excluded from consideration when determining whether or not a given individual will be able to obtain an education commensurate with his or her life goals.

While test scores have often been shown to be closely correlated with ultimate success in graduate programs (i.e., graduation), there is no concomitant body of research to determine how well those excluded from programs

based on test scores might have done. Norman Gronlund points out in Measurement and Evaluation in Teaching that test scores provide only one type of information and should always be supplemented by past records of achievement and other types of assessment data. No major educational decision, he concludes, should ever be based on test scores alone (4).

After reviewing a broad spectrum of literature on the subject of admissions criteria for graduate programs, with particular emphasis on doctoral programs in vocational education, it became apparent that admissions tests, standardized or otherwise, should not be the sole criteria for admission to graduate-level programs. Consequently, it also became evident that it is incumbent upon those actively involved in enhancing higher education to determine what is, in fact, a more appropriate method for determining who is more likely to succeed in graduate programs of education, with particular emphasis on graduate programs in the field of vocational education. This study was designed with such a goal in mind.

To define the focus of the study and to provide preliminary background information, a pilot survey was conducted to determine exactly what admissions criteria are currently being employed in making determinations for admission to doctoral programs in the field of vocational education. For this survey, all thirty-four institutions

of higher education identified as offering doctoral programs in the field of vocational education were sent questionnaires requesting information concerning degree offerings in vocational education and selection and admission criteria for the period 1980-1985. A copy of the survey used is included in Appendix C. A list of surveyed institutions is included in Appendix D. Of the thirty-four institutions surveyed, nineteen responses were received. Thirteen of those responses contained usable data on degree offerings and selection or admissions criteria. Admissions factors identified by the survey included admissions tests, grade point average (GPA), personal interview, recommendations, teaching experience, work experience, and writing samples. Institutions were also asked if they ever waived scores on admissions tests if the scores fell below a cut-off score that had been established.

Of the thirteen institutions responding, twelve used admissions tests as one of the admissions criteria. Nine universities (69%) accepted scores from either the GRE or the Miller Analogies Test (MAT). Three (23%) required the GRE, and one (8%) did not require an admissions test. Cut-off scores ranged from 950 to 1000 composite score on the GRE (verbal and quantitative scores). One university required a composite of 1500 on the verbal, quantitative, and analytical components of the GRE. Cut-off scores on the MAT ranged from 40 to 55.
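As a quick arithmetic check, the percentages quoted in this discussion of the pilot survey follow directly from the counts out of the thirteen responding institutions; the short sketch below recomputes them (rounding to whole percentages is an assumption about how the figures were reported).

    # Recomputing the pilot-survey percentages from the counts in the text.
    responding = 13  # institutions that supplied usable responses
    counts = {
        "accepted GRE or MAT scores": 9,   # reported as 69%
        "required the GRE": 3,             # reported as 23%
        "required no admissions test": 1,  # reported as 8%
    }
    for label, n in counts.items():
        print(f"{label}: {n} of {responding} = {round(100 * n / responding)}%")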

Of the institutions using admissions tests as one of the primary admissions criteria, ten (77%) had no waiver provision for test scores. Two institutions (15%) allowed waivers based on high GPAs and recommendations. The remaining institution had no requirement for admissions tests.

Ten of the institutions used minimum grade point averages as an admissions criterion. The remaining three required no specific GPA. Four institutions (31%) required a GPA of 2.5 or above; four other institutions (31%) required a GPA of 3.0 or above. The remaining two institutions (15%) required minimum GPAs of 3.5 or above.

Personal interviews with potential doctoral students were also popular admissions criteria. Eight of the institutions (62%) required personal interviews of candidates prior to admission to the graduate program. The remaining seven institutions had no requirement for interviews. Requirements for interviews were coupled with requirements for personal recommendations in all cases. However, three institutions (23%) which did not require interviews required recommendations.

A surprising number of institutions (six, or 46%) required teaching experience. The remaining seven institutions listed teaching experience as desirable. Only one institution (8%) required work experience. Interestingly, that institution was the same institution

which listed no other admissions criteria. However, eleven institutions (85%) said that work experience was desirable.

The final criterion examined was a writing sample. The writing sample was required by only one institution (8%), a rather surprising finding when one considers that writing is identified in much of the literature as the single skill most essential to success in graduate programs.

Based on information obtained from the pilot survey, it seemed apparent that there were several alternatives to using a single admissions criterion. Preliminary review of the findings did not reveal the basis on which admissions criteria were established, and none of the literature reviewed addressed why specific admissions criteria were selected by given institutions. To try to determine what factors might be the best predictors of success in a graduate program, the scope of this study was limited to a regional survey so that correlations between admissions criteria for doctoral programs in vocational education and success in the programs could be studied in depth.

Statement of the Problem

The problem investigated in this study was twofold. The first aspect of the problem addressed the identification of admissions criteria for selected doctoral programs in vocational-technical education. After these factors were identified, the second aspect of the problem

addressed correlation of the admissions factors with success in the doctoral programs as evidenced by grade point average in the programs.

Purposes of the Study

The purposes of this study are described below:

1. To identify the admissions criteria which are currently used by selected institutions offering doctoral programs in the area of vocational-technical training.

2. To determine the extent to which these criteria correlate with success in the doctoral program entered when success in the program is defined as the grade point average achieved in the program.

3. To determine the relative predictive value of these admissions criteria.

Hypothesis

For purposes of this study, the following hypothesis was tested:

Null Form: There will be no significant difference in rankings of grade point averages of students who are admitted to vocational-technical doctoral programs and their respective rankings on a variety of admissions factors.

Alternate (Working) Hypothesis: Rankings of grade point averages of students in vocational-technical doctoral

10

programs will be significantly related to their rankings on a variety of admissions factors.

Background and Significance of the Study

While identifying appropriate screening mechanisms for determining admissions criteria for entry into education programs at all levels has been a matter of discussion, debate, and almost outright warfare in the educational community basically since the inception of an "educational system," it has become a critical issue in the field of vocational education in the 1980s and 1990s. Increasing fiscal constraints plague vocational-technical training programs in the private, public, and government sectors. Consequently, increasing emphasis is being placed on insuring maximum benefit for training dollars, including increased emphasis on accountability associated with the expenditure of those funds.

In light of this current climate, insuring that the "right" students (i.e., those who have the highest probability of succeeding when compared to the total population of applicants for the program) are accepted into educational programs, especially doctoral programs in vocational-technical training, becomes an essential factor in fiscal responsibility and accountability. If it is possible to more accurately predict which admissions factors correlate most closely with success in a doctoral

11

vocational-technical training program (grade point average), it may be possible to insure better use of training dollars available. An additional, and perhaps equally important, benefit would be the possibility of more appropriately using human resources in the form of both instructors and students.

Because this is one of the initial studies specifically targeting doctoral programs in vocational-technical education and training, it is hoped that the results of this study can form the foundation for further research which may enhance the effectiveness of the screening process for admissions to doctoral programs in vocational-technical education and training. While the results cannot be expected to be conclusive, they can certainly provide additional direction in the search for the best admissions policies and procedures.

Limitations of the Study

A limitation of this study was the inability to survey persons who failed to meet the admissions standards and were never admitted to graduate school.

Delimitations of the Study

A delimitation was that the detailed sample of student data was limited to doctoral programs in vocational-technical education in Texas and Oklahoma. In addition, the study was limited to doctoral students who were

12

admitted to graduate study during the 1980-1988 time period. The questionnaire further delimits the sample to twenty cases per institution. The sample from each institution was selected at random to be representative of the population at that institution.

Definition of Terms

Terms used in the context of this study are common terms in the field of education and training. For that reason, no special or specific definitions are required.

CHAPTER BIBLIOGRAPHY

1. Camp, Joseph, and Thomas Clawson. 1979. The relationship between the Graduate Record Examinations Aptitude Test and graduate grade average in a Master of Arts in Counseling. Educational and Psychological Measurement 39 (Summer): 429-431.

2. Dietrich, Frederick H. 1986, March. Report from the College Entrance Examination Board. Address presented to the Select Committee on Higher Education of the State of Texas, Tyler, TX.

3. Furst, Edward J., and Pamela J. Roelfs. 1979. Validation of the Graduate Record Examinations and the Miller Analogies Test in a doctoral program in education. Educational and Psychological Measurement 39 (Summer): 147-151.

4. Gronlund, Norman E. 1985. Measurement and evaluation in teaching. New York: Macmillan Publishing Company.

5. Marston, A. R. 1971. It is time to reconsider the Graduate Record Examination. American Psychologist 26 (July): 653-655.

6. Texas Legislature. Senate. 1987. SB 29. (Senator Truan). Proposed bill relating to standardized tests used by public educational institutions. Austin, TX.

7. Texas Legislature. House. 1987. HB 325. (Representative Luna). Proposed bill relating to standardized tests used by public education institutions. Austin, TX.

8. Texas Legislature. Senate. 1987. SB 993. (Senator Parker). Proposed bill relating to the testing of and remedial education opportunities for students at public institutions of higher education. Austin, TX.

9. Thacker, A. J., and R. E. Williams. 1974. The relationship of the Graduate Record Examination to grade point average and success in graduate school. Educational and Psychological Measurement 34 (Winter): 939-944.

13

14

10. Tyler, L. E. 1972. A review of the Graduate Record Examinations: National program for graduate school selection. In Seventh Mental Measurements Yearbook, ed. O. K. Buros, 2: 1030-1032. Highland Park, NJ: The Gryphon Press.

CHAPTER II

REVIEW OF RELATED LITERATURE

Literature reviewed for this study was heavily oriented toward admissions testing as a primary criterion for admissions to graduate programs. Because the GRE is one of the most widely recognized and utilized admissions tests, innumerable articles, studies, and discourses addressed virtually every combination of correlations between the GRE and "success" imaginable. For no other single admissions criterion was such an array of information available. However, a number of studies examined correlations between GRE scores and several other commonly applied admissions criteria, such as grade point average, age, and quality of undergraduate programs or other graduate programs attended. Because the GRE has been in use in various forms since about 1946, much of the data is relatively old, with many of the studies being conducted in the 1960s and 1970s.

When the GRE is taken in isolation as a predictive criterion for success in graduate programs, the evidence of predictive validity of the various forms of the GRE has certainly been mixed. Evidence of the widespread confidence in the predictive validity of the GRE manifests itself in the nearly universal application of the GRE as at

15

16

least a major admissions criterion, if not a sole admissions criterion. Based on a common-sense approach to the application of test scores and cautions against their misuse, it is understandable that studies would routinely and consistently be conducted to determine whether or not GRE scores are, indeed, valid predictors of success in graduate programs.

Because this study focuses on graduate programs at the doctoral level, literature addressing GRE predictive validity at that level is included in this review. Studies conducted at the master's level are also included when other researchers found them to be particularly relevant. The literature review included in this chapter is presented basically in chronological order, with minor deviations in sequence as required to preserve logical order.

Admissions Tests as Predictive Criteria for Success in Doctoral or Masters Programs

In 1975, John Nagi conducted a study of 63 graduate students in a doctoral program in Educational Administration at the State University of New York at Albany. Thirty-three of these students completed the program; thirty did not. In this study, Nagi addressed the predictive validity of both the Graduate Record Examination (GRE) and the Miller Analogies Test (MAT). The dependent

17

variable was completion/non-completion of the program. Using a point-biserial correlation, Nagi found no statistically significant correlation between the dependent variable and either the GRE or the MAT score. Nagi therefore concluded that neither the GRE nor the MAT was an effective predictor of success in the doctoral program, and further concluded that his study bore out similar studies by W. R. Borg and other researchers (12).

In 1979, a study by Arthur A. Dole and Andrew R. Baggaley revealed similar results, but against a dependent variable of averaged rankings by faculty members on scholarship and professionalism of students enrolled in doctoral programs in a number of fields. Dole and Baggaley added different independent variables of undergraduate and graduate GPA, age at time of admission into the doctoral program, a selectivity index indicating the quality of institutions attended (Astin's index), gender, and honors (primarily awards and published work). Somewhat surprisingly, age had the highest correlation with both dependent variables. Grade point averages also correlated significantly with both criteria. The GRE-Verbal score and the selectivity index also correlated somewhat less strongly, but still significantly, with scholarship but not with professionalism. None of the other predictors, including the GRE-Quantitative score, showed significant correlations with either of the criterion variables. Dole

18

and Baggaley therefore concluded that the GRE can serve a "modest but useful function" in predicting success in programs at the doctoral level if it is used in concert with other predictive devices (6).

Another study published in 1979 by Edward J. Furst and Pamela J. Roelfs of the University of Arkansas examined the predictive validity of the GRE and the MAT in a doctoral program in Education over a nine-year period. Criterion variables were devised based on the requirement for disciplined thinking in an analytical exercise selected by the researchers in conjunction with grades in statistics and educational research and a sum of these grades. Over 300 subjects were included in the study. Correlations using a variety of combinations of predictors revealed that the GRE-Verbal and GRE-Total were valid indicators of potential success in graduate level work in education, but that the predictive validity was substantially enhanced when combinations of predictors were used (7).

David J. Hebert and Alan Holmes conducted a study in 1979 at the master's level, studying only the predictive validity of different components of the GRE. Using data from 67 students admitted to the University of New Hampshire Master of Education program, they correlated the GRE-Verbal, GRE-Quantitative, and GRE-Total scores with the graduate grade point averages. They found statistically significant relationships between the GRE-Verbal and the GRE-Total and the subjects' graduate grade point averages.

19

However, the GRE-Quantitative score did not correlate significantly with graduate grade point average. Additionally, the researchers pointed out that there was a negative correlation between GRE-Quantitative and graduate grade point average, with those scoring lower on the GRE-Quantitative receiving higher graduate grade point averages. Consequently, they questioned the use of GRE-Total scores, because the GRE-Quantitative score forms part of that composite. They also cautioned that generalizations concerning the validity of using the GRE across a variety of departments are suspect. They suggest that the most useful information concerning the predictive value of the GRE is local data relevant to a specific department and that each department should undertake its own study to determine specific local relevance of the GRE and its subscores (9).

Another study at the master's level (also 1979), conducted by Joseph Camp and Thomas Clawson of the University of North Florida, supported Holmes' findings. Camp and Clawson studied the predictive validity of the GRE and its subscores with graduate grade point average for 135 students in a Master of Arts program in counseling. Again, GRE-Verbal correlated significantly with graduate grade point average; GRE-Quantitative did not. While the GRE-Total score also correlated significantly with graduate GPA, Camp and Clawson suggest that the quantitative portion

20

of the total score could actually detract from the validity of the GRE-Total score (3).

A later study by Javaid Kaiser of the University of Kansas was presented as a paper at the annual meeting of the Rocky Mountain Educational Research Association in 1982. In this study, Kaiser studied the predictive validity of the GRE along with a number of other predictors. Using graduate grade point average as the criterion variable and GRE-Verbal, GRE-Quantitative, GRE-Total, undergraduate grade point average, graduate grade point average, major field of study, sex, and year of enrollment as predictors, Kaiser collected data for 356 students in education and 51 in computer science and used stepwise multiple regression to analyze the data. Kaiser concluded that the GRE-Verbal score was the single best predictor of success in graduate school in education when success is defined by graduate grade point average. However, the predictive validity did not hold true for the GRE-Quantitative or for the GRE-Total. In addition, using undergraduate grade point average, sex, and year of enrollment did not increase the predictability significantly. The composite of GRE-Verbal scores and undergraduate GPA was determined to be the best set of predictors. For the computer science students, none of the factors contributed significantly to prediction of the criterion variable. However, data did confirm that

21

undergraduate GPA was a better predictor than the GRE scores (10).

The results of these studies are not inconsistent with the philosophy of the Educational Testing Service or the Graduate Record Examinations Board, which administer the GRE. In fact, in a Spring 1988 GRE Board Newsletter, the Board cautions that scores should not be added together and the total used as a predictor of success in a particular graduate program. Even if the scores happened to be perfect predictors for a given program (which, the Board adds, they are not), each program would require a unique mix of abilities which would best predict success in the program. In addition, the Educational Testing Service believes it is a misuse of prudent testing procedures to establish a cutoff score based on the GRE-Total score, because some students who may possess the appropriate mix of abilities would never pass an initial screener if the GRE-Total score was not high enough to meet or exceed the cutoff point. If such cutoff scores are published, many very capable students with a high probability of doing well in a given program might never even apply. Finally, the Board says, we must address the fact that there are a number of very bright people who simply do not test well but who should not be excluded from graduate programs without corroborating the indications of the test (11).
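To make the predictor-criterion analyses reviewed in this section more concrete, the sketch below first correlates GRE-Verbal scores with graduate grade point average (the simple relationship examined by Hebert and Holmes and by Camp and Clawson) and then fits an ordinary least-squares model on the GRE-Verbal plus undergraduate GPA combination that Kaiser's stepwise procedure selected. This is a plain two-predictor fit rather than a full stepwise regression, and all scores and grade point averages shown are hypothetical illustrations, not data from the studies cited.

    # Illustrative sketch of the predictor-criterion analyses reviewed above.
    # All values are hypothetical; they are not data from the cited studies.
    import numpy as np

    gre_verbal    = np.array([450, 520, 580, 610, 640, 700, 730, 760], dtype=float)
    undergrad_gpa = np.array([2.9, 3.1, 3.0, 3.4, 3.3, 3.6, 3.5, 3.8])
    graduate_gpa  = np.array([3.2, 3.4, 3.5, 3.6, 3.6, 3.8, 3.8, 3.9])

    # Simple correlation of a single predictor with the criterion variable.
    r = np.corrcoef(gre_verbal, graduate_gpa)[0, 1]
    print(f"GRE-Verbal vs. graduate GPA: r = {r:.2f}")

    # Two-predictor least-squares fit: GRE-Verbal plus undergraduate GPA,
    # the combination Kaiser identified as the best set of predictors.
    X = np.column_stack([np.ones_like(gre_verbal), gre_verbal, undergrad_gpa])
    coefficients, _, _, _ = np.linalg.lstsq(X, graduate_gpa, rcond=None)
    intercept, b_verbal, b_ugpa = coefficients
    print(f"predicted GPA = {intercept:.3f} + {b_verbal:.5f} * GRE-V + {b_ugpa:.3f} * UGPA")

A departmental study of the kind the literature recommends would substitute local applicant records for these placeholder arrays and report the resulting correlations for its own program.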

22

Studies of Multidimensional Assessment Strategies for Graduate Programs

Altho
