In primary and secondary education, measures of teacher quality are often based on contemporaneous student performance on standardized achievement tests. In the postsecondary environment, scores on student evaluations of professors are typically used to measure teaching quality. We possess unique data that allow us to measure relative student performance in mandatory follow‐on classes. We compare metrics that capture these three different notions of instructional quality and present evidence that professors who excel at promoting contemporaneous student achievement teach in ways that improve their student evaluations but harm the follow‐on achievement of their students in more advanced classes.
[snip]
[O]ur study uses a unique panel data set from the United States Air Force Academy (USAFA) in which students are randomly assigned to professors over a wide variety of standardized core courses. The random assignment of students to professors, along with a vast amount of data on both professors and students, allows us to examine how professor quality affects student achievement free from the usual problems of self-selection. Furthermore, performance in USAFA core courses is a consistent measure of student achievement because faculty members teaching the same course use an identical syllabus and give the same exams during a common testing period.5 Finally, USAFA students are required to take and are randomly assigned to numerous follow-on courses in mathematics, humanities, basic sciences, and engineering. Performance in these mandatory follow-on courses is arguably a more persistent measurement of student learning. Thus, a distinct advantage of our data is that even if a student has a particularly poor introductory course professor, he or she still is required to take the follow-on related curriculum.6
[snip]
Our findings show that introductory calculus professors significantly affect student achievement in both the contemporaneous course being taught and the follow-on related curriculum. However, these methodologies yield very different conclusions regarding which professors are measured as high quality, depending on the outcome of interest. We find that less experienced and less qualified professors produce students who perform significantly better in the contemporaneous course being taught, whereas more experienced and highly qualified professors produce students who perform better in the follow-on related curriculum.
[snip]
Results show that there are statistically significant and sizable differences in student achievement across introductory course professors in both contemporaneous and follow-on course achievement. However, our results indicate that professors who excel at promoting contemporaneous student achievement, on average, harm the subsequent performance of their students in more advanced classes. Academic rank, teaching experience, and terminal degree status of professors are negatively correlated with contemporaneous value-added but positively correlated with follow-on course value-added. Hence, students of less experienced instructors who do not possess a doctorate perform significantly better in the contemporaneous course but perform worse in the follow-on related curriculum.
Student evaluations are positively correlated with contemporaneous professor value-added and negatively correlated with follow-on student achievement. That is, students appear to reward higher grades in the introductory course but punish professors who increase deep learning (introductory course professor value-added in follow-on courses). Since many U.S. colleges and universities use student evaluations as a measurement of teaching quality for academic promotion and tenure decisions, this latter finding draws into question the value and accuracy of this practice.
These findings have broad implications for how students should be assessed and teacher quality measured. Similar to elementary and secondary school teachers, who often have advance knowledge of assessment content in high-stakes testing systems, all professors teaching a given course at USAFA have an advance copy of the exam before it is given. Hence, educators in both settings must choose how much time to allocate to tasks that have great value for raising current scores but may have little value for lasting knowledge. Using our various measures of quality to rank-order professors leads to profoundly different results. As an illustration, the introductory calculus professor in our sample who ranks dead last in deep learning ranks sixth and seventh best in student evaluations and contemporaneous value-added, respectively. These findings support recent research by Barlevy and Neal (2009), who propose an incentive pay scheme that links teacher compensation to the ranks of their students within appropriately defined comparison sets and requires that new assessments consisting of entirely new questions be given at each testing date. The use of new questions eliminates incentives for teachers to coach students concerning the answers to specific questions on previous assessments.
[snip]
Students at USAFA are high achievers, with average math and verbal Scholastic Aptitude Test (SAT) scores at the 88th and 85th percentiles of the nationwide SAT distribution.8 Students are drawn from each congressional district in the United States by a highly competitive process, ensuring geographic diversity. According to the National Center for Education Statistics, 14 percent of applicants were admitted to USAFA in 2007. Approximately 17 percent of the sample is female, 5 percent is black, 7 percent is Hispanic, and 6 percent is Asian. Twenty-six percent are recruited athletes, and 20 percent attended a military preparatory school. Seven percent of students at USAFA have a parent who graduated from a service academy and 17 percent have a parent who previously served in the military.
[snip]
Does Professor Quality Matter? Evidence from Random Assignment of Students to Professors
Scott E. Carrell
University of California, Davis and National Bureau of Economic Research
James E. West
U.S. Air Force Academy
6 For example, students of particularly bad Calculus I instructors must still take Calculus II and six engineering courses, even if they decide to be a humanities major.
Saturday, June 26, 2010
student evaluations & college calculus at the USAFA
Thursday, April 22, 2010
missing knowledge
Barak Rosenshine on teacher effectiveness research:
Return of the repressed.
I suggest that the teacher effects era, between 1955 and 1980, was an impressive run of cumulative research. During this period, over 100 correlational and experimental studies were conducted using a common design, and the different observation instruments shared many common instructional procedures. And it was cumulative: researchers cited and built upon the instructional findings of others.

After several years of immersion in the education world, I am just now finding out about this.
success or failure
Constructivist Instruction: Success or Failure by Sigmund Tobias and Thomas M. Duffy
I discovered the book while Googling Barak Rosenshine.
Until last week, reading the Direct Instruction list, I had never heard of Barak Rosenshine. He is apparently Doug Lemov's predecessor;* he did what Lemov did, which was to find highly effective teachers and then watch what they were doing.
Later on, he trained teachers in the techniques and compared their students' achievement to that of students in classrooms with conventional teachers. The teachers trained in direct instruction were more effective.
* Please correct me if I'm wrong - just getting into this. Rosenshine has researched teacher effectiveness and summarized the research; I'm not sure which studies he did.