The Case for Teach for America

I am an enthusiastic participant in mentoring programs, and I have always enjoyed teaching and tutoring. Teach for America (TFA) therefore seems like a logical potential postgraduate destination. But it is with some dismay that I have learned that many regard TFA as controversial. To quote one Harvard University editorial, TFA is “working to destroy the American public education system” through a nefarious combination of sending unprepared twenty-two-year-olds into the most challenging classrooms in America and replacing career teachers. A bit more digging reveals a veritable bookshelf of criticism: a Washington Post op-ed citing TFA as an “experiment in ‘resume-padding’ for ambitious young people” and an Atlantic article eviscerating TFA for its lack of teacher training are just two of the many, many examples of TFA-related criticism I found in a simple Google search.

Yet these criticisms seem largely misplaced. While it is true TFA might be guilty of attracting more than a few “resume-padders” not concerned with American education, the program seems to have benefitted both students and TFA participants.


American students are the most obvious beneficiaries of TFA. On average, TFA teachers move secondary math students forward an extra 2.6 months in one school year. And in the three states (Louisiana, North Carolina, and Tennessee) that rank teacher preparation programs based on student outcomes, TFA comes in near the very top. These results indicate that the quality of education students receive from TFA teachers is at least as good as, and probably better than, that provided by ordinary teachers.

TFA critics, conceding this point, might respond that TFA is a mere band-aid on America’s gunshot wound of an education problem. Yes, TFA teachers might be of high quality, but what is the point of a high quality teacher if he or she is only in the game for two years (the length of the TFA commitment)? The answer: TFA alumni are remarkably dedicated to improving American educational outcomes, and more broadly, to addressing American poverty. A whopping 86 percent of TFA alumni work in education or in “professions related to improving lives in our most marginalized communities,” in the words of TFA founder Wendy Kopp. Sixty-four percent of alumni work full-time directly in education.

Additionally, TFA’s most recently reported acceptance rate was about 11 percent, from an applicant pool that includes one in five Harvard seniors. TFA is thus a valuable tool for recruiting some of America’s most talented college students and steering them toward incredibly important work. And the alumni figures above suggest that many members make lasting commitments to the TFA mission, which undercuts the charge that they lack real passion.

Another criticism of TFA centers on TFA’s perceived usurpation of career teachers. “I don’t think you’ll find a city that isn’t laying off people to accommodate Teach for America,” said Boston Teachers Union President Richard Stutman in 2009. Resentment among teachers’ unions is high, yet this resentment seems misplaced. Even after TFA hires members, those members still must interview for jobs “just like everyone else,” says TFA spokeswoman Kerci Marcello Stroud. And anyway, TFA members are most often placed in the most low-income, low-achieving schools within cities and school districts, schools many tenured teachers avoid like the bubonic plague. Announcements of schools laying off hundreds of teachers while sparing TFA members are misleading, since, as one superintendent, Peter Gorman, said, TFA teachers are “placed at schools…where the placement of personnel has proven to be difficult.” It’s hard to blame school districts for sparing some of their best teachers (see evidence above) at their worst schools.

I have no experience with TFA, directly or indirectly. I have never even met someone who has taught through TFA. Yet I feel comfortable throwing my support behind this organization. As noted Harvard economist Raj Chetty has demonstrated, just one year of schooling under a teacher whose classes score highly on standardized tests can increase a student’s lifetime earnings by $50,000. This statistic, regardless of the veracity of the actual figure (though it does cover tax returns of more than 2.5 million students, and thus would seem to be accurate), demonstrates what Chetty and his colleagues call the “value-added” educational model. Under this model — and under my own model, the “common sense” model — a wonderful teacher can make an extremely significant contribution to a child’s life. Teach For America, which provides our country with competent and committed teachers and leaders, should be lauded.


12 Comments

  • Really? You have got to be kidding me! TFA is good at one thing: teaching to the test. And that’s about it. They lack the experience to be considered “great” educators. These are long-term temp jobs until they get their next job at a state DOE or some other corporate education reform “non-profit” company. John, you are young, so I will forgive you your ignorance on this matter. These public school teachers you devalue are put through hell on a daily basis in these schools with unnecessary curriculums and accountability. If these teachers are “difficult” it’s because they are next in line to hang from the noose the corporate education reformers have implemented in the system.

    You wrote “I have no experience with TFA, directly or indirectly. I have never even met someone who has taught through TFA.” That was my first question when I started reading this. Because I didn’t think Charter School of Wilmington had TFA on staff there. When TFA teachers are held to the same standards as public school district teachers, we can have this conversation. All you are doing now is pushing an agenda that has no bearing or meat to it.

  • Fire first, ask questions later? Comments on Recent Teacher Effectiveness Studies

    Posted on January 7, 2012


    Please also see follow-up discussion here: https://schoolfinance101.wordpress.com/2012/01/19/follow-up-on-fire-first-ask-questions-later/

    Yesterday was a big day for big new studies on teacher evaluation. First, there was the New York Times report on the new study by Chetty, Friedman and Rockoff. Second, there was the release of the second part of the Gates Foundation’s Measures of Effective Teaching project.

    There’s still much to digest. But here’s my first shot, based on first impressions of these two studies (with very little attention to the Gates study).

    The second – Gates MET study – didn’t have a whole lot of punchline to it, but rather spent a great deal of time exploring alternative approaches to teacher evaluation and the correlates of those approaches to a) each other and b) measured student outcome gains. The headline that emerged from that study, in the Washington Post and in brief radio blurbs, was that teachers ought to be evaluated by multiple methods and should certainly be evaluated more than once a year or every few years with a single observation. That’s certainly a reasonable headline and reasonable set of assertions. Though, in reality, after reading the full study, I’m not convinced that the study validates the usefulness of the alternative evaluation methods other than that they are marginally correlated with one another and to some extent with student achievement gains, or that the study tells us much if anything about what schools should do with the evaluation information to improve instruction and teaching effectiveness. I have a few (really just one for now) nitpicky concerns regarding the presentation of this study which I will address at the end of this post.

    The BIG STUDY of the day… with BIG findings … at least in terms of news headline fodder, was the Chetty, Friedman & Rockoff (CFR) study. For this study, the authors compile a massive freakin’ data set for tech-data-statistics geeks to salivate over. The authors used data back to the early 1990s on children in a large urban school district, including a subset of children for whom the authors could gather annual testing data on math and language arts assessments. Yes, the tests changed at different points between 1991 and 2009, and the authors attempt to deal with this by standardizing yearly scores (likely a partial fix at best). The authors use these data to retrospectively estimate value-added scores for those (limited) cases where teachers could be matched to intact classrooms of kids (this would seem to be a relatively small share of teachers in the early years of the data, increasing over time… but still limited to grades 3 to 8 math & language arts). Some available measures of student characteristics also varied over time. The authors take care to include in their value-added model the full extent of available student characteristics (but remove some later) and also include classroom level factors to try to tease out teacher effects. Those who’ve read my previous posts understand that this is important though quite likely insufficient!
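
    For readers who want a concrete picture of what “estimate value-added scores” means in practice, here is a minimal sketch of the general form of such a regression. To be clear, this is not CFR’s actual specification (they use lagged scores, shrinkage of estimates, and a far richer set of controls); the column names and the bare fixed-effects setup below are purely illustrative assumptions.

    ```python
    # Hypothetical sketch of a value-added regression of the general kind
    # described above. Column names and the simple fixed-effects setup are
    # illustrative only -- not CFR's actual model.
    import pandas as pd
    import statsmodels.formula.api as smf

    def estimate_value_added(df: pd.DataFrame) -> pd.Series:
        """df: one row per student-year with columns score, prior_score,
        year, class_id, teacher_id, free_lunch, special_ed."""
        df = df.copy()

        # 1. Standardize scores within each year, since the tests changed over time.
        for col, zcol in [("score", "z_score"), ("prior_score", "z_prior")]:
            df[zcol] = df.groupby("year")[col].transform(
                lambda s: (s - s.mean()) / s.std()
            )

        # 2. Add a classroom-level control (peer composition).
        df["class_mean_prior"] = df.groupby("class_id")["z_prior"].transform("mean")

        # 3. Regress current score on prior score, student traits, classroom
        #    traits, and a teacher fixed effect. The teacher coefficients are
        #    naive (unshrunken) value-added estimates.
        model = smf.ols(
            "z_score ~ z_prior + free_lunch + special_ed + class_mean_prior"
            " + C(teacher_id)",
            data=df,
        ).fit()
        return model.params.filter(like="C(teacher_id)")
    ```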

    The next big step the authors take is to use IRS tax record data of various types and link it to the student data. IRS data are used to identify earnings, to identify numbers and timing of dependent children (e.g., did an individual 20 years of age claim a 4-year-old dependent?) and to identify college enrollment. Let’s be clear about what these measures are, though. The authors use reported earnings data for individuals in years following when they would have likely completed college (excluding incomes over $100k). The authors determine college attendance from tax records (actually from records filed by colleges/universities) on whether individuals paid tuition or received scholarships. This is a proxy measure – not a direct one. The authors use data on reported dependents & the birth date of the female reporting those dependents to create a proxy for whether the female gave birth as a teenager.[1] Again, a proxy, not a direct measure. More later on this one.
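
    To make the “proxy” point concrete, here is a rough sketch of how a teen-birth flag of this sort could be constructed from dependent records. The column names are hypothetical, and the paper’s actual procedure (quoted in footnote [1] below) relies on SSA birth and death records rather than anything this simple.

    ```python
    # Illustrative construction of a teen-birth proxy: flag a female filer if
    # any dependent she ever claims was born while she was (roughly) 13-19.
    # Column names are hypothetical; this is not the paper's exact procedure.
    import pandas as pd

    def teen_birth_proxy(dependents: pd.DataFrame) -> pd.Series:
        """dependents: one row per (filer_id, dependent) with datetime columns
        filer_birth_date and dependent_birth_date."""
        # Mother's approximate age as of 12/31 of the year the dependent was born.
        age_at_birth = (
            dependents["dependent_birth_date"].dt.year
            - dependents["filer_birth_date"].dt.year
        )
        flagged = dependents.assign(teen_birth=age_at_birth.between(13, 19))
        # A filer is flagged if *any* claimed dependent implies a teen birth.
        return flagged.groupby("filer_id")["teen_birth"].any()
    ```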

    Tax data are also used to identify parent characteristics. All of these tax data are matched to student data by applying a thoroughly-documented algorithm based on names, birth dates, etc. to match the IRS filing records to school records (see their Appendix A).

    And in the end, after 1) constructing this massive data set[2], 2) retrospectively estimating value-added scores for teachers and 3) determining the extent to which these value added scores are related to other stuff, the authors find…. well… that they are.

    The authors find that teacher value added scores in their historical data set vary. No surprise. And they find that those variations are correlated to some extent with “other stuff” including income later in life and having reported dependents for females at a young age. There’s plenty more.

    These are interesting findings. It’s a really cool academic study. It’s a freakin’ amazing data set! But these findings cannot be immediately translated into what the headlines have suggested – that immediate use of value-added metrics to reshape the teacher workforce can lift the economy, and increase wages across the board! The headlines and media spin have been dreadfully overstated and deceptive. Other headlines and editorial commentary have been simply ignorant and irresponsible. (No, Mr. Moran, this one study did not, does not, cannot negate the vast array of concerns that have been raised about using value-added estimates as blunt, heavily weighted instruments in personnel policy in school systems.)

    My 2 Big Points

    First and perhaps most importantly, just because teacher VA scores in a massive data set show variance does not mean that we can identify with any level of precision or accuracy, which individual teachers (plucking single points from a massive scatterplot) are “good” and which are “bad.” Therein exists one of the major fallacies of moving from large scale econometric analysis to micro level human resource management.

    Second, much of the spin has been on the implications of this study for immediate personnel actions. Here, two of the authors of the study bear some responsibility for feeding the media misguided interpretations. As one of the study’s authors noted:

    “The message is to fire people sooner rather than later,” Professor Friedman said. (NY Times)

    This statement is not justified from what this study actually tested/evaluated and ultimately found. Why? Because this study did not test whether adopting a sweeping policy of statistically based “teacher deselection” would actually lead to increased likelihood of students going to college (a half of one percent increase) or increased lifelong earnings. Rather, this study showed retrospectively that students who happened to be in classrooms that gained more seemed to have a slightly higher likelihood of going to college and slightly higher annual earnings. From that finding, the authors extrapolate that if we were to simply replace bad teachers with average ones, the lifetime earnings of a classroom full of students would increase by $266k in 2010 dollars. This extrapolation may inform policy or future research, but should not be viewed as an absolute determinant of best immediate policy action.

    This statement is equally unjustified:

    Professor Chetty acknowledged, “Of course there are going to be mistakes — teachers who get fired who do not deserve to get fired.” But he said that using value-added scores would lead to fewer mistakes, not more. (NY Times)

    It is unjustified because the measurement of “fewer mistakes” is not compared against a legitimate, established counterfactual – an actual alternative policy. Fewer mistakes than by what method? Is Chetty arguing that if you measure teacher performance by value-added and then dismiss on the basis of low value-added, you will have selected on the basis of value-added? Really? No kidding! That is, you will have dumped more low value-added teachers than you would have (since you selected on that basis) if you had randomly dumped teachers. That’s not a particularly useful insight if the value-added measures weren’t a good indicator of true teacher effectiveness to begin with. And we don’t know, from this study, if other measures of teacher effectiveness might have been equally correlated with reduced pregnancy, college attendance or earnings.

    These two quotes by authors of the study were unnecessary and inappropriate. Perhaps it’s just how NYT spun it… or simply what the reporter latched on to. I’ve been there. But these quotes in my view undermine a study that has a lot of interesting stuff and cool data embedded within.

    These quotes are unfortunately illustrative of the most egregiously simpleminded, technocratic, dehumanizing and disturbing thinking about how to “fix” teacher quality.

    Laundry list of other stuff…

    Now on to my laundry list of what this new study adds and what it doesn’t add to what we presently know about the usefulness of value-added measures for guiding personnel policies in education systems. In other words, which, if any, of my previous concerns are resolved by these new findings.

    Issue #1: Isolating Teacher Effect from “other” classroom effects (removing “bias”)

    The authors do provide some additional useful tests for determining the extent to which bias resulting from the non-random sorting of kids across classrooms might affect teacher ratings. In my view the most compelling additional test involves evaluating the value-added changes that result from teacher moves across classrooms and schools. The authors also take advantage of their linked economic data on parents from tax returns to check for bias. And in their data set, comparing the results of these tests with other tests which involve using lagged scores (Rothstein’s falsification test), the authors appear to find some evidence of bias but, in their view, not enough to compromise the teacher ratings. I’m not yet fully convinced, but I’ve got a lot more digging to do. (I find Figure 3, p. 63 quite interesting.)

    But more importantly, this finding is limited to the data and underlying assessments used by these authors in this analysis in whatever school system was used for the analysis. To their credit, the authors provide not only guidance, but great detail (and share their Stata code) for others to replicate their bias checks on other value added models/results in other contexts.

    All of this stuff about bias is really about isolating the teacher effect from the classroom effect and doing so by linking teachers (a classroom level variable) to student assessment data with all of the underlying issues of those data (the test scaling, equating moves from x to x+10 on one test to another, and on one region of the scale on one test to another region of the scale on the same test).

    Howard Wainer explains the heroic assumptions necessary to assert a causal effect of teachers on student assessment gains here: http://www.njspotlight.com/ets_video2/

    When it comes to linking the teacher value-added estimates to lifelong outcomes like student earnings, or teen pregnancy, the inability to fully isolate teacher effect from classroom effect could mean that this study shows little more than the fact that students clustered in classrooms which do well over time eventually end up less likely to have dependents while in their teens, more likely to go to college (.5%) and earn a few more dollars per week.[3]

    These are (or may be) shockingly unsurprising findings.

    Issue #2. Small Share of Teachers that Can Be Rated

    This study does nothing to address the fact that relatively small shares of teachers can be assigned value-added scores. This study, like others, merely uses what it can – those teachers in grades 3 to 8 that can be attached to student test scores in math and language arts. More here.

    Issue #3: Policy implications/spin from media assume an endless supply of better teachers?

    This study, like others, makes assertions about how great it would all turn out – how many fewer teen girls would get pregnant, how much more money everyone would earn, if we could simply replace all of those bad teachers with average ones, or average ones with really good ones. But, as I noted above, these assertions are all contingent on an endless supply of “better” teachers standing in line to take those jobs. And this assertion is contingent upon there being no adverse effect on teacher supply quality if we were to all of a sudden implement mass deselection policies. The authors did not, nor can they in this analysis, address these complexities. I discuss deselection arguments in more detail in this previous post.

    A few final comments on Exaggerations/Manipulations/Clarifications

    I’ll close with a few things I found particularly annoying:

    Use of super-multiplicative-aggregation to achieve a number that seems really, really freakin’ important (like it could save the economy!).

    One of the big quotes in the New York Times article is that “Replacing a poor teacher with an average one would raise a single classroom’s lifetime earnings by about $266,000, the economists estimate.” This comes straight from the research paper. BUT… let’s break that down. It’s a whole classroom of kids. Let’s say… for rounding purposes, 26.6 kids if this is a large urban district like NYC. Let’s say we’re talking about earnings careers from age 25 to 65 or about 40 years. So, 266,000/26.6 = $10,000 in additional lifetime earnings per individual. Hmmm… no longer catchy headline stuff. Now, per year? 10,000/40 = 250. Yep, about $250 per year (in constant 2010 [I believe] dollars, which does mean it’s a higher nominal total over time, as the value of the dollar declines with inflation). And that is about what the NYT graph shows: http://www.nytimes.com/interactive/2012/01/06/us/benefits-of-good-teachers.html?ref=education
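
    Written out as a quick calculation (same figures as above; the class size and career length are the same rough assumptions used for rounding):

    ```python
    # Back-of-the-envelope decomposition of the headline $266,000 figure.
    classroom_gain = 266_000   # estimated lifetime gain per classroom, 2010 dollars
    class_size = 26.6          # assumed class size (chosen for easy rounding)
    career_years = 40          # roughly ages 25 to 65

    per_student = classroom_gain / class_size   # ~$10,000 in lifetime earnings per student
    per_year = per_student / career_years       # ~$250 per year

    print(f"per student, lifetime: ${per_student:,.0f}")
    print(f"per student, per year: ${per_year:,.0f}")
    ```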

    The super-elastic, super-extra-stretchy Y axis

    Yeah… the NYT graph shows an increase of annual income from about $20,750 to $21,000. But, they do the usual news reporting strategy of having the Y axis go only from $20,250 to $21,250… so the $250 increase looks like a big jump upward. That said, the authors’ own Figure 6 in the working paper does much the same!
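
    A toy illustration of the stretchy-axis effect (the dollar figures approximate the NYT graphic; everything else here is just for demonstration):

    ```python
    # The same $250 gap plotted on a truncated y-axis and on a zero-based y-axis.
    import matplotlib.pyplot as plt

    labels = ["lower-VA classroom", "higher-VA classroom"]
    incomes = [20_750, 21_000]   # approximate values from the NYT graphic

    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
    ax1.bar(labels, incomes)
    ax1.set_ylim(20_250, 21_250)   # truncated axis: the $250 gap looks huge
    ax1.set_title("Truncated y-axis")
    ax2.bar(labels, incomes)
    ax2.set_ylim(0, 22_000)        # zero-based axis: the gap nearly vanishes
    ax2.set_title("Zero-based y-axis")
    plt.tight_layout()
    plt.show()
    ```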

    Discussion/presentation of “proxy” measure as true measure (by way of convenient language use)

    Many have pounced on the finding that having higher value added teachers reduces teen pregnancy and many have asked – okay… how did they get the data to show that? I explained above that they used a proxy measure based on the age of the female filer and the existence of dependents. It’s a proxy and likely an imperfect one. But pretty clever. That said, in my view I’d rather that the authors say throughout “reported dependents at a young age” (or specific age) rather than “teen pregnancy.” While clever, and likely useful, it seems a bit of a stretch, and more accurate language would avoid the confusion. But again, that doesn’t generate headlines.

    Gates study gaming of stability correlations

    I’ve spent my time here on the CFR paper and pretty much ignored the Gates study. It didn’t have those really catchy findings or big headlines. And that’s actually a good thing. I did find one thing in the Gates study that irked me (I may find more on further reading). In a section starting on page 39 the report acknowledges that a common concern about using value-added models to rate teachers is the year-to-year volatility of the effectiveness ratings. That volatility is often displayed with correlations between teachers’ scores in one year and the same teachers’ scores the next year, or across different sections of classes in the same year. Typically these correlations have fallen between .15 and .5 (.2 and .48 in the previous MET study). These low correlations mean that it’s hard to pin down from year to year who really is a high or low value added teacher. The previous MET report made a big deal of identifying the “persistent effect” of teachers, an attempt to ignore the noise (something which in practical terms can’t be ignored), and they were called out by Jesse Rothstein in this critique: http://nepc.colorado.edu/thinktank/review-learning-about-teaching

    The current report doesn’t focus as much on the value-added metrics, but this one section goes to yet another length to boost the correlation and argue that value-added metrics are more stable and useful than they likely are. In this case, the authors propose that instead of looking at the year-to-year correlations between these annually noisy measures, we should correlate any given year with the teacher’s career-long average, where that average is a supposed better representation of “true” effectiveness. But this is not an apples-to-apples comparison to the previous correlations, and is not a measure of “stability.” This is merely a statistical attempt to make one measure in the correlation more stable (not actually more “true,” just less noisy by aggregating and averaging over time), and inflate the correlation to make it seem more meaningful/useful. Don’t bother! For teachers with a relatively short track record in a given school, grade level and specific assignment, and schools with many such teachers, this statistical twist has little practical application, especially in the context of annual teacher evaluation and personnel decisions.
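
    A small simulation illustrates the point: correlating a single noisy year with a career-long average (which includes that year) mechanically inflates the correlation relative to a year-to-year comparison, without making the annual measure any more reliable. All parameters below are made up purely for illustration.

    ```python
    # Simulated annual VA scores: a stable "true" teacher effect plus large
    # year-to-year noise. Parameters are illustrative only.
    import numpy as np

    rng = np.random.default_rng(0)
    n_teachers, n_years = 5_000, 8
    true_va = rng.normal(0, 1, n_teachers)            # stable underlying effectiveness
    noise = rng.normal(0, 2, (n_teachers, n_years))   # noisy annual measurement
    annual_va = true_va[:, None] + noise

    year_to_year = np.corrcoef(annual_va[:, 0], annual_va[:, 1])[0, 1]
    career_avg = annual_va.mean(axis=1)               # includes year 0 itself
    year_to_career = np.corrcoef(annual_va[:, 0], career_avg)[0, 1]

    print(f"year-to-year correlation:           {year_to_year:.2f}")   # roughly 0.2 here
    print(f"year-to-career-average correlation: {year_to_career:.2f}")  # noticeably higher
    ```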

    [1] “We first identify all women who claim a dependent when filing their taxes at any point before the end of the sample in tax year 2010. We observe dates of birth and death for all dependents and tax filers until the end of 2010 as recorded by the Social Security Administration. We use this information to identify women who ever claim a dependent who was born while the mother was a teenager (between the ages of 13 and 19 as of 12/31 the year the child was born).”

    [2] There are 974,686 unique students in our analysis dataset; on average, each student has 6.14 subject-school year observations.

    [3] Note that the authors actually remove their student level demographic characteristics in the value-added model in which they associate teacher effect with student earnings. The authors note: “When estimating the impacts of teacher VA on adult outcomes using (9), we omit the student-level controls Xigt.” (p. 22) Tables in appendices do suggest that these student level covariates may not have made much difference. But, this may be evidence that the student level covariates themselves were too blunt to capture real variation across students.
    22 Responses to “Fire first, ask questions later? Comments on Recent Teacher Effectiveness Studies”

    Advancing the Teaching Profession

    January 7, 2012

    Thanks for the good work on these pieces. You did a really nice job of laying out the issues. My first read of the NYT article, as well as other articles that appeared around the country, was to take some things at face value. I need to delve in a little deeper. Your blog post set the record straight on some things. Thanks for sharing and putting such thoughtful effort into this over the past few days.

    Bob Ryshke

    Tyrone Bynoe

    January 7, 2012

    Bruce,

    Your elaboration regarding the actual small percentage of teachers in NJ that are actually eligible to be analyzed in a value-added context is amazing. Since real state school assessments are limited to just a few grades and a few certification areas, I wonder how states will validate their student growth models of teacher effectiveness. The actual measures of teacher effectiveness can only be applied to a few teachers. I inquire if this is in part one of the reasons why there is such an uproar in New York about the teacher and principal effectiveness model. It appears that the reach of this evaluation program far exceeds its grasp. Although this comment reacts to a very small section of your commentary — and the ensuing elaboration — it is constantly important to articulate since student growth models of teacher effectiveness may now have strong political support in legislative bodies.

    Leonie Haimson

    January 7, 2012

    thank you Bruce! what does the Chetty paper say, if anything, about the volatility of teacher value-added ratings?

    Cedar Riener

    January 7, 2012

    Great stuff. Always glad to see my first impressions confirmed by someone who actually knows their stuff. I was amazed as I started to read the paper at how brazen the speculation is. Without even bothering to consider the real world implications of firing lots of teachers, they say:

    We estimate that replacing a teacher whose true VA is in the bottom 5 percent with an average teacher would increase students’ lifetime income by $267,000 per classroom taught. However, because VA is estimated with noise, the gains from deselecting teachers based on estimated VA are smaller. The gains from deselecting the bottom 5% of teachers are approximately $135,000 based on one year of data and $190,000 based on three years of data

    But yet have no problem quoting the true VA in popular press reports.

    They have no problem saying in the text that firing would be more cost effective than bonuses, because high VA teachers probably would stay anyway without the bonus, but relegate the following to footnotes:

    There are also other important concerns about VA besides the two we focus on in this paper. For instance, the signal in value-added measures may be degraded by behavioral responses such as teaching to the test if high-stakes incentives are put in place (Barlevy and Neal 2012)

    and

    In the long run, higher salaries could attract more high VA teachers to the teaching profession, a potentially important benefit that we do not account for in our calculation

    There just doesn’t seem to me to be a clear line between, “Hey I am totally free to speculate on this” and “Well, that wasn’t covered by our calculations.”

    One last point with a question: They mention in another footnote that

    Even in our sample, we find that the top 2% of teachers ranked by VA have patterns of test score gains that are consistent with test manipulation based on the proxy developed by Jacob and Levitt (2003). Correspondingly, these high VA outlier teachers also have much smaller long-term impacts than one would predict based on their VA

    In other words: it looked like some of the top 2% VA teachers were cheating, and therefore their students didn’t do as well as they should have. But yet, this is even before high stakes testing, right? These teachers are cheating on low-stakes tests? And this isn’t news?

    Anyways, thanks for your good work on this. Interesting stuff and looking forward to reading more.

    el

    January 8, 2012

    There’s an assumption that the only difference between classrooms is the teacher. But the other students in the classroom matter, too. One or two unusually disruptive kids can drag down a class pretty significantly; if you had more than that, it would be very difficult for the teacher to keep the other kids progressing (not to mention that those disruptive kids themselves will bring the scores down, and we’re only counting scores). If you have a class where all the kids are easily focused and stay on task and interested in doing well, that class is going to have more success – and it probably doesn’t matter which exact teacher they have within a wide range – short of someone like Dolores Umbridge.

    Jersey Jazzman

    January 8, 2012

    As always, Bruce, thank you for your insights.

    One of the great concerns I have about this embrace of VAM is how it will affect both the current teaching corps’s and future teachers’ perceptions of their jobs and their career status. Chetty and Friedman seem to casually dismiss concerns teachers have about using measures that they themselves admit are prone to error: “Well, sure we’ll fire some good teachers, but c’est la vie!” Who wants a career like that? And who wants to be subject to such arbitrary scrutiny when the teacher down the hall – who teaches music, or history, or kindergarten – ISN’T subject to the same capricious measures?

    I’m also concerned this policy will continue to over-emphasize the role of standardized testing in education. Is this good for our students, or our economy? Higher test scores may correlate to slightly higher salaries early in one’s career – do they correlate to higher economic productivity for the US? Are we as a nation better off by changing our focus to passing bubble tests?

    One question: I’m slogging through the study now; Chetty and Friedman say the OLDEST age for which they could link wages with teacher assignment is 28. Am I right to assume they are using a regression analysis to project earnings through a lifetime?

    If so: isn’t this like the “three good teachers” argument? Which was an estimation, but not actually based on any true experimental or quasi-experimental treatment?

    Intuitively, lifetime earnings are determined by a host of variables. Are we really prepared to say your 3rd Grade teacher (how many of you can name your 3rd Grade teacher?) has a significant impact on your earnings when you’re 50? And we should embrace admittedly firing good teachers on this basis?

    Eliot Long

    January 8, 2012

    With such a large data set, the authors could present one or more suggested VA cut-offs used to “fire teachers sooner than later,” identify which teachers would have been fired early in the study and then (because clearly they were not fired) evaluate their longer-term effectiveness and the frequency of “mistakes.” This is a golden opportunity to evaluate interpretations made by the authors.

    chungatest

    January 9, 2012

    Isn’t the idea of test scores leading to certain life outcomes somewhat of a self-fulfilling prophecy? Whether such scores have any meaning on their own, the fact is we attach meaning to them which imbues them with even greater significance. Isn’t this likely to be a causal factor in the study reported in the NY Times?

    chungatest

    January 9, 2012

    Jim kinda captures what I’m getting at in my prior comment: http://www.schoolsmatter.info/2012/01/high-test-score-make-happy-people.html

    Stuart Elliott

    January 10, 2012

    Some additional perspective on the $267,000 figure for removing the bottom 5 percent of teachers: Lifetime earnings of $522,000 per student mean that $10,000 is an increase of about 2 percent. This is one-third the average 6 percent increase that occurs with each additional year of schooling. But then consider that this is a policy that would directly affect the students of only 5 percent of the teachers, so the systemwide average impact would be an increase in lifetime earnings of only 5 percent of that 2 percent increase, or 0.1 percent. This corresponds, roughly, to the increase that would result from an additional 3 days of school per year. (Of course, this ignores the other caveats above, which would further reduce the estimate.)
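
    Working through that arithmetic (figures as stated in the comment above; the 180-day school year is an added assumption):

    ```python
    # Check of the systemwide-impact arithmetic in the comment above.
    lifetime_earnings = 522_000
    per_student_gain = 10_000
    gain_pct = per_student_gain / lifetime_earnings     # ~2% of lifetime earnings

    schooling_year_return = 0.06   # stated return to one additional year of schooling
    affected_share = 0.05          # only students of the bottom 5% of teachers are affected

    systemwide_pct = gain_pct * affected_share                   # ~0.1% systemwide
    extra_days = systemwide_pct / schooling_year_return * 180    # assumes a 180-day school year

    print(f"per-student gain: {gain_pct:.1%}")
    print(f"systemwide gain:  {systemwide_pct:.2%}")
    print(f"about {extra_days:.0f} extra school days per year")
    ```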


  • TFA charges hefty fees to hiring districts above and beyond standard teacher pay and benefits. Back in 2009 and 2010, six TFA corps members cost Red Clay $120K: $10,000 per corps member per year. Ridiculous. This is on top of salary and benefits. How can this be justified?

  • NOR. You have no experience with TFA directly NOR indirectly. As a graduate of Wilmington Charter and as a current UVA student, you should know proper grammar.

  • Mr Connolly,
    You write for your college newspaper but managed to break the #1 NO-NO rule for aspiring journalists: Don’t bury the “lede.” In your case, your “lede” was “I have NO experience with TFA, directly or indirectly. I have never even MET someone who has taught through TFA. YET I feel comfortable throwing MY support behind this organization.”
    This statement is such a head scratcher to your reader that, combined with your most cursory use of the junk-science extrapolation of the Chetty et al. findings (which you even support bizarrely as “This statistic, REGARDLESS of the VERACITY of the actual figure”), it leads THIS reader to hope that in your future journalistic endeavors you take the time to go beyond Talking Points and do in-depth research before you attempt any more “opining.” Good luck with your studies.

  • Teach for America is an abysmal failure in all the major urban school districts. The successes you point to are in rural districts and are no better than rural districts without TFA. Good news, though: Harper Lee is coming out with a new novel to base your life on; Catcher in the Rye (with Cheever gone) and To Kill a Mockingbird were getting so over-read, so now there is hope. The nation’s schools are in poor shape and teachers’ unions are primarily to blame, but TFA and the Peace Corps are for liberals to pat themselves on the back; they are simply feel-good.

  • Amazing how vituperative these comments are – you can disagree with what is an “opinion” and provide your counterpoints but why so much anger? Is it because this nice young guy went to Wilmington Charter or something?

    There are an awful lot of other things to be concerned about in our schools besides TFA