This is a blog, not a thinktank. However, the occasional long article is not out of place, particularly if it explains why something is happening which has been puzzling many.
Parts 1-3 went into how the wrong people were weaseled into education by moneyed interests who themselves had a global agenda. A specific example was given of Benjamin Bloom, seen as an educational guru, an unassailable, if ponderous, authority, and excerpts were quoted which, both in his day and in ours, read as horrifying in their falsehood.
Yet students continued to be admitted to teacher-training institutions, where they came under the tutelage of staff who made Burgess, Philby, Maclean and Blunt’s professors seem like High Tories. An example is Eric Foner in the U.S.
Backing all these people, there had to be a literature to refer students to, away from “reactionary” literature which wanted to maintain trusted teaching methods. The only way to do this was through funded research [guess where the funding came from?], followed by publication in trusted journals, and for the educational establishment to embrace this research, placing it beyond scrutiny. It could be studied for its wisdom but never challenged. By the time it reached the PowerPoint presentations at in-service days, there were many different things going on, and a solid basis for the research was not always the main priority:
Attractiveness to students, teachers, parents, and other school system stakeholders can weigh heavily in research selections. So can public relations. For example, the desire of school leaders and board members to demonstrate “progressive leadership” often plays a contributory role. In short, the selection of research-based programs and innovations brought back from workshops and meetings may be substantially influenced by considerations other than evidence of effectiveness.
There were three ways to get such research past the guard:
1. It had to begin with truisms, in general form, which no one seriously challenged, and it had to chime in with the MSM-driven “public mood” of the day. In the 90s, this was for “progressive” education. So, for example, the concept that children learn best when they can discover for themselves is something which resonates with most adults. Did you learn better when a science teacher allowed you to do experiments and you saw the coloured liquid doing strange things on the Bunsen burner, or did you learn more in interminable lectures by dry professors?
So this truism was given experimental data in dull-coloured books [authoritative] to back it up, and these books became required reading. Along the way, the readers were told what the results indicated, and teachers, infused with enthusiasm over the new way, wrote books such as “Let Them Run A Little”, supporting open-plan classrooms where the children would no longer be taught but would find out for themselves. The book “Let Them Run A Little”, of course, was never titled “Let Them Run Wild”, “Savages Beyond Reason”, “Seeds of the New ASBOs” or “It’s Fine to have No Respect – Self is the Summum Bonum”.
2. The research methods had to be changed, skewed, to produce the results you wanted – for as much as you could direct a docile student, you couldn’t fool a peer review of your work from outside the establishment. So the research method needed to be feted as innovative, modern but based on sound educational practice. Just as the left of today does, buzzwords were employed to praise or to vilify.
3. The “research” was better when it tore down “tired old shibboleths” and uncaring, reactionary teaching where pedagogues shouted at you. People react better to debunkings than they do to theses that you’re trying to establish. It’s like the debunking of the debunkers in WTC7, whom I then redebunked. No one set up a new theory but tore down an old one or eight.
The article below looks at research and what has happened in the field, and though the data is old, it hasn’t significantly altered for the better and the figures are still roughly valid. The conclusions, if you’re with us after the first three parts of this series, are predictable, and together with the steady drip-drip of leftist ideologues into the positions where such “research” is conducted, the result has been the blighting of children for some decades now.
The results are self-evident, as Jefferson might say.
Research and Innovation: Let the Buyer Beware
Stone, J. E. & Clements, A. (1998). Research and innovation: Let the buyer beware. In Robert R. Spillane & Paul Regnier (Eds.). The superintendent of the future (pp.59-97). Gaithersburg, MD: Aspen Publishers.
J. E. Stone and Andrea Clements East Tennessee State University
Schools are inundated with research that promises to improve achievement. Yet when programs are implemented, results always seem to fall short. How can this be, in school after school, year after year? The answer depends on whom you ask.
Educational researchers allege that the problem is a lack of money for research. They say that research and development has little impact simply because there isn’t enough of it and they cite studies showing that funding is meager relative to the magnitude of the education enterprise.
Outside observers disagree. Some 35,000 professors of education at American colleges and universities devote an average 14% of their time to research–broadly defined. Their students conduct research too. Annually, more than 7,300 doctoral students in education write dissertations.
Myron Lieberman (1993) estimates the dollar value of the manpower dedicated to educational research by professors and doctoral students alone to be in excess of $700 million annually. Still other education research is authored by state departments of education, by nonprofit “think tanks,” by federal agencies, and by the regional educational research laboratories. Significantly, only a small percentage of published research is undertaken by schools or school systems.
The results of this scholarly activity are readily available to schools through a variety of sources. Thousands of books, professional and academic journals, newsletters, technical bulletins, and other published sources make research available to teachers and administrators. Many recent publications are available on the Internet.
A vast amount of material is indexed in the federally sponsored Education Resources Information Center (ERIC). ERIC includes a Current Index to Journals in Education and a microfiche library of mostly unpublished research called Research in Education.
Research in Education is available in education libraries throughout the United States. The amount of research available through these several sources is staggering, and most of it is directly or indirectly related to the problem of improving school achievement.
The idea of improving teaching through the application of science has been around since the earliest days of organized teacher training. John Dewey, for example, believed that the scientific study of child development would improve classroom instruction by suggesting ways in which teaching might be fitted to the learner (Dewey, 1916/1963). However, it was not until the 1960s that governmentally funded research began expanding to present-day levels. The Johnson administration’s “war on poverty” infused federal dollars into university research institutes and education laboratories on an unprecedented scale. Head Start (U.S. Department of Health and Human Services, 1985) and Follow Through (Proper & St. Pierre, 1980) are prime examples. Both were designed to improve the school success of disadvantaged children and they are among the largest educational research projects ever mounted. The Follow Through project alone cost nearly $1 billion.
Has the money and manpower spent on research been justified by improvements in schooling? If the findings reported in Education Week’s “Quality Counts” (Wolk, 1997) are any indication, the answer would have to be no. Despite the pressures for improvement created by reports such as the National Commission on Excellence in Education’s A Nation at Risk (1983), measured achievement has stayed essentially flat. The National Assessment of Educational Progress scores in math and science have risen only a few points on a 500-point scale since 1973 (U.S. Department of Education, 1996). Of course there are isolated examples of significant improvement, but the broad picture is that the schools are (in the words of “Quality Counts”) “treading water.”
Why So Little Impact?
If there is a significant amount of research–although arguably not enough–and the findings are widely available, why is there not at least a trend toward improved achievement? Again, researchers have an answer: Good research is available but schools fail to implement it. In other words, schools talk as though they adopt research-based innovations but at the classroom level they keep doing the same old thing (Cuban, 1993). There is more than a little truth to this claim. The innovative programs publicized by school administrators are not always translated into classroom practice. Teachers have a great deal of independence in the classroom and they are taught to fit their teaching style to students’ needs. Remaining with accustomed approaches is, indeed, the tendency if only for reasons of comfort and familiarity.
Another explanation offered by researchers is that schools don’t know good research when they see it. They are easily drawn to familiar practices supported by weak evidence. Unfamiliar practices supported by very credible evidence are often ignored. As discussed below, there is merit to this view. From the standpoint of science, experimental studies are far more convincing than descriptive and correlational ones, yet school personnel often ignore the stronger and adopt innovations suggested by the weaker. For example, during the 1960s and 1970s correlational studies suggesting self-esteem enhancement as a means to improved achievement led to sweeping changes in teacher training and schooling. Experimental findings to the contrary were ignored (Scheirer & Kraut, 1979): they showed that self-esteem and achievement are correlated mainly because achievement enhances self-esteem, not because self-esteem enhances achievement.
One other explanation popular with researchers is that institutional inertia warps and retards progress. Plainly this view also has merit. All organizations encourage some possibilities and restrict others. All are comfortable with certain ways of conducting themselves and uncomfortable with others. Teacher unions, for example, may resist changes that make teachers’ jobs more laborious. Administrative customs may resist changes that make jobs look too easy. Of course, community expectations, regulatory policy, and public oversight can all exert resistance to change.
In marked contrast to the views of researchers, schoolhouse “insiders” (i.e., teachers and administrators) say that research has little impact because much of it does not work in the real world. As they see it, schools are doing everything they can to implement the latest findings, but social and economic realities impose limits. Implementing research is like rebuilding a ship in the midst of a voyage. Staying afloat has to be the first consideration. Rebuilding during a storm is even more problematic. Schools can and do make the changes suggested by research, but circumstances can trump even the best-laid plans. Even with successful implementations, effects are obscured or nullified by factors such as limited resources, two-earner families, increased crime, teen pregnancy, drug abuse, gangs, television, and a host of other hindrances and adversities (Olson, 1997).
Despite the often limited benefit of research-based innovations, schools continue to adopt them–if only to keep up with the latest trends.
Which research and which innovations, however, often depends less on the quality of the findings than on the channel through which the research comes to the school’s attention. School personnel are frequently exposed to “the latest” research at workshops, professional meetings, and in-service training. Typically, the teachers, administrators, and board members who attend these meetings have a limited understanding of research and/or of the findings pertaining to the innovation in question. More often than not, presenters and programs for such meetings are selected not because their ideas are well grounded but because they have a stimulating presentation. In addition, audience interest is often spurred by a regulatory mandate or incentive funding, not a burning desire for improved student achievement.
Other pragmatic considerations play a role as well. For example, attractiveness to students, teachers, parents, and other school system stakeholders can weigh heavily in research selections. So can public relations. For example, the desire of school leaders and board members to demonstrate “progressive leadership” often plays a contributory role. In short, the selection of research-based programs and innovations brought back from workshops and meetings may be substantially influenced by considerations other than evidence of effectiveness.
The Restrictions Imposed by Doctrine
Another factor that influences decisions about research is educational philosophy.
The practice of injecting popular psychological theory into schooling–often without regard to effectiveness or applicability–has been a chronic problem in American education (Davis, 1943; Hilgard, 1939). Currently, a poorly recognized but longstanding educational doctrine called “developmentalism” (Hirsch, 1996; Stone, 1996) permeates the public schooling community. Developmentalism frames teaching and learning issues in a way that favors certain types of research and disregards others.
Developmentalism is a derivation of eighteenth-century romantic naturalism. The French philosopher Jean Jacques Rousseau (1712-1778) is the most influential of its early proponents. The works of John Dewey (1859-1952) and Jean Piaget (1896-1980), however, are more directly responsible for its present-day acceptance. Developmentalism is a view of age-related social, emotional, and cognitive change that presumes a child’s native tendencies to be a fragile expression of the individual’s natural and therefore optimal developmental trajectory (Stone, 1996). It conceives of education as a set of experiences that serves to facilitate and preserve that trajectory by fitting the educational experience to the individual.
Developmentalism contrasts sharply with the classic tradition in education and with the American tradition founded by the Puritans. Both sought to civilize and better the individual, not merely accommodate his or her emerging tendencies. Both classic tradition and the common school aimed to discipline natural impulses in service of a higher good. The significance of this philosophic issue as an impediment to effective schooling would be difficult to overstate. Most public schools seek achievement to the extent permitted by students’ natural inclinations. They are “learner centered.” Most parents and policy makers want schooling that impels achievement beyond that to which most students are inclined by their youthful proclivities (Steinberg, 1996). They are “learning centered.”
The dominance of learner-centered pedagogy is in no small part an accident of history. Progressivism–a social and philosophical offshoot of romantic naturalism–predominated in American intellectual circles in the late nineteenth century and early twentieth century. These were the years during which universal public education came to be public policy as well as the formative years of many teacher-training institutions. Accepted teaching practices of that day were often harsh and punitive; thus progressive methods were a welcome alternative. The premier teacher-training institution of the early twentieth century was Teachers College, Columbia University (Cremin, 1964). Its graduates led the development of other such programs around the country. Even today, the educational methodologies that prevail in the public education community are those that agree with the philosophic leanings of the Teachers College faculty of the early 1900s (Hirsch, 1996).
Developmentally informed pedagogy has come to dominate public schooling but without clear public recognition of its nature and its role. Over the past 75 years it has emerged and reemerged under a variety of names. In the 1920s it was called “progressive” and “child centered.” Today it is termed “reflective” and “learner centered” (Darling-Hammond, Griffin, & Wise, 1992). However termed, it has consistently maintained that teachers should seek to instruct only through activities that students find engaging and enjoyable. Thus, instead of employing the most enjoyable of the teaching methods that are known to result in learning, teachers have been trained first to seek activities that are enjoyable and engaging and then to use them in ways that will produce learning. As a result, good teaching has come to be thought of as teaching that is well received and that incidentally produces some degree of learning.
Uncertainty about learning outcomes was not considered a pedagogic weakness by progressive education’s founders. Neither John Dewey nor progressive education’s great popularizer, William Heard Kilpatrick, considered conventionally prescribed educational objectives to be the proper aim of schooling. Instead, both argued that schooling should seek the emergence of an individually defined and broadly conceived intellectual development. Dewey, in particular, wrote at length about the harm done by teacher insistence on externally defined aims (Dewey, 1916/1963). Viewed from the progressive/learner-centered perspective, research that seeks to demonstrate a teaching methodology’s ability to produce a preconceived learning outcome is inherently faulty and inconsistent with the proper aims of schooling.
Despite public repudiation in the 1950s, Dewey’s view remains the foundation of today’s cutting-edge innovations. It has spawned a remarkable array of educational terms and concepts, and they have been widely propagated by agencies and organizations such as the U.S. Office of Education, the state departments of education, teacher-training programs, accrediting agencies, professional and academic societies, and the like.
The education community seeks to improve schooling through the use of research, but learner-centered strictures guide the adoption process. The impression created by the vast assortment of current educational terms and concepts is one of abundant variety. In truth, however, most conform to the same progressive vision of education. As noted by E. D. Hirsch (1996), “within the educational community, there is currently no thinkable alternative” (italics in the original, p. 69). Recent permutations and derivatives include the following:
• lifelong learning
• developmentally appropriate instruction
• brain-based learning
• situated learning
• cooperative learning
• multiple intelligences
• multiaged instruction
• discovery learning
• portfolio assessment
• constructivism
• hands-on learning
• project method
• thematic learning
• integrated curriculum
• higher-order learning
• authentic assessment
• whole-language reading
How Learner-Centered Thinking Restricts Choices: The Case of the Follow Through Project
Learner-centered doctrine discourages the use of results-oriented research (Stone, 1996). Studies concerned with improving achievement typically test an intervention or treatment (i.e., an action taken by the researcher that is intended to produce change in the student). The success of the intervention is judged in reference to some predetermined expectation. In contrast to the goal of inducing results, the goal of developmentally informed research is to accommodate schooling to the individual and to do so in a way that achieves the ends to which the individual is inclined by nature, not those prescribed by the curriculum.
One of the clearest instances of results-oriented research rejected on learner-centered grounds comes from the Follow Through project (Proper & St. Pierre, 1980). Follow Through was a huge federally funded research project of the late 1960s and early 1970s. It was launched in 1967 by the Ninetieth Congress in response to President Johnson’s request to “follow through” on project Head Start. Improved achievement in the basic skills of disadvantaged students was its prime objective. It remains the largest educational experiment ever.
Nine educational models were compared in 51 school districts over a six-year period. Of the nine, all but two were learner centered; and contrary to the prevailing educational wisdom, the two exceptions significantly outperformed the field. Of greater significance, five of the seven learner-centered models produced worse results than the traditional school programs (i.e., the nontreated control groups) to which each Follow Through approach was compared. What makes the contrast especially striking is that the outcome measures included not only basic skills but “higher-order” cognitive skills and a measure of self esteem–the very sort of outcomes that learner-centered methods are intended to enhance.
The most successful of the nine models was Direct Instruction (Engelmann, Becker, Carnine, & Gersten, 1988), a structured and so-called teacher-centered approach. Despite its overwhelming success, Direct Instruction was disparaged and largely ignored by the education community (Watkins, 1988). A lengthy critique of Follow Through was published in Harvard Educational Review (House, Glass, McLean, & Walker, 1978), and the U.S. Department of Education’s National Diffusion Network–a bureaucratic agency responsible for disseminating only the “best” research–concluded that all nine programs were valid and all were recommended for further funding. In fact, added funding was given to the failed models on the grounds that they needed strengthening.
The Follow Through Direct Instruction findings are by no means the only research that has been ignored because it disagreed with the learner-centered view. Herbert Walberg (1990, 1992) summarized some 8,000 reports of demonstrably effective teaching methods. Like Direct Instruction, most were structured, teacher-directed, and designed to produce measurable gains in achievement. Most could be described as learning-centered instead of learner-centered. Many employed drill, recitation, and incentives for student effort. A review of research literature by Ellson (1986) found 75 studies of teaching methods that produced achievement gains at least twice as great as those of comparison groups. Many of them were popular at one time but none are learner-centered and none are in widespread use today.
The reception accorded Direct Instruction and other learning-centered research is important because it highlights a critical difference between the public’s educational objectives and those of the learner-centered schooling establishment. Public Agenda (Johnson & Immerwahr, 1994) and other public polling organizations have found that the public wants schools that produce conventionally measured academic achievement. The public is not opposed to the goals of learner-centered schooling, but it considers them secondary to conventional academic achievement. To the public, outcomes such as improved self-esteem are attractive, but schools that fail with respect to academic achievement are unacceptable no matter what else they may produce. The same priorities are embodied in state-level school accountability policies. They focus primarily on academic gains operationally defined by achievement tests. By contrast, learner-centered research gives equal priority to “intellectual growth,” enhanced self-esteem, and gains in knowledge and skills. If one or more of the three are produced, the research is taken to be informative and potentially valuable for school implementation.
At this point allow me to intervene and insert a comment or two. Any teacher trained in the 70s and 80s who has read this far will know that it attacks the gods Dewey and Piaget. The Follow Through findings will be dismissed as uncorroborated or even rightwingnut. Yet that research exists, it was quantifiable, and it’s not isolated. Why weren’t we shown this research? Why were we shown only the leftist Dewey and Piaget, in glowing terms?
This article shows that there was indeed research arguing for the retention of traditional methods, but in a better way. That research was ignored and, in terms of children’s development, I’m going to use the words “criminally ignored”. Not wanting to get religious here, but there’s a concept of blasphemy against the Holy Spirit, and it roughly means that when you know something is true or might be and you deliberately go against it, ignore it or pervert it, that’s a crime you never come back from.
I believe that when we look at students today in the U.S. and the UK, not to mention the Commonwealth countries, this famed British education begins to look like nothing that parents should be sending their kids over for and paying exorbitant fees for.
Why Researchers Remain Learner-Centered
Despite the ever-growing demand for improved achievement, neither researchers nor schools are able to break away from learner-centered thinking, and for several reasons. Both researchers and most school personnel are indoctrinated in learner-centered thinking, and powerful incentives encourage them to remain loyal to that point of view.
For researchers, funding is a prime incentive. Fund allocations are almost inevitably influenced by other educators, and most of them subscribe to learner-centered orthodoxy. Funding affords a researcher time to work, and to have a reasonable chance at funding, one’s proposal must appeal to the views of other educators.
For most researchers, funding is tied to institutional support. Most researchers are college faculty, and their primary responsibility is teaching. If a faculty member needs time to conduct a study, the institution must at a minimum relieve the individual from teaching. Ordinarily it will hire someone to teach in his or her place. Research grants provide the funding for the substitute instructor. If the researcher’s employer does not like a proposal, it may decide against released time. A proposal that appeals to the views of learner-centered administrators and colleagues is more likely to find support.
Grants also pay what are called “indirect costs” for the use of the institution’s facilities and other forms of overhead. These are additional funds that may amount to 50% or more of a research project’s direct costs for a substitute instructor, equipment, supplies, and so forth. The funds an institution receives for such costs are typically added to various administrative budgets, thus enabling substantial discretionary spending. College administrators consider faculty who generate big indirect cost contributions to be their most productive and deserving faculty. Grants are key to a faculty member’s career advancement at major institutions. Grants that are readily funded for big amounts (e.g., grants from state education agencies) are thus extremely attractive.
Second, there is the matter of publication. In order to advance their academic reputations, researchers must publish. Research that is not published is assumed to be of lesser quality, and rightly so. Research that is published in the most respected journals is stringently peer reviewed. Reviewers and editors do not rule out findings that are inconsistent with orthodoxy, but such reports inevitably receive much closer scrutiny and are thus less likely to be accepted. A record of successful publication also contributes mightily to a researcher’s chance of acquiring more funding.
Third, there is the matter of acceptance in the schools. The learner-centered view is more attractive to researchers because it is more easily marketed to the schools. Public school administrators typically have been trained in learner-centered thinking, thus such research has an intuitive appeal. That it may not produce intended results is a downside, but one that is frequently overlooked. School administrators are never fired or penalized because an innovative program fails. After all, how could an administrator be blamed for accepting the recommendations of scholar-experts who are supported by prestigious institutions? Because success is defined more in terms of funding than outcomes, appeal to decision makers is more important than demonstrated effectiveness. One need only observe the indicators of organizational advancement that are trumpeted in the media to verify the truth of this conclusion. Media releases talk about money and organizational expansion, not increased student learning.
The learner-centered view is comfortable to other stakeholders as well. Its convenience and vague expectations are significant considerations to teachers. In the learner-centered view, teachers are responsible for affording a quality educational experience, not the production of measurable academic outcomes. Learner-centered teachers consider outcomes to be governed by factors outside teacher control, thus the quality of teaching cannot be judged by results. Also, teachers find that learner-centered approaches are flexible and can be blended with existing practice without inconvenience and disruption. Factors of this sort make the task of adopting learner-centered practices simpler than, for example, implementing Direct Instruction–a methodology requiring more than the usual day or two of in-service training.
Learner-centered instruction also appeals to students. It seeks to accommodate them, not to shape them. By contrast, schooling that produces results typically requires a concerted student effort, and the time devoted to such an effort can infringe on more attractive pursuits (Steinberg, 1996). It should be noted, however, that students’ short-term satisfactions come at very substantial longer-term cost. Lost educational opportunity may result in permanently impaired career prospects–a delayed cost that students are unable to anticipate. Lost opportunities also cost taxpayers, both in failed human resource development and in the cost of remediation. Schooling that permits students to waste their own time and taxpayer-funded educational opportunity is an enormous but largely overlooked public disservice.
Recognizing Useful Research
Research that can add to the efficiency and effectiveness of public schooling is available, but school personnel must be able to recognize it. Otherwise, there is a very substantial chance that they will be drawn into adopting one of the many fads that dominate the educational landscape. Recognizing credible, useful studies requires an understanding of certain basics of research.
Both medicine and education rely on a scientific knowledge base. Medicine, however, relies on relatively mature and exact sciences such as physics, chemistry, and biology, whereas education relies on the far less mature social and behavioral sciences. These differences in quality of research and precision of measurement are reflected in the certainty and internal coherence of the knowledge base on which the two professions rely. Competing and contradictory findings are not uncommon in the behavioral sciences; thus the matter of determining which findings are credible, important, and applicable is a formidable challenge to the educational practitioner.
Given facts open to selective use and interpretation, educators frequently rely on knowledge that is equivocal or that may be contradicted by other evidence. Recognizing this condition, Anderson, Reder, and Simon (1995) offer the following caution:
[N]ew “theories” of education are introduced into schools every day (without labeling them as experiments) on the basis of their philosophical or common sense plausibility but without genuine empirical support. [Instead] we should make a larger place for responsible experimentation that draws on the available knowledge. It deserves at least as large a place as we now provide for faddish, unsystematic and unassessed informal “experiments” or educational “reforms.” We would advocate the creation of a “FEA,” an analogy to the FDA, which would require well designed clinical trials for every educational “drug” that is introduced into the market place. (p. 24)

Another limit on sound educational research is the inherent variability in human behavior. People think, feel, act, cooperate or don’t cooperate, and so forth. Unlike inanimate objects, their actions are influenced by a range of extraneous variables that limit the applicability of findings. Behavioral sciences such as psychology have evolved standards that enable meaningful research despite these uncertainties. Unfortunately, many studies ignore them, and consumers frequently fail to recognize the inevitable deficiencies and limitations. Thus it is not uncommon for educational administrators, grant writers, and program developers to stretch findings beyond their intended meaning or inadvertently to misrepresent results.
Quantitative versus Qualitative Research
Quantitative research includes both descriptive and explanatory studies. Descriptive studies are concerned only with establishing the existence of a phenomenon of interest–student achievement, for example. How much of it exists, where it exists, and what kinds of it exist are typical descriptive hypotheses. Explanatory studies are concerned with the causes of a phenomenon of interest. For example, does the use of Direct Instruction improve achievement? Technically stated, explanatory studies are concerned with the discovery of functional relationships (i.e., relationships in which the state of a given phenomenon is said to be a function of a preceding event or condition). Less technically said, explanatory studies are concerned with whether a given effect is the result of a particular cause. Causal relationships are examined in experiments and experiment-like studies called quasi-experiments. More is said about experiments below.
Descriptive studies address a wide range of topics. For example, a report of average test scores for students at different schools would be descriptive. So would a study of the number of words comprising recognition vocabulary of children at succeeding ages. Descriptive studies include a number of subtypes. For example, studies of characteristics such as preferred types of play or ability to perform certain intellectual tasks may entail observation of fresh samples of children at successive chronological age levels. Such studies are called “cross-sectional” descriptive research. Studies that examine the same characteristics but observe the same individual children over a period of years are called “longitudinal.”
Quantitative descriptive studies also include reports of correlational relationships between variables. An example of a correlational study would be one that describes the degree of relationship between family socioeconomic status and school achievement. Another example is hyperactivity’s relationship to junk food consumption. Correlational studies are among those most frequently misinterpreted by users of educational research.
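To see why correlational findings are so easy to misread, it helps to remember what a correlation coefficient actually measures. The sketch below (with entirely hypothetical data, assumed here purely for illustration) computes Pearson’s r for a made-up socioeconomic index against made-up test scores; a value near +1 shows a strong association, but nothing in the arithmetic says which variable, if either, causes the other.

```python
# Minimal sketch with hypothetical data: a strong correlation between two
# variables says nothing about which one, if either, causes the other.
def pearson_r(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical family socioeconomic index vs. test score for ten students.
ses = [20, 35, 40, 50, 55, 60, 70, 75, 85, 90]
scores = [48, 55, 52, 60, 64, 61, 70, 72, 78, 83]

r = pearson_r(ses, scores)
print(round(r, 2))  # close to +1: a strong positive association, not a cause
```

The same near-perfect r would appear whether socioeconomic status drove achievement, achievement somehow drove status, or a third factor drove both, which is precisely why such studies are so frequently misinterpreted.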
Despite its current unpopularity among educators, there is a great deal of high-quality quantitative research in education. It includes disquieting descriptive findings such as falling SAT scores and reports of low math and science achievement and similarly disquieting experimental results such as those of the Follow Through project. In the opinion of the authors, quantitative research’s unpopularity may well be related to its disagreeable results. Findings that affirm orthodoxy are clearly more popular.
Qualitative research in education is a growth industry. It is a type of research long used in fields such as cultural anthropology. Qualitative research relies on written description instead of objective measurement, and its findings are subject to all the vagaries associated with written descriptions of any kind. Rather than attempting to affirm hypotheses and make generalizations that are grounded in an agreed-upon objective framework, qualitative research is more concerned with description as subjectively perceived by an observer in context. Such descriptions are thought to be more honest and realistic than descriptions that purport to be objective and at arm’s length. It is a form of research premised on a postmodern, multiculturalist view of science. It argues that the objective understanding to which traditional science aspires is nothing more than an arbitrary Western convention–one educators should be free to reject.
By avoiding a focus on particular variables of interest, qualitative research presumably avoids the imposition of cultural bias. Of course such a process ignores the very information typically sought by the consumer. For example, a teacher’s question about whether one teaching method produces greater achievement than another would not be answered by a qualitative study. Qualitative studies do not “prove” or “disprove” anything. They can only describe. The validity of such studies is simply an open question (Krathwohl, 1993).
The vagueness of the methods used in qualitative studies invites observer bias. Observers are necessarily selective in their observations. For example, an observer who dislikes the punishment seen in a classroom may tend to note the negative emotional reactions of students more than would a disinterested observer. By contrast, a more impartial observer might give greater attention to the increased on-task behavior that may be effected by the use of punishment. Although there are ways to make such observations more reliable, they are far more subject to researcher bias than most quantitative reports.
Like qualitative research, action research has gained in popularity among educators. Wiersma (1995) describes it as research “conducted by teachers, administrators, or other educational professionals for solving a specific problem or for providing information for decision making at the local level” (p. 11). Action research is typically quantitative but less rigorous in design and methodology than conventional quantitative research. The following is a classroom level example: A teacher is having discipline problems during her fifth-period class. She arranges the desks differently and assesses whether the discipline problems are reduced. A written report of her investigation, including data, analysis, and a brief discussion, would be considered action research. Would such a finding be a sufficient basis for recommending that teachers employ rearranged desks as a means of treating discipline problems? In theory it would not. Practice, however, is another matter. Despite methodological weaknesses–in the present example, a single class sample and no control group–such findings are sometimes used to bolster proposals for new and innovative programs.
Pseudoresearch is a form of scholarly writing that appears to make factual claims based on evidence but, in fact, consists only of opinion founded on opinion. Previous studies are cited, but they contain only theory and opinion. Legitimate empirical reports traditionally present a review of literature that enables the reader to put new findings in context and to strengthen factual generalizations (Stanovich, 1996). However, previous studies containing only opinion do nothing to strengthen the report that cites them.
Commonsense educational claims are often supported by such “research.” For example, if an expert opines that schooling is improved by greater funding and if other experts cite and endorse that original claim, subsequent reports will contain what appears to be substantiation. If the claim seems plausible and thus goes unquestioned, it appears to gain acceptance as a fact without ever being tested. Such claims are said to be supported by “research” but it is “research” in the sense of a systematic review of relevant literature, not in the sense of studies that offer an empirical foundation for factual assertions.
Educational innovations that are consistent with popular educational doctrines are often supported by such research. The controversial but widely used whole-language reading instruction (discussed below), for example, goes unquestioned by most educators because it fits hand-in-glove with learner-centered pedagogy. It is supported primarily by favorable opinion among like-minded educators, not demonstrated experimental results.
A type of research that seems to produce empirical facts from opinion is a group-interaction process called the Delphi method (Eason, 1992; Strauss & Zeigler, 1975). However, instead of creating the appearance of empirically grounded fact from multiple reports of opinion (as does pseudoresearch), the Delphi method creates facts about opinion.
In Delphi research, the opinions of experts are collected and synthesized in a multistage, iterative process. For example, if a researcher sought to determine the future occupations open to high school graduates, he or she might consult a panel consisting of career counselors, former high school students, employers, and economists. The panelists would be asked to compose a list of prospective jobs, and they would each share their list with the other panelists. After viewing the lists of other panelists some members might choose to change their estimations, and their changes would then be shared with the other panelists in a second round of mutual review. Ideally, three or so rounds of sharing and realignment would produce a consensus. The “fact” resulting from such a study is that experts agree about the future availability of certain jobs, not that certain jobs have a high probability of being available.
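The iterative narrowing described above can be simulated in a few lines. In this sketch (all numbers hypothetical, and the “move halfway toward the group median” rule is a simplifying assumption, not a claim about any real Delphi protocol), five panelists’ estimates converge round by round. Note what the simulation demonstrates: the spread of opinion shrinks, but nothing makes the final consensus any more accurate than the opening guesses.

```python
import statistics

# Minimal Delphi-style sketch (hypothetical numbers): in each round,
# panelists see the group median and move part-way toward it.
def delphi_rounds(estimates, rounds=3, pull=0.5):
    history = [list(estimates)]
    for _ in range(rounds):
        med = statistics.median(estimates)
        estimates = [e + pull * (med - e) for e in estimates]
        history.append(list(estimates))
    return history

# Five hypothetical panelists estimating, say, future openings in a job field.
history = delphi_rounds([10, 40, 55, 70, 120])
spread_before = max(history[0]) - min(history[0])
spread_after = max(history[-1]) - min(history[-1])
print(spread_before, spread_after)  # the spread shrinks every round;
# agreement grows, but accuracy is never tested against anything real.
```

The shrinking spread is the whole output: the method manufactures agreement about opinion, which is exactly why its product is a fact *about* opinion rather than a fact about the world.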
A recent attempt to find effective institution-to-home “transition strategies” for disabled juvenile delinquents illustrates how a Delphi consensus can be confused with an empirically grounded conclusion. Following three rounds of surveys, Pollard, Pollard, and Meers (1994) concluded that the priorities identified by the panelists provided a “blueprint for successful transition” when, in fact, the surveys produced only a consensus about what may or may not prove to be a successful blueprint.
The Rand Corporation is credited with developing the Delphi technique as a means of distilling a consensus of expert opinion. Sackman (1974) has summarized its primary shortcomings. The expert status of panelists is not scientifically verifiable, and neither is the assumption that group opinion is superior to individual opinion.
One other confusion about the Delphi technique pertains to its use by the leader of a deliberative body. Delphi methodology can create the appearance of consensus where none exists–a problematic outcome of a deliberative process. Technically, the Delphi technique does not force a consensus; but as a practical matter, it is designed to produce a consensus and it puts substantial pressure on dissenters for conformity to the group. When employed by the leadership of a deliberative group, it can turn what should be an open and fair-minded exchange of views into a power struggle. Minority viewpoints can be isolated and marginalized. The result is more mindless conformity than reasoned agreement. The conclusions reached by committees and policy-making bodies can easily be distorted by Delphi methodology.
Experimental and Quasi-Experimental Research
Experiments are quantitative studies in which cause-effect relationships are tested (Campbell and Stanley, 1966). Quasi-experiments attempt the same but with certain limitations. Other studies may suggest or imply causal relationships, but their findings are far more ambiguous and subject to misinterpretation. Experiments are not foolproof, but they afford the best evidence science has to offer.
From a purely scientific standpoint experiments are important because they attempt to answer the primary question with which science is concerned: “What explains or accounts for the phenomenon under investigation?” All sciences aspire to this kind of understanding. They are valuable from a practical standpoint, too, because they address the question of whether a given program, teaching method, treatment, intervention, curriculum, and the like produces expected effects. Because schooling is intended as a means of making a difference in the lives of students, the armamentarium of professional educators should contain tools that are well tested and demonstrably effective. Ideally, they should also be convenient, cost-effective, and well received by students; but at a minimum, they must be effective. The critical importance of experimental evidence in establishing effectiveness is not well understood by educators, but it is just such an understanding that is at the heart of knowing which research is valuable and why.
The aim of science is said to be the explanation of natural phenomena. However, the term explanation itself requires a bit of explanation. As the term is used by scientists, explanation refers to cause-and-effect explanation. For example, a phenomenon such as achievement in school is said to be explained (or at least partially explained) if it can be shown that the presence or absence of achievement is functionally (i.e., causally) related to a preceding event or set of events termed a cause. A functional or causal relationship is initially stated in a tentative form called a hypothesis and is not considered a valid explanation until affirmed by evidence.
Experimental research is the business of collecting evidence that might support or disconfirm causal hypotheses. It entails the manipulation of a hypothesized cause for the purpose of inducing an expected effect. If a given effect (technically, a change in the “dependent variable”) follows alteration of the purported cause (technically, a change in the “independent variable”), the causal hypothesis is said to be supported. Other types of quantitative research and even qualitative research may be valuable in suggesting cause-effect hypotheses, but only experimental research can provide a direct test.
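The logic of manipulating an independent variable and observing the dependent variable can be illustrated with a toy simulation. Everything below is hypothetical and assumed for illustration: forty imaginary students, a made-up baseline score distribution, and a treatment effect of 5 points deliberately built into the data so we know what a correct experiment should recover.

```python
import random

random.seed(1)  # fixed seed so the hypothetical run is repeatable

def simulated_score(treated):
    base = random.gauss(70, 8)            # hypothetical individual variation
    return base + (5 if treated else 0)   # built-in treatment effect of 5

# Manipulate the independent variable: randomly assign 40 students
# to the treatment or control condition.
assignments = [True] * 20 + [False] * 20
random.shuffle(assignments)

# Measure the dependent variable (test scores) in each group.
t_scores = [simulated_score(True) for a in assignments if a]
c_scores = [simulated_score(False) for a in assignments if not a]

# The difference in group means estimates the causal effect.
diff = sum(t_scores) / len(t_scores) - sum(c_scores) / len(c_scores)
print(round(diff, 1))
```

Random assignment is what licenses the causal reading of the difference in means: with groups formed by chance, the treatment is the only systematic difference between them, so descriptive or correlational comparisons of pre-existing groups cannot substitute for it.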
A slight digression at this point, and yet not really a digression, concerns the techniques used by groups such as Common Purpose:
Well-formed outcomes: a term originating in neuro-linguistic programming for an outcome one wishes to achieve, one that meets certain conditions designed to avoid (1) unintended costs or consequences and (2) resistance to achieving the goal arising from internal conflicting feelings or thoughts about the outcome.
Internal and External Validity of Studies
Whether an empirical study is capable of demonstrating a causal relationship is one issue, but whether a given experiment was properly conducted is another. Moreover, even a properly conducted experiment may have limited applicability and usefulness in the “real world.” Whether the procedures used in an experiment permit valid findings is the matter of internal validity. Whether the findings of an experiment are generally applicable to the “real world” (i.e., applicable under conditions beyond those under which the study was conducted) is the matter of external validity.
A wide variety of technical considerations can adversely influence the internal validity of an experiment. For example, the manner in which subjects were assigned to treatment and comparison groups can profoundly affect the outcome of an otherwise well-designed experiment. Technical issues with respect to type of sampling and type of population sampled, for example, can greatly influence the external validity of a study. Accurate assessment of these and other technical details requires considerable expertise. Even well-informed investigators may overlook significant threats to the validity of an experiment. Cook and Campbell (1979) provide an authoritative discussion of the myriad factors that should be considered. Happily there are at least three considerations that a nonexpert can examine to assess the internal validity of a study: source, convergence, and replication.
Source. If a study is reported in a peer-reviewed scholarly journal, chances are good that it meets acceptable standards of internal and external validity. Peer review typically entails blind review of a manuscript by a panel of experts selected by an editor. Panelists are not given the author’s name and the author is not given the reviewers’ names. All criticisms and replies are exchanged through the editor. The most reputable and selective journals use this process. Reports reviewed only by an editor may be valid, but peer-reviewed scholarship is generally conceded to be the most credible. Again, the process is not foolproof, but it is the best science has to offer. Unpublished reports and reports that are not subject to editorial review–grant proposals and reports of funded research such as those included in ERIC’s Research in Education, for example–are of uncertain quality and should be treated as such.
Convergence. If a study’s findings are generally consistent with (i.e., they converge with) the findings of other investigations in an area of research, they are generally assumed credible (Stanovich, 1996). Any competent research report will include a review of relevant literature. Consistencies and discrepancies within the existing literature and between the report at hand and previous studies are analyzed and discussed. Articles called “reviews of literature” and “meta-analyses” are dedicated to citing and summarizing all of the findings relevant to a given topic or area of study. Although new and revolutionary findings are sometimes uncovered by a single study, competent observations of the same or similar phenomena usually result in similar findings. Most scientific advancements come as incremental additions to understanding, not breakthroughs.
Replication. Replications are repeats of an original study by another investigator using a fresh set of subjects. The credibility of a study that has been replicated is greatly enhanced. Findings that have been replicated are considered valid even if they do not converge with other reports in the same general area of investigation. Only a small percentage of studies in the behavioral sciences are replicated, however.
The Need for Both Experiments and Field Testing
Few experimental investigations are able to fully satisfy requirements for both internal and external validity in a single study. The controls, artificial conditions, and other constraints necessary to ensure internal validity tend to interfere with external validity. Conversely, unanticipated and uncontrolled events can confound or invalidate an otherwise well-conceived study that is conducted in a natural environment such as a school. Because of this inherent conflict, programs or interventions derived from experimental investigations should be field tested prior to implementation.
Field tests are trials of an experimentally supported finding in the classroom or clinic or other setting for which it is intended. Not infrequently they result in the discovery of limitations, cautions, and restrictions on the applicability of experimentally validated findings. Even findings that have been field tested elsewhere may lack local applicability because of peculiar local conditions. Thus, large-scale programs, in particular, should also be locally tested on a small scale in what is called a pilot study. Pilot studies are especially important when the implementation of research findings entails significant time and energy costs for school personnel or learning opportunity costs for students.
Part 5 will be in two days.