Occasional Paper No. 40
Factors Influencing Technology’s Effect on Student Achievement and a Caution About Reading the Research
Abigail Garthwait
College of Education & Human Development
University of Maine
5766 Shibles Hall
Orono, ME 04469-5766
December 2001
A publication of the College of
Education & Human Development at the University of Maine and the Penquis
Superintendents’ Association.
The
Occasional Paper Series is intended to provide educators and
policymakers in Maine with information that can assist them as they address the
complex problems confronting their communities, education systems, or
students. Papers are distributed
periodically as topics vital to educational improvement are addressed by faculty
and graduate students at the University of Maine. The opinions and information contained in the Occasional Paper
Series are the authors’ and do not necessarily represent those of the
University of Maine or the College of Education & Human Development.
The
Center for Research and Evaluation is a nonprofit
research unit within the College of Education & Human Development at the
University of Maine. Since 1985, the
Center has linked the College of Education & Human Development to Maine’s
schools, communities, and public agencies to more effectively address the
complex issues confronting educational systems in the state. To stimulate discussion and promote policy
developments, the Center designs and conducts qualitative and quantitative
research about school conditions and practices. It disseminates research findings through analytical reviews and
bulletins, and publishes original research in The Journal for Research in Rural Education and in a series of
occasional papers produced in conjunction with the Penquis Superintendents’
Association. The Center also provides
evaluation services, including fiscal, curricular, and administrative reviews.
The Center for Research and Evaluation is
funded by the University of Maine and through project grants. It is administered and staffed by social
science research and evaluation professionals in conjunction with College and
University faculty.
Copyright © 2001 by the Center for Research
and Evaluation. This paper may be
photocopied for individual use.
Center for Research & Evaluation
College of Education & Human Development
University of Maine
5766 Shibles Hall
Orono, ME 04469-5766
Phone 207-581-2493 • Fax 207-581-9510
Equal Opportunity Statement
In complying with the letter and
spirit of applicable laws and in pursuing its own goals of diversity, the
University of Maine System shall not discriminate on the grounds of race,
color, religion, sex, sexual orientation, national origin or citizenship
status, age, disability, or veterans status in employment, education, and all
other areas of the University. The
University provides reasonable accommodations to qualified individuals with
disabilities upon request. Questions
and complaints about discrimination in any area of the University should be
directed to the Office of Equal Opportunity, University of Maine, Room 101, 5754
North Stevens Hall, Orono, ME 04469-5754; (207) 581-1226 (voice and TDD).
A Member of the University of Maine System
Abstract
Many factors may contribute to the
influence of computers on learning: access to home computers, first language,
gender, and academic history, among others. However, only some factors can be
directly influenced by schools. This
paper identifies three key school-related factors that influence technology’s
effect on student achievement: instructional goals; the match between goals,
instructional strategies, and assessment tools; and staff development.
Introduction
Three
decades of research relating to the effects of computer technology in education
have ranged from experimental studies (control and treatment groups) to
quasi-experimental (pre and posttests) to meta-analyses. Researchers have posed
questions about a myriad of topics: time on task, anxiety, motivation, change
in test scores, collaboration, gender/socioeconomic discrepancies, and health
and safety habits. Other studies have employed qualitative measures to examine
whether technology can foster collaboration or serve as a constructivist tool.
Educational decision makers may be intimidated by the interrelated and
confounding factors of software, hardware, people, and context, but are still
accountable for reconciling the time and expense related to technology with the
educational benefits. Many factors may contribute to the influence of computers
on learning: access to home computers, first language, gender, and academic
history, among others (Edwards, 2001). However, only some factors can be
directly influenced by schools. This paper identifies three key school-related
factors that influence technology’s effect on student achievement:
instructional goals; the match between goals, instructional strategies, and
assessment tools; and staff development. It will also present a few of the
complex issues in the literature of the field: the crucial
importance of thoroughly reporting definitions and research results and the
difficulties in aggregating studies with differing ideological stances.
In
this paper technology and instructional technology will refer
interchangeably to computers, not items such as slide rules, overhead
projectors or graphing calculators, although these do fit within a broader
definition of technology. Assessment of student achievement is another term in
need of clarification. Some researchers (e.g., Mann, Shakeshaft, Becker, &
Kottkamp [1999]; Wenglinsky [1998]) define student achievement as a score on a
standardized test (NAEP and Stanford 9, respectively), while others call for a
broader view of assessment that includes both quantitative and qualitative
measures of learning. Seymour Papert (1993) illustrates the need for
multifaceted, comprehensive assessments with the example of a factory director
who gets a bonus for achieving the company’s goal of making 150 tons of
super-sized nails, though they happen to be nails that are too big for anyone to
find useful! He concludes, “Defining educational success by test scores is not
very different from counting nails made, rather than nails used” (p. 208).
Congruent with Papert’s view of assessment, Grant Wiggins and Jay McTighe
(1998) explicate a system in which performance assessment is matched with
instructional goals in Understanding by
Design. Tierney, Carter, and Desai (1991) were early proponents of
portfolio assessment, and their themes have been extended into the growing field
of electronic and digital portfolios. The intricacies inherent in just defining
these two terms (technology and assessment of student achievement)
reveal some of the difficulties facing administrators who must justify budget
decisions based on educational value.
Small-scale Research
Studies
using a small number of students may pave the way for better understanding of
educationally effective uses for computers. For example, Turner and Dipinto
(1992) noted a major unanticipated finding in a research project of 37
seventh-grade students: “With traditional written reports, students usually
make revisions only after the teacher has corrected their drafts. Using [tool
software], however, the students made an enormous number of spontaneous text
revisions” (p. 196). Similar results were found by Finkelman and McMunn (1995)
as they studied 19 sixth graders creating an electronic author study. The
students tended to revise their writings more often in the multimedia program.
Furthermore,
All of the students reported that they learned more
about their author and learned how to better organize their thoughts through
use of [the program]. All of the students followed the traditional steps in the
writing process: planning, prewriting, drafting, editing, revising and
publishing. Students reported that this project provided a stimulating learning
atmosphere, making the process more enjoyable. (p. 24)
If an
educational goal is to foster reflection and redrafting of written work
(Atwell, 1998; Calkins, 1994), then it follows that the computer, a tool which
facilitates motivation and ease of revision, matches the instructional goals.
Exciting
research results have been produced by participant observers on a small
classroom scale. Yasmin Kafai (1995) describes a project in a low socioeconomic
school in which 16 fourth graders used the computer language Logo to develop
fraction games for younger students. In Minds
in Play: Computer Game Design as a Context for Children’s Learning, Kafai
explores the epistemology of the students working in a constructionist design
environment. She notes, “the students improved significantly in their
understanding of fractions and flexibility in moving between different
representational modes” (p. 302). Kafai highlights two instructionally
insightful conclusions: “long-term involvement in the project was essential for
students’ learning” (p. 290) and by the “creation of a rich and complex
learning environment. . . . the nature of this learning culture represented the
complexities of the everyday world in which children learn” (p. 293). As
educational decision makers are formulating policy and educators are mapping
out instructional strategies and assessment tools, it is important that they
acknowledge the need for adequate time and a carefully structured classroom
climate.
The
American Institutes for Research reported their investigation of several
promising projects (Coley, 1997). Twenty-two fourth- and sixth-grade classes
(from seven urban districts) investigated civil rights. The researchers found
that fourth graders who had and used on-line access scored significantly higher
on two of nine learning measures; sixth graders scored significantly higher
on four. A 4-year study by the Office of Educational Research and Improvement
(U.S. Department of Education, as cited in Coley, 1997) that looked at
technology in constructivist classrooms found that five of the eight schools
had higher test scores than a comparison group. In this case, a wide range of
carefully targeted resources was easily available only by using computers. The
unit and the accompanying instructional strategies matched the information
literacy goals of the lesson.
While
it is beyond the scope of this paper to explore the connection between
motivation and achievement, it should be noted that a significant amount of
research at the classroom level has found positive results in the affective
realm. Beichner (1994) documented seventh graders who were so enthusiastic
about their work creating multimedia zoo kiosks that they often came to school
early, left late, and skipped lunch and study halls. Related affective
observations were noted by Riddle (1995) in his study of 18 fourth-grade
students. He noted one boy with chronic discipline problems in the regular
classroom. This student remained consistently on task and was reluctant to
leave the computer when the period was over. Riddle underscores the
motivational component: “all students said that they were proud of their work
and the majority credited this pride to the fact that they worked hard” (p.
22). While working towards instructional goals, educators want to use tools
which have been documented as motivational.
Repman,
Weller, and Lan (1993) investigated the variations in social context for 98
eighth graders working with a hypermedia-based unit on computer ethics. Their
study serves to illuminate the interrelationships between technology and
pedagogy. They noted a trade-off between the accomplishments of the gifted and
talented students and those of the nongifted and talented students. “When magnet
students worked in heterogeneous pairs, mean scores were approximately one
standard deviation lower than the scores for magnet individuals or homogeneous
pairs. At the same time, pairing of any kind improved the achievement of
non-magnet students” (p. 294). While the researchers cautioned against making
elaborate conclusions based on this study, they did see benefits in grouping
the nongifted and talented students for the hypermedia lesson.
Classroom-size
studies can assist in pointing out the complexities of integrating technology
and instruction. As part of their qualitative study, Lundeberg, Coballes-Vega,
Standiford, Langer, and Dibble (1997) posed the question, “Are students
constructing knowledge as they construct projects?” as they investigated a
geography unit taught by two elementary school teachers. The end project was a
collaborative hypermedia stack which the small groups presented to the class of
40 students. The researchers noted the intense engagement and motivation of the
students but concluded that the technology functioned as a mask for the lack of
quality in some of the projects. The teachers seemed to view a polished end
product as evidence of student learning, even though it was clear during the
construction stage that more technologically proficient students dominated the
keyboards. “A number of these projects probably would have been more critically
assessed if they had been in traditional form. In some cases, information was
copied verbatim, missing or simply erroneous” (p. 79). The conclusion is not
that technology sidetracks student learning but that teachers must restructure
their classrooms and assessment systems to accommodate fundamentally different
ways of learning. From Kafai’s carefully crafted design environment to
utilization of editing software in both Turner and Dipinto (1992) and Finkelman
and McMunn (1995), the teacher is a pivotal force in making the best use of
instructional technology. Pedagogy is key.
Meta-analyses and Long-term Studies
Yuen-Kuang
Liao (1998) conducted a meta-analysis of 35 studies in order to synthesize the
research comparing the effects of hypermedia on students’ achievement. The
researcher suggests the effects were moderately positive when compared to
traditional instruction (effect size 0.48). The result that educators might
find useful is the statistically significant impact of the type of hypermedia
delivery. The studies in which simulators (software “using vivid situations for
learning”) were employed showed significantly higher results than studies in
which computer-based interactive videodisc or multimedia were used. Therefore,
hypermedia programs which more actively involved the students resulted in
higher achievement than those which put the student in a more passive role
(Ayersman, 1996).
One
of the difficulties in pinpointing technology’s influencing factors is the
speed at which computers are evolving. One can see the problematic nature of
clustering studies which take place over extended time, if one pictures
comparing a child laboriously entering DOS commands and his younger sibling
effortlessly making movies on a laptop. Nevertheless, one meta-analysis that is
still being used to promote technology is James Kulik’s (1994) work which
synthesized over 500 studies on computer-assisted instruction (CAI). He found that
students in the treatment groups (those using CAI) averaged scores at the 64th
percentile on achievement tests while control group students (i.e., same
material without computers) averaged at the 50th percentile. Kulik also noted
that the CAI instruction was more time efficient and produced students with
more positive attitudes towards learning. Most of the research examined by
Kulik was done in the 1980s, when computer hardware and software was vastly
different from that available today. Kulik’s (1985, 1990, 1994) recurring
updates showing positive student achievement related to computer use may
indicate that the significant results are not merely a result of the “novelty
effect” (i.e., that the newness of the experimental approach did not unduly
inflate the results).
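Effect sizes and percentile shifts, such as Liao’s 0.48 and Kulik’s movement from the 50th to the 64th percentile, are two views of the same standardized difference; under a normal model they interconvert through the standard normal distribution. A minimal sketch of the arithmetic (an illustration of the general conversion, not a computation from either study’s data):

```python
from statistics import NormalDist

z = NormalDist()  # standard normal distribution

# An effect size (standardized mean difference) of 0.48, as in Liao's
# meta-analysis, places the average treatment student at roughly the
# 68th percentile of the control distribution.
percentile = z.cdf(0.48)

# Conversely, Kulik's reported shift from the 50th to the 64th percentile
# corresponds to an effect size of about 0.36 standard deviations.
effect_size = z.inv_cdf(0.64)
```

This is why a “moderately positive” effect size and a modest-sounding percentile gain can describe results of similar magnitude.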
Attempts to consolidate complex data from more than one discipline, grade
level, or even software application potentially muddy the results of
meta-analyses. The wide array of products that contain computer chips (PDAs,
digital cameras, networks, cell phones, etc.) further complicates definitional
delineations. Differences
in students’ developmental skills add an additional dimension: “Implementations
of these innovations takes place from kindergarten through high school, and the
attributes of successful integration may not be the same across these levels”
(Painter, 2001, p. 22).
It
would be remiss not to add a word about the possibility of biased results in
research and reports commissioned or sponsored by computer or software firms. A
meta-analysis by Jay Sivin-Kachala and Ellen R. Bialo (1994) and their
subsequent updates (including one in 1999) are highly visible (Coley, 1997;
Roschelle, Pea, Hoadley, Gordin, & Means, 2000; Schacter, 1999). The
original study found measurable positive differences in student achievement and
attitudes towards learning due to the effect of technology. An alert reader
would notice that the update identifies the authors as President (Bialo) and
Vice President (Sivin-Kachala) of Interactive Educational Systems Design. Both
the study and its update were published by the Software Publishers Association.
A
contrasting study, also sponsored by a computer firm, does not set out to prove
that using computers in K-12 education brings a rise in student achievement
results. Instead, these researchers wished to set up computer-rich environments
and document what happened. Research sponsored by Apple Computer (Apple
Classrooms of Tomorrow, ACOT) investigated 10 years of intensive technology
use. The collaborative included public schools, universities, and research
agencies. The results showed no significant difference in standardized test
scores relative to comparison groups in the same school (students who did not
have computers both at home and at school) (Baker, Gearhart, & Herman, 1994). However,
the study documented other effects of computer use not measurable by a pencil
and paper test. Students “explored and represented information dynamically and
in many forms, . . . communicated effectively about complex processes, used
technology routinely and appropriately, became independent learners and
self-starters, knew their areas of expertise and shared that expertise
spontaneously” among other changes (Coley, 1997, ¶ 21). Even self-proclaimed
techno-skeptic Larry Cuban was impressed. He wrote the introduction to Teaching
with Technology: Creating Student-Centered Classrooms by Sandholtz, Ringstaff,
and Dwyer (1997).
From five classrooms located in five different
schools in which children, families, and teachers received computers and
accessories, ACOT researchers learned soon enough that a saturation strategy
failed to alter how teachers taught. . . . The researchers watched what
happened, listened to teachers, and documented small, incremental, but
significant changes in classroom practices. They recorded how classrooms became
places of traditional and nontraditional teaching, imaginative hybrids of
practice that emerged over time. (p. xiii)
One
significant implication of this study is that the educators used the
technology-rich environment to support substantial changes in their pedagogy,
and the results were revealed in observable, documented student behaviors.
However, the changes in student work took time and did not show up in
standardized test scores.
Despite
the zeal with which proponents of technology advocate computers in schools,
there are cautionary voices. A number of important books (Healy, 1998; Postman,
1992; Roszak, 1996; Stoll, 1999) and articles (Alliance for Childhood, 2000;
Henry, 1999; Kirkpatrick & Cuban, 1998) question the mindless proliferation
of computers. The authors urge a more serious examination of this trend,
especially with young children. In her book Failure
to Connect (1998) Jane Healy writes:
Educators are worried that education is becoming an
adjunct to the technology business, a sort of training school for the high-tech
world. We parents want to see our children succeed, but the foundations for
true success—even future technology “guru” status—rest on skills that will not
become obsolete with the changing of a microprocessor. Most successful
technology innovators did not grow up with computers, but rather with rich,
internal imaginations. Many were divergent thinkers who failed to flourish in
the traditional world of school. (p. 31)
Opponents express concern about excessive computer use at the expense of
creative and outdoor play, and about the potential for physical harm such as
eyestrain, carpal tunnel syndrome, and poor posture. The underlying tenet posed
by Healy and
others hinges on the closeness of the match between the expressed goals for
computers and the actuated student use, as well as the importance of oversight
by all educators.
Nevertheless,
most schools have progressed beyond the basic question, “Do computers belong in
the classroom?” because parents, business, and the community view their presence
as a given. “Computers and the Net are simply preconditions for moving to a new
paradigm in learning. . . . More importantly, [initiatives which put computers
in schools] provide the children themselves with the tools they need to learn
and to catalyze the rethinking of education” (Tapscott, 1998, p. 136). The
pertinent question is not “Do computers make a difference?” but “What factors
in technology use influence student achievement?”
In a
provocative article entitled “Computers make kids smarter – Right?” Heather
Kirkpatrick and Larry Cuban (1998) categorize single studies, meta-analyses,
reviews, and other research into neat lines of pro and con. The concluding
remarks could not be clearer. “Given these pressures, it is that much more
imperative that educators have a clear sense of their goals for technology and
that researchers focus accordingly” (¶ 61).
A Closer Look at Two Large-scale Studies
The Milken Family Foundation produced a
report in 1998 that has had considerable media coverage, “The Impact of
Education Technology on Student Achievement” (Schacter, 1998). Harold
Wenglinsky’s (1998) work is one of the six featured studies. “Does It Compute?
The Relationship Between Educational Technology and Student Achievement in
Mathematics” examines data from the 1996 National Assessment of Educational
Progress (NAEP). The sample consisted of students in classrooms randomly selected
by NAEP, comprising 6,227 fourth graders and 7,146 eighth graders. The results
were controlled for teacher characteristics, class size, and socioeconomic
status.
The
three positive findings that Schacter reported from Wenglinsky are:
• Eighth-grade students who used simulation and
higher-order thinking software showed gains in math scores of up to 15 weeks
above grade level as measured by NAEP.
• Eighth-grade students whose teachers received
professional development on computers showed gains in math scores of up to 13
weeks above grade level.
• Higher-order uses of computers and professional
development were positively related to students’ academic achievement in
mathematics for both fourth- and eighth-grade students. (p. 7)
Schacter also lists two “negative findings” from
Wenglinsky’s report:
• Fourth-grade students who used technology to play
learning games and develop higher-order thinking performed only 3 to 5 weeks
ahead of students who did not use technology.
• Both fourth- and eighth-grade students who used
drill and practice technologies performed worse on NAEP than students who did
not use drill and practice technology. (p. 8)
Note the use of “only” in the first negative finding; it actually describes a
positive effect, one that was simply not as large as it might have been.
It is
useful to introduce another research project, often cited with the Wenglinsky
study (e.g., in Schacter, 1999), as a foil for examining the implications of
his work: Mann et al.’s (1999) evaluation of West Virginia’s Basic
Skills/Computer Education (BS/CE). What makes West Virginia’s initiative an
intriguing project are its scope and focus. Two results were clearly evident:
The comprehensive, statewide technology program had been fully implemented by
the 8th year, and achievement scores had improved. Mann et al.’s purpose was
not to offer approbation or opprobrium, but rather to determine the extent to
which West Virginia’s gains in test scores could be related to BS/CE.
West
Virginia’s primary goal was to improve basic skills of its elementary students.
In the school year 1990-1991 every kindergarten class in West Virginia received
hardware, software, and teacher training. The hardware component consisted of
three or four computers for each classroom, a printer, and a school-wide
server. This comprehensive intervention followed in waves as these students
moved up through the elementary school. While individual schools had some level
of decision-making authority, it was held within strict parameters. Schools
could decide whether computers would go in a centralized lab or be placed in
the classroom, or a combination of the two. Schools were also allowed to choose
either of the two recommended software packages.
The researchers selected 18 schools, using school as the initial stratifier.
Fifth graders (n = 950) were chosen because they were the only grade level with
3 consecutive years of test scores and the most continuous exposure to the
technology initiative. Factor analysis was used
to determine the effects of input phenomena, which were then related to
variation in standardized test scores.
Given
the significance of its conclusions, it is no wonder that the Mann et al.
(1999) study has been widely quoted. “The BS/CE technology regression model
accounts for 11% of the total variance in the basic skills achievement gain
scores of the 5th-grade students” (p. 12), and the authors convincingly argue
that 11% actually underestimates the real effect. The researchers looked at
other areas also:
While there are no differences in the amount of use
between girls and boys, the girls were more likely to see computers as a tool
and the boys as a toy. . . . In terms of gain scores, there were differences in
only two areas related to gender—girls gained more in social studies and boys
gained more in spelling. In math and reading, there were no gender differences.
(p. 35)
Furthermore,
“Those without computers at home
gained more [than students with computers at home] in: total basic skills,
total language, language expression, total reading comprehension and
vocabulary” (p. 34). In a separate report attached to the original study, the
principal investigators analyzed the cost efficiency relative to other interventions.
They found that the initiative was more cost effective in improving student
achievement than (a) class size reduction from 35 to 20 students, (b)
increasing instructional time, and (c) cross-age tutoring.
It is
critical to note that the West Virginia BS/CE Initiative is based on improving
the basic skills (spelling,
vocabulary, reading, and math) of its students. While vendors supplied a few
packages that could be considered “tool” programs (e.g., Children’s Writing and Publishing Center), most of the software
falls within the category of drill and practice. Representative titles include:
Bouncy Bee Learns Letters and Words,
Combining Sentences Series, Parts of Speech Series, and Skillsbank 96 Reading & Mathematics.
Therefore, the positive results (i.e., a rise in test scores) in West Virginia
are attributed to the use of drill and practice software.
An
apparent contradiction surfaces when comparing the student computer activities
in Wenglinsky’s (1998) and Mann et al.’s (1999) studies. Wenglinsky stated that the
use of computers to teach lower-order thinking skills (defined as “drill and
practice,” p. 15) was negatively related to academic achievement (pp. 5-8).
Mann et al. see positive results with the same type of software. A closer look
at the two original studies uncovers elements that clarify the situation. In
Wenglinsky’s full report, an umbrella statement prepares the reader: the study
“found that the greatest inequities did not lie in how often computers were
used, but in how they were used” (p. 5). He offers as definitional, “for
eighth-graders as ‘simulations and applications’ for higher-order skills and
‘drill and practice’ for lower-order skills; for fourth-graders, higher-order
thinking is measured from playing mathematical learning games” (p. 28). Thus
“playing learning games” counted as spending time in higher-order thinking skills
for fourth graders. Other researchers have noted that many popular “learning
games,” sometimes categorized as “edutainment,” fail miserably in teaching math
skills. For example, students may arrive at a correct answer (often a
requirement for going to the next level) simply by random clicking. Some games
allocate more screen time to rewarding behavior than to having the student
practice mathematical computations (Smith, 1986). “Rewards” typically come in
the form of dancing rabbits or multiple chances at shooting down alien space
ships. A salient precondition for software use in the classroom is embedded in
a section of Mann et al.’s report entitled “Policy Inputs.” “Both vendors
provided correlation matrices to the texts on the West Virginia adoption lists
and to the standardized assessment tool selected by the state” (p. 17). This
demonstrates that West Virginia’s explicitly articulated goals and its
carefully crafted plan can be powerful in affecting student learning. Such a
tight match between the assessment tools and the instructional strategies
should produce higher achievement scores.
Match Between Goals and Instructional Strategies
When
the National Council of Teachers of Mathematics approved the new math standards
in 1989, it set into motion a radical push for teachers to align their
instructional strategies with the constructivist principles outlined in the
standards and the subsequent documents. Lecture methods were no longer suitable
if teachers believed that students should learn mathematics in an active way,
allowing them to construct their own understanding. Leah McCoy (1996) conducted
a meta-analysis of 65 studies looking at programming languages and student
skills. She concludes, “Logo programming, particularly turtle graphics at the
elementary level, is clearly an effective medium for providing mathematics
experiences” (¶ 22). Yet having access to the computer and the software is not
sufficient for learning. McCoy notes that most studies included the
recommendation that “the teacher be involved in planning and overseeing the
Logo experiences to ensure that students discover and understand the target
concepts” (¶ 22).
Research
by Min Liu and Keith Rutledge (1996) explored the affective realm, relating
computer apprenticeship to student achievement. They compared two high school
classes in a high-minority, inner-city school; approximately 60% of the
population were considered “at-risk.” The control group was an intact computer
class (n = 22) learning to use
specific programs and the treatment group (n
= 24) was engaged in a multimedia design project. Liu and Rutledge cite
literature on the importance of motivation’s role in learning and student
achievement. They concluded that “the ‘learner-as-designer’ environment
described here had a positive impact on the at-risk high school students. As a
result of participating in this project, the students showed a significant
growth in their value of intrinsic goals” (p. 31). Observations of the
students working during their lunch time and before and after school
demonstrate their motivation for the project. The study, however, did not
attempt to determine how much learning could be attributed to the technology
itself or to other components of the project; for example, students were
creating for a “real audience”—a local Children’s Museum. In the words of one
of the museum’s representatives, “I’m ecstatic with their work. Their work is
excellent” (p. 42). The goal in this project was to engage students in digital
apprenticeship and to observe the outcome on student work. The teacher did not
lecture on how graphic designers work but varied the pedagogy to provide the
potential for real audiences, a series of working artists, and ongoing support.
Teaching within a computer design environment might feel risky for educators
used to more traditional modes, and it necessitates a change in instructional
strategies.
Some
complexity resides in defining what constitutes technology integration. Is a
lecture using presentation software substantially different from the exact same
information delivered via overhead transparencies? A teacher unable to deliver
a coherent explanation will not find her inability miraculously cured by
PowerPoint. A lesson in which a student spends an hour on the Internet finding
the latitude and longitude of his town shouldn’t count as “usefully integrating
technology” when the information is accessible with a 30-second visit to an
atlas. Simply because a student is using a computer doesn’t mean that the
trade-offs in time and money make it an appropriate use. Many would argue that
a classroom management system which “rewards” students who finish their
“regular work” by playing games (even basic skill games) does not constitute
technology integration. Not only does such a management system encourage
rushing other tasks, it promotes unhealthy competition for limited resources.
For further discussion of the difficulties, see Fouts (2000), Joy and Garcia
(2000), Kirkpatrick and Cuban (1998), or Painter (2001). All of these issues
point to the need for further research.
In
1999 researchers examined Idaho’s far-reaching computer infusion initiative by
relating test score gains to technology use patterns and technology literacy
along with five other components. (The sample population was over 35,000 8th-
and 11th-grade students.) The study concluded, “There is a positive
relationship between academic performance in core studies, language, math, and
reading and the integration of technology in Idaho’s K-12 schools” (as quoted
in Fouts, 2000, p. 22). The notable findings relevant for educators were that
the strongest technological predictors of achievement gains were the ability to
choose the appropriate software tool, the amount of computer use at school, and
exposure to Internet and e-mail use.
Professional Development
In
addition to Wenglinsky’s (1998) and Mann et al.’s (1999) findings mentioned
earlier, a deeper look at their full studies reveals a strong message which is
related less to the technology per se than to administrative support and intensive
teacher training. “West Virginia spent roughly 30¢ of every technology dollar
on training, ten times the national average for schools” (Mann et al., p. 16).
A related finding was laid out in Wenglinsky’s (1998) national study of NAEP
data. “Teacher professional development in technology and the use of computers
to teach higher order thinking skills were . . . positively related to academic
achievement” (pp. 5-6). Affording administrative support requires that
educators in policy-level positions have more theoretical and practical
knowledge of instructional technology. This includes developing a system for
measuring local success (Costa & Bobowick, 2001) and providing for staff
development.
“Teacher
expertise is the most important factor in determining student achievement,”
writes Linda Darling-Hammond (Darling-Hammond & Ball, 1997, ¶ 5). She
supports her statement with numerous studies, highlighting one by Ronald
Ferguson, which found that “teachers’ expertise (as measured by teacher
education, scores on a licensing examination, and experience) account for far
more variation in students’ achievement than any other factor (about 40% of the
total)” (¶ 7). Given the prevalence of computers in schools (Sandham, 2001), it
is truly surprising that teachers are not taught how to get the most out of
them. A 1999 survey reported that only 29% of teachers had participated in more
than 5 hours of professional development in technology curriculum integration
in the past year (Fatemi, 1999). Nor can we expect the passing of time (and
subsequent retirement of untrained teachers) to be the panacea. Fatemi notes
that “teachers who have been in the classroom 5 years or fewer are no more
likely to use digital content than those who have been teaching for more than
20 years” (p. 35).
Maine
data shows similar findings. In the fall of 2000 a survey was sent to all
teachers in the state to ascertain their access to computers, and their
professional and classroom use of computers (Eberle & Keeley, 2000).
Interestingly enough, only 30% of all teachers used computers frequently for
their own professional development. Overall, teachers in Maine possessed a
limited repertoire of instructional applications; infrequent use of
computerized problem solving, multimedia, or simulations was often reported.
However, teachers do tend to employ computers on a daily basis to meet
immediate needs, with 63.3% of teachers frequently creating materials on the
computer, 54.9% communicating with colleagues, and 43.2% performing
administrative work with computers. One of the more alarming findings, echoing
the national data, was that “Younger teachers do not use computer applications
more than more experienced teachers” (p. 4), thereby indicating that these
trends may not change with fresh influxes of new teachers. Further study may
show whether these results can be attributed to a dearth of newer technologies
or whether teachers do not know how to (or see no reason to) integrate computer
use into classroom activities.
Denton
and Manus (1995) cite research finding that teachers who have had in-service
training are more likely to use computers in instructional problem solving than
teachers who have not. They compared 3 years of standardized test data from
eight schools and concluded that “academic performance . . . across years
suggest that something is happening that is positive” but add that bold claims
are not supported (p. 4).
Access and Placement of Technology
Too
often access to computers is considered the primary measure of instructional
technology. A student-to-computer ratio shouldn’t be perceived as the bottom
line in evaluating technology’s impact on education. Access to computers is a
necessary but not sufficient requirement for determining the impact on
educational outcomes. If many of these computers sit in the back of a classroom
rarely receiving an ampere of electricity, the potential for understanding the
benefits or drawbacks in teaching and learning will remain unactualized. “It
seems educators may be making more progress in providing access to technology
than in figuring out how to use it as a learning tool” (Doherty & Orlofsky,
2001, p. 45).
Dale
Mann (1999) headed a study that asked under what conditions technology was
effective in raising student achievement. A significant finding was that
“students who had access to [the program’s] computers in their classrooms (the
‘distributed’ pattern) did significantly better than students who were taught
with [the program’s] equipment in lab settings. They had higher gains in
overall scores and in math” (p. 13). Ready access to computers ensured that
students had the potential to use them more often, and that teachers
self-reported having better skills in lesson planning, and delivering and
managing instruction. The placement of computers in the classroom instead of in
a separate lab configuration is noteworthy.
Education Week (Sandham, 2001, p. 87) reported that Maine is “just now laying the
groundwork for its first statewide school technology push,” referring to
Governor King’s Technology Endowment Fund (State of Maine, 2001). This
initiative will fund a portable digital device for every seventh grader in
Maine for the school year 2002-2003. While many Maine educators will disagree
with the “just now” statement in light of the 6-year-old ATM initiative and the
6-year-old Maine School and Library Network, they may be interested to note
that the article goes on to say that the results of recent surveys show “some
of the poorest areas had some of the best access to technology” (p. 87). It
remains to be seen if the critical next steps to make the best use of these
resources will occur. Contemplating future research about technology’s effect
on student learning in Maine is exciting because of the statewide breadth of
the initiative and the standardization of hardware and infrastructure forms.
The former has the potential to provide a rich data set, and the latter
serves to reduce one strand of complexity.
Conclusion
In
the rush to be “Ready for the 21st Century,” some districts may have been
satisfied with a simple list of the equipment deployed in their classrooms.
Never a viable measure of educational success, the number of machines, RAM
sizes, or even megahertz will no longer impress constituents. This is
especially true as the ongoing toll on the budget becomes more evident.
Not only is it necessary to justify purchases, implementation strategies, and
professional development with research data such as that found above, but districts
should be prepared to acknowledge legitimate concerns relating to technology
use or abuse. As part of the local assessment systems, educators could
proactively secure data regarding their students’ achievement in relation to
technology usage.
This
examination of pertinent studies shows that computers and technology have the
potential to be an important and viable component for increasing student
learning. However, the mere presence, or even simplistic use, of computers
is no panacea. On the one hand, educators who decry the hegemony of print-based
literacies (Papert, 1993; Russell, 1998; Tapscott, 1998) insist that the
misalignment between culture-based media and schools serves to disengage our
students. On the other hand, Theodore Roszak (1996) sounds a voice of reason:
“People who recommend more computers for schools are like doctors who prescribe
more medicine. What medicine? How much medicine? For what reason? The same
questions apply to computers.” It is mandatory that educators make the best
possible decisions. Heather Kirkpatrick and Larry Cuban (1998) summarize that
the research is inconclusive in several areas. Pressing questions still remain
unanswered and point to a dire need for further research: “Can we reach our
[educational] goals at less cost—without additional investments in technology?
Will computers help create the type of students and citizens we seek?” (¶ 42).
The
state of Maine is poised to invest substantially in educational technology.
There will be professional development training opportunities offered by a
variety of resources, but schools and districts will need to continue asking
the hard questions (Maine Department of Education, 2001). Despite the problems
with conducting and reading the research, Cheryl Lemke notes, “The future forms
of learning technology are impossible to predict, but we can design them better
based on the islands of research that help explain where we have been” (quoted
in the preface of Mann et al., 1999, p. 3). The research that we do have
indicates that explicitly articulating goals, closely matching them with
assessment tools and instructional strategies, and providing the absolutely
essential staff development do produce positive results in student learning.
References
Alliance for Childhood (2000). Fool’s gold: A critical look at computers in childhood. College
Park, MD. Retrieved August 18, 2001, from
http://www.allianceforchildhood.net/projects/
computers/computers_reports_fools_gold_contents.htm
Atwell, N. (1998). In the middle: New understandings about writing, reading, and learning.
Portsmouth, NH: Boynton/Cook.
Ayersman, D. J. (1996). Reviewing the research on
hypermedia-based learning. Journal of
Research on Computing in Education, 28(4), 500-525.
Baker, E. L., Gearhart, M., & Herman, J. L.
(1994). Evaluating Apple classrooms of tomorrow. In E. L. Baker, & H. F. O’Neil, (Eds.). Technology assessment in education and training (pp. 173-197).
Hillsdale, NJ: Erlbaum.
Beichner, R. L. (1994). Multimedia editing to promote
science learning. Journal of Educational Multimedia and Hypermedia, 3(1), 55-70.
Calkins, L. M. (1994). The art of teaching writing. Portsmouth, NH: Heinemann.
Coley, R. J. (1997, September). Technology’s impact [Electronic version]. Electronic School.
Retrieved on September 23, 2001, from
http://www.electronic-school.com/0099713.html
Costa, Sr., J. P., & Bobowick, E. (2001). Linking
technology to educational improvements. In B. Kallick & J. M. Wilson, III
(Eds.), Information technology for
schools: Creating practical knowledge to improve student performance (pp.
33-42). San Francisco: Jossey-Bass.
Darling-Hammond, L., & Ball, D. L. (1997). Teaching for high standards: What
policymakers need to know and be able to do. Retrieved November 3, 2000,
from http://www.negp.gov/Reports/highstds.htm
Denton, J. J., & Manus, A. L. (1995). Accountability effects of integrating
technology in evolving professional development schools. (ERIC Document
Reproduction Service No. ED393443)
Doherty, K. M., & Orlofsky, G. F. (2001, May 10).
Student survey says: Schools are probably not using educational technology as
wisely or effectively as they could. Education
Week, 20(35), 45-48.
Eberle, F., & Keeley, P. (2000, November). Survey of computer use of Maine teachers:
Sample results from a study of teacher use, application in the classroom and
access to computers. Augusta, ME: Maine Mathematics and Science Alliance.
Edwards, V. B. (Ed.). (2001, May 10). The new divides: Looking beneath the numbers
to reveal digital inequities [Special Issue: Technology Counts 2001].
Education Week, 20(35).
Fatemi, E. (1999, September). Building the digital curriculum. Retrieved November 24, 2000, from
http://www.edweek.org/sreports/tc99/articles/summary.htm
Finkelman, K., & McMunn, C. (1995). Microworld as a publishing tool for
cooperative groups: An affective study (Report # 143). Charlottesville:
University of Virginia, Curry School of Education. (ERIC Document Reproduction
Service No. ED384344).
Fouts, J. T. (2000, February). Research on computers and education: Past, present and future. Bill
and Melinda Gates Foundation. Retrieved October 28, 2001, from
http://tlp.esd189.org/images/TotalReport3.pdf
Joy, E. H., & Garcia, F. E. (2000). Measuring
learning effectiveness: A new look at no-significant-difference findings. Journal of Asynchronous Learning Network, 4(1).
Retrieved October 28, 2001, from http://www.aln.org/alnweb/journal/Vol4_issue1/
joygarcia.htm
Healy, J. M. (1998). Failure to connect: How computers affect our children’s minds—and what
we can do about it. New York: Simon & Schuster.
Henry, T. (1999, February 28). Educator questions computers’ educational value. USAToday.
Retrieved August 18, 2001, from
http://www.usatoday.com:80/life/cyber/tech/cta931.htm
Kafai, Y. B. (1995). Minds in play: Computer game design as a context for children’s
learning. Hillsdale, NJ: Erlbaum.
Kirkpatrick, H., & Cuban, L. (1998, Summer).
Computers make kids smarter—Right? TECHNOS Quarterly, 7(2). Retrieved October
16, 2001, from http://www.technos.net/journal/volume7/2cuban.htm
Kulik, J. A. (1994). Meta-analytic studies of
findings on computer-based instruction. In E. L. Baker & H. F. O’Neil, Jr.
(Eds.), Technology assessment in
education and training (pp. 9-33). Hillsdale, NJ: Erlbaum.
Kulik, C. C., &
Kulik, J. A. (1991).
Effectiveness of computer-based instruction: An updated analysis. Computers in Human Behavior, 7, 75–94.
Kulik, J. A., Kulik, C.-L. C., & Bangert-Drowns,
R. L. (1985). Effectiveness of computer-based
education in elementary schools.
Computers in Human Behavior, 1,
59-74.
Liao, Y-K. C. (1998). Effects of hypermedia versus traditional instruction on students’
achievement: A meta-analysis [Electronic version]. Journal of Research on
Computing in Education, 30(4), 341-360.
Liu, M., & Rutledge, K. (1996, April). The effect of a “learner as multimedia
designer” environment on at-risk high school students. Paper presented at
the annual meeting of the American Educational Research Association, New York.
(ERIC Document Reproduction Service No. ED394509). Later published in Journal
of Educational Computing Research (1997), 16, 145-177.
Lundeberg, M. A., Coballes-Vega, C., Standiford, S.
N., Langer, L., & Dibble, K. (1997). We think they’re learning: Beliefs,
practices, and reflections of two teachers using project-based learning. Journal of Computing in Childhood Education,
8(1), 59-8.
Maine Department of Education. (2001, September 25). Maine Learning Technology Endowment.
Retrieved October 16, 2001 from http://www.state.me.us/mlte/
Mann, D., Shakeshaft, C., Becker, J., & Kottkamp,
R. (1999). West Virginia story:
Achievement gains from a statewide comprehensive instructional technology
program. Milken Family Foundation. Retrieved August 27, 2001, from
http://www.mff.org/publications/publications.taf?page=155
McCoy, L. P. (1996). Computer-based mathematics
learning [Electronic version]. Journal of Research on Computing in
Education, 28(4), 438-461.
Painter, S. R. (2001). Issues in the observation and
evaluation of technology integration in K-12 classrooms. Journal of Computing in Teacher Education, 17(4), 21-25.
Papert, S. (1993). The children’s machine: Rethinking school in the age of the computer.
New York: HarperCollins.
Postman, N. (1992). Technopoly: The surrender of culture to technology. New York:
Random House.
Repman, J., Weller, H. G., & Lan, W. (1994).
Impact of social context on learning in hypermedia-based instruction. Journal of Educational Multimedia and
Hypermedia, 2(2), 283-298.
Riddle, E.
M. (1995). Communication through multimedia in an elementary classroom (Report # 143). Charlottesville: University of
Virginia, Curry School of Education. (ERIC Documentation Reproduction Service
No. ED384346)
Roschelle, J. M., Pea, R. D., Hoadley, C. M., Gordin,
D. N., & Means, B. M. (2000). Changing how and what children learn in
school with computer-based technologies. The
Future of Children, 10(2), 76-97.
Russell, G. (1998). Elements and implications of a
hypertext pedagogy. Computers and
Education, 31, 185-193.
Sandholtz, J. H., Ringstaff, C., & Dwyer, D. C.
(1997). Teaching with technology: Creating
student-centered classrooms. New
York: Teachers College Press.
Sandham, J. L. (2001, May 10). Across the nation. Education Week, 20(35), 67-105.
Schacter, J. (1999, June). Impact of educational technology on student achievement: What the most
current research has to say. Milken Family Foundation. Retrieved August 27,
2001, from http://www.mff.org/publications/publications.taf?page=161
Sivin-Kachala, J., & Bialo, E. R. (1994). Report on the effectiveness of technology in
schools, 1990-1994. Washington, DC: Software Publishers Association.
Sivin-Kachala, J., & Bialo, E. R. (1999). 1999 Research report on the effectiveness of
technology in schools (6th ed.). Washington, DC: Software Publishers
Association.
Smith, F. (1986). Insult
to intelligence: The bureaucratic invasion of our classrooms. Portsmouth,
NH: Heinemann.
State of Maine. (2001, January). Final Report of the Task Force on the Maine Learning Technology
Endowment. Augusta, ME: Office of Policy and Legal Analysis, Maine State
Legislature.
Stoll, C. (1999). High
tech heretic: Reflections of a computer contrarian. New York: Random.
Tapscott, D. (1998). Growing up digital: The rise of the net generation. New York:
McGraw-Hill.
Tierney, R. J., Carter, M. A., & Desai, L. E.
(1991). Portfolio assessment in the
reading-writing classroom. Alexandria, VA: ASCD.
Turner, S. V., & Dipinto, V. M. (1992). Students
as hypermedia authors: Themes emerging from a qualitative study. Journal of Research on Computing in
Education, 25(2), 187-199.
Weller, H. G. (1996). Assessing the impact of
computer-based learning in science. Journal
of Research on Computing in Education, 28(4), 461-485.
Wenglinsky, H. (1998). Does it compute? The relationship between educational technology and
student achievement in mathematics (Report No. BBB27038). Princeton, NJ:
Educational Testing Services. (ERIC Document Reproduction Service No. ED425191)
Wiggins, G., & McTighe, J. (1998). Understanding by design. Alexandria, VA:
ASCD.