A Question of Learning


Photo by Renee Rosensteel


Scientists work to make computers more effective teachers

As a digital revolution changes classrooms across the region and country, one key question lingers at the end of each school day: Do the new technologies actually enhance students’ learning?

The answer is unclear.

After decades of research in fields such as cognitive science, the debate is no longer about whether digital technologies have the potential to help students learn, but how they can be harnessed to do so and whether products on the market are effective.

Meanwhile, rigorous evaluations of education technology products are rare, giving school officials little guidance when navigating the rapidly growing market.

The answers to those and other questions are being explored in Pittsburgh, where researchers are at the forefront of developing advanced educational technologies, analyzing data to refine them and evaluating products on the market to gauge whether they help students learn.

Digital Tutors

One promising area of research is applying artificial intelligence to how students learn in ways not unlike what teachers do. Researchers insist that such technology is not intended to replace teachers. It can, however, expand personalized instruction to all students, which is not practical in most classrooms.

“There’s great potential for technology to help teachers help students get more individualized attention,” said Vincent Aleven, associate professor at Carnegie Mellon University’s Human-Computer Interaction Institute.

One such software product is Cognitive Tutor, from Pittsburgh-based Carnegie Learning. It uses artificial intelligence to provide self-paced instruction and individualized feedback to students as they progress through a course of study. It’s part of the company’s curriculum that blends teacher-taught and computer-driven instruction.

Ideally, students use the Cognitive Tutor software twice a week in class. On those days, they work through math problems based on their level of knowledge. Some might be stuck on multiplying fractions and need extra time, while others are ready to continue. The software can tell.

The potential for enhancing personalized instruction with such technologies has led mainstream textbook publishers such as McGraw-Hill Education to become digital education companies. McGraw-Hill, for example, markets a digital teaching tool called Assessment and Learning in Knowledge Spaces (ALEKS) that uses artificial intelligence to adapt math and science courses to what students already know or don’t know. The product emerged from cognitive science, mathematics and software engineering research out of the University of California-Irvine.

Like ALEKS, the Cognitive Tutor software also grew out of cognitive science research, in this case more than 20 years ago at Carnegie Mellon. With renowned researchers such as Herbert Simon, recognized as an artificial intelligence pioneer, Carnegie Mellon has long been thinking about how we think and applying it to machines.

The science of learning

The science behind Cognitive Tutor is based on a model of how people think and learn called ACT-R, which was developed by John Anderson, Carnegie Mellon professor of psychology and computer science. Learning how to drive a car offers a simple example of how the complex theory can be applied in education.

The theory involves two kinds of knowledge: declarative and procedural. In learning how to drive, the necessary declarative knowledge includes knowing that putting the key in the ignition starts the car, applying pressure to the brake pedal prevents the car from moving, and so on.

But declarative knowledge alone is not enough. Drivers must draw on those learned acts and facts, put them together and apply them in proper order. For example, to back down a driveway, the car must be set in reverse and pressure must be applied to the gas pedal. With such procedural knowledge, drivers perform the steps seamlessly, routinely and without thinking about each one. Conversely, a gap in procedural knowledge, such as putting the car in neutral rather than reverse, guarantees they won’t succeed in backing down the driveway.
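As a rough illustration of the distinction, and not something drawn from ACT-R software or any product described in this article, declarative knowledge can be thought of as stored facts, while procedural knowledge is a routine that retrieves those facts and applies them in the right order. A minimal sketch, with names invented for the example:

```python
# Illustrative sketch only: the names and structure here are invented for this
# example and are not taken from ACT-R or any tutoring product.

# Declarative knowledge: isolated facts about how the car works.
declarative_facts = {
    "reverse gear moves the car backward",
    "the gas pedal makes the car go",
    "the brake pedal keeps the car still",
}

# Procedural knowledge: retrieving those facts and applying them in order.
def back_down_driveway():
    steps = [
        "press the brake pedal",
        "shift into reverse",         # a gap here (neutral instead of reverse) guarantees failure
        "release the brake gradually",
        "apply light pressure to the gas pedal",
    ]
    for step in steps:
        print(step)

back_down_driveway()
```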

With Cognitive Tutor, it’s not whether students get a question right or wrong that’s important, but how they build procedural knowledge by putting together and applying different math facts and skills to solve a complex problem. That ability is revealed in the “skillometer” feature of the software. Although the software features large, real-world problems, what students learn are specific cognitive skills, said Steve Ritter, chief scientist at Carnegie Learning.

As students work through a multistep problem, the software tracks how they complete each individual step and collects data to document what each student knows and doesn’t know. It then assigns new problems to solve based on that record.

The software can also determine the particular strategy students use to solve the problem and can provide help if the student is struggling to solve it. Such a process reveals what students have learned. It is considered a better indicator of students’ learning than getting a question correct on a test, which could be the result of guessing, the luck of encountering a particular problem they already know, or other factors.
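Carnegie Learning’s exact implementation isn’t described here, but tutoring systems of this kind commonly track mastery with a technique called Bayesian knowledge tracing: after each step, the estimated probability that the student has mastered the underlying skill is updated. The sketch below shows the general idea; the parameter values and skill are placeholders, not Carnegie Learning’s.

```python
# Minimal Bayesian knowledge tracing sketch. Parameter values are arbitrary
# placeholders, not those used in Cognitive Tutor.

P_LEARN = 0.15   # chance the skill is learned on any practice opportunity
P_GUESS = 0.20   # chance of a correct step without knowing the skill
P_SLIP = 0.10    # chance of an incorrect step despite knowing the skill

def update_mastery(p_known, correct):
    """Update P(skill known) after observing one step, then apply learning."""
    if correct:
        evidence = p_known * (1 - P_SLIP)
        total = evidence + (1 - p_known) * P_GUESS
    else:
        evidence = p_known * P_SLIP
        total = evidence + (1 - p_known) * (1 - P_GUESS)
    posterior = evidence / total
    return posterior + (1 - posterior) * P_LEARN

# Estimated mastery of "multiplying fractions" after a few observed steps:
p = 0.3  # prior probability the skill is already known
for outcome in [False, True, True, True]:
    p = update_mastery(p, outcome)
    print(f"step correct={outcome}: P(mastered) = {p:.2f}")
```

A display like the “skillometer” could then simply chart these per-skill estimates as students work.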

In addition, data on how students solve problems is analyzed for insights into how people learn.

Researchers, for example, use the data to estimate the likelihood that a student will answer a problem correctly. An incorrect answer to a problem a student was expected to get right may indicate that solving it involves skills researchers weren’t aware of, prompting further investigation. The process helps improve the software and researchers’ understanding of how students learn math, Ritter said.
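In practice, that kind of analysis amounts to comparing predicted and observed performance and flagging the surprises. The sketch below is hypothetical; the problem names, predictions and threshold are invented for illustration.

```python
# Hypothetical sketch: flag problems where students with high predicted success
# still answer incorrectly, suggesting the problem may involve an unmodeled skill.

step_log = [
    # (problem_id, predicted_probability_correct, answered_correctly)
    ("fractions-12", 0.92, True),
    ("fractions-13", 0.90, False),
    ("fractions-13", 0.88, False),
    ("fractions-14", 0.45, False),
]

SURPRISE_THRESHOLD = 0.85  # arbitrary cutoff for "expected to get this right"

surprising = [
    (pid, p) for pid, p, correct in step_log
    if p >= SURPRISE_THRESHOLD and not correct
]

for pid, p in surprising:
    print(f"{pid}: predicted {p:.0%} correct but missed -- review for hidden skills")
```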

RAND evaluated the Cognitive Tutor software in an expansive randomized control study that focused on its Algebra I course. The study involved 147 school sites (73 high schools and 74 middle schools) in 51 school districts in seven states over two years. None were in southwestern Pennsylvania.

Schools were randomly organized into two groups. One received the tutoring software and curriculum. A control group received traditional instruction from teachers. Outcomes for the first year showed little difference in the post-test algebra scores of students who used the software and those in the control group. However, the software significantly improved algebra scores for high school students in the second year it was used.

Second-year outcomes suggest, for example, that students who used the software gained roughly an additional school year’s worth of academic growth in algebra compared with students who did not use the software, according to the study.

The second-year improvement can be attributed, in part, to teachers adapting to teaching with the software, Ritter said. “Teachers are used to teaching the whole class. This idea that students might be at different places in the curriculum is unusual to them. It’s one of the points of transition, one of the things that change between the first and second year.”

Evaluating effectiveness

Can you see someone learning? Do you know when you’re learning? The answer is no, according to Ken Koedinger, professor of human-computer interaction and psychology at Carnegie Mellon and director of the Pittsburgh Science of Learning Center.

Learning is a subtle, complex process. But Koedinger worries that some people believe otherwise. “On the science side of things, my biggest concern and fear is that we have a tendency to think that we’re able to tap into our own learning process. But learning is really hard to see.”

Reducing cognition to something that is easily observable can be problematic. One risk is that if school officials and teachers think they can see their students learning, they may be less inclined to demand scientific proof that digital educational technologies on the market enhance learning rather than simply make the subject matter more engaging.

The education marketplace is flooded with games, apps and curriculum that claim to engage students, help them learn concepts more efficiently and raise test scores. But evidence of their effectiveness is scarce. Unlike the Cognitive Tutor software, most products sold today have not been rigorously evaluated to determine whether they help students learn.

In the field of evaluation, the gold standard is the randomized control trial. But thoroughly evaluating digital education products in schools is a long and expensive process. The RAND study of Cognitive Tutor, for example, cost $6 million and took two years to complete.

“Doing evaluation in a rigorous way is a very time-consuming process, especially if you’re going to do it in real school situations,” said John Pane, distinguished chair in education innovation and a senior scientist at RAND in Pittsburgh. “You think of all the products that are out there and only a very, very tiny fraction of them have undergone that so far. And it does not seem like it is feasible to test everything that way.”

Federal education reform, such as the Every Student Succeeds Act, encourages school districts to use data and evidence-based practices to improve student outcomes and qualify for education innovation research grants.

But evidence-based decisions on which technologies schools adopt remain the exception rather than the rule. The proven effectiveness of a product is not always a key consideration when districts shop new technologies. Even when it is, school officials find that only a few products have undergone the kind of evaluation that would tell them how well a product enhances learning. “What’s happening in the mainstream is much more influenced by marketing than it is by research,” Pane said. “Part of this reform push is trying to switch it to be more research oriented.”

Without strong incentives to demonstrate the effectiveness of products, Koedinger said, “there isn’t a felt need for evaluation in the near term for these companies.”

While the evidence base is growing, it isn’t keeping pace with the development and use of new educational technologies. “If you’re looking at what’s being used in schools right now it’s impossible to study all of them. That’s one problem,” Pane said. “If we embark on a study and five years later we come out with the results, the product has evolved and it’s not actually the product we evaluated anymore.”

Possible solutions include more timely evaluation of educational technologies and stronger incentives to use research-proven products in the classroom. The U.S. Department of Education, for example, is developing an approach called rapid-cycle technology evaluation as a low-cost, quick turnaround way to assess digital education products. “I think evaluation is building out,” Koedinger said. “It won’t make a change overnight, but I think the incentives are going to emerge.”
