Developing a Research Base to Evaluate Digital Courseware

Online and blended learning continue to trend upward in higher education institutions across the U.S. According to a Babson Survey, the proportion of academic leaders who report that online learning is critical for long-term institutional growth has grown from 49 percent in 2002 to 71 percent in 2015, and institutions are investing in edtech products to support digital learning. Although these investments can take many forms, they typically include software programs that are designed to deliver curriculum content through educational learning applications.

How can institutions be confident that their investments of time and money in selecting and implementing new courseware will actually pay off? To facilitate the decision making, the Bill & Melinda Gates Foundation supported the development of the Courseware in Context Framework, a resource for helping instructors and institutions identify courseware products that meet their needs and the needs of their students. The framework provides a starting place for navigating the landscape of available research evidence on the effectiveness of courseware products.

Finding an answer in the research literature is not easy. New products are coming on the market all the time, and old products are getting renamed and repurposed. For most of them, there is no published independent research on the effectiveness of individual products.

At the same time, the number of studies of the effectiveness of learning software is huge. Even limiting your search to the relatively new category of “courseware” doesn’t reduce the number of studies enough to make the task of reviewing them less daunting. A Google Scholar search using the terms “efficacy of courseware” yields over 7,860 search results in a mere 0.08 seconds. But if you start examining the studies, you find that only a tiny fraction of them measure courseware learning impacts well enough to support drawing any kind of conclusion. And even when an educator does find a controlled study that provided a credible test of courseware impacts, chances are it will not be for the kind of product, subject area and students he or she is concerned with.

What’s more, the available courseware impact studies have results that are all over the map. Sometimes the classes using the courseware got better results than those using traditional lecture mode; sometimes there is no difference; at other times the students using courseware appear to do worse. An evaluation of adaptive learning products by SRI Education revealed that different impacts may be found even when the same product is evaluated in different studies. Differences in study conditions, such as implementation practices, learner characteristics, outcome measures and the version of the software being used can all affect results, making it difficult to form a conclusion on the basis of prior research.

The Courseware in Context Framework uses the research literature in a different way. Rather than looking for impact studies of particular courseware products, it considers the research base for various product instructional design features. Recognizing that the purposes and contexts for adopting courseware vary widely from college to college and that new courseware products are becoming available all the time, the Courseware in Context Framework does not seek to identify the “best” courseware. Rather, the framework sets forth a set of product capabilities and implementation practices that are considered desirable from multiple perspectives—technical compatibility, usability, best practices in implementation at the course and institutional levels, and consistency with research on learning.

The framework also invites users to consider the product’s context and the user’s primary reason for considering the adoption of courseware. Is an individual faculty member trying to get students to be more active learners in her course? Is an academic department trying to achieve greater consistency for a core course taught by many instructors? Or is a college looking to improve the success rate in a course that many students drop or fail?

Working with the Tyton Partners-led team developing the framework, we helped identify instructional features with support in the learning science literature that are incorporated into some courseware products but not others. Examples include diagnosis of skills a learner is missing, or a prompt for a student to make a prediction. For each of these features, we identified a small number of studies demonstrating that adding the feature to instruction improved learning outcomes.

The next step in our work, commencing this summer, is to conduct a comprehensive search of the learning effectiveness literature to identify all controlled studies of the courseware learning features published since 2000. Synthesizing the findings of all of the studies on a given feature will enable us to compute the average impact of adding the feature, providing a research-based estimate of its relative importance.
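The article does not specify how the average impact would be computed, but a standard approach for pooling controlled studies of a feature is an inverse-variance-weighted (fixed-effect) meta-analysis. The sketch below illustrates that calculation; the study names, effect sizes, and variances are hypothetical, not from the planned review.

```python
# Hypothetical per-study results for one courseware feature (e.g., skill
# diagnosis): each entry is (effect size d, variance of d). The numbers
# are illustrative only, not findings from the literature search.
studies = [(0.30, 0.02), (0.15, 0.05), (0.45, 0.04)]

# Fixed-effect pooling: weight each study by the inverse of its variance,
# so more precise studies contribute more to the pooled average impact.
weights = [1.0 / var for _, var in studies]
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)

# Standard error of the pooled estimate, for judging its precision.
pooled_se = (1.0 / sum(weights)) ** 0.5

print(f"pooled effect: {pooled:.3f} (SE {pooled_se:.3f})")
```

Comparing pooled estimates like this across features is one way to produce the "research-based estimate of its relative importance" the authors describe.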

The result of this work will be a highly readable document where educators can find information on the range of subject matters, learning outcomes, and learner types included in studies of the feature. This publication will launch alongside an updated Courseware in Context Framework in October 2016.

Barbara Means is director of the Center for Technology in Learning at SRI International, and Vanessa Peters is an education researcher at the center.