Latent Semantic Analysis-based tutorial and assessment tools for ADL

Period of Performance: 02/01/2002 - 02/01/2006

Amount: $979K

Phase 2 SBIR

Recipient Firm

Knowledge Analysis Technologies, LLC
4940 Pearl East Circle, Suite 200
Boulder, CO 80301
Principal Investigator

Abstract

Distributed learning is rapidly shifting many instructor-led classes to web-based training, but making the e-learning experience as effective as an instructor requires an arsenal of learner-support tools that barely exists today. One noteworthy void is the validation of e-learning through authentic assessment, which includes mimicking excellent instructor feedback. In Phase I we demonstrated that the Intelligent Essay Assessor (IEA) can outperform skilled graders, and we developed a method to self-calibrate the system without human-graded essays. By modeling instructors' comments on officers' memos, we identified requirements and explored automated methods for content-commenting tools. Today's computerized writing tools focus on the mechanics of writing (style, grammar), whereas skilled instructors focus on content: semantics, knowledge, clarity, and appropriate supporting evidence. While past work has shown that the IEA, which uses Latent Semantic Analysis, assigns overall grades indistinguishable from instructors', the feedback it provides is still minimal. In Phase II we will develop more detailed and informative content-based feedback to guide the practice-by-revision that is essential to good writing. Feedback will be developed both for structured memos and for topically wide-ranging papers. An integrated assessment and commenting system will be fielded and evaluated in at least one Army instructional environment.
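
To make the LSA-based scoring idea concrete, the sketch below builds a small term-document matrix, reduces it with truncated SVD, and estimates a content score for a new essay as a similarity-weighted average of the grades of reference essays. This is a minimal illustration only: the toy corpus, the function names, and the similarity-weighted scoring rule are assumptions for exposition, not the actual IEA implementation.

```python
# Minimal sketch of LSA-based content scoring (illustrative, not the IEA itself).
import numpy as np

def build_lsa_space(docs, k=2):
    """Build a term-document count matrix and reduce it with truncated SVD."""
    vocab = sorted({w for d in docs for w in d.lower().split()})
    index = {w: i for i, w in enumerate(vocab)}
    A = np.zeros((len(vocab), len(docs)))
    for j, d in enumerate(docs):
        for w in d.lower().split():
            A[index[w], j] += 1.0
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return index, U[:, :k], s[:k]

def project(text, index, Uk, sk):
    """Fold a new document into the latent space: v = q^T U_k S_k^{-1}."""
    q = np.zeros(len(index))
    for w in text.lower().split():
        if w in index:
            q[index[w]] += 1.0
    return q @ Uk / sk

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Reference essays with hypothetical instructor grades.
references = [
    ("the heart pumps blood through arteries and veins", 5.0),
    ("blood carries oxygen from the lungs to body tissue", 4.0),
    ("the liver filters toxins from the blood", 3.0),
]
index, Uk, sk = build_lsa_space([t for t, _ in references], k=2)
ref_vecs = [(project(t, index, Uk, sk), g) for t, g in references]

student = "arteries carry blood pumped by the heart"
sv = project(student, index, Uk, sk)
# One simple scoring rule: similarity-weighted average of reference grades.
sims = np.array([max(cosine(sv, v), 0.0) for v, _ in ref_vecs])
grades = np.array([g for _, g in ref_vecs])
score = float(sims @ grades / (sims.sum() + 1e-12))
print(f"estimated content score: {score:.2f}")
```

A production system of this kind would use a much larger background corpus, term weighting (e.g., log-entropy or TF-IDF), and many more latent dimensions; the sketch only shows the core idea of comparing essays by semantic similarity rather than surface wording.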