Thursday, January 10, 2013

Whose opinion is best?

After reading Deborah Lynn Stirling's article, "Evaluating Instructional Design," I am not sure whether I am more or less confused about the best approach for software evaluation. I like the idea of an experimental study, but as noted in the article, how the student learns and how the software is used are not part of this approach. The effectiveness of the software is measured through student achievement results, yet high student achievement does not necessarily mean the software is effective. As Wanda Y. Ginn indicates in her article "Jean Piaget - Intellectual Development," drill-and-practice computer software "does not fit in with an active discovery environment" and "does not encourage creativity or discovery." Instructional software could simply be drill and practice where student achievement is high, but if the evaluation approach does not examine how the student learns, I think it has a major shortcoming. In contrast, Stirling's discussion of the User Surveys approach indicates that teachers do "in fact judge software based on evidence other than student achievement." While I am not always sure of the practicality of this approach, I agree with the statement that teachers "can benefit from field testing software within their own classroom." I know from my own experience that if I can find the time to have a few students sit down and try software before I introduce it to the whole class, I can avoid minor issues I never anticipated would surface. The software is ultimately going to be in the hands of the teacher and the students, so I think this approach makes the most sense, despite its challenges in practical application.

The overview provided by Stirling on evaluating instructional software could also be appropriate for tool/application software like word processors or spreadsheets. I think any approach that involves both the educator and the student, like the User Surveys approach, creates an opportunity for students to become, as Ginn points out, "active participants instead of passive sponges" in their learning. From my own experience, I have introduced software to students only to have some suggest they could do the task more effectively with different software. So if the teacher is willing to field-test software (whether instructional or another type of application) with his/her students, that will certainly open the door for students to be more actively involved in their learning. At the same time, field-testing tool/application software is getting harder as students bring their own devices to school (BYOD), and the traditional standardization of technology in many schools is slowly fading. Norris and Soloway, in "Tips for BYOD K12 Programs," argue that by the year 2015 "every student in America's K-12 public school system will have a mobile device to use for curricular purposes." As students continue to be connected to the Internet and to the various Web 2.0 applications available to them, it only makes sense that they will take a more active role in choosing which application to use to get the job done. It can be argued that this shift toward less standardization and more personalization could make field testing obsolete. I would argue, however, that field testing blends well with the Web 2.0 world, where students are given the opportunity in the classroom to present applications for field-testing that the teacher might not have considered or even be aware of.

When I initially considered Stirling's conclusion on evaluating instructional software, I decided that the direct approach was more in keeping with her perspective. I took this position because the direct approach is where the teacher has control, and Stirling argues that "software evaluation should be conducted by the instructor." What is not clear to me, however, is whether Stirling is arguing for a completely new approach to software evaluation, or suggesting that the last approach she discusses, User Surveys (where the teacher field-tests the software), is best suited because the teacher/instructor conducts the evaluation. If she is suggesting the User Surveys approach with field testing in the classroom, I think this could reflect a constructivist perspective because, as Stirling concludes, "the evaluation method used should yield information about quality, effectiveness, and instructional use." This approach, in my opinion, would have to go beyond data collection involving only student achievement; the students involved in the field testing would have to be active participants in that data collection, even though the approach is conducted by the teacher. Stirling's quote from V.E. Weckworth, however, would suggest a more direct approach, as the argument is made that the critical element in evaluation is "who...has the power, the influence, or the authority to decide." If Stirling is suggesting the teacher should conduct his/her evaluation with this philosophy in mind, then it would definitely be a direct approach. However, I would argue that this view of power, influence, and authority is greatly outdated in today's world, especially with tool/application software, where younger students have as much access to powerful applications as adults and where lucrative software startups are created by people hardly out of high school.

In following the Expert Opinion approach for software evaluation, I think there are a number of criteria that should be included. I struggled to come up with more than three, so I got some help from the site TechPudding. A rough sketch of how these criteria might be combined into a single score follows the list.

  • One major concern in any educational community is budgeting, so the cost of licensing, implementation, and potential upgrades should be considered. 
  • As an educator, I think it is important how the software collects and tracks data and whether it allows the teacher/school to analyze and share that data. 
  • Is the software user-friendly, with a clean layout and user interface? Does it follow the typical interface conventions of most existing software, so the user does not have to relearn how to navigate? 
  • The site TechPudding makes a great case for universal design and higher-order thinking attributes. In terms of engagement and learning in the 21st century, does the software meet the needs of today's learners?
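As a rough illustration only, here is a minimal sketch of how these criteria might be turned into a weighted rubric. The criterion names, the equal weights, and the 1-5 rating scale are all hypothetical assumptions for the sake of the example; they are not something Stirling or TechPudding prescribe.

    # Minimal sketch of a weighted evaluation rubric based on the criteria
    # above. All criterion names, weights, and ratings are hypothetical.
    CRITERIA_WEIGHTS = {
        "cost": 0.25,              # licensing, implementation, upgrades
        "data_handling": 0.25,     # collection, tracking, analysis, sharing
        "usability": 0.25,         # clean layout, familiar interface
        "universal_design": 0.25,  # higher-order thinking, 21st-century needs
    }

    def score_software(ratings):
        """Combine 1-5 ratings for each criterion into one weighted score."""
        return sum(CRITERIA_WEIGHTS[name] * rating
                   for name, rating in ratings.items())

    # Example: one evaluator's ratings for a single piece of software.
    ratings = {"cost": 4, "data_handling": 3, "usability": 5, "universal_design": 2}
    print("Weighted score: %.2f / 5" % score_software(ratings))

Equal weights are just a placeholder; a school facing a tight budget, for example, might reasonably weight cost more heavily than usability.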
