The explanation in the ABKD group’s introduction that their model “favours a socio-constructivist approach to learning” aligns with Ullrich et al. (2008), who state that, from a constructivist’s view, learning takes place in a social context and that “the innate properties of Web 2.0 services are beneficial for learning” (p. 707). One example of Web 2.0 learning that Ullrich et al. (2008) provide is the learning of a foreign language, which, coincidentally, is also the focus of learning in the trial evaluations the ABKD group conducted with WordPress and Audacity. Indeed, Ullrich et al. (2008) argue that Web 2.0 “is characterized by social learning and active participation, as advocated by constructivism” (p. 709).
One of the biggest strengths of the ABKD group’s software evaluation approach is the opportunity it gives the evaluator(s) to explore deeply and provide feedback based on socio-constructivist goals and outcomes, as is evident in section three of their model. Question 3.3 asks: “Does the software encourage exploration, experimentation, and student creation of their own knowledge? Explain how with some examples of what students can do.” In their own trial evaluations, the ABKD group illustrated the effectiveness of asking the evaluator to explain how the software “can help learning,” as opposed to eliciting simple “yes” or “no” answers. For example, in their evaluation of WordPress, the response to question 3.3 notes that students write posts about their learning experiences and other daily experiences, which gives students the opportunity to get to know each other better and form friendships as they communicate in a foreign language.
This approach was also effective in section two, regarding usability. In question 2.4, on Audacity’s interface design and representation of content, the evaluator comments specifically that the application is not “visually stimulating” and that “young students may find it ‘boring’ to look at.” Further observations under this question explain that the icons in the interface are “unique to audio processing” and that practice with the interface is necessary, even though the manual editing itself is simple to complete. The qualitative format of the evaluation model allows the evaluator to record observations that can serve as warning flags for educators whose students might become overwhelmed by unfamiliar and/or difficult interfaces. Ullrich et al. (2008) cite previous studies showing that “disorientation and cognitive overload are the principal obstacles of self-regulated learning in technology-enhanced learning” (p. 707).
Another example of the effectiveness of this evaluation model’s design is apparent in section five, “Support and Continuous Development,” where the evaluator must complete a number of comprehensive open-ended questions concerning online documentation, opportunities to provide feedback to the developer, the status of the developer’s website, available updates, etc. Question 5.3 instructs the evaluator to “Look at the developer's website and comment on recent activities. Are developers addressing concerns and problems in the forums? Do current users seem happy with the software?” These questions are important not only in terms of what support and documentation are available, but also, as indicated by Stamelos et al. (2000), in “giving suggestions on the various teaching strategies instructors can adopt...informing how the program can be fitted into a larger framework of instruction, etc.” (p. 9). Such resources and user feedback on developers’ sites are becoming increasingly vital with the use of Web 2.0 applications in education, and they reflect the nature of Web 2.0 in “harnessing the power of the crowd” (Ullrich et al., 2008, p. 707).
This software evaluation model covers many issues that can arise when using Web 2.0 software and saving data to the “cloud.” The model addresses data portability in question 4.2, under “User Data and Security.” As Web 2.0 applications evolve, and are sometimes even discontinued, as in the recent announcement that Google will be shutting down Google Reader (Bilton, 2013), it is very important that an evaluator provide information on how data can be saved and exported. Information must also be provided about the terms of service around ownership, privacy, etc., as addressed in questions 4.3, 4.4 and 4.5.
One of the strengths of the ABKD group’s software evaluation model could also be its biggest weakness: the process. While it addresses the reality of most educational settings - where educators usually work hand in hand with technology coordinators - the model may become unwieldy in its execution. The model is divided into three parts. The instructor completes the preliminary evaluation; the educator, in conjunction with the technology coordinator, completes the secondary evaluation. If the software is deemed worthy of further scrutiny, it is then tested with a pilot group of students, who complete student evaluations.
The preliminary evaluation is relatively concise, and the evaluator answers most of the form using a Likert scale. However, while the ABKD group indicates this form is to be completed by the instructor, the title of the form provided for the results of the WordPress evaluation reads “ICT Coordinator,” and its instructions state it is to be completed by “the instructor” to make “a secondary assessment.”
Aside from the potential for confusion over titles, terminology, and when each evaluation is to take place, another concern with the process is its assumption that “the teacher is knowledgeable in current teaching trends and best practices, and seeks to employ a constructivist pedagogy as the dominant form of instruction and learning. It also assumes the teacher is knowledgeable in the content area for which the software is intended.” Even a teacher who is knowledgeable in current trends and practices may still have limited experience and knowledge when it comes to evaluating technology and software. Tokmak, Incikabi and Yelken (2012) report that when education students performed software evaluations, they did not “provide details about specific properties in their evaluation checklist or during their presentation” and instead “evaluated the software according to general impressions” (p. 1289). They concluded that education students and new teachers “should be given the opportunity to be involved in software evaluation, selection and development” (p. 1293).
One can argue that the ABKD group’s evaluation process addresses some of these concerns, since the majority of the evaluation is to be completed by the instructor in conjunction with a technology coordinator. However, it may be possible to streamline the process by incorporating the preliminary evaluation - the “Basic Software Information” and “Features and Characteristics” sections - into the secondary evaluation that is jointly completed by the instructor and technology coordinator. New teachers, and teachers who subscribe to a constructivist view but have limited experience with technology and software evaluations, may find the preliminary evaluation intimidating and/or confusing. Furthermore, the final assessment completed by the students may be best treated as an optional evaluation, since not every school environment will allow for such an evaluation to take place. This was evident even in the ABKD group’s own software evaluations, as they did not have the opportunity for students to complete the third part due to holidays.
Overall, the software evaluation model proposed by the ABKD group is a comprehensive, qualitative model that addresses the current evolving trends of mobile and cloud computing and Web 2.0 applications. It is a solid and versatile model that recognizes the need for educators experienced in using technology and technology coordinators to work collaboratively in selecting, evaluating, and using software for educational purposes. With some minor changes to its design, it could also serve as a guide that motivates inexperienced or new educators to introduce more technology and software into their learning environments as they gain experience with software evaluations.
References
Bilton, N. (2013). The End of Google Reader Sends Internet Into an Uproar. The New York Times. Retrieved from: http://bits.blogs.nytimes.com/2013/03/14/the-end-of-google-reader-sends-internet-into-an-uproar/
Poissant, A., Berthiaume, B., Hogg, K., & Clarke, D. (2013). Team ABKD Group 5010 CBU Winter 2013: Our Software Evaluation Model. Retrieved from: http://alexthebear.com/abkd/
Stamelos, I., Refanidis, I., Katsaros, P., Tsoukias, A., Vlahavas, I., & Pombortsis, A. (2000). An adaptable framework for educational software evaluation. Retrieved from: delab.csd.auth.gr/~katsaros/EdSoftwareEvaluation.ps
Tokmak, H. S., Incikabi, L., & Yelken, T. Y. (2012). Differences in the educational software evaluation process for experts and novice students. Australasian Journal of Educational Technology, 28(8), 1283-1297. Retrieved from: http://www.ascilite.org.au/ajet/ajet28/sancar-tokmak.pdf
Ullrich, C., Borau, K., Luo, H., Tan, X., Shen, L., & Shen, R. (2008). Why Web 2.0 is Good for Learning and for Research: Principles and Prototypes. WWW ’08: Proceedings of the 17th International Conference on World Wide Web, 705-714. Retrieved from: http://wwwconference.org/www2008/papers/pdf/p705-ullrichA.pdf