The most important lesson for me has been gaining a more solid theoretical framework for collaborative learning. The most rewarding learning activity was the time spent on the reflections, and on the (course and additional) readings required to write them. My most important conclusions from ONL162 are therefore already summarised in my earlier blog posts.
What will be most important for my future work on collaborative technology-enhanced learning is, on the one hand, a better understanding of the different terms related to PBL-like activities (Topic 1 bis) and, on the other, applying different models and analysis schemes for pedagogical design to the learning activity set-up of the Collaborative Robot-Assisted Language Learning project (Topic 3), such as the methodology and elements for collaborative learning (Topic 3, bottom), the Course feature view (Topic 4-I), Personas (Topic 4-II), the five stage model (Topic 4-III) and a learning activity storyboard (Topic 4-IV).
These analyses will be put to direct use in the CORALL project when it starts next year. The course has hence provided me with valuable scaffolding for the upcoming project.
Collaboration was already built into the core of the CORALL learning activities. The primary reason for this was pedagogical: in line with the many references on the benefits of collaborative learning that we have been reading in ONL162, we see a clear benefit of peer learning and peer support for the intended language learning.
There was however a secondary reason: added robustness in technology-enhanced learning. Even though enormous progress has been made during the last decade, finally bringing Automatic Speech Recognition (ASR) to a stage where it is competitive with text input, recognition errors still occur (as do misunderstandings in human-human interaction). For a native speaker, this can be a source of frustration, but that speaker nevertheless knows that the failure is on the system’s side. For a non-native speaker learning a second language, the situation is more complicated: to start with, pronunciation, vocabulary and grammar might not adhere to the rules of the language, which makes non-native ASR significantly more challenging. In addition, a less proficient non-native speaker may falsely conclude that the reason for the non-recognition was that she said something incorrectly, even if the utterance would have been understood and accepted by a human listener. If the learner then starts to change her (initially correct) pronunciation or wording, this could be detrimental to the learning. By adding a peer learner, the two of them can support each other to identify and negotiate mis-recognitions by the robot. This turns such failures into a learning activity in itself, since negotiating communication problems is essential in ordinary human-human communication as well.
I am thinking about this technological-pedagogical robustness dimension when looking back on my experiences within ONL162. So many technological problems with live and recorded webinars, microphone settings, program crashes, freezing video images, etc… And every single occasion has had a (small or larger) negative impact on the learning experience. In no case has there been any pedagogical benefit of the problems encountered…
… at least not for the course curriculum as such. Unintentionally, however, all these problems have taught me two things for my own upcoming use of communication technology in on-line collaborative learning:
- Carefully analyse which setting is the best for your on-line learning activity.
- Keep it simple!
What these two conclusions mean is that you should use the simplest (most robust) communication channel that is sufficient for the task at hand, and use more complex means of interaction not merely because they are available, but because they are necessary or provide additional pedagogical value for the task (e.g., visual information is required; interaction is essential – but consider whether video interaction is required, or if voice or even text input is enough; the session must be live; feedback must be instantaneous, …).
There is a seminal reference by Chapanis (1975) on interactive human communication in Scientific American, in which he found that speech- and video-based communication outperformed text-based communication for collaborative task solving (taking half the time to solve the task). There are hence clear benefits to being able to use speech compared to text-only communication. However, Chapanis also found that video added few further benefits, and this was not because the participants experienced technological problems with the video channel.
The choice of communication channel should hence depend on the suitability of the task, combining text-, voice- and video-based exchanges, both live and pre-recorded. This leads to another form of blended learning/teaching, with respect to the means of communication.
Picture: CC0 Public domain