Nitko and Brookhart (2011) defined feedback as information provided to a student by a teacher, based on a formative assessment performed by observing and diagnosing student activity. More broadly, feedback can include any information about performance that can be used to improve that performance. Such feedback should help students improve their work. Simonson, Smaldino, Albright, and Zvacek (2012) went on to suggest that feedback offers students guidelines for improving performance. Feedback is important in online environments, in field experiences, and in the classroom. Feedback can be classified by the type of interaction through which it is delivered: from the teacher to the student, from student to student, from the media (material) to the student, or from the media (material) to the teacher. The purpose of all this feedback is to adjust learning. Feedback delivered electronically includes synchronous feedback, such as prompts in electronically delivered quizzes, and asynchronous feedback, such as comments on assignments and tests, discussion responses, and notes on drafts of submitted papers. Simonson et al. pointed out that prompt feedback is important to improving the learning process. This finding is echoed in a study of formative assessment in the college classroom by Mangino (2012), in which one major finding was that the proximity of feedback to the time the task was performed was directly related to learning gains. The technology available for both learner assessment and delivery of feedback has expanded greatly in recent years and continues to grow. Distance learning environments are often the sites where methods for using these tools are developed.
Data as Feedback
Nonhuman Feedback: Data as Feedback?
Are data feedback? Not usually. In most cases, data need to be analyzed by a human, such as an instructor, and then delivered as feedback. Assessment of learning can be performed by a human or by software. Often, software-generated assessment needs to be interpreted, and the feedback delivered, by a human. Software programs can generate many types of reports, some delivered to the instructor for further analysis and action, and some to the student. Sometimes simple feedback can be delivered immediately to the student, usually in the form of a "correct" or "incorrect" response to an answer. This article discusses how data can be used both in the formation of feedback and as feedback itself.
Course Management Systems Tracking Data
Course management systems usually incorporate some sort of tracking mechanism to deliver data to instructors and course developers. These data could include navigation data, course management data, and demographic data. The data instructors could mine for formative assessment purposes include student participation data and student surveys. Researchers have found that many instructors are unable to use the available data for formative assessment interventions (Grant, 2012). Course management systems can produce large amounts of data; unfortunately, these data are often not reported in a way that is useful to the instructor. Instructors often lack the appropriate data management tools, so the data available in course management systems are inadequate for understanding, diagnosing, and providing feedback to direct the learning experience of students. Recent research has examined an example in the Blackboard system, provoking the question: how can data provide the evidence to cause instructors to change content or intervene in the learning process? Objective data are readily available through the Blackboard system. Data are easily collected on the number of postings a student has made, as well as on when and how they post. These data are somewhat helpful, but even more helpful data are available. For example, data can be examined semantically to check whether the discussions are on topic, which may tell the instructor whether intervention is needed. However, these data are useful only if they are accessible, complete, and usable by the instructor. This involves not only getting the data to the instructor in usable form, but also the instructor knowing how to combine the mined objective data with both subjective and objective data gathered through other means.
It is only then that an instructor can determine what formative steps are needed for individual students and for the course as a whole (Dringus, 2012). A recommendation arising from the research on this topic was that a layered approach be used to help end users become more comfortable with these data. Another suggestion was to develop a system in which instructors could share their strategies for using this type of data. As always, evidence-based reports of best practices would be helpful in creating faculty acceptance and use of the available data to improve teaching and learning (Grant, 2012).
Student response systems can provide immediate feedback in large university class settings. Powell, Straub, Rodriguez, and Van Horn (2011) studied the use of student response systems, also known as clickers, with 183 students divided into two groups. The grades of the two groups were analyzed to determine the effect of clicker use on achievement. A five-question survey was completed by the students to determine their perception of the effect of clicker use on their learning. The questions addressed academic achievement, the effect of formative assessment on learning, student self-identification of gaps in knowledge acquisition, the fun aspect of clickers, and student perception of cost versus value. The survey of perceptions showed that all of the students who used the student response system thought the clickers were fun, but only half thought they were worth the cost. Students felt that the feedback received from the student response system increased their understanding of the concepts and helped them catch gaps in their content knowledge. The final grade analysis showed that students who used the clickers performed significantly better than those who did not. Though clickers are still widely used with student response system programs, they may soon be replaced by handheld devices such as tablets or mobile phones.
Handheld devices can be used to deliver feedback to instructors for grading, or directly to students in field activities. In a study published in 2009, handheld devices were used with elementary school teachers in training as they participated in a field experience tutoring students. Assessment data were collected through an assessment tool designed for use on the specific handheld device being used (Bennett & Cunningham, 2009). It should be noted that the device used in this study would be considered quite out of date by 2014 standards. The handheld devices used had limited memory, so assessment tools had to be short, and data could be entered only with a stylus, with no keyboard entry available. Even with these drawbacks, the teachers found the devices useful for entering assessment data that could then be accumulated and analyzed so that feedback could be generated, upon which action could be taken to improve learning activities (Bennett & Cunningham, 2009). With current technology, smartphones and tablets are able to take the place of earlier devices, such as personal digital assistants, performing both student response activities and data reporting through course management system applications available on both platforms. With the existence of the "cloud," data are available anywhere.
Mobile devices can be valuable tools in current methods of learning and teaching. One method in which feedback utilizing mobile devices can be used is scaffolding. Combining today's independent, self-directed learning methods with the vast amount of information students confront can result in working memory overload. Scaffolding expands on the constructivist theory of education, which holds that students build knowledge based on what they already know. Constructivist theory states that individuals tend to build knowledge from a series of experiences (Revere & Kovach, 2011), and scaffolding is the method used to structure that series of experiences (Nitko & Brookhart, 2011). Hung, Hwang, Lin, Wu, and Su (2013) studied this learning/teaching method in a field experience during which students received information and answered questions on mobile devices, and found that using the devices to provide immediate feedback during an inquiry learning activity produced positive results. The group using the mobile devices scored much higher on post-activity assessments than groups performing the activity without real-time feedback. It should be noted that the control group did not have the scaffolding methodology built into their lesson.
Computer Assisted Learning
Simulation-based computer assisted learning programs also provide real-time feedback to assist the learning process. A study of the use of computer assisted learning in statistics coursework (T.-C. Liu, 2010) examined how synchronous feedback, provided through simulations, could direct students to appropriate activities to correct misconceptions that occurred during their learning process. Liu utilized the cognitive conflict learning model as a lens through which to examine this phenomenon. The cognitive conflict model, much like the scaffolding model, addresses cognitive overload in learning. In this model, learning follows a flow from externalization to reflection to construction to application. In the simulation-based computer assisted learning program studied, feedback during the construction phase prompted students into activities that would create concepts about the content, and feedback at the conclusion of the application phase either directed the student to remediate the construction phase or to go on to the next activity. In interviews, students reported that they felt able to learn statistical interpretation with fewer misconceptions through the use of feedback in a simulation-based computer assisted learning system.
Synchronous Instructor-to-Student Feedback
Communication in an online environment occurs between instructor and student, peer to peer, and even between the content and the student. Communication between instructor and student can take place both synchronously and asynchronously. Some instructors prefer to use the online chat sessions in the course management system for synchronous communication. Others prefer to hold traditional office hours in which they can have phone conversations or even Skype sessions with their students. Instructor feedback also occurs during online class sessions in which discussions take place. Research results have shown that synchronous discussions with instructors can be very beneficial. Students have reported that synchronous, computer-based discussions with instructors can be a positive form of formative assessment with feedback. Students reported that they felt the discussions shifted from passive learning to active learning, and that real-time instructor feedback created a connection between misconception and correction. Both students and instructors reported a more positive experience with computer-based discussions than with face-to-face discussions, and the instructor reported more participation in computer-based discussions than in face-to-face office hours (Chung, Shel, & Kaiser, 2006). Other studies have reported a feeling of disconnection in computer-based discussions because of the inability to observe nonverbal cues (Huang & Hsiao, 2012).
Asynchronous Instructor Feedback
A large factor in student success in an online course is instructor feedback. Research results have shown that students who receive more teacher comments on assignments, rather than just a letter or numerical grade, earn higher course grades. Conversely, fewer teacher comments were associated with lower student grades. These results illustrate the idea that teacher feedback is an essential part of an online course (Liu & Cavanagh, 2011). In order to be effective, feedback must be prompt and on point (Simonson et al., 2012).
Synchronous Peer Feedback
Chat sessions, often embedded in the live course sessions of course management systems, can be a positive, student-initiated, synchronous feedback vehicle. However, the feedback is often pedagogically questionable, and sessions can be dominated by the students who type fastest rather than generating a diversity of ideas. Chat sessions also generate overlapping discussions and out-of-sync conversations. One solution to these negative aspects is to have an instructor facilitate the chat (Revere & Kovach, 2011).
Asynchronous Peer Feedback
Student-to-student communication occurs most often on either discussion boards or blogs. This asynchronous form of communication allows students to be creative with content, but can also allow some students to miss out on participation (Revere & Kovach, 2011). Once again, some students find that misunderstandings are more likely in online communication because of the absence of nonverbal cues (Huang & Hsiao, 2012).
Discussion boards could be seen as the asynchronous form of chat sessions, and are closely related to classroom discussions. Discussion boards usually involve student-driven content based on instructor-assigned questions. They allow for thought-out discussions, rather than the spontaneous discussions of chat sessions or classrooms, and allow all students to participate more equally, leading to higher quality discussions (Huang & Hsiao, 2012). In order for discussion boards to be effective in providing the type of feedback that leads to critical reflection, instructors should facilitate the discussion board activity. The pace of the discussion board assignments should be set to assist students in focusing on the course content. The facilitator plays the role of keeping the discussion focused and ensuring participation (Henning, 2012).
Utilizing Asynchronous and Synchronous Peer Feedback Methods
Some research suggests that blending synchronous and asynchronous discussions creates a learning experience that includes the best of both worlds: the traditional classroom and the online learning environment. Proponents of this method assert that the traditional classroom discussion keeps students more engaged than an online, asynchronous discussion. At the same time, online, text-based discussions allow time for deep reflection and critical thinking. Some students found online, synchronous discussions frustrating because of overlapping conversations in the text-based chat. But overall, the students who participated in the synchronous sessions in this study found that both their involvement and their motivation in the course increased. Additional benefits of providing synchronous discussion sessions along the same topic lines as the asynchronous discussion boards include a second chance to check for understanding of the content discussed on the discussion boards, as well as stronger student preparedness for the synchronous discussion (Armstrong & Thornton, 2012).
Just as discussion boards can provide opportunities for critical reflection through peer feedback, blogs can provide for even more in-depth student expression and feedback. Many blog sites are free, and many course management systems provide a blog tool (Simonson et al., 2012). As a student-driven process, blogs can be a positive outlet for student expression and for peer feedback that enables critical reflection. The asynchronous nature of blogs, as well as the longer contributions, can discourage the feedback process. Factors that can ameliorate this problem are good instructor facilitation, notification of postings and responses, and applications available for tablets and mobile phones (Revere & Kovach, 2011).
Peer Feedback on Writing: Wikis and Peer Assessment on Writing Assignments
Wikis are writing assignments that are completed collaboratively over the Internet. Wikis provide an opportunity for peer-to-peer feedback that enables the critical reflection process. Research has indicated that the feedback received through the wiki process is perceived as valuable by the student writer. Among the drawbacks to the wiki experience are difficulties in the social aspects of group process. Research indicates that this problem often diminishes as students proceed in the project, becoming more comfortable with each other and clearer about the objective of the writing. Instructor facilitation, as well as strong assignment development, can lessen the negative impact of the social difficulties. The wiki is, essentially, a group assignment with the added purpose of being published to the Internet to provide information to others. Group assignments as a whole tend to encourage self-assessment and reflection through feedback from peers. Knowing that one will have to assess the work of another student encourages all students to understand the assessment criteria of the assignment. Thus collaboration and peer feedback assignments help not only the student receiving the feedback, but also the student giving it (Lin & Kelsey, 2009; T.-C. Liu, 2010; Revere & Kovach, 2011).
Peer Feedback Essentials
Research results have shown that peer feedback assignments can lead to one of the greatest goals of education: lifelong learning (Phelan, 2012). Peer assessment is best done with a clear instructor rubric. Student involvement in the construction of the rubric, along with clear assessment criteria and appropriate instructor facilitation, can maximize gains in the learning objectives of the peer feedback assignment for all parties. These guidelines can also minimize the occurrence of inaccurate peer feedback, which is one of the drawbacks of a peer assessment assignment. Peer feedback can be constructed within a course management system, or using social networking and other Internet resources such as Twitter, various Google applications, and audio/video applications such as Skype (Revere & Kovach, 2011). Feedback can be delivered in a timely manner, satisfactory to both recipient and provider, through course management systems or through e-mail. In distance learning, the feedback process can create a sense of community in the online classroom. Providing feedback, and receiving and utilizing feedback, often results in a reflection process that becomes a lifelong learning skill (Phelan, 2012).
Feedback is an essential part of online learning. Its primary purpose is to adjust learning processes in order to maximize student achievement of learning goals (Simonson et al., 2012). Feedback can be delivered from the instructor to the student, from student to student, or even from software embedded in the application that is delivering content to the student. Feedback can take the form of data, as in the reports that instructors can obtain from course management systems; this information must be analyzed and then delivered to the student as usable feedback (Grant, 2012). Simple feedback can be delivered from applications on mobile phones, through student response systems in large classes, or through computer assisted learning systems. Feedback in the form of human communication is delivered electronically in distance learning courses, which can include both synchronous and asynchronous feedback activities. Students have responded with both positive and negative perceptions of synchronous communication over the Internet (Chung et al., 2006; Huang & Hsiao, 2012). Asynchronous feedback from an instructor comes in the form of grades, comments, and corrections on papers. For the best learning improvements, this feedback should be given as close as possible to the time the assessed task was performed (Mangino, 2012). Peer-to-peer feedback occurs in distance learning courses through discussion boards, blogs, wikis, and other collaborative assignments. The peer-to-peer feedback process fosters in the learner a capacity for critical reflection, which is a lifelong learning skill. In addition, all of these feedback processes assist in the creation of the online learning community (Phelan, 2012).
Armstrong, A., & Thornton, N. (2012). Incorporating Brookfield's discussion techniques synchronously into asynchronous online courses. Quarterly Review of Distance Education, 13(1), 1-9.
Bennett, K. R., & Cunningham, A. C. (2009). Teaching formative assessment strategies to preservice teachers: Exploring the use of handheld computing to facilitate the action research process. Journal of Computing in Teaching Education, 25(3), 99-105.
Chung, G. K., Shel, T., & Kaiser, W. J. (2006). An exploratory study of a novel online formative assessment and instructional tool to promote students' circuit problem solving. Journal of Technology, Learning and Assessment, 5(6).
Dringus, L. P. (2012, June). Learning analytics considered harmful. Journal of Asynchronous Learning Networks, 16(3), 87-100.
Grant, M. R. (2012). University of Missouri-St. Louis: Data-driven online course design and effective practices. Continuing Higher Education Review, 76, 183-192.
Henning, T. (2012). Writing professor as adult learner: An autoethnography of online professional development. Journal of Asynchronous Learning Networks, 16(2), 9-26.
Huang, X., & Hsiao, E.-L. (2012). Synchronous and asynchronous communication in an online environment: Faculty experiences and perceptions. Quarterly Review of Distance Education, 13(1), 15-30.
Hung, P.-H., Hwang, G.-J., Lin, Y.-F., Wu, T.-H., & Su, I.-H. (2013). Seamless connection between learning and assessment: Applying progressive learning tasks in mobile ecology inquiry. Educational Technology and Society, 16(1), 194-205.
Lin, H., & Kelsey, K. D. (2009). Building a networked environment in wikis: The evolving phases of collaborative learning in a wiki-book project. Journal of Educational Computing Research, 40(2), 145-169.
Liu, F., & Cavanagh, C. (2011). High enrollment course success factors in virtual school: Factors influencing student academic achievement. International Journal on E-learning, 10(4), 393-418.
Liu, T.-C. (2010). Developing simulation-based computer assisted learning to correct students' statistical misconceptions based on cognitive conflict theory, using "correlation" as an example. Journal of Educational Technology & Society, 13(2), 180-192.
Mangino, P. (2012). Exploring the four core elements of formative assessment in college classroom instruction: Faculty member perspectives (Unpublished doctoral dissertation). Johnson & Wales University, Providence, RI.
Nitko, A. J., & Brookhart, S. M. (2011). Educational assessment of students (6th ed.). Boston, MA: Pearson.
Phelan, L. (2012). Assessment is a many splendoured thing: Fostering online community and lifelong learning. European Journal of Open, Distance and E-Learning, 2012(1), 1-12.
Powell, S., Straub, C., Rodriguez, J., & Van Horn, B. (2011). Using clickers in large college psychology classes: Academic achievement and perceptions. Journal of the Scholarship of Teaching and Learning, 11(4), 1-11.
Revere, L., & Kovach, J. V. (2011). Online technologies for engaged learning: A meaningful synthesis for educators. Quarterly Review of Distance Education, 12(2), 113-124.
Simonson, M., Smaldino, S., Albright, M., & Zvacek, S. (2012). Teaching and learning at a distance (5th ed.). Boston, MA: Pearson.
Lisa Goldsmith, 4811 Garden Spring Ln., Apt. 302, Glen Allen, VA 23059 E-mail: email@example.com
Table 1. Feedback Types by Temporal and Source Characteristics

Synchronous
  Human Feedback
    Instructor-to-student:
      * Live chat office hours
      * Chat facilitation in online class session
    Peer-to-peer:
      * Chat discussion in online class session
      * Online meetings for group assignments
  Data as Feedback Delivered Electronically
    To student:
      * Quiz results
      * Tutorials
      * Mobile devices
      * Clickers

Asynchronous
  Human Feedback
    Instructor-to-student:
      * Grades and comments
      * Discussion facilitation
      * Corrective comments on drafts of papers
    Peer-to-peer:
      * Wikis
      * Blogs
      * Discussion boards
      * Peer review of written work
  Data as Feedback Delivered Electronically
    CMS-to-instructor:
      * Reports from the CMS to be analyzed and feedback delivered to student

Sources: Grant (2012); Hung et al. (2013); Liu and Cavanagh (2011); Mangino (2012); Powell et al. (2011); Revere and Kovach (2011).