Surveying Students Online To Improve Learning and Teaching

By Vishen Naidu, Shelley Kinash and Melanie Fleming.

One of the most sensible ways of improving learning and teaching is to ask the students for feedback. At the end of each teaching period (i.e. semester or term) all universities and many schools survey their students. Usually these surveys are managed online. Questions ask for student perceptions about teaching, assessment and workload. The survey administrators report four common problems. First, response rates are low. This means that valid and reliable conclusions cannot be drawn from the data. Second, students seldom take the time to write comments. It is comments that provide the necessary substance for meaningful change. Third, the questions are usually focussed on teaching and teachers rather than learners and the learning experience. As a result, student evaluation is usually applied only to teachers’ annual reviews rather than to quality improvement of education. Fourth, and as a consequence of the first three concerns, student evaluation rarely results in closing-the-loop. Closing-the-loop means that action is taken, the student feedback is applied to make meaningful changes and these improvements are clearly reported back to the students. This article reports what Bond University did to resolve these four problems of response rates, student comments, question content and application to reported quality improvement.

Traditionally, the last week of teaching, amid exam preparation, was reserved for Student Evaluation of Courses and Teaching (SECT) at Bond University – using a paper-based survey instrument that was both resource intensive and cumbersome, not to mention prone to handling error. In recent years, however, the move towards online student evaluation has become a widely accepted and well-established practice within the higher education sector. After successfully collaborating with EvaluationKIT on a pilot project in 2009, Bond launched its online student evaluation system in the first trimester of 2012, with the overarching aim of implementing a comprehensive, cost-efficient and reliable system. The system was received positively by both staff and students and delivered exceptional results. Since then, the focus has shifted to further developing the system to incorporate features and functions that would:

  1. engage students in reflective learning,
  2. encourage deeper, more meaningful written comments,
  3. contain a well-developed, balanced set of questions addressing the most pertinent areas of learning and the student experience, and
  4. facilitate greater transparency of the actions taken to close the loop on student feedback.

The Office of Learning and Teaching worked closely with EvaluationKIT to integrate four key features that address the areas highlighted above.

Response Rates

One of the most significant and pervasive challenges of migrating to an online student evaluation system is that response rates are inherently low (Nulty, 2008). The problem with low response rates is that they do not provide sufficient data from which to infer teaching effectiveness. Unlike paper-based evaluations, where surveys could be administered to a captive audience, online student evaluation relies heavily on voluntary student participation. Throughout the literature, low response rates are cited as a fundamental disadvantage of transitioning to an online evaluation system. However, some institutions have found solutions that work around this problem, such as providing incentives, withholding results from non-respondents or granting respondents early access to them, allocating time before the end of class to complete outstanding surveys, and sending pre-notifications and repeat reminders.

For Bond University, improving response rates in its online SECT system was critical following a disappointing overall response rate of 42 per cent in the 2009 pilot project. Consultations with various student focus groups revealed that learners were more likely to participate if some type of authoritative mechanism prompted them to complete any outstanding evaluations. As a result, a decision was made, in consultation with the Bond University Student Association (BUSA), to integrate a pop-up module that would encourage participation while acting as a restriction, preventing access to a student’s Learning Management System account until all outstanding surveys were completed. The two options on the pop-up were to complete the evaluations or to ‘do it later’ – the latter option temporarily disabling the pop-up for the current session to allow students to access content. Students are also given the option to opt out of the evaluation entirely by clicking on the statement, “I have considered completing this evaluation and have decided not to”.

In the first version, students then had to enter a reason for non-completion before resuming access to the learning management system. Students quickly figured out that they could enter garbled text to satisfy this requirement. In the next iteration, students instead had to click on one overall evaluation rating, resembling the one-tap efficiency of the TripAdvisor app. This modification was well received by students and yielded useful quality assurance data, in addition to the full responses, which achieved a response rate of 90 per cent. Alongside these customisations, Bond also launched an internal communication strategy that used posters, email notifications, in-class demonstrations and digital signage to educate staff and students about the importance of evaluations; this worked to good effect in increasing engagement and creating awareness of the new system. Compared with the result of the pilot project, these strategies produced very high response rates, exceeding 90 per cent across the university in every trimester.
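To make the flow concrete, here is a minimal sketch of the gating logic in Python. It assumes a hypothetical LMS integration; the names and data structures are illustrative and do not describe EvaluationKIT’s actual API.

```python
# A minimal sketch of the pop-up gating flow, assuming a hypothetical LMS
# integration. All names (Survey, gate_lms_access, etc.) are illustrative.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Survey:
    subject: str
    completed: bool = False
    opted_out: bool = False
    overall_rating: Optional[int] = None  # single rating captured on opt-out

def gate_lms_access(surveys, session):
    """Return the gating decision when a student logs in to the LMS."""
    outstanding = [s for s in surveys if not s.completed and not s.opted_out]
    if not outstanding or session.get("deferred"):
        return "GRANT_ACCESS"  # nothing pending, or 'do it later' chosen this session
    return "SHOW_POPUP"        # block LMS content behind the evaluation pop-up

def handle_choice(survey, session, choice, overall_rating=None):
    """Apply the student's pop-up choice: complete, defer, or opt out."""
    if choice == "complete":
        survey.completed = True       # student is redirected to the full survey
    elif choice == "do_it_later":
        session["deferred"] = True    # suppress the pop-up for this session only
    elif choice == "opt_out":
        # Opting out still asks for one overall rating (the one-click step).
        survey.opted_out = True
        survey.overall_rating = overall_rating

# Example: one outstanding survey triggers the pop-up; an opt-out clears it.
surveys = [Survey("MKTG101")]
session = {}
assert gate_lms_access(surveys, session) == "SHOW_POPUP"
handle_choice(surveys[0], session, "opt_out", overall_rating=4)
assert gate_lms_access(surveys, session) == "GRANT_ACCESS"
```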

Increasing Student Comments

While response rates are an important factor in increasing the reliability and accuracy of student feedback, improving the quantity and quality of student comments is equally important in understanding how students perceived the quality of teaching in a subject. Each student has a unique learning style, which has a significant impact on the way they react to different teaching styles and environments (Lewis, 2001). While analysis of quantitative data can provide quantifiable and easy-to-understand results, it is the qualitative data that provides greater insight into which areas can be improved or maintained. One way in which Bond has developed this area is by encouraging instructors to engage with students in the classroom on the significance of student evaluations and the importance of written comments; this often helps students frame and structure their responses based on those classroom discussions. The Office of Learning and Teaching also actively promotes the importance of student comments through email campaigns, digital signage and student association publications. The introduction of the QUILT-SF (Quality Improvement In Learning and Teaching) initiative has also helped reinforce the practical use of student evaluations by providing direct evidence that steps are taken in response to student feedback, thereby encouraging students to provide more thoughtful comments.

Survey Content

SECT is one of the most prevalent methods of soliciting feedback from students about their perceptions of the courses they are taking and the instructors who teach them. The design of the instruments and survey questions varies widely from institution to institution, but most often comprises a fixed set of questions supplemented with some open-ended questions. Beyond the typical SECT questions that concentrate on evaluating learning and teaching across courses, few institutions use measures that gauge student engagement and the overall student experience. When it became clear that the survey instrument in use at the time was not capturing the data needed to ensure the quality of learning and teaching at Bond, a decision was made to develop a new subject evaluation instrument that would also draw on student perceptions of engagement and the student experience. The new instrument was reviewed, validated and implemented under the direction of the Learning and Teaching Committee. The new question items were worded as follows:

  • The assessment tasks are appropriate to the learning outcomes.
  • The learning activities in this subject helped me to learn effectively.
  • The workload in this subject was realistic and appropriate.
  • I felt engaged by this subject.
  • Overall I am satisfied with the quality of this subject.

Closing The Feedback Loop

When students participate in the evaluation process, their primary concerns are whether their opinions actually matter, what happens to their responses and what actions are taken to address their concerns. Responding to student feedback does not always mean meeting every expectation; however, students do want to feel involved in the process and expect a level of transparency when it comes to responding to their feedback (Watson, 2003).

Universities and schools using online evaluation surveys report the quantitative results through descriptive statistics. Usually the mean and standard deviation of responses to each question are reported and charted. Online evaluation has the advantage of technology-enabled developments. Modern researchers conducting qualitative research use analysis software to efficiently identify key themes, and some online evaluation systems now include qualitative analysis in their packages. Bond’s selected system, EvaluationKIT, includes text-based analysis. As a result, educators can quickly see which aspects (such as assessment) student comments primarily address, and whether the overall sentiment is positive or negative. This data analysis substantially increases the power of evaluation to lead to quality improvement. A newly developed feedback system that Bond is currently trialling draws on the written comments, using a content analysis tool to close the loop on student feedback. The QUILT-SF system was designed to provide explicit evidence of the university’s response to student feedback and improvement. The idea was inspired by conversations with former executive members of the Bond University Student Association (BUSA), and falls directly in line with requirements set out by the Tertiary Education Quality and Standards Agency (TEQSA).
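As an illustration of what this kind of analysis involves, the sketch below computes the usual descriptive statistics for a Likert item and applies a crude keyword-based theme and sentiment tagger. The keyword lists are assumptions for the example; EvaluationKIT’s actual text analysis is proprietary and considerably more sophisticated.

```python
# Illustrative approximation of survey analytics: descriptive statistics
# for Likert items plus naive keyword-based theme/sentiment tagging.
# Keyword lists are assumed for the example, not drawn from any real system.
from statistics import mean, stdev

THEMES = {                       # hypothetical theme keyword lists
    "assessment": ["assessment", "exam", "assignment", "marking"],
    "workload":   ["workload", "busy", "hours", "overloaded"],
    "teaching":   ["lecture", "tutor", "teaching", "explained"],
}
POSITIVE = {"helpful", "clear", "excellent", "engaging", "fair"}
NEGATIVE = {"unclear", "unfair", "boring", "confusing", "excessive"}

def likert_summary(ratings):
    """Mean and standard deviation for one question's 1-5 ratings."""
    return {"mean": round(mean(ratings), 2), "sd": round(stdev(ratings), 2)}

def tag_comment(comment):
    """Attach themes and a crude sentiment label to a single comment."""
    words = comment.lower().split()
    themes = [t for t, kws in THEMES.items() if any(k in words for k in kws)]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    sentiment = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    return {"themes": themes, "sentiment": sentiment}

print(likert_summary([4, 5, 3, 4, 5]))   # {'mean': 4.2, 'sd': 0.84}
print(tag_comment("The exam marking felt unfair and confusing"))
# {'themes': ['assessment'], 'sentiment': 'negative'}
```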

In developing the system, Bond University worked with EvaluationKIT to automate the analysis of the student qualitative data. The system uses a comprehensive content analysis platform to identify prevalent themes and issues. Where there is sufficient comment data from the subject/course evaluations, the comments are grouped into themes according to their frequency of occurrence (sketched in the example below). The resulting reports include all student comments as entered, the thematic analysis, and descriptive statistics from the Likert-scale items. Subject coordinators and Heads of School use this data to identify what responsive actions, if any, need to be taken to improve or maintain the relevant subject(s). The process then passes through the relevant Associate Dean of Learning and Teaching, who quality-checks the reports before they are queued for publication. Operationally, the QUILT-SF reports are accessible to students as a PDF link on the online subject outlines. When clicked, the link directs students to a PDF report which outlines:
a) a summary of the most prevalent student feedback suggesting maintenance or improvement of a subject and/or its teaching,
b) the action taken in response to that feedback, and
c) the date by which the action is anticipated to be completed.
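The sketch below gives a concrete sense of the theme-grouping and report-assembly step: it ranks tagged comments by theme frequency and fills in the three report fields listed above. The threshold, field names and example data are assumptions for illustration, not the QUILT-SF implementation.

```python
# Simplified sketch of the QUILT-SF reporting step: rank comment themes by
# frequency and assemble the three published report fields. The threshold,
# field names and example data are illustrative assumptions.
from collections import Counter

MIN_COMMENTS = 5  # assumed minimum amount of comment data before reporting

def prevalent_themes(tagged_comments):
    """Count theme occurrences across all tagged comments, most common first."""
    counts = Counter(theme for c in tagged_comments for theme in c["themes"])
    return counts.most_common()

def build_report_entry(theme, feedback_summary, action, anticipated_date):
    """One report entry: feedback summary, responsive action, anticipated date."""
    return {"feedback": f"[{theme}] {feedback_summary}",
            "action": action,
            "anticipated_date": anticipated_date}

# Hypothetical tagged comments (e.g. produced by the tagger sketched earlier).
tagged = [{"themes": ["assessment"]}, {"themes": ["assessment", "workload"]},
          {"themes": ["workload"]}, {"themes": ["assessment"]},
          {"themes": ["teaching"]}]

if len(tagged) >= MIN_COMMENTS:
    top_theme, count = prevalent_themes(tagged)[0]  # 'assessment' (3 mentions)
    entry = build_report_entry(
        top_theme,
        "Students found the marking criteria unclear.",
        "Publish a detailed marking rubric with each assessment brief.",
        "Start of next trimester",
    )
    print(entry)
```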

Based on the examples discussed in this article, the following are key takeaways for improving evaluation through the use of online surveying:

  • Pop-up feature: Use an online pop-up feature that integrates with the learning management system to improve response rates.
  • Usability: Create a user-friendly system that makes it easy for students to enter comments, and advertise the importance of those comments.
  • Text analysis: Use a text-analysis program to identify key themes from the comments. Respond specifically to these themes and report the resulting actions online.
  • Survey design: Ensure that the wording of the questions will elicit responses that achieve the goals motivating the evaluation (i.e. if the purpose of the evaluation is to improve learning, ask students what changes would help them learn). Link the resulting actions to relevant online content so that students can see what is done with their feedback.

In summary, online student evaluation systems have become a widely adopted and versatile mechanism for gathering feedback on student perceptions. While the practice is not without its challenges, the combination of the strategies described here for gathering, analysing and responding to student feedback may help educational institutions build a comprehensive system capable of genuinely improving learning and teaching, and ultimately enhancing the student experience. Given the importance placed on the process, it is essential that these survey instruments are valid and reliable measures for gathering data on student perceptions. It is also essential that institutions actively build awareness among students of the importance of their feedback and ensure that steps are taken to respond to their concerns.

Dr Shelley Kinash is the Director of Learning and Teaching and Associate Professor of Higher Education at Bond University on the Gold Coast, Queensland, Australia.

Melanie Fleming is a Project Manager and Researcher at Bond University. She is currently completing a PhD on the topic of Evaluation for Learning.

Vishen Naidu is a Project Coordinator in the Office of Learning and Teaching at Bond University, Queensland. Vishen’s primary role is to oversee the ongoing administration and development of the electronic teaching evaluation system (eTEVAL) at Bond. Vishen has co-authored two published papers on student evaluation of teaching. His qualifications are in international business and marketing.

References:

Lewis, K. G. (2001). Making sense of student written comments. New Directions for Teaching & Learning, 87, 25–32.

Nulty, D. D. (2008). The adequacy of response rates to online and paper surveys: What can be done? Assessment & Evaluation in Higher Education, 33(3), 301–314.

Watson, S. (2003). Closing the feedback loop: Ensuring effective action from student feedback. Tertiary Education and Management, 9(2), 145–157.
