Introducing SELEVOR. A digital self-assessment tool for lecturers

Klostermann, André; Bögli, Laura; Bättig, Christian; Rufer, Lydia; Tribelhorn, Thomas (28 June 2022). Introducing SELEVOR. A digital self-assessment tool for lecturers (Unpublished). In: SIG 1 & 4 Joint Conference 2022.

22_earli_SELEVOR_poster.pdf - Presentation
Available under License BORIS Standard License.

Evidence shows that teachers and lecturers should regularly reflect on their teaching behavior to improve their teaching skills (e.g., Lombarts et al., 2009). In higher education institutions, the student evaluation of teaching has become a standard instrument to assess the quality of academic education. In theory, lecturers can use these student evaluations to self-assess their own behavior. However, these universal tools (1) are mainly designed for summative and, thus, judgmental purposes (e.g., Hounsell, 2003), (2) often fail to address criteria of good and effective teaching (e.g., Johnson, 2000), (3) are rarely tailored to specific forms of teaching (e.g., Spooren et al., 2013), and (4) do not provide specific interventions tailored to potential gaps. Consequently, additional services are required to evaluate teaching concepts more individually and in advance, so that potential problems can be prevented.
At the University of Bern, the Education Development Unit is developing a digital self-assessment tool, SELEVOR, that aims to support lecturers in the preparation of lectures. SELEVOR is conceptualized as a questionnaire embedded in the University's learning management system, which can be accessed by all members of Swiss higher education institutions. At its heart, SELEVOR evaluates teaching concepts based on seven principles derived as a synthesis of recent theories and models in the psychology of learning and teaching (among others, Biggs & Tang, 2011; Hattie, 2012): (1) constructive alignment, (2) target group orientation, (3) problem focus, (4) choice of content, (5) elaboration, (6) adaptive teaching, and (7) teacher engagement. Since the tool focuses on lectures, central concepts from communication and rhetoric served as an additional source. Each principle is measured with five items on a four-point Likert scale. Users receive immediate feedback on how well their concepts align with each of the seven principles, as well as specific suggestions derived from algorithmic analyses.
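The abstract does not describe the feedback algorithm itself; as a purely illustrative sketch, the following Python snippet shows how a simple mean-based mapping from the five Likert items of each principle to a coarse feedback category could look. The principle names are taken from the abstract, but the thresholds, feedback wording, and function names are hypothetical assumptions, not SELEVOR's actual logic.

from statistics import mean

# Hypothetical item responses (1-4 Likert) grouped by the seven SELEVOR
# principles; thresholds and feedback categories are illustrative only.
PRINCIPLES = [
    "constructive alignment", "target group orientation", "problem focus",
    "choice of content", "elaboration", "adaptive teaching", "teacher engagement",
]

def principle_feedback(responses: dict[str, list[int]]) -> dict[str, str]:
    """Map the mean of each five-item scale to a coarse feedback category."""
    feedback = {}
    for principle, items in responses.items():
        score = mean(items)  # ranges from 1.0 (low) to 4.0 (high)
        if score >= 3.0:
            feedback[principle] = "well aligned"
        elif score >= 2.0:
            feedback[principle] = "partially aligned, see suggestions"
        else:
            feedback[principle] = "revise concept, see suggestions"
    return feedback

# Example call with identical dummy answers for every principle.
example = {p: [3, 2, 4, 3, 2] for p in PRINCIPLES}
print(principle_feedback(example))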
In the pilot phase, a total of 101 users (49 female) completed the questionnaire. More than three out of four users were (senior) lecturers (54.6 %), PhD students (18.5 %), or post-docs (9.2 %). About half of the users reported at least 5 years of teaching experience and at least 1 week of advanced training in higher education didactics. SELEVOR was accessed by members of all of the University's faculties, which shows that, as intended, SELEVOR was used by a broad range of lecturers. Thus, the initial distribution and advertisement were successful. Feedback from the users showed that the time to completion was acceptable, and additional written feedback (from 18.5 % of the users) described SELEVOR as a helpful and easy-to-use tool for preparing lectures but also included specific suggestions to further improve the questionnaire mechanics.
To assess the reliability and validity of SELEVOR, in a first phase we performed item analyses and calculated the average inter-item correlation of each multi-item scale (IBM SPSS Statistics 28), as suggested by Bühner (2011). The resulting Cronbach's alpha values ranged between 0.31 (constructive alignment) and 0.71 (elaboration), with only two scales scoring higher than 0.6, which is considered acceptable (e.g., Bland & Altman, 1997). Therefore, further single-item analyses, designed as a mixed-methods approach (cognitive pretesting), will be performed to further revise the questionnaire. Since this process is still ongoing, the results will be presented at the conference.
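The reported reliability analysis was run in IBM SPSS Statistics 28. As a minimal sketch of the underlying statistic, the following Python function (using NumPy, which was not part of the original analysis) computes Cronbach's alpha for one five-item scale; the simulated data are illustrative only and do not reproduce the reported values.

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                               # number of items in the scale
    item_variances = items.var(axis=0, ddof=1)       # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of the sum score
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Illustrative only: 101 simulated respondents answering the five
# four-point Likert items of one principle (e.g., "elaboration").
rng = np.random.default_rng(0)
scale = rng.integers(1, 5, size=(101, 5))
print(f"Cronbach's alpha: {cronbach_alpha(scale):.2f}")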
In sum, SELEVOR presents itself as an adequate, low-threshold digital tool that provides lecturers with feedback and specific suggestions regarding their teaching concepts. However, further revision is necessary to better meet psychometric quality criteria.

Item Type:

Conference or Workshop Item (Poster)

Division/Institute:

09 Interdisciplinary Units > Centre for University Continuing Education

UniBE Contributor:

Klostermann, André, Bögli, Laura Joana, Rufer, Lydia, Tribelhorn, Thomas

Subjects:

300 Social sciences, sociology & anthropology > 370 Education

Language:

English

Submitter:

André Klostermann

Date Deposited:

05 Sep 2022 12:13

Last Modified:

05 Dec 2022 16:23

Additional Information:

References
Biggs, J., & Tang, C. (2011). Teaching for Quality Learning at University. Berkshire: Open University Press.
Bland, J. M., & Altman, D. G. (1997). Statistics notes: Cronbach's alpha. BMJ, 314, 572.
Bühner, M. (2011). Einführung in die Test- und Fragebogenkonstruktion. Pearson Deutschland GmbH.
Hattie, J. (2012). Visible learning for teachers: Maximizing impact on learning. New York: Routledge.

BORIS DOI:

10.48350/172590

URI:

https://boris.unibe.ch/id/eprint/172590
