Abstract
This study examines IT educators’ opinions on using Microsoft Copilot Chat for their professional tasks. The significance of this research lies in the increasing influence of generative AI technologies on learning and the need to evaluate the feasibility of their use. The study employs an expert survey method based on a rating scale; eighteen experts participated. The results indicate that experts’ satisfaction with Microsoft Copilot Chat responses varies with the type of task. The highest-rated tasks were playing trivia on a certain topic (4.67), generating unit tests (4.50), optimising code (4.44), creating content for slides on a certain topic (4.44), and creating a comparative table between different items (4.27). The lowest-rated tasks were creating a logo for a conference (3.22), grading essays based on rubrics (3.17), identifying a logical fallacy in a particular article (3.00), converting the text in an image to a format that can be copied and pasted (2.88), and creating a mind map to illustrate concepts (2.70). Using Microsoft Copilot Chat for these low-rated tasks is therefore not currently recommended. Ratings were collected for each professional task for which a prompt was provided, and the SPSS Statistics suite was used to calculate Cronbach’s Alpha and Cronbach’s Alpha Based on Standardised Items. The study’s practical significance lies in demonstrating to educators the capabilities of Microsoft Copilot Chat for their routine professional tasks. It proved particularly effective in several areas, including administrative tasks (writing speeches, planning routes), assessment (developing tests and tasks for formative and summative assessment), communication (preparing information materials), lesson planning (generating ideas, creating graphic materials), programming assistance (explaining and optimising code), scientific activities (creating bibliographies, analysing articles), and others (e.g. playing intellectual games on a relevant topic). Future research directions are proposed, including the development of advanced training programs for IT educators on integrating AI into their professional practices and an examination of the effectiveness of these programs.
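For readers who want to reproduce the reliability analysis mentioned above, the sketch below shows how Cronbach’s Alpha and Cronbach’s Alpha Based on Standardised Items can be computed from an experts-by-tasks rating matrix. This is a minimal illustration in Python, not the authors’ SPSS procedure; the function name and the randomly generated demo matrix are illustrative assumptions, not the study’s data.

```python
# Minimal sketch (assumption: a matrix of 1-5 ratings, rows = experts,
# columns = tasks); the study itself used SPSS, not this code.
import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> tuple[float, float]:
    """Return (alpha, standardised alpha) for an experts x tasks matrix."""
    k = ratings.shape[1]                          # number of rated items
    item_vars = ratings.var(axis=0, ddof=1)       # per-item sample variance
    total_var = ratings.sum(axis=1).var(ddof=1)   # variance of the sum score
    alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Standardised alpha from the mean inter-item correlation
    corr = np.corrcoef(ratings, rowvar=False)     # k x k correlation matrix
    mean_r = corr[np.triu_indices(k, k=1)].mean()
    alpha_std = (k * mean_r) / (1 + (k - 1) * mean_r)
    return alpha, alpha_std

# Hypothetical data: 18 experts rating 5 tasks on a 1-5 scale
rng = np.random.default_rng(42)
demo = rng.integers(1, 6, size=(18, 5)).astype(float)
print(cronbach_alpha(demo))
```

With real survey data, the two coefficients would be read directly from the experts’ ratings matrix in place of the hypothetical demo array.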

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Copyright (c) 2025 Kateryna Osadcha, Viacheslav Osadchyi, Volodymyr Proshkin, Oleh Spirin