GENERATIVE ARTIFICIAL INTELLIGENCE IN PSYCHOLOGY: IMPLICATIONS AND RECOMMENDATIONS FOR SCIENCE AND PRACTICE

Keywords

generative artificial intelligence
psychology
psychodiagnostics
psychotherapy

How to Cite

M. Melnyk, A. Malynoshevska, and K. Androsovych, “GENERATIVE ARTIFICIAL INTELLIGENCE IN PSYCHOLOGY: IMPLICATIONS AND RECOMMENDATIONS FOR SCIENCE AND PRACTICE”, ITLT, vol. 103, no. 5, pp. 188–206, Oct. 2024, doi: 10.33407/itlt.v103i5.5748.

Abstract

Generative artificial intelligence (AI) is becoming increasingly prevalent across various fields, particularly in psychology, where it has the potential to significantly transform approaches to diagnosis, therapy, and research. This paper summarizes current research on the use of generative AI in psychology and its impact on the theory and practice of psychological science.

One of the primary applications of generative AI is in psychodiagnostics, where it can automate the creation of diagnostic tools, interpret test results, analyze large volumes of data, and provide more accurate diagnostic conclusions. This significantly reduces the workload on psychologists while simultaneously increasing the efficiency of diagnostic processes. In the field of psychotherapy, generative AI can be used to create individualized therapeutic programs that provide continuous support to users, which is particularly important when access to qualified specialists is limited. Another important area is the use of generative AI in psychological research, where it can help create models of behavior, predict mental disorders, develop new research methodologies, and reduce routine administrative burdens, among other tasks.

While generative AI is transforming the work of psychologists, it simultaneously raises complex issues related to ethics, confidentiality, and the accuracy of diagnostic and therapeutic methods. To ensure that generative AI is used effectively and ethically, clear standards and regulatory frameworks must be developed. The authors therefore propose recommendations for the implementation of AI in psychological practice, emphasizing the need for specific guidelines to address these issues. The role of psychologists in ensuring the ethical use of AI, the necessity of continuous monitoring, and the assessment of its impact on users are also discussed.

Overall, generative AI holds great potential for psychological research and practice, but its implementation requires careful planning and consideration of ethical aspects to ensure safety and effectiveness.




This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Copyright (c) 2024 Maryna Melnyk, Alona Malynoshevska, Kseniia Androsovych
