CONTEMPORARY EDUCATIONAL TECHNOLOGY
e-ISSN: 1309-517X
My AI students: Evaluating the proficiency of three AI chatbots in completeness and accuracy

Reginald Gerald Govender 1 *

CONT ED TECHNOLOGY, Volume 16, Issue 2, Article No: ep509

https://doi.org/10.30935/cedtech/14564

Submitted: 27 December 2023, Published Online: 26 April 2024

OPEN ACCESS

Abstract

A new era of artificial intelligence (AI) has begun, one that can radically alter how humans interact with and profit from technology. The confluence of chat interfaces with large language models lets humans pose a natural-language inquiry and receive a natural-language response from a machine. This experimental design study tests the capabilities of three popular AI chatbot services, referred to as "my AI students": Microsoft Bing, Google Bard, and OpenAI ChatGPT, on completeness and accuracy. Likert scales were used to rate completeness and accuracy: three-point and five-point, respectively. Descriptive statistics and non-parametric tests were used to compare marks and scale ratings. The results show that the AI chatbots were awarded an overall score of 80.0%. However, they struggled with answering questions from the higher Bloom's taxonomic levels. The median completeness was 3.00 with a mean of 2.75, and the median accuracy was 5.00 with a mean of 4.48, across all Bloom's taxonomy questions (n=128). Overall, solutions were most often rated incomplete owing to limited responses (76.2%), while accuracy was most often rated correct (83.3%). In some cases, generative text was found to be verbose and disembodied, lacking perspective and coherency. Microsoft Bing ranked first among the three AI text-generative tools in providing correct answers (92.0%). The Kruskal-Wallis test revealed a significant difference in completeness (asymp. sig.=0.037, p<0.05) and accuracy (asymp. sig.=0.006, p<0.05) among the three AI chatbots. A series of Mann-Whitney tests was carried out, showing no significant difference between AI chatbots for completeness (all p-values>0.015 and 0<r<0.2), while a significant difference was found for accuracy between Google Bard and Microsoft Bing (asymp. sig.=0.002, p<0.05, r=0.3, medium effect).
The findings suggest that while AI chatbots can generate comprehensive and correct responses, they may have limitations when dealing with more complicated cognitive tasks.
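The non-parametric analysis described above (a Kruskal-Wallis test across the three chatbots, followed by pairwise Mann-Whitney U tests) can be sketched as follows. This is an illustrative example only: the ratings below are hypothetical placeholders, not the study's data, and the pairwise comparison shown is just one of the three the study would require.

```python
# Sketch of the paper's non-parametric comparison, using scipy.
# NOTE: the 5-point accuracy ratings below are hypothetical, invented
# for illustration; they are not the study's data.
from scipy.stats import kruskal, mannwhitneyu

bing = [5, 5, 4, 5, 5, 4, 5, 5]
bard = [3, 4, 3, 5, 4, 3, 4, 3]
chatgpt = [4, 5, 4, 4, 5, 4, 5, 4]

# Kruskal-Wallis: is there any difference among the three groups?
h_stat, p_kw = kruskal(bing, bard, chatgpt)
print(f"Kruskal-Wallis H={h_stat:.3f}, p={p_kw:.4f}")

# Pairwise Mann-Whitney U follow-up (here: Google Bard vs. Microsoft
# Bing). With three pairwise tests, a Bonferroni-style correction
# reduces the significance threshold to roughly 0.05 / 3 ≈ 0.017,
# which is consistent with the abstract's "p-values > 0.015" wording.
u_stat, p_mw = mannwhitneyu(bard, bing, alternative="two-sided")
print(f"Mann-Whitney U={u_stat:.1f}, p={p_mw:.4f}")
```

Reporting an effect size r alongside each Mann-Whitney p-value, as the abstract does, can be computed from the U statistic and sample sizes, but is omitted here for brevity.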


The articles published in this journal are licensed under the CC-BY Creative Commons Attribution International License.