Research Article
Ragad M Tawafak, Ghaliya Alfarsi, Jasiya Jabbar
CONT ED TECHNOLOGY, Volume 13, Issue 3, Article No: ep306
ABSTRACT
With face-to-face clinical consultations restricted during the COVID-19 pandemic and healthcare systems challenged to deliver patient care, alternative information technologies such as telemedicine and smartphones play a crucial role. A wide variety of smartphone applications employ mathematical and programming techniques to improve the pedagogical efficiency of computer-assisted communication and learning. Accordingly, the main objective of this study is to develop a model system for smartphone-based computer graphics. The paper adopts the Technology Acceptance Model (TAM) as its m-learning framework and uses Bresenham’s line algorithm as the calculation method implemented by the applications. The study method applies the technology to validate content accuracy and the acceptance of its method of use. The results reveal significant positive effects of the proposed model in generating reasonable, fast, and accurate solutions to the presented problems and in developing a more interactive m-learning platform.
Keywords: computer graphics, algorithms, COVID-19, TAM
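The abstract above names Bresenham’s line algorithm as the calculation method behind the graphics application. For readers unfamiliar with it, the following is a minimal, generic sketch of that algorithm in Python; it is the standard integer-only rasterization routine, not the authors’ smartphone implementation, and the example endpoints are illustrative only.

```python
def bresenham_line(x0, y0, x1, y1):
    """Return the integer pixel coordinates approximating the line (x0, y0)-(x1, y1)."""
    points = []
    dx = abs(x1 - x0)
    dy = -abs(y1 - y0)
    step_x = 1 if x0 < x1 else -1
    step_y = 1 if y0 < y1 else -1
    error = dx + dy  # accumulated error term; integer arithmetic only

    while True:
        points.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        doubled = 2 * error
        if doubled >= dy:   # error favours a step along x
            error += dy
            x0 += step_x
        if doubled <= dx:   # error favours a step along y
            error += dx
            y0 += step_y
    return points


if __name__ == "__main__":
    # Rasterize a short diagonal segment and print the chosen pixels.
    print(bresenham_line(0, 0, 6, 4))
```

Because the routine avoids floating-point arithmetic, it is well suited to the fast, accurate rendering on mobile devices that the abstract emphasizes.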
Research Article
Kutay Uzun
CONT ED TECHNOLOGY, Volume 9, Issue 4, pp. 423-436
ABSTRACT
Managing crowded classes in terms of classroom assessment is a difficult task due to the amount of time that must be devoted to providing feedback on student products. In this respect, the present study aimed to develop an automated essay scoring environment as a potential means of overcoming this problem. As a secondary aim, the study tested whether automatically assigned scores would correlate with the scores given by a human rater. A quantitative research design employing a machine learning approach was adopted to meet the aims of the study. The data set used for machine learning consisted of 160 scored literary analysis essays written in an English Literature course, each essay analyzing a theme in a given literary work. To train the automated scoring model, LightSide software was used. First, textual features were extracted and filtered. Then, Logistic Regression, SMO, SVO, Logistic Tree and Naïve Bayes text classification algorithms were tested using 10-fold cross-validation to identify the most accurate model. To determine whether the scores given by the computer correlated with the scores given by the human rater, Spearman’s rank order correlation coefficient was calculated. The results showed that none of the algorithms scored the essays in the data set with sufficient accuracy. The scores given by the computer were also not significantly correlated with the scores given by the human rater. The findings implied that the amount of data collected in an authentic classroom environment was too small for classification algorithms to support automated essay scoring for classroom assessment.
Keywords: Automated essay scoring, Literary analysis essay, Classification algorithms, Machine learning
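The study itself used the LightSide workbench; as an illustration of the kind of pipeline the abstract describes (feature extraction, text classifiers evaluated with 10-fold cross-validation, and Spearman’s rho against human scores), here is a minimal sketch using scikit-learn and scipy instead. The file name "essays.csv", its "text" and "score" columns, and the particular classifier settings are assumptions for illustration, not the study’s configuration.

```python
import pandas as pd
from scipy.stats import spearmanr
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict, cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Hypothetical data set: one scored literary-analysis essay per row.
data = pd.read_csv("essays.csv")
texts, scores = data["text"], data["score"]

# Candidate classifiers, roughly analogous to those named in the abstract.
models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Linear SVM (SMO-style)": SVC(kernel="linear"),
    "Naive Bayes": MultinomialNB(),
}

for name, clf in models.items():
    # Extract simple n-gram features and evaluate with 10-fold cross-validation.
    pipeline = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), clf)
    accuracy = cross_val_score(pipeline, texts, scores, cv=10).mean()

    # Out-of-fold predictions stand in for the "computer-given" scores,
    # which are then compared with the human rater's scores via Spearman's rho.
    predicted = cross_val_predict(pipeline, texts, scores, cv=10)
    rho, p_value = spearmanr(scores, predicted)
    print(f"{name}: accuracy={accuracy:.2f}, Spearman rho={rho:.2f} (p={p_value:.3f})")
```

With only 160 essays spread across several score levels, each cross-validation fold contains very few examples per class, which is consistent with the abstract’s conclusion that the classroom-sized data set was too small for reliable classification.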