AUSTRALASIAN JOURNAL OF EDUCATIONAL TECHNOLOGY, vol. 42, no. 2, pp. 18-38, 2026 (SSCI, Scopus)
Despite the growing relevance of artificial intelligence (AI) in language research, validated instruments for assessing researchers’ ethical awareness regarding its use are lacking. Given the increasing integration of AI into technology-enhanced language teaching, assessment, and learning environments, such instruments are also needed to provide a foundation for strengthening ethical AI practices across language research contexts. This study aimed to develop and validate the AI Ethical Awareness and Responsibility Scale (AI-EARS) to measure ethical awareness and responsibility in AI use among language researchers. The scale development process followed established validation standards and involved exploratory and confirmatory factor analyses. After item reduction, the exploratory analysis identified a two-factor structure, while the confirmatory factor analysis supported a refined one-factor, five-item model with strong model fit and excellent internal consistency. This final structure emerged because several items showed weak loadings, high residual correlations, or conceptual overlap during confirmatory analysis, indicating that they did not sufficiently contribute to the latent construct. Evidence of concurrent validity and test-retest reliability further supported the psychometric robustness of the scale.