A hybrid approach based on deep learning for gender recognition using human ear images

KARASULU B., Yucalar F., Borandag E.

JOURNAL OF THE FACULTY OF ENGINEERING AND ARCHITECTURE OF GAZI UNIVERSITY, vol.37, no.3, pp.1579-1594, 2022 (SCI-Expanded)

  • Publication Type: Article
  • Volume: 37 Issue: 3
  • Publication Date: 2022
  • Doi Number: 10.17341/gazimmfd.945188
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Academic Search Premier, Art Source, Compendex, TR DİZİN (ULAKBİM)
  • Page Numbers: pp.1579-1594
  • Keywords: Human Ear, Gender Recognition, Deep Learning, Convolutional Neural Network, Recurrent Neural Network, Feature Extraction
  • Çanakkale Onsekiz Mart University Affiliated: Yes


Nowadays, the use of human ear images is gaining importance for the sustainability of biometric authorization and surveillance systems. Contemporary studies show that such processes can be carried out semi-automatically or fully automatically rather than manually. Because deep learning operates on abstract features (i.e., representation learning), it achieves considerably higher performance than classical methods. In this study, a synergistic gender recognition approach based on hybrid deep learning was developed to classify people fully automatically by gender using human ear images. Through hybridization, hybrid deep neural network architectures are used that combine a convolutional neural network component with recurrent neural network components. In these models, long short-term memory and gated recurrent unit layers serve as the recurrent components. Thanks to these components, the hybrid model captures the relational dependencies between pixel regions in an image very well. Owing to this synergistic approach, the gender classification accuracy of the hybrid models exceeds that of the standalone convolutional neural network model in our study. Two different gender-labeled image datasets were used in the experiments, and the reliability of the results was verified with objective metrics. The hybrid models achieved the highest gender recognition performance, with test accuracies of 85.16% on the EarVN dataset and 87.61% on the WPUT dataset. Discussion and conclusions are presented in the last section of the study.
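The abstract does not specify the exact layer configuration of the hybrid models. The sketch below illustrates the general pattern it describes, a convolutional feature extractor whose feature-map rows are fed as a sequence into an LSTM or GRU component before a binary gender classifier, using Keras. All layer sizes, the 128×128 input resolution, and the row-as-timestep reshaping are illustrative assumptions, not the paper's configuration.

```python
# Hedged sketch of a hybrid CNN + recurrent network for binary gender
# classification from ear images. Layer widths, depths, and the input
# shape are assumptions for illustration only.
import tensorflow as tf
from tensorflow.keras import layers, models


def build_hybrid_model(input_shape=(128, 128, 3), rnn_type="lstm"):
    inputs = layers.Input(shape=input_shape)

    # CNN component: extracts local spatial features from the ear image.
    x = layers.Conv2D(32, 3, activation="relu", padding="same")(inputs)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
    x = layers.MaxPooling2D()(x)

    # Treat each row of the feature map as one timestep, so the recurrent
    # component can model dependencies between pixel regions, as the
    # abstract describes.
    h, w, c = x.shape[1], x.shape[2], x.shape[3]
    x = layers.Reshape((h, w * c))(x)

    # Recurrent component: LSTM or GRU, the two variants named in the study.
    rnn = layers.LSTM(64) if rnn_type == "lstm" else layers.GRU(64)
    x = rnn(x)

    # Two-class softmax head for gender prediction.
    outputs = layers.Dense(2, activation="softmax")(x)
    return models.Model(inputs, outputs)
```

Swapping `rnn_type` between `"lstm"` and `"gru"` reproduces the two hybrid variants compared in the study, while dropping the recurrent block and attaching the dense head directly to the flattened CNN features would give the standalone CNN baseline.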