Validation Study on the Practical Accuracy of Wood Species Identification via Deep Learning from Visible Microscopic Images

Authors

  • Te Ma Graduate School of Bioagricultural Sciences, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8601, Japan
  • Fumiya Kimura Graduate School of Bioagricultural Sciences, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8601, Japan
  • Satoru Tsuchikawa Graduate School of Bioagricultural Sciences, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8601, Japan
  • Miho Kojima Forestry and Forest Products Research Institute, Matsunosato, Tsukuba 305-8687, Japan
  • Tetsuya Inagaki Graduate School of Bioagricultural Sciences, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8601, Japan

Keywords:

Wood species identification, Microscopic cross-sectional images, Convolutional neural networks (CNN), Practical accuracy, Interactive platform, Web-based identification

Abstract

This study aimed to validate the accuracy of identifying Japanese hardwood species from microscopic cross-sectional images using convolutional neural networks (CNNs). The overarching goal is to create a versatile model that can handle microscopic cross-sectional images of wood. To gauge the practical accuracy, a comprehensive database of microscopic images of Japanese hardwood species was provided by the Forest Research and Management Organization. These images, captured from various positions on wood blocks, from different trees, and from diverse production areas, exhibit substantial intra-species image variation. To assess the effect of data distribution on accuracy, two datasets, D1 (segregated) and D2 (non-segregated), were compiled from 1,000 images (20 images from each of 50 species). For D1, distinct images were allocated to the training, validation, and testing sets, whereas in D2 the same images were used for both training and testing. Furthermore, the influence of the evaluation methodology on identification accuracy was investigated by comparing two approaches: patch evaluation (E1) and image evaluation (E2). The accuracy of the model for uniformly sized images was approximately 90%, whereas that for variably sized images was approximately 70%.
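The abstract distinguishes patch-level evaluation from whole-image evaluation. The sketch below illustrates one common way this distinction is implemented; it is not the authors' code, and the patch size (224 px), non-overlapping tiling, and majority-vote aggregation are assumptions for illustration only.

```python
# Minimal sketch (not the authors' pipeline): patch-level vs. image-level
# evaluation of a CNN wood-species classifier. A dummy classifier stands in
# for the trained network.
import numpy as np


def extract_patches(image, patch=224, stride=224):
    """Tile a cross-sectional image (H, W, C) into non-overlapping patches."""
    h, w = image.shape[:2]
    return [
        image[y:y + patch, x:x + patch]
        for y in range(0, h - patch + 1, stride)
        for x in range(0, w - patch + 1, stride)
    ]


def predict_image(patches, predict_patch):
    """Image-level label: majority vote over per-patch predictions."""
    votes = [predict_patch(p) for p in patches]
    return max(set(votes), key=votes.count), votes


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in for a trained CNN over 50 species (hypothetical).
    dummy_cnn = lambda patch: int(rng.integers(0, 50))

    image = rng.random((896, 896, 3))      # one microscopic cross-section
    true_label = 7                          # hypothetical ground-truth species

    patches = extract_patches(image)
    image_label, votes = predict_image(patches, dummy_cnn)

    patch_accuracy = np.mean([v == true_label for v in votes])   # E1-style
    image_correct = image_label == true_label                     # E2-style
    print(f"patches: {len(patches)}, patch accuracy: {patch_accuracy:.2f}, "
          f"image-level correct: {image_correct}")
```

Under this kind of scheme, patch evaluation scores every tile independently, while image evaluation scores one aggregated decision per specimen, which is why the two can yield different accuracy figures for the same model.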

Published

2024-05-31

Section

Research Article or Brief Communication