
This is Ryuji Masui, Data Scientist at HACARUS.
In this article, I’m going to focus on how to apply Sparse Modeling to Hyperspectral Imaging, a technology HACARUS has a strong interest in.
Hyperspectral Image
Let’s start off by explaining what Hyperspectral Images (HSI) are.
Ordinary cameras like the ones we are all familiar with are called RGB cameras, named literally after the three primary colors of light: red, green, and blue. These cameras sense each of those three wavelength bands to capture an image. In simpler terms, by mixing red, green, and blue, conventional cameras create an image that looks natural when viewed by humans.
The HSI cameras introduced in this article can detect not only red, green, and blue, but use advanced spectroscopic technology to detect far more wavelengths. State-of-the-art HSI cameras even offer spectral resolutions of hundreds of wavelength bands, allowing far more detailed information to be captured.
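To make the difference concrete, a hyperspectral image is usually stored as a three-dimensional “cube”: two spatial axes plus one spectral axis. Here is a minimal sketch in Python with NumPy contrasting the two; the band count of 224 is just an illustrative assumption, as real cameras vary:

```python
import numpy as np

# Hypothetical illustration: an RGB image versus a hyperspectral "cube"
# of the same spatial size. The band count of 224 is an assumption;
# real cameras vary.
height, width = 512, 512

rgb_image = np.zeros((height, width, 3))    # 3 channels: red, green, blue
hsi_cube = np.zeros((height, width, 224))   # hundreds of narrow wavelength bands

# Each pixel of the HSI cube carries a full spectrum instead of 3 values.
pixel_spectrum = hsi_cube[0, 0, :]
print(pixel_spectrum.shape)  # (224,)
```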
HSI cameras make it possible to identify objects that are difficult for humans to distinguish. For example, HSI can tell salt, sugar, and flour apart, or identify which crops are being grown from aerial photographs of farms. Other areas where HSI cameras are expected to be introduced include anomaly detection on food processing lines, as well as potential medical applications.
Hyperspectral Image and Sparse Modeling
On May 17th, 2020, a paper titled “Hyperspectral Image Classification Based on Sparse Modeling of Spectral Blocks” was published. In this paper, the dictionary learning technique from sparse modeling was combined with HSI to achieve image classification with both high speed and high accuracy. Since HSI has high resolution along the spectral axis in addition to the spatial axes, HSI images can be regarded as much higher-dimensional data than RGB images. In general, solving a classification problem on such high-dimensional data requires a vast amount of training data to prevent overfitting. Sparse modeling, however, can extract the essential features and thereby prevent overfitting even when data is limited. In fact, in the case of crop identification from aerial photographs of a farm, the performance of this technique is close to 100%, while the existing SVM (Support Vector Machine) method achieves an accuracy of about 80%.
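To give a feel for the approach, here is a minimal sketch in Python with scikit-learn of one common way to combine dictionary learning with classification: learn a small dictionary per class, then assign each spectrum to the class whose dictionary reconstructs it with the smallest error. This is only an illustration of the general idea, not the paper’s exact spectral-block algorithm, and all function names and hyperparameters are assumptions:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

def fit_class_dictionaries(X_train, y_train, n_atoms=20, n_nonzero=5):
    """Learn one small dictionary per class.

    X_train: (n_samples, n_bands) per-pixel spectra; y_train: class labels.
    """
    dictionaries = {}
    for label in np.unique(y_train):
        learner = MiniBatchDictionaryLearning(
            n_components=n_atoms,
            transform_algorithm="omp",          # sparse coding at transform time
            transform_n_nonzero_coefs=n_nonzero,
            random_state=0,
        )
        learner.fit(X_train[y_train == label])
        dictionaries[label] = learner
    return dictionaries

def classify(X_test, dictionaries):
    """Assign each spectrum to the class with the lowest reconstruction error."""
    labels = sorted(dictionaries)
    errors = []
    for label in labels:
        learner = dictionaries[label]
        codes = learner.transform(X_test)        # sparse coefficients
        recon = codes @ learner.components_      # reconstructed spectra
        errors.append(((X_test - recon) ** 2).sum(axis=1))
    return np.array(labels)[np.argmin(np.stack(errors), axis=0)]
```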
This paper was studied, implemented, and replicated internally by several of our data scientists, and the results are shown below. The leftmost part of the figure shows an aerial image of a farm, color-coded by crop. From this, we randomly used 10% of the pixels as training data to infer the remaining 90%. The supervised SVM method achieved an accuracy of 80.188%, while the sparse modeling method achieved 96.518%. Thus, by applying sparse modeling efficiently, we were able to prevent overfitting and significantly improve inference performance, even for a small amount of high-dimensional data.
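For reference, the evaluation protocol itself is straightforward: treat each labeled pixel’s spectrum as one sample, train on a random 10%, and infer the remaining 90%. The following sketch shows the SVM baseline side of that comparison; the cube and labels arguments are placeholders for the labeled aerial data, not our actual pipeline:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def evaluate_svm_baseline(cube, labels):
    """cube: (H, W, bands) HSI data; labels: (H, W) integer crop labels."""
    # Flatten the cube so every labeled pixel becomes one training sample.
    X = cube.reshape(-1, cube.shape[-1])
    y = labels.ravel()

    # Random 10% of pixels for training, remaining 90% for inference.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, train_size=0.1, stratify=y, random_state=0
    )
    clf = SVC().fit(X_train, y_train)
    return accuracy_score(y_test, clf.predict(X_test))
```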
Evaluation with real data
HACARUS has refined this sparse modeling-based algorithm and evaluated its performance on real-life data. This time, we used AVALDATA’s Near Infrared Hyperspectral Camera to perform a practical experiment.
Please take a look at the following images. These pictures show the famous Kyoto confectionery “Yatsuhashi”. The top left and top right images show Yatsuhashi made by two different manufacturers, and the one below shows a ‘raw’ Yatsuhashi. We used these images to see whether it is possible to distinguish which Yatsuhashi is which, based on the pixels in each image, and compared the results of RGB and HSI.
First, we used a portion of each image to create the training data, as shown below. Only 460 points in total were extracted from each Yatsuhashi image. The same training data was used for both RGB and HSI to perform inference on the remaining pixels.
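Conceptually, the setup looks like the following sketch: the same training coordinates are reused for both modalities, and only the per-pixel feature vector differs, three values for RGB versus a full near-infrared spectrum for HSI. All shapes and names here are illustrative assumptions, not our actual pipeline:

```python
import numpy as np

def pixels_at(image, coords):
    """Gather per-pixel feature vectors at (row, col) coordinates."""
    return image[coords[:, 0], coords[:, 1], :]

rng = np.random.default_rng(0)
rgb = rng.random((256, 256, 3))      # stand-in RGB image
hsi = rng.random((256, 256, 200))    # stand-in NIR hyperspectral cube

coords = rng.integers(0, 256, size=(460, 2))   # 460 training coordinates

X_rgb = pixels_at(rgb, coords)   # (460, 3)   features per training pixel
X_hsi = pixels_at(hsi, coords)   # (460, 200) features per training pixel
```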
The following are the results of classification with RGB and HSI.
With RGB, misclassification is quite common due to the influence of lighting and other factors, and even the classification of whether a Yatsuhashi was raw or not was not very accurate.
The HSI results, on the other hand, show that HSI is capable of classifying all types of Yatsuhashi. However, misclassifications seem to occur at the edges of the Yatsuhashi. The reason for this is that when the angle between the object and the illumination changes, the spectral components of the reflected light change slightly, which can produce data that differs from the rest of the image.
It would therefore be a good idea either to process such areas separately or to include data near the boundaries in the training set as well.
In this way, HSI technology can be used to perform image classification even in cases that are difficult for the human eye to distinguish. We also found that sparse modeling is an effective way to handle the high-dimensional data of HSI images. To help you become more familiar with the technology, we have decided to release all of our data as open data. If you’re interested, click here to find the HSI data at your disposal. Also, if you have any inquiries concerning HSI, or any new findings concerning this innovative technology, please feel free to reach out to us via the link below. Thank you very much for reading today’s post.