Why HACARUS? – Benefits of Sparse Modeling

At HACARUS, our proprietary Sparse Modeling-based AI offers our customers highly accurate predictions by taking advantage of three main strengths:

POINT 1: Sparse Modeling technology does not require large amounts of data for training

Big data-free sparse modeling technology

In a world where AI is growing increasingly popular, Deep Learning is arguably the most well-known AI technology. However, Deep Learning is also associated with the need for large amounts of data to obtain accurate results. This has made AI adoption difficult for many companies that are unable to collect sufficient data. With our award-winning AI technology, there is no need to worry about data size.

POINT 2: AI that can provide explainable solutions

AI that can explain its 'reasons for reaching that conclusion'

Whether in business or medicine, it is essential to understand the decision-making process an AI model follows to make a prediction. With Deep Learning, results are hidden behind a black-box neural network, making it very difficult to determine why certain conclusions were reached. Sparse Modeling provides feedback explaining the results and the reasoning behind its solutions, giving it an edge over conventional AI methods.

POINT 3: A lightweight design that offers high speeds and low power consumption in a variety of environments

High speed and low power consumption, with support for various execution environments

Systems that require large amounts of data also require high-spec hardware, such as Graphics Processing Units (GPUs), which consume a lot of power and add processing time. Thanks to its lightweight design and low processing loads, our Sparse Modeling technology can be integrated into our customers' existing systems. In addition, we are experienced in providing both cloud-based and on-site solutions tailored to each customer's needs.

Introducing LASSO - a core enabler for Sparse Modeling

LASSO, a key algorithm behind Sparse Modeling, was first proposed around 25 years ago by Professor Robert Tibshirani of Stanford University, and it quickly became widely recognized in the field of data science.
Since then, research surrounding LASSO has continued in academia, and it has also been put into practice in business. At HACARUS, we have been focusing on joint development with research institutions as well as application development with our business partners.
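As a minimal illustration of what LASSO does, the sketch below uses scikit-learn's `Lasso` (an off-the-shelf implementation, not HACARUS's proprietary technology) on synthetic data where only 3 of 20 features carry signal. The L1 penalty drives the uninformative coefficients to exactly zero, which is what makes the resulting model both data-efficient and interpretable:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_samples, n_features = 50, 20
X = rng.normal(size=(n_samples, n_features))

# Ground truth: only 3 of the 20 features actually matter.
true_coef = np.zeros(n_features)
true_coef[[0, 5, 10]] = [3.0, -2.0, 1.5]
y = X @ true_coef + 0.1 * rng.normal(size=n_samples)

# The L1 penalty (alpha) shrinks irrelevant coefficients to exactly zero.
model = Lasso(alpha=0.1).fit(X, y)
print("non-zero coefficients:", np.count_nonzero(model.coef_))
```

The fitted model keeps only a handful of non-zero coefficients, so a reader can see at a glance which inputs drove the prediction.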

Year  Publication
1995  Sparse Coding of Natural Images Produces Localized, Oriented, Bandpass Receptive Fields (B. Olshausen and D. Field)
1996  Regression Shrinkage and Selection via the Lasso (R. Tibshirani)
2006  Compressed Sensing (D. L. Donoho)
2018  Multi-Layer Convolutional Sparse Modeling: Pursuit and Dictionary Learning (J. Sulam et al.)

Cases Where Sparse Modeling Surpassed Deep Learning

When comparing our Sparse Modeling-based approach with a classifier (SVM) and Deep Learning (CNN) for detecting defects on solar cells, Sparse Modeling outperformed both. Not only was accuracy higher, but the AI model was also trained faster, using a far smaller dataset.

Performance Comparison for Solar Cell Defect Detection

                 SVM         CNN         Sparse Modeling
Learning Volume  800 models  800 models  60 models
Learning Time    30 minutes  5 hours     19 seconds
Computing Time   8 minutes   20 seconds  10 seconds
Accuracy         85%         86%         90%

SVM = Support Vector Machine; CNN = Convolutional Neural Network (Deep Learning)

Advantages of Sparse Modeling (Feature Comparison with Deep Learning)

Category       | Sub-Category              | Deep Learning                                               | Sparse Modeling
Learning       | Essential Data            | Needs a large amount of data                                | A small amount of data is sufficient
Learning       | Learning Time             | Long learning times due to the large amount of data analyzed | Quick learning times due to the small data size
Learning       | Computing Time            | Normal (lag due to communicating with cloud servers)        | Quick (can operate on edge systems for fast computing)
Operation      | Operating Environment     | Cloud & GPU                                                 | Cloud & GPU (CPU chips are acceptable too)
Operation      | Hardware Cost             | High                                                        | Low
Operation      | Communication Cost        | Necessary due to cloud computing                            | No communication costs
Operation      | Information Security      | Possible issues due to communication with cloud services    | No risk since all information is processed on the edge device
Explainability | Explainability of Results | Poor (black box)                                            | Easy-to-understand results
Explainability | Preparation               | Little preparation required if a ready-made model is available | Advance preparation is needed

Application Example

Estimating Trends from Data Damaged by Missing Data and Noise

The figure below shows the estimated trend (green line) from the observed data (x).
It can be seen that the trend is accurately estimated even when up to 80% of the data is missing.

Normal data set
Data set with 50% missing data
Data set with 80% missing data

Blue: True trend
Red: Observed data (including noise)
Green: Trend estimated from the observed data
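HACARUS's exact estimation method is not described here, but the principle behind the figure can be sketched: because the trend is captured by only a few basis coefficients, even the 20% of points that survive still over-determine those coefficients. The illustrative example below (a simple least-squares fit on a small trigonometric basis, not a production algorithm) recovers a trend after discarding 80% of the noisy samples:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 200)
true_trend = np.sin(2 * np.pi * t)                      # the "blue" curve
observed = true_trend + 0.2 * rng.normal(size=t.size)   # the noisy "red" samples

# Simulate 80% missing data: keep only about 20% of the samples.
mask = rng.random(t.size) < 0.2
t_obs, y_obs = t[mask], observed[mask]

# A small basis is enough to describe the trend, so the surviving
# points still over-determine its few coefficients.
def basis(x):
    return np.column_stack(
        [np.ones_like(x), np.sin(2 * np.pi * x), np.cos(2 * np.pi * x)]
    )

coef, *_ = np.linalg.lstsq(basis(t_obs), y_obs, rcond=None)
estimate = basis(t) @ coef                              # the "green" estimate

print("max abs error:", np.max(np.abs(estimate - true_trend)))
```

Despite the heavy data loss, the recovered curve stays close to the true trend, which mirrors the behavior shown in the 80%-missing panel above.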

Application Example

Defect Completion & Super-Resolution

Image Restoration

The figure below shows an example of an original image (a), the image with 50% of the data missing (b), and the restored image (c).
The image is not a perfect restoration, but the quality is high enough that the original image contents can be inferred.

Original image (a)
Image with 50% missing data (b)
Restored image (c)
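The restoration above relies on the fact that images are sparse in a suitable dictionary (for example, the DCT basis). As a simplified 1-D sketch of the same idea (standing in for a single image row; this is not HACARUS's actual pipeline), the example below drops 50% of a DCT-sparse signal's samples and recovers it with scikit-learn's `Lasso`:

```python
import numpy as np
from scipy.fft import idct
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
n = 128
# A 1-D stand-in for an image row: sparse in the DCT domain (5 active atoms).
coef_true = np.zeros(n)
coef_true[rng.choice(n, size=5, replace=False)] = 5.0 * rng.normal(size=5)
D = idct(np.eye(n), axis=0, norm="ortho")   # columns are DCT basis atoms
signal = D @ coef_true

# Drop about 50% of the samples, as in panel (b).
mask = rng.random(n) < 0.5
A, y = D[mask], signal[mask]

# Recover the sparse coefficients from the surviving half, then
# resynthesize the full signal, as in panel (c).
lasso = Lasso(alpha=0.01, fit_intercept=False, max_iter=100_000).fit(A, y)
restored = D @ lasso.coef_

rel_err = np.linalg.norm(restored - signal) / np.linalg.norm(signal)
print("relative reconstruction error:", rel_err)
```

Because far fewer coefficients are active than samples survive, the missing half of the signal can be reconstructed to high fidelity, just as the restored image in panel (c) remains recognizable.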

Super-Resolution Image Processing

The figure below shows an example of an original image (a), the image with 50% degraded resolution (b), and a high-resolution processed image (c).

Original image (a)
Image with 50% degraded resolution (b)
High-resolution image restored from image (b) (c)

Potential Fields for Sparse Modeling Applications

  • Construction
  • Manufacturing
  • Energy
  • Transportation
  • Wholesale & Retail
  • Finance / Insurance
  • Restaurant / Lodging
  • Healthcare
  • Welfare
  • Service
Sparse Modeling Captures Black Holes

In April 2019, news of the first-ever image of a black hole made its way across the world. However, that striking jet-black circular image is not actually an optical photograph. It is an image reconstructed by AI from the captured observation data, using the same sparse modeling technology that we use at HACARUS.

Image credit: Event Horizon Telescope Collaboration


Introduction to Sparse Modeling for IT Engineers (a series co-written by members of HACARUS)

Part 1 Challenges in machine learning projects and background of growing expectations for sparse modeling https://codezine.jp/article/detail/10957
Part 2 Why was sparse modeling born? Emergence of the representative algorithm "LASSO" https://codezine.jp/article/detail/11148
Part 3 Evaluating sparse modeling models - How to evaluate LASSO estimates https://codezine.jp/article/detail/11593
Part 4 Application of Sparse Modeling to Image Processing - Image Reconstruction and Denoising https://codezine.jp/article/detail/11823
Part 5 Advanced Applications of Sparse Modeling to Image Processing - Missing Interpolation, Anomaly Detection, Super Resolution https://codezine.jp/article/detail/12433
Part 6 Advanced Sparse Modeling - HMLasso and Pliable Lasso - https://codezine.jp/article/detail/12662
Masayuki Ozeki: "Sparse Modeling - You Can Start Today"
