Why Is Deep Learning Difficult To Use Practically?

Hello everyone, my name is Kenshin Fujiwara and I am the CEO and founder of HACARUS Inc.

In this series of blogs, I will discuss a wide range of AI topics, from the history of AI to practical tips for running successful AI projects. I hope my blog posts will help you gain a better understanding of AI and solve your business issues.

Today, I will introduce several challenges that make deep learning difficult to use in business practice, despite its breakthrough capabilities. Deep learning is driving the current AI boom; at the same time, it is hard to apply in the world of business. There are three main challenges that make deep learning difficult to use in practice. I will cover these reasons in depth later, but today we can take a look at a short overview.

Massive Amount of Data for Deep Learning

The first issue is the volume of data that deep learning requires. To illustrate this point, let's look at "Google's Cat," one of the most famous case studies of deep learning. In this case, an AI was able to identify images containing cats. The amount of data used for training was enormous: approximately 10 million images extracted from YouTube videos.

While this is a major limitation, many individuals and organizations are still looking to implement deep learning in their businesses. One common application is the visual inspection of products on a production line. Deep learning achieves inspection accuracy by learning from data on "good" and "defective" products. The problem is that in a high-quality factory with a high yield rate, it can be difficult to collect enough data on defective products.

If the data is imbalanced between defective and good samples, it becomes difficult to improve the accuracy of the AI. This is the first high hurdle in the data collection aspect of building an AI project with deep learning.
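To see why imbalance is such a hurdle, consider a minimal sketch (with simulated numbers, not real factory data): at a 99% yield rate, a "model" that simply labels every product as good still scores 99% accuracy while catching zero defects. The `evaluate` helper below is hypothetical, written only for this illustration.

```python
def evaluate(labels, predictions):
    """Return (accuracy, defect_recall) for binary labels:
    1 = defective, 0 = good."""
    correct = sum(1 for y, p in zip(labels, predictions) if y == p)
    defects = [i for i, y in enumerate(labels) if y == 1]
    caught = sum(1 for i in defects if predictions[i] == 1)
    accuracy = correct / len(labels)
    recall = caught / len(defects) if defects else 0.0
    return accuracy, recall

# Simulated inspection data: 990 good products, only 10 defective ones.
labels = [0] * 990 + [1] * 10

# A naive classifier that always predicts "good".
always_good = [0] * 1000

accuracy, recall = evaluate(labels, always_good)
print(f"accuracy = {accuracy:.1%}, defect recall = {recall:.1%}")
# → accuracy = 99.0%, defect recall = 0.0%
```

High overall accuracy can therefore hide a model that never detects the defects it was built to find, which is exactly why scarce defective samples make it hard to judge, let alone improve, a deep learning inspection system.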

Deep Learning is a Black Box

When using a deep learning system, you are given outputs that represent certain predictions or decisions. However, deep learning methods do not offer any reasons as to why a given decision was made. Like a black box, only the results are presented.

For simple cases, like the cat photo example above, this black-boxing is not a problem. However, when complex processes are performed on large data sets at high speed, it becomes difficult to understand what criteria are being used to make a decision.
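A minimal sketch can show where the opacity comes from. The tiny network below uses made-up weights, but the structure is the point: the prediction is just chained arithmetic over learned numbers, and no individual weight maps to a human-readable reason. A real deep network works the same way with millions of such parameters.

```python
import math

def sigmoid(x):
    """Squash a value into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical learned parameters of a 3-input, 2-hidden-unit network.
W1 = [[0.8, -1.2, 0.3], [-0.5, 0.9, 1.1]]   # input -> hidden weights
b1 = [0.1, -0.2]                             # hidden biases
W2 = [1.4, -0.7]                             # hidden -> output weights
b2 = 0.05                                    # output bias

def predict(features):
    """Forward pass: weighted sums and sigmoids, nothing more."""
    hidden = [sigmoid(sum(w * x for w, x in zip(row, features)) + b)
              for row, b in zip(W1, b1)]
    return sigmoid(sum(w * h for w, h in zip(W2, hidden)) + b2)

score = predict([0.6, 0.2, 0.9])
print(f"predicted probability: {score:.2f}")
# The number is well-defined, but asking "why this score?" only leads
# back to W1, b1, W2, b2 -- there is no decision rule to point at.
```

The output is a perfectly precise probability, yet tracing it back yields only arithmetic over weights, which is what makes the reasoning behind a deep learning decision so hard to explain.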

Black-box issues are a major hurdle for introducing AI into complex areas like the medical field. Consider the case where an AI reads a patient's data and determines that the likelihood of disease is "x" percent. If no reason is given as to why such a judgment was made, it is natural for doubts to arise about the prediction.

This issue applies not only to the medical field but also to the wider world of business. As another hypothetical example, imagine an AI system that analyzes investment decisions. When a client asks whether or not to proceed with an investment project, they won't be satisfied with just the conclusion to invest. The client will most likely want to know the details and the rationale behind the decision.

On the company side, this can also cause problems. If a decision made by the AI causes financial damage to the business, executives will want to make corrections or understand what went wrong. However, since the decision criteria are black-boxed, it may be difficult to know what to fix. This is another reason why the adoption and utilization of AI projects have not progressed.

Unsustainable Consumption of Computational Resources

The final issue with deep learning is the problem of computational resources. Earlier, I explained that the rapid evolution of deep learning research was driven by the dramatic increase in computer processing power. Another way of looking at it is that the performance of AI depends on the capacity and abundance of computational resources.

Currently, deep learning is mainly carried out by running large numbers of GPUs to perform high-speed computations. High-performance GPUs, however, have the negative side effects of generating a lot of heat and consuming a lot of power. Cooling these chips requires a large amount of additional electricity.

A research paper published by the University of Massachusetts reported that the energy expended in training a large AI model for machine learning tasks is tremendous. The resulting carbon emissions were estimated to be roughly five times those of an average U.S. passenger car over its entire lifespan, from production to scrapping. Beyond AI itself, from the viewpoint of the SDGs, the amount of energy expended on computational processing is a serious problem.

In the next blog, I will explore how to implement AI into existing systems and discuss some barriers that prevent the smooth adoption of AI in the business field. To keep your knowledge of AI technology up to date, subscribe to our newsletter or visit the HACARUS website at https://hacarus.com.
