AI Strategies for Japanese Companies to Compete Globally (Part 3)

Hello everyone, my name is Kenshin Fujiwara, and I am the CEO and founder of HACARUS Inc.

Through this series of blog posts, I discuss a wide range of AI topics, from the history of AI to practical tips for running successful AI projects. I hope these posts will help you better understand AI and solve your business issues.

In the last blog, we discussed differences in the use of AI between Japan and the rest of the world. Today, we will continue that discussion by looking at why accountability is so important in AI systems.

Accountability is Important Regardless of the Industry

In my previous discussions, I mentioned the black-box problem and the issue of interpretability numerous times. To recap: in the case of deep learning, the AI provides no evidence for why it reached a particular decision or analysis.
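To make the contrast concrete, here is a minimal sketch in Python using scikit-learn and synthetic data (my own illustrative example, not a HACARUS implementation). A neural network can score well while offering no per-feature rationale, whereas a simple linear model states exactly how each feature moves the decision:

```python
# A minimal sketch of the interpretability gap, using scikit-learn
# and synthetic data (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "age", "noise"]
X = rng.normal(size=(500, 4))
# Ground truth depends only on income and debt_ratio.
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# Black-box model: accurate, but offers no direct explanation.
black_box = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000).fit(X, y)
print("MLP accuracy:", black_box.score(X, y))  # no per-feature rationale

# Interpretable model: each coefficient states how a feature moves the decision.
linear = LogisticRegression().fit(X, y)
for name, coef in zip(feature_names, linear.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```

This is, of course, a simplification, but the asymmetry it shows is the heart of the black-box problem.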

As AI becomes more prevalent in society and the business world, opinions that emphasize interpretability are also becoming more prominent. Interpretability becomes increasingly important for applications where a human makes the final decision based on AI output. This includes fields like medical care, loan approval, employee hiring, and management. 

Along with the call for increased interpretability, there is also a deepening debate about what the relationship between humans and AI should be. As AI evolves and approaches human capabilities, we will increasingly face the question of how to deal with it.

In this context, ethical issues will be of increasing interest as well. Next, I would like to introduce some events that epitomize these issues and show how unjustified biases can affect the judgment of AI systems.

Example 1: Gender Discrimination in Resume Screening

In October 2018, Reuters reported that Amazon, an American company, had been developing an AI system since 2014 to evaluate job applications. However, the project was halted after it was discovered that the AI was making decisions that discriminated against women.

For this project, the AI was designed to rate each resume on a scale of one to five stars. Interestingly, this is similar to how shoppers rate products on Amazon's e-commerce site. The AI was trained on a large number of resumes collected over the previous 10 years, with the goal of identifying competent candidates.

However, the majority of those resumes came from men. As a result, the AI would regularly give a resume a lower score if it contained the word "women's." For example, a resume with the title "women's chess club captain" would receive a lower rating. It was also reported that the AI downgraded applicants who had graduated from two all-women's colleges.

The AI may well have produced accurate assessments of candidates' abilities. However, the incident exposed a fundamental problem: an AI trained on biased data will make biased judgments.
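To see how easily this happens, consider the following toy sketch, built on hypothetical synthetic data (not Amazon's actual system or data). Past hiring decisions penalize a token that has nothing to do with skill, and a model trained on those labels dutifully learns the same penalty:

```python
# A toy reconstruction showing how historically biased labels teach a model
# to penalize an irrelevant token (synthetic data, illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
skill = rng.normal(size=n)                 # the only legitimate signal
womens_token = rng.integers(0, 2, size=n)  # e.g., "women's chess club" on resume

# Biased historical labels: past reviewers rarely hired when the token was present.
hired = ((skill > 0) & ((womens_token == 0) | (rng.random(n) < 0.3))).astype(int)

X = np.column_stack([skill, womens_token])
model = LogisticRegression().fit(X, hired)
print("coef for skill:         %+.2f" % model.coef_[0][0])
print("coef for women's token: %+.2f" % model.coef_[0][1])  # learned negative weight
```

The model never sees gender directly; it simply reproduces the pattern baked into its training labels.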

Example 2: Racism – Learning Hate Speech and Praising Nazis

Back in 2016, Microsoft developed a chatbot named Tay that learned, and got smarter, the more it interacted with humans. However, Tay shockingly began making statements that glorified the Nazis.

As a chatbot, Tay targeted 18- to 24-year-olds and engaged them in casual conversation on social networking sites. In less than an hour, Tay had amassed 50,000 followers and tweeted about 10,000 times.

However, Tay was exposed to both positive and negative conversations, and these shaped its speech patterns. From these interactions, Tay began making posts that glorified the Nazis and made discriminatory remarks about gender.

Just hours after the launch, Microsoft released a statement announcing that multiple users had coordinated to train Tay to make inappropriate comments. In the end, Tay was shut down because of the controversy.
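For readers who want intuition for how such a failure can occur, here is a deliberately naive toy bot (my own sketch, not Microsoft's actual design). It stores every user utterance unvetted and may replay any of them later, so coordinated users can "teach" it whatever they like:

```python
# A toy sketch of why unfiltered online learning is risky: a bot that
# memorizes every user utterance and replays it as a future reply.
import random

class NaiveEchoBot:
    def __init__(self):
        self.memory = ["Hello!"]  # seed response

    def chat(self, user_message: str) -> str:
        self.memory.append(user_message)  # learns everything, unvetted
        return random.choice(self.memory) # any past input can resurface

bot = NaiveEchoBot()
bot.chat("AI ethics matter.")
bot.chat("<coordinated abusive message>")  # a malicious user "teaches" the bot
print(bot.chat("What do you think?"))      # may repeat the abusive content
```

A real system would need content filtering and moderation before anything learned from users is echoed back; Tay's story shows what happens when those safeguards are insufficient.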

Besides these cases, there have been other reported AI incidents, though many are known only through media reports and their credibility cannot be confirmed. Nevertheless, Tay has shown us that an AI can behave maliciously if it is trained incorrectly.

In the next blog, I will discuss the current progress and future responses regarding AI ethics. For the latest knowledge and insights about AI technology, subscribe to our newsletter or visit the HACARUS website at https://hacarus.com.
