How can we prevent artificial intelligence from becoming prejudiced?



From Google search and online shopping to data analysis and fraud detection, artificial intelligence (AI) is used in many aspects of our lives. AI improves the accuracy of its analysis by repeatedly training on data prepared by humans, but if the training data is insufficient or contains biased examples, repeated learning can deepen that bias instead. Bill Simpson-Young, chief executive of the AI research institute Gradient Institute, has explained how to identify the causes of this so-called algorithmic bias and how to mitigate the problem.

Artificial intelligence can deepen social inequality. Here are 5 ways to help prevent this
https://theconversation.com/artificial-intelligence-can-deepen-social-inequality-here-are-5-ways-to-help-prevent-this-152226

One of the causes of algorithmic bias cited by Bill Simpson-Young and colleagues is improper system design. For example, an AI that a bank uses to decide whom to lend to is usually trained on a dataset of the bank's past lending decisions. The AI examines a new applicant's financial history and employment history, compares them against the historical data, and tries to predict whether the applicant will be able to repay the loan. However, if the historical data contains cases in which an employee rejected a loan because of their own prejudice, the AI will learn that pattern without recognizing it as prejudice and may wrongly reject applicants. 'Prejudice' here refers to bias based on age, gender, race and so on, and even forms of prejudice that are rarely seen today can still influence an AI through old data.
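The lending example can be made concrete with a minimal sketch. The column names, data, and model below are invented for illustration and are not taken from the article; the point is only that a model fitted to past human decisions reproduces whatever pattern those decisions contain, including prejudiced rejections.

```python
# Hypothetical sketch: training a loan-approval model on historical
# decisions. Columns and values are invented for illustration.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Past applicants and the decision a human loan officer made at the time
# (approved = 1, rejected = 0). Some rejections may reflect prejudice.
history = pd.DataFrame({
    "income":         [52_000, 48_000, 61_000, 45_000, 58_000, 47_000],
    "years_employed": [5, 3, 8, 4, 6, 3],
    "approved":       [1, 0, 1, 0, 1, 0],
})

# The model learns to reproduce the historical decisions. If any of those
# rejections were driven by prejudice, that pattern is copied into the
# model even when no protected attribute appears among the features.
model = LogisticRegression()
model.fit(history[["income", "years_employed"]], history["approved"])

# Scoring a new applicant simply replays the learned pattern.
new_applicant = pd.DataFrame({"income": [50_000], "years_employed": [4]})
print(model.predict_proba(new_applicant)[0, 1])  # estimated approval score
```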



These algorithmic biases also pose significant risks to the bank itself. One is the risk of wrongly rejected applicants taking their business to competitors; the other is that, if the same biased decisions are repeated, the pattern of prejudice will eventually become apparent to the government and consumers, leading to legal proceedings.

According to Bill Simpson-Young and colleagues, there are five ways to correct algorithmic bias:

◆ 1: Get better data
Acquire information that has not been collected before. In particular, obtain additional data on minority groups and on people for whom the model's results are inaccurate.

◆ 2: Modify the dataset
Remove, or avoid displaying, information that could lead to discrimination, such as age, gender, or race.

◆ 3: Make the model more complex
A simple AI model is easy to analyze and interpret, but it tends to be less accurate and may base its decisions on the majority at the expense of minorities.

◆ 4: Change the system
The parameters that control the AI system can be adjusted in advance. For example, setting different decision thresholds for minority groups can offset algorithmic bias; a minimal sketch of this idea follows the list below.

◆ 5: Change the prediction model
Choosing a well-designed prediction model can help reduce algorithmic bias.
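
As a rough illustration of method 4, the sketch below applies different decision thresholds per group. The threshold values and group names are invented assumptions, not figures from the article; in practice they would be chosen by analyzing where the model's scores are systematically skewed.

```python
# Hypothetical sketch of method 4: per-group decision thresholds chosen to
# offset a known skew in model scores. Thresholds and group names are
# invented for illustration.

def approve(score: float, group: str) -> bool:
    """Return True if an applicant's model score clears the threshold set for their group."""
    thresholds = {
        "majority": 0.60,  # standard cut-off
        "minority": 0.55,  # lowered cut-off to compensate for systematically lower scores
    }
    return score >= thresholds.get(group, 0.60)

# The same score can lead to different outcomes depending on which
# threshold applies, cancelling out part of the bias in the scores.
print(approve(0.57, "majority"))  # False
print(approve(0.57, "minority"))  # True
```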

Bill Simpson-Young and colleagues argue that governments and businesses that want to adopt AI-based decision-making must take the general principles of fairness and human rights into account, and must carefully design and monitor their systems so that algorithmic bias does not produce inappropriate results. Now that AI decision-making is becoming commonplace, they conclude, we should use it not only to improve productivity but also to build a more equitable society.


