A research team consisting of Designated Associate Professor Shinnosuke Ishikawa of the Rikkyo University Graduate School of Artificial Intelligence Science and Mamezou Co., Ltd. has announced the development of a new "explainable AI" method that lets users intuitively judge whether an AI's decisions can be trusted.
In recent years, AI technology has advanced remarkably and a wide range of AI services are now available, yet some people hesitate to adopt AI because they cannot tell whether its decisions can really be trusted. Against this background, the research field of "explainable AI," which explains and interprets an AI's decisions to make them transparent, is attracting attention.
A representative approach in explainable AI is to extract the factors that most influenced a prediction, for example, explaining that "high blood pressure" strongly influenced the AI's prediction that "you have a high risk of disease." In contrast, this team took a "data-centric" approach, different from such methods, and developed a new technique that explains an AI's decisions in terms of exactly what data the AI learned from.
For example, to check whether an AI's judgment that "a certain object is a cat" is reliable, this method presents the item from the AI's training data that it judges to be closest to the input, such as the most similar "cat" the AI has seen (learned) so far. The researchers named the method What I Know (WIK), meaning that the AI presents "what I know."
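The idea described above can be sketched as a nearest-neighbor lookup over the model's training data. This is a minimal illustration, not the authors' actual implementation: the feature representation, distance metric, and toy data below are all assumptions made for the example.

```python
import numpy as np

def explain_with_example(query_feat, train_feats, train_labels, predicted_label):
    """Return the index of the training example most similar to the query.

    Only training examples sharing the model's predicted label are searched,
    mirroring the idea of showing "the most similar 'cat' the AI has seen."
    """
    # Restrict the search to training examples of the predicted class.
    candidates = np.where(train_labels == predicted_label)[0]
    # Euclidean distance in feature space (an illustrative choice of metric).
    dists = np.linalg.norm(train_feats[candidates] - query_feat, axis=1)
    return candidates[np.argmin(dists)]

# Toy example: 2-D features, two classes (0 = "cat", 1 = "dog").
train_feats = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
train_labels = np.array([0, 0, 1])
query = np.array([0.9, 1.1])
idx = explain_with_example(query, train_feats, train_labels, predicted_label=0)
print(idx)  # index of the most similar training "cat": 1
```

A user would then inspect the returned training example: if it genuinely resembles the input, the model's decision is more plausibly grounded in what it has learned.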
When the team tested WIK's effectiveness on the classification of Earth-observation satellite images, WIK was able to present training images similar to the satellite images being identified. In other words, the AI had learned enough to identify such images, and its judgments are likely to be reliable.
The research team believes WIK can be applied immediately to many social issues and plans to promote its use while continuing to verify its effectiveness.
Paper information: [International Journal of Applied Earth Observation and Geoinformation] Example-based explainable AI and its application for remote sensing image classification