
This is the introduction page for the interpretable machine learning series. Machine learning, or more broadly artificial intelligence, is well known for its powerful predictions. These predictions became possible thanks to computational advances that made high-end algorithms practical. However, because those algorithms are highly complex, humans have struggled to interpret their results. We have tried to understand why the “machines” predict specific outcomes, but this is very hard to do with traditional statistical interpretation, which is why machine learning models have been called “black boxes”.

Today, as the need to interpret machine learning predictions has grown, many academic efforts have been made to make such interpretation possible. Even when the prediction process itself offers no statistical explanation, we can still construct rational and consistent “post hoc” explanations of the predictions. These approaches are called “Interpretable Machine Learning” or “XAI (eXplainable Artificial Intelligence)”.

I will keep writing on this topic in the “Interpretable Machine Learning” category, so if you are interested, please follow the upcoming articles and don’t hesitate to leave feedback (even negative feedback is always welcome!).

Thank you.
