Explainability in AI Using Fuzzy Inference Systems for the Regression Problem
|
pp. 973-987
|
Author(s)
|
Khaleel Ibrahim Al-Daoud, N. Yogeesh, Suleiman Ibrahim Mohammad, N. Raja, R. Chetana, P. William, Asokan Vasudevan, Nawaf Alshdaifat, Mohammad Faleh Ahmmad Hunitie
|
Abstract
|
Fuzzy Inference Systems (FIS) have gained traction as a key enabler of Explainable AI (XAI). By combining linguistic input variables, output membership functions, and transparent, rule-based reasoning, FIS help mitigate the interpretability challenges facing modern AI systems. A real-world regression case was explored: predicting house prices from location score, house size, and number of bedrooms, which yielded a Mean Absolute Error (MAE) of $10,000 and a Root Mean Squared Error (RMSE) of $14,142.14. In addition, novel evaluation metrics for FIS were proposed, and future directions such as hybrid neuro-fuzzy systems, dynamic rule learning, and Green AI techniques were outlined. Through a comprehensive investigation, this work illustrates how FIS can reconcile interpretability with accuracy and adaptability, making it well suited to transparent, explainable decision-making in sensitive domains such as healthcare, public policy, and autonomous systems. The research highlights the role of FIS in fostering trust and accountability in AI and offers insights into its practical application.
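
The paper's implementation is not reproduced on this page. The following is a minimal, hypothetical sketch of the kind of Mamdani-style FIS the abstract describes, written with the scikit-fuzzy library: the universes, membership functions, rule base, and toy evaluation data are illustrative assumptions, not the authors' configuration, and the reported MAE of $10,000 and RMSE of $14,142.14 come from the paper's own experiments, not from this sketch.

```python
# Minimal sketch (not the authors' code): a Mamdani FIS for house-price
# regression with three linguistic inputs and a defuzzified price output.
# All ranges, terms, rules, and toy data below are illustrative assumptions.
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

# Linguistic input variables over assumed universes of discourse.
location = ctrl.Antecedent(np.arange(0, 10.1, 0.1), 'location_score')
size = ctrl.Antecedent(np.arange(50, 351, 1), 'size_sqm')
bedrooms = ctrl.Antecedent(np.arange(1, 6, 1), 'bedrooms')
# Output membership functions over an assumed price range (USD).
price = ctrl.Consequent(np.arange(50_000, 500_001, 1_000), 'price')

# Triangular membership functions for each linguistic term.
location['poor'] = fuzz.trimf(location.universe, [0, 0, 5])
location['average'] = fuzz.trimf(location.universe, [0, 5, 10])
location['good'] = fuzz.trimf(location.universe, [5, 10, 10])
size['small'] = fuzz.trimf(size.universe, [50, 50, 200])
size['medium'] = fuzz.trimf(size.universe, [50, 200, 350])
size['large'] = fuzz.trimf(size.universe, [200, 350, 350])
bedrooms['few'] = fuzz.trimf(bedrooms.universe, [1, 1, 3])
bedrooms['some'] = fuzz.trimf(bedrooms.universe, [1, 3, 5])
bedrooms['many'] = fuzz.trimf(bedrooms.universe, [3, 5, 5])
price['low'] = fuzz.trimf(price.universe, [50_000, 50_000, 200_000])
price['medium'] = fuzz.trimf(price.universe, [100_000, 250_000, 400_000])
price['high'] = fuzz.trimf(price.universe, [300_000, 500_000, 500_000])

# Transparent rule base: each rule reads as a human-auditable sentence.
rules = [
    ctrl.Rule(location['poor'] & size['small'], price['low']),
    ctrl.Rule(location['average'] | bedrooms['some'], price['medium']),
    ctrl.Rule(location['good'] & size['large'], price['high']),
    ctrl.Rule(location['good'] & bedrooms['many'], price['high']),
]

sim = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))

def predict(loc, sqm, beds):
    """Run one fuzzy inference pass and return the defuzzified price."""
    sim.input['location_score'] = loc
    sim.input['size_sqm'] = sqm
    sim.input['bedrooms'] = beds
    sim.compute()
    return sim.output['price']

# Evaluate with the paper's metrics on toy data (not the paper's dataset).
X = [(8.5, 220, 4), (4.0, 90, 2), (6.5, 150, 3)]
y_true = np.array([380_000, 140_000, 250_000])
y_pred = np.array([predict(*x) for x in X])
mae = np.mean(np.abs(y_true - y_pred))
rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
print(f"MAE = ${mae:,.2f}, RMSE = ${rmse:,.2f}")
```

Because every prediction is traced through explicit rule activations and membership degrees, each output can be explained by inspecting which rules fired and how strongly, which is the interpretability property the abstract emphasizes.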