dc.contributor.author | Gürüler, Hüseyin | |
dc.contributor.author | Islam, Naveed | |
dc.contributor.author | Din, Alloud | |
dc.date.accessioned | 2023-05-24T13:02:09Z | |
dc.date.available | 2023-05-24T13:02:09Z | |
dc.date.issued | 2022 | en_US |
dc.identifier.citation | Guruler, H., Islam, N., & Din, A. (2023). Security-based explainable artificial intelligence (XAI) in healthcare system. Explainable Artificial Intelligence in Medical Decision Support Systems, 229. | en_US |
dc.identifier.uri | https://hdl.handle.net/20.500.12809/10712 | |
dc.description.abstract | Explainable Artificial Intelligence (XAI) is one of the most active research areas within Artificial Intelligence (AI). Its main objective is to explain deep learning (DL) models, i.e., to make artificial models understandable to humans, including users, developers, and policymakers. XAI is especially important in critical domains such as security and healthcare. Its purpose is to provide a clear answer to the question of how a model reached its decision. Such an explanation matters before any system decision is acted upon: if a system issues a decision, we need insight into the model's reasoning behind it. Whether the decision is positive or negative, it is more important to know which characteristics it was based on. A DL model's decision can be trusted once the model's internal structure is understood; in general, however, DL models are black boxes, so for security purposes it is necessary to explain a system's internal workings for any decision it makes. Security is crucial in healthcare as in any other domain. The objective of this research is to provide security decisions grounded in XAI, which is a significant challenge, and thereby to take security systems to the next level. In medical/healthcare security, when human actions are recognized using transfer learning, one pre-trained model may perform well on an action while another pre-trained model achieves poor accuracy on the same action. This is the black-box model problem: the internal mechanisms of both models must be examined for the same action. Why does one model handle an action well while another does not? A model-specific, post-hoc interpretability approach is needed to reveal the internal structure and characteristics of both models for the same action (an illustrative sketch follows this record). | en_US |
dc.item-language.iso | eng | en_US |
dc.publisher | Institution of Engineering and Technology (IET) | en_US |
dc.item-rights | info:eu-repo/semantics/closedAccess | en_US |
dc.subject | Security in hospitals | en_US |
dc.subject | Artificial Intelligence | en_US |
dc.title | Security-based explainable artificial intelligence (XAI) in healthcare system | en_US |
dc.item-type | bookPart | en_US |
dc.contributor.department | MÜ, Faculty of Technology, Department of Information Systems Engineering | en_US |
dc.contributor.authorID | 0000-0003-1855-1882 | en_US |
dc.contributor.institutionauthor | Gürüler, Hüseyin | |
dc.identifier.volume | 50 | en_US |
dc.identifier.startpage | 229 | en_US |
dc.identifier.endpage | 257 | en_US |
dc.relation.journal | Explainable Artificial Intelligence in Medical Decision Support Systems | en_US |
dc.relation.publicationcategory | Book Chapter - International | en_US |
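
The scenario in the abstract, two pre-trained models disagreeing on the same action and being examined with a model-specific post-hoc method, can be made concrete. Below is a minimal sketch, not the authors' implementation: it assumes PyTorch/torchvision, uses two ImageNet-pretrained classifiers as stand-ins for the action-recognition models, a random tensor as a stand-in for a preprocessed action frame, and plain input-gradient saliency as a stand-in for the post-hoc interpretability method, which the abstract does not name.

```python
# Illustrative sketch only (assumed setup, not the chapter's code):
# compare how two pre-trained backbones explain their decision on the
# same input via simple gradient saliency.
import torch
from torchvision import models

# Placeholder standing in for a preprocessed action frame.
frame = torch.rand(1, 3, 224, 224, requires_grad=True)

backbones = {
    "resnet50": models.resnet50(weights="IMAGENET1K_V1"),
    "mobilenet_v3": models.mobilenet_v3_large(weights="IMAGENET1K_V1"),
}

for name, net in backbones.items():
    net.eval()
    if frame.grad is not None:
        frame.grad.zero_()          # clear gradients left by the previous model
    logits = net(frame)
    top = logits.argmax(dim=1).item()
    logits[0, top].backward()       # gradient of the winning logit w.r.t. pixels
    # Per-pixel saliency: which input regions drove this model's decision.
    saliency = frame.grad.abs().max(dim=1).values
    print(f"{name}: top class {top}, saliency mass {saliency.sum().item():.2f}")
```

Comparing where the two saliency maps concentrate is one way to probe why one backbone recognizes an action well while the other does not, which is the model-specific post-hoc question the abstract raises.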