Show simple item record

dc.contributor.author: Gürüler, Hüseyin
dc.contributor.author: Islam, Naveed
dc.contributor.author: Din, Alloud
dc.date.accessioned: 2023-05-24T13:02:09Z
dc.date.available: 2023-05-24T13:02:09Z
dc.date.issued: 2022 [en_US]
dc.identifier.citation: Guruler, H., Islam, N., & Din, A. (2023). Security-based explainable artificial intelligence (XAI) in healthcare system. Explainable Artificial Intelligence in Medical Decision Support Systems, 229. [en_US]
dc.identifier.uri: https://hdl.handle.net/20.500.12809/10712
dc.description.abstract: Explainable Artificial Intelligence (XAI) is one of the most active research areas in Artificial Intelligence (AI). Its main objective is to explain deep learning (DL) models, i.e., to produce models whose behaviour is understandable to humans, including users, developers, and policymakers. XAI is especially important in critical domains such as security and healthcare, where its purpose is to give a clear answer to the question of how a model reached its decision. An explanation is needed before any system decision is acted upon: when a system returns a decision, insight into how the model produced that decision is required. The decision may be positive or negative, but what matters most is knowing which characteristics it is based on. A DL model's decision can only be trusted when its internal structure is understood; since DL models are generally black-box models, explaining a system's internal behaviour is essential for security-related decision-making. Security is crucial in healthcare, as in any other domain. The objective of this research is to support security decisions with XAI, which is a significant challenge, and thereby take security systems to the next level. In medical/healthcare security, when human actions are recognized using transfer learning, one pre-trained model may perform well on a given action while another pre-trained model achieves poor accuracy on the same action. This is the black-box problem: the internal mechanisms of both models must be examined for the same action. Why does one model handle an action well while another handles the same action poorly? A model-specific, post-hoc interpretability approach is needed to reveal the internal structure and characteristics of both models for the same action. [en_US]
dc.item-language.iso: eng [en_US]
dc.publisher: INST ENGINEERING TECH-IET [en_US]
dc.item-rights: info:eu-repo/semantics/closedAccess [en_US]
dc.subject: Security in hospital [en_US]
dc.subject: Artificial Intelligence [en_US]
dc.title: Security-based explainable artificial intelligence (XAI) in healthcare system [en_US]
dc.item-type: bookPart [en_US]
dc.contributor.department: MÜ, Teknoloji Fakültesi, Bilişim Sistemleri Mühendisliği Bölümü [en_US]
dc.contributor.authorID: 0000-0003-1855-1882 [en_US]
dc.contributor.institutionauthor: Gürüler, Hüseyin
dc.identifier.volume: 50 [en_US]
dc.identifier.startpage: 229 [en_US]
dc.identifier.endpage: 257 [en_US]
dc.relation.journal: EXPLAINABLE ARTIFICIAL INTELLIGENCE IN MEDICAL DECISION SUPPORT SYSTEMS [en_US]
dc.relation.publicationcategory: Book Chapter - International [en_US]
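
To make the comparison described in the abstract concrete, the following sketch applies Grad-CAM, a model-specific post-hoc interpretability method, to two pre-trained backbones on the same input frame. It is only an illustration of the idea under stated assumptions: the models (torchvision ImageNet classifiers standing in for the action-recognition backbones mentioned in the abstract), the chosen target layers, and the file name action_frame.jpg are hypothetical and not taken from the chapter.

# Minimal, hypothetical Grad-CAM sketch: two pre-trained backbones classify the
# same frame, and the resulting heatmaps show which regions each model relied on.
# Backbones, target layers, and the input file name are illustrative assumptions.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

def grad_cam(model, target_layer, x):
    """Return the top predicted class and a Grad-CAM heatmap for it."""
    store = {}
    def save(module, inputs, output):
        store["act"] = output                               # feature maps (1, C, H, W)
        output.register_hook(lambda g: store.__setitem__("grad", g))
    handle = target_layer.register_forward_hook(save)
    model.eval()
    logits = model(x)
    cls = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, cls].backward()                               # gradient of the winning class score
    handle.remove()
    act, grad = store["act"], store["grad"]
    weights = grad.mean(dim=(2, 3), keepdim=True)           # per-channel importance
    cam = F.relu((weights * act).sum(dim=1))                # (1, H, W) relevance map
    return cls, cam / (cam.max() + 1e-8)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
# "action_frame.jpg" is a hypothetical input frame, not a file from the chapter.
x = preprocess(Image.open("action_frame.jpg").convert("RGB")).unsqueeze(0)

# Two pre-trained backbones that may disagree on the same input.
resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
mobile = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)

for name, net, layer in [("resnet18", resnet, resnet.layer4[-1]),
                         ("mobilenet_v2", mobile, mobile.features[-1])]:
    cls, cam = grad_cam(net, layer, x)
    print(f"{name}: predicted class {cls}, CAM max {cam.max().item():.3f}")

Comparing where each network's heatmap concentrates for the same frame is one way to investigate why one pre-trained model handles an action well while another does not.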


Files in this item:


There are no files associated with this item.

This item appears in the following Collection(s).
