Explainable AI : interpreting, explaining and visualizing deep learning

Author: Wojciech Samek; Grégoire Montavon; Andrea Vedaldi; Lars Kai Hansen; Klaus-Robert Müller
Publisher: Cham : Springer, 2019.
Series: Lecture notes in computer science, 11700. Lecture notes in artificial intelligence. LNCS sublibrary, SL 7: Artificial intelligence.
Edition/Format: eBook : Document : Conference publication : English
Subjects: Machine learning; Artificial intelligence
Details

Genre/Form: Electronic books
Additional Physical Format: Printed edition:
Material Type: Conference publication, Document, Internet resource
Document Type: Internet Resource, Computer File
All Authors / Contributors: Wojciech Samek; Grégoire Montavon; Andrea Vedaldi; Lars Kai Hansen; Klaus-Robert Müller
ISBN: 9783030289546, 3030289540, 3030289532, 9783030289539, 9783030289553, 3030289559
OCLC Number: 1120722055
Description: 1 online resource (xi, 439 pages) : illustrations (some color).
Contents: Towards Explainable Artificial Intelligence --
Transparency: Motivations and Challenges --
Interpretability in Intelligent Systems: A New Concept? --
Understanding Neural Networks via Feature Visualization: A Survey --
Interpretable Text-to-Image Synthesis with Hierarchical Semantic Layout Generation --
Unsupervised Discrete Representation Learning --
Towards Reverse-Engineering Black-Box Neural Networks --
Explanations for Attributing Deep Neural Network Predictions --
Gradient-Based Attribution Methods --
Layer-Wise Relevance Propagation: An Overview --
Explaining and Interpreting LSTMs --
Comparing the Interpretability of Deep Networks via Network Dissection --
Gradient-Based vs. Propagation-Based Explanations: An Axiomatic Comparison --
The (Un)reliability of Saliency Methods --
Visual Scene Understanding for Autonomous Driving Using Semantic Segmentation --
Understanding Patch-Based Learning of Video Data by Explaining Predictions --
Quantum-Chemical Insights from Interpretable Atomistic Neural Networks --
Interpretable Deep Learning in Drug Discovery --
Neural Hydrology: Interpreting LSTMs in Hydrology --
Feature Fallacy: Complications with Interpreting Linear Decoding Weights in fMRI --
Current Advances in Neural Decoding --
Software and Application Patterns for Explanation Methods.
Series Title: Lecture notes in computer science, 11700. Lecture notes in artificial intelligence. LNCS sublibrary, SL 7: Artificial intelligence.
Responsibility: Wojciech Samek, Grégoire Montavon, Andrea Vedaldi, Lars Kai Hansen, Klaus-Robert Müller (eds.).

Abstract:

The development of "intelligent" systems that can take decisions and perform autonomously might lead to faster and more consistent decisions. A limiting factor for a broader adoption of AI technology is the inherent risk that comes with giving up human control and oversight to "intelligent" machines. For sensitive tasks involving critical infrastructures and affecting human well-being or health, it is crucial to limit the possibility of improper, non-robust and unsafe decisions and actions. Before deploying an AI system, we see a strong need to validate its behavior, and thus establish guarantees that it will continue to perform as expected when deployed in a real-world environment. In pursuit of that objective, ways for humans to verify the agreement between the AI decision structure and their own ground-truth knowledge have been explored. Explainable AI (XAI) has developed as a subfield of AI focused on exposing complex AI models to humans in a systematic and interpretable manner. The 22 chapters included in this book provide a timely snapshot of algorithms, theory, and applications of interpretable and explainable AI techniques that have been proposed recently, reflecting the current discourse in this field and providing directions for future development. The book is organized in six parts: towards AI transparency; methods for interpreting AI systems; explaining the decisions of AI systems; evaluating interpretability and explanations; applications of explainable AI; and software for explainable AI.
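The gradient-based attribution methods covered in the book score each input feature by the sensitivity of the model's output to that feature. As a minimal sketch (not taken from the book), the idea can be shown on a hypothetical toy logistic-regression model, where the gradient is available in closed form:

```python
import math

# Toy model: f(x) = sigmoid(w . x).  Gradient-based attribution assigns
# each input feature the score |df/dx_i|, which for this model equals
# f(x) * (1 - f(x)) * |w_i|.  Weights below are assumed for illustration.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)))

def gradient_attribution(w, x):
    f = predict(w, x)
    return [abs(f * (1.0 - f) * wi) for wi in w]

w = [2.0, -0.5, 0.0]   # feature 0 dominates, feature 2 is ignored by the model
x = [1.0, 1.0, 1.0]
scores = gradient_attribution(w, x)
```

For deep networks the same principle applies, but the gradient is obtained by backpropagation rather than a closed-form expression; methods such as layer-wise relevance propagation (also surveyed in the book) redistribute the output differently but produce feature scores of the same shape.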



Linked Data


Primary Entity

<http://www.worldcat.org/oclc/1120722055> # Explainable AI : interpreting, explaining and visualizing deep learning
    a schema:CreativeWork, schema:Book, schema:MediaObject ;
    library:oclcnum "1120722055" ;
    library:placeOfPublication <http://id.loc.gov/vocabulary/countries/sz> ;
    schema:about <http://experiment.worldcat.org/entity/work/data/9544006752#Topic/machine_learning> ; # Machine learning
    schema:about <http://dewey.info/class/006.3/e23/> ;
    schema:about <http://experiment.worldcat.org/entity/work/data/9544006752#Topic/artificial_intelligence> ; # Artificial intelligence
    schema:bookFormat schema:EBook ;
    schema:datePublished "2019" ;
    schema:editor <http://experiment.worldcat.org/entity/work/data/9544006752#Person/muller_klaus_robert> ; # Klaus-Robert Müller
    schema:editor <http://experiment.worldcat.org/entity/work/data/9544006752#Person/montavon_gregoire> ; # Grégoire Montavon
    schema:editor <http://experiment.worldcat.org/entity/work/data/9544006752#Person/samek_wojciech> ; # Wojciech Samek
    schema:editor <http://experiment.worldcat.org/entity/work/data/9544006752#Person/hansen_lars_kai> ; # Lars Kai Hansen
    schema:editor <http://experiment.worldcat.org/entity/work/data/9544006752#Person/vedaldi_andrea> ; # Andrea Vedaldi
    schema:exampleOfWork <http://worldcat.org/entity/work/id/9544006752> ;
    schema:genre "Conference publication"@en ;
    schema:genre "Electronic books"@en ;
    schema:inLanguage "en" ;
    schema:isPartOf <http://experiment.worldcat.org/entity/work/data/9544006752#Series/lecture_notes_in_artificial_intelligence> ; # Lecture notes in artificial intelligence
    schema:isPartOf <http://experiment.worldcat.org/entity/work/data/9544006752#Series/lecture_notes_in_computer_science> ; # Lecture notes in computer science
    schema:isPartOf <http://experiment.worldcat.org/entity/work/data/9544006752#Series/lncs_sublibrary_sl_7_artificial_intelligence> ; # LNCS sublibrary. SL 7, Artificial intelligence
    schema:isPartOf <http://experiment.worldcat.org/entity/work/data/9544006752#Series/lncs_sublibrary> ; # LNCS sublibrary
    schema:isSimilarTo <http://worldcat.org/entity/work/data/9544006752#CreativeWork/> ;
    schema:name "Explainable AI : interpreting, explaining and visualizing deep learning"@en ;
    schema:productID "1120722055" ;
    schema:url <https://doi.org/10.1007/978-3-030-28954-6> ;
    schema:url <https://link.springer.com/10.1007/978-3-030-28954-6> ;
    schema:workExample <http://worldcat.org/isbn/9783030289546> ;
    schema:workExample <http://worldcat.org/isbn/9783030289539> ;
    schema:workExample <http://dx.doi.org/10.1007/978-3-030-28954-6> ;
    schema:workExample <http://worldcat.org/isbn/9783030289553> ;
    umbel:isLike <http://bnb.data.bl.uk/id/resource/GBB9G3417> ;
    wdrs:describedby <http://www.worldcat.org/title/-/oclc/1120722055> .

Related Entities

<http://dewey.info/class/006.3/e23/>
    a schema:Intangible .

<http://dx.doi.org/10.1007/978-3-030-28954-6>
    a schema:IndividualProduct .

<http://experiment.worldcat.org/entity/work/data/9544006752#Person/hansen_lars_kai> # Lars Kai Hansen
    a schema:Person ;
    schema:familyName "Hansen" ;
    schema:givenName "Lars Kai" ;
    schema:name "Lars Kai Hansen" .

<http://experiment.worldcat.org/entity/work/data/9544006752#Person/montavon_gregoire> # Grégoire Montavon
    a schema:Person ;
    schema:familyName "Montavon" ;
    schema:givenName "Grégoire" ;
    schema:name "Grégoire Montavon" .

<http://experiment.worldcat.org/entity/work/data/9544006752#Person/muller_klaus_robert> # Klaus-Robert Müller
    a schema:Person ;
    schema:familyName "Müller" ;
    schema:givenName "Klaus-Robert" ;
    schema:name "Klaus-Robert Müller" .

<http://experiment.worldcat.org/entity/work/data/9544006752#Person/samek_wojciech> # Wojciech Samek
    a schema:Person ;
    schema:familyName "Samek" ;
    schema:givenName "Wojciech" ;
    schema:name "Wojciech Samek" .

<http://experiment.worldcat.org/entity/work/data/9544006752#Person/vedaldi_andrea> # Andrea Vedaldi
    a schema:Person ;
    schema:familyName "Vedaldi" ;
    schema:givenName "Andrea" ;
    schema:name "Andrea Vedaldi" .

<http://experiment.worldcat.org/entity/work/data/9544006752#Series/lecture_notes_in_artificial_intelligence> # Lecture notes in artificial intelligence
    a bgn:PublicationSeries ;
    schema:hasPart <http://www.worldcat.org/oclc/1120722055> ;
    schema:name "Lecture notes in artificial intelligence" .

<http://experiment.worldcat.org/entity/work/data/9544006752#Series/lecture_notes_in_computer_science> # Lecture notes in computer science
    a bgn:PublicationSeries ;
    schema:hasPart <http://www.worldcat.org/oclc/1120722055> ;
    schema:name "Lecture notes in computer science" .

<http://experiment.worldcat.org/entity/work/data/9544006752#Series/lncs_sublibrary> # LNCS sublibrary
    a bgn:PublicationSeries ;
    schema:hasPart <http://www.worldcat.org/oclc/1120722055> ;
    schema:name "LNCS sublibrary" .

<http://experiment.worldcat.org/entity/work/data/9544006752#Series/lncs_sublibrary_sl_7_artificial_intelligence> # LNCS sublibrary. SL 7, Artificial intelligence
    a bgn:PublicationSeries ;
    schema:hasPart <http://www.worldcat.org/oclc/1120722055> ;
    schema:name "LNCS sublibrary. SL 7, Artificial intelligence" .

<http://experiment.worldcat.org/entity/work/data/9544006752#Topic/artificial_intelligence> # Artificial intelligence
    a schema:Intangible ;
    schema:name "Artificial intelligence"@en .

<http://experiment.worldcat.org/entity/work/data/9544006752#Topic/machine_learning> # Machine learning
    a schema:Intangible ;
    schema:name "Machine learning"@en .

<http://id.loc.gov/vocabulary/countries/sz>
    a schema:Place ;
    dcterms:identifier "sz" .

<http://worldcat.org/entity/work/data/9544006752#CreativeWork/>
    a schema:CreativeWork ;
    schema:description "Printed edition:" ;
    schema:isSimilarTo <http://www.worldcat.org/oclc/1120722055> .

<http://worldcat.org/isbn/9783030289539>
    a schema:ProductModel ;
    schema:isbn "3030289532", "9783030289539" .

<http://worldcat.org/isbn/9783030289546>
    a schema:ProductModel ;
    schema:isbn "3030289540", "9783030289546" .

<http://worldcat.org/isbn/9783030289553>
    a schema:ProductModel ;
    schema:isbn "3030289559", "9783030289553" .