AI Neuroscience: Visualizing and Understanding Deep Neural Networks

To the University of Wyoming:

The members of the Committee approve the dissertation of Anh M. Nguyen presented on May 24, 2017.

Jeff Clune, Chairperson
Cameron Wright, External Department Member
James Caldwell
John Hitchcock
Amy Banic

APPROVED:
James Caldwell, Head, Department of Computer Science
Michael Pishko, Dean, College of Engineering and Applied Science

Nguyen, Anh M., AI Neuroscience: Visualizing and Understanding Deep Neural Networks, Ph.D., Department of Computer Science, Aug. 2017.

Deep Learning, a type of Artificial Intelligence, is transforming many industries including transportation, health care, and mobile computing. The main actors behind deep learning are deep neural networks (DNNs). These artificial brains have demonstrated impressive performance on many challenging tasks such as synthesizing and recognizing speech, driving cars, and even detecting cancer from medical scans. Given their excellent performance and widespread applications in everyday life, it is important to understand: (1) how DNNs function internally; (2) why they perform so well; and (3) when they fail. Answering these questions would allow end-users (e.g. medical doctors harnessing deep learning to assist them in diagnosis) to gain deeper insights into how these models behave, and therefore more confidence in utilizing the technology in important real-world applications.

Artificial neural networks have traditionally been treated as black boxes: little was known about how they arrive at a decision when presented with an input. Similarly, in neuroscience, understanding how biological brains work has been a long-standing quest. Neuroscientists have discovered neurons in human brains that selectively fire in response to specific, abstract concepts such as Halle Berry or Bill Clinton, informing the discussion of whether learned neural codes are local or distributed. These neurons were identified by finding the preferred stimuli (here, images) that highly excite a specific neuron, which was accomplished by showing subjects many different images while recording a target neuron's activation.

Inspired by such neuroscience techniques, my Ph.D. study produced a series of visualization methods that synthesize the preferred stimuli for each neuron in DNNs to shed more light on (1) the weaknesses of DNNs, which raise serious concerns about their widespread deployment in critical sectors of our economy and society; and (2) how DNNs function internally. Some of the notable findings are summarized as follows. First, DNNs are easily fooled in that it is possible to produce images that are visually unrecognizable to humans, but that state-of-the-art DNNs classify as familiar objects with near-certainty confidence (e.g. labeling white-noise images as "school bus"). These images can be optimized to fool the DNN regardless of whether we treat the network as a white box or a black box (i.e. whether or not we have access to the network parameters). These results shed more light on the inner workings of DNNs and also call into question the security and reliability of deep learning applications. Second, our visualization methods reveal that DNNs can automatically learn a hierarchy of increasingly abstract features from the input space that are useful for solving a given task. In addition, we also found that neurons in DNNs are often multifaceted in that a single neuron fires for a variety of different input patterns (i.e. it is invariant to changes in the input).
These observations align with the common wisdom previously established for both the human visual cortex and DNNs. Lastly, many machine learning hobbyists and scientists have successfully applied our methods to visualize their own DNNs or even to generate high-quality art images. We also turn the visualization frameworks into (1) an art-generator algorithm, and (2) a state-of-the-art generative image model, making contributions to the fields of evolutionary computation and generative modeling, respectively.

AI NEUROSCIENCE: VISUALIZING AND UNDERSTANDING DEEP NEURAL NETWORKS

by Anh M. Nguyen, B.S., M.S.

A dissertation submitted to the Department of Computer Science and the University of Wyoming in partial fulfillment of the requirements for the degree of DOCTOR OF PHILOSOPHY in COMPUTER SCIENCE

Laramie, Wyoming
Aug 2017

Copyright © 2017 by Anh M. Nguyen
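
The abstract describes both synthesizing a neuron's preferred stimulus and producing fooling images by optimizing the input itself until a chosen unit or class score becomes very high. Below is a minimal sketch of that white-box idea (gradient ascent on the pixels); the pretrained torchvision AlexNet, the target class index, the step count, and the learning rate are illustrative assumptions, not the models or settings used in the dissertation.

# Minimal sketch of activation maximization / white-box fooling by gradient
# ascent on the input image. Assumptions for illustration only: a pretrained
# torchvision AlexNet stands in for the DNN under study, and the target class
# index, step count, and learning rate are arbitrary choices.
import torch
import torchvision.models as models

model = models.alexnet(pretrained=True).eval()
for p in model.parameters():
    p.requires_grad_(False)        # only the input image is optimized

target_class = 779                 # index of the desired class (illustrative)
x = torch.zeros(1, 3, 224, 224, requires_grad=True)  # start from a blank image
optimizer = torch.optim.SGD([x], lr=0.1)

for step in range(200):
    optimizer.zero_grad()
    score = model(x)[0, target_class]   # pre-softmax score of the target unit
    (-score).backward()                 # ascend the score by descending its negative
    optimizer.step()
    x.data.clamp_(0, 1)                 # keep pixel values in a valid range

# x is now an image the network scores highly as the target class, even though
# it may be unrecognizable to a human (input normalization and the image priors
# used for higher-quality visualizations are omitted for brevity).

In the black-box setting the abstract mentions, the gradient step would be replaced by a derivative-free search (for example, an evolutionary algorithm) that only queries the network's output probabilities.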