Abstract
Using neural networks to explore the spatial patterns and temporal dynamics of human brain activity is an important yet challenging problem, because it is hard to manually design optimal neural networks. Several promising deep learning methods can decompose neuroscientifically meaningful spatial-temporal patterns from 4D fMRI data, e.g., the deep sparse recurrent auto-encoder (DSRAE). However, these previous studies still depend on hand-crafted neural network structures and hyperparameters, which are suboptimal in various senses. In this paper, we employ evolutionary algorithms to optimize such DSRAE networks by minimizing the expected reconstruction loss of the generated architectures within a neural architecture search (NAS) framework, named NAS-DSRAE. The optimized NAS-DSRAE is evaluated on the publicly available Human Connectome Project (HCP) fMRI datasets, and the promising results show that NAS-DSRAE generalizes well in modeling spatial-temporal features and outperforms the hand-crafted model. To the best of our knowledge, the proposed NAS-DSRAE is among the earliest NAS models that can extract connectome-scale, meaningful spatial-temporal brain networks from 4D fMRI data.
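The abstract describes an evolutionary search that minimizes reconstruction loss over candidate architectures. The loop below is a minimal, hypothetical sketch of that idea: the genome fields, the search space, and the `reconstruction_loss` stand-in are illustrative assumptions, not the paper's actual DSRAE training procedure.

```python
import random

# Hypothetical architecture genome for a recurrent auto-encoder.
# In the real NAS-DSRAE these choices would parameterize and train a deep
# sparse recurrent auto-encoder on fMRI data; here a cheap stand-in "loss"
# lets the evolutionary loop run end to end.
SEARCH_SPACE = {
    "num_layers": [1, 2, 3, 4],
    "units": [32, 64, 128, 256],
    "sparsity": [0.001, 0.01, 0.1],
}

def random_genome(rng):
    return {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}

def mutate(genome, rng):
    # Re-sample one gene at random.
    child = dict(genome)
    key = rng.choice(list(SEARCH_SPACE))
    child[key] = rng.choice(SEARCH_SPACE[key])
    return child

def reconstruction_loss(genome):
    # Stand-in for training the candidate model and measuring its
    # reconstruction error; replace with a real trainer in practice.
    ideal = {"num_layers": 3, "units": 128, "sparsity": 0.01}
    return sum(
        abs(SEARCH_SPACE[k].index(genome[k]) - SEARCH_SPACE[k].index(ideal[k]))
        for k in SEARCH_SPACE
    )

def evolve(generations=20, pop_size=8, seed=0):
    rng = random.Random(seed)
    pop = [random_genome(rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=reconstruction_loss)
        survivors = pop[: pop_size // 2]  # elitist truncation selection
        pop = survivors + [
            mutate(rng.choice(survivors), rng)
            for _ in range(pop_size - len(survivors))
        ]
    return min(pop, key=reconstruction_loss)

best = evolve()
print(best)
```

Because selection is elitist, the best loss never increases across generations; swapping the stand-in loss for an actual train-and-evaluate step recovers the general shape of an evolutionary NAS.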
Original language | English (US) |
---|---|
Title of host publication | Medical Image Computing and Computer Assisted Intervention – MICCAI 2020 - 23rd International Conference, Proceedings |
Editors | Anne L. Martel, Purang Abolmaesumi, Danail Stoyanov, Diana Mateus, Maria A. Zuluaga, S. Kevin Zhou, Daniel Racoceanu, Leo Joskowicz |
Publisher | Springer Science and Business Media Deutschland GmbH |
Pages | 377-386 |
Number of pages | 10 |
ISBN (Print) | 9783030597276 |
DOIs | https://doi.org/10.1007/978-3-030-59728-3_37 |
State | Published - 2020 |
Externally published | Yes |
Event | 23rd International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2020 - Lima, Peru Duration: Oct 4 2020 → Oct 8 2020 |
Publication series
Name | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) |
---|---|
Volume | 12267 LNCS |
ISSN (Print) | 0302-9743 |
ISSN (Electronic) | 1611-3349 |
Conference
Conference | 23rd International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2020 |
---|---|
Country/Territory | Peru |
City | Lima |
Period | 10/4/20 → 10/8/20 |
Keywords
- Auto-encoder
- Neural architecture search
- Recurrent neural network
ASJC Scopus subject areas
- Theoretical Computer Science
- General Computer Science
Cite this
Medical Image Computing and Computer Assisted Intervention – MICCAI 2020 - 23rd International Conference, Proceedings. ed. / Anne L. Martel; Purang Abolmaesumi; Danail Stoyanov; Diana Mateus; Maria A. Zuluaga; S. Kevin Zhou; Daniel Racoceanu; Leo Joskowicz. Springer Science and Business Media Deutschland GmbH, 2020. p. 377-386 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 12267 LNCS).
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution
TY - GEN
T1 - Neural Architecture Search for Optimization of Spatial-Temporal Brain Network Decomposition
AU - Li, Qing
AU - Zhang, Wei
AU - Lv, Jinglei
AU - Wu, Xia
AU - Liu, Tianming
N1 - Publisher Copyright: © 2020, Springer Nature Switzerland AG.
PY - 2020
Y1 - 2020
N2 - Using neural networks to explore the spatial patterns and temporal dynamics of human brain activity is an important yet challenging problem, because it is hard to manually design optimal neural networks. Several promising deep learning methods can decompose neuroscientifically meaningful spatial-temporal patterns from 4D fMRI data, e.g., the deep sparse recurrent auto-encoder (DSRAE). However, these previous studies still depend on hand-crafted neural network structures and hyperparameters, which are suboptimal in various senses. In this paper, we employ evolutionary algorithms to optimize such DSRAE networks by minimizing the expected reconstruction loss of the generated architectures within a neural architecture search (NAS) framework, named NAS-DSRAE. The optimized NAS-DSRAE is evaluated on the publicly available Human Connectome Project (HCP) fMRI datasets, and the promising results show that NAS-DSRAE generalizes well in modeling spatial-temporal features and outperforms the hand-crafted model. To the best of our knowledge, the proposed NAS-DSRAE is among the earliest NAS models that can extract connectome-scale, meaningful spatial-temporal brain networks from 4D fMRI data.
AB - Using neural networks to explore the spatial patterns and temporal dynamics of human brain activity is an important yet challenging problem, because it is hard to manually design optimal neural networks. Several promising deep learning methods can decompose neuroscientifically meaningful spatial-temporal patterns from 4D fMRI data, e.g., the deep sparse recurrent auto-encoder (DSRAE). However, these previous studies still depend on hand-crafted neural network structures and hyperparameters, which are suboptimal in various senses. In this paper, we employ evolutionary algorithms to optimize such DSRAE networks by minimizing the expected reconstruction loss of the generated architectures within a neural architecture search (NAS) framework, named NAS-DSRAE. The optimized NAS-DSRAE is evaluated on the publicly available Human Connectome Project (HCP) fMRI datasets, and the promising results show that NAS-DSRAE generalizes well in modeling spatial-temporal features and outperforms the hand-crafted model. To the best of our knowledge, the proposed NAS-DSRAE is among the earliest NAS models that can extract connectome-scale, meaningful spatial-temporal brain networks from 4D fMRI data.
KW - Auto-encoder
KW - Neural architecture search
KW - Recurrent neural network
UR - http://www.scopus.com/inward/record.url?scp=85092738949&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85092738949&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-59728-3_37
DO - 10.1007/978-3-030-59728-3_37
M3 - Conference contribution
AN - SCOPUS:85092738949
SN - 9783030597276
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 377
EP - 386
BT - Medical Image Computing and Computer Assisted Intervention – MICCAI 2020 - 23rd International Conference, Proceedings
A2 - Martel, Anne L.
A2 - Abolmaesumi, Purang
A2 - Stoyanov, Danail
A2 - Mateus, Diana
A2 - Zuluaga, Maria A.
A2 - Zhou, S. Kevin
A2 - Racoceanu, Daniel
A2 - Joskowicz, Leo
PB - Springer Science and Business Media Deutschland GmbH
T2 - 23rd International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2020
Y2 - 4 October 2020 through 8 October 2020
ER -