Data science
Computer science
Big data
Analytics
Data analysis
Profiling (computer programming)
Scale (ratio)
Dimensionality reduction
Data mining
Machine learning
Artificial intelligence
Quantum mechanics
Operating system
Physics
Authors
Danilo Bzdok, Thomas E. Nichols, Stephen M. Smith
Identifier
DOI: 10.1038/s42256-019-0069-5
Abstract
The traditional goal of quantitative analytics is to find simple, transparent models that generate explainable insights. In recent years, large-scale data acquisition enabled, for instance, by brain scanning and genomic profiling with microarray-type techniques, has prompted a wave of statistical inventions and innovative applications. Here we review some of the main trends in learning from ‘big data’ and provide examples from imaging neuroscience. Some main messages we find are that modern analysis approaches (1) tame complex data with parameter regularization and dimensionality-reduction strategies, (2) are increasingly backed up by empirical model validations rather than justified by mathematical proofs, (3) will compare against and build on open data and consortium repositories, as well as (4) often embrace more elaborate, less interpretable models to maximize prediction accuracy.

Classical statistical analysis in many empirical sciences has lagged behind modern trends in analytics for large-scale datasets. The authors discuss the influence of more variables, larger sample sizes, open data sources for analysis and assessment, and ‘black box’ prediction methods on the empirical sciences, and provide examples from imaging neuroscience.
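Messages (1) and (2) of the abstract — taming wide data with parameter regularization, and validating models by held-out prediction rather than closed-form proofs — can be illustrated concretely. The sketch below is not from the paper: it is a minimal numpy illustration on synthetic data, where the sample sizes, feature count, noise level, and candidate lambda values are all assumptions chosen for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "wide" dataset: far more features than samples, mimicking
# the high-dimensional settings (brain scans, genomic profiles) in the review.
n_train, n_test, p = 60, 40, 200
X = rng.normal(size=(n_train + n_test, p))
true_w = np.zeros(p)
true_w[:10] = rng.normal(size=10)  # only 10 features carry signal
y = X @ true_w + 0.5 * rng.normal(size=n_train + n_test)

X_tr, y_tr = X[:n_train], y[:n_train]
X_te, y_te = X[n_train:], y[n_train:]

def ridge_fit(X, y, lam):
    """Closed-form ridge regression: w = (X'X + lam*I)^{-1} X'y.
    The penalty lam*I makes the p x p system invertible even when p > n."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def mse(X, y, w):
    """Mean squared prediction error of weights w on (X, y)."""
    return float(np.mean((X @ w - y) ** 2))

# Empirical model validation: judge each lambda by its prediction error
# on held-out data, not by how well it fits the training set.
for lam in (0.01, 1.0, 100.0):
    w = ridge_fit(X_tr, y_tr, lam)
    print(f"lambda={lam:6.2f}  held-out MSE={mse(X_te, y_te, w):.3f}")
```

Larger lambda values shrink the coefficient vector toward zero, trading a worse in-sample fit for a model that cannot chase noise — the regularization-for-prediction trade-off the abstract describes.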