Science Research Articles
New category! Peer-reviewed articles, pre-prints, etc. Open access articles are preferred.
19 listings
Submitted Sep 12, 2023 to Science Research Articles Scilime offers daily science news with breaking updates on the most recent scientific research, fascinating technological breakthroughs, and exciting new discoveries. Here, you can learn about the latest research and technological advances in almost every field of science. Moreover, we believe in making science fun and interactive through various modes of knowledge representation, such as infographics, short videos, pictograms, and posters.
Submitted Apr 17, 2017 to Science Research Articles We present a class of efficient models called MobileNets for mobile and embedded vision applications. MobileNets are based on a streamlined architecture that uses depth-wise separable convolutions to build lightweight deep neural networks. We introduce two simple global hyper-parameters that efficiently trade off between latency and accuracy. These hyper-parameters allow the model builder to choose the right-sized model for their application based on the constraints of the problem. We present extensive experiments on resource and accuracy tradeoffs and show strong performance compared to other popular models on ImageNet classification. We then demonstrate the effectiveness of MobileNets across a wide range of applications and use cases including object detection, fine-grained classification, face attributes, and large-scale geo-localization.
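The core building block here is the depthwise separable convolution: a per-channel 3x3 spatial convolution followed by a 1x1 pointwise convolution that mixes channels. Below is a minimal sketch of such a block in PyTorch, for illustration only; the layer widths, batch norm, and ReLU placement follow the general pattern the abstract describes, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """One depthwise separable block: a 3x3 depthwise convolution (one filter
    per input channel) followed by a 1x1 pointwise convolution that mixes channels."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        self.bn1 = nn.BatchNorm2d(in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.bn1(self.depthwise(x)))
        x = self.relu(self.bn2(self.pointwise(x)))
        return x

# Example: 32 -> 64 channels on a 112x112 feature map.
block = DepthwiseSeparableConv(32, 64)
print(block(torch.randn(1, 32, 112, 112)).shape)  # torch.Size([1, 64, 112, 112])
```

Compared with a standard 3x3 convolution, the spatial filtering and the channel mixing are factored into two cheaper steps, which is where the latency/accuracy trade-off comes from.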
Submitted Apr 16, 2017 (Edited Apr 16, 2017) to Science Research Articles Abstract: Convolutional neural networks have recently demonstrated high-quality reconstruction for single-image super-resolution. In this paper, we propose the Laplacian Pyramid Super-Resolution Network (LapSRN) to progressively reconstruct the sub-band residuals of high-resolution images. At each pyramid level, our model takes coarse-resolution feature maps as input, predicts the high-frequency residuals, and uses transposed convolutions for upsampling to the finer level. Our method does not require bicubic interpolation as a pre-processing step and thus dramatically reduces the computational complexity. We train the proposed LapSRN with deep supervision using a robust Charbonnier loss function and achieve high-quality reconstruction. Furthermore, our network generates multi-scale predictions in one feed-forward pass through the progressive reconstruction, thereby facilitating resource-aware applications. Extensive quantitative and qualitative evaluations on benchmark datasets show that the proposed algorithm performs favorably against the state-of-the-art methods in terms of speed and accuracy.
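The robust Charbonnier loss mentioned above is a smooth variant of the L1 penalty, rho(x) = sqrt(x^2 + eps^2), applied to the residual at each pyramid level. Here is a minimal PyTorch sketch under deep supervision; the epsilon value and the dummy multi-scale tensors are illustrative assumptions, not the authors' training code.

```python
import torch

def charbonnier_loss(pred, target, eps=1e-3):
    # rho(x) = sqrt(x^2 + eps^2): a smooth, robust alternative to L1/L2.
    # The eps value is a typical choice, assumed here for illustration.
    diff = pred - target
    return torch.mean(torch.sqrt(diff * diff + eps * eps))

# Deep supervision: one loss term per pyramid level (dummy 2x and 4x outputs).
preds   = [torch.rand(1, 1, 64, 64), torch.rand(1, 1, 128, 128)]
targets = [torch.rand(1, 1, 64, 64), torch.rand(1, 1, 128, 128)]
total_loss = sum(charbonnier_loss(p, t) for p, t in zip(preds, targets))
print(total_loss.item())
```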
Submitted Apr 16, 2017 (Edited Apr 16, 2017) to Science Research Articles Abstract: Metrics derived from Twitter and other social media—often referred to as altmetrics—are increasingly used to estimate the broader social impacts of scholarship. Such efforts, however, may produce highly misleading results, as the entities that participate in conversations about science on these platforms are largely unknown. For instance, if altmetric activities are generated mainly by scientists, do they really capture the broader social impacts of science? Here we present a systematic approach to identifying and analyzing scientists on Twitter. Our method can identify scientists across many disciplines, without relying on external bibliographic data, and be easily adapted to identify other stakeholder groups in science. We investigate the demographics, sharing behaviors, and interconnectivity of the identified scientists. We find that Twitter has been employed by scholars across the disciplinary spectrum, with an over-representation of social and computer and information scientists; under-representation of mathematical, physical, and life scientists; and a better representation of women compared to scholarly publishing. Analysis of the sharing of URLs reveals a distinct imprint of scholarly sites, yet only a small fraction of shared URLs are science-related. We find an assortative mixing with respect to disciplines in the networks between scientists, suggesting the maintenance of disciplinary walls in social media. Our work contributes to the literature both methodologically and conceptually—we provide new methods for disambiguating and identifying particular actors on social media and describing the behaviors of scientists, thus providing foundational information for the construction and use of indicators on the basis of social media metrics.
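The disciplinary assortativity the authors report can be measured with a standard attribute mixing coefficient on the network between scientists. Below is a toy sketch using networkx; the graph and the discipline labels are invented placeholders, not the paper's data.

```python
import networkx as nx

# Toy network among five hypothetical Twitter scientists, labeled by discipline.
G = nx.Graph()
G.add_nodes_from([
    ("a", {"discipline": "biology"}),
    ("b", {"discipline": "biology"}),
    ("c", {"discipline": "physics"}),
    ("d", {"discipline": "physics"}),
    ("e", {"discipline": "computer science"}),
])
G.add_edges_from([("a", "b"), ("c", "d"), ("a", "e"), ("b", "e")])

# Values near +1 mean scientists connect mostly within their own discipline
# (the "disciplinary walls" described above); values near 0 mean random mixing.
print(nx.attribute_assortativity_coefficient(G, "discipline"))
```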
Submitted Apr 12, 2017 to Science Research Articles In this guide, we present a reading list to serve as a concise introduction to Bayesian data analysis. The introduction is geared toward reviewers, editors, and interested researchers who are new to Bayesian statistics. We provide commentary for eight recommended sources, which together cover the theoretical and practical cornerstones of Bayesian statistics in psychology and related sciences.
Submitted Mar 27, 2017 to Science Research Articles We present a novel graph-based summarization framework (Opinosis) that generates concise abstractive summaries of highly redundant opinions. Evaluation results on summarizing user reviews show that Opinosis summaries have better agreement with human summaries compared to the baseline extractive method. The summaries are readable, reasonably well-formed and are informative enough to convey the major opinions.
Submitted Mar 10, 2017 to Science Research Articles Previously, I was working as a physicist in the area of phase retrieval (PR). PR is concerned with finding the phase of a complex-valued function (typically in Fourier space) given knowledge of its amplitude, along with constraints in real space (things like positivity and finite extent).
PR is a non-convex optimization problem that has been the subject of a great deal of work; it forms the backbone of crystallography, a stalwart of structural biology. Some of the most successful algorithms for the general PR problem are projection-based methods, inspired by convex optimization's projection onto convex sets (for an excellent overview see [Marchesini2007]). Given the success of projection-based methods in PR, I wondered whether it would be possible to train a neural net using something similar.
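For readers unfamiliar with projection-based PR, the classic error-reduction scheme alternates between two projections: enforce the real-space constraints (support, positivity), then enforce the measured Fourier amplitudes while keeping the current phases. Here is a minimal numpy sketch of that iteration, as an illustration of the family of methods rather than the approach used in this post.

```python
import numpy as np

def error_reduction(measured_amp, support, n_iter=200, seed=0):
    """Alternating-projection phase retrieval sketch.

    measured_amp : known Fourier-space amplitudes |F|
    support      : boolean mask where the real-space object may be nonzero
    """
    rng = np.random.default_rng(seed)
    # Start from the measured amplitudes with random phases.
    g = np.fft.ifftn(measured_amp * np.exp(2j * np.pi * rng.random(measured_amp.shape)))
    for _ in range(n_iter):
        # Projection 1 (real space): enforce support and positivity.
        g = np.where(support & (g.real > 0), g.real, 0.0)
        # Projection 2 (Fourier space): keep the phases, restore the measured amplitudes.
        G = np.fft.fftn(g)
        g = np.fft.ifftn(measured_amp * np.exp(1j * np.angle(G)))
    return g
```

More robust variants (hybrid input-output, the difference map, and so on) modify the real-space update but keep the same projection structure.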
Submitted Mar 06, 2017 to Science Research Articles Modern commodity processors such as GPUs may execute up to about a thousand physical threads per chip to better utilize their numerous execution units and hide execution latencies. Understanding this novel capability, however, is hindered by the overall complexity of the hardware and the complexity of typical workloads. In this dissertation, we suggest a better way to understand modern multithreaded performance by considering a family of synthetic workloads, which use the same key hardware capabilities – memory access, arithmetic operations, and multithreading – but are otherwise as simple as possible.
One of our surprising findings is that prior performance models for GPUs fail on these workloads: they mispredict observed throughputs by factors of up to 1.7. We analyze these prior approaches, identify a number of common pitfalls, and discuss the related subtleties in understanding concurrency and Little's Law. We also further our understanding by considering a few basic questions, such as how different latencies compare with each other in terms of latency hiding, and how the number of threads needed to hide latency depends on basic parameters of the executed code, such as arithmetic intensity. Finally, we outline a performance modeling framework that is free of the identified limitations. As a tangential development, we present a number of novel experimental studies, such as how mean memory latency depends on memory throughput, how the latencies of individual memory accesses are distributed around the mean, and how occupancy varies during execution.
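Little's Law, mentioned above, says that the concurrency needed to hide a latency equals that latency times the target throughput. A back-of-envelope sketch with made-up hardware numbers (not figures from the dissertation):

```python
def threads_to_hide_latency(latency_cycles, throughput_per_cycle, ops_in_flight_per_thread=1):
    """Little's Law: concurrency = latency x throughput.
    Returns how many threads are needed when each thread keeps
    `ops_in_flight_per_thread` operations outstanding at a time."""
    return latency_cycles * throughput_per_cycle / ops_in_flight_per_thread

# Hypothetical numbers: 400-cycle memory latency, 1 memory access/cycle sustained.
print(threads_to_hide_latency(400, 1.0))                              # 400 threads
# If each thread keeps 4 independent accesses in flight, fewer threads suffice.
print(threads_to_hide_latency(400, 1.0, ops_in_flight_per_thread=4))  # 100 threads
```

How the required thread count shifts with arithmetic intensity is exactly one of the dependencies the dissertation examines.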
Submitted Mar 06, 2017 to Science Research Articles I recently saw an old friend for the first time in many years. We had been Ph.D. students at the same time, both studying science, although in different areas. She later dropped out of graduate school, went to Harvard Law School and is now a senior lawyer for a major environmental organization. At some point, the conversation turned to why she had left graduate school. To my utter astonishment, she said it was because it made her feel stupid. After a couple of years of feeling stupid every day, she was ready to do something else.
I had thought of her as one of the brightest people I knew and her subsequent career supports that view. What she said bothered me. I kept thinking about it; sometime the next day, it hit me. Science makes me feel stupid too. It's just that I've gotten used to it. So used to it, in fact, that I actively seek out new opportunities to feel stupid. I wouldn't know what to do without that feeling. I even think it's supposed to be this way. Let me explain.
Submitted Feb 25, 2017 (Edited Feb 25, 2017) to Science Research Articles Ole Peters and Murray Gell-Mann, Chaos 26, 023103 (2016)
ABSTRACT Gambles are random variables that model possible changes in wealth. Classic decision theory transforms money into utility through a utility function and defines the value of a gamble as the expectation value of utility changes. Utility functions aim to capture individual psychological characteristics, but their generality limits predictive power. Expectation value maximizers are defined as rational in economics, but expectation values are only meaningful in the presence of ensembles or in systems with ergodic properties, whereas decision-makers have no access to ensembles, and the variables representing wealth in the usual growth models do not have the relevant ergodic properties. Simultaneously addressing the shortcomings of utility and those of expectations, we propose to evaluate gambles by averaging wealth growth over time. No utility function is needed, but a dynamic must be specified to compute time averages. Linear and logarithmic “utility functions” appear as transformations that generate ergodic observables for purely additive and purely multiplicative dynamics, respectively. We highlight inconsistencies throughout the development of decision theory, whose correction clarifies that our perspective is legitimate. These invalidate a commonly cited argument for bounded utility functions.

Over the past few years, we have explored a conceptually deep, simple, change of perspective that leads to a novel approach to economics. Much of current economic theory is based on early work in probability theory, performed specifically between the 1650s and the 1730s. This foundational work predates the development of the notion of ergodicity, and it assumes that expectation values reflect what happens over time. This is not the case for stochastic growth processes, but such processes constitute the essential models of economics. As a consequence, nowadays expectation values are often used to evaluate situations where time averages would be appropriate instead, and the result is a “paradox,” “puzzle,” or “anomaly.” This class of problems, including the St. Petersburg paradox and the equity-premium puzzle, can be resolved by ensuring the following: the stochastic growth process involved in the problem needs to be made explicit; the process needs to be transformed to find an appropriate ergodic observable. The expectation value of the new observable will then indeed reflect long-time behavior, and the puzzling essence of the problem will go away. Here we spell out the general recipe, which we phrase as the solution to the general gamble problem that stood at the beginning of the debate in the 17th century. We hope that this recipe will resolve puzzles in many different areas.
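A standard way to see the distinction between expectation values and time averages is a repeated multiplicative gamble; the specific 50/50 gamble below (wealth ×1.5 on a win, ×0.6 on a loss) is an illustrative choice, not an example taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
up, down, rounds = 1.5, 0.6, 1000   # win: +50%, lose: -40%, equal probability

# Ensemble perspective: the expected factor per round exceeds 1, so the gamble
# "looks" favorable to an expectation-value maximizer.
print("expectation factor per round :", 0.5 * up + 0.5 * down)                          # 1.05
# Time perspective: the growth factor experienced along one trajectory is below 1.
print("time-average factor per round:", np.exp(0.5 * np.log(up) + 0.5 * np.log(down)))  # ~0.95

# One long trajectory: despite the positive expectation, wealth almost surely decays.
wealth = np.cumprod(rng.choice([up, down], size=rounds))
print("wealth after", rounds, "rounds:", wealth[-1])
```

The logarithm is precisely the transformation that makes this multiplicative dynamic ergodic, which is why the time-average factor is computed from log-wealth changes.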
Submitted Jan 26, 2017 to Science Research Articles Advances in genetic technology and biological understanding in the last 100 years have opened a new era of discovery and investigation. In the centennial year of the journal GENETICS, we quantify the remarkable shifts in genetic research through textual analysis of publications. We characterize changes in the focus of genetic research by studying all available abstracts and titles of papers published since 1916 in GENETICS. We document a massive expansion in publications in genetics, beginning in the 1950s, and accelerating in the 1980s, as genetic research expanded globally from a few initial locations. We also describe changes in word usage over time, reflecting evolving research interests, methods, and organisms. For example, we observe stable use of Drosophila in genetics research throughout the century. In contrast, we document rapid increases in human genetic research and extreme expansion in use of model organisms, including Escherichia coli, Arabidopsis thaliana, Caenorhabditis elegans, and Saccharomyces cerevisiae, and a later decline in use of prokaryotes. We use bibliometric analyses to measure the most prominent research trends as reflected in the journal GENETICS.
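A minimal sketch of the kind of word-usage count that underlies such a textual analysis; the (year, abstract) pairs below are invented placeholders, not the GENETICS corpus.

```python
from collections import Counter

# Placeholder corpus: (year, abstract text) pairs standing in for the real archive.
abstracts = [
    (1925, "linkage in drosophila melanogaster"),
    (1968, "mutants of escherichia coli"),
    (1995, "gene expression in saccharomyces cerevisiae"),
    (2010, "human disease variants and arabidopsis thaliana"),
]
terms = ["drosophila", "escherichia", "saccharomyces", "arabidopsis", "human"]

counts = Counter()
for year, text in abstracts:
    decade = (year // 10) * 10
    for term in terms:
        if term in text.lower():
            counts[(decade, term)] += 1

for (decade, term), n in sorted(counts.items()):
    print(decade, term, n)
```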
Submitted Jan 23, 2017 to Science Research Articles This links to a 2013 report by the Professional Institute of the Public Service of Canada on the risks Canadian scientists face when speaking out about issues related to public health, safety, and the environment.
Intro: "A major survey of federal government scientists commissioned by the Professional Institute of the Public Service of Canada (PIPSC) has found that 90% feel they are not allowed to speak freely to the media about the work they do and that, faced with a departmental decision that could harm public health, safety or the environment, nearly as many (86%) would face censure or retaliation for doing so."
Submitted Jan 23, 2017 to Science Research Articles EPA's Climate Change Indicators in the United States report published in 2016 presents 37 indicators, each describing trends related to the causes and effects of climate change. It focuses primarily on the United States, but in some cases global trends are presented to provide context or a basis for comparison.
Submitted Jan 17, 2017 (Edited Jan 17, 2017) to Science Research Articles An article by Martin Zinkevich, Research Scientist at Google, on best practices in machine learning drawn from his experience at Google. From the intro: "It presents a style for machine learning, similar to the Google C++ Style Guide and other popular guides to practical programming. If you have taken a class in machine learning, or built or worked on a machine-learned model, then you have the necessary background to read this document."
Submitted Jan 13, 2017 (Edited Jan 14, 2017) to Science Research Articles This is President Barack Obama's paper published in the journal Science in January 2017.
Abstract: The release of carbon dioxide (CO2) and other greenhouse gases (GHGs) due to human activity is increasing global average surface air temperatures, disrupting weather patterns, and acidifying the ocean (1). Left unchecked, the continued growth of GHG emissions could cause global average temperatures to increase by another 4°C or more by 2100 and by 1.5 to 2 times as much in many midcontinent and far northern locations (1). Although our understanding of the impacts of climate change is increasingly and disturbingly clear, there is still debate about the proper course for U.S. policy—a debate that is very much on display during the current presidential transition. But putting near-term politics aside, the mounting economic and scientific evidence leave me confident that trends toward a clean-energy economy that have emerged during my presidency will continue and that the economic opportunity for our country to harness that trend will only grow. This Policy Forum will focus on the four reasons I believe the trend toward clean energy is irreversible.
Submitted Jan 13, 2017 to Science Research Articles Mass spectrometry (MS) is an essential part of the cell biologist’s proteomics toolkit, allowing analyses at molecular and system-wide scales. However, proteomics still lags behind genomics in popularity and ease of use. We discuss key differences between MS-based -omics and other booming -omics technologies and highlight what we view as the future of MS and its role in our increasingly deep understanding of cell biology.
Submitted Jan 12, 2017 to Science Research Articles Stan is a probabilistic programming language for specifying statistical models. A Stan program imperatively defines a log probability function over parameters conditioned on specified data and constants. As of version 2.14.0, Stan provides full Bayesian inference for continuous-variable models through Markov chain Monte Carlo methods such as the No-U-Turn sampler, an adaptive form of Hamiltonian Monte Carlo sampling. Penalized maximum likelihood estimates are calculated using optimization methods such as the limited memory Broyden-Fletcher-Goldfarb-Shanno algorithm. Stan is also a platform for computing log densities and their gradients and Hessians, which can be used in alternative algorithms such as variational Bayes, expectation propagation, and marginal inference using approximate integration. To this end, Stan is set up so that the densities, gradients, and Hessians, along with intermediate quantities of the algorithm such as acceptance probabilities, are easily accessible. Stan can be called from the command line using the cmdstan package, through R using the rstan package, and through Python using the pystan package. All three interfaces support sampling and optimization-based inference with diagnostics and posterior analysis. rstan and pystan also provide access to log probabilities, gradients, Hessians, parameter transforms, and specialized plotting.
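A minimal example of the workflow the abstract describes, assuming the PyStan 2-style interface contemporaneous with Stan 2.14; the model (a normal mean/scale fit to fake data) is only a placeholder.

```python
import numpy as np
import pystan

model_code = """
data {
  int<lower=0> N;
  vector[N] y;
}
parameters {
  real mu;
  real<lower=0> sigma;
}
model {
  y ~ normal(mu, sigma);   // the program imperatively accumulates the log density
}
"""

y = np.random.normal(1.0, 2.0, size=50)          # fake observations
sm = pystan.StanModel(model_code=model_code)     # compile the Stan program
fit = sm.sampling(data={"N": len(y), "y": y},
                  iter=2000, chains=4)           # NUTS/HMC sampling
print(fit)                                       # posterior summary and diagnostics
```

The same Stan program can equally be compiled and run through cmdstan from the command line or rstan from R, as the abstract notes.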
Submitted Jan 12, 2017 to Science Research Articles This paper describes the tectonic summaries for all magnitude 7 and larger earthquakes in the period 2000–2015, as produced by the U.S. Geological Survey National Earthquake Information Center during their routine response operations to global earthquakes. The goal of such summaries is to provide important event-specific information to the public rapidly and concisely, such that recent earthquakes can be understood within a global and regional seismotectonic framework. We compile these summaries here to provide a long-term archive for this information, and so that the variability in tectonic setting and earthquake history from region to region, and sometimes within a given region, can be more clearly understood.
Submitted Jan 11, 2017 to Science Research Articles From the abstract:
Improving the reliability and efficiency of scientific research will increase the credibility of the published scientific literature and accelerate discovery. Here we argue for the adoption of measures to optimize key elements of the scientific process: methods, reporting and dissemination, reproducibility, evaluation and incentives. There is some evidence from both simulations and empirical studies supporting the likely effectiveness of these measures, but their broad adoption by researchers, institutions, funders and journals will require iterative evaluation and improvement. We discuss the goals of these measures, and how they can be implemented, in the hope that this will facilitate action toward improving the transparency, reproducibility and efficiency of scientific research.