Science Blogs
Blogs, magazines, and articles, mostly science and research related.
473 listings
Submitted May 04, 2017 to Science Blogs Founded in 2012, BBC News Labs is an incubator charged with driving innovation for BBC News. We are part of Connected Studio, the pan-BBC Innovation Programme.
Since 2012 we have grown from a few part-time staff to about 20 people from a variety of backgrounds. Our team includes journalists, developers, scientists, developer-journalists, and broadcast craft experts from the UK, France, Italy, China, Germany, Russia, Thailand, Uruguay, New Zealand, Canada and the USA.
Submitted Apr 29, 2017 (Edited Apr 29, 2017) to Science Blogs Simply put: I'm a senior research scientist at MetaMind, acquired by Salesforce, where I poke datasets with the entropy stick and complain about deep neural networks converging too slowly. I'm passionate about machine learning, open data, and teaching computer science.
Submitted Apr 25, 2017 to Science Blogs An international team of researchers has created a new structure that allows topological properties to be tuned in such a way as to turn these unique behaviors on or off. The structure could open up possibilities for new explorations into the properties of topological states of matter.
“This is an exciting new direction in topological matter research,” said M. Zahid Hasan, professor of physics at Princeton University and an investigator at Lawrence Berkeley National Laboratory in California who led the study, which was published March 24th in the journal Science Advances. “We are engineering new topological states that do not occur naturally, opening up numerous exotic possibilities for controlling the behaviors of these materials.”
Submitted Apr 25, 2017 (Edited Apr 25, 2017) to Science Blogs A blog about science and other interestingness by Joe Hanson, a Ph.D. biologist and science writer based in Austin, TX.
Submitted Apr 25, 2017 to Science Blogs This blog aims to parse the wealth of information and the multitude of online resources about earthquakes, in the hope of transmitting experience and knowledge of these fearsome events to those of you with questions and curiosity about them.
I mainly focus on the fascinating physical phenomena revealed by each new earthquake and brought onto a global stage by YouTube, Twitter, and the like. When applicable, I also strive to include resources such as aid organizations and community preparation information in order to bridge the gap between the curious, the concerned, and the stricken, and hopefully to raise awareness of the reality of the earthquake risk that most of the global population faces. I also intend to compile demonstrative eyewitness videos to illustrate, elucidate, and correct commonly misconceived or misconstrued phenomena associated with earthquakes. Examples include the various types of seismic waves that constitute earthquakes, which are perceived differently and sometimes lead to misinformation, or the nature of tsunamis, a phenomenon fraught with misconception. Follow along if you want to understand awesome forces, witness rare natural phenomena, or seek information about an earthquake you’ve experienced.
Submitted Apr 24, 2017 to Science Blogs This blog is dedicated to helping other programmers understand how image search engines work. While a lot of computer vision concepts are theoretical in nature, I’m a big fan of “learning by example”. My goal is to distill my life experiences in building image search engines into concise, easy to understand examples.
Submitted Apr 23, 2017 to Science Blogs Computing Education Research is about how people come to understand computing, and how we can facilitate that understanding. I am Mark Guzdial, a professor in the School of Interactive Computing at Georgia Institute of Technology. I am a researcher in computing education.
Submitted Apr 22, 2017 to Science Blogs We are two friends living in the Pacific Northwest of the United States. We work together and realized we both share a love of all things space, whether it's astrophotography, discussing aerospace events and studies such as SpaceX landing their rockets, or just looking through our telescopes. We hope to share this love of all things space-related with the world, gathering the online resources we've found useful and sharing them with you here.
Submitted Apr 22, 2017 to Science Blogs Despite the many advances in computing over the past decades, the actual process of writing computer software has not fundamentally changed — a programmer must manually code the exact algorithmic logic of a program in a step-by-step manner using a specialized programming language. Although programming languages have become much more user-friendly over the years, learning how to program is still a major endeavor that most computer users have not undertaken.
In a recent paper, we report our latest work in deep learning for program synthesis, where deep neural networks learn how to generate computer programs based on a user’s intent. The user simply provides a few input/output (I/O) examples to specify the desired program behavior, and the system uses these to generate a corresponding program.
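The programming-by-example contract this entry describes, inferring a program from a few I/O pairs, can be sketched far more crudely than the neural system in the paper as a brute-force enumerative search. The tiny DSL of unary integer operations below is an illustrative assumption, not the paper's language:

```python
from itertools import product

# A toy synthesizer: enumerate op sequences of increasing length and
# return the first one consistent with every input/output example.
OPS = {
    "add1": lambda x: x + 1,
    "double": lambda x: 2 * x,
    "square": lambda x: x * x,
}

def synthesize(examples, max_len=3):
    """Return the shortest op sequence matching all I/O examples, or None."""
    for length in range(1, max_len + 1):
        for prog in product(OPS, repeat=length):
            def run(x, prog=prog):
                for op in prog:
                    x = OPS[op](x)
                return x
            if all(run(i) == o for i, o in examples):
                return prog
    return None

# The user specifies intent with a few I/O pairs; here (x+1)^2:
prog = synthesize([(1, 4), (2, 9), (3, 16)])  # -> ("add1", "square")
```

Enumeration explodes combinatorially with program length, which is exactly the gap the neural approach targets: learning to guide the search from the examples themselves.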
Submitted Apr 20, 2017 to Science Blogs At Stitch Fix, we’re transforming the way people find what they love. Our clients want the perfect clothes for their individual preferences—yet without the burden of search or having to keep up with current trends. Our merchandise is curated from the market and augmented with our own designs to fill in the gaps. It’s kept current and extremely vast and diverse—ensuring something for everyone. Rich data on both sides of this 'market' enables Stitch Fix to be a matchmaker, connecting clients with styles they love (and never would’ve found on their own).
Our business model enables unprecedented data science, not only in recommendation systems but also in human computation, resource management, inventory management, algorithmic fashion design, and many other areas. Experimentation and algorithm development are deeply ingrained in everything that Stitch Fix does. We'll describe a few examples in detail as you scroll along.
Submitted Apr 20, 2017 to Science Blogs Every week, new papers on Generative Adversarial Networks (GAN) are coming out and it’s hard to keep track of them all, not to mention the incredibly creative ways in which researchers are naming these GANs! You can read more about GANs in this Generative Models post by OpenAI or this overview tutorial in KDNuggets.
So here’s the current and frequently updated list, which started as a fun activity of compiling all named GANs in this format: name and source paper, linked to arXiv.
Submitted Apr 16, 2017 to Science Blogs A look at the importance of Natural Language Processing by Christopher D. Manning.
Submitted Apr 12, 2017 to Science Blogs In this post we will take you behind the scenes of how we built a state-of-the-art Optical Character Recognition (OCR) pipeline for our mobile document scanner. We used computer vision and deep learning advances such as bidirectional Long Short-Term Memory networks (LSTMs), Connectionist Temporal Classification (CTC), convolutional neural networks (CNNs), and more. We will also dive deep into what it took to actually make our OCR pipeline production-ready at Dropbox scale.
Submitted Apr 11, 2017 to Science Blogs This post walks through the PyTorch implementation of a recursive neural network with a recurrent tracker and TreeLSTM nodes, also known as SPINN—an example of a deep learning model from natural language processing that is difficult to build in many popular frameworks. The implementation I describe is also partially batched, so it’s able to take advantage of GPU acceleration to run significantly faster than versions that don’t use batching.
Submitted Apr 07, 2017 to Science Blogs Standard machine learning approaches require centralizing the training data on one machine or in a datacenter. And Google has built one of the most secure and robust cloud infrastructures for processing this data to make our services better. Now for models trained from user interaction with mobile devices, we're introducing an additional approach: Federated Learning.
Federated Learning enables mobile phones to collaboratively learn a shared prediction model while keeping all the training data on device, decoupling the ability to do machine learning from the need to store the data in the cloud. This goes beyond the use of local models that make predictions on mobile devices (like the Mobile Vision API and On-Device Smart Reply) by bringing model training to the device as well.
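The core idea, devices train locally and the server averages models rather than collecting data, can be sketched for a linear regression model. Everything below (the synthetic per-device data, learning rate, round counts) is an illustrative assumption, not Google's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

def make_device_data(n=50):
    # Synthetic stand-in for one phone's private data.
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.01 * rng.normal(size=n)
    return X, y

devices = [make_device_data() for _ in range(5)]

def local_update(w, X, y, lr=0.1, steps=20):
    # Each device runs gradient descent locally; raw data never leaves it.
    w = w.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

w_global = np.zeros(2)
for _ in range(10):
    # The server only ever sees the locally trained model parameters,
    # which it averages into the next shared model (federated averaging).
    local_models = [local_update(w_global, X, y) for X, y in devices]
    w_global = np.mean(local_models, axis=0)
```

Production federated learning adds secure aggregation, client sampling, and communication compression on top of this averaging loop, but the data-stays-on-device contract is the same.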
Submitted Apr 05, 2017 to Science Blogs Here’s a popular story about momentum [1, 2, 3]: gradient descent is a man walking down a hill. He follows the steepest path downwards; his progress is slow, but steady. Momentum is a heavy ball rolling down the same hill. The added inertia acts both as a smoother and an accelerator, dampening oscillations and causing us to barrel through narrow valleys, small humps and local minima.
This standard story isn’t wrong, but it fails to explain many important behaviors of momentum. In fact, momentum can be understood far more precisely if we study it on the right model. One nice model is the convex quadratic. This model is rich enough to reproduce momentum’s local dynamics in real problems, and yet simple enough to be understood in closed form. This balance gives us powerful traction for understanding this algorithm.
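On the convex quadratic f(w) = ½ wᵀAw − bᵀw, whose minimizer is A⁻¹b, the heavy-ball update is just two lines. A minimal sketch, with an illustrative positive-definite A, step size, and momentum coefficient:

```python
import numpy as np

# Convex quadratic: f(w) = 0.5 w^T A w - b^T w, so grad f(w) = A w - b.
A = np.array([[3.0, 0.0],
              [0.0, 1.0]])  # positive definite, so f is convex
b = np.array([1.0, 1.0])

def grad(w):
    return A @ w - b

w = np.zeros(2)
z = np.zeros(2)            # velocity ("the heavy ball")
alpha, beta = 0.1, 0.9     # step size and momentum coefficient
for _ in range(500):
    z = beta * z + grad(w)   # accumulate gradients with decay beta
    w = w - alpha * z        # step along the accumulated direction

# w converges to the minimizer A^{-1} b
```

With beta = 0 this reduces to plain gradient descent; the larger beta smooths the trajectory across the ill-conditioned axis (eigenvalue 3 versus 1), which is exactly the local dynamics the convex quadratic makes analyzable in closed form.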
Submitted Mar 29, 2017 to Science Blogs You’ve heard it on the news and you’ve read the latest reports. Solar power is projected to become cheaper than coal in about 10 years. Just consider the significant drops in the cost of going solar – since 2009, solar prices have dropped 62%! What was once a far-off solution to lowering your home energy bill has now become a reality in the lives of many homeowners. In fact, DIY residential solar kits are appearing on the shelves of big box stores. As a homeowner, you’re ready to get in on the action! And with a DIY kit, how hard could it be to start saving money on your monthly electric bill? In this article, we’ll cover how to install a home solar energy system and the pros and cons of the DIY method versus hiring professionals.
Submitted Mar 27, 2017 to Science Blogs Graphify is a Neo4j unmanaged extension that provides plug-and-play natural language text classification. Graphify gives you a mechanism to train natural language parsing models that extract features of a text using deep learning. When training a model to recognize the meaning of a text, you can send a piece of text along with a set of labels that describe its nature. Over time the natural language parsing model in Neo4j will grow to identify the features that optimally disambiguate a text to a set of classes. This blog post explains how it works.
Submitted Mar 26, 2017 to Science Blogs Techdirt has just written about ResearchGate, which claims to offer access to 100 million academic papers. However, as we wrote, there's an issue about whether a significant proportion of those articles are in fact unauthorized copies, for example uploaded by the authors in contravention of the agreements they signed with publishers. The same legal issues plague the well-known Sci-Hub site, which may deter some from using it. But as further evidence that the demand for access to millions of still-locked-away academic papers is driving technical innovation, there's a new option called Unpaywall. It's free, available as a pre-release add-on for Chrome (with Firefox promised later), and aims to provide access to every paper that's freely available to read in an authorized version.
Submitted Mar 25, 2017 to Science Blogs The history of computers is often told as a history of objects, from the abacus to the Babbage engine up through the code-breaking machines of World War II. In fact, it is better understood as a history of ideas, mainly ideas that emerged from mathematical logic, an obscure and cult-like discipline that first developed in the 19th century. Mathematical logic was pioneered by philosopher-mathematicians, most notably George Boole and Gottlob Frege, who were themselves inspired by Leibniz’s dream of a universal “concept language,” and the ancient logical system of Aristotle.