Science Blogs
Blogs, magazines, and articles, mostly science and research related.
473 listings
Submitted Dec 05, 2016 to Science Blogs With new neural network architectures popping up every now and then, it’s hard to keep track of them all. Knowing all the abbreviations being thrown around (DCIGN, BiLSTM, DCGAN, anyone?) can be a bit overwhelming at first.
So I decided to compose a cheat sheet containing many of those architectures. Most of these are neural networks; some are completely different beasts. Though all of these architectures are presented as novel and unique, when I drew the node structures… their underlying relations started to make more sense.
Submitted Dec 04, 2016 to Science Blogs Natural language processing and a tiny neural network applied to (what else?) whiskey recommendations.
Submitted Dec 04, 2016 to Science Blogs Many people use Wikipedia as a go-to source of information on a topic. People know that anyone can edit Wikipedia at any time, but generally trust that it is largely accurate and trustworthy because of the large number of vigilant volunteers and the policy of requiring sources for facts. A few years ago, I noticed that the article on the Planck length had some very detailed yet weird and incorrect information that was causing a lot of confusion among physics enthusiasts, and I decided to look into it.
Submitted Dec 03, 2016 to Science Blogs Although extremely useful for visualizing high-dimensional data, t-SNE plots can sometimes be mysterious or misleading. By exploring how it behaves in simple cases, we can learn to use it more effectively.
Submitted Dec 03, 2016 to Science Blogs “F.D.R.’s War Plans!” reads a headline from a 1941 issue of the Chicago Daily Tribune. Had this article been written today, it might rather have said “21 War Plans F.D.R. Does Not Want You To Know About. Number 6 may shock you!”. Modern writers have become very good at squeezing the maximum clickability out of every headline. But this sort of writing seems formulaic and unoriginal. What if we could automate the writing of these headlines, thus freeing up clickbait writers to do useful work?
If this sort of writing truly is formulaic and unoriginal, we should be able to produce it automatically. Using Recurrent Neural Networks, we can try to pull this off.
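The blurb above proposes generating formulaic headlines with Recurrent Neural Networks. As a much simpler stand-in for that idea, a character-level Markov chain already captures the flavor of learned text generation: it memorizes which character tends to follow each short context and samples from those counts. The sketch below is a toy illustration, not the post's method; the corpus, the `ORDER` context length, and all names are invented for the example.

```python
import random
from collections import defaultdict

# Hypothetical stand-in for a scraped clickbait-headline corpus.
corpus = [
    "21 war plans you need to know about",
    "10 secrets that will shock you",
    "7 facts experts do not want you to know",
    "this one trick will change everything",
]

ORDER = 3  # characters of context the model conditions on

# Count which character follows each ORDER-length context.
model = defaultdict(list)
for line in corpus:
    text = "~" * ORDER + line  # "~" pads the start of each headline
    for i in range(len(line)):
        context = text[i:i + ORDER]
        model[context].append(text[i + ORDER])

def generate(max_len=60, seed=0):
    """Sample a headline character by character from the counts."""
    rng = random.Random(seed)
    context = "~" * ORDER
    out = []
    for _ in range(max_len):
        choices = model.get(context)
        if not choices:  # context never seen: stop generating
            break
        ch = rng.choice(choices)
        out.append(ch)
        context = context[1:] + ch
    return "".join(out)

print(generate())
```

An RNN replaces the fixed-length context table with a learned hidden state, which is what lets it pick up longer-range structure than any small `ORDER` can.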
Submitted Dec 03, 2016 (Edited Dec 03, 2016) to Science Blogs There’s something magical about Recurrent Neural Networks (RNNs). I still remember when I trained my first recurrent network for Image Captioning. Within a few dozen minutes of training my first baby model (with rather arbitrarily-chosen hyperparameters) started to generate very nice looking descriptions of images that were on the edge of making sense. Sometimes the ratio of how simple your model is to the quality of the results you get out of it blows past your expectations, and this was one of those times. What made this result so shocking at the time was that the common wisdom was that RNNs were supposed to be difficult to train (with more experience I’ve in fact reached the opposite conclusion). Fast forward about a year: I’m training RNNs all the time and I’ve witnessed their power and robustness many times, and yet their magical outputs still find ways of amusing me. This post is about sharing some of that magic with you.
Submitted Dec 03, 2016 (Edited Dec 03, 2016) to Science Blogs While it is challenging to understand the behavior of deep neural networks in general, it turns out to be much easier to explore low-dimensional deep neural networks – networks that only have a few neurons in each layer. In fact, we can create visualizations to completely understand the behavior and training of such networks. This perspective will allow us to gain deeper intuition about the behavior of neural networks and observe a connection linking neural networks to an area of mathematics called topology.
Submitted Dec 03, 2016 to Science Blogs In the past few months I’ve been fascinated with “Deep Learning”, especially its applications to language and text. I’ve spent the bulk of my career in financial technologies, mostly in algorithmic trading and alternative data services. You can see where this is going.
I wrote this to get my ideas straight in my head. While I’ve become a “Deep Learning” enthusiast, I don’t have too many opportunities to brain dump an idea in most of its messy glory. I think that a decent indication of a clear thought is the ability to articulate it to people not from the field. I hope that I’ve succeeded in doing that and that my articulation is also a pleasurable read.
Submitted Dec 03, 2016 to Science Blogs In recent years, there has been an explosion of research and experiments that deal with Creativity and A.I. Almost every week, there is a new bot that paints, writes stories, composes music, designs objects or builds houses. Projects cover a wide range of disciplines — from machine learning, music, writing, art, fashion to industrial design and architecture. We call this phenomenon “CreativeAI”.
CreativeAI.net is a space to share CreativeAI projects. Countless links to CreativeAI projects are floating around the web, and it’s hard to keep track of these gems. CreativeAI.net attempts to solve this problem by offering a collaboratively curated feed of projects. Each project is conveniently presented as Link + Media + Tags + Discussion. People can submit their findings and let the community discover and discuss them. A regular newsletter makes it easy to stay up to date on recent advancements. It’s free and open.
Submitted Dec 03, 2016 to Science Blogs Ruminations on the teaching of and research in microbiology at a small liberal arts undergraduate institution. Blog of Mark O. Martin, associate professor of Biology at the University of Puget Sound in Tacoma, Washington.
Submitted Dec 02, 2016 to Science Blogs My name is Denny Britz and I am currently a resident on the Google Brain team. I studied Computer Science at Stanford University, where I worked on probabilistic models for NLP, and UC Berkeley, where I worked on a popular cluster-computing framework called Spark.
I started WildML to share my excitement about Deep Learning. I am still learning myself, but I found that writing posts and tutorials is the best way to deepen my own understanding. Topics I am currently excited about are Natural Language Understanding and Reinforcement Learning, so these will account for most of the posts on here.
Submitted Oct 03, 2010 (Edited Nov 28, 2016) to Science Blogs Irving Wladawsky-Berger, a former IBM executive, blogs about technology and computing trends.
Submitted Jul 21, 2010 to Science Blogs Ira Flatow makes science user-friendly every week on NPR's Science Friday. Catch the broadcast online from 2-4 PM (EST) every Friday, follow the podcast, and read the stories covered on the website.
Submitted Jul 19, 2010 to Science Blogs Field of Science (a.k.a. FoS) is a science blogging network that is home to bloggers who are doing actual science and whose blogging is clearly informed by their work.
Submitted Jul 19, 2010 (Edited Jan 15, 2017) to Science Blogs A Joint Tau Zero Foundation and British Interplanetary Society Initiative, Project Icarus is a theoretical design study with the aim of designing a credible interstellar probe that will serve as a concept design for a potential mission that could be launched before the end of the 21st century. Icarus would utilise fusion-based engine technology, which would accelerate the spacecraft to approximately 10% of the speed of light.
Submitted Feb 08, 2010 to Science Blogs Research Blog is where the highest-quality posts from the ResearchGATE community are aggregated to provide a reputable source for news, commentary, research, and innovation. Unique to Research Blog, Microarticles summarize a published article in 306 characters.
Submitted Feb 05, 2010 (Edited Nov 28, 2016) to Science Blogs The blog of a geophysics grad student who has also done research in electrical engineering. Topics covered include science, math, politics, grad school, and family issues.
Submitted Jan 27, 2010 (Edited Jun 27, 2010) to Science Blogs Marmorkrebs is a species of marbled crayfish discovered in the late 1990s. Marmorkrebs are particularly interesting and unusual because they are all females and reproduce asexually. They are also genetically identical. The Marmorkrebs blog is written by Zen Faulkes, a professor in the Department of Biology at the University of Texas - Pan American.