Kalman died at the age of 86. RIP, Kalman.
I came across the following post on nature.com (thanks to Ananda Ghosh). It is the story of a female scientist who was harassed by a senior male colleague. It is a must-read, especially if you are in academia.
The subject of harassment (sexual or otherwise) is not discussed openly in academia. Most academics are not trained at all to handle these situations. But this type of harassment is not rare, largely because there is a huge power imbalance in academia; your supervisors can potentially ruin your career if they want to (and in some cases it also works the other way round). Such an imbalance creates an atmosphere that enables exploitation, and the least we can do is to be aware of it.
This is amazing!
This work (and many other works by DeepMind researchers) shows that reinforcement learning might play an important role in AI. The open question (for me), however, is how to design similar systems that can learn from a small set of independent data (or a large set of dependent data). In particular, can we use machine learning algorithms to sequentially select appropriate data to train intelligent machines?
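One concrete instance of this question is active learning, where the learner itself picks which example to label next. Here is a minimal sketch of pool-based uncertainty sampling — the toy model, data, and function names are my own illustration, not anything from the DeepMind work:

```python
import math

def train_logistic(xs, ys, lr=0.5, epochs=200):
    """Fit a 1-D logistic model p(y=1|x) = sigmoid(w*x + b) by gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            w += lr * (y - p) * x
            b += lr * (y - p)
    return w, b

def most_uncertain(pool, w, b):
    """Uncertainty sampling: pick the pool point whose predicted
    probability is closest to 0.5, i.e. closest to the decision boundary."""
    def uncertainty(x):
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))
        return abs(p - 0.5)
    return min(pool, key=uncertainty)

# Toy task: the true label is 1 when x > 0. Start with two labelled points.
labelled_x, labelled_y = [-2.0, 2.0], [0, 1]
pool = [-1.5, -0.3, 0.1, 0.9, 1.7]

for _ in range(3):                        # three rounds of active learning
    w, b = train_logistic(labelled_x, labelled_y)
    x_new = most_uncertain(pool, w, b)    # query the most informative point
    pool.remove(x_new)
    labelled_x.append(x_new)
    labelled_y.append(1 if x_new > 0 else 0)   # oracle provides the label

print(sorted(labelled_x))
```

The point of the sketch is the selection rule: instead of labelling data at random, the learner spends its labelling budget near the decision boundary, which is exactly the "sequentially select appropriate data" idea in miniature.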
I read this article (on Mark Crowley’s twitter feed) about a ‘dramatic genetic discovery’ about schizophrenia. My favourite line from the article is the following:
“This could not have been done five years ago,” said Hyman. “This required the ability to reference a very large dataset. … When I was [NIMH] director, people really resisted collaborating. They were still in the Pharaoh era. They wanted to be buried with their data.”
I strongly agree. I still see this trend in neuroscience and other medical fields; most researchers are reluctant to share their data. To tell you the truth, it makes sense: they probably invested a lot in collecting the data, and they would like to exploit it themselves before giving it out to the world.
But, this is obviously harmful to science. Data-science experts rarely get access to good medical data, and therefore may not develop methodologies useful to scientific fields. Instead, we continue to solve problems that involve selling movies and recommending restaurants. Not that there is anything wrong with that :), but many of us prefer to work on applications that are a bit more useful to the world.
Here is an amazing article by Neil deGrasse Tyson on “what science is and how and why it works”. I heard this from somebody a while ago: “Science is not a belief system; it is a method of investigation.”
I am happy that I no longer have to explain this myself; now I can simply refer people to this article! Thanks, Neil.
A very interesting article about how machine learning is changing economics by focusing more on data-driven methods than on math-driven philosophies. The article is quite balanced in its view: machine learning faces difficulties in finding causal relationships, but it still might be more useful than ‘unverified’ conjectures.
Stolen from Cedric Archambeau’s Facebook post.
A nice article by Guy Lebanon on the negative impact of machine learning on society.
Another related article that talks about inappropriate applications of machines that do more harm to society than good.
A beautiful article written by Michael Jordan and Tom Mitchell. Machine learning is growing fast and gaining popularity. It is easy for one to get lost and lose the big picture. I highly recommend this article to regain focus. It is also a great article for absolute beginners to get the right perspective about machine learning.
Here is a reading guide. ML stands for Machine Learning here.
- The section on “Drivers of ML” talks about the Big Data revolution and how it makes ML an important tool for making use of Big Data.
- The section on “Core methods and recent progress” provides a good summary of current ML methods. It talks about three main categories: supervised, unsupervised, and reinforcement learning. Some popular examples are also included.
- The section on “Emerging trends” talks about some new applications of ML.
- The section on “Opportunity and challenges” talks about some issues, e.g. the big gap between human learning and machine learning, the impact of ML methods on society, and the inadequacy of current laws to handle the consequences.