Other things

Stuff that I think is worth sharing

Ultracrepidarianism

The following is copied directly from this RationalWiki page, a good read criticising people who think they know everything.

Ultracrepidarianism is the tendency for people to confidently make authoritative pronouncements in matters above or outside their level of knowledge. Often, those pronouncements fall entirely outside the ultracrepidarian's realm of legitimate expertise.

Another expression of ultracrepidarianism, as instantiated by those with actual expertise in something, is the tendency to start treating all other fields as somehow being sub-categories of their own field.

Epistemologists say “it's all epistemology in the end”, mathematicians say “it's all mathematics in the end”, physicists say “it's all physics in the end”, psychologists say “it's all psychology in the end” (et cetera), and they then proceed to apply their methods to a completely different field which they hardly realize they don't understand.

The lesson is: being an expert means being an expert at something — and “something” is specific, not universal. In other words, various forms of expertise are not interchangeable.

“Don't assume that because somebody has one intellectual skillset, they have another — that those tools apply to all types of intelligence, thinking or claims. They don't.”

Machine cramming

The following is copied directly from regularize, a good read criticising the term “machine learning”.

… let us look at how humans learn for a very specific task: preparing for an exam. … Here are some observations of how the students tried to learn: … the students asked for information on the “training data”. They were studying heavily on exercises, barely using the textbook or lecture notes to look up theorems or definitions (in other words, some were working “model free” or with a “general purpose model” which says something like “do computations following general rules”). They worked with the underlying assumption that the exam is made up of questions similar to the exercises (… in other words, the test data comes from the same distribution as the training data).

Viewed like this, learning of humans (for an exam, that is) and machine learning sound much more similar. … some known problems with machine learning methods can be observed with the students: Students get stuck in local minima (they reach a point where further improvement is impossible by revising the seen data – even though they could, in principle, learn more from the given exercises, they keep going down the known paths, practicing the known computations, not learning new techniques). Students overfit to the training data (on the test data, aka the exam, they face new problems and oftentimes apply the learned methods to tasks where they don't work, getting wrong results which would be correct if the problem were a little different). The trained students are vulnerable to adversarial attacks (for every problem I posed as an exercise, I could make a slight change that would confuse most students). Also, similar to recent observations in machine learning, overparametrization helps to avoid overfitting and spurious local valleys, i.e. when the students have more techniques at hand, which corresponds to a more flexible machine learning method, they do better on unseen data and do not get stuck at bad local minima where no improvement is possible.
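
The overfitting point maps onto the textbook curve-fitting picture. Below is a minimal sketch, not taken from the quoted post and assuming only Python with numpy: a flexible polynomial “student” reproduces its “exercises” (the training points) almost perfectly yet does poorly on “exam questions” drawn from the very same distribution. All names in it are illustrative.

```python
# Illustrative sketch (not from the quoted post): polynomial "students" of
# increasing flexibility cram on a few noisy "exercises" and are then tested
# on fresh "exam questions" from the same distribution.
import numpy as np

rng = np.random.default_rng(0)

def sample(n):
    # Exercises and exam questions both come from the same noisy sine curve.
    x = rng.uniform(0.0, 1.0, n)
    y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=n)
    return x, y

x_train, y_train = sample(10)   # the exercises seen before the exam
x_test, y_test = sample(100)    # the unseen exam questions

for degree in (1, 3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)   # "cramming" on the exercises
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_err:.4f}, test MSE {test_err:.4f}")
```

With only ten training points, the degree-9 fit typically drives the training error to nearly zero while the test error blows up: the analogue of acing the exercises and then failing exam questions that are only slightly different.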

… in conclusion, I’d advocate replacing the term “machine learning” with “machine cramming” (the German version would be maschinelles Büffeln or maschinelles Pauken).