Update: if you’re looking for an academic distinction, see: The Difference between Risk and Uncertainty. In the post below, we criticize firms for not changing their practices despite the recent failures of their estimations, methodologies, and models. Of course, we think that both are worth reading.
Every Monday morning for the past several years, we’ve received an e-mail from http://jobs.phds.org that lists available positions according to our specifications, which are:
“Send Weekly emails containing jobs…
…for PhDs in: Business / Finance / Economics
…of types: Contract / Project / Temporary, Employee, Non-tenure-track faculty, Postdoctoral researcher, Tenure-track / tenured faculty
…in sectors: all
…located: in United States
…with keywords: none
Typically, there are 20–40 positions listed each week, and most of those involve quantitative finance, usually in the NYC area. For the past year or so, we’ve been particularly interested to see if the job descriptions would change given the failure of many quantitative trading strategies, modeling techniques, and risk measures. (Yeah, we know they didn’t actually “fail.” Recent results were just plain bad luck that no one could have predicted. The models worked perfectly, except when they didn’t.)
Unfortunately, our parenthetical sarcasm seems to be the implicit position of many financial firms–without the sarcasm, of course. We say this because we haven’t observed any change in the posted job descriptions in the jobs.phds.org emails or in any of the others that we receive from recruiters who regularly send similar descriptions.
Now, we’ve been meaning to write about this observation for a few months but were finally motivated to do so because of several other items we read this morning, including two opinion columns and one article.
The article, Computer-Trading Models Meet Match in the Wall Street Journal, describes how several algorithmic-based hedge funds have lost money recently because of “the recent high volatility.” So, we guess their models aren’t flawless.
One of the op-ed pieces is by L. Gordon Crovitz, and it is also in the Journal: In Finance, Too, Learning Entails Risk. In it, Mr. Crovitz attempts to relate “financial engineering” to other types of engineering, e.g., mechanical engineering, and he seems to imply that it’s still a young discipline; so, give it time. We think, though, that his argument ultimately fails to convince.
That’s because “financial engineering” isn’t really engineering, which we’d define as the thoughtful application of science or technology to (or in) a well-understood, physical environment. Finance is a subset of a social “science.”
Mr. Crovitz writes in his last paragraph that: “The measure of innovators is not in the mistakes they make, but in the lessons they learn. We now know that our complex markets need better models, which should include more humility, acknowledging that some risks are still too uncertain to measure and should be avoided.” We’d argue with the “still too” in the last sentence, as we doubt that such social uncertainty can be resolved or precisely measured. (Incidentally, we also disagree with his conclusion in that sentence that “some risks…should be avoided.” We have no problem with folks taking wild or uncertain gambles; however, we see no reason that we should subsidize their losses when those gambles go bad.)
To his main point, however, we don’t see much learnin’ goin’ on. It seems to be business as usual at many firms and funds.
A much more critical op-ed piece is by Michael Barone, entitled ‘Formulas’ for certain failure; his first sentence is “Beware of geeks bearing formulas.” He discusses (and criticizes) financial models, global warming/climate change models, and health-care models, and it reads much like our post from six months ago, Global Warming and the Mortgage Crisis. Remember that this is Michael Barone, who is very well known for using statistical data in the analysis of politics and demographics.
As usual, we point new readers to our essay, Uncertainty Management, which details our perspective and philosophy on these issues, as well as any number of related posts: see our blog archives. The main point is that not all uncertainty is measurable, i.e., that measurable uncertainty, or risk, is a proper subset of uncertainty and unknowing. (In other words, specific mathematical conditions must be met for uncertainty to be risk. So, uncertainty is the more general term: all risk involves uncertainty, but not everything that is uncertain is risky, because not all uncertainty is measurable, which has a specific mathematical definition.)
As we read the evidence, many institutions and their ‘quants’ will continue to solve mis-specified risk problems, because they don’t know how to treat more diffuse and difficult uncertainty problems; so, they assume them away and treat them as risk problems. We’re clearly not underestimating the difficulties these folks face nor the necessity of making trade-offs, but we’re not sure if they understand the nature of the problem or trade-off. As we’ve written many times before, if they don’t understand them, then they are ignorant, and if they do, then they are cynical; see, e.g., Our Eternal Question: Cynical or Naive? Neither characteristic is appealing or useful.
Ignoring the larger epistemological issues and the problem of induction, here’s a simple example of the difficulty of making inferences and finding useful information. Even when a distribution can be perfectly known, its moments–like the mean and variance–need not exist. (Look at the Cauchy distribution and, more generally, certain stable distributions.) While one can calculate historical means and variances from a time series, those “estimates” may be nonsensical. (They can’t estimate something that doesn’t exist.) The arithmetic can be performed, but the notion is empty.
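To make the point concrete, here’s a minimal sketch (ours, not from the original post; the function names are hypothetical) that draws standard Cauchy variates and tracks the running sample mean. For any distribution with a finite mean, the running mean would settle down by the law of large numbers; for the Cauchy, whose mean does not exist, it never does–one extreme draw can shift the “estimate” arbitrarily far, no matter how much data came before.

```python
import math
import random

def cauchy_sample(rng):
    """Draw one standard Cauchy variate via the inverse CDF:
    X = tan(pi * (U - 1/2)), where U is uniform on [0, 1)."""
    return math.tan(math.pi * (rng.random() - 0.5))

def running_means(n, seed=None):
    """Return the sample mean after 1, 2, ..., n Cauchy draws.

    The arithmetic always produces a number, but the number estimates
    nothing: re-run with different seeds and the 'mean' after many
    thousands of draws can land almost anywhere.
    """
    rng = random.Random(seed)
    total = 0.0
    means = []
    for i in range(1, n + 1):
        total += cauchy_sample(rng)
        means.append(total / i)
    return means
```

Plot `running_means(100_000)` for a few different seeds and compare it to the same exercise with Gaussian draws: the Gaussian running mean flattens out, while the Cauchy one keeps lurching. That is the gap between performing the calculation and estimating something real.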
As we see it, too often if one has a (risk) hammer, then everything looks like a (risk) nail, and it’s easy to pound away, especially when the alternative is to admit that a solution doesn’t exist, which too often sounds like, “I don’t know.” So, while various numbers can be calculated–even calculated very precisely, earnestly, and diligently–to do so is to apply technology, but it’s not engineering, nor is it very smart, and it can be very harmful.