Reviews of publications on Info-Gap decision theory

Review # 13 (Posted: February 19, 2010; Last update: February 25, 2010)
Reference: Tracy M. Rout, Colin J. Thompson, and Michael A. McCarthy
Robust decisions for declaring eradication of invasive species
Journal of Applied Ecology 46, 782–786, 2009.

Summary

1. Invasive species threaten biodiversity, and their eradication is desirable whenever possible. Deciding whether an invasive species has been successfully eradicated is difficult because of imperfect detection. Two previous studies [Regan et al., Ecology Letters, 9 (2006), 759; Rout et al., Journal of Applied Ecology, 46 (2009), 110] have used a decision theory framework to minimize the total expected cost by finding the number of consecutive surveys without detection (absent surveys) after which a species should be declared eradicated. These two studies used different methods to calculate the probability that the invasive species is present when it has not been detected for a number of surveys. However, neither acknowledged uncertainty in this probability, which can lead to suboptimal solutions.
2. We use info-gap theory to examine the effect of uncertainty in the probability of presence on decision-making. Instead of optimizing performance for an assumed system model, info-gap theory finds the decision among the alternatives considered that is most robust to model uncertainty while meeting a set performance requirement. This is the first application of info-gap theory to invasive species management.
3. We find the number of absent surveys after which eradication should be declared to be relatively robust to uncertainty in the probability of presence. This solution depends on the nominal estimate of the probability of presence, the performance requirement and the cost of surveying, but not the cost of falsely declaring eradication.
4. More generally, to be robust to uncertainty in the probability of presence, managers should conduct at least as many surveys as the number that minimizes the total expected cost. This holds for any nominal model of the probability of presence.
5. Synthesis and applications. Uncertainty is pervasive in ecology and conservation biology. It is therefore crucial to consider its impact on decision-making; info-gap theory provides a way to do this. We find a simple expression for the info-gap solution, which could be applied by eradication managers to make decisions that are robust to uncertainty in the probability of presence.
Acknowledgement Many thanks to Yakov Ben-Haim and Dane Panetta for helpful advice and discussions. Also thanks to Michael Bode, Mark Burgman, Peter Baxter and three anonymous reviewers for comments on this manuscript. This research was supported by an Australian Postgraduate Award, the Commonwealth Environment Research Facility (AEDA), the Australian Centre of Excellence for Risk Analysis and an Australian Research Council Linkage Grant to MMcC (LP0884052).

Scores

TUIGF: 100%
SNHNSNDN: 100%
GIGO: 90%
Overview
My decision to review this article was triggered by the discovery that the info-gap model proposed in the paper as a framework for the modeling, analysis, and solution of the problem under consideration is not in fact a "proper" info-gap model. Or, to put it more accurately, strictly speaking the proposed model is not an info-gap model because it violates the Nesting Axiom of info-gap decision theory.
But as I began to examine the paper more carefully, it turned out that there are other matters that require critical comment.
I wish to point out, though, that at present I am working on a number of more urgent projects, so the amount of time I can devote to this review is quite limited. I shall therefore have to come back to it from time to time to enlarge on the current discussion.
This is a draft of Work in Progress (Feb 25, 2010). At this stage I focus on the following points:
- The Role of the Nesting Axiom in Info-gap Decision Theory
In view of the fact that the proposed model violates the Nesting Axiom and considering that the authors do not supplement it with a substitute, the role of this axiom in the theory must be reassessed.
If there is no need for the Nesting Axiom, namely if the Axiom is superfluous, then how is it that it serves as the hallmark of info-gap's uncertainty model? And, if this axiom is required by the theory, then on what grounds do the authors establish the validity of their proposed model?
Apparently, the authors are unaware that their proposed model violates the Nesting Axiom; otherwise, how is it that they do not mention this fact in the paper?
On February 3, 2010 I proposed that Yakov Ben-Haim explain the importance/significance of the Nesting Axiom in info-gap decision theory. So, at this stage I leave it at that. If we do not hear from Ben-Haim within a couple of months, I shall come back to this matter at a later stage to give my explanation of it.
So stay tuned.
- Errata
The mathematical analysis presented in the paper runs into a number of technical difficulties. As a consequence, some of the main results reported on in the paper are, strictly speaking, unsubstantiated, and some claims are incorrect.
- The problem under consideration is trivial
Both from a mathematical point of view and a decision theory point of view, the problem considered in this paper is trivial, so much so that it can basically be solved by inspection.
- The 64K$ question
The question obviously is: why turn to info-gap decision theory as a framework for the modeling, analysis and solution of the problem if it can be solved by inspection?!
My point in raising this question is as follows: not only is the use of an info-gap analysis totally uncalled for in this case, it is in fact counter-productive. This is so because the info-gap treatment encumbers a simple problem with a nomenclature that is totally alien to it, so that in the end the utter simplicity of the problem is obscured and the task it sets becomes muddled.
Stay tuned ....
- The Continuing Maximin Saga
The paper adopts the usual SNHNSNDN approach -- present in most info-gap publications -- regarding the relation between Info-Gap's robustness model and Wald's Maximin model. There is nothing in the paper to give even the slightest indication that Info-Gap's robustness model is in fact a Maximin model and this in spite of the fact that Bryan Beresford-Smith and Colin J. Thompson (2009) now concede that Info-Gap's robustness model is a Maximin model (see Review 11). The discussion in the paper is conducted in a manner where info-gap is treated as though it were a distinct method in the discipline of decision theory, disregarding the fact that it is a simple instance of Wald's Maximin model.
- The Ongoing Severe Uncertainty Saga
The explanation given in the paper (presumably intended to explain away the difficulties that info-gap decision theory encounters in dealing with severe uncertainty, more specifically poor point estimates) is flawed.
The result is that the rhetoric does not match the practice in info-gap decision theory. As for the info-gap decision model proposed in the paper, it can be contemplated if the point estimate is good. If the true value of the parameter is subject to considerable uncertainty, then the validity of the proposed model is highly questionable.
- The State of the Art Saga
The article continues to perpetuate the position that info-gap is in some way different from all other decision theories for decision under uncertainty. As usual in info-gap publications, not a single reference is made to the area of Robust Optimization and to the literature of this discipline; and this in spite of the fact that the article is concerned with a robust optimization problem.
The result is that the paper gives a totally distorted view of the state of the art and thus misinforms the readership of the Journal about the methods available for the treatment of problems of the type examined in the paper.
Unsubstantiated and/or misleading statements
I discuss only a few of these statements.
I now turn to a more detailed discussion of these points.
Errata
The mathematical analysis in the paper is not sufficiently careful. As a consequence, some of the results are incorrect. Some of the assertions may eventually prove to be correct, but the arguments supporting them are incorrect and/or incomplete.
For the purposes of this discussion it is sufficient to mention the following:
- Convexity and minimization
The authors argue (p. 785) that (color is mine):
That is, the claim is that for the problem
min {(d-1)Cs + û(d)Ce: d ≥ 1}

to have an optimal solution, the nominal model of the probability of presence, namely û(d), must be a convex function of d.
This assertion is patently incorrect.
For instance, it follows by inspection that if û(d) is "smooth" and its derivative is greater than -Cs/Ce for all d > c, for some c ≥ 1, then an optimal solution exists even if û(d) is not convex in d. Here is a concrete counterexample:
And just in case you do not see that û is not convex, here is the Chord test:
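The point can also be checked numerically. The sketch below uses a made-up non-convex û of my own choosing (not the nominal model used in the paper) and hypothetical costs Cs and Ce: the objective (d-1)Cs + û(d)Ce grows without bound as d grows, so a minimizer over [1,∞) exists even though û fails the chord test for convexity.

```python
import math

Cs, Ce = 1.0, 50.0  # hypothetical survey and escape costs

def u_hat(d):
    # Hypothetical, deliberately non-convex "probability of presence":
    # a decaying envelope modulated by an oscillation, kept inside [0, 1].
    return 0.7 * math.exp(-0.3 * d) * (1 + 0.4 * math.sin(2 * d))

def objective(d):
    return (d - 1) * Cs + u_hat(d) * Ce

# Chord test: find a mid-point lying strictly above the chord,
# which certifies that u_hat is NOT convex.
non_convex = False
a = 1.0
while a < 20.0:
    m, b = a + 1.0, a + 2.0
    if u_hat(m) > (u_hat(a) + u_hat(b)) / 2 + 1e-9:
        non_convex = True
        break
    a += 0.1

# Nevertheless a minimizer over d >= 1 exists (grid search on [1, 30]).
grid = [1 + i * 0.001 for i in range(29001)]
d_best = min(grid, key=objective)

print("u_hat non-convex:", non_convex)
print("minimizer d* ~ %.3f, objective ~ %.3f" % (d_best, objective(d_best)))
```

The minimum exists here because û is bounded below by 0, so the linear survey-cost term eventually dominates; convexity plays no role in that argument.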
Since the above argument is invoked by the authors to show that the robust-optimal solution is at least as large as the solution that minimizes the expected cost, the argument has actually not been clinched, so the conclusion based on it, namely (p. 782)
4. More generally, to be robust to uncertainty in the probability of presence, managers should conduct at least as many surveys as the number that minimizes the total expected cost. This holds for any nominal model of the probability of presence.

remains unsubstantiated in the context of the uncertainty model specified by eqn 3.
- Main results
The main results, namely (p. 782)
3. We find the number of absent surveys after which eradication should be declared to be relatively robust to uncertainty in the probability of presence. This solution depends on the nominal estimate of the probability of presence, the performance requirement and the cost of surveying, but not the cost of falsely declaring eradication.

4. More generally, to be robust to uncertainty in the probability of presence, managers should conduct at least as many surveys as the number that minimizes the total expected cost. This holds for any nominal model of the probability of presence.
are invalid.
In particular, they are not valid in the context of the uncertainty model specified by eqn A2 in Appendix S2. Here is a concrete counterexample to point 4.
Note that the decision (d*) that minimizes the net expected cost is strictly greater than the decision (d^) that maximizes the robustness function associated with the uncertainty model specified by eqn A2 in Appendix S2.
- First-order optimality conditions
The authors seem to operate under the assumption that optimal solutions of (constrained) optimization problems can be obtained simply by equating the derivative of the objective function to zero. This is a common theme in the paper and the two supplementary documents. They thus ignore the possibility that the optimal solution is the boundary point d=1.
For example, consider the claim that the optimal solution to the problem of minimizing the expected total cost, namely
min {(d-1)Cs + û(d)Ce: d ≥ 1}

is obtained by equating the derivative of the objective function to zero.
Note that although the objective function is convex, if Ce/Cs < -1/(r ln(r)), then the derivative of the objective function is strictly positive for all d ≥ 1. Thus the derivative does not vanish on [1,∞); it vanishes at a value of d that is smaller than 1. By inspection, in this case the optimal solution is d=1.
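This boundary effect is easy to verify numerically. The sketch below assumes, for illustration only, a geometric nominal model û(d) = r^d (which appears to be the form underlying the ln(r) expressions, but is my assumption here) and picks costs with Ce/Cs below the threshold -1/(r ln r):

```python
import math

Cs, Ce, r = 1.0, 2.0, 0.5   # here Ce/Cs = 2 < -1/(r*ln r) ~ 2.885

def objective(d):
    # total expected cost (d-1)Cs + û(d)Ce with the assumed û(d) = r^d
    return (d - 1) * Cs + Ce * r ** d

# Stationary point of the (convex) objective: Cs + Ce*ln(r)*r^d = 0
d_stationary = math.log(-Cs / (Ce * math.log(r))) / math.log(r)

# Constrained minimum over d >= 1 by grid search on [1, 20]
grid = [1 + i * 0.001 for i in range(19001)]
d_best = min(grid, key=objective)

print("stationary point d ~ %.3f (below 1!)" % d_stationary)
print("constrained minimizer d* = %.3f" % d_best)
```

With these numbers the stationary point lies near d = 0.47, outside the feasible set, and the constrained minimizer is the boundary point d = 1, exactly as claimed above.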
For the same reason, the robust-optimal solution is not always at a point where the derivative of the robustness function is equal to zero. In particular, for the case considered in the paper, if Nc/Cs is smaller than -1/ln(r), then the derivative of the robustness function is equal to zero for a value of d that is smaller than 1. In short, in this case the robust-optimal decision is d=1, and the derivative of the robustness function is not equal to zero there. Here is a concrete counterexample:
Here α*=0.113 is the robustness of the robust-optimal decision, d^=1, and α'=0.228 is the robustness of the (infeasible) decision, d'=-1.92, where the derivative of the robustness function is equal to zero.
Thus, strictly speaking, eqn 5 and eqn 6 in the paper are not entirely correct and should be refined.
In a word, the results presented in this paper require corrections. This can be done either by imposing additional conditions on the problem so as to validate the results, or by re-working the mathematical analysis properly to obtain results that are valid under the assumptions postulated in the paper.
The problem
It is important to be clear on what I mean when I say that the problem studied in this paper is a trivial problem.
Obviously, by this I do not want to suggest that the problem is unimportant in the disciplines of applied ecology, conservation biology, and so on. Indeed, the problem may have enormous consequences for applied ecology and conservation biology.
The point I am making is entirely different. My point is that when the "real world" situation that is investigated in this paper is cast as a decision-making problem and given a mathematical formulation, it becomes immediately clear that the mathematical problem is trivial in the sense that its solution flows (more or less) directly from its mathematical statement. As a matter of fact, the essential part of the problem can be solved by inspection.
Granted, you may contend that this does not argue against using info-gap for this purpose. Indeed, why not use info-gap to solve a problem that readily lends itself to solution by inspection? To which I would say, paraphrasing a reply once given by the renowned mathematician Richard Bellman (founder of dynamic programming): you may just as well tie a chair to your leg and attempt to swim across the river in this state.
My point is then that not only is the use of info-gap as a framework for modeling, analysis and solution totally superfluous in this case; it is in fact counter-productive, if not harmful, because it gives a distorted picture of the problem in question. Worse, it gives a totally distorted picture of the real issues that we face in robust decision-making in the face of uncertainty.
To set the stage for the discussion, recall that the elements of the eradication problem under consideration are as follows:
- d = number of absent surveys, namely number of consecutive surveys in which the species is not found (decision variable).
- Cs = cost of one survey.
- Ce = expected cost of escape and damage (the expected cost of falsely declaring eradication).
- Nc = Upper limit on the total net expected cost of declaring eradication (budget).
- u(d) = probability that the species is still present after d absent surveys: its true value is unknown and is subject to considerable uncertainty.
- û(d) = estimate of the true value of u(d).
- NEC(d) = (d-1)Cs + uCe: the net expected cost for decision d if the true value of u(d) is u.
To simplify the notation, we assume (with no loss of generality) that Cs=1, and we let E=Ce, and C=Nc. Also assume that C is an integer and let k=C+1.
So,
Statement of the problem:
Find the most robust decision d in D={1,2,3,...,k} against the uncertainty in the true value of u ∈[0,1] with respect to the performance requirement
(1) d + uE ≤ C+1

where E and C are given positive numeric scalars, typically much greater than 1. The value of k is relatively small, say much smaller than 1000.
Note that because D is a discrete set, this problem is a discrete optimization problem.
Finally, it is important to stress that the uncertainty in the true value of u is assumed to be "considerable" (page 783) and that no probabilistic or likelihood assessments of this uncertainty are provided.
The essence of the problem
Roughly, in this context, decision d is robust if it satisfices the performance requirement d + uE ≤ C+1 for a large set of values of u∈[0,1]. The larger this set, the more robust d is. That is, we are searching for a decision whose feasible region of u, namely
(2) F(d) := {u∈[0,1]: d + uE ≤ C+1} , d∈D

is large. The larger this region is, the more robust the decision.

So, the point to note here is that the robustness that this problem aims to determine is not merely robustness against the uncertainty in the probability of presence. Rather, the idea here is to identify decisions that are robust against the impact of the uncertainty in the true value of this parameter on the budgetary constraint stipulated in (1).
Admittedly, as things stand, the problem is not sufficiently defined. That is, we still lack a formal definition of 'robustness'. Also, information is required regarding the nature of the uncertainty in the true value of u. Note that the latter affects the former.
And yet, despite all this, it is clear that this problem is trivial. To wit, it involves:
- Only one performance requirement.
- Only one uncertain parameter.
- A performance function that is linear with this parameter.
Thus, from the perspective of robust optimization, this is a "trivial problem".
But more than this, one need not even be a "professional" mathematician or an expert in robust optimization to figure out that
(3) F(d) = [0, u*(d)] , d∈D

where

(4) u*(d) := (C+1-d)/E

This follows directly (by inspection) from the definition of the performance requirement (1). For obvious reasons, we shall refer to u*(d) as the critical value of u associated with decision d.
It is immediately clear that the critical value of u would be pivotal in the definition of robustness against the uncertainty in the true value of u.
So:
- If the uncertainty is truly severe in the sense that we have no clue as to the true values of u, then we can argue that the most robust decision is that whose critical value is the largest. In other words, under extremely severe uncertainty, u*(d) would be the measure of robustness of decision d.
- We expect the robustness function to be increasing with u*(d).
In short, as can be gathered from the above, the formal definition of robustness would express "adjustments" made in these critical values in relation to what we know about the uncertainty in the true value of u.
The solution
Having clarified the issues that bear on the solution of the problem under consideration, we can now outline its solution procedure. To this end let ρ(d) denote the robustness of decision d against the uncertainty in the true value of u.
Solution Procedure
- Step 1: Compute the critical values u*(d) for all d∈D.
- Step 2: Define a suitable robustness measure, ρ(d).
- Step 3: Find the decision in D that maximizes ρ(d) over D.
Note that the first step is easy: we compute the critical values according to the formula given in (4). The third step is also easy because the set of feasible decisions D is small, hence we can conduct the maximization of ρ(d) over D by enumeration. This means that a spreadsheet can be easily set up to solve this problem.
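In place of a spreadsheet, a few lines of Python do the job. The figures below (C=5, E=10, an estimate û(d) = r^d with r=0.5, and the margin-type robustness measure ρ(d) = u*(d) - û(d)) are hypothetical, chosen only to walk through the three steps:

```python
C, E, r = 5, 10.0, 0.5
k = C + 1                        # decision set D = {1, ..., k}

def u_crit(d):
    # Step 1: critical value of u, formula (4)
    return (C + 1 - d) / E

def u_hat(d):
    # hypothetical nominal estimate of the probability of presence
    return r ** d

def rho(d):
    # Step 2: a margin-type robustness measure (one of many possible choices)
    return u_crit(d) - u_hat(d)

# Step 3: maximize rho over D by plain enumeration
d_best = max(range(1, k + 1), key=rho)

for d in range(1, k + 1):
    print(d, round(u_crit(d), 4), round(rho(d), 4))
print("most robust decision:", d_best)
```

With these made-up numbers the enumeration selects d = 3; any other definition of ρ slots into Step 2 without changing the procedure, which is precisely why Step 2 is the only real issue.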
Of course, in cases where the maximization of ρ(d) over D={1,...,k} is also amenable to solution by analytic methods, it might be possible to obtain a closed-form solution for the continuous counterpart of the (discrete) problem that is under consideration. This would depend on the definition of robustness used.
What emerges therefore is that the real issue is in Step 2: the definition of the robustness function ρ=ρ(d).
Robustness against uncertainty
Let us examine a number of cases, representing various degrees of uncertainty in the true value of u. These cases refer to an estimate of the true value of u. Since this estimate may depend on d, we denote it û(d), d∈D.
Case 1: Certainty.
In this case we assume that the estimates û(d),d∈D are "perfect" and are equal to the respective true values of u. Thus, there is no uncertainty in the true value of the parameter.
This means that any decision is either thoroughly robust or utterly fragile. That is, d is thoroughly robust iff û(d) ≤ u*(d). Otherwise it is utterly fragile (infeasible).
Conclusion:
d is robust iff û(d) ≤ (C+1-d)/E
d is fragile iff û(d) > (C+1-d)/E
Note that in this extreme situation -- a perfect estimate -- there is no need even to use the solution procedure outlined above. We simply solve:
(5) min {d + û(d)E: d∈D}

If the estimates û(d) are given by a nice smooth formula, it may be possible to identify robust decisions analytically by minimizing the expression d + û(d)E over [1,k], namely
(6) min {d + û(d)E: d∈[1,k]}

If the optimal value of this expression is not greater than C+1, then the optimal value of d is the most robust decision. If this value is not a positive integer, then the nearest integer neighbors will have to be compared. And if the optimal value of this expression is greater than C+1, then all the decisions are fragile.
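For a concrete feel of the integer-neighbor comparison, suppose (my assumption, for illustration) a smooth estimate û(d) = r^d, so the continuous minimizer of (6) has the closed form d = ln(-1/(E ln r))/ln r:

```python
import math

E, r, C = 20.0, 0.5, 6           # hypothetical figures
k = C + 1

def g(d):
    # the objective of (6): d + û(d)E with the assumed û(d) = r^d
    return d + E * r ** d

# Continuous minimizer, from the first-order condition 1 + E*ln(r)*r^d = 0
d_cont = math.log(-1.0 / (E * math.log(r))) / math.log(r)

# Compare the nearest integer neighbours, clipped to D = {1, ..., k}
cands = {min(max(math.floor(d_cont), 1), k),
         min(max(math.ceil(d_cont), 1), k)}
d_best = min(cands, key=g)

robust = g(d_best) <= C + 1      # the feasibility test from the text

print("continuous minimizer ~ %.3f" % d_cont)
print("best integer decision:", d_best, "| value:", g(d_best), "| robust:", robust)
```

Here the continuous minimizer falls near 3.79, the neighbor d = 4 wins the comparison (value 5.25), and since 5.25 ≤ C+1 = 7, that decision is robust.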
Case 2: The estimates are very good
If the estimates û(d),d ∈ D, are very good, we would argue that it is in fact unnecessary to explore values of u that are substantially different from (much smaller or larger than) these estimates.
So, the following definition of robustness would be appropriate:

(7) ρ(d) := u*(d) - û(d) , d∈D

That is, the robustness of decision d is the "distance" between the estimate and the sub-region of [0,1] where u violates the performance requirement for this decision. For d to be robust, this distance should be large. The larger this distance, the greater the robustness.
Thus, to find the optimal decision, we solve the following optimization problem:
(8) max {ρ(d): d∈D} = max {[C+1-d - û(d)E]/E: d∈D}

which is equivalent to
(9) min {d + û(d)E: d∈D}

which is equivalent to the problem in Case 1: Certainty.
Case 3: The Estimates are very poor
In this case the estimates can be substantially wrong, namely no more than wild guesses, or perhaps just rumors. It may therefore be best to ignore them altogether.
Under the circumstances we may let the robustness ρ(d) be defined as follows
(10) ρ(d) := u*(d) , d∈D

By inspection, in this case the most robust decision is d=1. Its robustness is equal to C/E. Note that if E ≤ C, then d=1 is super-robust: it satisfies the performance constraint for all u∈[0,1].
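Simple as it is, a few lines confirm both claims (the figures are hypothetical, with E ≤ C so as to exhibit super-robustness):

```python
C, E = 5, 4.0                        # hypothetical figures with E <= C
k = C + 1

def u_crit(d):
    # rho(d) = u*(d), as in (10)
    return (C + 1 - d) / E

# The critical value is largest at the smallest d, so d = 1 wins.
d_best = max(range(1, k + 1), key=u_crit)

# Super-robustness check: d = 1 meets d + uE <= C+1 even at the worst u = 1.
worst_case_ok = 1 + 1.0 * E <= C + 1

print("d^ =", d_best, "| rho =", u_crit(1), "| worst-case feasible:", worst_case_ok)
```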
Again, the situation is so simple that we need not even use the procedure outlined above.
Remarks:
- Should you decide to incorporate the estimates in the definition of robustness knowing that the estimate is very poor, then make sure that you do not give the estimates too much 'weight'.
- And do not forget to examine the impact of errors in the estimates on the results.
The Moral of the Story
We can continue in this vein: formulating "sensible" definitions of robustness for our small problem ad infinitum ...
For example, we may want to "scale" the critical values u*(d),d∈D by dividing them by the corresponding estimates û(d),d∈D; or we may even want to "normalize" the critical values and consider [u*(d)-û(d)]/û(d) as the measure of robustness of decision d -- as done in the paper. Note that these two alternatives are equivalent.
More generally, we can incorporate "weights", call them w(d), d∈D, to refine the robustness function ρ. In particular, in cases where the estimates are very good, we can let
(11) ρ(d) := [u*(d) - û(d)]/w(d) = [C+1-d - û(d)E]/[w(d)E]

where w(d) > 0 is the weight associated with decision d.
The point is then that the simplicity of the performance constraint implies that the critical values u*(d),d∈D can be easily determined by inspection, and then be invoked in the definition of robustness.
Remarks:
- Note that by definition, (11) allows ρ(d) to take negative values. This occurs in cases where u*(d) < û(d), namely when decision d is fragile in the immediate neighborhood of û(d). In other words, ρ(d) is negative when d violates the performance requirement d + uE ≤ C+1 for values of u that are ever-so-slightly greater than û(d).
This means that ρ(d) is a measure of both the robustness and the fragility of d: if ρ(d) > 0, then d satisfices the performance requirement in the immediate neighborhood of û(d); and if ρ(d) < 0, then d violates the performance requirement in the immediate neighborhood of û(d).
- The estimates û(d), d∈D, can be quite small for large values of d. For example, in the article û(d) = 0.136^d, hence û(5) ≈ 0.0000465259. Hence, one has to be careful when using robustness measures such as (11) with w(d) = û(d).
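A quick numerical sketch of this pitfall (only the estimate û(d) = 0.136^d is taken from the article; the figures C = 5 and E = 5 are hypothetical): with w(d) = û(d), the tiny estimate at d = 5 inflates the normalized robustness enormously, even though the absolute safety margin u*(d) - û(d) is of the same order at d = 1 and d = 5.

```python
C, E = 5, 5.0                    # hypothetical budget/cost figures

def u_hat(d):
    return 0.136 ** d            # the article's nominal estimate

def u_crit(d):
    return (C + 1 - d) / E       # critical value, formula (4)

def rho_normalized(d):
    # (11) with w(d) = û(d), i.e. the measure (12)
    return u_crit(d) / u_hat(d) - 1

def margin(d):
    # the unweighted margin u*(d) - û(d)
    return u_crit(d) - u_hat(d)

for d in (1, 5):
    print(d, "margin = %.4f" % margin(d),
          "normalized rho = %.1f" % rho_normalized(d))
```

With these numbers the normalized measure rates d = 5 several hundred times "more robust" than d = 1, while the absolute margins differ by less than a factor of five; the denominator û(d), not the safety margin, is doing most of the work.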
Let us now examine how robustness is defined in the article.
Proposed Info-Gap Robustness Models:
In sharp contrast to the effortless manner in which the definitions of robustness are derived above (directly from the statement of the problem), the derivations of robustness in the article are unnecessarily complicated by the requirement that they be expressed in terms of an info-gap robustness model.
In fact, the derivations are sufficiently complicated that they are not explained in full in the body of the article. You may plough through the article, but you will not find the expression (formula) used to measure the robustness of decisions in the context of the problem under consideration. For this you'll have to read Supplement S1.
But more than this, the derivations of robustness in the article and in the two supplements make no reference to the intuitive notion of the critical value of u. As we have seen above, this concept in fact brings out what robustness is all about in the problem studied in the article.
When you finally figure out what it is (eqn A5 in Appendix S1), you'll discover that it is the instance of the robustness model specified by (11) that corresponds to
(12) w(d) = û(d) ----> ρ(d) = [C+1-d]/[û(d)E] - 1

The info-gap robustness model specified in eqn A2 in Appendix S2 corresponds to
(13) w(d) = 1 - û(d) ----> ρ(d) = [C+1-d - û(d)E]/[E(1-û(d))]

The point to note about these two info-gap models, and indeed what is so interesting about them, is that they correspond to the definitions of robustness falling under what we refer to above as Case 2: The Estimates are Very Good. Namely, for the models to make sense, the estimates should be assumed to be good. But ... the authors assume that the true value of u "... is subject to considerable uncertainty and ignoring this uncertainty may lead to suboptimal solutions ..." (p. 783; emphasis is mine).
No comment whatsoever is made to explain the blatant incongruity between the fact that (as assumed in the paper) the estimates are poor (due to the considerable uncertainty in the true values of the parameters) and the fact that they are pivotal in the determination of robustness. Moreover, no sensitivity analysis is conducted on the estimates themselves!
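Such a sensitivity check takes only a few lines. The sketch below uses hypothetical figures (C = 5, E = 10) and a nominal model û(d) = r^d whose parameter r stands in for the estimate; it simply re-solves the maximization of the measure (12) for several values of r and watches whether the "robust" decision moves:

```python
C, E, k = 5, 10.0, 6             # hypothetical figures; D = {1, ..., k}

def best_decision(r):
    # rho(d) per (12): [C+1-d]/[û(d)E] - 1, with the assumed û(d) = r^d
    rho = lambda d: (C + 1 - d) / (E * r ** d) - 1
    return max(range(1, k + 1), key=rho)

for r in (0.30, 0.40, 0.55, 0.65):
    print("r = %.2f  ->  robust decision d^ = %d" % (r, best_decision(r)))
```

With these numbers the "robust" decision flips from d = 5 to d = 4 as the estimate r crosses 0.5, which is exactly the kind of dependence on the (allegedly poor) estimate that ought to have been examined.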
Stay tuned ...
The 64K$ question
Given then the utter simplicity with which the problem under consideration can be analyzed and solved, I repeat the question raised above: why use info-gap decision theory to solve this problem?
Surely the authors should explain the rationale, the point, the merit of using info-gap decision theory as a framework for the modeling, analysis and solution of a problem whose solution is simplicity itself.
I might add in this regard that over the past five years I have written a number of letters to authors of papers on info-gap decision theory. Occasionally, the letter is about the triviality of the problem under consideration.
So I prepared a generic letter, which I modify according to the circumstances. Here is a version intended for cases where the problem is "trivial":
Dear ??????:
I read with interest your paper entitled ??????.
Note that the essence of the problem investigated in this paper boils down to this:
Determine the critical value of ?????, namely the worst (largest) value of ?????? that satisfies the performance constraint ????? or equivalently ?????.
Therefore, my immediate reaction to the analysis in your paper was sheer amazement!
After all, by inspection, the answer is obviously ??????. Therefore, one cannot help but wonder how Info-Gap suddenly appears on the scene ?!
This is yet another example of what can happen when instead of trying to model and solve a given problem, one tries -- by hook or by crook -- to use a given methodology to model and solve this problem.
On a number of occasions I alluded to this danger. But here I must be more forthright.
This article is a good example of how easily one can end up focusing almost exclusively on manipulating the formulation of a given problem so as to fit it into the paradigm of a Beloved methodology. So much so that one may fail to see that the problem is actually so trivial that it can be easily solved by inspection.
Isn't it time, ??????, that we asked ourselves:

Are we in the business of developing, using and promoting scientific methods for decision-making under uncertainty in the area of ??????, or are we in the Info-Gap business? I cannot see how we can make progress on the important issues that we identified if we keep ourselves busy trying to fix conceptual and technical Info-Gap bugs.

Best wishes
Moshe
Melbourne (date: ??????)

This generic letter applies to the paper under review.
Stay tuned for more ...
The Continuing Maximin Saga
As indicated above, now that Bryan Beresford-Smith and Colin J. Thompson (2009) have conceded that Info-Gap's robustness model is a Maximin model (see Review 11), what is the point of withholding this fact from scientists in the field of applied ecology?
Stay tuned for more ...
The Ongoing Severe Uncertainty Saga
The authors concede that info-gap decision theory is unsuitable for situations where the estimate is likely to be substantially wrong. For consider this:
Although info-gap theory is relevant for many management problems, two components must be carefully selected: the nominal estimate of the uncertain parameter, and the model of uncertainty in that parameter. If the nominal estimate is radically different from the unknown true parameter value, then the horizon of uncertainty around the nominal estimate may not encompass the true value, even at low performance requirements.

Rout et al. (2009, p. 785)

However, their explanation of this fact is totally wrong.
To begin with, a distinction must be drawn between:
- The uncertainty stipulated by the problem statement itself.
- Info-gap's model of uncertainty.
Insofar as the problem statement is concerned, the uncertainty is described by the uncertainty space under consideration, call it U, and the estimate, call it û. Needless to say, the estimate û and the true value of u are assumed to be elements of U. Obviously, if the estimate is poor the complete uncertainty space can be vast. This explains why, according to Ben-Haim (2006, p. 210), the most commonly encountered info-gap uncertainty models are unbounded.
Enter info-gap.
Given this, the info-gap model of uncertainty is constructed so that its regions of uncertainty, call them U(α,û),α≥0, are centered at û, and at least one of them contains U. Thus, if the uncertainty space U is vast, so would be the regions of uncertainty U(α,û) for large values of α.
That said, it is clear that the real trouble with info-gap's robustness analysis is not that the true value of u may not be contained in the uncertainty space U.
Comment
I should add that I have yet to come across an info-gap publication where it is not immediately obvious that the uncertainty space U contains the (unknown) true value of u. So, for our purpose here it would be best to leave it at that: the true value of u is unknown, but it is contained in the complete uncertainty space. This means that it is contained in at least one of the regions of uncertainty centered at the estimate.
Indeed, it is ironic that the authors should raise this issue at all in this paper. After all, in the case of the problem they investigate, the unknown parameter under consideration is a probability, hence the (unknown) true value of the parameter of interest is definitely in the bounded interval [0,1].
The real trouble with info-gap's analysis lies elsewhere. It lies in info-gap's localized robustness analysis.
That is, info-gap's robustness model conducts the robustness analysis, in the first instance, in the immediate neighborhood of the given estimate. This means that a decision that violates the performance constraint at a u near the estimate is deemed fragile regardless of its performance in neighborhoods of the uncertainty space that are farther away from the estimate.
By definition, therefore, info-gap decision theory does not seek decisions that are robust against uncertainty over the given uncertainty space U. It seeks decisions that are robust in the neighborhood of the given estimate û. So, the difficulties that info-gap's analysis runs into would remain even if the (unknown) true value of the parameter were contained in the uncertainty space stipulated by the problem statement. In a word, the trouble is with info-gap's lame "search methodology", which a priori undermines its ability to properly explore the uncertainty space, especially under conditions of what the authors term "considerable" uncertainty.
The following picture illustrates this point.
It shows the rewards R(q,u) generated by two decisions, q' and q'', as a function of some parameter u. The estimate of the true value of u is û = 0, the uncertainty space is U=(-∞,∞) and the performance requirement is R(q,u) ≥ 0.
According to Info-Gap's robustness model, q'' is more robust than q' with respect to the requirement R(q,u) ≥ 0: the closest u to û that violates the constraint R(q',u) ≥ 0 is at a distance α' = 1.08 from û, whereas the closest u to û that violates the constraint R(q'',u) ≥ 0 is at a distance α'' = 1.429 from û.
Note, however, that
- R(q',u) > R(q'',u) almost everywhere on U, except for the two small intervals of [1.08,2.22] and [-2.22,-1.08].
- q'' violates the robustness condition R(q'',u) ≥ 0 almost everywhere on U, except for the small interval [-1.429,1.429].
- q' satisfies the robustness condition R(q',u) ≥ 0 almost everywhere on U, except for the small intervals [1.23,2.78] and [-2.78,-1.23].
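The ranking described above can be reproduced numerically. The following sketch uses two made-up reward functions (assumptions of mine, not the functions behind the picture) with the same qualitative shape: one decision fails only on a pair of intervals away from the estimate, the other fails everywhere outside a bounded interval:

```python
# Made-up reward functions (my own assumptions, not those behind the
# picture): R1 violates R >= 0 only on 1 < |u| < 2, while R2 violates
# it everywhere |u| > 1.5.
R1 = lambda u: (abs(u) - 1.0) * (abs(u) - 2.0)
R2 = lambda u: 1.5 - abs(u)

def robustness(R, u_hat=0.0, step=1e-3, nmax=10_000):
    """Info-gap robustness: distance from u_hat to the nearest u that
    violates R(u) >= 0, found by scanning outward on a grid."""
    for k in range(nmax + 1):
        alpha = k * step
        if R(u_hat + alpha) < 0 or R(u_hat - alpha) < 0:
            return alpha
    return nmax * step  # no violation found within the search range

a1, a2 = robustness(R1), robustness(R2)
# Info-gap deems R2 "more robust" (alpha about 1.5 vs about 1.0), even
# though R1 satisfies the constraint almost everywhere on (-inf, inf).
assert a2 > a1
```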
This example also illustrates why Info-Gap's robustness analysis cannot handle ordinary, plain, white Swans, let alone genuine (Australian) Black Swans.
See the discussion on this issue at Second Opinion on Info-Gap Decision Theory.
And how about this:
Thus, the method challenges us to question our belief in the nominal estimate, so that we evaluate whether differences within the horizon of uncertainty are 'plausible'. Our uncertainty should not be so severe that a reasonable nominal estimate cannot be selected.
Rout et al. (2009, p. 785)
Since info-gap decision theory is non-probabilistic and likelihood-free, info-gap users are in no position to quantify levels, or degrees, of "good", "reasonable", or "bad" that are applicable to the estimate. Nor are they in any position to determine what is more or less "plausible" within the horizon of uncertainty.
Add to this the fact that info-gap decision theory does not even begin to deal with the question of how the estimate is obtained. That is, info-gap decision theory does not give us so much as a clue on how to check or verify whether the estimate is bad, poor, good, excellent, or perfect; determining the quality of the estimate is an external issue. In other words, you come to info-gap decision theory with an estimate in hand, and there is nothing in info-gap decision theory itself that would enable it to distinguish between the qualities of various estimates.
So what are the authors telling us?
The authors seem to be saying the obvious: unless you have good reasons to believe that the estimate you have is "pretty reasonably good" (whatever that means), it makes no sense to focus the robustness analysis on the immediate neighborhood of the estimate. In other words, it makes no sense to do what info-gap prescribes doing. In this case the authors agree with my criticism of info-gap decision theory.
But more than this, are the authors willing to stick their necks out and declare that an estimate subject to "considerable" uncertainty (be it a rumor or gut feeling or whatever) is so good that it makes sense to confine the robustness analysis to a given neighborhood of the estimate and call it a day?
In this case, the authors would have to do as follows:
- Stipulate the uncertainty space of the problem, call it U. That is, they would have to specify the smallest set that (the decision-maker is reasonably confident) contains the true value of the parameter of interest.
- Specify the value of the estimate.
- Conduct a robustness analysis that seeks decisions that are robust on the given uncertainty space U.
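The kind of global analysis listed above can be sketched in a few lines (my own illustration, with a hypothetical performance function and a discretized U); the point is that the check runs over all of U, not just a neighborhood of the estimate:

```python
# Sketch (my own illustration) of a global robustness check on the
# whole uncertainty space U, as opposed to a local check around u_hat.
def worst_case_ok(R, U_grid, r=0.0):
    """Global check: does R(u) >= r hold for every u in the
    (discretized) uncertainty space?"""
    return all(R(u) >= r for u in U_grid)

U_grid = [i / 100 for i in range(101)]   # U = [0, 1], discretized
R = lambda u: 0.6 - u                    # hypothetical performance function
# R fails for u > 0.6, so the decision is not robust on all of U ...
assert not worst_case_ok(R, U_grid)
# ... even though it looks fine near an estimate u_hat = 0.2.
assert all(R(u) >= 0 for u in U_grid if abs(u - 0.2) <= 0.3)
```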
But this, one need hardly point out, is not what info-gap decision theory does!
Info-gap decision theory does not seek decisions that are robust on U. It seeks decisions that are robust in the neighborhood of the given estimate; to wit, a decision that is not robust in the immediate neighborhood of the estimate is eliminated from any further consideration even if it performs exceptionally well in other neighborhoods of U.
So, ... how are the authors going to use info-gap decision theory to identify decisions that are robust on U rather than in the immediate neighborhood of the estimate?
In any case, suppose that the uncertainty is not so severe and we have in hand a reasonably good estimate. How then can a local analysis in the neighborhood of this estimate deal with rare events, catastrophes, and the like? Hence, how about this:
For ecological management in the face of uncertainty, managers may use info-gap to gain some protection against catastrophic outcomes by answering the question: how wrong could this model be before outcomes are unacceptably bad?
Rout et al. (2009, p. 785)
Note that info-gap decision theory does not -- indeed, is in principle unable to -- answer the question: how wrong could this model be before outcomes are unacceptably bad?
This is so because the true value of the parameter of interest is unknown and is subject to considerable uncertainty. There is therefore no way of knowing how wrong the model is.
Info-gap's robustness model answers a completely different question, namely: what is the largest region of uncertainty around the estimate over which the performance constraint is satisfied?
Strictly speaking, this question has nothing to do with uncertainty as such. Moreover, the answer to this question does not depend on the "quality" of the nominal point, or estimate, used as the center point of the regions of uncertainty.
See the discussion on this topic in my response to Burgman's comments on my criticism of info-gap decision theory.
In short, info-gap decision theory does not -- much less is it able to -- deal with the question stated by the authors and it therefore cannot help managers gain protection against catastrophic outcomes.
As a matter of fact, info-gap decision theory constitutes the precise antithesis of what a theory for modeling, analyzing and managing severe uncertainty ought to be. Instead of exploring thoroughly the given complete uncertainty space, info-gap decision theory focuses its robustness analysis in the neighborhood of a point estimate.
Remarks:
- It seems that the authors have not really thought through the difficulties arising from the fact that info-gap decision theory is non-probabilistic and likelihood-free. I refer them to Review 6 for a discussion on this matter.
- Although the authors' reasoning is flawed, it nevertheless demonstrates that they now realize that the numerous claims made in the info-gap literature on the theory's ability to cope with extremely poor estimates are without any foundation.
- Indeed, it is interesting to note the sharp contradiction between the authors' statement and the claim by Ben-Haim:
Thus, the method challenges us to question our belief in the nominal estimate, so that we evaluate whether differences within the horizon of uncertainty are 'plausible'. Our uncertainty should not be so severe that a reasonable nominal estimate cannot be selected.
Rout et al. (2009, p. 785)
Info-gap theory is useful precisely in those situations where our best models and data are highly uncertain, especially when the horizon of uncertainty is unknown. In contrast, if we have good understanding of the system then we don’t need info-gap theory, and can use probability theory or even completely deterministic models. It is when we face severe Knightian uncertainty that we need info-gap theory.
Ben-Haim (2007, p. 2)
This is definitely a move in the right direction, but ... it does not go far enough!
Stay tuned for more ...
The State of the Art Saga
The paper shows complete disrespect for the state of the art in decision-making under uncertainty. The discussion section basically amounts to an uncritical endorsement of info-gap decision theory. Well-established methods that have become the "bread and butter" approaches to decision-making subject to uncertainty are not even mentioned. The paper thus provides an extremely distorted picture of the area of decision-making under severe uncertainty.
It also ignores relevant publications that are critical of info-gap decision theory.
Stay tuned for more ...
Unsubstantiated and/or misleading statements
The paper is riddled with "problematic" statements. I shall mention just a few.
- Role and place of Info-Gap in decision theory
Unlike other common forms of uncertainty analysis, the info-gap uncertainty model does not require a specific probability distribution or plausible interval for uncertain parameters (Ben-Haim 2006).
Rout et al. (2009, p. 783)
Not only is this claim grossly misleading, it is in fact puzzling.
The most well-known, indeed classic, model for the treatment of severe uncertainty, namely Wald's Maximin model (1939), is non-probabilistic and likelihood-free. The statement is puzzling because the authors are well aware that info-gap's robustness model is a Maximin model! To wit:
Knight's ideas have been further developed by several authors over the years and in particular by Ben-Haim (2006) who has developed a quantitative formulation known as information-gap decision theory. This theory has recently been shown by Sniedovich (2008) to be formally equivalent to Wald's maximin model in classical decision theory (French, 1988).
Bryan Beresford-Smith and Colin J. Thompson (2009, p. 278)
Journal of Risk Finance, 10(3), 277-287.
Note that this statement is incorrect: info-gap's robustness model is not equivalent to Wald's Maximin model, it is a simple instance thereof. See Review #11.
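For reference, the point can be stated compactly in the notation used above. Info-gap's robustness of a decision q is the largest horizon of uncertainty α whose entire region U(α,û) satisfies the performance requirement (written here as R(q,u) ≥ r; in the picture example, r = 0):

```latex
% Info-gap's robustness of decision q, given estimate \hat{u} and
% performance requirement r:
\hat{\alpha}(q,\hat{u}) \;=\; \max\left\{\, \alpha \ge 0 \;:\;
  \min_{u \in U(\alpha,\hat{u})} R(q,u) \,\ge\, r \,\right\}
```

The inner minimization over u ∈ U(α,û) is precisely a worst-case (Maximin) evaluation over the region of uncertainty; this is the sense in which the model is an instance of Wald's Maximin model.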
- Minimizing the Chance!
However, if they are uncertain about this model and wish to minimize the chance of unacceptably large costs, they can calculate the robust–optimal number of surveys with eqn 5.
Rout et al. (2009, p. 785)
This statement is not only unsubstantiated, it flies in the face of numerous statements made by the Father of Info-Gap decision theory, warning against imputing any "likelihood", "plausibility", or "belief" --- hence chance --- to info-gap's uncertainty model. For example (emphasis is mine):
Since the horizon of uncertainty is unknown and unbounded, there is no worst case. Since no measure functions of probability (or plausibility, or belief, etc.) are specified by an info-gap model, the analyst cannot calculate statistical expectations and cannot probabilistically insure against the unknown contingencies identified in the info-gap model.
Ben-Haim, Y. and Jeske, K. (2003, p. 12), Bias in Financial Markets: Robust Satisficing with Info Gaps
FRB of Atlanta Working Paper No. 2003-35.
Available at SSRN: http://ssrn.com/abstract=487585
However, unlike in a probabilistic analysis, r has no connotation of likelihood. We have no rigorous basis for evaluating how likely failure may be; we simply lack the information, and to make a judgment would be deceptive and could be dangerous. There may definitely be a likelihood of failure associated with any given radial tolerance. However, the available information does not allow one to assess this likelihood with any reasonable accuracy.
Ben-Haim (1994, p. 152)
Convex models of uncertainty: applications and implications
Erkenntnis, 4, 139-156.
Note that my point is not that it would be impossible to incorporate probabilistic and likelihood structures in info-gap decision theory. I am merely stating the obvious: info-gap decision theory -- in its present formulation, hence as applied in this paper -- is non-probabilistic and likelihood-free.
The bottom line is that the authors somehow manage to create something out of nothing: they minimize the chance of an event using a non-probabilistic, likelihood-free analysis!
See the discussion on this issue in Review #6 and Review #9. My discussion on Alchemy is also relevant here.
- Plausibility assessment by a likelihood-free analysis!
Thus, the method challenges us to question our belief in the nominal estimate, so that we evaluate whether differences within the horizon of uncertainty are 'plausible'. Our uncertainty should not be so severe that a reasonable nominal estimate cannot be selected.
Rout et al. (2009, p. 785)
Again, an unsubstantiated statement that flies in the face of the numerous statements made by the Father of Info-Gap decision theory, warning against attributing any "likelihood", "plausibility", or "belief" to info-gap's uncertainty model. For example (emphasis is mine):
Since the horizon of uncertainty is unknown and unbounded, there is no worst case. Since no measure functions of probability (or plausibility, or belief, etc.) are specified by an info-gap model, the analyst cannot calculate statistical expectations and cannot probabilistically insure against the unknown contingencies identified in the info-gap model.
Ben-Haim, Y. and Jeske, K. (2003, p. 12)
Bias in Financial Markets: Robust Satisficing with Info Gaps
FRB of Atlanta Working Paper No. 2003-35.
Available at SSRN: http://ssrn.com/abstract=487585
However, unlike in a probabilistic analysis, r has no connotation of likelihood. We have no rigorous basis for evaluating how likely failure may be; we simply lack the information, and to make a judgment would be deceptive and could be dangerous. There may definitely be a likelihood of failure associated with any given radial tolerance. However, the available information does not allow one to assess this likelihood with any reasonable accuracy.
Ben-Haim (1994, p. 152)
Convex models of uncertainty: applications and implications
Erkenntnis, 4, 139-156.
My comment above applies here as well. It is not impossible to incorporate probabilistic and likelihood structures in info-gap decision theory. However, as we know it, info-gap decision theory is non-probabilistic and likelihood-free.
- Choice of uncertainty model
The authors realize that the results generated by info-gap's robustness analysis can be greatly affected by the choice of the uncertainty model (p. 785):
The particular choice of uncertainty model greatly affects the robust–optimal solution, so efforts must be made to choose a form that is sensible, and appropriate to the system being examined. The structure of uncertainty can be quite an abstract concept, so this could prove difficult in many cases. However, this does not diminish the relevance or applicability of info-gap -- it simply means that it is not a standard formula to be mindlessly applied, but must be carefully tailored to each problem.
Yet, they phrase the main results (p. 782):
3. We find the number of absent surveys after which eradication should be declared to be relatively robust to uncertainty in the probability of presence. This solution depends on the nominal estimate of the probability of presence, the performance requirement and the cost of surveying, but not the cost of falsely declaring eradication.
4. More generally, to be robust to uncertainty in the probability of presence, managers should conduct at least as many surveys as the number that minimizes the total expected cost. This holds for any nominal model of the probability of presence.
as though they apply generally. Namely, as though they are independent of the choice of the uncertainty model.
But this is manifestly incorrect.
- Optimal solution vs Robust-optimal solution
There is an extensive discussion in the paper on the relationship between the optimal solution generated by info-gap's robustness model -- called robust-optimal solution and the optimal solution obtained by minimizing the net expected cost:
(14)   Q := min {d + û(d)E : d ∈ D}

For example, the authors claim that the optimal solution to (14) has zero robustness to uncertainty:
The optimal solution has the lowest possible expected cost (Fig. 1), but also has zero robustness to uncertainty (Fig. 2).
Rout et al. (2009, p. 784)
This statement is misleading. It is valid only if u*(d*) ≤ û(d*), where d* denotes the optimizer of (14). If u*(d*) > û(d*), the robustness of the optimal d* is greater than zero.
But more than this, the robust-optimal solution itself, call it d^, has zero robustness in cases where u*(d^) < û(d^). This means that the sweeping statement about the zero robustness of the optimal solution, and by implication that the robust-optimal solution is therefore "better", is groundless.
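The arithmetic behind the zero-robustness claim is easy to sketch. The numbers below are entirely hypothetical (survey cost normalized to 1, a made-up nominal model û(d), a made-up cost E of falsely declaring eradication), and the sketch assumes the cost increases with u, so that the worst case within a horizon α is u = û(d) + α:

```python
# Hypothetical numbers (mine, not the paper's): survey cost 1 per
# survey, cost E of falsely declaring eradication, and a made-up
# nominal probability of presence u_hat(d) after d absent surveys.
E = 20.0
u_hat = lambda d: 0.5 * 0.6 ** d          # assumed nominal model

def expected_cost(d):
    """Nominal total expected cost: surveys plus expected false-declaration cost."""
    return d + u_hat(d) * E

def robustness(d, C):
    """Largest horizon alpha such that d + u*E <= C for every u within
    alpha of u_hat(d); worst case taken at u = u_hat(d) + alpha."""
    return max(0.0, (C - d) / E - u_hat(d))

d_star = min(range(20), key=expected_cost)  # cost-minimizing number of surveys
C = expected_cost(d_star)                   # requirement set at the optimum
# With the performance requirement C set exactly at the minimal expected
# cost, the optimal d* has (essentially) zero robustness; relaxing C
# buys robustness.
assert abs(robustness(d_star, C)) < 1e-9
assert robustness(d_star, C + 1.0) > 0
```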
Furthermore, the authors report on general results obtained from a comparison between the performance of the optimal solution and the performance of the robust-optimal solution using two additional models of uncertainty (pp. 784-785).
But what is conspicuously missing from this comparison is the most basic model of uncertainty, which should logically have been included to serve as a benchmark: the model where the distance between two points in [0,1] is the Euclidean distance between them. This would be equivalent to the robustness used in Case 2: The Estimates are Very Good above.
Because, had the authors conducted this comparison, they would have realized that in this case the optimal solution is equal to the robust-optimal solution!
So much then for the claim that the optimal solution has zero robustness.
In short, the authors are telling us that an optimal solution to one problem may not be optimal for a different version of the problem. So what? This inference is self-evident. The question is: does it have any merit or significance?
It is well known that showing that an optimal solution to an optimization problem is not optimal for a satisficing problem associated with it does not mean very much because you are arguing the obvious. But more than this, this "argument" does not prove that the satisficing solution has an advantage over the optimizing solution (emphasis is mine):
It seems meaningless to draw more general conclusions from this study than those presented in section 2.2. Hence, that section may be the conclusion of this paper. In my opinion there is room for both 'optimizing' and 'satisficing' models in business economics. Unfortunately, the difference between 'optimizing' and 'satisficing' is often referred to as a difference in the quality of a certain choice. It is a triviality that an optimal result in an optimization can be an unsatisfactory result in a satisficing model. The best thing would therefore be to avoid a general use of these two words.
Jan Odhnoff
On the Techniques of Optimizing and Satisficing
The Swedish Journal of Economics
Vol. 67, No. 1 (Mar., 1965), pp. 24-39

See my discussion on the Optimizing vs satisficing debate.
- Main results
The statement of the main result, namely (p. 782):
4. More generally, to be robust to uncertainty in the probability of presence, managers should conduct at least as many surveys as the number that minimizes the total expected cost. This holds for any nominal model of the probability of presence.

gives the false impression that it is valid in general, namely at least for all the uncertainty models examined in the paper.
However, in the paper the authors attempt to prove it only for one of the uncertainty models examined in the paper, namely the model specified by eqn 3 (p. 783).
This result does not always hold in the context of the uncertainty model specified by eqn A2 in Appendix S2 --- even if the nominal probability of presence is convex.
Similar comments apply to (p. 782):
3. We find the number of absent surveys after which eradication should be declared to be relatively robust to uncertainty in the probability of presence. This solution depends on the nominal estimate of the probability of presence, the performance requirement and the cost of surveying, but not the cost of falsely declaring eradication.

In the context of the uncertainty model specified by eqn A2 in Appendix S2, the robust-optimal solution does depend on the cost of falsely declaring eradication.
Stay tuned for more ...
Conclusions
The problem examined in the article is trivial; so much so that it can essentially be solved by inspection.
The main results, namely (p. 782):
3. We find the number of absent surveys after which eradication should be declared to be relatively robust to uncertainty in the probability of presence. This solution depends on the nominal estimate of the probability of presence, the performance requirement and the cost of surveying, but not the cost of falsely declaring eradication.
4. More generally, to be robust to uncertainty in the probability of presence, managers should conduct at least as many surveys as the number that minimizes the total expected cost. This holds for any nominal model of the probability of presence.
are unsubstantiated and certain assertions made in the article are (technically) wrong.
On the whole, this article is a typical info-gap article. The only point of difference between this article and other info-gap publications is that the authors concede that (despite all the rhetoric in the info-gap literature) it is obvious that this theory is unsuitable for the treatment of severe uncertainty where the estimate is likely to be substantially wrong.
Otherwise, the paper follows the established info-gap line that info-gap decision theory is distinct and radically different from "common" theories for decision under uncertainty. Particularly jarring in this respect is its omission of the fact that info-gap's robustness model is a Maximin model.
Consequently the paper gives a thoroughly distorted account of the state of the art in robust decision-making under uncertainty.
Other Reviews
- Ben-Haim (2001, 2006): Info-Gap Decision Theory: decisions under severe uncertainty.
- Regan et al (2005): Robust decision-making under severe uncertainty for conservation management.
- Moilanen et al (2006): Planning for robust reserve networks using uncertainty analysis.
- Burgman (2008): Shakespeare, Wald and decision making under severe uncertainty.
- Ben-Haim and Demertzis (2008): Confidence in monetary policy.
- Hall and Harvey (2009): Decision making under severe uncertainty for flood risk management: a case study of info-gap robustness analysis.
- Ben-Haim (2009): Info-gap forecasting and the advantage of sub-optimal models.
- Yokomizo et al (2009): Managing the impact of invasive species: the value of knowing the density-impact curve.
- Davidovitch et al (2009): Info-gap theory and robust design of surveillance for invasive species: The case study of Barrow Island.
- Ben-Haim et al (2009): Do we know how to set decision thresholds for diabetes?
- Beresford and Thompson (2009): An info-gap approach to managing portfolios of assets with uncertain returns
- Ben-Haim, Dacso, Carrasco, and Rajan (2009): Heterogeneous uncertainties in cholesterol management
- Rout, Thompson, and McCarthy (2009): Robust decisions for declaring eradication of invasive species
- Ben-Haim (2010): Info-Gap Economics: An Operational Introduction
- Hine and Hall (2010): Information gap analysis of flood model uncertainties and regional frequency analysis
- Ben-Haim (2010): Interpreting Null Results from Measurements with Uncertain Correlations: An Info-Gap Approach
- Wintle et al. (2010): Allocating monitoring effort in the face of unknown unknowns
- Moffitt et al. (2010): Securing the Border from Invasives: Robust Inspections under Severe Uncertainty
- Yemshanov et al. (2010): Robustness of Risk Maps and Survey Networks to Knowledge Gaps About a New Invasive Pest
- Davidovitch and Ben-Haim (2010): Robust satisficing voting: why are uncertain voters biased towards sincerity?
- Schwartz et al. (2010): What Makes a Good Decision? Robust Satisficing as a Normative Standard of Rational Decision Making
- Arkadeb Ghosal et al. (2010): Computing Robustness of FlexRay Schedules to Uncertainties in Design Parameters
- Hemez et al. (2002): Info-gap robustness for the correlation of tests and simulations of a non-linear transient
- Hemez et al. (2003): Applying information-gap reasoning to the predictive accuracy assessment of transient dynamics simulations
- Hemez, F.M. and Ben-Haim, Y. (2004): Info-gap robustness for the correlation of tests and simulations of a non-linear transient
- Ben-Haim, Y. (2007): Frequently asked questions about info-gap decision theory
- Sprenger, J. (2011): The Precautionary Approach and the Role of Scientists in Environmental Decision-Making
- Sprenger, J. (2011): Precaution with the Precautionary Principle: How does it help in making decisions
- Hall et al. (2011): Robust climate policies under uncertainty: A comparison of Info--Gap and RDM methods
- Ben-Haim and Cogan (2011) : Linear bounds on an uncertain non-linear oscillator: an info-gap approach
- Van der Burg and Tyre (2011) : Integrating info-gap decision theory with robust population management: a case study using the Mountain Plover
- Hildebrandt and Knoke (2011) : Investment decisions under uncertainty --- A methodological review on forest science studies.
- Wintle et al. (2011) : Ecological-economic optimization of biodiversity conservation under climate change.
- Ranger et al. (2011) : Adaptation in the UK: a decision-making process.
Recent Articles, Working Papers, Notes
Also, see my complete list of articles
- Sniedovich, M. (2012) Fooled by local robustness, Risk Analysis, in press.
- Sniedovich, M. (2012) Black swans, new Nostradamuses, voodoo decision theories and the science of decision-making in the face of severe uncertainty, International Transactions in Operational Research, in press.
- Sniedovich, M. (2011) A classic decision theoretic perspective on worst-case analysis, Applications of Mathematics, 56(5), 499-509.
- Sniedovich, M. (2011) Dynamic programming: introductory concepts, in Wiley Encyclopedia of Operations Research and Management Science (EORMS), Wiley.
- Caserta, M., Voss, S., Sniedovich, M. (2011) Applying the corridor method to a blocks relocation problem, OR Spectrum, 33(4), 815-929, 2011.
- Sniedovich, M. (2011) Dynamic Programming: Foundations and Principles, Second Edition, Taylor & Francis.
- Sniedovich, M. (2010) A bird's view of Info-Gap decision theory, Journal of Risk Finance, 11(3), 268-283.
- Sniedovich M. (2009) Modeling of robustness against severe uncertainty, pp. 33- 42, Proceedings of the 10th International Symposium on Operational Research, SOR'09, Nova Gorica, Slovenia, September 23-25, 2009.
- Sniedovich M. (2009) A Critique of Info-Gap Robustness Model. In: Martorell et al. (eds), Safety, Reliability and Risk Analysis: Theory, Methods and Applications, pp. 2071-2079, Taylor and Francis Group, London.
- Sniedovich M. (2009) A Classical Decision Theoretic Perspective on Worst-Case Analysis, Working Paper No. MS-03-09, Department of Mathematics and Statistics, The University of Melbourne.(PDF File)
- Caserta, M., Voss, S., Sniedovich, M. (2008) The corridor method - A general solution concept with application to the blocks relocation problem. In: A. Bruzzone, F. Longo, Y. Merkuriev, G. Mirabelli and M.A. Piera (eds.), 11th International Workshop on Harbour, Maritime and Multimodal Logistics Modeling and Simulation, DIPTEM, Genova, 89-94.
- Sniedovich, M. (2008) FAQS about Info-Gap Decision Theory, Working Paper No. MS-12-08, Department of Mathematics and Statistics, The University of Melbourne, (PDF File)
- Sniedovich, M. (2008) A Call for the Reassessment of the Use and Promotion of Info-Gap Decision Theory in Australia (PDF File)
- Sniedovich, M. (2008) Info-Gap decision theory and the small applied world of environmental decision-making, Working Paper No. MS-11-08
This is a response to comments made by Mark Burgman on my criticism of Info-Gap (PDF file )
- Sniedovich, M. (2008) A call for the reassessment of Info-Gap decision theory, Decision Point, 24, 10.
- Sniedovich, M. (2008) From Shakespeare to Wald: modeling worst-case analysis in the face of severe uncertainty, Decision Point, 22, 8-9.
- Sniedovich, M. (2008) Wald's Maximin model: a treasure in disguise!, Journal of Risk Finance, 9(3), 287-291.
- Sniedovich, M. (2008) Anatomy of a Misguided Maximin formulation of Info-Gap's Robustness Model (PDF File)
In this paper I explain, again, the misconceptions that Info-Gap proponents seem to have regarding the relationship between Info-Gap's robustness model and Wald's Maximin model.
- Sniedovich. M. (2008) The Mighty Maximin! (PDF File)
This paper is dedicated to the modeling aspects of Maximin and robust optimization.
- Sniedovich, M. (2007) The art and science of modeling decision-making under severe uncertainty, Decision Making in Manufacturing and Services, 1-2, 111-136. (PDF File) .
- Sniedovich, M. (2007) Crystal-Clear Answers to Two FAQs about Info-Gap (PDF File)
In this paper I examine the two fundamental flaws in Info-Gap decision theory, and the flawed attempts to shrug off my criticism of Info-Gap decision theory.
- My reply (PDF File) to Ben-Haim's response to one of my papers. (April 22, 2007)
This is an exciting development!
- Ben-Haim's response confirms my assessment of Info-Gap. It is clear that Info-Gap is fundamentally flawed and therefore unsuitable for decision-making under severe uncertainty.
- Ben-Haim is not familiar with the fundamental concept point estimate. He does not realize that a function can be a point estimate of another function.
So when you read my papers make sure that you do not misinterpret the notion point estimate. The phrase "A is a point estimate of B" simply means that A is an element of the same topological space that B belongs to. Thus, if B is say a probability density function and A is a point estimate of B, then A is a probability density function belonging to the same (assumed) set (family) of probability density functions.
Ben-Haim mistakenly assumes that a point estimate is a point in a Euclidean space and therefore a point estimate cannot be say a function. This is incredible!
- A formal proof that Info-Gap is Wald's Maximin Principle in disguise. (December 31, 2006)
This is a very short article entitled Eureka! Info-Gap is Worst Case (maximin) in Disguise! (PDF File)
It shows that Info-Gap is not a new theory but rather a simple instance of Wald's famous Maximin Principle dating back to 1945, which in turn goes back to von Neumann's work on Maximin problems in the context of Game Theory (1928).
- A proof that Info-Gap's uncertainty model is fundamentally flawed. (December 31, 2006)
This is a very short article entitled The Fundamental Flaw in Info-Gap's Uncertainty Model (PDF File) .
It shows that because Info-Gap deploys a single point estimate under severe uncertainty, there is no reason to believe that the solutions it generates are likely to be robust.
- A math-free explanation of the flaw in Info-Gap. ( December 31, 2006)
This is a very short article entitled The GAP in Info-Gap (PDF File) .
It is a math-free version of the paper above. Read it if you are allergic to math.
- A long essay entitled What's Wrong with Info-Gap? An Operations Research Perspective (PDF File) (December 31, 2006).
This is a paper that I presented at the ASOR Recent Advances in Operations Research (PDF File) mini-conference (December 1, 2006, Melbourne, Australia).

Recent Lectures, Seminars, Presentations
If your organization is promoting Info-Gap, I suggest that you invite me for a seminar at your place. I promise to deliver a lively, informative, entertaining and convincing presentation explaining why it is not a good idea to use — let alone promote — Info-Gap as a decision-making tool.
Here is a list of relevant lectures/seminars on this topic that I gave in the last two years.
- ASOR Recent Advances, 2011, Melbourne, Australia, November 16, 2011. Presentation: The Power of the (peer-reviewed) Word. (PDF file).
- Alex Rubinov Memorial Lecture The Art, Science, and Joy of (mathematical) Decision-Making, November 7, 2011, The University of Ballarat. (PDF file).
- Black Swans, Modern Nostradamuses, Voodoo Decision Theories, and the Science of Decision-Making in the Face of Severe Uncertainty (PDF File).
(Invited tutorial, ALIO/INFORMS Conference, Buenos Aires, Argentina, July 6-9, 2010)
- A Critique of Info-Gap Decision Theory: From Voodoo Decision-Making to Voodoo Economics (PDF File).
(Recent Advances in OR, RMIT, Melbourne, Australia, November 25, 2009)
- Robust decision-making in the face of severe uncertainty (PDF File).
(GRIPS, Tokyo, Japan, October 16, 2009)
- Decision-making in the face of severe uncertainty (PDF File).
(KORDS'09 Conference, Vilnius, Lithuania, September 30 - October 3, 2009)
- Modeling robustness against severe uncertainty (PDF File).
(SOR'09 Conference, Nova Gorica, Slovenia, September 23-25, 2009)
- How do you recognize a Voodoo decision theory? (PDF File).
(School of Mathematical and Geospatial Sciences, RMIT, June 26, 2009)
- Black Swans, Modern Nostradamuses, Voodoo Decision Theories, Info-Gaps, and the Science of Decision-Making in the Face of Severe Uncertainty (PDF File).
(Department of Econometrics and Business Statistics, Monash University, May 8, 2009)
- The Rise and Rise of Voodoo Decision Theory.
ASOR Recent Advances, Deakin University, November 26, 2008. This presentation was based on the pages on my website (voodoo.moshe-online.com).
- Responsible Decision-Making in the face of Severe Uncertainty (PDF File).
(Singapore Management University, Singapore, September 29, 2008)
- A Critique of Info-Gap's Robustness Model (PDF File).
(ESREL/SRA 2008 Conference, Valencia, Spain, September 22-25, 2008)
- Robust Decision-Making in the Face of Severe Uncertainty (PDF File).
(Technion, Haifa, Israel, September 15, 2008)
- The Art and Science of Robust Decision-Making (PDF File).
(AIRO 2008 Conference, Ischia, Italy, September 8-11, 2008)
- The Fundamental Flaws in Info-Gap Decision Theory (PDF File).
(CSIRO, Canberra, July 9, 2008)
- Responsible Decision-Making in the Face of Severe Uncertainty (PDF File).
(OR Conference, ADFA, Canberra, July 7-8, 2008)
- Responsible Decision-Making in the Face of Severe Uncertainty (PDF File).
(University of Sydney Seminar, May 16, 2008)
- Decision-Making Under Severe Uncertainty: An Australian, Operational Research Perspective (PDF File).
(ASOR National Conference, Melbourne, December 3-5, 2007)
- A Critique of Info-Gap (PDF File).
(SRA 2007 Conference, Hobart, August 20, 2007)
- What exactly is wrong with Info-Gap? A Decision Theoretic Perspective (PDF File).
(MS Colloquium, University of Melbourne, August 1, 2007)
- A Formal Look at Info-Gap Theory (PDF File).
(ORSUM Seminar, University of Melbourne, May 21, 2007)
- The Art and Science of Decision-Making Under Severe Uncertainty (PDF File).
(ACERA seminar, University of Melbourne, May 4, 2007)
- What exactly is Info-Gap? An OR perspective. (PDF File)
ASOR Recent Advances in Operations Research mini-conference (December 1, 2006, Melbourne, Australia).
Disclaimer: This page, its contents and style, are the responsibility of the author (Moshe Sniedovich) and do not represent the views, policies or opinions of the organizations he is associated/affiliated with.