The Spin Stops Here!
Decision-Making Under Severe Uncertainty  


Reviews of publications on Info-Gap decision theory

Review # 18 (Posted: September 11, 2010)

Reference:

L. Joe Moffitt, John K. Stranlund, and Craig D. Osteen
Securing the Border from Invasives: Robust Inspections under Severe Uncertainty
Economics Research International, Volume 2010
doi:10.1155/2010/510127

Abstract: Two important features of agricultural quarantine inspections of shipping containers for invasive species at U.S. ports of entry are the general absence of economic considerations and the severe uncertainty that surrounds invasive species introductions. In this article, we propose and illustrate a method for determining an inspection monitoring protocol that addresses both issues. An inspection monitoring protocol is developed that is robust in maximizing the set of uncertain outcomes over which an economic performance criterion is achieved. The framework is applied to derive an alternative to Agricultural Quarantine Inspection (AQI) for shipments of fruits and vegetables as currently practiced at ports of entry in the United States.
Acknowledgment: Funding for this research was provided by the U. S. Department of Agriculture under USDA/ERS/PREISM Cooperative Agreement no. 43-3AEM-4-80115. Additional support was provided by the Cooperative State Research Extension, Education Service, U. S. Department of Agriculture, Massachusetts Agricultural Experiment Station under Project no. MAS00861. The views expressed in the paper are the authors' and do not necessarily represent those of the sponsoring agencies. The authors are grateful to the United States Department of Homeland Security, particularly National Data Quality Manager, Rojelio Lozano, for providing unpublished information on the Agricultural Quarantine Inspection program used in this research. Without implicating them, this article has benefited from the helpful input of Yakov Ben-Haim, Barry C. Field, and Peyton M. Ferrier.
Scores: TUIGF: 100%
SNHNSNDN: 587%
GIGO: 100%


Overview

This is an extremely interesting article, but not necessarily for its scientific content. To see what I have in mind, read the Appendix. For our purposes here it suffices to say that this review is in fact a tale of three papers: the one under review and two others:

L. Joe Moffitt, John K. Stranlund, Craig D. Osteen
Securing the Border from Invasives: Robust Inspections Under Severe Uncertainty
University of Massachusetts Amherst, Department of Resource Economics (2009).
Working Paper No. 2009-6 (PDF file)

L. Joe Moffitt, John K. Stranlund, Barry C. Field
Inspections to Avert Terrorism: Robustness Under Severe Uncertainty
Journal of Homeland Security and Emergency Management, 2(3), 1-17, (2005).

I shall refer to them as the 2010 paper, the 2009 paper, and the 2005 paper, respectively.

Of course, I cannot discuss here the truly interesting aspects of this article's history that I am familiar with. So, all I shall do is indicate that I contacted the first two authors in April 2008 regarding some of their info-gap publications.

In October 2009 I sent the authors comments on their 2009 working paper. Since I did not receive any reply, I have no way of knowing to what extent, if any, my comments influenced the modifications to the 2009 paper that eventually resulted in the 2010 paper, so I shall not dwell on this here.

Now, back to the article under review. Why is it such an interesting article?

From my perspective the answer is as follows:

The authors claim that their analysis is based on info-gap decision theory. Yet, the proposed uncertainty model is not an info-gap uncertainty model and the proposed robustness model is not an info-gap robustness model.

So the question naturally arises: Given that the uncertainty model proposed by the authors is not an info-gap uncertainty model, and the robustness model proposed by the authors is not an info-gap robustness model, on what grounds do the authors claim that the analysis as a whole is based on info-gap decision theory?

This, of course, is a rhetorical question.

Because the authors apparently hold (erroneously, I hasten to add) that their uncertainty model is an info-gap uncertainty model and their robustness model is an info-gap robustness model.

So, in what follows I explain why the authors are mistaken.

I also explain why it is very odd that the problem under consideration should be analyzed in the framework of info-gap decision theory to begin with.

The problem under consideration

If you strip the mathematical objects of the interpretation given them in the article and simplify the notation, you immediately discover that the problem the authors basically seek to solve comes down to this:

Generic robustness problem:

Given two finite sets X and Q, find the size (cardinality) of the largest subset of X whose elements satisfy a certain condition, say r(q,x) ≤ r*, where r* is a given numeric constant, r is a real-valued function on Q×X and q is a given element of Q.

Formally, this generic problem can be stated as follows:

(1) .......... Robustness Model:   ρ(q):= max {|Y|: Y⊆X, r(q,x) ≤ r*, ∀x∈Y}

where |Y| denotes the cardinality of set Y.

In this framework ρ(q) is the robustness of decision q.


Generic decision problem:

Find a q∈Q that maximizes ρ(q) over Q. That is,

(2) .......... Decision Model:     max_{q∈Q} ρ(q) = max_{q∈Q} max {|Y|: Y⊆X, r(q,x) ≤ r*, ∀x∈Y}


Explanation:

Here Q represents the set of decisions available to the decision maker and X represents the uncertainty space, namely the set of possible values of a given parameter called x. The true value of the parameter is unknown as it is subject to severe uncertainty. All we know is that the true value of the parameter is an element of X. In the case studied in the article, x is a triplet of real numbers, so X is a subset of R³, where R denotes the real line; and the decision q represents the percentage of containers to be inspected, so Q is a finite subset of [0,100].

The robustness of decision q, namely ρ(q), is defined as the size (cardinality) of the largest subset of X over which decision q satisfies the performance constraint r(q,x)≤r*, for every x in this subset.

The best (optimal) decision is that whose robustness is the largest.
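To make the two generic models concrete, here is a minimal Python sketch. The sets X and Q, the function r and the threshold r* below are illustrative stand-ins, not the data of the article; the sketch exploits the obvious fact that the largest subset of X over which q satisfies the constraint is simply the set of all x in X that satisfy it:

    # Minimal sketch of the generic robustness model (1) and decision model (2).
    # X, Q, r and r_star are illustrative stand-ins, not the data of the article.

    X = [0.2, 0.5, 0.9, 1.4, 2.0]           # uncertainty space (possible values of x)
    Q = [10, 25, 50, 75, 100]               # decision space (e.g., inspection percentages)
    r_star = 0.5                            # performance requirement: r(q, x) <= r_star

    def r(q, x):
        # toy performance function: larger q (more inspection) performs better
        return x * (100 - q) / 100.0

    def rho(q):
        # model (1): the largest subset of X on which q satisfies the constraint
        # is the set of ALL x in X that satisfy it, so its cardinality is a count
        return sum(1 for x in X if r(q, x) <= r_star)

    # model (2): the best decision is the one with the largest robustness
    best_q = max(Q, key=rho)
    print({q: rho(q) for q in Q}, "best decision:", best_q)

In other words, (1) reduces to a simple count and (2) to a straightforward search over Q; there is no need to enumerate subsets of X at all.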

The question is then:

Why on earth is it necessary/desirable to reformulate these two models so as to fit them into the info-gap decision theory mold?

Good question!

Of course, the above generic problems can be given a (thoroughly correct) formulation that is completely different from the above. So, my point in raising this question is not to suggest that all other formulations are necessarily wrong, flawed or in any way inferior.

Rather, my objective is to point out to info-gap scholars that they should be able to see, at a glance, that the above models are not info-gap models.

That said, let us now examine the models proposed by the authors for these generic problems. Since the decision model is an obvious consequence of the robustness model, my discussion will focus exclusively on the robustness model.

Proposed uncertainty and robustness models

The uncertainty model proposed by the authors is unduly complicated by the introduction of the cardinality of subsets of X, call it h, as an explicit object. In the parlance of info-gap decision theory, h represents the horizon of uncertainty. That is, the authors introduce the following family of sets parameterized by h:

(3) .......... Uncertainty model:     X(h):= {Y⊆X: |Y|≤h} , h≥0

That is, X(h) denotes the set of all the subsets of X whose cardinality is not greater than h.

Using this contrivance, the authors then define the robustness of decision q as follows:

(4) .......... Robustness model:     α(q):= max {h≥0: r(q,x)≤r*,∀x∈Y,Y∈X(h)}

This definition is equivalent to the definition given in (1), except that it is unnecessarily complicated. In fact, it is not at all clear why the need to resort to (4) arises in the first place, given that (1) does the job perfectly well (read the Appendix for a possible explanation).
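To illustrate the point, here is a small Python sketch with made-up data, and with (4) read in the way that makes it equivalent to (1), namely as the largest h for which X(h) contains a set of cardinality h all of whose elements satisfy the performance requirement. The detour through the horizon of uncertainty h merely reproduces the number already given by (1):

    from itertools import combinations

    X = [0.2, 0.5, 0.9, 1.4, 2.0]           # illustrative uncertainty space
    r_star = 0.5                            # illustrative performance requirement
    q = 50                                  # an illustrative decision
    r = lambda q, x: x * (100 - q) / 100.0  # illustrative performance function

    def rho(q):
        # model (1): cardinality of the largest all-feasible subset of X
        return sum(1 for x in X if r(q, x) <= r_star)

    def alpha(q):
        # model (4), read so as to be equivalent to (1): the largest h for which
        # X(h) = {Y subset of X : |Y| <= h} contains a set of cardinality h whose
        # elements all satisfy the performance requirement
        best = 0
        for h in range(len(X) + 1):
            if any(all(r(q, x) <= r_star for x in Y) for Y in combinations(X, h)):
                best = h
        return best

    print(rho(q), alpha(q))   # the two definitions agree (both print 3 here)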

Turning now to the info-gap connection.

For (3) to count as an info-gap uncertainty model, the family of sets X(h),h≥0, must satisfy the structure imposed by info-gap decision theory on the subsets of the uncertainty space that it associates with the horizon of uncertainty, h. In particular, X(h) must be a set whose elements are possible values of the parameter x, namely X(h) must be a subset of X (see the section 2.6 Axioms of Info-Gap Uncertainty in Ben-Haim (2006, p.31)).

But it is clear that the sets X(h), h≥0, defined by (3) do not satisfy this basic structural requirement: X(h) is not a subset of X; it is a subset of the power set of X. This is the reason why an element Y of X(h) in (4) cannot be used as an argument of r, as required by info-gap's robustness model.

Also, according to info-gap decision theory, the family X(h), h≥0, must have a center point in X (the estimate, or nominal value, of x), which is the sole element of the singleton X(0). But according to (3), X(0) contains no point of X at all: formally, X(0) = {∅}, whose sole element is the empty set.
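The structural point is easy to see on a tiny example. The following Python sketch, using a made-up three-element uncertainty space, constructs X(h) as defined in (3); its members are subsets of X, not points of X, and X(0) contains only the empty set:

    # Sketch of the uncertainty model (3) on a tiny, made-up uncertainty space,
    # illustrating the structural point made above.

    from itertools import combinations

    X = ["x1", "x2", "x3"]

    def X_h(h):
        # X(h) := {Y subset of X : |Y| <= h}
        return [set(Y) for k in range(h + 1) for Y in combinations(X, k)]

    print(X_h(0))   # [set()] -- the sole member is the empty set, not a point of X
    print(X_h(1))   # [set(), {'x1'}, {'x2'}, {'x3'}] -- members are subsets of X
    # Every member of X(h) is a subset of X, so X(h) lives in the power set of X
    # rather than in X itself, and r(q, Y) is not defined for Y in X(h).
    print(all(Y <= set(X) for Y in X_h(2)))   # True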

Here is the picture and the story:

[Figure: Info-Gap's Robustness Model]

[Figure: Proposed model vs. Info-Gap model]

In short, the proposed uncertainty model is not an info-gap uncertainty model and consequently the proposed robustness model is not an info-gap robustness model.

All I can say about this matter is that from this side of the ocean the proposed models seem "contrived" in the sense that they appear to be the product of a deliberate attempt to formulate the problem under consideration -- by hook or by crook -- as an info-gap robustness problem.

If you have a better explanation, I would be glad to hear/read it.

In the appendix I briefly discuss a possible explanation for this.

The state of the art

The generic problem under consideration is so basic that one's immediate hunch is that it easily submits to one or more of the "standard" models offered by decision theory.

Indeed, if you are familiar with Maximin theory, you'll immediately recognize that the robustness model (1) is a Maximin model in disguise.

Recall that the generic Maximin model is as follows:

Classic format:   max_{d∈D} min_{s∈S(d)} g(d,s)

MP format:        max_{d∈D, z∈R} {z: z ≤ g(d,s), ∀s∈S(d)}

where R denotes the real line. The two formats are equivalent. The MP (Mathematical Programming) format is used extensively in mathematical programming and robust optimization.

To show that the proposed robustness model (1) is an instance (specific case) of this Maximin model, consider the case where

D = the power set of X
S(d) = d
g(d,s) = |d| if r(q,s) ≤ r*
g(d,s) = −∞ if r(q,s) > r*

We now show that this instance of the Maximin model yields the robustness model proposed by the authors. The formal proof runs as follows:

max_{d∈D} min_{s∈S(d)} g(d,s)   ≡   max_{d∈D, z∈R} {z: z ≤ g(d,s), ∀s∈S(d)}

                                ≡   max_{Y⊆X, z∈R} {z: z ≤ g(Y,x), ∀x∈Y}

                                ≡   max_{Y⊆X} {|Y|: |Y| ≤ g(Y,x), ∀x∈Y}

                                ≡   max_{Y⊆X} {|Y|: r(q,x) ≤ r*, ∀x∈Y}

                                ≡   max {|Y|: Y⊆X, r(q,x) ≤ r*, ∀x∈Y}   (QED)

Note that the clause "r(q,x) ≤ r*, ∀x∈Y" immediately gives the game away. That is, this clause indicates that whether set Y is admissible depends on the performance of its worst element. Here the "worst x in Y" is the element of Y that maximizes r(q,x) over Y, as it is the element most likely to violate the constraint. (Note that if the constraint is of the form r(q,x) ≥ r*, then the worst x in Y is the element of Y that minimizes r(q,x) over Y.)

Still, if you are in doubt and you want to ascertain that this problem indeed submits to a typical Maximin model, view it as a game between two players. Player 1 selects a subset Y of X, whereupon Player 2 selects an element x in Y. If x satisfies the constraint r(q,x) ≤ r*, then Player 1 is awarded |Y| bars of dark chocolate. If x violates this constraint, then Player 1 is penalized severely. Assuming that Player 1 loves dark chocolate, her objective is to maximize the value of |Y|. Therefore, she will not select a Y whose worst element violates the constraint r(q,x) ≤ r*.

Now suppose that Player 2 is a hostile adversary of Player 1. She will then select the worst x in Y, namely the x in Y that maximizes the value of r(q,x) over x in Y.

Mathematically, this game can be described as follows:

max_{Y⊆X} min_{x∈Y} g(Y,x)   ≡   max_{Y⊆X, z∈R} {z: z ≤ g(Y,x), ∀x∈Y}

where g(Y,x) = |Y| if r(q,x) ≤ r* and g(Y,x) = -∞ if r(q,x) > r*. The penalty −∞ is used as a device to (mathematically) deter Player 1 from selecting a Y whose worst element violates the constraint r(q,x) ≤ r*.

In the context of decision-making under severe uncertainty, Player 1 represents the decision-maker and Player 2 represents Nature, namely uncertainty. According to this game, Nature is a hostile adversary: She always selects the worst x in Y, where "worst" refers to Player 1's objective of satisficing the constraint r(q,x) ≤ r*. So, if there is an x in Y such that r(q,x) > r*, then Nature will select such an x. If there is no such x in Y, then Nature will select some arbitrary element of Y.

For her part, the decision maker will try to maximize the size (cardinality) of the set Y that she selects. But she must be careful: should she select a set Y⊆X that contains an x such that r(q,x) > r*, she will incur a severe penalty.
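For readers who prefer to see the game played out numerically, here is a brute-force Python sketch, again with made-up data, confirming that the Maximin value max over Y⊆X of min over x∈Y of g(Y,x) coincides with the robustness ρ(q) defined in (1):

    # Brute-force check, on made-up data, that the Maximin game described above
    # reproduces the robustness model (1).

    from itertools import combinations

    X = [0.2, 0.5, 0.9, 1.4, 2.0]
    r_star = 0.5
    q = 50
    r = lambda q, x: x * (100 - q) / 100.0

    def g(Y, x):
        # Player 1's payoff: |Y| if x satisfies the constraint, -infinity otherwise
        return len(Y) if r(q, x) <= r_star else float("-inf")

    def maximin(q):
        # max over Y subset of X of min over x in Y of g(Y, x)
        best = 0   # the empty set is always available and yields 0
        for k in range(1, len(X) + 1):
            for Y in combinations(X, k):
                best = max(best, min(g(Y, x) for x in Y))
        return best

    rho_q = sum(1 for x in X if r(q, x) <= r_star)   # robustness per (1)
    print(maximin(q), rho_q)                          # both are 3 on this example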

The question is then: what is the relevance of this lengthy story about this generic Maximin problem to the problem that the authors seek to solve?

The answer of course is very simple:

Although the problem's intimate connection to the Maximin is there for all to see, as far as the authors are concerned: if you have a hammer, the whole world is a nail!

In the authors' case the hammer is of course info-gap decision theory.

Having made up their minds to turn to info-gap decision theory for support, or inspiration, or what have you, come what may, they force upon the problem under consideration a model that is purported to be an info-gap model but, as I show above, is an info-gap model in name only.

Structure of the uncertainty model

By definition, info-gap's uncertainty model stipulates only specific subsets of the uncertainty space under consideration. To clarify this point, let X denote the uncertainty space, that is, let X be the set of all possible values of x. The subsets of X considered by an info-gap uncertainty model are confined only to neighborhoods of the estimate c of the true value of x. Hence, an info-gap analysis probes only such neighborhoods. Each such neighborhood, denoted X(h,c), is the set of elements of X that are within a distance h from c, where the distance is determined by some metric or norm associated with X. In other words, these neighborhoods cannot be arbitrary subsets of X, as posited by the authors.

This is illustrated by the following figure.

The shaded area represents the subset of the uncertainty space whose elements satisfy the performance requirement r(q,x) ≤ r* for the decision q under consideration.

The regions of uncertainty defined by info-gap decision theory are neighborhoods of the estimate c, shown here as circles centered at the estimate c. No other neighborhoods but such circles are probed by the info-gap robustness analysis. In sharp contrast, the uncertainty model proposed in the article allows the neighborhoods probed by the analysis to be arbitrary subsets of the uncertainty space X!

The robustness of decision q according to info-gap decision theory is the radius of the largest circle contained in the shaded area. Robustness according to the model proposed by the authors is the size of the shaded area.

Clearly, these are two fundamentally different things.
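A one-dimensional toy example makes the difference tangible. In the following Python sketch the grid, the estimate c, the function r and the threshold r* are all made up for illustration: the info-gap robustness is the largest radius h such that every point within distance h of the estimate c is admissible, whereas the robustness proposed by the authors is simply the number of admissible points.

    # One-dimensional toy contrast between the two notions of robustness.
    # The grid, the estimate c, the function r and the threshold r_star are made up.

    X = list(range(31))                     # uncertainty space: an integer grid 0..30
    c = 10                                  # estimate (nominal value) of x
    q = 50
    r = lambda q, x: abs(x - 12) * (100 - q) / 100.0   # toy performance function
    r_star = 5.2

    admissible = {x for x in X if r(q, x) <= r_star}

    # Robustness as proposed by the authors: the size of the admissible set.
    proposed_robustness = len(admissible)

    # Info-gap robustness: the largest h such that EVERY point of X within
    # distance h of the estimate c is admissible.
    info_gap_robustness = max(h for h in range(31)
                              if all(x in admissible for x in X if abs(x - c) <= h))

    print(proposed_robustness, info_gap_robustness)   # e.g., 21 versus 8 here

One number is a count over the whole admissible region, the other a radius measured from the estimate c; they are not measuring the same thing.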

Summary

So the onus is on the authors to explain on what grounds they claim that their uncertainty and robustness models, and hence their analysis as a whole, are based on info-gap decision theory.

It ought to be pointed out that robustness models based on the cardinality of the set of admissible outcomes date back to Starr's (1963, 1966) "domain criterion" and to Rosenhead et al. (1972). See my discussion on robustness and my paper The Mighty Maximin!


Conclusion

As a final note it is of the utmost importance to set the record straight on the authors' general comment on info-gap decision theory.

On page 2 the authors make the following grossly misleading statement about info-gap decision theory:

Ben-Haim [8] has developed a new approach known as information-gap (infogap) decision theory, which he designed for cases in which probability distributions for uncontrolled events are not available. The essence of info-gap analysis is the pursuit of decisions that are robust in the sense that, roughly speaking, they maximize the range of uncertainty in the decision environment within which the decision maker is certain to achieve a specified performance requirement. One decision is more robust than another if the range of uncertainty under which the performance requirement is met is larger. Given a performance criterion, a robust decision gives the decision maker maximum confidence that his or her performance criterion will be met.

Comments:

References


Appendix


A tale of three papers

The 2010 paper

This article is a modified version of the 2009 working paper. The main modification of interest to us in this review is the one associated with the proposed uncertainty space and its subsets (regions of uncertainty) used in the robustness analysis.

Since the proposed regions of uncertainty are not subsets of the uncertainty space X, this model is not an info-gap uncertainty model.

Back to 2009

In this working paper, no horizon of uncertainty is stipulated, so the robustness model is expressed in terms of subsets of the set of possible values of the parameter of interest, X. In other words, the regions of uncertainty, Y, can be arbitrary subsets of X.

On October 22, 2009, I sent a note to the authors, indicating, among other things, that their uncertainty model is not an info-gap uncertainty model because the uncertainty sets do not satisfy the nesting requirement. That is, |Y| < |Y'| does not imply that Y is a subset of Y'.
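The point about nesting can be illustrated in two lines of Python (the elements are, of course, made up):

    # Tiny illustration that ordering sets by cardinality does not make them nested:
    # |Y| < |Y_prime| and yet Y is not a subset of Y_prime.
    Y, Y_prime = {"x1"}, {"x2", "x3"}
    print(len(Y) < len(Y_prime), Y.issubset(Y_prime))   # True False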

Since I have not received a reply to my letter, I have no way of knowing whether the changes the authors made in the proposed uncertainty model are in response to my comments.

But the fact is that the main difference between the 2009 uncertainty model and the 2010 uncertainty model is that in the 2010 model the regions of uncertainty are nested and in the 2009 model they are not.

What the authors failed to appreciate, though, is that by contriving a forced nesting of the regions of uncertainty, they violated a more fundamental requirement, namely the requirement that these regions of uncertainty be subsets of the set of possible values of the parameter of interest, X. In the 2010 model the proposed regions of uncertainty do not meet this basic requirement. Hence, as explained above, the proposed 2010 model is not an info-gap uncertainty model.

So what did the authors accomplish by correcting the flaw in the 2009 model (violation of the nesting requirement)? They effected a new, even more serious, flaw in the 2010 model, namely, the regions of uncertainty are not subsets of the set of possible values of the parameter x.

And all this doomed effort for what? To achieve one goal: to formulate an info-gap decision model, come hell or high water!

And all this, in spite of the fact that -- as I show above -- the problem itself is extremely easy to model.

And here is the most interesting episode in this saga.

Back to 2005

On page 2 of the 2005 paper we find the following interesting statement (emphasis is mine):

In the next section we generalize info-gap theory with expected utility as a measure of performance.

On page 3 we are reminded that in the framework of info-gap decision theory (emphasis is mine):

The uncertainty model consists of a family of convex, nested sets where the elements of each set are possible realizations of uncertain events affecting rewards.

And on page 4 we are informed that (emphasis is mine):

While preserving the info-gap philosophy of uncertainty and robustness, we generalize the theory’s components to include both the basic info-gap and hybrid cases and to permit a less restrictive characterization of uncertainty.

And ... surprise! surprise! ... the generalization in the uncertainty model comes down to ... the use of arbitrary regions of uncertainty rather than nested subsets of the uncertainty space, X. And, as in the 2009 and 2010 papers, robustness is defined as the cardinality of the largest subset of the uncertainty space over which the performance requirement is satisfied.

The question therefore arises: if in 2005 this was considered by the authors to be a generalization of info-gap decision theory, how come in 2009 and 2010 it is not a generalization of the (very same) theory? And if in 2009 and 2010 the uncertainty model is not a generalization of the info-gap uncertainty model, why was the same model a generalization of the info-gap uncertainty model in 2005?

The point I am trying to highlight is that in 2005 the authors were clearly aware that the proposed uncertainty model was not an info-gap uncertainty model, which is why they referred to it as a generalization of that model.

So how come in 2009 and 2010 the 2005 generalization is no longer a generalization?

I do not have an answer to this intriguing question. What I can say, though, is that the proposed 2009 and 2010 models are not info-gap uncertainty models. The authors of the 2005 paper knew that much in 2005. The authors of the 2009 and 2010 papers do not explain why the 2005 generalization vanished into thin air ....

If you find this story a bit confusing, perhaps the following table can help. If it does not, feel free to contact me. The table refers to the three versions of the uncertainty model proposed in the three papers.

[Table: Comparison of four uncertainty models]

So the bottom line is this: none of the proposed uncertainty models -- 2005, 2009, or 2010 -- is an info-gap uncertainty model.

Important note

It is extremely interesting that Wald's Maximin model -- the most important model for decision-making under severe uncertainty particularly for robust optimization -- does not even get a mention in the 2010 article. This is interesting because the authors are well aware that info-gap's robustness model is a Maximin model. It is certainly most interesting that the authors have determined to turn to info-gap decision theory, rather than to Wald's Maximin paradigm, as a framework for the formulation of the problem under consideration. Because, as we saw above in (1), the problem lends itself to a Maximin formulation in a simple, straightforward, and angst-free manner.

Since April 2008 I have been trying to bring the first two authors around to the view that info-gap decision theory is highly problematic and that their models are in fact not info-gap models. But to no avail! So, I am not too optimistic about the current attempt.

Apparently there must be something that spurs the authors to use info-gap decision theory, at least in name ... regardless of the consequences.

Based on the 2005, 2009, and 2010 versions of the authors' uncertainty model, I shouldn't be surprised if in 3-4 years the authors refer to Wald's Maximin model as a generalization of info-gap decision theory!

This will be the day!


