The Spin Stops Here!
Decision-Making Under Severe Uncertainty  

Review # 30 (Posted: June 26, 2011)

Reference:

Yakov Ben-Haim and Scott Cogan
Linear bounds on an uncertain non-linear oscillator: an info-gap approach
IUTAM Bookseries, 27(1):3-14.

Abstract

We study a 1-dimensional cubic non-linear oscillator in the frequency domain, in which the non-linearity is roughly estimated but highly uncertain. The task is to choose a suite of linear computational models at different excitation frequencies whose responses are useful approximations to, or upper bounds of, the real non-linear system. These model predictions must be robust to uncertainty in the non-linearity. A worst case for the uncertain non-linearity is not known. The central question in this paper is: how to choose the linear computational models when the magnitude of error of the estimated non-linearity is unknown. A resolution is proposed, based on the robustness function of info-gap decision theory. We also prove that the non-probabilistic info-gap robustness is a proxy for the probability of success.

Acknowledgements The authors are pleased to acknowledge useful comments by Lior Davidovitch and Oded Gottlieb.
Scores: TUIGF: 100% | SNHNSNDN: 500% | GIGO: 100%

Introduction

This is a most intriguing article --- intriguing because its authors are at the forefront of info-gap decision theory. One would have therefore expected them to know better!

Because, the plain fact about the robustness problem studied in this article is that, from the standpoint of robust decision-making, it is manifestly trivial. What I mean by this is that merely by looking at this robustness problem you would immediately see that the robustness task is solved by inspection. In other words, it wouldn't even cross your mind to resort to a formal robustness analysis, let alone a local robustness analysis such as the one conducted by info-gap decision theory.

So the question is: what is the point, the merit, the rationale, of using info-gap's "robustness function" to solve this trivial robustness problem, seeing that the solution to the global robustness task is immediately obvious by inspection?

The answer to this question is very simple indeed. As in many other info-gap publications, there is no point, merit, or rationale in using info-gap's "robustness function" to solve the problem under consideration.

So, let us examine how this fact is manifested in the article under review here.

To do this, the discussion will be conducted under the following headings:

  • The wide world of TRIVIAL problems
  • Proxy Theorem
  • The case of the worst case
  • Summary and conclusions
  • The missing links

The wide world of TRIVIAL problems

If you strip the problem studied in this article of its "physical" interpretation and focus exclusively on the underlying robustness problem that it poses, then what you uncover is a simple generic robustness problem consisting of these elements:

  • A set Q of available decisions.
  • A set U of possible values of an uncertain parameter u.
  • A performance constraint r* ≤ r(q,u)

where r is a real valued function on Q×U and r* is a given numeric scalar.

The robustness question is then as follows:

Robustness question:
How robust is decision q∈Q with respect to the constraint r* ≤ r(q,u) against the variability of u over U?

An intuitive measure of robustness for such problems is the Size Criterion model (circa 1970), according to which the robustness of a decision is measured by the "size" of the subset of U whose elements satisfy the constraints under consideration. So let

A(q):= {u∈U: r* ≤ r(q,u)} , q∈Q

Namely, let A(q) denote the set of "acceptable" values of u pertaining to decision q. This is the subset of U whose elements satisfy the performance constraint for decision q. Naturally, the idea is that the larger A(q) is, the more robust q is. This will be especially justified in situations where these sets are nested, namely if the intersection of any two such sets (for two values of q) is equal to (at least) one of the sets. Observe that this means that one of these two sets is a subset of the other.

Needless to say, to implement the Size Criterion, we must have on hand a suitable means for measuring the "size" of the sets A(q),q∈Q.
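To make the Size Criterion concrete, here is a minimal Python sketch under assumptions of my own choosing (none of them from the article): U is an interval that can be sampled uniformly, and the "size" of A(q) is estimated by the fraction of sampled u-values that satisfy the constraint.

```python
# A minimal sketch of the Size Criterion, assuming (hypothetically) that U is
# an interval from which we can sample uniformly. The "size" of A(q) is then
# estimated by the fraction of sampled u-values satisfying the constraint.
import random

def size_of_A(q, r, r_star, sample_U, n_samples=10_000):
    """Monte Carlo estimate of the relative size of A(q) = {u in U: r* <= r(q,u)}."""
    hits = sum(r_star <= r(q, sample_U()) for _ in range(n_samples))
    return hits / n_samples

# Hypothetical instance: U = [0, 10], r(q,u) = f(q) - g(u) with f(q) = 2q and
# g(u) = u, and r* = 1, so that A(q) = {u in U: u <= 2q - 1}.
sample_U = lambda: random.uniform(0.0, 10.0)
r = lambda q, u: 2.0 * q - u
for q in (1.0, 2.0, 3.0):
    print(q, size_of_A(q, r, r_star=1.0, sample_U=sample_U))
# The estimated sizes grow with q, i.e. with f(q): the Size Criterion ranks
# the decisions accordingly.
```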

But, the whole point about the robustness problem under consideration in this article is that it is so simple (trivial) that the "size" issue is not even an issue, so that the answer to the robustness question literally jumps off the page!

And to see why this is so, consider the following instance of the generic problem considered in the article, namely the case specified by

r(q,u) = f(q) - g(u) , q∈Q, u∈U

where g is a real valued function on U and f is a real valued function on Q.

Note that in this case,

A(q) = {u∈U: r* ≤ f(q) - g(u)} = {u∈U: g(u) ≤ f(q) - r*}

Now.

You may examine this expression any which way you like. But you cannot possibly avoid the reasoning that essentially answers the robustness question:

Answer to the Robustness Question:

Since A(q) = {u∈U: g(u) ≤ f(q) - r*}, these sets are nested and expand as f(q) increases. Hence, decision q' is at least as robust as decision q'' if and only if f(q') ≥ f(q'').

You asked for it, you got it!

The solution to the robustness problem under consideration therefore boils down to this simple recipe:

Recipe for the robustness problem:

Rank the decisions according to the values of f: the larger f(q), the more robust q. The most robust decision is any q that maximizes f(q) over Q.

END OF STORY!
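And for concreteness, here is a minimal Python sketch of the recipe, with a hypothetical f and Q that are not taken from the article. The point is precisely that the uncertainty space U plays no role whatsoever:

```python
# A minimal sketch of the recipe. With r(q,u) = f(q) - g(u), the global
# robustness ranking is obtained without touching U at all: just sort by f.
# The f and Q below are hypothetical stand-ins.
f = lambda q: -(q - 3.0) ** 2              # any real-valued function on Q
Q = [0.0, 1.0, 2.0, 3.0, 4.0]              # any (finite) decision set

ranking = sorted(Q, key=f, reverse=True)   # most robust decision first
print(ranking)                             # [3.0, 2.0, 4.0, 1.0, 0.0]
print(max(Q, key=f))                       # the most robust decision: 3.0
```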

Observations:

This being so, the question obviously is: what purpose can a local Radius of Stability model (circa 1960) (e.g. info-gap robustness model) serve in this simple case? What is the point in complicating the analysis by introducing an estimate and a neighborhood structure on the uncertainty space U around this estimate?

None

And yet, the authors foist on this trivial robustness problem a local robustness analysis, using to this end the following Radius of Stability model:

α(q,û):= max {α≥0: r* ≤ r(q,u), ∀u∈U(α,û)} , q∈Q

where û is a point estimate of the true value of u and U(α,û) denotes a neighborhood of radius α around û.

Here α(q,û) denotes the Radius of Stability of decision q at û. It is the radius of the largest neighborhood U(α,û) all of whose elements satisfy the constraint r* ≤ r(q,u) for decision q. Equivalently, it is the radius of the largest neighborhood U(α,û) that is contained in A(q).

Observe that for the trivial case considered in the article, namely r(q,u) = f(q) - g(u), this local model takes the following form

α(q,û) = max {α≥0: r* ≤ r(q,u), ∀u∈U(α,û)}
= max {α≥0: r* ≤ f(q) - g(u), ∀u∈U(α,û)}
= max {α≥0: g(u) ≤ f(q) - r*, ∀u∈U(α,û)}
= max {α≥0: max {g(u): u∈U(α,û)} ≤ f(q) - r*}

Since the neighborhoods are nested, namely α' < α" implies that U(α',û)⊆U(α",û), it follows by inspection that α(q,û) is non-decreasing with f(q).

In short, f(q) can also be used as a measure of local robustness (assuming, with no loss of generality, that r* ≤ r(q,û), ∀q∈Q).
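To see this monotonicity at work, here is a minimal numerical sketch, assuming (hypothetically, for illustration only) that U(α,û) = [û-α, û+α] and g(u) = u². The radius of stability is computed by bisection on α, which is valid here because the nested neighborhoods make the inner maximum of g non-decreasing in α.

```python
# A minimal sketch of the Radius of Stability computation, under hypothetical
# choices: U(alpha, u_hat) = [u_hat - alpha, u_hat + alpha] and g(u) = u**2.
import numpy as np

def g_max(g, u_hat, alpha, n=1001):
    """Inner maximization: max {g(u): u in U(alpha, u_hat)} on a grid."""
    u = np.linspace(u_hat - alpha, u_hat + alpha, n)
    return g(u).max()

def radius_of_stability(f_q, g, u_hat, r_star, alpha_hi=100.0, tol=1e-6):
    """Largest alpha >= 0 with max {g(u): u in U(alpha, u_hat)} <= f_q - r_star."""
    if g_max(g, u_hat, 0.0) > f_q - r_star:
        return 0.0                    # the estimate itself violates the constraint
    lo, hi = 0.0, alpha_hi
    while hi - lo > tol:              # bisection: the inner max is monotone in alpha
        mid = 0.5 * (lo + hi)
        if g_max(g, u_hat, mid) <= f_q - r_star:
            lo = mid
        else:
            hi = mid
    return lo

g = lambda u: u ** 2                  # hypothetical g
for f_q in (1.0, 2.0, 4.0):           # three decisions with increasing f(q)
    print(f_q, radius_of_stability(f_q, g, u_hat=0.0, r_star=0.0))
# Here alpha(q, u_hat) = sqrt(f(q)): the local analysis merely reproduces the
# ranking already given by f.
```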

So, what merit can there possibly be in using the far more complicated local Radius of Stability model rather than the very simple (indeed, in this case, trivial) Size Criterion model?

And to make this point patently clear, here are the two approaches, side by side:

Model                    | info-gap                                                     | Size Criterion
robustness of decision q | max {α≥0: max {g(u): u∈U(α,û)} ≤ f(q) - r*}                  | f(q)
decision problem         | max over q∈Q of max {α≥0: max {g(u): u∈U(α,û)} ≤ f(q) - r*}  | max {f(q): q∈Q}

The following observation provides further evidence of how alien the neighborhoods U(α,û), α≥0, are in the robustness analysis of the problem featured in the article:

Consider the (quite common) case where U(α,û) is strictly increasing (set-inclusion-wise) with α. Then clearly, in this case the structure of the neighborhoods U(α,û), α≥0, has no impact whatsoever on the ranking of the decisions. The definition of these sets affects only the "distance" of the elements of U from the estimate û. In other words, the definition of these sets affects the values of the radii of stability of the decisions, but not their ranking (robustness-wise).
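A quick numerical illustration of this observation, under the same hypothetical setup as the sketch above (û = 0, g(u) = u², interval neighborhoods): replacing the neighborhoods with ones that grow three times as fast changes every radius of stability, but leaves the ranking of the decisions untouched.

```python
# A minimal sketch: two different (hypothetical) nested neighborhood structures
# yield different radii of stability but the same ranking of decisions.
def radius(f_q, width, r_star=0.0, hi=100.0, tol=1e-6):
    """Bisection for max {alpha >= 0: width(alpha)**2 <= f_q - r_star}, i.e. the
    radius of stability when U(alpha, 0) = [-width(alpha), width(alpha)], g(u) = u**2."""
    lo = 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if width(mid) ** 2 <= f_q - r_star:
            lo = mid
        else:
            hi = mid
    return lo

f_vals = [1.0, 2.0, 4.0]                                    # f(q) for three decisions
print([radius(f, width=lambda a: a) for f in f_vals])       # U(alpha) = [-alpha, alpha]
print([radius(f, width=lambda a: 3 * a) for f in f_vals])   # neighborhoods 3x wider
# The radii shrink by a factor of 3 in the second case, yet their order, hence
# the ranking of the decisions, is unchanged.
```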

That said, I should point out that it is also important to note that the article is mum on the fact that the local robustness model it proposes for the treatment of the trivial robustness problem in question is a simple Radius of Stability model. Neither does it make clear that Radius of Stability models have been used extensively in many fields, since the 1960s, for the purpose of local robustness analysis. Nor, indeed, that there is a huge literature on the stability of matrices and polynomials associated with dynamical systems (see the references below to Hinrichsen and Pritchard's seminal work in this area).

And what is most interesting of all, is that in the analysis of the "Proxy Theorem", the authors do use the ... Size Criterion.

Proxy Theorem

Having said all that, let us now examine the authors' dramatic declaration in the abstract: "We also prove that the non-probabilistic info-gap robustness is a proxy for the probability of success."

Because, the question arising is this: What exactly have the authors proved through their solution of a trivially simple problem, using for the purposes of this exercise info-gap's "robustness function"?

They proved that: if you are dealing with an exceedingly trivial case, you stand to obtain the momentous result that in this trivial case "...the non-probabilistic info-gap robustness is a proxy for the probability of success ...". Surely, this should have been obvious to begin with!

Because, observe that even a superficial examination immediately reveals that:

Since the sets A(q), q∈Q, are nested and expand with f(q), the probability of success, P(r* ≤ r(q,u)) = P(g(u) ≤ f(q) - r*), is non-decreasing with f(q) under any probability distribution imposed on u.

In other words, the ranking of decisions is invariant with the probability structure imposed on the parameter set U, which is an immediate indication of how trivial the problem under consideration is:

The probability structure on U has no impact whatsoever on the ranking of the decisions according to their global robustness with respect to the constraint r* ≤ r(q,u)!!!

Surely, this can only mean that the problem under consideration must be highly "degenerate", hence downright trivial. For, in what other case (other than a trivial one) would the ranking of decisions according to their robustness on U with respect to r* ≤ r(q,u) be totally independent of the probability distribution of u?
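This invariance is easy to check numerically. Here is a minimal Monte Carlo sketch, again under the hypothetical setup r(q,u) = f(q) - g(u) with g(u) = u²: whatever distribution u is given, the probability of success P(g(u) ≤ f(q) - r*) is non-decreasing in f(q), because the acceptance sets A(q) are nested.

```python
# A minimal sketch: the probability of success ranks the decisions by f(q)
# under ANY distribution of u, because the sets A(q) are nested.
import random

def p_success(f_q, draw_u, r_star=0.0, n=100_000):
    """Monte Carlo estimate of P(g(u) <= f_q - r*), with g(u) = u**2."""
    return sum(draw_u() ** 2 <= f_q - r_star for _ in range(n)) / n

f_vals = [1.0, 2.0, 4.0]                          # f(q) for three decisions
distributions = [lambda: random.uniform(-3, 3),   # two arbitrary distributions on U
                 lambda: random.gauss(0, 1.5)]
for draw_u in distributions:
    print([round(p_success(f, draw_u), 3) for f in f_vals])
# The probabilities differ across the two distributions, but their order never does.
```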

So, all that's left to say about the result "...the non-probabilistic info-gap robustness is a proxy for the probability of success ..." is: what is the wonder?!! If you pose a trivial problem, chances are that the solution to this trivial problem will be trivially simple!

And this further underscores the question raised above: what advantage can there possibly be in complicating the analysis by introducing an info-gap robustness model into the picture?

What do we stand to gain from foisting on the above exceedingly simple analysis two rather complicated info-gap models such as these?


The first is the neighborhood structure proposed in the article and the second is the associated info-gap robustness model.

Suffice it to say that in these models h represents the horizon of uncertainty (α); Xc represents the function f; |Xr| represents the function g; (k1,k3,F) represents u; and δ represents r*. Decision q is represented by three parameters that are not shown explicitly in these models (formally, they are implicit arguments of |Xc|).

And to shed more light on the Proxy Theorems issue, I want to point out the following.

On page 12 of the article we read:

In this section we discuss a theorem which asserts that the non-probabilistic info-gap robustness is monotonically related to the probability that the non-linear system satisfies the performance requirement. This 'proxy property' is important since, when it holds, it implies that a computational model can be chosen which maximizes the probability of success, without knowing the probability distribution of the uncertain variables. The value of maximum probability will remain unknown.

It is important to take note, then, that, as stated very clearly in Davidovitch (2009), proxy theorems are expected to be "very rare":

We have shown that the definition of strong proxy theorems discussed by Ben-Haim (2007), is very restrictive, and that when the uncertainty is multi-dimensional, strong proxy theorems are expected to be very rare. Then we shall prove that even this weaker definition does not hold for a wide family of common problems.
Davidovitch (2009, p. 137)
PhD Thesis, Department of Mechanical Engineering, Technion

Since the technical issues are discussed in detail in Davidovitch (2009), I shall not elaborate on them here.

I should, however, call attention to the following important issue, which is discussed neither in Davidovitch (2009) nor in Ben-Haim (2007):

The whole point about info-gap "Proxy Theorems" is that they typically hold for .... trivial robustness problems. This is so because these theorems impose extremely limiting requirements on the elements of the robustness model, requirements that invariably simplify the global robustness problem under consideration to the point of rendering it trivial.

As we just saw, trivial problems of this type can be solved without any appeal to a local model of robustness such as info-gap's robustness model. Differently put, such a model only complicates an otherwise trivial robustness analysis.

The bottom line is then that info-gap scholars are yet to identify a non-trivial robustness problem that would benefit from existing Proxy Theorems. Should they do so, I undertake to post such a problem on this page immediately.

And this leads me straight to an examination of a number of cryptic statements made in this article about worst-case.

The case of the worst case

To emphasize info-gap's purported unique ability to deal with severe uncertainty, the article repeatedly calls attention to the fact that the horizon of uncertainty α is unbounded (above). But in their zeal to underscore this fact, the authors obscure the relation between worst-case analysis and info-gap's robustness analysis. Thus, in the abstract of the article we read:

The worst case for the uncertain non-linearity is not known.

Then, on page 4, we read:

Moreover, we assume that a worst case for this uncertainty is not known

And on page 6:

These error estimates do not constitute knowledge of a worst case.

And on page 8:

Since we do not know the magnitude of error --- no realistic worst case is known --- the horizon of uncertainty is unbounded.

And on page 14:

It is assumed that the worst case for the uncertainties is not known.

My point is then that these statements --- which incidentally are of a piece with similar statements in the info-gap literature, including the three primary texts (Ben-Haim 2001, 2006, 2010) --- obscure not only the fact that info-gap's robustness analysis is indeed a (local) worst-case analysis, but also the fact that info-gap's robustness model is in fact a simple Maximin model.

The important point to note here is that the worst-case analysis conducted by info-gap decision theory is local rather than global. That is, info-gap's worst-case analysis is not conducted on the parameter space U. It is conducted, for each α ≥ 0, only on the neighborhood U(α,û), namely the neighborhood of radius α around the estimate û. It follows that the fact that α is unbounded above has no bearing whatsoever on the fact that, for each value of α, a worst-case analysis is conducted on the neighborhood U(α,û).

But more than this, info-gap's worst-case analysis is conducted with respect to a constraint. This means that a worst case always exists: there are at most two cases --- either the constraint is satisfied or it is violated. Since there are at most two "cases", there is always a worst case (even if α is unbounded).

I should add that the big fuss that is made in info-gap publications about the fact that the horizon of uncertainty α is allowed to be unbounded (above) in fact borders on the risible. As those who are familiar with this literature would no doubt know, the momentous point that these repeated assertions are supposed to make is that an info-gap model is capable of dealing with the severest uncertainty. One imagines, therefore, that this is the rationale for the repeated assertions that an info-gap model does not posit a worst case: presumably, given that the horizon of uncertainty α is unbounded (above), there would not be a worst case.

What is so amusing about these repeated assertions is that for all the song and dance about the horizon of uncertainty α being unbounded (above), when it comes to conducting the robustness analysis itself, the whole business of the horizon of uncertainty α being unbounded (above) comes to naught! Because, what does info-gap decision theory prescribe doing? It prescribes conducting a worst-case analysis only on the neighborhood U(α,û), for one given α at a time, namely in the neighborhood of radius α around the estimate û. So, methodologically speaking, it does not make one jot of difference whether the horizon of uncertainty α is bounded or unbounded.

And to further illustrate this point, keep in mind that the objective of info-gap's robustness model is to .... maximize the value of α. That is:

max {α≥0: r*≤ r(q,u),∀u∈U(α,û)}

So, at first glance you would no doubt ask:

How can one possibly maximize the value of α if --- as claimed by the authors --- α is unbounded above?

The answer to this question is simplicity itself. The apparent contradiction does not even arise once you realize that, regardless of whether α is or is not unbounded above, info-gap's robustness analysis is driven by the robustness constraint. That is, in the info-gap robustness analysis, the constraint r* ≤ r(q,u), ∀u∈U(α,û), prevents the admissible values of α from increasing indefinitely. Consequently, the robustness of decision q is the largest admissible value of α with respect to the performance constraint imposed on q.

So much then for the purported merit of the horizon of uncertainty α being unbounded above!

And as a final note about the worst-case issue, I should point out the following. What is truly remarkable about the repeated (implicit) denials in this article that info-gap's robustness analysis is a worst-case analysis is that .... in a number of papers by Hemez and Ben-Haim (2002, 2003, 2004), the precise opposite is maintained. Indeed, these papers state categorically that info-gap's robustness analysis is ... a worst-case analysis (see Review 23, Review 24, Review 25).

For instance, the caption to Figure 6-2 in Hemez, Ben-Haim, and Cogan (2002) reads as follows (emphasis added):

Worst-case info-gap robustness.

And in Hemez and Ben-Haim (2003, p. 10) we read (emphasis added):

For the application, the optimization searches for the model that yields the worst possible test-analysis correlation metric R(q;u) at each uncertainty level.

And the caption to Figure 7 in Hemez and Ben-Haim (2004) reads as follows (emphasis added):

Results of the worst-case info-gap robustness analysis.

Indeed, is it possible to hold otherwise? Because, take a look at info-gap's generic robustness model:

max {α≥0: r*≤ r(q,u),∀u∈U(α,û)} , q∈ Q

Clearly the ∀u∈U(α,û) clause indicates in no uncertain terms that a worst-case analysis is conducted on the premises: all the values of u∈U(α,û) must satisfy the constraint --- hence also the worst u∈U(α,û). Conversely, if the worst u∈U(α,û) satisfies the constraint, then so do all the other elements of U(α,û).
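In code, this equivalence is a one-liner: testing the constraint for every u in U(α,û) is exactly a worst-case test on that neighborhood. Here is a minimal sketch, with a hypothetical discretized interval neighborhood and the same hypothetical r(q,u) = f(q) - g(u) as in the sketches above.

```python
# A minimal sketch: the "for all u" clause IS a worst-case test.
import numpy as np

def constraint_holds_on_neighborhood(r, q, u_hat, alpha, r_star, n=1001):
    """r* <= r(q,u) for all u in U(alpha, u_hat)  <=>  r* <= worst case of r."""
    u = np.linspace(u_hat - alpha, u_hat + alpha, n)   # grid over the neighborhood
    worst = r(q, u).min()                              # the worst u in U(alpha, u_hat)
    return r_star <= worst

r = lambda q, u: 2.0 * q - u ** 2                      # hypothetical r(q,u) = f(q) - g(u)
print(constraint_holds_on_neighborhood(r, q=1.0, u_hat=0.0, alpha=1.0, r_star=0.5))  # True
print(constraint_holds_on_neighborhood(r, q=1.0, u_hat=0.0, alpha=2.0, r_star=0.5))  # False
```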

In short, the "worst case" story is as follows:

Beginning of Story.

    For each α≥0, one at a time, a worst case analysis of the constraint r* ≤ r(q,u) is conducted on U(α,û).

End of Story.

And for a second opinion on this issue --- keeping in mind that info-gap's robustness model is a simple Radius of Stability model --- consider the following statements (emphasis added):

We then introduce the stability radius as a measure of the smallest perturbation for which the perturbed system no longer satisfies the constraints. This is a worst case robustness measure expressed by a single number that provides an efficient tool for assessing the robustness of the stability of a given system.
Hinrichsen and Pritchard (2005, p. 519)

The stability radius is a worst case measure of robustness. It measures the size of the smallest perturbation for which the perturbed system is either not well-posed or does not have spectrum in Cg.

Hinrichsen and Pritchard (2005, p. 585)

And just in case you are not familiar with Hinrichsen and Pritchard's (1986, 2005) seminal work on the Radius of Stability in the field of control theory, consider this:

Robustness analysis has played a prominent role in the theory of linear systems. In particular the state-space approach via stability radii has received considerable attention, see [HP2], [HP3], and references therein. In this approach a perturbation structure is defined for a realization of the system, and the robustness of the system is identified with the norm of the smallest destabilizing perturbation. In recent years there has been a great deal of work done on extending these results to more general perturbation classes, see, for example, the survey paper [PD], and for recent results on stability radii with respect to real perturbations.

Paice and Wirth (1998, p. 289)

Here, HP2 = Hinrichsen and Pritchard (1990), HP3 = Hinrichsen and Pritchard (1992), and PD = Packard and Doyle (1993).

So much then for the worst case issue.

Summary and conclusions

The obvious conclusion to be drawn from this short discussion is that the info-gap decision model, proposed by the authors for the solution of the trivially simple problem studied in this article, is at best redundant.

Because, consider what this article accomplishes.

Granting that the objective here is to illustrate the capabilities of info-gap's "robustness function", all that this article does is demonstrate the sledgehammer approach: how to solve a trivially simple problem by means of an unduly complicated model/procedure.

The trouble is, though, that this proposition not only contributes nothing to our understanding of the robustness problem examined in the article; it actually obscures from view how trivially simple this problem really is, and hence how trivially simple its solution is!

And the greater trouble is that analysts, practitioners etc. who are not versed in decision theory, optimization theory, and related areas of expertise, may not be able to evaluate and judge these propositions for what they are!

And this remark leads me straight to comment on the distorted view that this article gives of the state of the art. Because, readers who are not versed in decision theory, optimization theory, robust optimization, control theory, etc. may not be able to tell that what is truly absent from this article are the basic facts about info-gap decision theory.

The missing links

The real facts about info-gap decision theory and the analysis presented in Ben-Haim and Cogan (2011) are these:

  • Info-gap's robustness model is a simple instance of the well-established Radius of Stability model (circa 1960), hence a model of local robustness.
  • Info-gap's robustness analysis is a (local) worst-case analysis; indeed, info-gap's robustness model is a simple Maximin model.
  • The robustness problem studied in the article is trivial: its global robustness analysis is solved by inspection via the Size Criterion, with no appeal whatsoever to info-gap's robustness model.

Other Reviews

  1. Ben-Haim (2001, 2006): Info-Gap Decision Theory: decisions under severe uncertainty.

  2. Regan et al (2005): Robust decision-making under severe uncertainty for conservation management.

  3. Moilanen et al (2006): Planning for robust reserve networks using uncertainty analysis.

  4. Burgman (2008): Shakespeare, Wald and decision making under severe uncertainty.

  5. Ben-Haim and Demertzis (2008): Confidence in monetary policy.

  6. Hall and Harvey (2009): Decision making under severe uncertainty for flood risk management: a case study of info-gap robustness analysis.

  7. Ben-Haim (2009): Info-gap forecasting and the advantage of sub-optimal models.

  8. Yokomizo et al (2009): Managing the impact of invasive species: the value of knowing the density-impact curve.

  9. Davidovitch et al (2009): Info-gap theory and robust design of surveillance for invasive species: The case study of Barrow Island.

  10. Ben-Haim et al (2009): Do we know how to set decision thresholds for diabetes?

  11. Beresford and Thompson (2009): An info-gap approach to managing portfolios of assets with uncertain returns

  12. Ben-Haim, Dacso, Carrasco, and Rajan (2009): Heterogeneous uncertainties in cholesterol management

  13. Rout, Thompson, and McCarthy (2009): Robust decisions for declaring eradication of invasive species

  14. Ben-Haim (2010): Info-Gap Economics: An Operational Introduction

  15. Hine and Hall (2010): Information gap analysis of flood model uncertainties and regional frequency analysis

  16. Ben-Haim (2010): Interpreting Null Results from Measurements with Uncertain Correlations: An Info-Gap Approach

  17. Wintle et al. (2010): Allocating monitoring effort in the face of unknown unknowns

  18. Moffitt et al. (2010): Securing the Border from Invasives: Robust Inspections under Severe Uncertainty

  19. Yemshanov et al. (2010): Robustness of Risk Maps and Survey Networks to Knowledge Gaps About a New Invasive Pest

  20. Davidovitch and Ben-Haim (2010): Robust satisficing voting: why are uncertain voters biased towards sincerity?

  21. Schwartz et al. (2010): What Makes a Good Decision? Robust Satisficing as a Normative Standard of Rational Decision Making

  22. Arkadeb Ghosal et al. (2010): Computing Robustness of FlexRay Schedules to Uncertainties in Design Parameters

  23. Hemez et al. (2002): Info-gap robustness for the correlation of tests and simulations of a non-linear transient

  24. Hemez et al. (2003): Applying information-gap reasoning to the predictive accuracy assessment of transient dynamics simulations

  25. Hemez, F.M. and Ben-Haim, Y. (2004): Info-gap robustness for the correlation of tests and simulations of a non-linear transient

  26. Ben-Haim, Y. (2007): Frequently asked questions about info-gap decision theory

  27. Sprenger, J. (2011): The Precautionary Approach and the Role of Scientists in Environmental Decision-Making

  28. Sprenger, J. (2011): Precaution with the Precautionary Principle: How does it help in making decisions

  29. Hall et al. (2011): Robust climate policies under uncertainty: A comparison of Info-Gap and RDM methods

  30. Ben-Haim and Cogan (2011) : Linear bounds on an uncertain non-linear oscillator: an info-gap approach

  31. Van der Burg and Tyre (2011) : Integrating info-gap decision theory with robust population management: a case study using the Mountain Plover

  32. Hildebrandt and Knoke (2011) : Investment decisions under uncertainty --- A methodological review on forest science studies.

  33. Wintle et al. (2011) : Ecological-economic optimization of biodiversity conservation under climate change.

  34. Ranger et al. (2011) : Adaptation in the UK: a decision-making process.
