The Spin Stops Here!
Decision-Making Under Severe Uncertainty  


Frequently Asked Questions about Info-Gap Decision Theory


A PDF version is available.

At the end of 2006 I launched a campaign to contain the spread of Info-Gap decision theory in Australia. I took this radical step because I am convinced that Info-Gap decision theory is fundamentally flawed. In fact, this theory is the best example of a classic voodoo decision theory I know. The compilation of FAQs about this theory posted here is part of my campaign.

In case you are unfamiliar with it, my criticism of Info-Gap Decision Theory is now well documented in many articles and presentations, including

WIKIPEDIA article on Info-Gap Decision Theory

Also, the criticism is very harsh.

The harshness is proportional to the severity of the flaws in Info-Gap Decision Theory and the level of promotion that it receives in Australia and elsewhere, for instance the AEDA 5-day workshop on Info-Gap Applications in Ecological Decision Making (University of Queensland, September 15-19, 2008).

The degree of harshness often increases in response to additional misguided arguments put forward by Info-Gap proponents in repeated futile attempts to explain away the basic flaws -- for instance, the repeated attempts to explain why Info-Gap's robustness model is not a Maximin model.

The questions about Info-Gap Decision Theory that I take up on this page go to the heart of this theory: its characteristic features, mode of operation, capabilities etc. Hence, they bear directly on its validity and the justification (in fact lack thereof) for using it and should therefore be of interest to those who use/promote this theory, as well as to those who contemplate using/promoting it.

I have assembled this set of questions over the past five years and have in fact discussed them in various forums: conferences, workshops, informal discussions, articles, my web site. Still, I do not presume that I have thus anticipated all the questions that might be raised about this topic. So, if after reading this discussion you still have questions about Info-Gap Decision Theory that are not dealt with here, feel free to communicate them to me and I shall gladly add them to the current list.

However, before you do this, please check the list of FAQs on my agenda.

Note that the focal point of this theory is a -- rather simple -- mathematical model. This means of course that many of the questions that Info-Gap gives rise to are technical in nature and must therefore be given a quantitative treatment. Still, some of the truly important questions about this theory -- those that expose its failure -- are not technical, but rather conceptual, and can be described graphically.

Also note that this is an on-going, long-term, work-in-progress project. So, I do not have a completion date in mind.


The questions:

The FAQs and the answers are listed in the order in which they were composed. For your convenience a subject classification is also provided.

  • FAQ-1:
  • What is Info-Gap Decision Theory?
  • FAQ-2:
  • On what grounds do you claim that Info-Gap Decision Theory is fundamentally flawed?
  • FAQ-3:
  • How is it that you are the only analyst who claims that Info-Gap is so flawed?
  • FAQ-4:
  • In what sense is Info-Gap's robustness model radically different from existing classic models of robustness?
  • FAQ-5:
  • On what grounds do you claim that Info-Gap's robustness model is not new?
  • FAQ-6:
  • How do you justify your claim that Info-Gap Decision Theory is a voodoo decision theory?
  • FAQ-7:
  • What is Info-Gap's definition of severe uncertainty?
  • FAQ-8:
  • What is Info-Gap's definition of robustness?
  • FAQ-9:
  • What are the main ingredients of an Info-Gap decision-making model?
  • FAQ-10:
  • What is Info-Gap's uncertainty model?
  • FAQ-11:
  • What is Info-Gap's generic robustness model?
  • FAQ-12:
  • What is Info-Gap's generic decision-making model?
  • FAQ-13:
  • In what sense is Info-Gap's robustness model inherently local?
  • FAQ-14:
  • What is the significance of the local nature of Info-Gap's robustness model?
  • FAQ-15:
  • What is the fundamental flaw in Info-Gap's uncertainty model?
  • FAQ-16:
  • What is the fundamental flaw in Info-Gap's generic robustness model?
  • FAQ-17:
  • What is the fundamental flaw in Info-Gap's decision-making model?
  • FAQ-18:
  • What is the exact relationship between Info-Gap's robust model and Wald's Maximin model?
  • FAQ-19:
  • What is the significance of the proof that Info-Gap's robust model is a Maximin model in disguise?
  • FAQ-20:
  • What are the errors in Ben-Haim's argument that Info-Gap's robust model is not a Maximin model?
  • FAQ-21:
  • What is Info-Gap's opportuneness model?
  • FAQ-22:
  • Is "opportuneness" a new concept?
  • FAQ-23:
  • What is the role and place of Info-Gap Decision theory in Robust Optimization?
  • FAQ-24:
  • How would you describe Info-Gap's robustness and opportuneness in the language of classical decision theory?
  • FAQ-25:
  • How is it that no references can be found in the Info-Gap literature to the thriving field of "Robust Optimization"?
  • FAQ-26:
  • How exactly does Info-Gap "deal" with the severity of the uncertainty?
  • FAQ-27:
  • Why do you make it a rule to assume that the "safe" area around the estimate is minutely small?
  • FAQ-28:
  • What question does Info-Gap's robustness model actually address?
  • FAQ-29:
  • How do you explain the emergence of Info-Gap as a (presumably) novel methodology for decision-making under severe uncertainty?
  • FAQ-30:
  • Are there any indications in the Info-Gap literature attesting to an awareness of the GIGO principle?
  • FAQ-31:
  • Isn't some of your criticism of Info-Gap, notably your dubbing it a "voodoo decision theory", unfair, indeed hyperbolic?
  • FAQ-32:
  • What is your definition of voodoo decision theory?
  • FAQ-33:
  • What is the meaning of the term "information-gap" in the framework of Info-Gap Decision Theory?
  • FAQ-34:
  • What is the difference between "robust-optimal" and "robust-satisficing" decisions in the framework of Info-Gap Decision Theory?
  • FAQ-35:
  • What is the significance of Info-Gap allowing its complete region of uncertainty to be unbounded?
  • FAQ-36:
  • How do Info-Gap's proponents justify the use of the theory in cases where the complete region of uncertainty is unbounded?
  • FAQ-37:
  • Is the true value of u more likely to be in the neighborhood of the estimate û?
  • FAQ-38:
  • Can Info-Gap's uncertainty model be cast as a simple "probabilistic" model?
  • FAQ-39:
  • What is Info-Gap's solution to the famous Two-Envelope Puzzle?
  • FAQ-40:
  • Isn't it more accurate to view Info-Gap's decision-making model as a representation of an optimization problem rather than a satisficing problem?
  • FAQ-41:
  • Isn't there a systemic error in Info-Gap's representation of the regions of uncertainty of functions?
  • FAQ-42:
  • What is the relationship between Info-Gap's decision-making model and the robust optimization models of Ben-Tal and Nemirovski (1998, 2002)?
  • FAQ-43:
  • Is it really so difficult to quantify severe uncertainty probabilistically?
  • FAQ-44:
  • How do you "define" robustness probabilistically?
  • FAQ-45:
  • Can you give an example of a "global" approach to robustness?
  • FAQ-46:
  • Don't you miss the message and intent of the Info-Gap approach (e.g. robustness of solutions and local nature of the analysis)?
  • FAQ-47:
  • Can the fundamental flaws in Info-Gap decision theory be fixed?
  • FAQ-48:
  • Does Info-Gap decision theory deserve the level of attention that you give it?
  • FAQ-49:
  • Are there any Info-Gap software packages?
  • FAQ-50:
  • Are there any general purpose solution methods for the generic problem posed by Info-Gap's decision-making model?
  • FAQ-51:
  • From an (algorithmic) Maximin point of view, isn't α just a nuisance decision-variable that can be handled by a line search?
  • FAQ-52:
  • What is your favorite example of an Info-Gap model/analysis in action?
  • FAQ-53:
  • Will ardent Info-Gap proponents ever concede that Info-Gap's robustness model is a Maximin model?
  • FAQ-54:
  • Aren't your language and tone too cynical?
  • FAQ-55:
  • Don't Info-Gap proponents essentially argue that "anything goes" under severe uncertainty?
  • FAQ-56:
  • In what sense is Wald's Maximin model much more powerful and general than Info-Gap's decision-making model?
  • FAQ-57:
  • Do you plan to compile these FAQs into a book?
  • FAQ-58:
  • On what grounds is it claimed that Info-Gap's robustness analysis and a Maximin analysis may yield different results?
  • FAQ-59:
  • Why do Info-Gap proponents persist in using the wrong Maximin formulation in their analysis of the relationship between Maximin and Info-Gap?
  • FAQ-60:
  • What is the PROBLEM that Info-Gap's decision-making MODEL represents?
  • FAQ-61:
  • How does Info-Gap decision theory distinguish between different levels of uncertainty?
  • FAQ-62:
  • Isn't Info-Gap's uncertainty model superfluous in the one-dimensional case?
  • FAQ-63:
  • What are the differences/similarities between Info-Gap's tradeoff curves and the famous Pareto Frontiers?
  • FAQ-64:
  • Can Info-Gap's generic tradeoff problem be formulated as a Pareto Optimization problem?
  • FAQ-65:
  • What exactly is "Knightian Uncertainty" and in what way is it different from "conventional" uncertainty?
  • FAQ-66:
  • Do you know of any convincing example where the Info-Gap complete region of uncertainty is unbounded?
  • FAQ-67:
  • Can't Info-Gap's "localness" flaw be fixed by means of a robustness analysis at a number of different estimates spread over the complete region of uncertainty?
  • FAQ-68:
  • What exactly is behind Info-Gap's claim that decisions designed solely to optimize performance have no robustness?
  • FAQ-69:
  • Can you give a counter-example to the claim that decisions that optimize "reward" have no robustness to "info-gaps"?
  • FAQ-70:
  • On what grounds does Info-Gap decision theory claim that utility maximization is entirely unreliable?
  • FAQ-71:
  • Can you elucidate your sketch of Info-Gap's "No Man's Land" Syndrome?
  • FAQ-72:
  • What is the difference between "robustness to deviation from a given (nominal) value" and "robustness to severe uncertainty"?
  • FAQ-73:
  • Can you give a complete illustrative example of an info-gap model?
  • FAQ-74:
  • Does Info-Gap robustness represent likelihood of safe performance?
  • FAQ-75:
  • What are the ramifications of the argument that Info-Gap robustness does not represent likelihood of events?
  • FAQ-76:
  • Have there been any attempts to correct the fundamental flaws in Info-Gap decision theory?
  • FAQ-77:
  • What is the difference between "substantial uncertainty", "high uncertainty" and "severe uncertainty"?
  • FAQ-78:
  • Why is it erroneous to impute "likelihood'' to Info-Gap's robustness?
  • FAQ-79:
  • How is it that Info-Gap decision theory is so laconic about the estimate û?
  • FAQ-80:
  • In what sense is the estimate û "best"?
  • FAQ-81:
  • How is it that Info-Gap decision theory fails to prescribe a sensitivity analysis with respect to the estimate û?
  • FAQ-82:
  • Is info-Gap's measure of robustness a reinvention of the good old "stability radius"?

    Subject classification

    Local:1,13,14,23,35,37,45,46,70,71,76
    Flaws:2,3,15,16,17,47,67,71,74,76,78
    Voodoo:6,31,32,37,48,57,61,72,75,76,77,78
    General:1,2,4,73,74,77,78
    Maximin:4,5,18,19,20,29,47,51,53,56,58,59,74
    Satisficing:34,40,60
    Robustness:1,8,9,11,12,18,19,20,24,28,44,45,46,61,62,70,72,74,76
    Worst Case:20,29,53
    Severe Uncertainty:1,7,10,26,29,31,36,43,52,55,60,61,62,65,67,70,71,72,74,76,77,78
    Pareto Optimization:63,64,67
    Robust Optimization:1,2,23,25,42,70
    Knightian Uncertainty:1,52,65,74
    Garbage In - Garbage Out:6,30,31

    The Answers:

    [Figure: the complete region of uncertainty, stretching from -∞ to ∞, with a minute "safe" region U(α(d,û),û) around the estimate û, flanked on both sides by No Man's Land.]
    One need hardly point out that it is practically impossible to faithfully depict the full dimensions of this absurdity, as no picture would be able to show an infinitesimally small "safe" region, namely U(α(d,û),û), that would be visible to the naked eye.

    So the combined effect of the unbounded region of uncertainty and the local robustness analysis is that Info-Gap's No Man's Land constitutes practically the entire region of uncertainty.



  • FAQ-43: Is it really so difficult to quantify severe uncertainty probabilistically?

    Answer-43: This is indeed the question that needs to be asked first and foremost about Info-Gap. And to be sure, this is precisely the question that I am asked most often, mainly by statisticians. I might add that this question was raised and debated at some length during a question/answer period following a student's presentation in our department on November 28, 2008. Some of my colleagues go so far as to argue that in the context of a mathematical model, uncertainty should "always" be described by some sort of a probability model.

    The point is that in those situations to which Info-Gap is typically applied, coming up with a rough probabilistic quantification of the parameter of interest is no more difficult a task than venturing a wild guess about it. The question of course is which is the better practice. Is it preferable (does it more accurately capture the situation?) to "wild-guess" a point estimate of the parameter of interest and conduct a robustness analysis around it -- as done by Info-Gap decision theory? Or, is it better to "wild-guess" a probabilistic model for this parameter?

    It seems to me that the answer to this question is problem-oriented, namely it may vary from case to case. But most of all, it must be remembered that constructing a probabilistic model for the parameter of interest does not in itself guarantee robust decisions. You would still have to quantify robustness in the framework of the probabilistic model (see the discussion in FAQ-44).

    Indeed, it is not uncommon to have "hard" constraints such as r(d,u) ≥ r*, ∀u∈U, even within a probabilistic model where u represents the realization of a random variable whose probability distribution function on U is given. The fact that such a constraint can be described probabilistically, e.g. Pr[r(d,u) ≥ r*] = 1, is beside the point.

    Be that as it may, my criticism of Info-Gap decision theory is not directed at it being a non-probabilistic approach to the modeling and analysis of severe uncertainty. Not at all.

    My criticism is directed at the manner in which it deals -- or rather fails to deal -- with the severity of the uncertainty.


  • FAQ-44: How do you "define" robustness probabilistically?

    Answer-44: There are various ways to define robustness probabilistically.

    For example, in the framework of the "satisficing" problems addressed by Info-Gap decision theory, we can define the robustness of decision d∈D as follows:

    ρ(d) := Pr[r(d,u)≥r*|d]

    where u denotes the random variable representing the true value of u, and Pr[e|d] denotes the conditional probability of event e given that the decision is d.

    Note that this notation suggests that the probability distribution of u may depend on d -- which is not allowed by Info-Gap's generic uncertainty model.

    There are, of course, other ways to do it. For instance, consider the following two cases:

    ρ* := max {E[r(d,u)|d]: d∈D, Pr[r(d,u)≥r*|d] = 1}

    ρº := max {Pr[r(d,u)≥rº|d]: d∈D, Pr[r(d,u)≥r*|d] = 1}

    where E[a|b] denotes the conditional expected value of a given b; and rº is some number greater than r*. Observe that under standard regularity conditions these two models are Maximin models in disguise.

    It should be noted that seemingly similar formulations of robust optimization models may yield quite different results, some of which are counter intuitive (see for example Sniedovich (1979) -- yes, this date is correct!).
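    To make the probabilistic definition concrete, here is a minimal Monte Carlo sketch of ρ(d) := Pr[r(d,u) ≥ r*|d]. The reward function, the distribution of u, and the numbers used are hypothetical illustrations, not taken from the Info-Gap literature; the sketch assumes the distribution of u is available (or wild-guessed, as discussed in FAQ-43).

```python
import random

def mc_robustness(r, d, sample_u, r_star, n=100_000):
    """Monte Carlo estimate of rho(d) = Pr[r(d,u) >= r* | d].

    sample_u(d) draws u from its (assumed) distribution, which --
    unlike in Info-Gap's generic uncertainty model -- is allowed
    to depend on the decision d."""
    hits = sum(r(d, sample_u(d)) >= r_star for _ in range(n))
    return hits / n

# Hypothetical ingredients: reward r(d,u) = d*u, u uniform on [0,2],
# performance requirement r* = 0.5, decision d = 1.
random.seed(0)
rho = mc_robustness(lambda d, u: d * u, 1.0,
                    lambda d: random.uniform(0.0, 2.0), 0.5)
# The exact value is Pr[u >= 0.5] = 0.75; the estimate should be close.
```

    Note that the estimate is only as good as the wild-guessed distribution feeding it, which is precisely the point made in FAQ-43.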


  • FAQ-45: Can you give an example of a "global" approach to robustness?

    Answer-45: Sure. This will take us more than forty years back!

    Perhaps the simplest, most intuitive "global" approach to robustness is to "sample" the complete region of uncertainty and determine the number of sample points where a decision satisfies the performance requirement. The technical term used in the robust optimization literature to designate this technique is: scenario generation.

    The following picture pits the local approach to robustness adopted by Info-Gap decision theory against the simple global approach to robustness based on scenario generation:

    [Figure: two side-by-side pictures -- the local approach to robustness a la Info-Gap decision theory (left) vs. the simple global approach to robustness via scenario generation (right).]

    In this illustration the scenarios are generated by drawing a grid on the island. Each white node on the grid represents a value (scenario) of the parameter of interest. There are 59 scenarios in this example.

    So all we have to do to evaluate the robustness of decision d with respect to the performance constraint r(d,u)≥r* is to count how many points on the grid satisfy this constraint:

    ρ(d) := Σ_{j=1}^{N} (r(d,uj) ◊ r*)

    where N denotes the number of scenarios, uj denotes the value of scenario j, and ◊ is the binary function such that a ◊ b = 1 iff a ≥ b and a ◊ b = 0 otherwise.

    For example, consider the two decisions d' and d'' whose respective performances are as follows:

    [Figure: two grids, one for decision d' and one for decision d'', giving ρ(d') = 28 and ρ(d'') = 29]

    where the white dots represent scenarios that satisfy the performance requirement and gray dots represent scenarios that violate it.

    In fact, it can prove effective to assign weights to scenarios, whereupon the robustness of decision d would be defined as follows:

    ρ(d) := Σ_{j=1}^{N} (r(d,uj) ◊ r*)·w(j)

    where w(j) denotes the weight associated with scenario j.
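    A minimal sketch of this scenario-counting definition of robustness, in both its unweighted and weighted variants, might look as follows. The grid, reward function and threshold are hypothetical; they merely illustrate the counting mechanics, not any particular model from the literature.

```python
def scenario_robustness(r, d, scenarios, r_star, weights=None):
    """rho(d) = sum over scenarios u_j of (r(d,u_j) diamond r*) * w(j),
    where the diamond indicator is 1 iff r(d,u_j) >= r*.
    With weights omitted this is a plain count of satisfied scenarios."""
    if weights is None:
        weights = [1] * len(scenarios)
    return sum(w for u, w in zip(scenarios, weights) if r(d, u) >= r_star)

# Hypothetical example: 11 grid scenarios u = 0, 1, ..., 10,
# reward r(d,u) = d - u, requirement r* = 0, decision d = 6.
grid = [float(j) for j in range(11)]
reward = lambda d, u: d - u
rho = scenario_robustness(reward, 6.0, grid, 0.0)      # scenarios u = 0..6 qualify
rho_w = scenario_robustness(reward, 6.0, grid, 0.0,
                            weights=[0.5] * 11)        # same count, halved weights
```

    The point of the sketch is that the scenarios cover the complete region of uncertainty, not merely a neighborhood of an estimate.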

    Discussions on the importance of scenario selection for a proper representation of the complete region of uncertainty in robust optimization can be found in Rustem and Howe (2002) and Kouvelis and Yu (1997).

    For new global approaches to robustness, see Ben-Tal et al (2006, 2009).


  • FAQ-46: Don't you miss the message and intent of the Info-Gap approach (e.g. robustness of solutions and local nature of the analysis)?

    Answer-46: Definitely not. I am not missing any Info-Gap message. I merely call a spade a spade.

    The central issues here are of course "robustness" and the "localness" of the solutions obtained by Info-gap's analysis.

    • I fully accept that there are cases where no robust decisions can be obtained no matter what. Therefore, to expect Info-Gap, or for that matter any other methodology, to generate robust decisions for such cases would be naive, indeed unjustified.

      But, this is not the point of my criticism. My criticism of Info-Gap decision theory is not that it may fail to deliver in certain (hard) cases. My criticism is that it is flawed methodologically. That is, that it cannot, as a matter of principle, be counted on to deliver, because, as I have been arguing all along, its approach to and definition of robustness are fundamentally flawed for decision-making under severe uncertainty. This, as I have shown, is due to its robustness model prescribing a local analysis which in effect amounts to ignoring the severity of the uncertainty.

      Indeed, the contention (by Info-Gap proponents) that Info-Gap's analysis "can be helpful" -- the implication being that "sometimes" it can deliver robust solutions -- only brings out (in an amusing way) its dismal failure as a methodology. After all, we do not bestow the title "methodology" on a "plan of attack" whose proponents can only say, half-heartedly and when hard pressed, that it can sometimes deliver solutions. Such solutions are flukes and must be justified on a case-by-case basis.

      In any event, the picture depicting Info-gap's approach is perfectly clear and no amount of rhetoric can change it:

      [Figure: the tiny safe region U(α(d,û),û) around the estimate, flanked on both sides by No Man's Land.]


    • Info-Gap's local robustness analysis is what Info-Gap is all about and no amount of rhetoric about "Knightian Uncertainty" will make any difference:

      [Figure: the tiny safe region U(α(d,û),û) around the estimate, flanked on both sides by No Man's Land.]

    In short, Info-Gap's contention is crystal clear; you cannot miss it. It reads as follows:

      wild guess ---->     Info-Gap Robustness Model     ----> robust decision

    α(d,û):= max {α≥0: r(d,u) ≥ r*, ∀u∈U(α,û)} , d∈D

    û = wild guess of the true value of u.

    My contention is therefore equally clear: a method propounding such contentions is fundamentally flawed.


  • FAQ-47: Can the fundamental flaws in Info-Gap decision theory be fixed?

    Answer-47: I can't see how this can be done.

    The only way to amend Info-Gap decision theory is to overhaul its domain of operation. That is, we would have to assume that the estimate û is of good quality, namely reliable, and that the true value of u is in the neighborhood of û.

    This would mean in turn that we would have to redefine Info-Gap as a Maximin methodology that is designed for decision-making under very mild uncertainty. However, this would fly in the face of Ben-Haim's (2007, p. 2) position:

    Info-gap theory is useful precisely in those situations where our best models and data are highly uncertain, especially when the horizon of uncertainty is unknown. In contrast, if we have good understanding of the system then we don't need info-gap theory, and can use probability theory or even completely deterministic models. It is when we face severe Knightian uncertainty that we need info-gap theory.

    In a word, to fix Info-Gap we would have to remove "severe" from "Severe Uncertainty", but in so doing we would remove Info-Gap's raison d'être.


  • FAQ-48: Does Info-Gap decision theory deserve the level of attention that you give it?

    Answer-48: Indeed it does.

    It is important to bring into full view and in the greatest detail the grave errors that afflict this theory, precisely because it is so woefully flawed. In other words, it is important to make it patently clear that this theory is flawed beyond repair.

    This is important because my experience has shown that Info-Gap's profuse rhetoric seems to have an irresistible lure so that analysts/scholars who come to it from other disciplines (for instance, applied ecology and conservation biology, finance etc.) fall for its grand declarations and promises, without realizing that the facts that are buried under this rhetoric bespeak a different story altogether.

    To appreciate this point and to illustrate my concern, consider the following recommendation quoted from a paper authored by nine senior scientists from four countries and published in the journal Ecological Modelling (Moilanen et al 2006, p. 124):

    " ... In summary, we recommend info-gap uncertainty analysis as a standard practice in computational reserve planning. The need for robust reserve plans may change the way biological data are interpreted. It also may change the way reserve selection results are evaluated, interpreted and communicated. Information-gap decision theory provides a standardized methodological framework in which implementing reserve selection uncertainty analyses is relatively straightforward. We believe that alternative planning methods that consider robustness to model and data error should be preferred whenever models are based on uncertain data, which is probably the case with nearly all data sets used in reserve planning. ..."

    I am particularly worried that PhD students and young researchers with minimal or no knowledge of Decision Theory may fall for the Info-Gap rhetoric and make it part of their research work.

    I should also point out that my intensive investigation of Info-Gap is part of the research I am conducting in preparation for my forthcoming book provisionally entitled The Rise and Rise of Voodoo Decision Theories, where Info-Gap features as a classic example of such theories.

    And finally, my work on Info-Gap is of course part of my Info-Gap Campaign to contain the spread of Info-Gap decision theory in Australia.


  • FAQ-49: Are there any Info-Gap software packages available?

    Answer-49: I am not familiar with any such software packages.

    However, I am aware of the software package called Zonation. It is promoted as a decision support tool for spatial conservation prioritization in the framework of large-scale conservation planning. One of its key features is that its uncertainty analysis is based on Info-Gap decision theory:

    " ... In Zonation, uncertainty analysis has been implemented according to a convenient formulation that uses information-gap decision theory (see Ben-Haim 2006). ... "

    Zonation User Manual Version 2, (2008, p. 32)

    Apparently users of this package are not required to use Info-Gap's robustness model, so in this sense Zonation is not an Info-Gap package.

    Remark:
    I know for a fact that Zonation's developer, Atte Moilanen, is aware that Info-Gap's robustness model is a Maximin model and that it is unsuitable for decision-making under severe uncertainty.

    These facts should therefore be mentioned/discussed in the Zonation User Manual. The reason that it is important that these facts be made explicit in the package is that the literature on the application of Info-Gap decision theory in applied ecology and conservation biology lays great stress on the fact that the uncertainty under consideration is severe.

    FAQ-50 is also relevant to this discussion.


  • FAQ-50: Are there any general purpose solution methods for the generic problem posed by Info-Gap's decision-making model?

    Answer-50: No, and it is extremely unlikely that such methods can be developed.

    The solution of the optimization problem posed by Info-Gap's generic decision-making model

    α(û) := max_{d∈D} max {α ≥ 0: r(d,u) ≥ r*, ∀u∈U(α,û)}

    is a problem-specific task.

    It all depends on the specifications of the objects D, r, and U(α,û). In some cases it is very easy to solve the problem posed by the model, in others it is extremely difficult.

    This explains why no general purpose software package for solving the optimization problem posed by Info-Gap's decision-making model is available (see FAQ-49).

    The two Info-Gap books (Ben-Haim 2001, 2006) pay no attention whatsoever to this important aspect of the theory.

    Remark
    By completely divorcing itself from Optimization Theory, Info-Gap decision theory does not take advantage of methods and techniques that have been developed over the past 50 years for the solution of a variety of Maximin problems of this type.

    What a pity!

    Also see FAQ-51


  • FAQ-51: From the standpoint of a Maximin algorithm, isn't the decision-variable α just a nuisance that can be handled by a line search?

    Answer-51: Indeed it is.

    In other words, consider Info-Gap's generic decision-making model:

    α(û) := max_{d∈D} max {α ≥ 0: r(d,u) ≥ r*, ∀u∈U(α,û)}

    and assume that for each α≥0 it is easy to solve the following Maximin problem:

    Problem P(α,û):       ρ(α,û) := max_{d∈D} min_{u∈U(α,û)} r(d,u)

    Then Info-Gap's decision-making problem would be solved by a search for the largest value of α for which ρ(α,û) ≥ r*.

    Of course, sometimes it is possible to solve the problem by simpler or more efficient methods, in which case this rather crude technique can be dispensed with.
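    The line search described above can be sketched as follows. The bisection assumes, as the answer does, that the inner Maximin problem P(α,û) is easy to solve (here it is supplied as a black-box function ρ) and that its value ρ(α,û) is non-increasing in α, since enlarging the region of uncertainty can only hurt the worst case. The one-dimensional example at the bottom is hypothetical.

```python
def info_gap_robustness(rho, r_star, alpha_hi=1e6, tol=1e-9):
    """Largest alpha >= 0 (up to alpha_hi) with rho(alpha) >= r*,
    where rho(alpha) is the Maximin value max_d min_{u in U(alpha,u_hat)} r(d,u),
    assumed non-increasing in alpha. Found by bisection (a line search)."""
    if rho(0.0) < r_star:
        return 0.0                      # even alpha = 0 fails the requirement
    lo, hi = 0.0, alpha_hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if rho(mid) >= r_star:
            lo = mid                    # requirement still met: push alpha up
        else:
            hi = mid                    # requirement violated: pull alpha down
    return lo

# Hypothetical 1-D illustration: r(d,u) = -|u|, U(alpha,u_hat) = [-alpha, alpha],
# hence rho(alpha) = -alpha; with r* = -2 the robustness is alpha = 2.
alpha = info_gap_robustness(lambda a: -a, -2.0)
```

    The sketch makes the algorithmic point of the answer plain: once the inner Maximin solver exists, α is indeed handled by a routine line search.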


  • FAQ-52: What is your favorite example of an Info-Gap model/analysis in action?

    Answer-52: I can cite a number of examples from the Info-Gap literature, illustrating how the flawed Info-Gap approach is applied to problems subject to severe uncertainty.

    But it is difficult to pick a "favorite". When I decide, I'll discuss it here.

    One of the major contenders at present is a rather simple Info-Gap model whose ingredients are as follows:

    • Decision space: D = {d',d''}.

    • Complete region of uncertainty: U=ℜ+:=[0,∞).

    • Regions of uncertainty: U(α,û):= {u∈U: |u-û| ≤ α}, α≥0, where û >> 0.

    • Robustness:

      • α(d',û):= max {α≥0: u ≤ r' , ∀u∈U(α,û)}, if û ≤ r' and α(d',û):= 0 otherwise.

      • α(d'',û):= max {α≥0: u ≥ r'' , ∀u∈U(α,û)}, if û ≥ r'' and α(d'',û):= 0 otherwise.

      where r' and r'' are given positive numbers such that r'' ≥ r' > 0.

    It is straightforward to show that

    • α(d',û) = r' - û , if û ≤ r'.

    • α(d'',û) = û - r'' , if û ≥ r''.

    This means that we select decision d' if û ≤ r' and we select decision d'' if û ≥ r''. We "pass", namely do not select any decision, if r' < û < r''.
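    The robustness expressions and the resulting decision rule are simple enough to transcribe directly. The following sketch does just that; the numeric values of r', r'' and û at the end are hypothetical, chosen only to exercise the three branches of the rule.

```python
def robustness(u_hat, r1, r2):
    """Info-Gap robustness of d' and d'' for this model, with
    r1 = r', r2 = r'' (r2 >= r1 > 0) and u_hat the estimate of u."""
    alpha_d1 = r1 - u_hat if u_hat <= r1 else 0.0   # alpha(d', u_hat)
    alpha_d2 = u_hat - r2 if u_hat >= r2 else 0.0   # alpha(d'', u_hat)
    return alpha_d1, alpha_d2

def decide(u_hat, r1, r2):
    """Select the decision with positive robustness, else 'pass'."""
    if u_hat <= r1:
        return "d'"
    if u_hat >= r2:
        return "d''"
    return "pass"

# Hypothetical values: r' = 1, r'' = 2, estimates 0.5, 1.5, 3.0.
choices = [decide(u, 1.0, 2.0) for u in (0.5, 1.5, 3.0)]
# Mirrors the rule derived above: select d', pass, select d''.
```

    Observe that the entire analysis turns on where the estimate û falls, which is the crux of the criticism that follows.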

    The great merit of this rather simple model --- according to Info-Gap experts --- is that it provides a reliable methodology for tackling a variety of very important practical forecasting problems that are subject to severe (true Knightian) uncertainty. That is, forecasting problems where the estimate û is unreliable due to the severe uncertainty in the true value of the parameter of interest.

    You'll appreciate this point more fully when you learn what practical problems this model is said to solve. For the time being, let us keep the discussion abstract and describe the contention made by this Info-Gap model. To begin with let us be clear on the main outlines of the problem that this model takes on:

    • You have to determine which decision to select from D = {d',d''}, if any.

    • Your best choice hinges on the value of some parameter u whose true value, call it u*, is subject to severe (true Knightian) uncertainty.

    • The best strategy under certainty (assuming that we know the value of u*) is as follows:

      • If u* ≤ r' it is best to select d'.

      • If u* ≥ r'' it is best to select d''.

      • If r' < u* < r'' it is best to "pass".

    • You have a highly unreliable estimate of u*, call it û.

    • What should you do? Should you select d', d'' or "pass"?

    Info-Gap's robustness analysis instructs you to adopt the following strategy, under which the choice between d', d'' and "pass" depends on the relationship between the estimate û and the values of r' and r'':

    • If û ≤ r', then select d'.

    • If û ≥ r'' then select d''.

    • If r' < û < r'', then "pass".

    To give you better insight into the working of the Info-Gap methodology under consideration, let us compare its results to those yielded by an analysis conducted under Certainty. The picture is this:

    Best Strategy under Certainty

    • If u* ≤ r' it is best to select d'.

    • If u* ≥ r'' it is best to select d''.

    • If r' < u* < r'' it is best to "pass".

    Info-Gap's Strategy under Severe Uncertainty

    • If û ≤ r', then select d'.

    • If û ≥ r'' then select d''.

    • If r' < û < r'', then "pass".

    As you can clearly see, Info-Gap's strategy under severe uncertainty is identical to the best strategy under certainty except that it uses the estimate û instead of the true (unknown) value u*.

    So, the question obviously arises:

    What is the big idea of using the Info-Gap methodology, given that all this methodology does here is replace the true value u* by its estimate û in the formulation of the best strategy under certainty?

    Why don't we simply say: assume that there is no uncertainty and pretend that u* is equal to û?

    But there is a far more serious question that the proposed methodology gives rise to:

    Given that the estimate û is highly unreliable, what guarantee is there that the proposed methodology is reliable?

    That is, aren't we dealing here with a methodology that is based on the following unreliable premise?

    highly unreliable estimate ---->     Model     ----> reliable robust decision

    The short answer is: Yes, of course we are! The proposed methodology is only as reliable as the estimate û.

    In any event, according to these Info-Gap experts, using this methodology would enable us to generate reliable decisions now for gold trading (d'="sell", d''="buy", u = price of gold) in the middle of next year! All we need for this purpose is an estimate (û) of the price of gold in the middle of next year. And the good news is that a rough, unreliable estimate will do!

    Personal Note:
    Had I had in my possession a reliable methodology for gold trading in the middle of next year, I would not have been writing this paragraph right now. Indeed, I would have kept it top secret.

    To be continued ...


  • FAQ-53: Will ardent Info-Gap proponents ever concede that Info-Gap's robustness and decision-making models are Maximin models?

    Answer-53: Extremely unlikely.

    For some years now -- at least since 1999 -- the thesis that Info-Gap's robustness model is not a Maximin model has been a fixture in the Info-Gap literature. This thesis has been part of the broader thesis that Info-gap is a novel/radical approach to decision making under uncertainty. So understandably, an admission to the contrary is no simple matter.

    For consider: what would a concession that Info-Gap's robustness model is a Maximin model amount to?

    Such a concession would completely demolish Info-Gap's basic thesis that it is a new theory, substantially different from all current theories for decision under uncertainty.

    More importantly, it would call into question the validity of Info-Gap's many other (unsubstantiated) claims and, by association, the validity of the entire Info-Gap enterprise.

    In short, there is a lot at stake here for Info-Gap decision theory: The debate on the Maximin/ Info-Gap connection is not just a technical discussion.

    It is not surprising, therefore, that the stance adopted in the Info-Gap literature is that of adhering to the Seventh Natural Law of Operations Analysis (see FAQ-54), namely that it is better to extend an error than to admit a mistake.

    In other words, the Info-Gap literature remains oblivious of the following clear-cut result:

    Maximin Theorem
    (Sniedovich 2006b, 2007a, 2008a, 2008b, 2008c), WIKIPEDIA

    α(d,û) := max {α ≥ 0 : r(d,u) ≥ r*, ∀u∈U(α,û)} ≡ max_{α≥0} min_{u∈U(α,û)} α·(r(d,u)◊r*)

    where a◊b := 1 if a ≥ b and a◊b := 0 otherwise.
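    The identity can be checked numerically by brute force on a grid. In the sketch below the performance function r(d,u) = d − u², the critical value r* = 0, the estimate û = 0 and the interval regions U(α,û) = [û−α, û+α] are all hypothetical choices made only for illustration:

    ```python
    # Grid check that Info-Gap's robustness of a single decision d equals
    # the maximin expression on the right-hand side of the Maximin Theorem.
    # Toy instance (all choices hypothetical): r(d,u) = d - u**2, r* = 0.
    u_hat, r_star, d = 0.0, 0.0, 4.0

    def r(d, u):
        return d - u * u

    def region(alpha, n=201):
        """Grid over U(alpha, u_hat) = [u_hat - alpha, u_hat + alpha]."""
        return [u_hat - alpha + 2 * alpha * i / (n - 1) for i in range(n)]

    alphas = [i / 100 for i in range(501)]  # candidate horizons 0.00 .. 5.00

    # LHS: max{alpha : r(d,u) >= r* for all u in U(alpha, u_hat)}
    lhs = max(a for a in alphas if all(r(d, u) >= r_star for u in region(a)))

    # RHS: max over alpha of min over u in U(alpha, u_hat) of alpha*(r(d,u) ◊ r*)
    rhs = max(min(a * (1 if r(d, u) >= r_star else 0) for u in region(a))
              for a in alphas)

    assert lhs == rhs  # both equal 2.0 here, since d - u^2 >= 0 iff |u| <= 2
    ```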

    Instead, the opposite (erroneous) idea, namely that Info-Gap's robustness model is not a Maximin model, is being promoted.

    But, as I have shown in Sniedovich (2008a), and discussed in FAQ-20, this misguided effort ends in a huge mathematical failure. For not only is the "result" wrong, the arguments on which this "result" is based exhibit grave misconceptions regarding the modeling of Wald's Maximin paradigm and worst-case analysis in general.

    In other words, rather than take the bull by the horns and deal directly with the basic fact (Maximin Theorem) for what it is -- a mathematical statement about the relationship between two simple mathematical models -- a lengthy scholastic argument is put forward in support of an array of erroneous contentions.

    But what is the outcome of this scholastic argument?

    For one thing, it brings into full view, indeed exacerbates, the flaws in the arguments.

    Consider for instance Ben-Haim's (2008) latest essay on the "differences/similarities" between Info-Gap robustness and Maximin. The technical and conceptual errors revealed here are so grave that I will have to take up some of them in subsequent FAQs.

    To give you an idea of what I mean, consider the following:

    " ... Info-gap robust-satisficing is motivated by the same perception of uncertainty which motivates the min-max class of strategies: lack of reliable probability distributions and the potential for severe and extreme events. We will see that the robust-satisficing decision will sometimes coincide with a min-max decision. On the other hand we will identify some fundamental distinctions between the min-max and the robust-satisficing strategies and we will see that they do not always lead to the same decision.

    First of all, if a worst case or maximal uncertainty is unknown, then the min-max strategy cannot be implemented. That is, the min-max approach requires a specific piece of knowledge about the real world: "What is the greatest possible error of the analyst's model?". This is an ontological question: relating to the state of the real world. In contrast, the robust-satisficing strategy does not require knowledge of the greatest possible error of the analyst's model. The robust-satisficing strategy centers on the vulnerability of the analyst's knowledge by asking: "How wrong can the analyst be, and the decision still yields acceptable outcomes?" The answer to this question reveals nothing about how wrong the analyst in fact is or could be. The answer to this question is the info-gap robustness function, while the true maximal error may or may not exceed the info-gap robustness. This is an epistemic question, relating to the analyst's knowledge, positing nothing about how good that knowledge actually is. The epistemic question relates to the analyst's knowledge, while the ontological question relates to the relation between that knowledge and the state of the world. In summary, knowledge of a worst case is necessary for the min-max approach, but not necessary for the robust-satisficing approach.

    The second consideration is that the min-max approaches depend on what tends to be the least reliable part of our knowledge about the uncertainty. Under Knightian uncertainty we do not know the probability distribution of the uncertain entities. We may be unsure what are typical occurrences, and the systematics of extreme events are even less clear. Nonetheless the min-max decision hinges on ameliorating what is supposed to be a worst case. This supposition may be substantially wrong, so the min-max strategy may be mis-directed.

    A third point of comparison is that min-max aims to ameliorate a worst case, without worrying about whether an adequate or required outcome is achieved. This strategy is motivated by severe uncertainty which suggests that catastrophic outcomes are possible, in conjunction with a precautionary attitude which stresses preventing disaster. The robust-satisficing strategy acknowledges unbounded uncertainty, but also incorporates the outcome requirements of the analyst. The choice between the two strategies -- min-max and robust-satisficing -- hinges on the priorities and preferences of the analyst.

    The fourth distinction between the min-max and robust-satisficing approaches is that they need not lead to the same decision, even starting with the same information. ..."

    Ben-Haim (2008, p. 9)

    A similar misguided discussion appears in Ben-Haim and Demertzis (2008, pp. 17-18).

    The point to note first of all is that nowhere in this thesis is there any reference to the simple fact

    Maximin Theorem
    (Sniedovich 2006b, 2007a, 2008a, 2008b, 2008c), WIKIPEDIA

    α(d,û) := max {α ≥ 0 : r(d,u) ≥ r*, ∀u∈U(α,û)} ≡ max_{α≥0} min_{u∈U(α,û)} α·(r(d,u)◊r*)

    Equally disturbing is the explanation of the difference between Info-Gap and Maximin. This explanation demonstrates a profound lack of understanding and appreciation of the expressive power of Wald's Maximin model. For example, the claim that

    "... In summary, knowledge of a worst case is necessary for the min-max approach, but not necessary for the robust-satisficing approach. ..."

    is most revealing.

    For not only is a priori knowledge of the worst case not necessary for the implementation of the Minimax model, it is precisely the task of the implementation of the Minimax model to "find" the best worst case.

    For example, consider the following classic Minimax model, exhibiting one of the most famous saddle points on Planet Earth:

    p := min_{x∈ℜ} max_{y∈ℜ} (x² − y²)

    where ℜ denotes the real line.

    The implementation of this Minimax model yields the optimal solution, namely the best worst case with respect to the min player. It is the saddle point (x,y) = (0,0), yielding p = 0. Note that the objective function here, namely the function f = f(x,y) defined by f(x,y) = x² − y², is unbounded on ℜ².
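    The point is easily verified numerically; the grid below is a crude hypothetical discretization of ℜ, used only for illustration:

    ```python
    # Numeric illustration that the min-max analysis itself locates the
    # worst case -- no prior knowledge of it is needed. f(x,y) = x**2 - y**2
    # has its saddle point at (0,0) with value p = 0, even though f is
    # unbounded. The grid over [-5, 5] is a crude stand-in for the real line.
    def f(x, y):
        return x * x - y * y

    grid = [i / 10 for i in range(-50, 51)]

    # Inner step: the worst case (for the min player) at each x ...
    worst = {x: max(f(x, y) for y in grid) for x in grid}
    # ... outer step: minimize over x.
    best_x = min(worst, key=worst.get)
    p = worst[best_x]

    assert (best_x, p) == (0.0, 0.0)  # the saddle point "found" by the analysis
    ```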

    So much then for the big idea that " ... knowledge of a worst case is necessary for the min-max approach ..."

    To sum up:
    Not only are Ben-Haim's conclusions wrong, the arguments on which they rest are woefully misguided. In any event, the question of the relationship between Wald's Maximin model and Info-Gap's robustness and decision-making models is a technical one and must therefore be treated as such. No amount of scholastic rhetoric can change this basic fact:

    Maximin Theorem
    (Sniedovich 2006b, 2007a, 2008a, 2008b, 2008c), WIKIPEDIA

    α(d,û) := max {α ≥ 0 : r(d,u) ≥ r*, ∀u∈U(α,û)} ≡ max_{α≥0} min_{u∈U(α,û)} α·(r(d,u)◊r*)

    More details concerning the errors in Info-Gap's rhetoric on the relationship between Wald's Maximin model and Info-Gap's robustness model can be found in Sniedovich (2007a, 2008a, 2008b, 2008c).


  • FAQ-54: Aren't your language and tone too cynical?

    Answer-54: I don't think so.

    For almost thirty years a copy of the following Ten Laws hung in my office. I believe that this list is extremely relevant to the present discussion.

    The Ten Natural Laws of Operations Analysis

    Bob Bedow
    DELEX Systems, INC.
    8150 Leesburg Pike
    Vienna, VA 22180

    1. Ignore the problem and go immediately to the solution, that is where the profit lies.

    2. There are no small problems only small budgets.

    3. Names are control variables.

    4. Clarity of presentation leads to aptness of critique.

    5. Invention of the wheel is always on the direct path of a cost plus contract.

    6. Undesirable results stem only from bad analysis.

    7. It is better to extend an error than to admit to a mistake.

    8. Progress is a function of the assumed reference system.

    9. Rigorous solutions to assumed problems are easier to sell than assumed solutions to rigorous problems.

    10. In desperation address the problems.

    Source: Interfaces 7(3), p. 122, 1979.

    You can figure out for yourself which items on this list apply to the case of Info-Gap.


  • FAQ-55: Don't Info-Gap proponents essentially argue that "anything goes" under severe uncertainty?

    Answer-55: Indeed, they do.

    This position is a direct consequence of their broader view as to what constitutes a scientific approach to decision-making. For instance, consider this statement in the Info-Gap article in WIKIPEDIA:

    " ... It is correct that the info-gap robustness function is local, and has restricted quantitative value in some cases. However, a major purpose of decision analysis is to provide focus for subjective judgments. That is, regardless of the formal analysis, a framework for discussion is provided. Without entering into any particular framework, or characteristics of frameworks in general, discussion follows about proposals for such frameworks. ..."

    In Cole Porter's (1891-1964) memorable phrase, Anything Goes!

    In other words, under severe uncertainty the estimate is indeed a wild guess; therefore, the result of a robustness analysis in the neighborhood of this estimate is unlikely to represent a decision's robustness relative to the complete region of uncertainty.

    So what?

    Anything Goes!

    Anything Goes (Cole Porter, 1934)
    Times have changed,
    And we've often rewound the clock,
    Since the Puritans got a shock,
    When they landed on Plymouth Rock.
    If today,
    Any shock they should try to stem,
    'Stead of landing on Plymouth Rock,
    Plymouth Rock would land on them.

    In olden days a glimpse of stocking,
    Was looked on as something shocking,
    But now, God knows,
    Anything Goes.

    Good authors too who once knew better words,
    Now only use four letter words
    Writing prose,
    Anything Goes.

    The world has gone mad today
    And good's bad today,
    And black's white today,
    And day's night today,
    When most guys today
    That women prize today
    Are just silly gigolos
    And though I'm not a great romancer
    I know that I'm bound to answer
    When you propose,
    Anything goes

    When grandmama whose age is eighty
    In night clubs is getting matey with gigolo's,
    Anything Goes.

    When mothers pack and leave poor father
    Because they decide they'd rather be tennis pros,
    Anything Goes.
    If driving fast cars you like,
    If low bars you like,
    If old hymns you like,
    If bare limbs you like,
    If Mae West you like
    Or me undressed you like,
    Why, nobody will oppose!
    When every night,
    The set that's smart
    Is intruding in nudist parties in studios,
    Anything Goes.

    The world has gone mad today
    And good's bad today,
    And black's white today,
    And day's night today,
    When most guys today
    That women prize today
    Are just silly gigolos
    And though I'm not a great romancer
    I know that I'm bound to answer
    When you propose,
    Anything goes

    If saying your prayers you like,
    If green pears you like
    If old chairs you like,
    If back stairs you like,
    If love affairs you like
    With young bears you like,
    Why nobody will oppose!

    And though I'm not a great romancer
    And though I'm not a great romancer
    I know that I'm bound to answer
    When you propose,
    Anything goes...
    Anything goes!


  • FAQ-56: In what sense is Wald's Maximin model much more powerful and general than Info-Gap's decision-making model?

    Answer-56: The power of Wald's famous Maximin model is in its versatility. It gives the analyst the freedom to specify the objective function and the uncertainty space to faithfully reflect the problem's requirements.

    This versatility is evident from the ease with which it expresses Info-Gap's decision-making model, whose two distinguishing characteristics are:

    • The objective function represents Info-Gap's performance requirement r(d,u)≥r*.

    • The uncertainty space represents Info-Gap's regions of uncertainty U(α,û),α≥0.

    So borrowing Annie's inimitable line to Frank, Wald's Maximin model would contend: Anything you can do I can do better!

    The picture is this:

    General Maximin Model

    v* := max_{x∈X} min_{s∈S(x)} f(x,s)


    Info-Gap's Decision-Making Model

    z* := max_{d∈D, α≥0} min_{u∈U(α,û)} α·(r(d,u)◊r*)

    Anything You Can Do (Irving Berlin, 1888 - 1989)

    ANNIE: Anything you can do I can do better
    ......I can do anything better than you
    FRANK: No, you can't
    ANNIE: Yes, I can
    FRANK: No, you can't
    ANNIE: Yes, I can
    FRANK: No, you can't
    ANNIE: Yes, I can, yes, I can
    FRANK: Anything you can be I can be greater
    ......Sooner or later I'm greater than you
    ANNIE: No, you're not
    FRANK: Yes, I am
    ANNIE: No, you're not
    FRANK: Yes, I am
    ANNIE: No, you're not
    FRANK: Yes, I am, yes I am
    FRANK: I can shoot a partridge with a single cartridge
    ANNIE: I can get a sparrow with a bow and arrow
    FRANK: I can live on bread and cheese
    ANNIE: And only on that?
    FRANK: Yes
    ANNIE: So can a rat
    FRANK: Any note you can reach I can go higher
    ANNIE: I can sing anything higher than you
    FRANK: No, you can't
    ANNIE: Yes, I can
    FRANK: No, you can't
    ANNIE: Yes, I can
    FRANK: No, you can't
    ANNIE: Yes, I can
    FRANK: No, you can't
    ANNIE: Yes, I can
    FRANK: No, you can't
    ANNIE: Yes, I can
    ANNIE: Anything you can buy I can buy cheaper
    ......I can buy anything cheaper than you
    FRANK: Fifty cents
    ANNIE: Forty cents
    FRANK: Thirty cents
    ANNIE: Twenty cents
    FRANK: No, you can't
    ANNIE: Yes, I can, yes, I can
    FRANK: Anything you can say I can say softer
    ANNIE: I can say anything softer than you
    FRANK: No, you can't
    ANNIE: Yes, I can
    FRANK: No, you can't
    ANNIE: Yes, I can
    FRANK: No, you can't
    ANNIE: Yes, I can, yes, I can
    FRANK: I can drink my liquor faster than a flicker
    ANNIE: I can do it quicker and get even sicker
    FRANK: I can open any safe
    ANNIE: Without being caught?
    FRANK: Sure
    ANNIE: That's what I thought (you crook)
    FRANK: Any note you can hold I can hold longer
    ANNIE: I can hold any note longer than you
    FRANK: No, you can't
    ANNIE: Yes, I can
    FRANK: No, you can't
    ANNIE: Yes, I can
    FRANK: No, you can't
    ANNIE: Yes, I can, yes, I can
    FRANK: No, you can't - yes, you can
    ANNIE: Anything you can wear I can wear better
    ......In what you wear I'd look better than you
    FRANK: In my coat
    ANNIE: In your vest
    FRANK: In my shoes
    ANNIE: In your hat
    FRANK: No, you can't
    ANNIE: Yes, I can, yes, I can
    FRANK: Anything you can say I can say faster
    ANNIE: I can say anything faster than you
    FRANK: Noyoucan't
    ANNIE: YesIcan
    FRANK: Noyoucan't
    ANNIE: YesIcan
    FRANK: Noyoucan't
    ANNIE: YesIcan
    FRANK: Noyoucan't
    ANNIE: YesIcan
    FRANK: I can jump a hurdle
    ANNIE: I can wear a girdle
    FRANK: I can knit a sweater
    ANNIE: I can fill it better
    FRANK: I can do most anything
    ANNIE: Can you bake a pie?
    FRANK: No
    ANNIE: Neither can I
    FRANK: Anything you can sing I can sing sweeter
    ANNIE: I can sing anything sweeter than you
    FRANK: No, you can't
    ANNIE: Yes, I can
    FRANK: No, you can't
    ANNIE: Oh, yes, I can
    FRANK: No, you can't
    ANNIE: Yes, I can
    FRANK: No, you can't
    ANNIE: Yes, I can
    FRANK: No, you can't, can't, can't
    ANNIE: Yes, I can, can, can, can
    FRANK: No, you can't
    ANNIE: Yes, I can

    Observe that the Maximin model imposes no structure whatsoever on its state spaces S(x), x∈X, whereas Info-Gap imposes the nesting property on its regions of uncertainty U(α,û), α≥0. Similarly, the Maximin model does not impose any structure on its objective function f = f(x,s), whereas Info-Gap induces a rather specific objective function, namely g(d,α,u) = α·(r(d,u)◊r*). Clearly, the Maximin framework offers incomparably greater modeling flexibility.
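    The subsumption can be made concrete with a generic brute-force Maximin solver that is then instantiated, Info-Gap style, with x = (d,α), S(x) = U(α,û) and f(x,u) = α·(r(d,u)◊r*). The decisions, performance function and grids below are hypothetical toy data:

    ```python
    # Generic Maximin: the x attaining max over X of min over s in S(x) of f(x, s).
    def maximin(X, S, f):
        return max(X, key=lambda x: min(f(x, s) for s in S(x)))

    # Info-Gap instance (toy data): x = (d, alpha), S(x) = U(alpha, u_hat),
    # f(x, u) = alpha * (1 if r(d, u) >= r_star else 0).
    u_hat, r_star = 0.0, 0.0

    def r(d, u):
        return d - abs(u)                       # hypothetical performance function

    decisions = [1.0, 3.0]
    alphas = [i / 10 for i in range(51)]        # horizons 0.0 .. 5.0
    u_grid = [i / 10 for i in range(-50, 51)]   # discretized uncertainty

    X = [(d, a) for d in decisions for a in alphas]
    S = lambda x: [u for u in u_grid if abs(u - u_hat) <= x[1]]
    f = lambda x, u: x[1] * (1 if r(x[0], u) >= r_star else 0)

    d_best, alpha_best = maximin(X, S, f)
    assert (d_best, alpha_best) == (3.0, 3.0)   # d=3 tolerates |u| up to 3
    ```

    Nothing Info-Gap-specific was needed in `maximin` itself; the Info-Gap structure enters only through the chosen X, S and f.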

    In sum, this is another way of saying that Info-Gap's robustness and decision-making models are no more than specific instances of the mighty Maximin (see Sniedovich 2008c).

    See FAQ-18 for more details on the relationship between Info-Gap's decision-making model and Wald's Maximin model.


  • FAQ-57: Do you plan to compile these FAQs into a book?

    Answer-57: Yes I do.

    My plan is to publish a book, tentatively entitled The Rise and Rise of Voodoo Decision Theories, based on my Info-Gap experience over the past five years.


  • FAQ-58: On what grounds is it claimed that Info-Gap's robustness analysis and a Maximin analysis may yield different results?

    Answer-58: Such erroneous claims -- which as we have seen, are advanced in support of the denial that Info-Gap's robustness model is an instance of Wald's Maximin model -- are based on a woefully misguided comparison of apples with oranges.

    The point is this. Given that Wald's Maximin model is a general prototype model, it necessarily subsumes countless special cases and instances, one of which is Ben-Haim's Info-Gap model. So, when setting out to compare Info-Gap's model with Wald's Maximin model, one's first obligation is to make sure that the instance of Wald's Maximin model chosen for this purpose is appropriate (correct, right) for the comparison. But this is precisely where Ben-Haim's error begins.

    Ben-Haim's (2008, p. 9) claim

    " ... The fourth distinction between the min-max and robust-satisficing approaches is that they need not lead to the same decision, even starting with the same information. ..."

    is an unfortunate error resulting from the choice of a wrong instance of Wald's Maximin model.

    The correct instance of Wald's Maximin model is spelled out clearly by

    Maximin Theorem
    (Sniedovich 2006b, 2007a, 2008a, 2008b, 2008c), WIKIPEDIA

    α(d,û) := max {α ≥ 0 : r(d,u) ≥ r*, ∀u∈U(α,û)} ≡ max_{α≥0} min_{u∈U(α,û)} α·(r(d,u)◊r*)

    That is,

    max_{d∈D} max {α ≥ 0 : r(d,u) ≥ r*, ∀u∈U(α,û)} ≡ max_{d∈D, α≥0} min_{u∈U(α,û)} α·(r(d,u)◊r*)

    Hence, (d,α) is an optimal solution to Info-Gap's decision-making problem if and only if this pair is also an optimal solution to the max player in the Maximin model spelled out above. In short, if you construct the Maximin model according to the recipe spelled out by the Maximin Theorem, then Info-Gap and Maximin will yield the same optimal decisions.

    To better understand how Ben-Haim's flawed argument works, consider the following presumably "challenging" argument that Ben-Haim's line of reasoning would use to "prove" that the generic linear equation bx + c = 0 is not an instance of the generic quadratic equation Ax² + Bx + C = 0:

    A linear equation and a quadratic equation may yield different solutions: for instance, 2x − 1 = 0 yields x = 1/2, whereas 2x − 1 − x² = 0 yields x = 1.

    Does this mean that the generic linear equation bx + c = 0 is not an instance of the generic quadratic equation Ax² + Bx + C = 0?

    Of course not!

    The correspondence requires setting A = 0, B = b, and C = c. If you erroneously set A = 1, then the blame for the resulting mess is on you!

    In other words, to properly model the generic linear equation as an instance of the generic quadratic equation you have to make sure that you use the correct instance of the latter. Likewise, to properly formulate Info-Gap's decision-making model as a Maximin model, you have to make sure that you use the correct instance (e.g. the instance specified by the Maximin Theorem) of the latter.
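    The analogy is trivial to act out in code; the solver below is a plain textbook quadratic/linear solver written for this illustration:

    ```python
    import math

    def solve(A, B, C):
        """Real roots of A*x**2 + B*x + C = 0; the case A == 0 is the
        linear instance b*x + c = 0."""
        if A == 0:
            return [-C / B]
        disc = B * B - 4 * A * C
        if disc < 0:
            return []
        return sorted({(-B - math.sqrt(disc)) / (2 * A),
                       (-B + math.sqrt(disc)) / (2 * A)})

    # The correct instance A=0, B=2, C=-1 reproduces 2x - 1 = 0 ...
    assert solve(0, 2, -1) == [0.5]
    # ... while the *wrong* instance (A=-1) is simply a different equation:
    assert solve(-1, 2, -1) == [1.0]
    ```

    Blaming the quadratic solver for the second result would be as misguided as blaming Wald's Maximin model for the behavior of a wrongly chosen instance.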

    As explained in FAQ-20, Ben-Haim's error is that instead of using the Maximin model stipulated by the Maximin Theorem, he uses (e.g. Davidovitch and Ben-Haim 2008) an inappropriate Maximin formulation.

    The full story is as follows:

    Info-Gap's decision-making model
    α(û) = max_{d∈D} max {α ≥ 0 : r(d,u) ≥ r*, ∀u∈U(α,û)}

    Ben-Haim's Wrong Maximin model

    α(û;α') := max_{d∈D} min_{u∈U(α',û)} r(d,u)

    Note: α' is a prespecified positive number.

    Correct Maximin model

    α(û) := max_{d∈D, α≥0} min_{u∈U(α,û)} α·(r(d,u)◊r*)

    Note: a◊b := 1 if a ≥ b; a◊b := 0 otherwise.

    The correct Maximin model and Info-Gap's decision-making model are equivalent and both yield the same optimal solutions.


  • FAQ-59: Why do Info-Gap proponents persist in using the wrong Maximin formulation in their analysis of the relationship between Maximin and Info-Gap?

    Answer-59: I have no explanation for this.

    Since they are fully aware of the existence of the correct Maximin formulation and they do not question its validity as a proper representation of Info-Gap's decision-making model, it is indeed baffling.

    It would appear that this saga continues because Info-Gap proponents try to maintain the myth that Info-Gap's robustness and decision-making models are not Maximin models. Perhaps.


  • FAQ-60: What PROBLEM does Info-Gap's decision-making MODEL actually represent?

    Answer-60: This important FAQ brings to light one of Info-Gap's greatest methodological failings: it is actually unclear what problem the Info-Gap methodology grapples with. What problem does Info-Gap seek to solve?

    The point is this. When we conduct an investigation of a problem, be it for purely theoretical purposes, for practical purposes, or for a combination of both, a crucial activity in this effort is the "mathematical modeling" of the problem concerned. The accepted practice in this activity is to distinguish between the problem that we aim to analyze/solve and the model that we use for this purpose. The reason for this distinction is simple: the same problem can usually be described and analyzed by more than one model.

    For example, consider the famous Traveling Salesman Problem (TSP). This problem can be formulated in various ways: by an Integer Programming (IP) model and by a dynamic programming (DP) model. So, while these models are utterly different from one another, they nevertheless describe the same generic TSP problem.

    Now, in the case of Info-Gap, the distinction between the problem and the model is blurred, one might say non-existent. Confusion reigns in the Info-Gap literature as to what is the problem and what is the model. As a result, it is seldom clear which elements of a specific Info-Gap model are induced by the problem itself, and which are ingredients of the Info-Gap methodology, namely are not prescribed by the requirements of the problem itself. In a word, reading the Info-Gap literature one is left wondering what problem the Info-Gap methodology actually addresses.

    So, to be able to identify Info-Gap's problem of concern we need to do some detective work. To this end we shall adopt another good practice from the "world of mathematical modeling", whereby a distinction is also drawn between two versions of the problem under consideration. That is, we shall distinguish between the problem under conditions of certainty and its counterpart under conditions of uncertainty. The latter version is viewed as the more complex counterpart of the former, the complication obviously arising from the uncertainty. With this as background, consider now the following:

    Info-Gap's generic decision-making model

    α(û) = max_{d∈D} max {α ≥ 0 : r(d,u) ≥ r*, ∀u∈U(α,û)}

    Of interest to us here are two problems (versions) that are associated with this model:

    • The problem represented by this model.

    • The simplified version of the problem where we know the true value of u.

    Setting the version postulating certainty against the version postulating uncertainty should give us insight into the complications caused by the underlying severe uncertainty. In other words, pitting one version against the other should enlighten us as to why the uncertainty is represented in Info-Gap's formulation of its problem of interest the way it is.

    Now, when one reads the Info-Gap literature with this juxtaposition in mind, it becomes abundantly clear that there is nothing in this literature to suggest that the idea of nesting the regions of uncertainty so as to "quantify" the uncertainty is dictated by any special needs of the problem in question. The problem itself calls only for a definition of a region of uncertainty, meaning that all it prescribes is the complete region of uncertainty U, and perhaps the estimate û of the true value of u. So clearly, the characteristic of nested regions of uncertainty is projected onto the problem entirely by the Info-Gap methodology.
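    For interval models of uncertainty the nesting property reads U(α₁,û) ⊆ U(α₂,û) whenever α₁ ≤ α₂. The snippet below illustrates it on a hypothetical grid:

    ```python
    # The nesting property that the Info-Gap *methodology* imposes on its
    # regions of uncertainty, illustrated for interval regions on a toy grid.
    # Nothing in the underlying problem mandates this structure.
    def U(alpha, u_hat, grid):
        return {u for u in grid if abs(u - u_hat) <= alpha}

    grid = [i / 10 for i in range(-100, 101)]
    regions = [U(a, 0.0, grid) for a in (1.0, 2.0, 5.0)]

    # U(1,0) ⊆ U(2,0) ⊆ U(5,0): larger horizons only ever add points.
    assert regions[0] <= regions[1] <= regions[2]
    ```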

    As for the formulation of the performance requirement in Info-Gap's model description, namely r(d,u) ≥ r*: reading the Info-Gap literature, it emerges that there is a good deal of ambiguity as to the exact role of this constraint in the framework. Indeed, the constraint can be understood to impose two different requirements:

    • It is desired (albeit not always possible) to satisfy this constraint over the assumed complete region of uncertainty.

    • It is desired that r(d,u) be as large as possible -- yet "robust" against uncertainty in the true value of u.

    It follows then that if we remove the uncertainty altogether -- by assuming that we know with certainty the true value of u, call it u* -- there are two problems to consider:

    Complete Certainty

    Optimization Problem: max {r(d,u*) : d∈D}

    Satisficing Problem: Find a d∈D such that r(d,u*) ≥ r*


    The question then is:

    What are the counterparts of these two problems if we assume that the value of u* is subject to severe uncertainty hence unknown?

    This is the question that Info-Gap decision theory is in fact supposed to address.

    But does it?

    Let us see.

    Suppose that when we move from the safety of the world of certainty to the obscure world of severe uncertainty, we nominate robustness to be our main concern so that consequently our goal is to select the most robust decision. Then the questions that we would seek to answer are these:

    • How should we define "robustness" in the framework of the Optimization Problem "max {r(d,u): d∈D}" where the true value of u is subject to severe uncertainty?

    • How should we define "robustness" in the framework of the Satisficing Problem "Find a d∈D such that r(d,u) ≥ r*" where the true value of u is subject to severe uncertainty?

    Info-Gap decision theory does not address the first question explicitly, and its answer to the second question is as follows:

    Info-Gap's generic Robustness Model

    α(d,û) = max {α ≥ 0 : r(d,u) ≥ r*, ∀u∈U(α,û)},  d∈D

    This, at best, is an answer to the wrong question.

    For it is important to stress that the conundrum that conditions of severe uncertainty present us with is not that of finding the decision that is most robust in the neighborhood of a particular value of u, be that the "best estimate" or an arbitrary value of u. The challenge that we are facing is: how do we go about deciding which decision is the most robust with respect to u given that we have no clue as to the true value of u?

    In short, the problem actually addressed by Info-Gap decision theory is this:

    What is the most robust decision for the Satisficing Problem "Find a d∈D such that r(d,u) ≥ r*" in the neighborhood of the estimate û of the true value of u?

    So, the point is that by a priori restricting the scope of robustness to the neighborhood of the estimate û, Info-Gap has already thrown severe uncertainty out of the window. Indeed, Info-Gap's definition of robustness is taken "as is" from Ben-Haim (1996), where the uncertainty is not assumed to be severe!


  • FAQ-61: How does Info-Gap decision theory distinguish between different levels of uncertainty?

    Answer-61: It does not.

    As brought out by the discussion in FAQ-13 through FAQ-17, the "localness" of Info-Gap's robustness analysis is due to the analysis focusing entirely on a point estimate û of the true value of the parameter of interest and its immediate neighborhood.

    But this single-minded concentration on the estimate and its immediate neighborhood in effect entails that the analysis does not discriminate between levels (severity) of the uncertainty. The same analysis is applied ad infinitum regardless of whether the uncertainty grows or diminishes, intensifies or lessens. This is illustrated in the picture below, where complete regions of uncertainty (U', U'', U''') of various sizes are shown, all centered at the same estimate û.

    where α'=α(û) +ε, for some ε>0.

    Note that the analysis remains unchanged despite the uncertainty intensifying - the region of uncertainty growing from U' to U'', to U''' and so on.

    To better appreciate this absurdity, assume that your complete region of uncertainty is, say, U = [-1000,1000] and that your "best" estimate is û = 0. Now, suppose that research and development enable a significant reduction in the size of the region of uncertainty, say from U to U' = [-500,500]. How will such a reduction affect the results of Info-Gap's robustness analysis?

    The answer is that so long as the "best estimate" remains the same and U contains U(α',û), the reduction in the size of U will have no impact whatsoever on Info-Gap's robustness analysis.
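    The invariance is easy to demonstrate. In the sketch below the performance function r(u) = 10 − u² and the requirement r* = 1 are hypothetical; the requirement fails exactly when |u| > 3, so the robustness computed outward from û = 0 is about 3 whether U is [-1000, 1000] or [-500, 500]:

    ```python
    # Info-Gap's robustness is computed from the estimate u_hat outward and
    # never consults the complete region of uncertainty U (as long as U
    # contains the critical neighborhood). Toy instance: r(u) = 10 - u**2,
    # r_star = 1, so the requirement r(u) >= r_star fails iff |u| > 3.
    u_hat, r_star = 0.0, 1.0

    def r(u):
        return 10 - u * u

    def robustness(U, step=0.001):
        """Largest alpha with r(u) >= r_star on [u_hat-alpha, u_hat+alpha] ∩ U.
        Since r decreases away from u_hat, checking the endpoints suffices."""
        lo, hi = U
        alpha = 0.0
        while True:
            nxt = alpha + step
            left, right = max(u_hat - nxt, lo), min(u_hat + nxt, hi)
            if r(left) < r_star or r(right) < r_star:
                return alpha
            alpha = nxt

    # Shrinking the complete region of uncertainty changes nothing:
    assert robustness((-1000.0, 1000.0)) == robustness((-500.0, 500.0))
    ```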

    This is the basis for my contention that Info-Gap decision theory does not in fact deal with the severity of the uncertainty, and for my defining it the way I do in FAQ-6.


  • FAQ-62: Isn't Info-Gap's uncertainty model superfluous in the one-dimensional case?

    Answer-62: Yes, it is.

    The primary function of Info-Gap's uncertainty model is to specify a one-dimensional measure of distance between the estimate û and other elements of U. So, to use it in the one-dimensional case -- that is, when U is an interval of the real line -- is utterly pointless, because |u-û| serves this purpose directly.

    Indeed, applying Info-Gap's uncertainty model to the one-dimensional case is not merely pointless, it is counter-productive, because it obscures what the analysis actually establishes. And yet, surprisingly, Info-Gap experts do apply Info-Gap's uncertainty model to the one-dimensional case.

    To see what I am driving at, note that if the performance function r is continuous in u, then the critical value of u -- the value beyond which the performance requirement is not satisfied -- is equal to one of the roots of the equation r(d,u) = r*, namely the root in U that is closest to û. If this equation has no roots in U, then either no element of U satisfies the performance requirement, or all elements of U satisfy it. In the former case the robustness is equal to zero; in the latter it is unbounded.

    In short, in the one dimensional case, Info-Gap's robustness analysis boils down to solving the equation r(d,u) = r*.
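    This reduction can be sketched in a few lines of Python (hypothetical numbers again: r(u) = 100 - u², r* = 19, û = 0, U = [-1000,1000]): no Info-Gap machinery at all, just plain bisection on r(u) = r*.

```python
def bisect(f, lo, hi, tol=1e-10):
    """Plain bisection; assumes a sign change of f between lo and hi."""
    flo = f(lo)
    for _ in range(200):
        mid = (lo + hi) / 2
        fmid = f(mid)
        if flo * fmid <= 0:     # root lies in [lo, mid]
            hi = mid
        else:                   # root lies in [mid, hi]
            lo, flo = mid, fmid
        if hi - lo < tol:
            break
    return (lo + hi) / 2

r = lambda u: 100 - u ** 2          # hypothetical performance function
r_star, u_hat, U = 19.0, 0.0, (-1000.0, 1000.0)

g = lambda u: r(u) - r_star         # robustness = distance from û to nearest root of g
roots = []
if g(U[0]) < 0:                     # requirement fails somewhere left of û
    roots.append(bisect(g, U[0], u_hat))
if g(U[1]) < 0:                     # ... and somewhere right of û
    roots.append(bisect(g, u_hat, U[1]))

# If roots is empty, every u in U is safe and the robustness is unbounded.
robustness = min(abs(u - u_hat) for u in roots)   # here ≈ 9
```

The roots bracket the safe region around û; the distance to the nearer one is the robustness, with no uncertainty model in sight.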

    Remark:
    Note that if for a given decision d∈D the equation r(d,u) = r* has a unique root in U, then the implication is that the critical value of u associated with d does not depend on the estimate û. The picture is this:

    Since we assume that r(d,û)≥r*, it follows that uc is critical regardless of the specific value of û.

    If r(d,u)=r* has more than one root in U, then the critical value of u is the root that is closest to û, and this of course depends on the value of û. The picture is this:

    Here we have 5 roots in U. Which of those is the critical u depends on the location of û.


  • FAQ-63: What are the differences/similarities between Info-Gap's tradeoff curves and the famous Pareto Frontiers?

    Answer-63: Info-Gap's tradeoff curves (between performance and robustness) are typical examples of the famous Pareto Frontier.

    It is therefore utterly incomprehensible that no indication whatsoever of this fact is given in either edition of the Info-Gap book (Ben-Haim 2001, 2006). Both editions burst with discussions (supported by curves) of tradeoffs between robustness and performance, yet nowhere is it made clear that these are Pareto tradeoffs. Indeed, the term Pareto is not even listed in the subject index of either book.

    It is important to call attention to this fact if only for the reason that -- as my experience has shown -- newcomers to the field have the impression that the idea of robustness/performance tradeoffs is an Info-Gap innovation.


  • FAQ-64: Can Info-Gap's generic tradeoff problem be formulated as a Pareto Optimization problem?

    Answer-64: Indeed it can.

    Info-Gap's tradeoff problem can be stated formally as follows:

    z* :=  P-Max  {(α,ρ): r(d,u) ≥ ρ, ∀u∈U(α,û)}
           d∈D
           α≥0
           ρ∈ℜ

    where P-Max denotes the Pareto Maximization operation.

    Note that in practice it is often convenient to generate the Pareto Frontier for the Info-Gap tradeoff problems by solving the following parametric Maximin problem:

    r*(α) :=  max    min    r(d,u)   ,   α ≥ 0
              d∈D  u∈U(α,û)

    The Pareto frontier consists of all pairs (α,r*(α)), α≥0.
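    Here is a minimal Python sketch of this recipe (the two decisions and their performance functions are hypothetical): for each horizon α we solve the inner Maximin on a grid and record the pair (α, r*(α)).

```python
u_hat = 0.0
decisions = {
    "aggressive":   lambda u: 10 - 4 * abs(u),   # high nominal reward, fragile
    "conservative": lambda u: 6 - abs(u),        # lower reward, degrades slowly
}

def worst_case(r, alpha, n=201):
    """min of r over the ball [û-α, û+α], approximated on a grid."""
    return min(r(u_hat - alpha + 2 * alpha * i / (n - 1)) for i in range(n))

frontier = []
for k in range(31):                      # horizons α = 0.0, 0.1, ..., 3.0
    alpha = k / 10
    # the parametric Maximin: r*(α) = max over d of the worst case over U(α,û)
    r_star_alpha, best_d = max(
        (worst_case(r, alpha), name) for name, r in decisions.items())
    frontier.append((alpha, r_star_alpha, best_d))

# The pairs (α, r*(α)) trace the Pareto frontier: at α = 0 the "aggressive"
# decision wins (reward 10); beyond α = 4/3 the "conservative" one takes over,
# exhibiting the usual robustness/performance tradeoff.
```

Sweeping α and re-solving the Maximin is exactly how such tradeoff curves are generated in practice.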


  • FAQ-65: What exactly is "Knightian Uncertainty" and in what way is it different from "conventional" uncertainty?

    Answer-65: To answer this question I shall have to take a closer look at the concept of uncertainty and its affiliated terminology. It is important to make clear from the outset, though, that the epithet "Knightian" is no more than a stand-in for "Severe". So the constant recitation of the term "Knightian Uncertainty" (in the Info-Gap literature) to highlight the severity of the uncertainty has the effect of turning it into a mere buzzword.

    The epithet "Knightian" is due to the economist Frank Hyneman Knight (1885-1972) -- one of the founders of the so-called "Chicago school of economics" -- who is credited with the distinction between "risk" and "uncertainty".

    Recall that classical decision theory distinguishes between three levels of knowledge pertaining to a state-of-affairs, namely

    • Certainty
    • Risk
    • Uncertainty

    The "Risk" category refers to situations where the uncertainty can be quantified by standard probabilistic constructs such as probability distributions.

    In contrast, the "Uncertainty" category refers to situations where our knowledge about the parameter under consideration is so meager that the uncertainty cannot be quantified even by means of an "objective" probability distribution.

    The point is then that "Uncertainty" eludes "measuring". It is simply impossible to provide a means by which we would "measure" the level, or degree, of "Uncertainty" to thereby indicate how great or daunting it is. To make up for this difficulty a tradition has developed whereby the intensity of the "Uncertainty" is captured descriptively, that is, informally through the use of "labels" such as these:

    • Strict uncertainty
    • Severe uncertainty
    • Extreme uncertainty
    • Deep uncertainty
    • Substantial uncertainty
    • Essential uncertainty
    • Hard uncertainty
    • High uncertainty
    • True uncertainty
    • Fundamental uncertainty
    • Wild uncertainty
    • Knightian uncertainty
    • True Knightian uncertainty
    • Severe Knightian uncertainty

    The trouble is, however, that all too often, these terms are used as no more than buzzwords with a web of empty rhetoric spun around them. So, to guard against this, it is important to be clear on their meaning in the context of the problem under consideration.

    In this discussion I prefer to use the term "Severe Uncertainty".  I understand "Severe Uncertainty" to connote a state-of-affairs where uncertainty obtains with regard to the true value of a parameter of interest. That is, the true value of this parameter is unknown so that the estimate we have of this true (correct) value is:

    • A wild guess.
    • A poor indication of this true (correct) value.
    • Likely to be substantially wrong.

    According to some of my colleagues, "true" severe uncertainty can also entail that the estimate of the true value of the parameter of interest is based on

    • Intuition
    • Gut feeling
    • Rumors

    So, as you can see, decision-making under severe uncertainty entails dealing with extreme situations where the estimates used are based on the flimsiest grounds -- even ... rumors!

    This, needless to say, is a formidable challenge, especially if the stated goal (as in the case of Info-gap) is to provide robust decisions.

    To return then to the origins of the term "Knightian Uncertainty", here is Knight's description of the difference between "Risk" and "Uncertainty":

    To preserve the distinction which has been drawn in the last chapter between the measurable uncertainty and an unmeasurable one we may use the term "risk'' to designate the former and the term "uncertainty'' for the latter. The word "risk'' is ordinarily used in a loose way to refer to any sort of uncertainty viewed from the standpoint of the unfavorable contingency, and the term "uncertainty'' similarly with reference to the favorable outcome; we speak of the "risk'' of a loss, the "uncertainty'' of a gain. But if our reasoning so far is at all correct, there is a fatal ambiguity in these terms, which must be gotten rid of, and the use of the term "risk'' in connection with the measurable uncertainties or probabilities of insurance gives some justification for specializing the terms as just indicated. We can also employ the terms "objective'' and "subjective'' probability to designate the risk and uncertainty respectively, as these expressions are already in general use with a signification akin to that proposed. The practical difference between the two categories, risk and uncertainty, is that in the former the distribution of the outcome in a group of instances is known (either through calculation a priori or from statistics of past experience), while in the case of uncertainty this is not true, the reason being in general that it is impossible to form a group of instances, because the situation dealt with is in a high degree unique. The best example of uncertainty is in connection with the exercise of judgment or the formation of those opinions as to the future course of events, which opinions (and not scientific knowledge) actually guide most of our conduct.
    Knight (1921, III.VIII.1-2)

    Personally, I prefer the following characterization of uncertainty. It is taken from a paper by the famous British economist John Maynard Keynes (1883-1946), whose "Keynesian economics" had a major impact on modern economic and political theory.

    By "uncertain" knowledge, let me explain, I do not mean merely to distinguish what is known for certain from what is only probable. The game of roulette is not subject, in this sense, to uncertainty; nor is the prospect of a Victory bond being drawn. Or, again, the expectation of life is only slightly uncertain. Even the weather is only moderately uncertain. The sense in which I am using the term is that in which the prospect of a European war is uncertain, or the price of copper and the rate of interest twenty years hence, or the obsolescence of a new invention, or the position of private wealth owners in the social system in 1970. About these matters there is no scientific basis on which to form any calculable probability whatever. We simply do not know. Nevertheless, the necessity for action and for decision compels us as practical men to do our best to overlook this awkward fact and to behave exactly as we should if we had behind us a good Benthamite calculation of a series of prospective advantages and disadvantages, each multiplied by its appropriate probability, waiting to be summed.
    Keynes (1937, pp. 213-5)

    In sum, the term "Knightian" does not connote conditions of uncertainty that are more exacting, or forbidding, or whatever, than those conveyed by the term "severe". Furthermore, the constant recitation of the term "Knightian Uncertainty" in the Info-Gap literature should not be interpreted as suggesting that Info-Gap in fact has the capabilities to deal with this uncertainty.


  • FAQ-66: Do you know of any convincing example where the Info-Gap complete region of uncertainty is unbounded?

    Answer-66: No, I don't.

    But I know of many unconvincing examples. In fact, all the examples I have found in the Info-Gap literature are unconvincing.

    It suffices to mention two.

    Example 1: Project Scheduling

    In the project scheduling problem described in Ben-Haim (2006, pp. 64-70) the unknown parameter of interest is the list of N=16 duration times of some given tasks. The nominal durations of the tasks vary from 1 to 6 and the prescribed critical completion time of the entire project, tc, varies from 21 to 30.

    Note that if the duration of a task is longer than tc, the performance requirement is violated. Hence, in this example, task durations larger than say 30.01 would not even come into the picture.

    Yet, the complete region of uncertainty defined in this case (Ben-Haim 2006, pp. 64-70) is unbounded.

    Example 2: Do foragers Optimize or Satisfice?

    In the foraging problem studied in Carmel and Ben-Haim (2005) the unknown parameter is the rate of gain in energy (joules/minute) for a foraging animal in a given patch.

    Carmel and Ben-Haim (2005, p. 635) propose an unbounded complete region of uncertainty for this parameter.

    I must admit that I am no expert on the energy consumption of foraging animals, but I am confident that it is far more realistic to assume that the region of uncertainty under consideration is bounded (above and below) rather than unbounded.

    Furthermore, I am confident that experts in this area can provide reasonable bounds for the complete region of uncertainty.

    If hard pressed, I would argue that a bounded interval, say G=[-1000000,1000000], is far more realistic than the interval (-∞,∞) proposed by Carmel and Ben-Haim (2005).

    ......................................................

    In any event my main point is this:

    The issue here is not whether it makes sense to define an unbounded region of uncertainty in cases where a bounded region will do. The issue here is -- as I explain in FAQ-26 and FAQ-36 -- that the very idea of positioning the Info-Gap analysis in an unbounded region of uncertainty makes a mockery of this analysis, as it exposes it to the following absurdity:

         No Man's Land              û              No Man's Land
                            [Safe Sub-region]
    -∞   <----------------- Complete region of uncertainty ----------------->   ∞

    No amount of rhetoric can undo the idea conveyed by this picture.


  • FAQ-67: Can't Info-Gap's "localness" flaw be fixed by means of a robustness analysis at a number of different estimates spread over the complete region of uncertainty?

    Answer-67: No it cannot.

    Suppose that we adopt this prescription and conduct Info-Gap's robustness analysis k times, using k different estimates û(1),...,û(k). We would then end up with k (potentially different) optimal decisions, d*(1),...,d*(k), to reckon with. How would we decide which decision to select?

    In other words, instead of resolving the "localness" of Info-Gap's robustness analysis, this attempted "rescue operation" only brings out more forcefully how deep-rooted and intractable the failure is. The failure is intractable because the basic approach to severe uncertainty is flawed: it fails to recognize that the real difficulty posed by severe uncertainty about the true value is that each value of the parameter (potentially) has a different optimal decision.

    So, modifying Info-Gap's robustness analysis by means of the above prescription would require that the modified analysis be additionally backed up by a theory prescribing the selection of decisions under conditions of ... severe uncertainty. But this of course would require dealing with the following two issues:

    • Scenario generation
      How to select the different estimates (scenarios) in a manner ensuring that the complete region of uncertainty is properly represented?

    • Scenario resolution
      How to resolve the difficulty of choosing the "best" decision out of the results obtained from the different scenarios?

    In short, modifying Info-Gap's robustness analysis by means of the above prescription would be tantamount to re-inventing the well-established fields of scenario optimization (e.g. Dembo 1991) and robust optimization (e.g. Kouvelis and Yu, 1997).
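    The difficulty is easy to demonstrate with a toy Python sketch (the two decisions, their performance functions and the candidate estimates are all hypothetical): the same robustness maximization, run at different estimates, elects different "optimal" decisions, and nothing in the prescription says how to adjudicate between them.

```python
decisions = {
    "d1": lambda u: 12 - 3 * abs(u - 2),   # performs best near u = 2
    "d2": lambda u: 12 - 3 * abs(u + 2),   # performs best near u = -2
}
r_star = 6.0                               # performance requirement

def robustness(r, u_hat, step=0.01, alpha_max=10.0):
    """Largest α (grid search) with r(u) >= r_star on all of [û-α, û+α].
    For these single-peaked functions the worst case sits at the endpoints."""
    alpha, a = 0.0, 0.0
    while a <= alpha_max:
        if min(r(u_hat - a), r(u_hat + a)) < r_star:
            break
        alpha = a
        a += step
    return alpha

for u_hat in (-2.0, 0.0, 2.0):             # three candidate estimates û(i)
    best = max(decisions, key=lambda d: robustness(decisions[d], u_hat))
    print(u_hat, "->", best)               # the elected decision flips with û
```

Each estimate "votes" for its own decision; resolving the disagreement is precisely the scenario-resolution problem described above.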

    Remark:
    It is interesting to note that Info-Gap experts are aware of the difficulties associated with scenario generation and analysis:

    "... In recent years Rob Lempert and Steve Bankes at the RAND Corporation have developed computer-intensive simulation models for analysing the possible outcomes of policy decisions over large spaces of possible futures. The approach recognises the deficiencies in any model of a complex system and does not attempt to represent the uncertainties in probabilistic terms. Rather, the approach, which the team at RAND refer to as 'Robust Decision Making', is based upon identifying options that perform acceptably well over the widest subset of the space of possible futures. The problem still remains, however, of specifying the range of that space of possibilities. Actually, if decision makers and their analysts know anything, it is about the central tendencies, not the bounds of variation. ..."
    Hall and Ben-Haim (2007)

    Does this explain the lack of any discussion on "Scenario Analysis" in the Info-Gap books (Ben-Haim 2001, 2006) and the absence of this topic from the subject indices?

    Aren't Hall and Ben-Haim (2007) grossly inconsistent here?

    While they raise concerns about the specification of the "range of that space of possibilities" and the difficulties in specifying the "bounds" of this space, it does not even occur to them that Info-Gap's local approach to scenario generation might be incomparably more problematic!

    In other words, Hall and Ben-Haim (2007) are perfectly happy with Info-Gap's local scenarios, which completely ignore the severity of the uncertainty under consideration and the huge No Man's Land that the method generates:

  • Scenario Analysis a la Info-Gap

         No Man's Land              û              No Man's Land
                            [Safe Sub-region]
    -∞   <----------------- Complete region of uncertainty ----------------->   ∞

    No amount of rhetoric can "reinterpret" this picture.


  • FAQ-68: What exactly is behind Info-Gap's claim that decisions designed solely to optimize performance have no robustness?

    Answer-68: Mostly rhetoric.

    Info-Gap decision theory advances the view that under conditions of severe uncertainty maximizing the robustness of decisions is superior to maximizing the "reward" yielded by them. So as might be expected, when two decisions are compared, one that maximizes reward and another that maximizes robustness, the former may not necessarily be robust according to Info-Gap's definition of robustness.

    Of course, one can argue that the decision that maximizes robustness does not necessarily generate a good level of "reward".

    But this is hardly surprising: If x is an optimal solution to Problem A and y is an optimal solution to Problem B then we neither expect x to be a good solution to Problem B, nor y to be a good solution to Problem A. And why should we? After all, we compare the performance of solutions obtained for two different problems!

    Why should we be surprised to learn that a solution found for Problem A does not perform well in the context of Problem B, and vice versa?

    In short, the thesis that maximizing robustness is superior is a shibboleth that may seem sensible at first sight, especially when it is accompanied by high-flown rhetoric. But it does not withstand the slightest scrutiny. I discuss this issue in connection with the Optimizing vs Satisficing debate.

    I fully sympathize with Odhnoff's (1965) frustration:

    It seems meaningless to draw more general conclusions from this study than those presented in section 2.2. Hence, that section maybe the conclusion of this paper. In my opinion there is room for both 'optimizing' and 'satisficing' models in business economics. Unfortunately, the difference between 'optimizing' and 'satisficing' is often referred to as a difference in the quality of a certain choice. It is a triviality that an optimal result in an optimization can be an unsatisfactory result in a satisficing model. The best things would therefore be to avoid a general use of these two words.
    Jan Odhnoff
    On the Techniques of Optimizing and Satisficing
    The Swedish Journal of Economics
    Vol. 67, No. 1 (Mar., 1965)
    pp. 24-39

    Now back to the matter at hand.

    Consider the following two problems:

    Problem A:   z*(û) :=  max  r(d,û)
                           d∈D

    Problem B:   α(û) :=  max  max {α≥0: r(d,u) ≥ r*, ∀u∈U(α,û)}
                          d∈D

    Note that Problem B is Info-Gap's generic decision-making model.

    You do not have to be an expert on optimization theory to figure out that the two problems are utterly different, so that an optimal solution to say Problem A is unlikely to be optimal for Problem B.

    In particular, if we let r* = z*(û), then clearly the Info-Gap robustness of an optimal solution to Problem A would most likely be equal to zero, recalling that the robustness of decision d according to Info-Gap is equal to

    α(d,û):= max {α≥0: r(d,u) ≥ r*, ∀u∈U(α,û)}

    To see why this is so, substitute r* = z*(û) in this expression. This yields,

    α(d,û):= max {α≥0: r(d,u) ≥ z*(û), ∀u∈U(α,û)} , d∈D

    Now let dA be any optimal solution to Problem A. Then α(dA,û) > 0 only if r(dA,u) ≥ z*(û) for all u ∈U(α',û), where α'=α(dA,û). Otherwise, α(dA,û) = 0.

    So what?

    The same is true for any decision d∈D, including the decision generated by Info-Gap's own decision-making model -- and this, mind you, is the case regardless of the specific value of r* that is used.

    Observe that this does not mean that the optimal decision to Problem A has zero robustness if r* < z*(û).
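    A small Python sketch makes the point concrete (the model r(d,u) = 10 - (u-d)² and all numbers are hypothetical): with r* = z*(û) every decision in D has zero robustness, whereas with any r* < z*(û) the performance optimizer's robustness is positive.

```python
D = [1, 2, 3, 4, 5]                      # hypothetical decision space
u_hat = 3.0                              # the estimate û
r = lambda d, u: 10 - (u - d) ** 2       # hypothetical performance function

def robustness(d, r_star, step=0.01, alpha_max=100.0):
    """Largest α (grid search) with r(d,u) >= r_star on all of [û-α, û+α];
    for this concave r the worst case sits at the endpoints of the ball."""
    alpha, a = 0.0, 0.0
    while a <= alpha_max:
        if min(r(d, u_hat - a), r(d, u_hat + a)) < r_star:
            break
        alpha = a
        a += step
    return alpha

z_star = max(r(d, u_hat) for d in D)     # z*(û) = 10, attained by d = 3

# With r* = z*(û): EVERY decision has zero robustness, not just the optimizer.
zero_at_max = [robustness(d, z_star) for d in D]    # [0.0, 0.0, 0.0, 0.0, 0.0]

# With r* = 9 < z*(û): the performance optimizer d = 3 has positive robustness.
optimizer_at_9 = robustness(3, 9.0)                 # ≈ 1.0
```

At r* = z*(û) the comparison is vacuous: all decisions are in the same (zero-robustness) boat.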

    In short, the claim that decisions designed solely to optimize performance have no robustness is meaningless unless the critical performance level r* is specified:

    • If r* < z*(û) then this claim is groundless and is typically wrong (see FAQ-69).

    • If r* = z*(û) then this claim is correct but ... does not mean anything (see FAQ-69).

    Therefore, statements such as

    "... Attempting solely to optimize energy intake may endanger the animal because a maximal energy-intake strategy has zero robustness to info gaps. ..."
    Carmel and Ben-Haim (2005, p. 639)

    and

    "... maximal utility is invariably accompanied by zero robustness. ..."
    Ben-Haim (2006, p. 295)

    are at best misguided.

    Unfortunately, they are also misleading, as they fail to specify the assumed value of r*, and the fact that, for the value of r* for which the claim is valid, all other decisions are in the same boat.

    Remarks:

    • The fact that the model "z*(û):= max {r(d,û): d∈D}" is used to determine what decision d should be used, does not mean that the reward z*(û) is expected to be realized -- especially under conditions of severe uncertainty.

    • Furthermore, if your interest is in the values of u in some set V⊂U, you can use the following robust optimization model:

      max  r(d,û) subject to r(d,u)≥r',∀u∈V
      d∈D

      where r' is your preferred threshold performance level on V.


  • FAQ-69: Can you give a counter-example to the claim that decisions that optimize performance have no robustness to info-gaps?

    Answer-69: Indeed I can.

    In fact this is an extremely easy task because it gives expression to what is the rule rather than the exception.

    Consider for instance the Info-Gap model where

    • Complete uncertainty region:

      U = [0,∞)²

    • Decision space:

      D = {(d1,d2): d1+d2 = 10, d1,d2≥0}

    • Performance function:

      r(d,u) = d1u1 + d2u2

    • Regions of uncertainty:

      U(α,û)={u∈U: (u1 - û1)² + (u2 - û2)² ≤ α²}

    • Best estimate: û=(6,5)

    • Critical performance level: r* = 30

    The optimal solution to the "performance optimization problem"

    z*(û):= max   r(d,û)
    d∈D

    is d'=(10,0), yielding a reward of r'= r(d',û) = 60.

    The robustness of this decision -- according to Info-Gap -- is as follows:

    α(d',û) := max {α≥0: r(d',u) ≥ r*, ∀u∈U(α,û)}
             = max {α≥0: 10u1 ≥ 30, ∀u∈{u≥0: (u1 - 6)² + (u2 - 5)² ≤ α²}}
             = 3

    So clearly, the robustness of d' is not equal to zero.

    Furthermore, observe that the most robust decision according to Info-Gap is d''=(6,4), yielding a reward of r''= r(d'',û) = 56 and robustness α(d'',û) = √13 ≈ 3.6056.
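    The two robustness values can be checked numerically with a few lines of Python: for a linear reward and a Euclidean ball that stays inside U = [0,∞)² (as it does for both decisions here), the worst case over the ball has the closed form r(d,û) - α·‖d‖.

```python
from math import hypot

u_hat = (6.0, 5.0)
r_star = 30.0

def reward(d, u=u_hat):
    return d[0] * u[0] + d[1] * u[1]     # r(d,u) = d1*u1 + d2*u2

def robustness(d):
    # For a linear reward, the worst case over the Euclidean ball of radius α
    # around û is r(d,û) - α*||d||, so α(d,û) = (r(d,û) - r*)/||d||.
    return (reward(d) - r_star) / hypot(d[0], d[1])

d_prime = (10.0, 0.0)    # maximizes the reward at û: r(d',û) = 60
d_double = (6.0, 4.0)    # the most robust decision:  r(d'',û) = 56

print(robustness(d_prime))    # 3.0 -- not zero
print(robustness(d_double))   # √13 ≈ 3.6056
```

So the reward maximizer d' has robustness 3, not zero, exactly as claimed above.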

    Here, then, is the picture, where "safe" means that the performance requirement is satisfied:

    • r* = 30. Robustness of decision d'=(10,0) -- the Performance Optimization
      decision -- which maximizes the reward r(d,û), yielding r(d',û)=60 and
      α(d',û)=3.

    • r* = 30. Robustness of decision d''=(6,4) -- the Optimal Info-Gap
      Robustness decision -- which maximizes the robustness α(d,û), yielding
      r(d'',û)=56 and α(d'',û) = √13 ≈ 3.6056.

    Needless to say, the picture is utterly different if we change r*=30 to r*=60.

    • r* = 60. The robustness of decision d'=(10,0), which maximizes the reward
      r(d,û), is equal to zero. Note that this decision satisfies the
      performance requirement r(d,û)≥r*.

    • r* = 60. The robustness of decision d''=(6,4), which maximizes the
      robustness α(d,û), is equal to zero. Note that this decision does not
      satisfy the performance requirement r(d,û)≥r*.

    As an aside, I should point out that it is not at all clear in what sense d'' is a priori superior to d', as claimed by Info-Gap decision theory.


  • FAQ-70: On what grounds does Info-Gap decision theory claim that utility maximization is entirely unreliable?

    Answer-70: Who knows?

    Info-Gap decision theory contends that under severe uncertainty maximizing robustness is superior to maximizing performance (utility). So, its decision model calls for the maximization of robustness subject to a performance requirement (constraint).

    Indeed, one of the arguments that Info-Gap decision theory puts forward to promote itself is that maximization of utility is unreliable (Ben-Haim 2006, p. 295):

    "... maximal utility is invariably accompanied by zero robustness. Utility-maximization is entirely unreliable. ..."

    Of course, this is absurd.

    If robustness is a factor, then one would maximize performance (utility) subject to a robustness constraint (requirement). In other words, the maximization of utility does not automatically preclude the incorporation of robustness requirements in the optimization model. In fact, this is routine in robust optimization.
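    A minimal Python sketch of this alternative, reusing the (hypothetical) example of FAQ-69 with an assumed robustness requirement α ≥ 3.2: maximize the reward at the estimate subject to the robustness constraint.

```python
from math import hypot

u_hat = (6.0, 5.0)
r_star = 30.0
alpha_min = 3.2                # required robustness level (an assumed value)

def reward(d):
    return d[0] * u_hat[0] + d[1] * u_hat[1]

def robustness(d):
    # closed form for a linear reward over a Euclidean ball inside U (see FAQ-69)
    return (reward(d) - r_star) / hypot(d[0], d[1])

# D = {(d1, 10 - d1): 0 <= d1 <= 10}, discretized in steps of 0.01
D = [(k / 100, 10 - k / 100) for k in range(1001)]

# Utility maximization WITH a robustness requirement built in:
feasible = [d for d in D if robustness(d) >= alpha_min]
best = max(feasible, key=reward)
# best sits strictly between d''=(6,4) (reward 56) and d'=(10,0) (reward 60):
# maximizing utility and requiring robustness coexist without any difficulty.
```

The resulting decision trades a little reward for the required robustness, which is precisely the routine robust-optimization formulation the quoted claim ignores.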

    And so, not only is the global claim (e.g. Ben-Haim 2006, p. 295) that "utility maximization is entirely unreliable" itself "entirely unreliable", it is in fact risible.

    This, of course, is an artifact of the kind of robustness that Info-Gap itself prescribes. The irony is that it is precisely Info-Gap's robustness that is entirely unreliable under conditions of severe uncertainty, owing to the local nature of Info-Gap's robustness model (see FAQ-13 - FAQ-17). The picture is this:

  • Reliability a la Info-Gap

         No Man's Land              û              No Man's Land
                            [Safe Sub-region]
    -∞   <----------------- Complete region of uncertainty ----------------->   ∞

    No amount of rhetoric can explain away the facts expressed by this picture.


  • FAQ-71: Can you elucidate your sketch of Info-Gap's "No Man's Land" Syndrome?

    Answer-71: Certainly.

    I created this sketch to explain in a non-technical manner the fundamental flaw in Info-Gap's robustness model. It is a variation of the Treasure Hunt sketch that I use for the same purpose. So, recall that the tale of the Treasure Hunt runs as follows:

    Treasure Hunt

    • The island represents the complete region of uncertainty under consideration (the region where the treasure is located).

    • The tiny black dot represents the estimate of the parameter of interest (estimate of the location of the treasure).

    • The large white circle represents the region of uncertainty pertaining to info-gap's robustness analysis.

    • The small white square represents the true (unknown) value of the parameter of interest (true location of the treasure).

    So, basing our search plan on Info-Gap Decision Theory, we may zero in on the neighborhood of downtown Melbourne while, for all we know, the true location of the treasure may well be in the middle of the Simpson Desert, or perhaps just north of Brisbane.

    Perhaps.


    Now, let us have a look at the No Man's Land sketch:

  • Info-Gap's No Man's Land Syndrome

         No Man's Land              û              No Man's Land
                            [Safe Sub-region]
    -∞   <----------------- Complete region of uncertainty ----------------->   ∞

    No amount of rhetoric can cover up the facts expressed by this picture.

    Explanation:

    So what is the point that is made by this sketch?

    The point is crystal clear:

    Info-Gap's robustness analysis is utterly unreliable because it is conducted only on a tiny section of the complete region of uncertainty.

    Differently put, Info-Gap's robustness is a local property, representing the performance of decisions in the neighborhood of the estimate û. It does not represent robustness with respect to the complete region of uncertainty.

    Since the whole point of severe uncertainty is that the location of the true value of the parameter of interest is unknown, it follows that a robustness analysis confined to the neighborhood of Info-Gap's "safe" sub-region -- or, for that matter, any other sub-region -- is unreliable.

    It is akin to taking soil samples or measuring the temperature in the neighborhood of Melbourne, when the complete region of interest is Australia.

    Of course, when one examines this obviously flawed idea as depicted by this sketch, the question that immediately springs to mind is how such an idea could have been formulated at all.

    And the answer is as simple as it is well documented. The origins of this idea lie in a muddled approach to uncertainty, in which "severe" uncertainty receives the same treatment as, say, "very mild" uncertainty -- more precisely, in which a "bad estimate" is treated as though it were a "nominal value".

    The point is not that Info-Gap's robustness model originated in "Robust Reliability in the Mechanical Sciences" (Ben-Haim, 1996) where the terms "severe uncertainty" and "Knightian uncertainty" are not so much as mentioned and where no indication is given that the uncertainty under consideration is severe. Rather, the point is that in Ben-Haim (1996), right from the outset, reliability is defined in terms of a safe deviation from a given nominal value:

    "... In this book, reliability is assessed in terms of robustness to uncertain variation. The system or model is reliable if failure is avoided even when large deviation from nominal conditions occur. On the other hand, a system is not reliable if small fluctuations can lead to unacceptable performance. ..."

    Ben-Haim (1996, p. 6)

    The picture then is this, where û denotes the nominal value of the parameter of interest:

    Info-Gap Reliability Model (1996)

                                    û
                            [Safe Sub-region]
         <----------------- Complete region of uncertainty ----------------->

    So, here we are concerned not with uncertainty in û, but with deviations of u from û -- which is treated as a given nominal value.

    In contrast, in Ben-Haim (2001, 2006) the difficulty stems from the fact that the estimate û is "bad" and is likely to be "substantially wrong".

    In short, the trouble is that Ben-Haim (1996, 2001, 2006) prescribes the same methodology for both cases. This methodology, although suitable for the 1996 version of the problem, is utterly unsuitable for the 2001/2006 version.



  • FAQ-72: What is the difference between "robustness to deviation from a given (nominal) value" and "robustness to severe uncertainty"?

    Answer-72: This is a very relevant question. It brings out the great confusion in the Info-Gap literature about the similarities/differences between these two concepts.

    The issue here is basically about the range of values that a parameter can take and what this range represents in the framework of robustness analysis. To clarify this point, we need to distinguish between two thoroughly different states-of-affairs:

    • Certain variability

    • Uncertain fixed value

    In the case of "certain variability" we know with certainty that the value of a parameter of interest will vary within a specified range. That is, we know that the parameter will take all the values it can within this pre-specified range. The task then is to evaluate our decisions according to this variation in values.

    In contrast, in the case of an "uncertain fixed value" the range does not signify possible variation in the parameter's value. Rather, the specified range represents a complete lack of knowledge as to the exact value that the parameter has, or can take, within this range.

For instance, consider the following two cases describing the endurance test of a car relative to speed (km/hr):

    • Test 1: Varied but known: You conduct a test where a car is driven for 2 hrs according to a prescribed pattern of varied speeds (km/hr) in the range [80,140].

    • Test 2: Fixed but unknown: You plan to conduct a test whereby a car will be driven at a fixed constant speed (km/hr) for 2 hrs, but the speed is unknown and its value is subject to severe uncertainty. That is, all that is known, at present, is that the fixed constant speed will be in the range [80,140].

    Here is the picture:

[Figure: two panels -- "Certain Variability" and "Uncertain Fixed Value" -- each over the range [80,140]]

The difference between these two cases is clear, so it requires no further elucidation.

    We shall consider the case where "certain variability" is expressed as a "deviation from a given nominal value". For instance, we shall represent the range [80,140] as a deviation from the given value vn=110, namely the range is expressed as [vn - 30 , vn + 30], where vn represents the "nominal" value (speed).

    So as you can see, the notion of the "robustness of a car to speed" can have different meanings which are answers to two totally different questions:

To throw more light on the difference between the two questions, and hence between the different answers, let us consider the following simple example.

    In the greenhouse


[Picture: see original at WIKIPEDIA]

You need to replace the old climate control system in your greenhouse. The main function of this system is to control the temperature, t, in the greenhouse. Several products in the market fit the bill, so your task is to decide which of them suits your needs best.

    I shall describe two versions of this problem.

The first version is about an "uncertainty-free" problem where all that needs to be known about the parameters of the problem is indeed known, including the temperature t. The second version is about an utterly different situation. Here, the true value of t is unknown as it is subject to severe uncertainty.

    Version 1: Robustness to deviation from a given nominal value.

    The new climate control system will be used in a strawberry greenhouse where the nominal temperature, tn, is known. The new system must maintain the temperature in the greenhouse at this level. However, the system must also be able to cope well with deviations from this nominal value, and preference will be given to systems that can cope well with large deviations -- subject to a performance requirement.

    Let

To evaluate system s, you would use a criterion that determines the largest deviation of t from the given nominal value tn for which the requirement h(s,t) ≥ h* is still satisfied. Thus, you would define the robustness of system s as follows:

    α(s):= max{α≥0: h(s,t) ≥ h*, ∀t∈U(α,tn)}

    where U(α,tn)={t: |t-tn|≤α}.
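For concreteness, this criterion can be sketched numerically. In the following sketch the performance function h and all the numbers are hypothetical, and the grid search is only one of many ways to compute the maximum:

```python
# A grid-search sketch of the Version 1 robustness measure
# alpha(s) = max{alpha >= 0 : h(s,t) >= h*, for all t in U(alpha,tn)},
# with U(alpha,tn) = {t : |t - tn| <= alpha}.

def robustness(h, t_nominal, h_star, step=0.01, alpha_max=50.0):
    """Largest deviation alpha (to grid precision) such that
    h(t) >= h_star for every t in [t_nominal - alpha, t_nominal + alpha]."""
    best, k = 0.0, 0
    while k * step <= alpha_max:
        alpha = k * step
        lo, hi = t_nominal - alpha, t_nominal + alpha
        n = max(2, int(round((hi - lo) / step)) + 1)
        # check the performance requirement on a grid over the interval
        if any(h(lo + i * (hi - lo) / (n - 1)) < h_star for i in range(n)):
            return best
        best, k = alpha, k + 1
    return best

# Hypothetical system: performance falls off quadratically away from 20 C
h = lambda t: 100.0 - (t - 20.0) ** 2
print(robustness(h, t_nominal=20.0, h_star=75.0))   # -> 5.0 (analytically, |t - 20| <= 5)
```

Note that nothing in this computation involves uncertainty: tn is a known reference point, and the answer is simply the radius of the largest interval around it on which the performance requirement holds.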

The robustness that is being sought here has got nothing to do with uncertainty. That is, robustness is not directly driven, or motivated, or required, by the fact that there is uncertainty about the parameter of interest. For, the important thing to note here is that tn is not an estimate of an unknown parameter. Rather, tn is a given known quantity that is used as a "reference point". This is not to say, of course, that uncertainty in the process governing the temperature t in the greenhouse cannot be a reason for seeking robustness. The point here is simply that robustness is evaluated in terms of the deviation of t from tn.

    Version 2: Robustness to severe uncertainty in the true value of the parameter.

    You plan to install a new climate control unit in your old square-watermelons greenhouse. The difficulty is that for various reasons you must purchase the unit without delay.

This means that your decision as to what control unit to purchase must be taken without knowing the prospective crop in the greenhouse, hence without knowing the required nominal temperature that the unit will have to maintain.

Moreover, the huge success of your square-watermelons project impels you to consider a variety of exotic options -- including a miniature pink polar bear. In short, as things stand, the required nominal temperature related to prospective projects in the greenhouse is subject to severe uncertainty.

    Let:

Note that the climate control unit is required to keep the temperature in the greenhouse at a fixed constant level and that the true value of this level is subject to severe uncertainty, hence unknown.

    Again, to select the most appropriate climate control unit, you would need to evaluate it with respect to the range of temperatures that the unit can maintain. The question is then: What is a proper definition for robustness against the severe uncertainty in the true value of t in this case?

    If you are an Info-Gap enthusiast, you may be tempted to set up the following robustness model for unit s:

    α(s,tw):= max{α≥0: h(s,t)≥h*, ∀t∈U(α,tw)}

    where tw is a wild guess of the true value of t and U(α,tw)={t: |t-tw| ≤ α}.


    Clearly, the robustness models of the two versions of the problem are identical except for the use of the symbols tn and tw.

    However:

    So, as I have been arguing all along, the flaw in the robustness model of Version 2 is in the fact that rather than take on the uncertainty in the true value of t, it focuses on the neighborhood of the wild guess tw. The picture is this:

    Robustness a la Info-Gap

[Figure: the safe sub-region around the wild guess tw, flanked on both sides by No Man's Land, within the complete region of uncertainty]

    No amount of rhetoric can explain away the facts expressed by this picture.
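Indeed, the facts expressed by this picture are easy to reproduce numerically. In the following sketch the two climate-control units A and B, the complete region [0,40], the requirement level h* = 50 and the wild guess tw = 20 are all invented for illustration. Unit A looks more robust than unit B in the neighborhood of tw, yet B satisfies the requirement over a larger share of the complete region of uncertainty:

```python
# A hypothetical illustration: a unit can score well on the local,
# Info-Gap style measure around the wild guess tw and still fail over
# most of the complete region of uncertainty.

REGION = [i * 0.1 for i in range(401)]      # complete region: [0, 40]
H_STAR = 50.0                               # performance requirement h*
TW     = 20.0                               # wild guess of the true t

h_A = lambda t: 100.0 - 4.0 * abs(t - TW)   # excellent near tw only
h_B = lambda t: 65.0 - 0.5 * t              # good over most of the region

def local_robustness(h):
    """Distance from tw to the nearest grid point where the
    requirement fails -- the Info-Gap style, local measure."""
    bad = [abs(t - TW) for t in REGION if h(t) < H_STAR]
    return min(bad) if bad else float("inf")

def safe_fraction(h):
    """Share of the complete region where the requirement holds."""
    return sum(1 for t in REGION if h(t) >= H_STAR) / len(REGION)

print(local_robustness(h_A), safe_fraction(h_A))   # A wins the local contest
print(local_robustness(h_B), safe_fraction(h_B))   # B is safe on more of the region
```

Here unit A's local robustness (about 12.6) exceeds unit B's (about 10.1), yet B meets the requirement on roughly 75% of the complete region against A's roughly 63%. The local measure simply does not see what happens in the No Man's Land.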

    In summary:

    So, the lesson for Info-Gap users is that for all the bombastic rhetoric in the Info-Gap literature, Info-Gap decision theory does not provide a methodology for robust decision-making in the face of severe uncertainty. The methodology that it provides for this purpose does no more than measure safe deviations from a wild guess.

    Info-Gap decision theory does not tackle the severity of the uncertainty it is supposed to manage, it ... ignores it.

    Put another way, Info-Gap decision theory makes it its business to determine the "safe" deviations from the estimate û, whereas decision-making under severe uncertainty enjoins tackling the gap between the estimate û and the true value u*. This, of course, is the reason for my labeling Info-Gap decision theory a voodoo decision theory (see FAQ-6).


  • FAQ-73: Can you give a complete example illustrating the working of an info-gap model?

Answer-73: Certainly. The following example, which is taken from my paper Sniedovich (2007), is based on an example outlined in Ben-Haim (2006, pp. 70-74).

    Example: Investment Portfolio

You have $Q to spare, so you consider a 10-year investment portfolio consisting of n possible options (securities). The future values of these securities after 10 years are unknown as they are subject to severe uncertainty. The question is then: given the uncertainty, what is the best investment strategy?

    To give readers who have not encountered this problem before a fuller appreciation of the difficulties presented by it, let us first consider a much simpler version thereof. Let us pretend that the future values of the securities are known. In this case we can state the problem under consideration as follows:





    Remarks

  • FAQ-74: Does Info-Gap robustness represent likelihood of safe performance?

    Answer-74: Of course it does not! How can it?!

    When all the rhetoric is stripped away, the Info-Gap methodology boils down to a non-probabilistic, likelihood-free model of robustness that -- according to its founder -- is designed specifically for decision-making under severe uncertainty. Granted, the theory does not offer a formal definition of severe uncertainty. Still, the underlying understanding is that the Info-Gap model gives expression to situations where the estimate of a parameter of interest is a poor indication of the true value and is likely to be substantially wrong.

    In other words, the underlying idea is that the estimate is no more than a guess -- perhaps a wild guess -- of the true value of the parameter of interest. Some Info-Gap scholars even consider situations where the estimate can be based on gut feeling and rumors.

    The poor quality of the estimate û explains why the horizon of uncertainty, α, of Info-Gap's nested regions of uncertainty, U(α,û), α≥ 0, is unbounded above.

    In this framework, the robustness of decision d is, by definition, the largest value of α, call it α(d,û), such that the performance requirement is satisfied by d at every point u in the region of uncertainty U(α(d,û),û):

    α(d,û):= max {α≥0: r(d,u) ≥ r*, ∀u∈U(α,û)}
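For concreteness, here is a minimal sketch of how this definition would be applied to rank decisions: compute α(d,û) for each d in a small hypothetical decision set D and select the decision with the largest value. The reward functions, the critical level r* and the estimate û are all invented; since these rewards are unimodal in |u - û|, it suffices to check the endpoints of each interval:

```python
# Sketch: compute alpha(d, u_hat) for each decision and pick the
# "robust-satisficing" choice. All functions and numbers are hypothetical.

U_HAT, R_STAR = 5.0, 10.0

D = {
    "d1": lambda u: 14.0 - abs(u - U_HAT),        # modest reward, degrades slowly
    "d2": lambda u: 20.0 - 4.0 * abs(u - U_HAT),  # high at u_hat, but fragile
}

def alpha(r, step=0.01, alpha_max=50.0):
    """Largest horizon alpha (to grid precision) with r(u) >= R_STAR for
    every u in U(alpha, u_hat) = [u_hat - alpha, u_hat + alpha]."""
    best, k = 0.0, 0
    while k * step <= alpha_max:
        a = k * step
        # for rewards unimodal in |u - u_hat|, the endpoints suffice
        if min(r(U_HAT - a), r(U_HAT + a)) < R_STAR:
            return best
        best, k = a, k + 1
    return best

scores = {name: alpha(r) for name, r in D.items()}
print(max(scores, key=scores.get), scores)   # d1 is the more robust decision
```

Note what the computation does and does not deliver: d1 is preferred because its safe interval around û is wider (radius 4.0 vs 2.5), not because it is in any sense more "likely" to perform well.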

    The following is a schematic representation of the result generated by Info-Gap's robustness analysis for decision d, where the red area around the estimate û represents the largest safe region around the estimate associated with this decision

[Figure: the largest safe sub-region (the red area) around the estimate û within the complete region of uncertainty]

    Now, key to this analysis and the results generated by it is that pre-analysis as well as post-analysis, the true value of u can be anywhere in the complete region of uncertainty U, including the No Man's Land.

Of course, this result is fully concordant with the thinking underlying Info-Gap decision theory. For as already pointed out in FAQ-37, the theory makes no assumption whatsoever regarding the likely location of the true value of u. To reiterate, nowhere is it assumed that the true value of u is more likely to be in the neighborhood of the estimate û than in any other neighborhood in the complete region of uncertainty U.

    In short, Info-Gap's uncertainty model is not only non-probabilistic, it is also likelihood-free and membership-free (Info-Gap's uncertainty model is not a fuzzy-set model).

    This means, of course, that Info-Gap's robustness model does not admit of any talk of "likelihood". And the immediate consequence of this key point is that Info-Gap robustness does not -- nor can it -- represent the likelihood of "successful" events such as "Decision d satisfies the performance requirement".

    Indeed, the claim that Info-Gap robustness represents the likelihood of "safe performance" flies in the face of Ben-Haim's long standing position on this matter. This position even predates the formulation of Info-Gap's robustness model: It goes back to the use of convex models to represent uncertainty.

    And to illustrate, consider this (emphasis is mine):

    While uncertainty in the shell shapes is fundamental to this analysis, there is no likelihood information, either in the formulation of the convex model or in the concept of reliability

    .........

    ........

    However, unlike in a probabilistic analysis, r has no connotation of likelihood. We have no rigorous basis for evaluating how likely failure may be; we simply lack the information, and to make a judgement would be deceptive and could be dangerous. There may definitely be a likelihood of failure associated with any given radial tolerance. However, the available information does not allow one to assess this likelihood with any reasonable accuracy.

    Ben-Haim (1994, p. 152)

    and this:

    Classical reliability theory is thus based on the mathematical theory of probability, and depends on knowledge of probability density functions of the uncertain quantities. However, in the present situation we cannot apply this quantification of reliability because our information is much too scanty to verify a probabilistic model. The info-gap model tells us how the unknown seismic loads cluster and expand with increasing uncertainty, but it tells us nothing about their likelihoods.
    Ben-Haim (1999, p. 1108)

    and this:

    In any case, an info-gap model of uncertainty is less informative than an probabilistic model (so its use is motivated by severe uncertainty) since it entails no information about likelihood or frequency of occurrence of u-vectors.
    Ben-Haim (2001a, p. 5)

    and this:

    In info-gap set models of uncertainty we concentrate on cluster-thinking rather than on recurrence or likelihood. Given a particular quantum of information, we ask: what is the cloud of possibilities consistent with this information? How does this cloud shrink, expand and shift as our information changes? What is the gap between what is known and what could be known. We have no recurrence information, and we can make no heuristic or lexical judgments of likelihood.
    Ben-Haim (2006, p. 18)

    That said, the following question immediately springs to mind, and would no doubt be raised by anyone -- expert in the field or novice:

If, as so emphatically argued by Ben-Haim himself, it cannot be posited that the true value of u is more likely to be in the neighborhood of the estimate û than in other neighborhoods of the complete region of uncertainty U, what is the sense of a priori fixing on this specific neighborhood and conducting the robustness analysis exclusively around û?

    This inevitable question explains why I dub Info-Gap decision theory a "voodoo decision theory".

In any event, my point is that Info-Gap users/scholars are thoroughly careless in their depiction of the robustness yielded by Info-Gap. Indeed, not only Info-Gap scholars but even Ben-Haim himself seems unable to resist the temptation to impute "likelihood" to Info-Gap's robustness (emphasis is mine):

    Information-gap (henceforth termed `info-gap') theory was invented to assist decision-making when there are substantial knowledge gaps and when probabilistic models of uncertainty are unreliable (Ben-Haim 2006). In general terms, info-gap theory seeks decisions that are most likely to achieve a minimally acceptable (satisfactory) outcome in the face of uncertainty, termed robust satisficing. It provides a platform for comprehensive sensitivity analysis relevant to a decision.
    Burgman, Wintle, Thompson, Moilanen, Runge, and Ben-Haim (2008, p. 8)

    This, of course, is utterly erroneous.

    But more than this:

    How is it that a methodology that in 2006 could not tell us anything about the likelihood of severely uncertain events, is used in 2009 to seek decisions that are most likely to perform well under high uncertainty?

    Clearly, there is no way to justify this position!

    Because, as the above quoted statements from Ben-Haim (1994, 1999, 2001a, 2006) categorically contend, Info-Gap decision theory " ... can make no heuristic or lexical judgments of likelihood ..."

And more than this, by its very nature, Info-Gap's robustness analysis is local. This means that this analysis cannot possibly tell us anything about the likelihood of events that occur outside the largest region of uncertainty around the estimate that it considers "safe". In other words, Info-Gap's robustness analysis cannot tell us anything about the "likelihood" of events in its -- typically vast -- No-Man's Land:

[Figure: the safe sub-region around the estimate û; the remainder of the complete region of uncertainty is No Man's Land]

    And yet, despite all this, and despite Ben-Haim's (1994, 1999, 2001a, 2006) clear warnings, Info-Gap scholars -- including Ben-Haim himself -- now argue that their non-probabilistic, likelihood-free models can be used to draw very strong conclusions regarding the likelihood of highly uncertain events!

    If this is not Voodoo decision-making, what is?!

    For, isn't it clear that the following two statements about the nature of Info-Gap decision theory are contradictory:

    Unfortunately, such zigzags are not unusual in Info-Gap.

    As shown elsewhere in this compilation, the preferred practice in Info-gap is to extend an error rather than admit to a mistake.

    In the words of Cole Porter, Anything Goes!

    Anything Goes (Cole Porter, 1934)
    Times have changed,
    And we've often rewound the clock,
    Since the Puritans got a shock,
    When they landed on Plymouth Rock.
    If today,
    Any shock they should try to stem,
    'Stead of landing on Plymouth Rock,
    Plymouth Rock would land on them.

    In olden days a glimpse of stocking,
    Was looked on as something shocking,
    But now, God knows,
    Anything Goes.

    Good authors too who once knew better words,
    Now only use four letter words
    Writing prose,
    Anything Goes.

    The world has gone mad today
    And good's bad today,
    And black's white today,
    And day's night today,
    When most guys today
    That women prize today
    Are just silly gigolos
    And though I'm not a great romancer
    I know that I'm bound to answer
    When you propose,
    Anything goes

    When grandmama whose age is eighty
    In night clubs is getting matey with gigolo's,
    Anything Goes.

    When mothers pack and leave poor father
    Because they decide they'd rather be tennis pros,
    Anything Goes.
    If driving fast cars you like,
    If low bars you like,
    If old hymns you like,
    If bare limbs you like,
    If Mae West you like
    Or me undressed you like,
    Why, nobody will oppose!
    When every night,
    The set that's smart
    Is intruding in nudist parties in studios,
    Anything Goes.

    The world has gone mad today
    And good's bad today,
    And black's white today,
    And day's night today,
    When most guys today
    That women prize today
    Are just silly gigolos
    And though I'm not a great romancer
    I know that I'm bound to answer
    When you propose,
    Anything goes

    If saying your prayers you like,
    If green pears you like
    If old chairs you like,
    If back stairs you like,
    If love affairs you like
    With young bears you like,
    Why nobody will oppose!

    And though I'm not a great romancer
    And though I'm not a great romancer
    I know that I'm bound to answer
    When you propose,
    Anything goes...
    Anything goes!

    Remark:

Having said all that, the following ought to be noted. Of course it is possible to construct simple, specially contrived cases where Info-Gap's robustness of a decision would be a proxy for the "probability of success", namely the probability that the decision satisfies the performance requirement (e.g. Ben-Haim 2006, pp. 283-284).

    But this in no way serves as a justification for generalized statements about Info-Gap's robustness, so that it is totally erroneous to claim that

    In general terms, info-gap theory seeks decisions that are most likely to achieve a minimally acceptable (satisfactory) outcome in the face of uncertainty, termed robust satisficing.

    Indeed, such claims give an utterly distorted view of what Info-Gap is all about and in particular of its robustness model. To be precise, statements such as this give a totally incorrect account of Info-Gap's definition of robustness:

    α(d,û):= max {α≥0: r(d,u) ≥ r*, ∀u∈U(α,û)}

[Figure: the safe sub-region around the estimate û within the complete region of uncertainty]

    So the bottom line is this:


  • FAQ-75: What are the ramifications of the argument that Info-Gap robustness does not represent likelihood of events?

    Answer-75: The argument showing that Info-Gap robustness does not represent likelihood of events brings into full view the fundamental flaw in Info-Gap decision theory: the flaw exposing it for the voodoo decision theory it is. It also brings out the muddled reasoning behind a misinterpretation of Info-Gap robustness that is common in the Info-Gap literature.

To explain this point, let us consider the simple case where the complete region of uncertainty is the positive segment of the real line. That is, assume that U=(0,∞). This means that the true value of u, call it u*, is equal to some positive number.

Since the value of u* is subject to severe uncertainty, all we know is that it is somewhere on U. That is, given that the uncertainty is severe, we have not the slightest inkling whether u* is more or less likely to be, for instance, in [100,200] rather than in [600,700], or whether it is more or less likely that u* ≤ 200 than u* ≥ 200. We simply have no clue.

    What Info-Gap users/scholars fail to appreciate is that the imposition of Info-Gap's ad hoc uncertainty model on U, does not make the slightest dent, not even by one iota, in the severity of the uncertainty. And to illustrate, let û=500 and

    U(α,û):={u≥0: |u-û| ≤ α} , α≥ 0

    The uncertainty in the true value of u has not changed even in the slightest, despite the imposition of these regions of uncertainty on U. We remain as ignorant as before as to whether u* is more or less likely to be in [100,200] than in [600,700], or whether it is more/less likely that u* ≤ 200 than u*≥ 200. We simply do not know.

The point here is that the choice of û=500 as a wild guess of u* in no way indicates that, all of a sudden, we have come to know that u* is more/less likely to be in the neighborhood of û=500.

    Consequently, the robustness of a decision a la Info-Gap, namely

    α(d,û):= max {α≥ 0: r* ≤ r(d,u), ∀ u∈ U(α,û)} , d∈ D

    has got nothing to do with the "likelihood" that d will satisfy the performance constraint r*≤ r(d,u).

That is, the fact that α(d'',û) < α(d',û) does not imply that decision d' is more/less likely than decision d'' to satisfy the performance constraint. All we can say is that

    observing that the last statement is a consequence of the nesting property of Info-Gap's regions of uncertainty.
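The nesting property itself is easy to verify for the interval model used above (û=500); the horizons 50 and 120 below are arbitrary illustrative values:

```python
# The nesting property: for U(alpha, u_hat) = {u >= 0 : |u - u_hat| <= alpha},
# a smaller horizon of uncertainty always yields a sub-region of a larger
# one. The numbers (u_hat = 500) follow the illustration in the text.

def region(alpha, u_hat=500.0, grid_max=1000, step=1):
    """Grid approximation of U(alpha, u_hat) on [0, grid_max]."""
    return {u for u in range(0, grid_max + 1, step) if abs(u - u_hat) <= alpha}

a_small, a_large = 50.0, 120.0              # say alpha(d'',u_hat) < alpha(d',u_hat)
assert region(a_small) <= region(a_large)   # U(a_small) is nested in U(a_large)

# So d' is certain to satisfy the requirement on a superset of the points
# on which d'' is -- but nothing follows about likelihood, since no
# measure of any kind is attached to these points.
print(len(region(a_small)), len(region(a_large)))   # -> 101 241
```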

    As far as "likelihood" goes, all we can say is that decision d' is certain to satisfy the performance constraint if the true value of u is in U(α(d',û),û) and that d'' is certain to satisfy the performance constraint if the true value of u is in U(α(d'',û),û).

    But we are in no position whatsoever to contend anything about how likely this will be.

    The picture is this:

[Figure: the safe sub-region around the estimate û within the complete region of uncertainty]

    But, this of course is hardly surprising as Info-Gap's robustness model:

    α(d,û):= max {α≥0: r(d,u) ≥ r*, ∀u∈U(α,û)}

    is non-probabilistic and likelihood-free.

    And to illustrate, consider this (emphasis is mine):

    While uncertainty in the shell shapes is fundamental to this analysis, there is no likelihood information, either in the formulation of the convex model or in the concept of reliability

    .........

    ........

    However, unlike in a probabilistic analysis, r has no connotation of likelihood. We have no rigorous basis for evaluating how likely failure may be; we simply lack the information, and to make a judgement would be deceptive and could be dangerous. There may definitely be a likelihood of failure associated with any given radial tolerance. However, the available information does not allow one to assess this likelihood with any reasonable accuracy.

    Ben-Haim (1994, p. 152)

    and this:

    Classical reliability theory is thus based on the mathematical theory of probability, and depends on knowledge of probability density functions of the uncertain quantities. However, in the present situation we cannot apply this quantification of reliability because our information is much too scanty to verify a probabilistic model. The info-gap model tells us how the unknown seismic loads cluster and expand with increasing uncertainty, but it tells us nothing about their likelihoods.
    Ben-Haim (1999, p. 1108)

    and this:

    In any case, an info-gap model of uncertainty is less informative than an probabilistic model (so its use is motivated by severe uncertainty) since it entails no information about likelihood or frequency of occurrence of u-vectors.
    Ben-Haim (2001a, p. 5)

    and this:

    In info-gap set models of uncertainty we concentrate on cluster-thinking rather than on recurrence or likelihood. Given a particular quantum of information, we ask: what is the cloud of possibilities consistent with this information? How does this cloud shrink, expand and shift as our information changes? What is the gap between what is known and what could be known. We have no recurrence information, and we can make no heuristic or lexical judgments of likelihood.
    Ben-Haim (2006, p. 18)

    The question requiring an answer in the first instance is of course the following:

Given that we have not the slightest clue as to whether the true value of u is more likely to be in the neighborhood of the estimate û, what is the rationale for singling out this particular neighborhood from the complete region of uncertainty and proceeding to confine the robustness analysis to this neighborhood?

    This, of course, is the reason for my dubbing Info-Gap a voodoo decision theory: even though there is no reason to believe that the true value of u is, or is likely to be, in the neighborhood of the wild guess û, Info-Gap fixes its robustness analysis in this neighborhood without bothering to check how decisions fare elsewhere in the complete region of uncertainty.

    But more bewildering is the fact that now Ben-Haim attributes to his non-probabilistic, likelihood-free robustness model the capability to evaluate/compare the likelihood of events such as whether the performance requirement is satisfied (emphasis is mine):

    Information-gap (henceforth termed `info-gap') theory was invented to assist decision-making when there are substantial knowledge gaps and when probabilistic models of uncertainty are unreliable (Ben-Haim 2006). In general terms, info-gap theory seeks decisions that are most likely to achieve a minimally acceptable (satisfactory) outcome in the face of uncertainty, termed robust satisficing. It provides a platform for comprehensive sensitivity analysis relevant to a decision.

    Burgman, Wintle, Thompson, Moilanen, Runge, and Ben-Haim (2008, p. 8)

The only way that one can claim validity for this statement is to show, or at the very least to assume explicitly, that the true value of u is likely to be in the neighborhood of the estimate û. But can such an assumption even be contemplated given that the true value of u is subject to severe uncertainty?

    I must conclude, therefore, that either this statement is one of those unfortunate slips, or that Info-Gap no longer claims to be a theory dealing with severe uncertainty, in which case it would be necessary to change, among other things, the title of Ben-Haim's (2006) book.

    If, however, this claim is upheld, then it would be totally in line with the established practice in Info-Gap, where — as my experience over the past five years has shown — extending an error is preferred to admitting to a mistake.

The important point to note here, incredible though it is, is that this indeed is how the estimate û is grasped and interpreted -- misinterpreted, of course -- by Info-Gap scholars/users. So, the real issue here is that they seem unperturbed by the idea that a local analysis in the neighborhood of a wild guess of a parameter of interest can yield robustness against severe uncertainty.

[Figure: the safe sub-region around the estimate û within the complete region of uncertainty]

    This is what is so troubling about this matter!


  • FAQ-76: Have there been any attempts to correct the fundamental flaws in Info-Gap decision theory?

    Answer-76: This seems to be the case.

    Before I proceed to discuss this question I need to point out that when I refer to attempts at correcting flaws in Info-Gap decision theory, I do not mean to suggest that one would find in the Info-Gap literature expressions such as:

    This is not the way things are done in the Info-Gap literature.

    The established practice in the Info-Gap literature, and this has been the case right from its launch, is to engage in rhetoric.

    Hence, statements openly acknowledging the existence of flaws in the theory accompanied by arguments outlining proposed corrections should not be expected. Rather, one should expect to find dissertations consisting of verbiage that inadvertently alludes to flaws, accompanied by semantic quick fixes that superficially attempt to dress up the fundamental flaws without really dealing with them in a serious manner.

    These lame, counter-productive attempts are reminiscent of the following familiar formula:

    • Q: How do you catch a blue kangaroo?
    • A: With a blue kangaroo trap.
  • Q: How do you catch a green kangaroo?
      • A: Paint it blue and catch it with the blue kangaroo trap.

    In other words, they offer no more than a superficial "paint job".

    But, as we know only too well, such "miracle cures" can offer no remedy at all.

For, as we shall see, not only do these remedies fall short of addressing Info-Gap's endemic flaws, but all they actually manage to do is exacerbate an already sorry state of affairs.

As things stand now, it seems that a number of Info-Gap scholars have taken note of my criticism of Info-Gap decision theory, to the extent that they have deemed it necessary to take certain measures that, they apparently believe, will blunt its bite.

    So before we can go into this matter, let us first recall that the two main points of my criticism of Info-Gap are that:

    Also, recall that Info-Gap's robustness model is formulated as follows:

    α(d,û):= max {α≥ 0: r* ≤ r(d,u), ∀ u∈ U(α,û)} , d∈ D

    It is also important to keep in mind the schematic representation of the result generated by Info-Gap's robustness analysis for decision d, where the white area around the estimate û represents the largest safe region around the estimate associated with this decision.

[Figure: the largest safe sub-region (the white area) around the estimate û within the complete region of uncertainty]

    Let us now examine, in fact revisit, the issues that are raised by the above two points and let us see how instead of dealing with these issues directly, Info-Gap scholars go for the evasive counter-productive quick fix.


  • FAQ-79: How is it that Info-Gap decision theory is so laconic about the estimate û?

    Answer-79: This question brings out another serious methodological failing of Info-Gap.

    The point is this. The estimate û constitutes the fulcrum of Info-Gap's uncertainty and robustness models. It is the core element around which revolves the robustness analysis which, according to Info-Gap's prescription, provides the basis for identifying (presumably) robust decisions. And yet, the theory has precious little to say about û.

    It is remarkable that, in sharp contrast to the lengthy dissertations that are devoted to the "Knightian" uncertainty that Info-Gap (purportedly) takes on, not the slightest instruction is given to enlighten us on how the estimate û is determined. What qualities -- if any -- should it have? What requirements should it satisfy? And so on. All we know is that in the framework of Info-Gap decision theory one proceeds on the assumption that û is an estimate of the true value of u.

So, formally we give expression to this idea by stating that û is an element of U and that U(0,û)={û}. And since the regions of uncertainty are nested, it also follows that û∈U(α,û), ∀α≥ 0; in fact, all the regions of uncertainty U(α,û), α≥ 0, are "centered" at û.

    But the whole point here is that when applying the Info-Gap decision theory to a specific case, the entire exercise is conducted under conditions of severe uncertainty. So the really important question is: given that we are in the dark as to the true value of our parameter of interest, how do we go about determining its estimate, namely the value of û? Would we be able, or required, or whatever, to consult a list of considerations that will enable us to settle on a specific value of û?

    And more than this. What if we do not have, or cannot come up with, an estimate of the true value of u? And what if we have more than one estimate of the true value of u, say an "interval" of possible (probable?) values?

    On all this Info-Gap decision theory keeps mum.


  • FAQ-80: In what sense is the estimate û "best"?

    Answer-80: This remains a mystery!

    The phrase "best estimate" (sometimes "best guess") is used extensively in the Info-Gap literature to describe û. Of course, the question that immediately comes to mind is: why should Info-Gap scholars and users think it necessary to designate û as the "best estimate"? Why doesn't the simple, unqualified "estimate" suffice?

    On this one can only speculate, as nowhere in this literature does one find any discussion or argument setting out the justification for this Info-Gap terminology. The most likely explanation seems to be, though, that behind this is the tacit recognition that objections are certain to be raised against the inexcusable practice of treating a "poor", "questionable", "doubtful", "unreliable" estimate as though it were a "decent", "acceptable", even "reliable" estimate.

    After all, Info-Gap's robustness model operates under conditions of severe uncertainty, so the value of û that is used in the analysis is a poor indication of the true value of u and is likely to be substantially wrong (Ben-Haim 2007, p. 2). Hence, to disarm (expected) objections, Info-Gap users have taken to calling a "wild guess" a "best guess" and a "poor estimate" a "best estimate".

    It should also be pointed out that appellations such as "best understanding" and "nominal value" are also being used in the literature to describe û.

    The question of course is: in what sense is the value of û "best" to thus render it the contender that one would settle on? Are we to understand that there are other estimates available, but these cannot be deemed "best"? In this case what are the criteria that are used to decide which of the available estimates is "best"?

    But as might be expected, all this is left unanswered.

    One suspects that terms such as "best understanding" and "nominal value" are used in the Info-Gap literature to bestow an even greater legitimacy on the practice of treating û as though it were a fully fledged "acceptable" estimate.

    It is important to keep in mind, therefore, that no amount of re-anointing û will alter the fact that our understanding -- under conditions of severe uncertainty -- is such that the value we assign û in the analysis is a poor indication of the true value of u and is likely to be substantially wrong. Ben-Haim (2007, p. 2) explains that the estimate can be a "wild guess" and other Info-Gap scholars indicate that it can be based on no more than "gut feeling" or "rumors".

    And should you need it, here is a recipe for obtaining a best estimate:

    Wet your index finger and put it in the air. Think of a number and double it.

    See it online at wiki.answers.com/Q/What_is_best_estimate_and_how_do_i_calculate_it.


  • FAQ-81: How is it that Info-Gap decision theory fails to prescribe a sensitivity analysis with respect to the estimate û?

    Answer-81: This is one of those inexplicable facts about Info-Gap decision theory. For, given that the "severe uncertainty" obtains with respect to the true value of u, one would have expected that a sensitivity analysis with respect to its estimate û be made an integral part of the methodology.

    So, the fact that Info-Gap decision theory fails to prescribe a sensitivity analysis with respect to the estimate û is further testimony to the flawed thinking informing it. Indeed, it is a testimony to the underlying failure to appreciate that a methodology that is aimed at tackling severe uncertainty but confines its robustness analysis to the neighborhood of a single (point) estimate of u, in effect ignores the severity of the uncertainty that it is supposed to manage.

    Of course, such a sensitivity analysis must be part and parcel of any methodology with pretensions to being considered a paradigm for decision-making under severe uncertainty. The difficulty in the case of Info-Gap, though, is that although conceptually such an analysis is "straightforward" -- vary the value of û and check how the solution changes -- its design and implementation within the framework of Info-Gap decision theory is not straightforward at all.

    In particular, such an analysis will require that we determine the "best" decision from among those generated by changing the value of the estimate û. But, given the severe uncertainty in the true value of u, how would we do this?

    For instance, suppose that our sensitivity analysis with respect to û consists of solving the problem for four different values of û, call them û(1), û(2), û(3), û(4), whereupon we obtain four (possibly very different) optimal decisions, one for each value of the estimate.

    Where do we go, then?

    Hence the following suggestion.

    Given that an envisaged sensitivity analysis with respect to û would require varying the value of û, it would be simpler to evaluate the performance function r on some grid superimposed on the complete uncertainty region U and thus save the trouble of evaluating the robustness of decisions for each of the values of the estimate û generated by the sensitivity analysis.

    In short, if one envisions a sensitivity analysis with respect to the value of û, it would be much simpler to conduct the sensitivity analysis with respect to u itself?! (see FAQ-45 for a different perspective on this issue).
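    To fix ideas, here is a small numerical sketch of the envisaged sensitivity analysis. Everything in it is hypothetical and invented purely for illustration: the performance function r(d,u) = (u − d)², the two candidate decisions, the interval-bound regions of uncertainty U(α,û) = [û−α, û+α], and the crude grid search standing in for the exact robustness computation.

    ```python
    def robustness(r, d, u_hat, r_star, alpha_grid, n=201):
        """Largest alpha on the grid such that r(d,u) <= r_star for all u in
        [u_hat - alpha, u_hat + alpha], checked on n sample points (a crude
        surrogate for the exact worst case over the interval)."""
        best = 0.0
        for a in alpha_grid:
            us = [u_hat - a + 2 * a * k / (n - 1) for k in range(n)]
            if all(r(d, u) <= r_star for u in us):
                best = a
        return best

    r = lambda d, u: (u - d) ** 2          # performance function (smaller is better)
    r_star = 1.0                           # critical performance level
    grid = [k / 100 for k in range(301)]   # candidate alpha values 0.00 .. 3.00
    decisions = [0.0, 2.0]

    # Robustness 0 means that even the estimate itself violates the constraint.
    for u_hat in (0.5, 1.5):
        rob = {d: robustness(r, d, u_hat, r_star, grid) for d in decisions}
        print(u_hat, max(rob, key=rob.get))
    ```

    Note how the decision singled out as most robust flips as the estimate û is varied: d = 0 is most robust at û = 0.5, whereas d = 2 is most robust at û = 1.5. This is precisely the predicament described above: under severe uncertainty there is no obvious ground for preferring one value of û, hence one decision, over the other.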


  • FAQ-82: Is Info-Gap's measure of robustness a reinvention of the good old "stability radius"?

    Answer-82: Of course it is!

    The origins of the concept Radius of Stability apparently go back to the 1960s, to discussions on stability in numerical analysis (Wilf 1960, Milne and Reynolds 1962). Thus, in Milne and Reynolds (1962, p. 67) we read the following:

    It is convenient to use the term "radius of stability of a formula" for the radius of the largest circle with center at the origin in the s-plane inside which the formula remains stable.

    Note that in the case of Info-Gap decision theory the "formula" is the performance constraint r(d,p) ≤ r*. In short, Info-Gap's robustness model is a radius of stability model where the stability of the system is determined by the performance requirement r(d,p) ≤ r*.

    In any case, today the concept "radius of stability" plays an important role in many fields such as applied mathematics, optimization theory, and control theory.

    For instance, as indicated by Paice and Wirth (1998, p. 289):

    Robustness analysis has played a prominent role in the theory of linear systems. In particular the state-space approach via stability radii has received considerable attention, see [HP2], [HP3], and references therein. In this approach a perturbation structure is defined for a realization of the system, and the robustness of the system is identified with the norm of the smallest destabilizing perturbation. In recent years there has been a great deal of work done on extending these results to more general perturbation classes, see, for example, the survey paper [PD], and for recent results on stability radii with respect to real perturbations ...

    In the first edition of the Encyclopedia of Optimization, Zlobec (2001) describes the "radius of stability" as follows:

    The radius of the largest ball centered at θ*, with the property that the model is stable at its every interior point θ, is the radius of stability at θ*, e.g., [69]. It is a measure of how much the system can be uniformly strained from θ before it starts breaking down.

    First, the math-free picture.

    Consider a system that can be in either one of two states: a stable state or an unstable state, depending on the value of some parameter p. We also say that p is stable if the state associated with it is stable and that p is unstable if the state associated with it is unstable. Let P denote the set of all possible values of p, and let the "stable/unstable" partition of P be P = S ∪ I, where S denotes the region of stability (the stable values of p) and I denotes the region of instability (the unstable values of p).

    Now, assume that our objective is to determine the stability of the system with respect to small perturbations in a given nominal value of p, call it p'. In this case, the question that we would ask ourselves would be as follows:

    How far can we move away from the nominal point p' (under the worst-case scenario) without leaving the region of stability S?

    The "worst-case scenario" clause determines the "direction" of the perturbations in the value of p': we move away from p' in the worst direction. Note that the worst direction depends on the distance from p'. The following picture illustrates the simple concept behind this fundamental question.

    Consider the largest circle centered at p' in this picture. Since some points in this circle are unstable, and since under the worst-case scenario the deviation proceeds from p' toward the boundary of the circle in the worst possible direction, the deviation will, at some point, exit the region of stability. It follows that the largest "safe" deviation from p' under the worst-case scenario is equal to the radius of the largest circle centered at p' that is contained in the region of stability S. Equivalently, under the worst-case scenario, any circle centered at p' that is contained in the region of stability S is "safe".

    So generalizing this idea from "circles" to high-dimensional "balls", we obtain:

    The radius of stability of the system represented by (P,S,I) with respect to the nominal value p' is the radius of the largest "ball" centered at p' that is contained in the stability region S.

    The following picture illustrates the clear equivalence of "Info-Gap robustness" and "stability radius":

    In short, for all the spin and rhetoric, hailing Info-Gap's local measure of robustness as new and radically different, the fact of the matter is that this measure is none other than the "old warhorse" known universally as stability radius.

    And as pointed out above, what is lamentable about this state of affairs is not only the fact that Info-Gap scholars fail to see (or ignore) this equivalence, but also that those who should know better continue to promote this theory from the pages of professional journals. See my discussion on Info-Gap Economics.

    Math corner.

    There are many ways to formally define the stability radius of a system. For our purposes it is convenient to do it this way:

    ρ(p')  := max {ρ: p∈S, ∀p∈B(ρ,p')}

    In words: the radius of stability is the largest value of ρ such that the ball B(ρ,p') centered at p' is contained in the region of stability S.

    Now, consider the specific case where the region of stability S is defined by a performance constraint as follows:

    S  := {p∈P: r(d,p) ≤ r*}

    where d denotes the system under consideration and r* is a given critical performance level.

    Then in this case the stability radius of system d is as follows:

    ρ(d,p')  := max {ρ: r(d,p) ≤ r*, ∀p∈B(ρ,p')}

    In short:

    Info-Gap's measure of robustness:   α(d,û)  := max {α: r(d,u) ≤ r*, ∀u∈U(α,û)}

    Stability radius:   ρ(d,p')  := max {ρ: r(d,p) ≤ r*, ∀p∈B(ρ,p')}

    The conclusion is therefore that Info-Gap's measure of robustness is a re-invention of the good old "stability radius".

    Remark:

    In mathematics and control theory it is often more convenient to use the following alternative definition for the stability radius:

    The radius of stability of the system represented by (P,S,I) with respect to the nominal value p' is the radius of the smallest "ball" centered at p' that contains an unstable point. That is, it is the distance from p' to the nearest point in I.

    In this case,

    ρ(p')  := inf {ρ: ∃ p∈B(ρ,p') such that p∈I}

    Note the use of "inf" rather than "min", due to the fact that a minimum value of ρ may not exist (e.g. if I is an open set).
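    The agreement of the two definitions is easily checked numerically on a toy example. The performance function r(d,p) = (p − d)², the critical level r* = 1 (so that S = [−1, 1]), the nominal point p' = 0.25 and the grids below are hypothetical choices of mine, used only for illustration; on a finite grid the two computations agree to within one grid step.

    ```python
    # Region of stability S = {p : r(d,p) <= r*} for the toy performance
    # function r(d,p) = (p - d)^2, with d = 0 and r* = 1, i.e. S = [-1, 1].
    d, r_star, p_nominal = 0.0, 1.0, 0.25

    def stable(p):
        """Membership test for S = {p : (p - d)^2 <= r*}."""
        return (p - d) ** 2 <= r_star

    grid = [k / 100 - 5.0 for k in range(1001)]   # p values from -5.00 to 5.00

    # Definition 1: radius of the largest ball B(rho, p') contained in S.
    rho1 = max(
        (rho for rho in (k / 100 for k in range(501))
         if all(stable(p) for p in grid if abs(p - p_nominal) <= rho)),
        default=0.0,
    )

    # Definition 2: distance from p' to the nearest unstable grid point (I = P \ S).
    rho2 = min(abs(p - p_nominal) for p in grid if not stable(p))

    print(rho1, rho2)   # both within one grid step of the exact value 0.75
    ```

    The exact stability radius here is the distance 0.75 from p' = 0.25 to the boundary point p = 1 of S, and both computations recover it up to the grid resolution.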





    Disclaimer: This page, its contents and style, are the responsibility of its owner and do not represent the views, policies or opinions of the organizations he is associated/affiliated with.

