Summary:
Tianyang Wang and James S. Dyer used a copula function as the basis for modeling dependence in decision trees; these copulas contain parameters that are demonstrated through probability trees, which are discrete and conditional.
The process of using a dependent decision tree is:
1. Assessment of marginals, dependence, and copula. During this step, the authors state the need to assess the available information and to determine which type of copula best captures the dependence among the uncertainties. Statistical measures are used to examine the variables and determine the correct copula.
2. Specification of parameters for the underlying copula. In order to use the copula in a dependence structure, its parameters need to be established. For an elliptical copula, the parameters can be established through an estimate of the correlation between the original uncertainties.
3. Construction of the transient tree structure for the underlying copula. The result is a probability tree that uses the conditional probabilities established by the copula.
4. Point-to-point inverse marginal transformation. This step applies the inverse of each marginal distribution to map the copula-scale tree back to the original units of the uncertainties.
From this process, decision trees are drawn for the various outcomes, and the decision tree model can be applied to various forms of copulas. A rough sketch of the four steps is given below.
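To make the four steps concrete, the following Python sketch works through them for two dependent uncertainties under a Gaussian (elliptical) copula. The marginal distributions, the Kendall's tau value, the cut points, and the three-branch discretization are illustrative assumptions of mine, not figures from Wang and Dyer's paper; the sketch only shows the general shape of the calculation, not the authors' exact procedure.

import numpy as np
from scipy import stats

# Step 1: assume the marginals and the dependence have already been assessed.
marginal_x = stats.lognorm(s=0.4, scale=100)   # assumed marginal for uncertainty X
marginal_y = stats.norm(loc=50, scale=10)      # assumed marginal for uncertainty Y
kendall_tau = 0.5                              # assessed rank correlation between X and Y

# Step 2: specify the copula parameter.  For a Gaussian copula the correlation
# parameter follows from Kendall's tau as rho = sin(pi * tau / 2).
rho = np.sin(np.pi * kendall_tau / 2)

def gaussian_copula_cdf(u, v, rho):
    # C(u, v) = Phi_2(Phi^-1(u), Phi^-1(v); rho), with boundary cases handled.
    if u <= 0.0 or v <= 0.0:
        return 0.0
    if u >= 1.0:
        return min(v, 1.0)
    if v >= 1.0:
        return min(u, 1.0)
    cov = [[1.0, rho], [rho, 1.0]]
    point = [stats.norm.ppf(u), stats.norm.ppf(v)]
    return stats.multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf(point)

# Step 3: build the conditional probability tree.  Each uncertainty gets three
# branches; the cut points 0.25 and 0.75 on the uniform scale are assumed.
cuts = [0.0, 0.25, 0.75, 1.0]

def cell_probability(i, j):
    # P(U in interval i, V in interval j) from the copula (rectangle rule).
    u_lo, u_hi = cuts[i], cuts[i + 1]
    v_lo, v_hi = cuts[j], cuts[j + 1]
    return (gaussian_copula_cdf(u_hi, v_hi, rho)
            - gaussian_copula_cdf(u_lo, v_hi, rho)
            - gaussian_copula_cdf(u_hi, v_lo, rho)
            + gaussian_copula_cdf(u_lo, v_lo, rho))

joint = np.array([[cell_probability(i, j) for j in range(3)] for i in range(3)])
branch_prob_x = joint.sum(axis=1)              # P(X in branch i)
conditional = joint / branch_prob_x[:, None]   # P(Y in branch j | X in branch i)

# Step 4: point-by-point inverse marginal transformation.  Representative
# quantiles for each branch (interval midpoints here) are mapped back to the
# original units through the inverse marginal CDFs.
rep_quantiles = [0.125, 0.5, 0.875]
x_values = marginal_x.ppf(rep_quantiles)
y_values = marginal_y.ppf(rep_quantiles)

print("X branch values:", np.round(x_values, 1))
print("Y branch values:", np.round(y_values, 1))
print("P(Y branch | X branch):")
print(np.round(conditional, 3))

Each row of the conditional table sums to one and gives the branch probabilities for the second uncertainty given a branch of the first, which is the conditional, discrete structure that the dependent probability tree encodes.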
Critique:
This study provided an interesting application of decision trees. Not only did the study illustrate the various potential outcomes, but it also broke the process down into specific steps. While the application was not directly related to the intelligence field, the process of organizing and structuring decision trees could still carry over to that domain. This process was specifically geared towards copulas and creating a dependent tree, though one issue is that a significantly large number of variables can grow from the initial construct. The base of the tree is set from parameters established through equations. While this application of a decision tree is effective for this purpose, it is difficult to gain a deep understanding of the concept in relation to non-statistical elements. Decision trees are useful for gaining a broad understanding of the various possible outcomes, and they have the potential to be an effective initial step in an analysis of information.
This methodology appears to be useful in a statistical application, though this study does not directly demonstrate the application from an intelligence standpoint.
Wang, T., & Dyer, J. S. (2012). A Copulas-Based Approach to Modeling Dependence in Decision Trees. Operations Research, 60(1), 225-242.
Olivia, I definitely agree that this article is geared towards statistics. I also agree that this statistical approach could apply to the intelligence field, particularly with estimated data. Steps are always useful when learning new topics. I like how it seeks to find the variance of the nodes and not just the node values, since this allows for estimated variables where data may be uncertain or values may be missing.