
Relational dependency network

BoostSRL
Developer(s): StARLinG Lab
Initial release: December 29, 2016
Stable release: 1.1.1 / August 1, 2019
Repository: github.com/starling-lab/BoostSRL
Written in: Java
Platform: Linux, macOS, Windows
Type: Machine learning, relational dependency network
License: GPL 3.0
Website: starling.utdallas.edu/software/boostsrl/

Relational dependency networks (RDNs) are graphical models which extend dependency networks to account for relational data. Relational data is data organized into one or more tables, which are cross-related through standard fields. A relational database is a canonical example of a system that serves to maintain relational data. A relational dependency network can be used to characterize the knowledge contained in a database.

Introduction

Relational dependency networks (RDNs) aim to model the joint probability distribution over the variables of a dataset represented in the relational domain. They are based on dependency networks (DNs) and extend them to the relational setting. RDNs admit an efficient learning method: the conditional probability distribution of each variable is estimated independently of the others. Because this independent learning can introduce inconsistencies, RDNs, like DNs, use Gibbs sampling to recover an approximation of the joint distribution.

Unlike dependency networks, RDNs require three graphs for a full representation.

  • Data graph: The nodes of this graph represent objects from the data set, and edges represent the dependencies between these objects. Each object and edge receives a type, and each object has an attribute set.
  • Model graph: A higher-order graph over types. Its nodes represent the attributes of a given type, and its edges represent dependencies between attributes, either of the same type or of different types. Each node is associated with a probability distribution conditioned on its parent nodes. The model graph makes no assumptions about the data set, so it is general enough to support different data represented by the data graph. It is thus possible to learn the model graph's structure and conditional probability distributions from one data set and then generate the inference graph by applying the model graph to a data graph representing another set of data.
  • Inference graph: A graph generated from the data graph and the model graph in a process known as roll-out. Inference graphs are generally larger than both data graphs and model graphs, since every attribute of every object in the data graph becomes a node of the inference graph, with characteristics taken from the corresponding attribute in the model graph.

In other words, the data graph guides how the model graph will be rolled out to generate the inference graph.
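The roll-out step can be pictured with a small sketch. This is not BoostSRL's API and all names and data here are hypothetical; it only illustrates how each object in the data graph is expanded into one inference-graph node per attribute that the model graph defines for that object's type.

```python
# Hypothetical sketch of the RDN roll-out step: the model graph is
# "unrolled" over the data graph to build the inference graph.
# All names and data are illustrative, not from any RDN implementation.

# Model graph: for each type, its attributes and their parent attributes.
model_graph = {
    "Person": {"smokes": [], "cancer": ["smokes", "friend.smokes"]},
}

# Data graph: typed objects plus typed edges between them.
objects = {"alice": "Person", "bob": "Person"}
edges = [("alice", "bob", "friend")]  # would resolve "friend.smokes" parents

def roll_out(model_graph, objects, edges):
    """Instantiate one inference-graph node per (object, attribute) pair."""
    inference_nodes = []
    for obj, obj_type in objects.items():
        for attr, parents in model_graph[obj_type].items():
            inference_nodes.append((obj, attr, parents))
    return inference_nodes

nodes = roll_out(model_graph, objects, edges)
# The inference graph has one node per attribute of every object,
# so it is generally larger than the model graph itself.
print(len(nodes))  # 2 objects x 2 attributes = 4 nodes
```

A full roll-out would also use the edges to wire each instantiated node to the concrete neighbor attributes named by its parents (e.g. `friend.smokes`); the sketch omits that wiring for brevity.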

RDN Learning

The learning method of an RDN is similar to that of DNs: the conditional probability distribution of each variable can be learned independently. However, only conditional relational learners can be used during parameter estimation for RDNs. Therefore, the propositional learners used by DNs, such as decision trees or logistic regression, do not work for RDNs.

Neville and Jensen (2007) conducted experiments comparing RDNs learned with relational Bayesian classifiers against RDNs learned with relational probability trees. Natarajan et al. (2012) used a series of regression models to represent the conditional distributions.

This learning method makes RDNs efficient to train. However, it also makes them susceptible to structural or numerical inconsistencies. If the conditional probability estimation method uses feature selection, one variable may find a dependency on a second variable while the second does not find the reverse dependency; the RDN is then structurally inconsistent. If the implied joint distribution does not sum to one, owing to the approximations introduced by independent learning, the RDN is numerically inconsistent. Such inconsistencies can, however, be bypassed during the inference step.
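Independent estimation of each conditional can be illustrated with a minimal counting sketch. The data and function below are hypothetical, not a real relational learner; the point is that each table is fitted on its own, so nothing forces the resulting pair of conditionals to agree with any single joint distribution.

```python
# Hypothetical sketch of RDN parameter learning: every conditional
# distribution P(var | parents) is estimated independently by counting.
# Data and names are illustrative only.

data = [
    {"smokes": True,  "cancer": True},
    {"smokes": True,  "cancer": False},
    {"smokes": False, "cancer": False},
    {"smokes": False, "cancer": False},
]

def learn_conditional(target, parent, data):
    """Estimate P(target=True | parent=v) for each value v, independently."""
    table = {}
    for value in (True, False):
        rows = [r for r in data if r[parent] == value]
        table[value] = sum(r[target] for r in rows) / len(rows)
    return table

# Each conditional is learned on its own; the two tables are never
# checked against each other, which is where inconsistencies can arise.
p_cancer = learn_conditional("cancer", "smokes", data)  # P(cancer | smokes)
p_smokes = learn_conditional("smokes", "cancer", data)  # P(smokes | cancer)
print(p_cancer)  # {True: 0.5, False: 0.0}
```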

RDN Inference

RDN inference begins with the creation of an inference graph through a process called roll-out, in which the model graph is rolled out over the data graph. Next, Gibbs sampling can be used to approximate the joint probability distribution, as in DNs.

Applications

RDNs have been applied in many real-world domains. Their main advantage is the ability to use relationship information to improve a model's performance. Diagnosis, forecasting, automated vision, sensor fusion, and manufacturing control are some of the problems to which RDNs have been applied.

Implementations

Available RDN implementations include:

  • BoostSRL: A system specialized in gradient-based boosting for learning several types of statistical relational learning models, including relational dependency networks. For details and notation, see Natarajan et al. (2011).

References

  1. Neville, Jennifer; Jensen, David (2007). "Relational Dependency Networks" (PDF). Journal of Machine Learning Research. 8: 653–692. Retrieved 9 February 2020.
  2. Natarajan, Sriraam; Khot, Tushar; Kersting, Kristian; Gutmann, Bernd; Shavlik, Jude (10 May 2011). "Gradient-based boosting for statistical relational learning: The relational dependency network case" (PDF). Machine Learning. 86 (1): 25–56. doi:10.1007/s10994-011-5244-9. Retrieved 9 February 2020.
  3. StARLinG Lab. "BoostSRL Wiki". StARLinG. Retrieved 9 February 2020.