Partial Information Decomposition is an extension of information theory that aims to generalize the pairwise relations described by information theory to the interaction of multiple variables.[1]
Motivation
Information theory can quantify the amount of information a single source variable $X_1$ has about a target variable $Y$ via the mutual information $I(X_1;Y)$. If we now consider a second source variable $X_2$, classical information theory can only describe the mutual information of the joint variable $(X_1, X_2)$ with $Y$, given by $I(X_1,X_2;Y)$. In general, however, it would be interesting to know how exactly the individual variables $X_1$ and $X_2$ and their interactions relate to $Y$.
Consider that we are given two source variables $X_1, X_2 \in \{0,1\}$ and a target variable $Y = \mathrm{XOR}(X_1, X_2)$. In this case the total mutual information $I(X_1,X_2;Y) = 1$ bit, while the individual mutual informations $I(X_1;Y) = I(X_2;Y) = 0$ bits. That is, there is synergistic information arising from the interaction of $X_1$ and $X_2$ about $Y$, which cannot easily be captured with classical information-theoretic quantities.
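The XOR example can be checked numerically. The sketch below (plain Python; the function names are illustrative, not from any particular library) computes each mutual information directly from the joint distribution:

```python
import itertools
from collections import defaultdict
from math import log2

def mutual_information(joint):
    """Mutual information I(S; T) in bits; `joint` maps (s, t) -> probability."""
    ps, pt = defaultdict(float), defaultdict(float)
    for (s, t), p in joint.items():
        ps[s] += p
        pt[t] += p
    return sum(p * log2(p / (ps[s] * pt[t]))
               for (s, t), p in joint.items() if p > 0)

def marginal(pairs):
    """Accumulate a joint distribution over (source, target) pairs,
    assuming the four input patterns are equiprobable (p = 1/4)."""
    joint = defaultdict(float)
    for key in pairs:
        joint[key] += 0.25
    return joint

# XOR example: X1, X2 uniform and independent, Y = X1 XOR X2.
outcomes = [(x1, x2, x1 ^ x2) for x1, x2 in itertools.product((0, 1), repeat=2)]

i_x1  = mutual_information(marginal((x1, y) for x1, _, y in outcomes))
i_x2  = mutual_information(marginal((x2, y) for _, x2, y in outcomes))
i_x12 = mutual_information(marginal(((x1, x2), y) for x1, x2, y in outcomes))
print(i_x1, i_x2, i_x12)  # → 0.0 0.0 1.0
```

Each source alone is independent of $Y$ (zero mutual information), yet together they determine $Y$ completely (one bit), which is exactly the synergy described above.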
Definition
Partial information decomposition further decomposes the mutual information between the source variables $\{X_1, X_2\}$ and the target variable $Y$ as

$$I(X_1,X_2;Y) = \text{Unq}(X_1 ; Y \setminus X_2) + \text{Unq}(X_2 ; Y \setminus X_1) + \text{Red}(X_1, X_2 ; Y) + \text{Syn}(X_1, X_2 ; Y)$$

Here the individual information atoms are defined as
$\text{Unq}(X_i ; Y \setminus X_j)$ is the unique information that $X_i$ has about $Y$, which is not in $X_j$ (for $i, j \in \{1,2\}$, $i \neq j$)
$\text{Syn}(X_1, X_2 ; Y)$ is the synergistic information that is in the interaction of $X_1$ and $X_2$ about $Y$
$\text{Red}(X_1, X_2 ; Y)$ is the redundant information that is in both $X_1$ and $X_2$ about $Y$
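As one concrete (and, as noted below, not universally agreed-upon) choice, the original proposal of Williams and Beer defines redundancy as the minimum specific information, $I_{\min}$; the unique and synergistic atoms then follow from the decomposition equation above. The sketch below illustrates that measure in plain Python (function names are our own), applied to an AND gate with uniform inputs:

```python
from collections import defaultdict
from math import log2

def mutual_information(joint):
    """I(S; T) in bits; `joint` maps (s, t) -> probability."""
    ps, pt = defaultdict(float), defaultdict(float)
    for (s, t), p in joint.items():
        ps[s] += p
        pt[t] += p
    return sum(p * log2(p / (ps[s] * pt[t]))
               for (s, t), p in joint.items() if p > 0)

def specific_information(joint, y):
    """Specific information I(S; T=y) = sum_s p(s|y) log2(p(y|s) / p(y))."""
    ps, pt = defaultdict(float), defaultdict(float)
    for (s, t), p in joint.items():
        ps[s] += p
        pt[t] += p
    return sum((p / pt[y]) * log2((p / ps[s]) / pt[y])
               for (s, t), p in joint.items() if t == y and p > 0)

def pid_williams_beer(joint):
    """Decompose I(X1,X2; Y) using the I_min redundancy measure.
    `joint` maps (x1, x2, y) -> probability."""
    j1, j2, j12 = defaultdict(float), defaultdict(float), defaultdict(float)
    py = defaultdict(float)
    for (x1, x2, y), p in joint.items():
        j1[(x1, y)] += p
        j2[(x2, y)] += p
        j12[((x1, x2), y)] += p
        py[y] += p
    # Redundancy: expected minimum specific information over the sources.
    red = sum(py[y] * min(specific_information(j1, y),
                          specific_information(j2, y)) for y in py)
    # The remaining atoms follow from the consistency equations:
    # I(Xi; Y) = Unq(Xi) + Red, and the total-information equation above.
    unq1 = mutual_information(j1) - red
    unq2 = mutual_information(j2) - red
    syn = mutual_information(j12) - unq1 - unq2 - red
    return {"redundant": red, "unique_1": unq1, "unique_2": unq2,
            "synergistic": syn}

# AND gate with uniform inputs: a standard test case with nonzero redundancy.
and_joint = {(x1, x2, x1 & x2): 0.25 for x1 in (0, 1) for x2 in (0, 1)}
atoms = pid_williams_beer(and_joint)
# For AND, I_min gives Red ≈ 0.311 bits, Unq = 0, Syn = 0.5 bits.
```

Other proposed measures (e.g. those based on decision-theoretic or pointwise arguments) assign different values to the same distribution, which is precisely the disagreement described below.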
There is, thus far, no universal agreement on how these terms should be defined, with different approaches that decompose information into redundant, unique, and synergistic components appearing in the literature.[1][2][3][4]
Applications
Despite the lack of universal agreement, partial information decomposition has been applied to diverse fields, including climatology,[5] neuroscience,[6][7][8] sociology,[9] and machine learning.[10] Partial information decomposition has also been proposed as a possible foundation on which to build a mathematically robust definition of emergence in complex systems[11] and may be relevant to formal theories of consciousness.[12]