Graph Attention Network (GAT)
Tags: #machine learning #graph #GNN
Introduction
Equation
Latex Code
h=\{\vec{h}_{1},\vec{h}_{2},...,\vec{h}_{N}\}, \\
\vec{h}_{i} \in \mathbb{R}^{F} \\
W \in \mathbb{R}^{F' \times F} \\
e_{ij}=a(W\vec{h}_{i},W\vec{h}_{j}) \\
k \in \mathcal{N}_{i},\text{ neighbourhood nodes} \\
\alpha_{ij}=\text{softmax}_{j}(e_{ij})=\frac{\exp(e_{ij})}{\sum_{k \in \mathcal{N}_{i}} \exp(e_{ik})}
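For completeness, the paper then uses these attention weights to aggregate the neighbours' transformed features into the layer output (a step not shown in the listing above):

\vec{h}_{i}' = \sigma\left(\sum_{k \in \mathcal{N}_{i}} \alpha_{ik} W \vec{h}_{k}\right)

where \sigma is a nonlinearity such as ELU.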
Explanation
GAT applies graph attentional layers to model propagation over the graph. In each layer, a shared linear transform W is applied to every node's features, and an attention coefficient e_{ij} is computed between node i and each node j. Only the neighbourhood \mathcal{N}_{i} of node i contributes to the softmax normalization, so each node attends to its neighbours rather than to the whole graph (masked attention). See the paper Graph Attention Networks (Veličković et al., 2018) for details.
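To make the layer concrete, here is a minimal single-head sketch in NumPy. It is an illustrative reconstruction under the paper's concrete choice of the attention mechanism a (a shared weight vector applied to the concatenation [Wh_i || Wh_j], followed by a LeakyReLU), not the authors' implementation; the function name gat_layer and all shapes are assumptions, and multi-head attention and the output nonlinearity are omitted.

import numpy as np

def gat_layer(H, A, W, a):
    # Illustrative sketch; names and shapes are assumptions.
    # H: (N, F) node features, A: (N, N) adjacency with self-loops,
    # W: (Fp, F) shared linear transform, a: (2*Fp,) attention vector.
    Z = H @ W.T                                    # W h_i for every node, (N, Fp)
    N = Z.shape[0]
    e = np.empty((N, N))
    for i in range(N):
        for j in range(N):
            s = a @ np.concatenate([Z[i], Z[j]])   # a^T [W h_i || W h_j]
            e[i, j] = s if s > 0 else 0.2 * s      # LeakyReLU, negative slope 0.2
    e = np.where(A > 0, e, -np.inf)                # mask: softmax runs only over N_i
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)      # alpha_ij = softmax_j(e_ij)
    return alpha @ Z                               # h'_i = sum_k alpha_ik W h_k

# Usage on a small random graph:
rng = np.random.default_rng(0)
N, F, Fp = 5, 4, 3
A = (rng.random((N, N)) < 0.4).astype(float)
A = np.maximum(A, np.eye(N))                       # self-loops keep the softmax defined
H_out = gat_layer(rng.normal(size=(N, F)), A,
                  rng.normal(size=(Fp, F)), rng.normal(size=(2 * Fp,)))
print(H_out.shape)                                 # (5, 3)

The double loop makes the per-pair attention logits explicit to mirror the equations; a practical implementation would vectorize it and mask before, not after, computing the logits.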