Graph Convolutional Networks (GCN)
Tags: #machine-learning #graph #GNN
Equation
$$H^{(l+1)}=\sigma(\tilde{D}^{-\frac{1}{2}}\tilde{A}\tilde{D}^{-\frac{1}{2}}H^{(l)}W^{(l)})\\ \tilde{A}=A+I_{N}\\ \tilde{D}_{ii}=\sum_{j}\tilde{A}_{ij} \\ H^{(0)}=X \\ \mathcal{L}=-\sum_{l \in \mathcal{Y}_{L}}\sum^{F}_{f=1} Y_{lf} \ln Z_{lf}$$
Latex Code
H^{(l+1)}=\sigma(\tilde{D}^{-\frac{1}{2}}\tilde{A}\tilde{D}^{-\frac{1}{2}}H^{(l)}W^{(l)})\\ \tilde{A}=A+I_{N}\\ \tilde{D}_{ii}=\sum_{j}\tilde{A}_{ij} \\ H^{(0)}=X \\ \mathcal{L}=-\sum_{l \in \mathcal{Y}_{L}}\sum^{F}_{f=1} Y_{lf} \ln Z_{lf}
Explanation
In this formulation, W^{(l)} is a layer-specific trainable weight matrix and σ is a nonlinear activation function such as ReLU. H^{(0)} is the input feature matrix X (i.e. H^{(0)} = X), of dimension N × D, and H^{(l)} is the hidden representation of the graph at the l-th layer. \tilde{A} = A + I_N is the adjacency matrix with added self-loops, and \tilde{D} is its degree matrix, so \tilde{D}^{-1/2}\tilde{A}\tilde{D}^{-1/2} is a symmetrically normalized adjacency. The model is trained for semi-supervised node classification: the loss L defined above is the cross-entropy over the set of labeled nodes \mathcal{Y}_L, where Z is the softmax output of the final layer. For more details, see the ICLR paper "Semi-Supervised Classification with Graph Convolutional Networks".
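The propagation rule above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's reference implementation: the function name `gcn_layer`, the toy path graph, and the all-ones weight matrix are assumptions made here for demonstration.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN propagation step: H' = sigma(D~^{-1/2} A~ D~^{-1/2} H W)."""
    N = A.shape[0]
    A_tilde = A + np.eye(N)                    # A~ = A + I_N (add self-loops)
    d = A_tilde.sum(axis=1)                    # D~_ii = sum_j A~_ij
    D_inv_sqrt = np.diag(d ** -0.5)            # D~^{-1/2}
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt  # symmetric normalization
    return np.maximum(0.0, A_hat @ H @ W)      # ReLU as the activation sigma

# Toy example: a 3-node path graph 0 - 1 - 2 (hypothetical data)
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
X = np.eye(3)        # one-hot node features, so N = D = 3
W = np.ones((3, 2))  # layer weights mapping D = 3 features to F = 2
H1 = gcn_layer(A, X, W)
print(H1.shape)  # (3, 2): one 2-dimensional hidden vector per node
```

Stacking two such layers, with a softmax instead of ReLU on the last one, gives the two-layer classifier used in the paper; the cross-entropy loss L is then computed only on the rows of the output that correspond to labeled nodes.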