SME Linear
Tags: #machine learning #KGEquation
Introduction
Equation
$$\epsilon(lhs,rel,rhs)=E_{lhs(rel)}^{T}E_{rhs(rel)}=(W_{l1}E_{lhs}^{T} + W_{l2}E_{rel}^{T} + b_{l})^{T}(W_{r1}E_{rhs}^{T} + W_{r2}E_{rel}^{T} + b_{r})$$
Latex Code
\epsilon(lhs,rel,rhs)=E_{lhs(rel)}^{T}E_{rhs(rel)}=(W_{l1}E_{lhs}^{T} + W_{l2}E_{rel}^{T} + b_{l})^{T}(W_{r1}E_{rhs}^{T} + W_{r2}E_{rel}^{T} + b_{r})
Explanation
The energy function \epsilon (termed SME, for Semantic Matching Energy) is computed by a neural network whose architecture first processes each entity in parallel, as in siamese networks. The relation type is used to extract the relevant components from each argument's embedding, projecting them into a common space where the two sides can be compared with a dot product: E_{lhs(rel)} is a linear combination of the lhs and rel embeddings (with weights W_{l1}, W_{l2} and bias b_{l}), E_{rhs(rel)} is the analogous combination for rhs (with W_{r1}, W_{r2}, b_{r}), and the energy is their inner product. See the paper "A Semantic Matching Energy Function for Learning with Multi-relational Data" for more details.
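As a minimal sketch of the linear SME form, the snippet below scores one (lhs, rel, rhs) triple with NumPy. All names, dimensions, and the randomly initialized embeddings and weights are illustrative assumptions, not the paper's trained parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
d, p = 4, 4  # embedding dimension and projection dimension (illustrative)

# Hypothetical embeddings for one triple's left entity, relation, right entity
E_lhs, E_rel, E_rhs = (rng.standard_normal(d) for _ in range(3))

# Linear-layer parameters for the left and right arms of the network
W_l1, W_l2 = rng.standard_normal((p, d)), rng.standard_normal((p, d))
W_r1, W_r2 = rng.standard_normal((p, d)), rng.standard_normal((p, d))
b_l, b_r = rng.standard_normal(p), rng.standard_normal(p)

def sme_linear(E_lhs, E_rel, E_rhs):
    """Energy = dot product of the relation-dependent projections of lhs and rhs."""
    E_lhs_rel = W_l1 @ E_lhs + W_l2 @ E_rel + b_l  # left arm:  E_{lhs(rel)}
    E_rhs_rel = W_r1 @ E_rhs + W_r2 @ E_rel + b_r  # right arm: E_{rhs(rel)}
    return float(E_lhs_rel @ E_rhs_rel)

score = sme_linear(E_lhs, E_rel, E_rhs)
```

Each arm is a single affine map of the concatenated (entity, relation) embeddings, which is what makes this the "linear" variant; the bilinear variant replaces these sums with three-way tensor products.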