SME Linear

Tags: #machine learning #KG

Equation

$$\epsilon(lhs,rel,rhs)=E_{lhs(rel)}^{T}E_{rhs(rel)} \\=(W_{l1}E_{lhs}^{T} + W_{l2}E_{rel}^{T} + b_{l})^{T}(W_{r1}E_{rhs}^{T} + W_{r2}E_{rel}^{T} + b_{r})$$

Latex Code

    \epsilon(lhs,rel,rhs)=E_{lhs(rel)}^{T}E_{rhs(rel)} \\=(W_{l1}E_{lhs}^{T} + W_{l2}E_{rel}^{T} + b_{l})^{T}(W_{r1}E_{rhs}^{T} + W_{r2}E_{rel}^{T} + b_{r})

Explanation

The energy function $\epsilon$ (the Semantic Matching Energy, SME) is computed by a neural network whose architecture first processes each entity in parallel, as in Siamese networks. The intuition is that the relation type should first be used to extract relevant components from each argument's embedding and map them into a space where they can be compared. In the linear form above, the relation-dependent representations $E_{lhs(rel)} = W_{l1}E_{lhs}^{T} + W_{l2}E_{rel}^{T} + b_{l}$ and $E_{rhs(rel)} = W_{r1}E_{rhs}^{T} + W_{r2}E_{rel}^{T} + b_{r}$ are affine combinations of the entity and relation embeddings, and the energy is their dot product. See the paper "A Semantic Matching Energy Function for Learning with Multi-relational Data" (Bordes et al.) for details.
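To make the computation concrete, here is a minimal NumPy sketch of the linear energy above. The function name `sme_linear_energy`, the dimensions, and the randomly initialized parameters are illustrative stand-ins rather than learned values; this is a sketch of the scoring step, not the paper's implementation.

    import numpy as np

    # Minimal sketch of SME (linear), following the equation above:
    #   E_lhs(rel) = W_l1 E_lhs^T + W_l2 E_rel^T + b_l
    #   E_rhs(rel) = W_r1 E_rhs^T + W_r2 E_rel^T + b_r
    #   epsilon(lhs, rel, rhs) = E_lhs(rel)^T E_rhs(rel)
    # Dimensions and parameter values are illustrative (random, not learned).
    d, p = 50, 50                      # embedding size, projection size
    rng = np.random.default_rng(0)
    W_l1, W_l2, b_l = rng.normal(size=(p, d)), rng.normal(size=(p, d)), np.zeros(p)
    W_r1, W_r2, b_r = rng.normal(size=(p, d)), rng.normal(size=(p, d)), np.zeros(p)

    def sme_linear_energy(e_lhs, e_rel, e_rhs):
        """Dot product of the relation-dependent projections of lhs and rhs."""
        left = W_l1 @ e_lhs + W_l2 @ e_rel + b_l     # E_lhs(rel)
        right = W_r1 @ e_rhs + W_r2 @ e_rel + b_r    # E_rhs(rel)
        return float(left @ right)

    # Score one (lhs, rel, rhs) triple with random embeddings.
    e_lhs, e_rel, e_rhs = (rng.normal(size=d) for _ in range(3))
    print(sme_linear_energy(e_lhs, e_rel, e_rhs))

In the original SME setup, these projection matrices are trained jointly with the entity and relation embeddings under a margin-based ranking criterion that separates observed triples from corrupted ones; only the left/right projection step differs between the linear and bilinear variants.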
