
add paper, organize notes

shaoxiongji committed Mar 6, 2019
1 parent da5867f commit b267244a27ccf54ced5c7182ba3944434c613487
127 README.md


@@ -0,0 +1,4 @@
Motivation: identification and linking of entities; identification and linking of relations; identification of query intent; generating formal query
Preprocessing: keyword tokenizer, entity relation predictor, candidate generation
Disambiguation: 1) GTSP (Generalised Travelling Salesman Problem) solver; 2) connection density, adaptive learning
Dataset: LC-QuAD
@@ -0,0 +1,3 @@
Task: code-mixed simple questions KBQA
Method: Triplet-Siamese-Hybrid CNN (TSHCNN); triplet inputs: 1) questions, 2) positive/negative tuple, 3) questions combined with positive/negative tuple
Datasets: SimpleQuestions (Bordes et al., 2015), 75.9k/10.8k/21.7k training/validation/test split
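A minimal sketch of the triplet-scoring idea, not the paper's exact TSHCNN architecture: one shared CNN encoder embeds the question, a positive tuple, and a negative tuple, and a hinge loss pushes the question closer to the positive tuple. Vocabulary size, layer sizes, and the margin are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedCNNEncoder(nn.Module):
    """One CNN encoder shared across all three inputs of the triplet."""
    def __init__(self, vocab_size=5000, emb_dim=100, n_filters=64, kernel=3):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel, padding=1)

    def forward(self, token_ids):                    # (batch, seq_len)
        x = self.emb(token_ids).transpose(1, 2)      # (batch, emb_dim, seq_len)
        x = torch.relu(self.conv(x))                 # (batch, n_filters, seq_len)
        return x.max(dim=2).values                   # max-pool over time

def triplet_hinge_loss(encoder, question, pos_tuple, neg_tuple, margin=0.5):
    """Encode all three inputs with the shared CNN; hinge loss on the similarity gap."""
    q, p, n = encoder(question), encoder(pos_tuple), encoder(neg_tuple)
    pos_sim = F.cosine_similarity(q, p)
    neg_sim = F.cosine_similarity(q, n)
    return torch.clamp(margin - pos_sim + neg_sim, min=0).mean()
```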
@@ -0,0 +1,5 @@
Method: multi-hop paths from ConceptNet commonsense knowledge
Experiments: generative QA
Datasets: NarrativeQA
Baseline model: embedding layer, reasoning layer, model layer (self-attention, BiLSTM), answer layer (pointer-generator decoder)
Commonsense: multi-hop paths, PMI scoring, paths chosen with a beam-search-style procedure
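A minimal sketch of the beam-search-style path selection under PMI scoring; `neighbors` (a ConceptNet adjacency lookup) and `pmi` (a precomputed PMI table) are hypothetical callables, so this illustrates the idea rather than the paper's exact procedure.

```python
def select_paths(start_concepts, question_tokens, neighbors, pmi, hops=2, beam=5):
    """Expand multi-hop paths hop by hop, keeping the top-`beam` paths by PMI score."""
    beams = [([c], 0.0) for c in start_concepts]
    for _ in range(hops):
        expanded = []
        for path, score in beams:
            for nxt in neighbors(path[-1]):
                # score the candidate hop by its best PMI with any question token
                gain = max(pmi(nxt, tok) for tok in question_tokens)
                expanded.append((path + [nxt], score + gain))
        beams = sorted(expanded, key=lambda x: x[1], reverse=True)[:beam] or beams
    return beams
```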
@@ -0,0 +1,3 @@
Motivation: learn to rank subject-predicate pairs
Method: pattern extraction, pattern revising, joint fact selection
Datasets: SimpleQuestions, Freebase (FB2M, FB5M)
@@ -0,0 +1,4 @@
Motivation: LSTM fine-tuning -> comparable performance
Method: entity detection: CRF with feature engineering; entity linking: n-gram inverted indexing, Levenshtein distance; relation prediction: RNNs, CNNs, logistic regression (TF-IDF, bi-gram, word embedding, relation words); evidence integration: m entities and n relations -> 1 entity-relation pair
Datasets: SimpleQuestions
Experiments: entity linking, relation prediction, end2end QA
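A minimal sketch of the entity-linking step: candidate entities are retrieved from a character n-gram inverted index over entity names, then re-ranked by Levenshtein distance to the detected mention. The index contents and n-gram size are illustrative.

```python
from collections import defaultdict

def char_ngrams(text, n=3):
    text = text.lower()
    return {text[i:i + n] for i in range(max(1, len(text) - n + 1))}

def build_index(entity_names, n=3):
    """Inverted index: character n-gram -> set of entity names containing it."""
    index = defaultdict(set)
    for name in entity_names:
        for g in char_ngrams(name, n):
            index[g].add(name)
    return index

def levenshtein(a, b):
    """Standard edit-distance DP with a rolling row."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def link(mention, index, n=3, top_k=5):
    """Retrieve candidates sharing an n-gram, re-rank by edit distance to the mention."""
    candidates = set().union(*(index.get(g, set()) for g in char_ngrams(mention, n)))
    return sorted(candidates, key=lambda name: levenshtein(mention.lower(), name.lower()))[:top_k]

idx = build_index(["Barack Obama", "Michelle Obama", "Barcelona"])
print(link("barak obama", idx))
```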
@@ -0,0 +1,3 @@
Motivation: distantly supervised relation extraction; false annotations, inner-sentence noise; random feature extraction is not robust.
Method: 1) STP: sub-tree parse; 2) BGRU: bidirectional GRU, entity-wise neural extractor; 3) transfer learning: entity classification -> relation extraction
Experiments: held-out evaluation, manual evaluation
@@ -0,0 +1,4 @@
Motivation: embedding triples and multimodal data into a vector space
Methods: encoders: multimodal data into vectors; decoders: generate multimodal values
Datasets: MovieLens-100k, YAGO-10
Experiments: link prediction, generating text and images
@@ -0,0 +1,4 @@
Problems: OOV -> commonsense KG; triples used in isolation, without the semantic meaning of the whole subgraph
Proposed Method: commonsense knowledge aware conversational model (CCM); subgraph, static graph attention; dynamic graph attention; encoder-decoder seq2seq
Datasets: ConceptNet, Reddit post-response
Metrics: perplexity, entity score, crowdsourcing (appropriateness, informativeness)
@@ -0,0 +1,3 @@
Method: graph embedding method considering temporal scopes, represent time as a hyperplane
Experiments: link prediction, temporal scoping
Datasets: YAGO11k, Wikidata12k (with time annotations)
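A minimal sketch of the time-as-hyperplane idea: entity and relation embeddings are projected onto a timestamp-specific hyperplane (unit normal w_t) and scored with a TransE-style translation in that hyperplane. The vectors below are toy values, not learned embeddings.

```python
import numpy as np

def project(v, w):
    """Project v onto the hyperplane whose unit normal is w."""
    w = w / np.linalg.norm(w)
    return v - np.dot(w, v) * w

def temporal_score(h, r, t, w_time):
    """Lower is better: || P(h) + P(r) - P(t) || on the time-specific hyperplane."""
    return np.linalg.norm(project(h, w_time) + project(r, w_time) - project(t, w_time))

h, r, t = np.random.randn(3, 50)      # toy head, relation, tail embeddings
w_2019 = np.random.randn(50)          # toy normal vector for one timestamp
print(temporal_score(h, r, t, w_2019))
```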
@@ -0,0 +1,4 @@
Motivation: lack of enough prior (seed) alignments
Method: bootstrapping approach to embedding-based entity alignment; alignment editing
Datasets: DBP15K, DWY100K

@@ -0,0 +1,3 @@
Motivation: knowledge graphs typically only contain positive facts.
Method: GAN for negative sample generation.
Experiments: link prediction using FB15k-237, WN18 and WN18RR
@@ -0,0 +1,4 @@
Motivation: multilingual KG; low coverage of entity alignment; literal description of entities
Method: 1) KGEM: knowledge model (TransE), alignment model (MTransE); 2) DEM: attentive gated recurrent unit encoder (AGRU), cross-lingual embedding; 3) KDCoE: iterative co-training.
Datasets: WK3160k extracted from DBPedia
Experiments: cross-lingual entity alignment & zero-shot alignment (Hit@1, Hit@10, MRR), cross-lingual knowledge completion (Hit@10: proportion of ranks no larger than 10; MRR: mean reciprocal rank)
@@ -0,0 +1,4 @@
Motivation: network structure, semantic information of edges
Proposed method: structural loss: context node; relational loss: edges
Datasets: ArnetMiner, AmazonReviews
Experiments: multi-label node classification
@@ -0,0 +1,3 @@
Motivation: to fill the gap between the effectiveness of KG embeddings and the understanding of their geometry.
Metrics: 1) ATM, alignment to mean; 2) Conicity; 3) VS, vector spread; 4) AVL, average vector length
Datasets: Freebase (FB15k), WordNet (WN18)
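A minimal sketch of computing the four metrics over an embedding matrix (one row per entity or relation), taking ATM as the cosine of a vector to the mean vector, Conicity as the average ATM, VS as the variance of ATM, and AVL as the average L2 norm; the random matrix is only a placeholder for learned embeddings.

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def geometry_metrics(V):
    """V: (num_vectors, dim) matrix of entity or relation embeddings."""
    mean_vec = V.mean(axis=0)
    atm = np.array([cosine(v, mean_vec) for v in V])            # ATM: alignment to mean
    return {
        "conicity": atm.mean(),                                  # average ATM
        "vector_spread": atm.var(),                              # VS: variance of ATM
        "avg_vector_length": np.linalg.norm(V, axis=1).mean(),   # AVL
    }

print(geometry_metrics(np.random.randn(1000, 100)))
```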
@@ -0,0 +1,4 @@
Motivation: learning first-order rules, scalable techniques
Definitions: closed-path rule, support degree of r, standard confidence, head coverage
Proposed method: sampling method; argument embedding; co-occurrence score function; rule evaluation
Datasets: FB15K-237, FB75K, YAGO2s, Wikidata, DBpedia 3.8
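A minimal sketch of the rule-quality measures for a closed-path rule r1(x,z) & r2(z,y) -> r(x,y) over a triple set: support counts pairs where body and head both hold, standard confidence divides by the number of body matches, and head coverage divides by the number of head triples. The toy KG is illustrative.

```python
from collections import defaultdict

def rule_metrics(triples, r1, r2, head_rel):
    by_rel = defaultdict(list)
    for h, r, t in triples:
        by_rel[r].append((h, t))
    # (x, y) pairs for which the rule body r1(x,z), r2(z,y) holds
    r2_by_head = defaultdict(set)
    for z, y in by_rel[r2]:
        r2_by_head[z].add(y)
    body_pairs = {(x, y) for x, z in by_rel[r1] for y in r2_by_head[z]}
    head_pairs = set(by_rel[head_rel])
    support = len(body_pairs & head_pairs)            # body and head both hold
    return {
        "support": support,
        "standard_confidence": support / len(body_pairs) if body_pairs else 0.0,
        "head_coverage": support / len(head_pairs) if head_pairs else 0.0,
    }

kg = [("a", "born_in", "x"), ("x", "capital_of", "b"), ("a", "nationality", "b"),
      ("c", "born_in", "y"), ("y", "capital_of", "d")]
print(rule_metrics(kg, "born_in", "capital_of", "nationality"))
```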
@@ -0,0 +1,3 @@
Motivation: long-tail in KG, one-shot setting, metric learning
Proposed method: 1) neighbor encoder: subgraph, one-hop neighbor set, encoding; 2) matching processor: LSTM encoding, similarity
Datasets: NELL-one, Wiki-One derived from NELL, Wikidata
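A minimal, simplified sketch of the two components: a neighbor encoder that mean-pools embeddings of an entity's one-hop (relation, neighbor) pairs, and a matching step that scores a query entity pair against the single reference pair. The paper's matching processor uses an LSTM; cosine similarity stands in here for brevity, and all dimensions are illustrative.

```python
import torch
import torch.nn as nn

class NeighborEncoder(nn.Module):
    """Encode an entity from its one-hop (relation, neighbor) pairs by mean pooling."""
    def __init__(self, n_entities, n_relations, dim=100):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)
        self.rel = nn.Embedding(n_relations, dim)
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, entity_id, neighbor_rel_ids, neighbor_ent_ids):
        pairs = torch.cat([self.rel(neighbor_rel_ids), self.ent(neighbor_ent_ids)], dim=-1)
        pooled = torch.tanh(self.proj(pairs)).mean(dim=0)   # mean over one-hop neighbors
        return pooled + self.ent(entity_id)

def match_score(encoder, reference_pair, query_pair):
    """Each pair is ((head_id, head_rels, head_ents), (tail_id, tail_rels, tail_ents))."""
    def encode(pair):
        (h, hr, he), (t, tr, te) = pair
        return torch.cat([encoder(h, hr, he), encoder(t, tr, te)])
    return torch.cosine_similarity(encode(reference_pair), encode(query_pair), dim=0)
```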
