Unifying Large Language Models and Knowledge Graphs: A Roadmap
| Published in: | IEEE Transactions on Knowledge and Data Engineering, Volume 36, Issue 7, pp. 3580-3599 |
|---|---|
| Main authors: | Pan, Shirui; Luo, Linhao; Wang, Yufei; Chen, Chen; Wang, Jiapu; Wu, Xindong |
| Medium: | Journal Article |
| Language: | English |
| Publication details: | New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.07.2024 |
| ISSN: | 1041-4347, 1558-2191 |
| Online access: | Get full text |
| Abstract | Large language models (LLMs), such as ChatGPT and GPT4, are making new waves in the field of natural language processing and artificial intelligence, due to their emergent ability and generalizability. However, LLMs are black-box models, which often fall short of capturing and accessing factual knowledge. In contrast, Knowledge Graphs (KGs), such as Wikipedia and Huapu, are structured knowledge models that explicitly store rich factual knowledge. KGs can enhance LLMs by providing external knowledge for inference and interpretability. Meanwhile, KGs are difficult to construct and evolve by nature, which challenges the existing methods in KGs to generate new facts and represent unseen knowledge. Therefore, it is complementary to unify LLMs and KGs together and, simultaneously, leverage their advantages. In this article, we present a forward-looking roadmap for the unification of LLMs and KGs. Our roadmap consists of three general frameworks, namely: 1) KG-enhanced LLMs, which incorporate KGs during the pre-training and inference phases of LLMs, or for the purpose of enhancing understanding of the knowledge learned by LLMs; 2) LLM-augmented KGs, which leverage LLMs for different KG tasks such as embedding, completion, construction, graph-to-text generation, and question answering; and 3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a mutually beneficial way to enhance both LLMs and KGs for bidirectional reasoning driven by both data and knowledge. We review and summarize existing efforts within these three frameworks in our roadmap and pinpoint their future research directions. |
|---|---|
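The "KG-enhanced LLM" framework summarized in the abstract amounts to retrieving relevant facts from a knowledge graph and supplying them to the model at inference time. The Python sketch below is a minimal illustration of that idea, not code from the paper; the toy triples, the `retrieve_facts` helper, and the prompt template are assumptions made here for illustration only.

```python
# Minimal sketch of KG-enhanced LLM inference: look up triples about entities
# mentioned in a question and prepend them to the prompt so a language model
# can ground its answer in explicit facts. The toy knowledge graph and prompt
# format are illustrative assumptions, not the method prescribed by the paper.
from typing import List, Tuple

# A knowledge graph represented as (head, relation, tail) triples.
KG: List[Tuple[str, str, str]] = [
    ("Joe Biden", "born_in", "Scranton"),
    ("Scranton", "located_in", "Pennsylvania"),
    ("Joe Biden", "profession", "Politician"),
]

def retrieve_facts(question: str, kg: List[Tuple[str, str, str]]) -> List[str]:
    """Return serialized triples whose head or tail entity appears in the question."""
    q = question.lower()
    return [
        f"({head}, {relation}, {tail})"
        for head, relation, tail in kg
        if head.lower() in q or tail.lower() in q
    ]

def build_prompt(question: str) -> str:
    """Prepend retrieved facts to the question so an LLM can ground its answer."""
    facts = retrieve_facts(question, KG)
    context = "\n".join(facts) if facts else "(no relevant facts found)"
    return (
        "Answer the question using the knowledge graph facts below.\n"
        f"Facts:\n{context}\n"
        f"Question: {question}\nAnswer:"
    )

if __name__ == "__main__":
    # The resulting prompt would be passed to whichever LLM API is available.
    print(build_prompt("Where was Joe Biden born?"))
```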
| Author | Wang, Yufei; Pan, Shirui; Chen, Chen; Wang, Jiapu; Luo, Linhao; Wu, Xindong |
| Author_xml | 1. Pan, Shirui (ORCID 0000-0003-0794-527X; s.pan@griffith.edu.au), School of Information and Communication Technology and Institute for Integrated and Intelligent Systems (IIIS), Griffith University, Nathan, QLD, Australia; 2. Luo, Linhao (ORCID 0000-0003-0027-942X; linhao.luo@monash.edu), Department of Data Science and AI, Monash University, Melbourne, VIC, Australia; 3. Wang, Yufei (garyyufei@gmail.com), Department of Data Science and AI, Monash University, Melbourne, VIC, Australia; 4. Chen, Chen (ORCID 0000-0002-4637-9250; s190009@ntu.edu.sg), Nanyang Technological University, Singapore; 5. Wang, Jiapu (ORCID 0000-0001-7639-5289; jpwang@emails.bjut.edu.cn), Faculty of Information Technology, Beijing University of Technology, Beijing, China; 6. Wu, Xindong (ORCID 0000-0003-2396-1704; xwu@hfut.edu.cn), Key Laboratory of Knowledge Engineering with Big Data (the Ministry of Education of China), Hefei University of Technology, Hefei, China |
| CODEN | ITKEEH |
| ContentType | Journal Article |
| Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2024 |
| DOI | 10.1109/TKDE.2024.3352100 |
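As a hedged, illustrative aside (not part of this record), the DOI above can also be resolved programmatically against the public Crossref REST API to retrieve the same bibliographic metadata. The endpoint `https://api.crossref.org/works/<doi>` and the `title`, `container-title`, and `DOI` fields used below follow Crossref's documented response format; availability of individual fields can vary by record.

```python
# Sketch: look up bibliographic metadata for this record's DOI via the public
# Crossref REST API. Uses only the Python standard library.
import json
import urllib.request

DOI = "10.1109/TKDE.2024.3352100"

def fetch_crossref_metadata(doi: str) -> dict:
    """Return the Crossref 'message' object describing the given DOI."""
    url = f"https://api.crossref.org/works/{doi}"
    with urllib.request.urlopen(url, timeout=30) as resp:
        payload = json.load(resp)
    return payload["message"]

if __name__ == "__main__":
    meta = fetch_crossref_metadata(DOI)
    # Crossref returns 'title' and 'container-title' as lists of strings.
    print((meta.get("title") or ["(no title)"])[0])
    print((meta.get("container-title") or [""])[0])
    print(meta.get("DOI"))
```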
| Discipline | Engineering; Computer Science |
| EISSN | 1558-2191 |
| EndPage | 3599 |
| Genre | orig-research |
| GrantInformation_xml | Australian Research Council (funder ID 10.13039/501100000923), grants FT210100097 and DP240101547; National Natural Science Foundation of China (funder ID 10.13039/501100001809), grant 62120106008 |
| ISICitedReferencesCount | 405 |
| ISSN | 1041-4347 |
| IsPeerReviewed | true |
| IsScholarly | true |
| Issue | 7 |
| Language | English |
| License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
| ORCID | 0000-0003-0027-942X 0000-0001-7639-5289 0000-0003-0794-527X 0000-0002-4637-9250 0000-0003-2396-1704 |
| PageCount | 20 |
| PublicationDate | 2024-07-01 |
| PublicationPlace | New York |
| PublicationTitle | IEEE transactions on knowledge and data engineering |
| PublicationTitleAbbrev | TKDE |
| PublicationYear | 2024 |
| Publisher | IEEE; The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
| StartPage | 3580 |
| SubjectTerms | Artificial intelligence; bidirectional reasoning; Chatbots; Cognition; Decoding; generative pre-training; Graphs; Inference; Knowledge graphs; Knowledge representation; Large language models; Natural language processing; Predictive models; roadmap; Task analysis; Training |
| Title | Unifying Large Language Models and Knowledge Graphs: A Roadmap |
| URI | https://ieeexplore.ieee.org/document/10387715 https://www.proquest.com/docview/3064713334 |
| Volume | 36 |