Authors: Chengxuan Ying, yingchengsyuan@gmail.com (Dalian University of Technology); Tianle Cai, tianle.cai@princeton.edu (Princeton University); Shengjie Luo, luosj@stu.pku.edu.cn (Peking University); Shuxin Zheng, shuz@microsoft.com (Microsoft Research Asia); Guolin Ke, guoke@microsoft.com (Microsoft Research Asia); Di He, dihe@microsoft.com (Microsoft Research Asia); Yanming Shen, shen@dlut.edu.cn (Dalian University of Technology); Tie-Yan Liu, tyliu@microsoft.com (Microsoft Research Asia)

Abstract

The Transformer architecture has become a dominant choice in many domains, such as natural language processing and computer vision. Yet, it has not achieved competitive performance on popular leaderboards of graph-level prediction compared to mainstream GNN variants. Therefore, it remains a mystery how Transformers could perform well for graph representation learning. In this paper, we solve this mystery by presenting Graphormer, which is built upon the standard Transformer architecture and attains excellent results on a broad range of graph representation learning tasks, especially on the recent OGB Large-Scale Challenge. Our key insight for utilizing the Transformer on graphs is the necessity of effectively encoding the structural information of a graph into the model. Code is available at https://github.com/Microsoft/Graphormer

1 Introduction

The Transformer [49] is well acknowledged as the most powerful neural network for modeling sequential data, such as natural language [11, 35, 6] and speech [17]. Model variants built upon the Transformer also show strong performance in computer vision [12, 36] and programming language understanding [19, 63, 44]. However, to the best of our knowledge, the Transformer has not yet become the de-facto standard on public graph representation leaderboards [21, 22, 14]. There are many attempts to bring the Transformer into the graph domain, but the only effective way so far has been to replace some key modules (e.g., feature aggregation) in classic GNN variants with the softmax attention [50, 7, 23, 51, 61, 46, 13]. Therefore, it is still an open question whether the Transformer architecture is suitable for modeling graphs and how to make it work in graph representation learning.

In this paper, we give an affirmative answer by developing Graphormer, which is built directly upon the standard Transformer and achieves state-of-the-art performance on a wide range of graph-level prediction tasks, including the very recent Open Graph Benchmark Large-Scale Challenge (OGB-LSC) [21] and several popular leaderboards (e.g., OGB [22], Benchmarking-GNN [14]). The Transformer was originally designed for sequence modeling. To leverage its power on graphs, we believe the key is to properly incorporate the structural information of graphs into the model. Note that the self-attention only computes the semantic similarity between a node and the other nodes, without considering the structural information of the graph reflected on the nodes and in the relations between node pairs.
Graphormer incorporates several effective structural encoding methods to leverage such information, which are described below.

First, we propose a Centrality Encoding in Graphormer to capture the node importance in the graph. In a graph, different nodes may have different importance; for example, celebrities are considered to be more influential than the majority of web users in a social network. However, such information is not reflected in the self-attention module, which computes similarities mainly from the semantic features of nodes. To address this issue, we propose a Centrality Encoding in Graphormer. In particular, we use the degree centrality for the centrality encoding, where a learnable vector is assigned to each node according to its degree and added to the node features in the input layer. Empirical studies show that this simple centrality encoding is effective for the Transformer in modeling graph data.

Second, we propose a novel Spatial Encoding in Graphormer to capture the structural relation between nodes. One notable geometrical property that distinguishes graph-structured data from other structured data, e.g., language or images, is that there is no canonical grid in which to embed a graph. In fact, nodes can only lie in a non-Euclidean space and are linked by edges. To model such structural information, for each node pair we assign a learnable embedding based on their spatial relation. Multiple measurements from the literature could be leveraged to model spatial relations. As a general-purpose choice, we use the distance of the shortest path between any two nodes, which is encoded as a bias term in the softmax attention.

By using the encodings above, we further mathematically show that Graphormer has strong expressiveness, as many popular GNN variants are just its special cases. The large model capacity leads to state-of-the-art performance on a wide range of tasks in practice. On the very recent Open Graph Benchmark Large-Scale Challenge (OGB-LSC) [21], Graphormer outperforms most mainstream GNN variants by more than 10 points in terms of the relative error. On other popular leaderboards (e.g., OGB [22], Benchmarking-GNN [14]), Graphormer also surpasses the previous best results, demonstrating the potential and versatility of the Transformer architecture.

2 Preliminary

In this section, we recap the preliminaries of Graph Neural Networks and the Transformer.

Graph Neural Network (GNN). Let G = (V, E) denote a graph, where V = {v_1, v_2, ..., v_n} and n = |V| is the number of nodes. Let the feature vector of node v_i be x_i. GNNs aim to learn representations of nodes and graphs. Typically, modern GNNs follow a learning scheme that iteratively updates the representation of a node by aggregating the representations of its first- or higher-order neighbors. We denote h_i^{(l)} as the representation of v_i at the l-th layer and define h_i^{(0)} = x_i. The l-th iteration of aggregation can be characterized by an AGGREGATE-COMBINE step as

a_i^{(l)} = \mathrm{AGGREGATE}^{(l)}\big(\{ h_j^{(l-1)} : v_j \in N(v_i) \}\big), \qquad h_i^{(l)} = \mathrm{COMBINE}^{(l)}\big(h_i^{(l-1)}, a_i^{(l)}\big),    (1)

where N(v_i) is the set of first- or higher-order neighbors of v_i. The AGGREGATE function is used to gather information from the neighbors; common aggregation functions include MEAN, MAX and SUM, which are used in different GNN architectures [26, 18, 50, 54]. The goal of the COMBINE function is to fuse the information from the neighbors into the node representation.
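To make the AGGREGATE-COMBINE scheme of Eq. (1) concrete, the following is a minimal, illustrative PyTorch sketch of one message-passing iteration with MEAN aggregation and a simple linear COMBINE. The class name, the dense adjacency-matrix representation, and the ReLU/linear COMBINE are assumptions for illustration only, not the construction of any particular GNN cited above.

```python
import torch
import torch.nn as nn

class MeanAggregateCombineLayer(nn.Module):
    """One GNN iteration of Eq. (1): MEAN AGGREGATE over neighbors, then a linear COMBINE."""
    def __init__(self, dim):
        super().__init__()
        self.combine = nn.Linear(2 * dim, dim)  # COMBINE(h_i^{(l-1)}, a_i^{(l)})

    def forward(self, h, adj):
        # h: [n, d] node representations; adj: [n, n] 0/1 adjacency matrix
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1)             # node degrees
        a = adj @ h / deg                                            # mean over neighbors (AGGREGATE)
        return torch.relu(self.combine(torch.cat([h, a], dim=-1)))  # COMBINE

# toy usage
h = torch.randn(5, 16)
adj = (torch.rand(5, 5) > 0.5).float()
adj = ((adj + adj.t()) > 0).float().fill_diagonal_(0)  # symmetrize, no self-loops
out = MeanAggregateCombineLayer(16)(h, adj)
print(out.shape)  # torch.Size([5, 16])
```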
In addition, for graph representation tasks, a READOUT function is designed to aggregate the node features h_i^{(L)} of the final iteration into the representation h_G of the entire graph G:

h_G = \mathrm{READOUT}\big(\{ h_i^{(L)} \mid v_i \in G \}\big).    (2)

READOUT can be implemented by a simple permutation-invariant function such as summation [54] or a more sophisticated graph-level pooling function [1].

Transformer. The Transformer architecture consists of a composition of Transformer layers [49]. Each Transformer layer has two parts: a self-attention module and a position-wise feed-forward network (FFN). Let H = [h_1^\top, ..., h_n^\top]^\top \in R^{n \times d} denote the input of the self-attention module, where d is the hidden dimension and h_i \in R^{1 \times d} is the hidden representation at position i. The input H is projected by three matrices W_Q \in R^{d \times d_K}, W_K \in R^{d \times d_K} and W_V \in R^{d \times d_V} to the corresponding representations Q, K, V. The self-attention is then calculated as

Q = H W_Q, \quad K = H W_K, \quad V = H W_V,    (3)
A = \frac{Q K^\top}{\sqrt{d_K}}, \quad \mathrm{Attn}(H) = \mathrm{softmax}(A) V,    (4)

where A is the matrix capturing the similarity between queries and keys. For simplicity of illustration, we consider single-head self-attention and assume d_K = d_V = d. The extension to multi-head attention is standard and straightforward, and we omit bias terms for simplicity.

3 Graphormer

In this section, we present our Graphormer for graph tasks. First, we elaborate on several key designs in Graphormer, which serve as inductive biases in the neural network for learning graph representations. We then provide the detailed implementation of Graphormer. Finally, we show that our proposed Graphormer is more expressive, since popular GNN models [26, 54, 18] are its special cases.

3.1 Structural Encodings in Graphormer

As discussed in the introduction, it is important to develop ways of leveraging the structural information of graphs in the Transformer model. To this end, we present three simple but effective structural encodings in Graphormer, illustrated in Figure 1.

3.1.1 Centrality Encoding

In Eq. (4), the attention distribution is calculated based on the semantic correlation between nodes. However, node centrality, which measures how important a node is in the graph, is usually a strong signal for graph understanding. For example, celebrities with a huge number of followers are important factors in predicting trends in a social network [40, 39]. Such information is neglected in the current attention calculation, and we believe it should be a valuable signal for Transformer models.

In Graphormer, we use the degree centrality, which is one of the standard centrality measures in the literature, as an additional signal to the neural network. Specifically, we develop a Centrality Encoding which assigns each node two real-valued embedding vectors according to its indegree and outdegree. As the centrality encoding is applied to each node, we simply add it to the node features as the input:

h_i^{(0)} = x_i + z^-_{\deg^-(v_i)} + z^+_{\deg^+(v_i)},    (5)

where z^-, z^+ \in R^d are learnable embedding vectors specified by the indegree \deg^-(v_i) and the outdegree \deg^+(v_i), respectively. For undirected graphs, \deg^-(v_i) and \deg^+(v_i) can be unified to \deg(v_i). By using the centrality encoding in the input, the softmax attention can catch the node importance signal in the queries and the keys. Therefore, the model can capture both the semantic correlation and the node importance in the attention mechanism.
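Eq. (5) can be realized with two embedding tables indexed by node degree. Below is a minimal sketch; the module name, the maximum-degree cut-off, and the clamping of large degrees are assumptions for illustration and may differ from the released Graphormer code.

```python
import torch
import torch.nn as nn

class DegreeCentralityEncoding(nn.Module):
    """Eq. (5): add learnable in-degree / out-degree embeddings to the input node features."""
    def __init__(self, dim, max_degree=512):
        super().__init__()
        self.z_in = nn.Embedding(max_degree, dim)   # z^-  indexed by in-degree
        self.z_out = nn.Embedding(max_degree, dim)  # z^+  indexed by out-degree

    def forward(self, x, in_degree, out_degree):
        # x: [n, d] node features; in_degree/out_degree: [n] integer degrees
        cap = self.z_in.num_embeddings - 1  # clamp rare huge degrees into the last bucket
        return x + self.z_in(in_degree.clamp(max=cap)) + self.z_out(out_degree.clamp(max=cap))

# toy usage: for an undirected graph, in-degree equals out-degree
x = torch.randn(4, 32)
deg = torch.tensor([1, 3, 2, 2])
h0 = DegreeCentralityEncoding(32)(x, deg, deg)
```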
3.1.2 Spatial Encoding

An advantage of the Transformer is its global receptive field: in each Transformer layer, each token can attend to the information at any position and then process its representation. But this operation has a byproduct: the model has to explicitly specify different positions or encode the positional dependency (such as locality) in the layers. For sequential data, one can either give each position an embedding (i.e., absolute positional encoding [49]) as the input, or encode the relative distance of any two positions (i.e., relative positional encoding [47, 45]) in the Transformer layers.

However, for graphs, nodes are not arranged as a sequence. They can lie in a multi-dimensional space and are linked by edges. To encode this structural information into the model, we propose a novel Spatial Encoding. Concretely, for any graph G, we consider a function φ(v_i, v_j): V × V → R that measures the spatial relation between v_i and v_j in G. The function φ can be defined by the connectivity between the nodes in the graph. In this paper, we choose φ(v_i, v_j) to be the distance of the shortest path (SPD) between v_i and v_j if the two nodes are connected. If not, we set the output of φ to a special value, i.e., -1. We assign each (feasible) output value a learnable scalar which serves as a bias term in the self-attention module. Denoting A_{ij} as the (i, j)-element of the Query-Key product matrix A, we have

A_{ij} = \frac{(h_i W_Q)(h_j W_K)^\top}{\sqrt{d}} + b_{φ(v_i, v_j)},    (6)

where b_{φ(v_i, v_j)} is a learnable scalar indexed by φ(v_i, v_j) and shared across all layers.

Here we discuss several benefits of the proposed method. First, compared to conventional GNNs described in Section 2, where the receptive field is restricted to the neighbors, Eq. (6) shows that the Transformer layer provides global information: each node can attend to all other nodes in the graph. Second, by using b_{φ(v_i, v_j)}, each node in a single Transformer layer can adaptively attend to all other nodes according to the graph structural information. For example, if b_{φ(v_i, v_j)} is learned to be a decreasing function of φ(v_i, v_j), then for each node the model will likely pay more attention to the nodes near it and less attention to the nodes far away from it.
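The following sketch illustrates Eq. (6): shortest-path distances are computed once per graph and used to index a table of learnable per-head biases that are added to the attention logits. The Floyd-Warshall routine, the bucket for unreachable pairs, the per-head bias table, and all names are assumptions made for this example.

```python
import torch
import torch.nn as nn

def shortest_path_distances(adj, max_dist=5):
    """All-pairs shortest path distances on an unweighted graph (simple Floyd-Warshall).
    Distances are capped at max_dist; unreachable pairs go to a dedicated bucket max_dist + 1."""
    n = adj.size(0)
    dist = torch.full((n, n), float("inf"))
    dist[adj.bool()] = 1.0
    dist.fill_diagonal_(0.0)
    for k in range(n):
        dist = torch.minimum(dist, dist[:, k:k + 1] + dist[k:k + 1, :])
    unreachable = torch.isinf(dist)
    dist = dist.clamp(max=max_dist)      # bucket long distances together
    dist[unreachable] = max_dist + 1     # special value for disconnected pairs
    return dist.long()

class SpatialEncodingBias(nn.Module):
    """Eq. (6): a learnable scalar b_phi, indexed by the SPD, added to the attention logits."""
    def __init__(self, num_heads, max_dist=5):
        super().__init__()
        self.bias = nn.Embedding(max_dist + 2, num_heads)  # distances 0..max_dist plus 'unreachable'

    def forward(self, scores, spd):
        # scores: [num_heads, n, n] raw attention logits; spd: [n, n] integer distances
        return scores + self.bias(spd).permute(2, 0, 1)

# toy usage
adj = torch.tensor([[0, 1, 0], [1, 0, 1], [0, 1, 0]]).float()
spd = shortest_path_distances(adj)
scores = torch.randn(8, 3, 3)
biased = SpatialEncodingBias(num_heads=8)(scores, spd)
```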
3.1.3 Edge Encoding in the Attention

In many graph tasks, edges also have structural features; for example, in a molecular graph, atom pairs may have features describing the type of bond between them. Such features are important to the graph representation, and encoding them together with node features into the network is essential. There are mainly two edge encoding methods used in previous works. In the first method, the edge features are added to the features of the associated nodes [22, 30]. In the second method, for each node, the features of its associated edges are used together with the node features in the aggregation [15, 54, 26]. However, such ways of using edge features only propagate the edge information to the associated nodes, which may not be an effective way to leverage edge information for the representation of the whole graph.

To better encode edge features into the attention layers, we propose a new edge encoding in Graphormer. The attention mechanism needs to estimate the correlation for each node pair (v_i, v_j), and we believe the edges connecting them should be considered in this correlation, as in [34, 51]. For each ordered node pair (v_i, v_j), we find (one of) the shortest paths SP_{ij} = (e_1, e_2, ..., e_N) from v_i to v_j, and compute an average of the dot-products between the edge features and learnable embeddings along the path. The proposed edge encoding incorporates edge features into the attention module via a bias term. Concretely, we further modify the (i, j)-element of A in Eq. (6) with the edge encoding c_{ij}:

A_{ij} = \frac{(h_i W_Q)(h_j W_K)^\top}{\sqrt{d}} + b_{φ(v_i, v_j)} + c_{ij}, \quad \text{where} \quad c_{ij} = \frac{1}{N} \sum_{n=1}^{N} x_{e_n} (w_n^E)^\top,    (7)

where x_{e_n} is the feature of the n-th edge e_n in SP_{ij}, w_n^E \in R^{d_E} is the n-th edge weight embedding, and d_E is the dimensionality of the edge features.
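A minimal sketch of the edge-encoding term c_{ij} of Eq. (7) follows, assuming the edge features along each shortest path have already been gathered into a zero-padded tensor. The module name, the fixed maximum path length, and the tensor layout are assumptions for this example.

```python
import torch
import torch.nn as nn

class EdgeEncodingBias(nn.Module):
    """Eq. (7): c_ij = average over the shortest path SP_ij of dot-products between the n-th
    edge feature x_{e_n} and a learnable per-position embedding w_n^E, used as an attention bias."""
    def __init__(self, edge_dim, max_path_len=5):
        super().__init__()
        self.w_e = nn.Parameter(torch.randn(max_path_len, edge_dim))  # w_n^E, one row per path position

    def forward(self, path_edge_feats, path_lens):
        # path_edge_feats: [n, n, max_path_len, edge_dim], zero-padded edge features along SP_ij
        # path_lens: [n, n] number of edges N on each shortest path
        dots = (path_edge_feats * self.w_e).sum(-1)     # [n, n, max_path_len]; padded slots contribute 0
        c = dots.sum(-1) / path_lens.clamp(min=1)       # average over the true path length N
        return c                                        # [n, n] bias added to the attention logits

# toy usage
n, d_e, L = 4, 8, 5
feats = torch.randn(n, n, L, d_e)
lens = torch.randint(1, L + 1, (n, n)).float()
c_ij = EdgeEncodingBias(d_e, L)(feats, lens)
```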
3.2 Implementation Details of Graphormer

Graphormer Layer. Graphormer is built upon the original implementation of the classic Transformer encoder described in [49]. In addition, we apply layer normalization (LN) before the multi-head self-attention (MHA) and the feed-forward blocks (FFN) instead of after [53]. This modification has been unanimously adopted by current Transformer implementations because it leads to more effective optimization [43]. In particular, for the FFN sub-layer, we set the dimensionality of the input, output, and inner layer to the same dimension d. We formally characterize the Graphormer layer as follows:

h'^{(l)} = \mathrm{MHA}(\mathrm{LN}(h^{(l-1)})) + h^{(l-1)},    (8)
h^{(l)} = \mathrm{FFN}(\mathrm{LN}(h'^{(l)})) + h'^{(l)}.    (9)

Special Node. As stated in the previous section, various graph pooling functions have been proposed to compute the graph representation. Inspired by [15], in Graphormer we add a special node called [VNode] to the graph and individually connect it to each node. In the AGGREGATE-COMBINE step, the representation of [VNode] is updated like that of ordinary nodes, and the representation h_G of the entire graph is taken to be the node feature of [VNode] in the final layer. In BERT-like models [11, 35], there is an analogous token, [CLS], a special token attached at the beginning of each sequence to represent the sequence-level feature on downstream tasks. While [VNode] is connected to all other nodes in the graph, which means the distance of the shortest path is 1 for any φ([VNode], v_j) and φ(v_i, [VNode]), the connection is not physical. To distinguish physical from virtual connections, and inspired by [25], we reset all spatial encodings b_{φ([VNode], v_j)} and b_{φ(v_i, [VNode])} to a distinct learnable scalar.
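To recap the layer composition of Eqs. (8)-(9), the sketch below implements one pre-LN block. It reuses PyTorch's nn.MultiheadAttention and passes the combined structural bias (spatial plus edge terms) through the additive float attn_mask; this reuse, the GELU activation, and the class name are implementation assumptions for illustration, not the paper's released code, which implements the biased attention directly.

```python
import torch
import torch.nn as nn

class GraphormerLayer(nn.Module):
    """Pre-LN Transformer block of Eqs. (8)-(9); graph structural encodings enter as an
    additive per-head attention bias, here carried by attn_mask."""
    def __init__(self, dim, num_heads):
        super().__init__()
        self.ln1 = nn.LayerNorm(dim)
        self.ln2 = nn.LayerNorm(dim)
        self.mha = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))  # inner size = d

    def forward(self, h, attn_bias=None):
        # h: [1, n, d] node representations; attn_bias: [num_heads, n, n] additive bias or None
        x = self.ln1(h)
        h = h + self.mha(x, x, x, attn_mask=attn_bias, need_weights=False)[0]  # Eq. (8)
        h = h + self.ffn(self.ln2(h))                                          # Eq. (9)
        return h

# toy usage
layer = GraphormerLayer(dim=32, num_heads=4)
h = torch.randn(1, 6, 32)
bias = torch.zeros(4, 6, 6)   # e.g., b_phi + c_ij, broadcast per head
out = layer(h, bias)
```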
3.3 How Powerful is Graphormer?

In the previous subsections, we introduced three structural encodings and the architecture of Graphormer. A natural question is: do these modifications make Graphormer more powerful than other GNN variants? In this section, we first give a positive answer by showing that Graphormer can represent the AGGREGATE and COMBINE steps of popular GNN models:

Fact 1. By choosing proper weights and distance function φ, the Graphormer layer can represent the AGGREGATE and COMBINE steps of popular GNN models such as GIN, GCN, and GraphSAGE.

The proof sketch for this result is: 1) the spatial encoding enables the self-attention module to distinguish the neighbor set N(v_i) of node v_i, so that the softmax function can compute mean statistics over N(v_i); 2) knowing the degree of a node, the mean over neighbors can be translated into a sum over neighbors; 3) with multiple heads and the FFN, the representations of v_i and of N(v_i) can be processed separately and combined later. Moreover, we show that by using our spatial encoding, Graphormer can go beyond classic message-passing GNNs, whose expressive power is no more than the 1-Weisfeiler-Lehman (WL) test (see Appendix A).

Connection between Self-attention and Virtual Node. Besides being more expressive than popular GNNs, we also find an interesting connection between self-attention and the virtual node heuristic [15, 31, 24, 22]. As shown on the OGB leaderboard [22], the virtual node trick, which augments graphs with additional supernodes connected to all nodes in the original graph, can significantly improve the performance of existing GNNs. Conceptually, the benefit of the virtual node is that it aggregates the information of the whole graph (like a READOUT function) and then propagates it to each node. However, a naive addition of a supernode to a graph can potentially lead to inadvertent over-smoothing of information propagation [24]. We instead find that such graph-level aggregation and propagation can be naturally fulfilled by vanilla self-attention without additional encodings:

Fact 2. By choosing proper weights, every node representation at the output of a Graphormer layer without additional encodings can represent MEAN READOUT functions.

This fact takes advantage of self-attention: each node can attend to all other nodes, so the layer can simulate a graph-level READOUT operation that aggregates information from the whole graph. Besides the theoretical justification, we empirically find that Graphormer does not encounter the problem of over-smoothing, which makes the improvement scalable. This fact also inspires us to introduce a special node for graph readout (see the previous subsection).

4 Experiments

We first conduct experiments on the very recent quantum chemistry regression challenge of OGB-LSC [21] (i.e., PCQM4M-LSC), which is currently the largest graph-level prediction dataset and contains more than 3.8M graphs in total. Then we report results on three other popular tasks: ogbg-molhiv, ogbg-molpcba and ZINC, which come from OGB [22] and Benchmarking-GNN [14]. Finally, we ablate the important design elements of Graphormer. Detailed descriptions of the datasets and training strategies can be found in Appendix B.

4.1 OGB Large-Scale Challenge

Baselines. We benchmark the proposed Graphormer against GCN [26] and GIN [54], and their variants with virtual node (-VN) [15], which achieve the state-of-the-art valid and test mean absolute error (MAE) on the official leaderboard [21]. In addition, we compare to GIN's multi-hop variant [5] and the 12-layer deep graph network DeeperGCN [30], which also show promising performance on other leaderboards. We further compare our Graphormer with the recent Transformer-based graph model GT [13].

Settings. We primarily report results for two model sizes: Graphormer (L = 12, d = 768) and a smaller one, GraphormerSMALL (L = 6, d = 512). Both the number of attention heads in the attention module and the dimensionality d_E of the edge features are set to 32. We use AdamW as the optimizer, and set the hyper-parameter ϵ to 1e-8 and (β_1, β_2) to (0.99, 0.999). The peak learning rate is set to 2e-4 (3e-4 for GraphormerSMALL), with a 60k-step warm-up stage followed by a linear-decay learning rate scheduler. The total number of training steps is 1M. The batch size is set to 1024. All models are trained on 8 NVIDIA V100 GPUs for about 2 days.
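As a hedged sketch of the optimizer and schedule described above: the model below is a placeholder, the warm-up/decay formula is one plausible realization of a "60k-step warm-up followed by linear decay", and the weight decay is not specified in the text above, so it is set to 0 here as a placeholder.

```python
import torch

model = torch.nn.Linear(768, 768)  # hypothetical placeholder for the Graphormer parameters
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-4, betas=(0.99, 0.999),
                              eps=1e-8, weight_decay=0.0)

warmup_steps, total_steps = 60_000, 1_000_000

def lr_lambda(step):
    # linear warm-up to the peak learning rate, then linear decay towards zero
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    return max(0.0, (total_steps - step) / (total_steps - warmup_steps))

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
```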
Results. Table 1 summarizes the performance comparison on the PCQM4M-LSC dataset. From the table, GIN-VN achieves the previous state-of-the-art validate MAE of 0.1395. The original implementation of GT [13] employs a hidden dimension of 64 to reduce the total number of parameters. For a fair comparison, we also report the result obtained by enlarging the hidden dimension to 768, denoted GT-Wide, which leads to a total of 83.2M parameters. However, neither GT nor GT-Wide outperforms GIN-VN and DeeperGCN-VN; in particular, we do not observe a performance gain with the growth of the number of parameters of GT.

Compared to the previous state-of-the-art GNN architecture, Graphormer noticeably surpasses GIN-VN by a large margin, e.g., an 11.5% relative decline in validate MAE. By using an ensemble with ExpC [55], we obtained a 0.1200 MAE on the complete test set and won first place in the graph-level track of the OGB Large-Scale Challenge [21, 58]. As stated in Section 3.3, we further find that the proposed Graphormer does not encounter the problem of over-smoothing, i.e., the train and validate errors keep going down as the depth and width of the models grow.

4.2 Graph Representation

In this section, we further investigate the performance of Graphormer on commonly used graph-level prediction tasks from popular leaderboards, i.e., OGB [22] (OGBG-MolPCBA, OGBG-MolHIV) and Benchmarking-GNN [14] (ZINC). Since pre-training is encouraged by OGB, we mainly explore the transferable capability of a Graphormer model pre-trained on OGB-LSC (i.e., PCQM4M-LSC). Please note that the model configurations, hyper-parameters, and pre-training performance of the pre-trained Graphormers used for MolPCBA and MolHIV differ from the models used in the previous subsection; please refer to Appendix B for detailed descriptions. For Benchmarking-GNN, which does not encourage large pre-trained models, we train an additional GraphormerSLIM (L = 12, d = 80, total parameters = 489K) from scratch on ZINC.

Baselines. We report the performance of GNNs which achieve top performance on the official leaderboards without additional domain-specific features. Considering that the pre-trained Graphormer leverages external data, for a fair comparison on the OGB datasets we additionally report the performance of fine-tuning a GIN-VN pre-trained on the PCQM4M-LSC dataset, which achieves the previous state-of-the-art valid and test MAE on that dataset.

Settings. We report detailed training strategies in Appendix B. In addition, Graphormer is more easily trapped in the over-fitting problem due to the large size of the model and the small size of the dataset. Therefore, we employ a widely used data augmentation for graphs, FLAG [27], to mitigate the over-fitting problem on the OGB datasets.

Results. Tables 2, 3 and 4 summarize the performance of Graphormer compared with other GNNs on the MolHIV, MolPCBA and ZINC datasets. In particular, GT [13] and SAN [28] in Table 4 are recently proposed Transformer-based GNN models. Graphormer consistently and significantly outperforms previous state-of-the-art GNNs on all three datasets by a large margin. Notably, except for Graphormer, the other pre-trained GNNs do not achieve competitive performance, which is in line with previous literature [20]. In addition, we conduct more comparisons to fine-tuning the pre-trained GNNs; please refer to Appendix C.

4.3 Ablation Studies

We perform a series of ablation studies on the importance of the designs in our proposed Graphormer, on the PCQM4M-LSC dataset. To save computation resources, the Transformer models in Table 5 have 12 layers and are trained for 100K iterations.

Node Relation Encoding. We compare previously used positional encodings (PE) to our proposed spatial encoding, both of which aim to encode the information of distinct node relations for Transformers. There are various PEs employed by previous Transformer-based GNNs, e.g., Weisfeiler-Lehman PE (WL-PE) [61] and Laplacian PE [3, 14]. We report the performance of the Laplacian PE since it performs well compared to a series of PEs for the Graph Transformer in previous literature [13]. The Transformer architecture with the spatial encoding outperforms the counterpart built on the positional encoding, which demonstrates the effectiveness of using the spatial encoding to capture node spatial information.

Centrality Encoding. The Transformer architecture with degree-based centrality encoding yields a large performance boost in comparison to the one without centrality information. This indicates that the centrality encoding is indispensable for the Transformer architecture when modeling graph data.

Edge Encoding. We compare our proposed edge encoding (denoted as via attn bias) to the two commonly used edge encodings described in Section 3.1.3 for incorporating edge features into GNNs, denoted as via node and via Aggr in Table 5. From the table, the gap between the two conventional methods is minor, but our proposed edge encoding performs significantly better, which indicates that edge encoding as an attention bias is more effective for the Transformer at capturing spatial information on edges.

5 Related Work

In this section, we highlight the most recent works that attempt to develop GNNs based on the standard Transformer architecture or graph structural encodings, and spend less effort on elaborating the works that adapt the attention mechanism to GNNs [33, 60, 7, 23, 1, 50, 51, 61, 48].

5.1 Graph Transformer

There are several works studying the performance of pure Transformer architectures (stacks of Transformer layers) with modifications on graph representation tasks, which are most closely related to our Graphormer. For example, several parts of the Transformer layer are modified in [46], including an additional GNN applied in the attention sublayer to produce the Q, K and V vectors, long-range residual connections, and two branches of FFN to produce node and edge representations separately. They pre-train their model on 10 million unlabelled molecules and achieve excellent results by fine-tuning on downstream tasks. The attention module is modified into a soft adjacency matrix in [41] by directly adding the adjacency matrix and an RDKit-computed inter-atomic distance matrix to the attention probabilities. Very recently, Dwivedi et al. [13] revisit a series of works on Transformer-based GNNs, suggest that the attention mechanism of Transformers on graph data should only aggregate information from the neighborhood (i.e., use the adjacency matrix as an attention mask) to ensure graph sparsity, and propose to use Laplacian eigenvectors as positional encoding.
Their model GT surpasses baseline GNNs on graph representation tasks. A concurrent work [28] proposes a novel full Laplacian spectrum method to learn the position of each node in a graph, and empirically shows better results than GT.

5.2 Structural Encodings in GNNs

Path and Distance in GNNs. Information about paths and distances is commonly used in GNNs. For example, an attention-based aggregation is proposed in [9], where the node features, edge features, one-hot distance features and ring flag features are concatenated to calculate the attention probabilities; similar path-based attention is leveraged in [56] to model the influence between the center node and its higher-order neighbors; a distance-weighted aggregation scheme on graphs is proposed in [59]; and it has been proved in [32] that adopting distance encoding (i.e., one-hot distance features as extra node attributes) leads to a strictly more expressive power than the 1-WL test.

Positional Encoding in Transformer on Graph. Several works introduce positional encodings (PE) into Transformer-based GNNs to help the model capture node position information. For example, Graph-BERT [61] introduces three types of PE to embed node position information into the model, i.e., an absolute WL-PE which represents different nodes labeled by the Weisfeiler-Lehman algorithm, and an intimacy-based PE and a hop-based PE which are both variant to the sampled subgraphs. Absolute Laplacian PE is employed in [13], and an empirical study shows that its performance surpasses the absolute WL-PE used in [61].

Edge Feature. Besides the conventional methods for incorporating edge features described in the previous section, there are several attempts to better encode edge features: a GNN layer coupled with attention is developed in [16], where edge features are merged into the attention computation; edge features are further leveraged in the aggregation in [54, 5]; and in [13], the authors propose to project edge features to an embedding vector, then multiply it by the attention coefficients, and send the result to an additional FFN sub-layer to produce edge representations.

6 Conclusion

We have explored the direct application of Transformers to graph representation. With three novel graph structural encodings, the proposed Graphormer works surprisingly well on a wide range of popular benchmark datasets. While these initial results are encouraging, many challenges remain. For example, the quadratic complexity of the self-attention module restricts Graphormer's application on large graphs. Therefore, future development of an efficient Graphormer is necessary. Performance improvements could be expected by leveraging domain-knowledge-powered encodings on particular graph datasets. Finally, an applicable graph sampling strategy is desired for node representation extraction with Graphormer. We leave these for future work.

7 Acknowledgement

We would like to thank Mingqi Yang and Shanda Li for helpful discussions.

References

[1] Jinheon Baek, Minki Kang, and Sung Ju Hwang. Accurate learning of graph representations with graph multiset pooling. In ICLR, 2021.

[2] Dominique Beaini, Saro Passaro, Vincent Létourneau, William L Hamilton, Gabriele Corso, and Pietro Liò. Directional graph networks. In International Conference on Machine Learning, 2021.
[3] Mikhail Belkin and Partha Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15(6):1373–1396, 2003.

[4] Xavier Bresson and Thomas Laurent. Residual gated graph convnets. arXiv preprint arXiv:1711.07553, 2017.

[5] Rémy Brossard, Oriel Frigo, and David Dehaene. Graph convolutions that can finally model local structure. arXiv preprint arXiv:2011.15069, 2020.

[6] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877–1901, 2020.

[7] Deng Cai and Wai Lam. Graph transformer for graph-to-sequence learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 7464–7471, 2020.

[8] Tianle Cai, Shengjie Luo, Keyulu Xu, Di He, Tie-Yan Liu, and Liwei Wang. Graphnorm: A principled approach to accelerating graph neural network training. In International Conference on Machine Learning, 2021.

[9] Benson Chen, Regina Barzilay, and Tommi Jaakkola. Path-augmented graph transformer network. arXiv preprint arXiv:1905.12712, 2019.

[10] Gabriele Corso, Luca Cavalleri, Dominique Beaini, Pietro Liò, and Petar Veličković. Principal neighbourhood aggregation for graph nets. Advances in Neural Information Processing Systems, 33, 2020.

[11] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, 2019.

[12] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.

[13] Vijay Prakash Dwivedi and Xavier Bresson. A generalization of transformer networks to graphs. AAAI Workshop on Deep Learning on Graphs: Methods and Applications, 2021.

[14] Vijay Prakash Dwivedi, Chaitanya K Joshi, Thomas Laurent, Yoshua Bengio, and Xavier Bresson. Benchmarking graph neural networks. arXiv preprint arXiv:2003.00982, 2020.

[15] Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. In International Conference on Machine Learning, pages 1263–1272. PMLR, 2017.

[16] Liyu Gong and Qiang Cheng. Exploiting edge features for graph neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9211–9219, 2019.

[17] Anmol Gulati, James Qin, Chung-Cheng Chiu, Niki Parmar, Yu Zhang, Jiahui Yu, Wei Han, Shibo Wang, Zhengdong Zhang, Yonghui Wu, et al. Conformer: Convolution-augmented transformer for speech recognition. arXiv preprint arXiv:2005.08100, 2020.
[18] William L Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. In NIPS, 2017.

[19] Vincent J Hellendoorn, Charles Sutton, Rishabh Singh, Petros Maniatis, and David Bieber. Global relational models of source code. In International Conference on Learning Representations, 2019.

[20] W Hu, B Liu, J Gomes, M Zitnik, P Liang, V Pande, and J Leskovec. Strategies for pre-training graph neural networks. In International Conference on Learning Representations (ICLR), 2020.

[21] Weihua Hu, Matthias Fey, Hongyu Ren, Maho Nakata, Yuxiao Dong, and Jure Leskovec. OGB-LSC: A large-scale challenge for machine learning on graphs. arXiv preprint arXiv:2103.09430, 2021.

[22] Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. Open graph benchmark: Datasets for machine learning on graphs. arXiv preprint arXiv:2005.00687, 2020.

[23] Ziniu Hu, Yuxiao Dong, Kuansan Wang, and Yizhou Sun. Heterogeneous graph transformer. In Proceedings of The Web Conference 2020, pages 2704–2710, 2020.

[24] Katsuhiko Ishiguro, Shin-ichi Maeda, and Masanori Koyama. Graph warp module: an auxiliary module for boosting the power of graph neural networks in molecular graph analysis. arXiv preprint arXiv:1902.01020, 2019.

[25] Guolin Ke, Di He, and Tie-Yan Liu. Rethinking the positional encoding in language pre-training. ICLR, 2020.

[26] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016.

[27] Kezhi Kong, Guohao Li, Mucong Ding, Zuxuan Wu, Chen Zhu, Bernard Ghanem, Gavin Taylor, and Tom Goldstein. Flag: Adversarial data augmentation for graph neural networks. arXiv preprint arXiv:2010.09891, 2020.

[28] Devin Kreuzer, Dominique Beaini, William Hamilton, Vincent Létourneau, and Prudencio Tossou. Rethinking graph transformers with spectral attention. arXiv preprint arXiv:2106.03893, 2021.

[29] Tuan Le, Marco Bertolini, Frank Noé, and Djork-Arné Clevert. Parameterized hypercomplex graph neural networks for graph classification. arXiv preprint arXiv:2103.16584, 2021.

[30] Guohao Li, Chenxin Xiong, Ali Thabet, and Bernard Ghanem. Deepergcn: All you need to train deeper gcns. arXiv preprint arXiv:2006.07739, 2020.

[31] Junying Li, Deng Cai, and Xiaofei He. Learning graph-level representation for drug discovery. arXiv preprint arXiv:1709.03741, 2017.

[32] Pan Li, Yanbang Wang, Hongwei Wang, and Jure Leskovec. Distance encoding: Design provably more powerful neural networks for graph representation learning. Advances in Neural Information Processing Systems, 33, 2020.

[33] Yuan Li, Xiaodan Liang, Zhiting Hu, Yinbo Chen, and Eric P. Xing. Graph transformer, 2019.

[34] Xi Victoria Lin, Richard Socher, and Caiming Xiong. Multi-hop knowledge graph reasoning with reward shaping. arXiv preprint arXiv:1808.10568, 2018.

[35] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692, 2019.

[36] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. arXiv preprint arXiv:2103.14030, 2021.

[37] Shengjie Luo, Shanda Li, Tianle Cai, Di He, Dinglan Peng, Shuxin Zheng, Guolin Ke, Liwei Wang, and Tie-Yan Liu. Stable, fast and accurate: Kernelized attention with relative positional encoding. NeurIPS, 2021.
[38] Haggai Maron, Heli Ben-Hamu, Hadar Serviansky, and Yaron Lipman. Provably powerful graph networks. In Advances in Neural Information Processing Systems, volume 32, 2019.

[39] P David Marshall. The promotion and presentation of the self: celebrity as marker of presentational media. Celebrity Studies, 1(1):35–48, 2010.

[40] Alice Marwick and Danah Boyd. To see and be seen: Celebrity practice on twitter. Convergence, 17(2):139–158, 2011.

[41] Łukasz Maziarka, Tomasz Danel, Sławomir Mucha, Krzysztof Rataj, Jacek Tabor, and Stanisław Jastrzębski. Molecule attention transformer. arXiv preprint arXiv:2002.08264, 2020.

[42] Maho Nakata and Tomomi Shimazaki. Pubchemqc project: a large-scale first-principles electronic structure database for data-driven chemistry. Journal of Chemical Information and Modeling, 57(6):1300–1308, 2017.

[43] Sharan Narang, Hyung Won Chung, Yi Tay, William Fedus, Thibault Fevry, Michael Matena, Karishma Malkan, Noah Fiedel, Noam Shazeer, Zhenzhong Lan, et al. Do transformer modifications transfer across implementations and applications? arXiv preprint arXiv:2102.11972, 2021.

[44] Dinglan Peng, Shuxin Zheng, Yatao Li, Guolin Ke, Di He, and Tie-Yan Liu. How could neural networks understand programs? In International Conference on Machine Learning. PMLR, 2021.

[45] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1–67, 2020.

[46] Yu Rong, Yatao Bian, Tingyang Xu, Weiyang Xie, Ying Wei, Wenbing Huang, and Junzhou Huang. Self-supervised graph transformer on large-scale molecular data. Advances in Neural Information Processing Systems, 33, 2020.

[47] Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. Self-attention with relative position representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 464–468, 2018.

[48] Yunsheng Shi, Zhengjie Huang, Wenjin Wang, Hui Zhong, Shikun Feng, and Yu Sun. Masked label prediction: Unified message passing model for semi-supervised classification. arXiv preprint arXiv:2009.03509, 2020.

[49] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NIPS, 2017.

[50] Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph attention networks. ICLR, 2018.

[51] Guangtao Wang, Rex Ying, Jing Huang, and Jure Leskovec. Direct multi-hop attention based graph neural network. arXiv preprint arXiv:2009.14332, 2020.

[52] Sinong Wang, Belinda Li, Madian Khabsa, Han Fang, and Hao Ma. Linformer: Self-attention with linear complexity. arXiv preprint arXiv:2006.04768, 2020.

[53] Ruibin Xiong, Yunchang Yang, Di He, Kai Zheng, Shuxin Zheng, Chen Xing, Huishuai Zhang, Yanyan Lan, Liwei Wang, and Tieyan Liu. On layer normalization in the transformer architecture. In International Conference on Machine Learning, pages 10524–10533. PMLR, 2020.

[54] Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? In International Conference on Learning Representations, 2019.
[55] Mingqi Yang, Yanming Shen, Heng Qi, and Baocai Yin. Breaking the expressive bottlenecks of graph neural networks. arXiv preprint arXiv:2012.07219, 2020.

[56] Yiding Yang, Xinchao Wang, Mingli Song, Junsong Yuan, and Dacheng Tao. Spagan: Shortest path graph attention network. In IJCAI, 2019.

[57] Chengxuan Ying, Guolin Ke, Di He, and Tie-Yan Liu. Lazyformer: Self attention with lazy update. arXiv preprint arXiv:2102.12702, 2021.

[58] Chengxuan Ying, Mingqi Yang, Shuxin Zheng, Guolin Ke, Shengjie Luo, Tianle Cai, Chenglin Wu, Yuxin Wang, Yanming Shen, and Di He. First place solution of KDD Cup 2021 & OGB Large-Scale Challenge graph-level track. arXiv preprint arXiv:2106.08279, 2021.

[59] Jiaxuan You, Rex Ying, and Jure Leskovec. Position-aware graph neural networks. In International Conference on Machine Learning, pages 7134–7143. PMLR, 2019.

[60] Seongjun Yun, Minbyul Jeong, Raehyun Kim, Jaewoo Kang, and Hyunwoo J Kim. Graph transformer networks. Advances in Neural Information Processing Systems, 32, 2019.

[61] Jiawei Zhang, Haopeng Zhang, Congying Xia, and Li Sun. Graph-bert: Only attention is needed for learning graph representations. arXiv preprint arXiv:2001.05140, 2020.

[62] Chen Zhu, Yu Cheng, Zhe Gan, Siqi Sun, Tom Goldstein, and Jingjing Liu. Freelb: Enhanced adversarial training for natural language understanding. In ICLR, 2020.

[63] Daniel Zügner, Tobias Kirschstein, Michele Catasta, Jure Leskovec, and Stephan Günnemann. Language-agnostic representation learning of source code from structure and context. In International Conference on Learning Representations, 2020.

A Proofs

A.1 SPD Can Be Used to Improve the WL-Test

The 1-WL test fails in many cases [38, 32]; thus classic message-passing GNNs also fail to distinguish many pairs of graphs. We show that SPD can help when the 1-WL test fails: for example, in Figure 2, where the 1-WL test fails, the sets of SPDs from each node to all other nodes successfully distinguish the two graphs.

A.2 Proof of Fact 1

We begin by showing that the self-attention module with Spatial Encoding can represent the common aggregation functions.

MEAN AGGREGATE. This can be represented in Eq. (6) by: 1) setting b_φ = 0 if φ = 1 and b_φ = -∞ otherwise, where φ is the SPD; 2) setting W_Q = W_K = 0 and W_V to be the identity matrix. Then softmax(A)V gives the average of the representations of the neighbors.

SUM AGGREGATE. SUM aggregation can be realized by first performing MEAN aggregation and then multiplying by the node degrees. Concretely, the node degrees can be extracted from the Centrality Encoding by an additional head and concatenated to the representations after MEAN aggregation. The FFN module in Graphormer can then represent the function that multiplies the averaged representations by the degree, by the universal approximation theorem of FFNs.

MAX AGGREGATE. Representing MAX aggregation is more involved than MEAN and SUM. For the t-th dimension of the representation vector, we need one head to select the maximal value over the t-th dimension in the neighborhood, in Eq. (6) by: 1) setting b_φ = 0 if φ = 1 and b_φ = -∞ otherwise, where φ is the SPD; 2) setting W_K = e_t, the t-th standard basis vector; setting W_Q = 0 and the bias term of Q (which is omitted in the preliminaries for simplicity) to T·1; and setting W_V = e_t, where T is a temperature that can be chosen large enough so that the softmax function approximates a hard max, and 1 is the all-ones vector.

COMBINE. The COMBINE step takes the result of AGGREGATE and the previous representation of the current node as input. This can be achieved by the AGGREGATE operations described above together with an additional head that outputs the features of the present node, i.e., in Eq. (6): 1) setting b_φ = 0 if φ = 0 and b_φ = -∞ otherwise, where φ is the SPD; 2) setting W_Q = W_K = 0 and W_V to be the identity matrix. Then the FFN module can approximate any COMBINE function by the universal approximation theorem of FFNs.
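The following is a small numeric illustration of the MEAN AGGREGATE construction above (our own check, not from the paper): with W_Q = W_K = 0, W_V the identity, and b_φ = 0 for neighbors and -∞ otherwise, the softmax attention returns the mean of the neighbors' representations. The toy graph below is assumed.

```python
import torch

# toy graph: node 0 has neighbors {1, 2}
h = torch.tensor([[1.0, 0.0], [0.0, 2.0], [4.0, 4.0]])
adj = torch.tensor([[0, 1, 1], [1, 0, 0], [1, 0, 0]]).float()

# b_phi = 0 for neighbors (phi = 1), -inf otherwise; the QK^T term vanishes since W_Q = W_K = 0
bias = torch.where(adj.bool(), torch.zeros_like(adj), torch.full_like(adj, float("-inf")))
A = torch.zeros(3, 3) + bias
out = torch.softmax(A, dim=-1) @ h   # W_V = I, so V = h

print(out[0])                 # tensor([2., 3.])
print(h[[1, 2]].mean(dim=0))  # tensor([2., 3.])  -> row 0 equals the mean of its neighbors
```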
A.3 Proof of Fact 2

MEAN READOUT. This can be proved by setting W_Q = W_K = 0, the bias terms of Q and K to T·1, and W_V to be the identity matrix, where T should be much larger than the scale of b_φ so that the T² term dominates the Spatial Encoding term. The attention distribution is then (approximately) uniform over all nodes, so each output node representation is the average of all node representations, i.e., a MEAN READOUT.

B Experiment Details

B.1 Details of Datasets

We summarize the datasets used in this work in Table 6. PCQM4M-LSC is a quantum chemistry graph-level prediction task in the recent OGB Large-Scale Challenge, originally curated under the PubChemQC project [42]. The task of PCQM4M-LSC is to predict the DFT (density functional theory)-calculated HOMO-LUMO energy gap of molecules given their 2D molecular graphs, which is one of the most practically relevant quantum chemical properties in molecular science. PCQM4M-LSC is unprecedentedly large in scale compared to other labeled graph-level prediction datasets, containing more than 3.8M graphs. Besides, we conduct experiments on two molecular graph datasets from the popular OGB leaderboards, i.e., OGBG-MolPCBA and OGBG-MolHIV. They are two molecular property prediction datasets of different sizes. The pre-trained knowledge of molecular graphs on PCQM4M-LSC can easily be leveraged on these two datasets. We adopt the official scaffold split on the three datasets following [21, 22]. In addition, we use another popular leaderboard, Benchmarking-GNN [14]. We use the ZINC dataset, the most popular real-world molecular dataset, to predict graph property regression for constrained solubility, an important chemical property for designing generative GNNs for molecules. Different from the scaffold splitting in OGB, uniform sampling is adopted in ZINC for data splitting.

B.2 Details of Training Strategies

B.2.1 PCQM4M-LSC

We report the detailed hyper-parameter settings used for training Graphormer in Table 7. We reduce the dimensionality of the inner layer of the FFN from 4d in [49] to d, which does not appreciably hurt the performance but significantly saves parameters. The embedding dropout ratio is set to 0.1 by default in many previous Transformer works [11, 35]. However, we empirically find that even a mild input dropout ratio (e.g., 0.1) leads to a noticeable performance drop on the validation set of PCQM4M-LSC. One possible reason is that molecular graphs are rather small (i.e., the median number of atoms per molecule is about 15), which makes the embedding of each node matter more for the graph-level feature.

B.2.2 OGBG-MolPCBA

Pre-training. First, we report the model settings and hyper-parameters of the Graphormer pre-trained on PCQM4M-LSC. Empirically, we find that the performance on MolPCBA benefits from a larger pre-trained model size. Therefore, we train a deeper Graphormer with 18 Transformer layers on PCQM4M-LSC. The hidden dimension and the FFN inner dimension are set to 1024.
We set the peak learning rate to 1e-4 for this deeper Graphormer. In addition, we increase the attention dropout ratio from 0.1 to 0.3 during pre-training to further prevent the model from over-fitting. The other hyper-parameters remain unchanged. The pre-trained Graphormer used for MolPCBA achieves a 0.1253 valid MAE on PCQM4M-LSC, which is slightly worse than that reported in Table 1.

Fine-tuning. Table 8 summarizes the hyper-parameters used for fine-tuning Graphormer on OGBG-MolPCBA. We conduct a grid search over several hyper-parameters to find the optimal configuration. The experimental results are reported as the mean of 10 independent runs with random seeds. We use FLAG [27] with minor modifications for graph data augmentation. In particular, besides the step size α and the number of steps m, we also employ the projection step from [62] with maximum perturbation g. The performance of Graphormer on MolPCBA is quite robust to the hyper-parameters of FLAG. The rest of the hyper-parameters are the same as for the pre-trained model.

B.2.3 OGBG-MolHIV

Pre-training. We use the Graphormer reported in Table 1 as the pre-trained model for OGBG-MolHIV, where the pre-training hyper-parameters are summarized in Table 7.

Fine-tuning. The hyper-parameters for fine-tuning Graphormer on OGBG-MolHIV are presented in Table 9. Empirically, we find that different choices of the FLAG hyper-parameters (i.e., step size α, number of steps m, and maximum perturbation g) greatly influence the performance of Graphormer on OGBG-MolHIV. Therefore, we spend more effort on a grid search over the FLAG hyper-parameters. We report the best hyper-parameters, averaged over 10 independent runs with random seeds.

B.2.4 ZINC

To keep the total number of parameters of Graphormer below 500K, per the request of the Benchmarking-GNN leaderboard [14], we train a slim 12-layer Graphormer with hidden dimension 80, referred to as GraphormerSLIM in Table 4, which has about 489K learnable parameters. The number of attention heads is set to 8. Table 10 summarizes the detailed hyper-parameters on ZINC. We train for 400K steps on this dataset and use a weight decay of 0.01.

B.3 Details of Hyper-parameters for Baseline Methods

In this section, we present the details of our re-implementation of the baseline methods.

B.3.1 PCQM4M-LSC

The official GitHub repository of OGB-LSC provides hyper-parameters and code to reproduce the results on the leaderboard. These hyper-parameters work well for almost all popular GNN variants, except DeeperGCN-VN, for which they lead to training divergence. Therefore, for DeeperGCN-VN, we follow the official hyper-parameter settings released by the authors [30]. For a fair comparison with Graphormer, we train a 12-layer DeeperGCN. The hidden dimension is set to 600. The batch size is set to 256. The learning rate is set to 1e-3, and a step learning rate scheduler is employed with a decay step size of 30 epochs and a decay rate γ of 0.25. The model is trained for 100 epochs.

Since the Laplacian PE of GT [13] requires the number of nodes in a graph to exceed its dimension, for GT and GT-Wide we set the dimension of the Laplacian PE to 4, which results in only 0.08% of the molecules being filtered out. We adopt the default hyper-parameter settings described in [13], except that we decrease the learning rate to 1e-4, which leads to better convergence on PCQM4M-LSC.
B.3.2 OGBG-MolPCBA

To fine-tune the GIN-VN pre-trained on PCQM4M-LSC on MolPCBA, we follow the hyper-parameter settings provided in the original OGB paper [22]. To be more concrete, we load the pre-trained checkpoint reported in Table 1 and fine-tune it on the OGBG-MolPCBA dataset. We use a grid search over the hyper-parameters for better fine-tuning performance. In particular, the learning rate is selected from {1e-5, 1e-4, 1e-3}; the dropout ratio is selected from {0.0, 0.1, 0.5}; and the batch size is selected from {32, 64}.

B.3.3 OGBG-MolHIV

Similarly, we fine-tune the GIN-VN pre-trained on PCQM4M-LSC on MolHIV by following the hyper-parameter settings provided in the original OGB paper [22]. We also conduct a grid search to look for the optimal hyper-parameters. The search ranges for each hyper-parameter are the same as in the previous subsection.

C More Experiments

As described in the related work, GROVER [46] is a Transformer-based GNN which has 100 million parameters and was pre-trained on 10 million unlabelled molecules using 250 Nvidia V100 GPUs. In this section, we report reproduced scores of GROVER on MolHIV and MolPCBA and compare them with the proposed Graphormer. We download the pre-trained GROVER models from the official GitHub repository, follow the official instructions, and fine-tune the pre-trained checkpoints with a careful hyper-parameter search (see Table 11). We find that GROVER achieves competitive performance on MolHIV only if it uses additional molecular features, i.e., Morgan molecular fingerprints and 2D features extracted by RDKit. Therefore, we report the GROVER scores obtained with these two additional molecular feature sets. Please note that, from the leaderboard, such additional molecular features are known to be very effective on the MolHIV dataset.

Tables 12 and 13 summarize the performance of GROVER and GROVERLARGE compared to Graphormer on MolHIV and MolPCBA. From the tables, we find that Graphormer consistently outperforms GROVER, even without any additional molecular features.

D Discussion and Future Work

Complexity. As with the canonical Transformer, the attention computation in Graphormer scales quadratically with the number of nodes n in the input graph, which can be costly for large graphs and restricts its use in settings with limited computational resources. Recently, many solutions have been proposed to address this problem for Transformers [52, 57, 37]; Graphormer would benefit greatly from future development of efficient attention mechanisms.

Choice of centrality and φ. In Graphormer, there are many possible choices for the node centrality and the spatial encoding function φ(v_i, v_j). For example, one could leverage the L2 distance between two atoms in the 3D structure of a molecule. In this paper, we mainly evaluate the most general centrality and distance metrics in graph theory, i.e., degree centrality and shortest path distance. Performance improvements can be expected by leveraging domain-knowledge-powered encodings on particular graph datasets.

Node Representation. There is a wide range of node-level representation tasks on graph-structured data, for example in finance and social networks. Graphormer could naturally be applied to node representation extraction with an applicable graph sampling strategy. We leave this for future work.

This paper is available on arXiv under the CC BY 4.0 Deed (Attribution 4.0 International) license.