Authors: Yichen Zhang, Gan He, Lei Ma, Xiaofei Liu, J. J. Johannes Hjorth, Alexander Kozlov, Yutao He, Shenjian Zhang, Jeanette Hellgren Kotaleski, Yonghong Tian, Sten Grillner, Kai Du, Tiejun Huang

Abstract
Biophysically detailed multi-compartment models are powerful tools for exploring the computational principles of the brain and also serve as a theoretical framework for generating algorithms for artificial intelligence (AI) systems. However, their expensive computational cost severely limits applications in both neuroscience and AI. The major bottleneck when simulating detailed compartment models is the ability of a simulator to solve large systems of linear equations. Here, we present a novel Dendritic Hierarchical Scheduling (DHS) method to markedly accelerate this process. We theoretically prove that the DHS implementation is computationally optimal and accurate. This GPU-based method performs 2-3 orders of magnitude faster than the classic serial Hines method on conventional CPU platforms. We build a DeepDendrite framework, which integrates the DHS method and the GPU computing engine of the NEURON simulator, and demonstrate applications of DeepDendrite in neuroscience tasks. We investigate how spatial patterns of spine inputs affect neuronal excitability in a detailed human pyramidal neuron model with 25,000 spines. In addition, we provide a brief discussion of the potential of DHS and DeepDendrite for AI.

Introduction
Deciphering the coding and computational principles of neurons is essential for neuroscience. Mammalian brains are composed of thousands of different types of neurons with unique morphological and biophysical properties. Although no longer strictly accurate, the "point-neuron" doctrine1, in which neurons are considered simple summing units, is still widely applied in neural computation, especially in neural network analysis. In recent years, modern artificial intelligence (AI) has exploited this principle and developed powerful tools such as artificial neural networks (ANNs)2. However, in addition to integration at the single-neuron level, subcellular compartments such as neuronal dendrites can also perform nonlinear operations as independent computational units3,4,5,6,7. Moreover, dendritic spines, the tiny protrusions that densely cover the dendrites of spiny neurons, can compartmentalize synaptic signals, allowing them to be separated from their parent dendrites ex vivo and in vivo8,9,10,11.

Simulations using biologically detailed neuron models provide a theoretical framework for linking biological details to computational principles. The core of this framework is the biophysically detailed multi-compartment model12,13, which allows us to model neurons with realistic dendritic morphologies, intrinsic ionic conductances, and extrinsic synaptic inputs. The detailed modeling of subcellular compartments, i.e., dendrites, is built on classic cable theory12, which describes the biophysical membrane properties of dendrites as passive cables and provides a mathematical account of how electrical signals invade and propagate along complex neuronal processes. By integrating cable theory with active biophysical mechanisms such as ion channels and excitatory and inhibitory synaptic currents, a detailed multi-compartment model can reproduce cellular and subcellular neuronal computations constrained by experiments4,7.

Beyond its profound impact on neuroscience, detailed biophysical neuron modeling has recently been used to bridge the gap between detailed neuronal structure and biophysics on the one hand and AI on the other.
The core technology in the modern AI field is the ANN built from point neurons, an analog of biological neural networks. Although ANNs trained with the "backpropagation-of-error" (backprop) algorithm achieve remarkable performance in specialized applications, even beating professional human players at the games of Go and chess14,15, the human brain still outperforms ANNs in domains involving more dynamic and noisy environments16,17. Recent theoretical studies suggest that dendritic integration is important for efficient learning algorithms that can potentially outperform backprop in parallel information processing18,19. Moreover, a single detailed multi-compartment model can learn the network-level nonlinear computations of point neurons by adjusting synaptic strengths alone20,21,22, demonstrating the potential of detailed models for constructing stronger, brain-like AI systems. Expanding the brain-inspired AI paradigm from detailed single-neuron models to large-scale biologically detailed networks is therefore a high priority.

A long-standing limitation of the detailed simulation approach is its excessive computational cost, which has severely limited its applications in neuroscience and AI. The main bottleneck of the simulation is solving the linear equations derived from the foundational theories of detailed models12,23,24. To improve efficiency, the classic Hines method reduces the time complexity of solving these equations from O(n3) to O(n) and has been widely implemented as the core algorithm in popular simulators such as NEURON25 and GENESIS26. However, this method processes each compartment sequentially in a serial manner. When a simulation involves many biophysically detailed dendrites and dendritic spines, the matrix of the linear equations (the "Hines matrix") scales with the growing number of dendrites or spines (Fig. 1e), making the Hines method impractical, as it imposes a severe rate-limiting step on the entire simulation.

Fig. 1: a A reconstruction of a layer-5 pyramidal neuron model and the mathematical formulation used in detailed neuron modeling. b Workflow of numerically simulating detailed neuron models. The equation-solving phase is the bottleneck of the simulation. c An example of the linear equations in the simulation. d Data dependency in the Hines method when solving the linear equations. e The size of the Hines matrix scales with model complexity. The number of linear equations to be solved increases substantially as models become more detailed. f Computational cost (steps taken in the equation-solving phase) of the serial Hines method on different types of neuron models. g Illustration of different solving methods. In the parallel methods (middle, right), different compartments of a neuron are assigned to multiple processing units, shown in different colors. In the serial method (left), all compartments are computed by a single unit. h Computational cost of the three methods when solving the equations of a pyramidal model with spines. i Run time of the different methods when solving the equations for 500 pyramidal models with spines. Run time is the time consumed by a 1-s simulation (solving the equations 40,000 times with a time step of 0.025 ms). p-Hines: the cellular-level parallel method in CoreNEURON (on GPU); branch-based: the branch-based parallel method (on GPU); DHS: the Dendritic Hierarchical Scheduling method (on GPU).

Over the past decades, significant advances have been made in accelerating the Hines method with cellular-level parallel methods, which parallelize the computation of different parts of each cell27,28,29,30,31,32. However, current cellular-level parallel methods often lack either an efficient parallelization strategy or sufficient numerical accuracy compared with the original Hines method.
Here, we develop a fully automatic, numerically accurate, and optimized simulation toolkit that increases computational efficiency and reduces computational cost. Moreover, this toolkit can readily be adopted for building and testing neural networks with biological detail for machine learning and AI applications. Critically, we formalize the parallel computation of the Hines method as a mathematical scheduling problem and develop a Dendritic Hierarchical Scheduling (DHS) method based on combinatorial optimization33 and the theory of parallel computing34. We prove that our algorithm provides optimal scheduling without any loss of precision. We further optimize DHS for current state-of-the-art GPU chips by exploiting the GPU memory hierarchy and memory-access mechanisms. As a result, DHS accelerates the computation 60-1,500-fold (Supplementary Table 1) compared with the classic NEURON simulator25 while maintaining identical accuracy.

To enable detailed dendritic simulation for use in AI, we then establish the DeepDendrite framework by integrating the DHS-embedded CoreNEURON platform35 (an optimized compute engine for NEURON) as the simulation engine together with two auxiliary modules (an I/O module and a learning module) that support dendritic learning algorithms during simulation. DeepDendrite runs on the GPU hardware platform and supports both regular simulation tasks in neuroscience and learning tasks in AI.

Last but not least, we present several applications of DeepDendrite that address key challenges in neuroscience and AI: (1) We demonstrate how spatial patterns of dendritic spine inputs affect the activity of spiny neurons when spines are explicitly modeled across the entire dendritic tree ("full-spine model"). DeepDendrite enables us to explore neuronal computation in a simulated human pyramidal neuron model with ~25,000 dendritic spines. (2) In the Discussion, we consider the potential of DeepDendrite in the context of AI, specifically for building ANNs with the detailed morphology of human pyramidal neurons.

All source code for DeepDendrite, the full-spine models, and the detailed dendritic network models is publicly available online (see Code Availability). Our open-source learning framework can easily be integrated with other dendritic learning rules, such as learning rules for nonlinear (active) dendrites21, burst-dependent synaptic plasticity20, and learning by spike prediction36. Overall, our study provides a complete set of tools with the potential to transform the current ecosystem of the computational neuroscience community. By exploiting the power of GPU computing, we anticipate that these tools will facilitate system-level exploration of the computational principles embedded in the fine structures of the brain, as well as promote the interaction between neuroscience and modern AI.

Results
The Dendritic Hierarchical Scheduling (DHS) method
Calculating ionic currents and solving linear equations are the two critical phases when simulating biophysically detailed neurons; both are time-consuming and pose severe computational burdens. Fortunately, calculating the ionic currents of each compartment is a fully independent process, so it can be naturally parallelized on devices with massive parallel-computing units such as GPUs37. As a result, solving the linear equations becomes the remaining bottleneck for parallelization (Fig. 1a-f).

To address this bottleneck, cellular-level parallel methods have been developed, which accelerate single-cell computation by "splitting" a single cell into multiple parts that can be computed in parallel27,28.
However, these methods rely heavily on prior knowledge38 to design practical strategies for how to split a single neuron into parts (Fig. 1g; Supplementary Fig. 1). As a result, they become less efficient for neurons with asymmetric morphologies, e.g., pyramidal neurons and Purkinje neurons.

We set out to develop a more efficient and accurate parallel method for simulating biologically detailed neural networks. First, we established criteria for the correctness of a cellular-level parallel method. Based on the theory of parallel computing34, we propose three conditions that ensure a parallel method will produce the same solutions as the conventional serial Hines method, derived from the data dependency of the Hines method (see Methods). Then, to theoretically evaluate the run time, i.e., the efficiency, of serial and parallel methods, we introduce and formalize the concept of computational cost as the number of steps a method takes to solve the equations (see Methods).

Based on simulation accuracy and computational cost, we formalize the parallelization problem as a mathematical scheduling problem (see Methods). In simple terms, we treat a single neuron as a tree with many nodes (compartments). With k parallel threads, we can process at most k nodes at each step, but we must ensure that a node is computed only after all its child nodes have been processed; our goal is to find a strategy that processes the whole tree in the minimum number of steps.

To find an optimal partition, we propose a method called Dendritic Hierarchical Scheduling (DHS) (the theoretical proof is presented in Methods). The key idea of DHS is to prioritize deep nodes (Fig. 2a). The DHS method consists of two steps, analyzing the dendritic topology and finding the best partition: (1) Given a detailed model, we first obtain its corresponding dependency tree and calculate the depth of each node (the depth of a node is the number of its ancestor nodes) on the tree (Fig. 2b, c). (2) After topology analysis, we search the candidate nodes and select the k deepest ones (a node is a candidate only if all its child nodes have been processed). This process repeats until all nodes are processed (Fig. 2d).

Fig. 2: DHS workflow. DHS processes the deepest candidate nodes at each iteration. a Illustration of calculating node depth for a compartmental model. The model is first converted into a tree structure, and then the depth of each node is computed; colors indicate different depth values. b, c Topology analysis on different neuron models. Seven neurons with different morphologies are shown. For each model, the soma is chosen as the root of the tree, so node depth increases from the soma (0) toward the distal dendrites. d Illustration of performing DHS on the compartmental model with four threads. Candidates: nodes that can be processed. Selected candidates: nodes selected by DHS, i.e., the k deepest candidates. Processed nodes: nodes that have been processed in previous steps. e Parallelization strategy obtained by DHS. After DHS processing, the number of steps for processing the nodes is reduced from 14 to 5 by distributing the nodes across multiple threads. f Relative computational cost, i.e., the ratio of the computational cost of DHS to that of the serial Hines method, when applying DHS with different numbers of threads on different types of models.

Taking a simple model with 15 compartments as an example: with the serial Hines method it takes 14 steps to process all nodes, whereas DHS with four parallel units can partition the nodes into five subsets (Fig. 2d): {{9,10,12,14}, {1,7,11,13}, {2,3,4,8}, {6}, {5}}. Because nodes in the same subset can be processed in parallel, it takes only five steps to process all nodes using DHS (Fig. 2e).
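To make the two DHS steps above concrete, the following Python sketch computes node depths from a parent-index array and then repeatedly schedules the at most k deepest candidates. It is our own illustration of the selection rule described in the text, not the authors' GPU implementation; the function name dhs_partition and the parent-array input format are assumptions made for this example.

from collections import defaultdict

def dhs_partition(parent, k):
    # parent[i] is the parent index of compartment i; the root (soma) is node 0
    # with parent[0] = -1, and parents are assumed to precede children (parent[i] < i).
    n = len(parent)
    children = defaultdict(list)
    for i, p in enumerate(parent):
        if p >= 0:
            children[p].append(i)
    # Step 1: topology analysis -- the depth of a node is the number of its ancestors.
    depth = [0] * n
    for i in range(1, n):
        if parent[i] >= 0:
            depth[i] = depth[parent[i]] + 1
    # Step 2: repeatedly pick the (at most) k deepest candidates; a node becomes
    # a candidate only once all of its children have been processed.
    remaining = {i: len(children[i]) for i in range(n)}
    candidates = {i for i in range(n) if remaining[i] == 0}
    partition = []
    while candidates:
        chosen = sorted(candidates, key=lambda i: depth[i], reverse=True)[:k]
        partition.append(chosen)
        for i in chosen:
            candidates.remove(i)
            if parent[i] >= 0:
                remaining[parent[i]] -= 1
                if remaining[parent[i]] == 0:
                    candidates.add(parent[i])
    return partition

Applied to the 15-compartment tree of Fig. 2 with k = 4, a scheduler of this kind yields a five-subset partition analogous to the one listed above (the exact node assignments can differ when several candidates share the same depth).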
We then applied the DHS method to six detailed neuron models (selected from ModelDB39) with different numbers of threads (Fig. 2f), including cortical and hippocampal pyramidal neurons, a cerebellar Purkinje neuron, a striatal projection neuron (SPN), and an olfactory bulb mitral cell40,41,42,43,44,45, covering the principal neuron types of sensory, cortical, and subcortical areas. We then measured the computational cost. The relative computational cost is defined here as the ratio of the computational cost of DHS to that of the serial Hines method. The computational cost, i.e., the number of steps taken to solve the equations, decreases dramatically with an increasing number of threads. For example, with 16 threads the computational cost of DHS is 7%-10% of that of the serial Hines method. Intriguingly, the DHS method approaches the lower bound of the computational cost for the tested neurons when given 16 or even 8 parallel threads (Fig. 2f), suggesting that adding more threads does not further improve performance because of the dependencies between compartments.

Taken together, we establish a DHS method that enables automatic analysis of dendritic topology and optimal partitioning for parallel computation. It is worth noting that DHS finds the optimal partition before the simulation starts, so no extra computation is required when solving the equations.

Accelerating DHS with GPU memory boosting
DHS assigns each neuron to multiple threads, which consumes a large number of threads when simulating neural networks. Graphics Processing Units (GPUs) consist of massive numbers of processors (i.e., streaming processors, SPs; Fig. 3a, b) for parallel computing46. In theory, the many SPs on a GPU should support efficient simulation of large neural networks (Fig. 3c). However, we consistently observed that the efficiency of DHS drops substantially as the network size grows, which could result from scattered data storage or from the extra memory accesses caused by loading and writing intermediate results (Fig. 3d, top).

Fig. 3: a GPU architecture and its memory hierarchy. Each GPU contains massive numbers of processing units (streaming processors). Different types of memory have different throughput. b Architecture of streaming multiprocessors (SMs). Each SM contains multiple streaming processors, registers, and L1 cache. c Applying DHS to two neurons, each with four threads. During simulation, each thread runs on a single streaming processor. d Memory-optimizing strategy on GPU. Top, thread allocation and data storage in DHS, before (left) and after (right) memory boosting. Bottom, an example of a single triangularization step when simulating the two neurons in c. Processors send data requests to load the data for each thread from global memory. Without memory boosting (left), it takes seven transactions to load all requested data, plus some extra transactions for intermediate results. With memory boosting (right), it takes only two transactions to load all requested data, and registers are used for intermediate results, which further improves memory throughput. e Run time of DHS (32 threads per cell) with and without memory boosting on multiple layer-5 pyramidal models with spines. f Speedup from memory boosting on multiple layer-5 pyramidal models with spines. Memory boosting yields a 1.6-2-fold speedup.

We address this problem with GPU memory boosting, a method that increases memory throughput by exploiting the GPU memory hierarchy and its access mechanisms. Owing to the GPU's memory-loading mechanism, consecutive threads that load aligned and consecutively stored data achieve much higher memory throughput than threads accessing scattered data46,47.
To achieve high throughput, we first align the computation order of the nodes and rearrange the threads according to the number of nodes assigned to them. We then permute the data storage in global memory so that it is consistent with the computation order, i.e., nodes processed in the same step are stored consecutively in global memory. In addition, we use GPU registers to store intermediate results, which further improves memory throughput. In the example, memory boosting requires only two memory transactions to load the requested data (Fig. 3d, right). Moreover, experiments on varying numbers of pyramidal neurons with spines and on the typical neuron models (Fig. 3e, f; Supplementary Fig. 2) show that memory boosting achieves a 1.2-3.8-fold speedup over naive DHS.

For a comprehensive test of the performance of DHS with GPU memory boosting, we selected six typical neuron models and evaluated the run time of solving the cable equations for massive numbers of each model (Fig. 4). We examined DHS with four threads (DHS-4) and with sixteen threads (DHS-16) per neuron. Compared with the GPU method in CoreNEURON, DHS-4 and DHS-16 achieve roughly 5-fold and 15-fold speedups, respectively (Fig. 4a). Moreover, compared with the conventional serial Hines method in NEURON running on a single CPU thread, DHS accelerates the simulation by 2-3 orders of magnitude (Supplementary Fig. 3), while maintaining identical numerical accuracy in the presence of dense spines (Supplementary Figs. 4 and 8), active dendrites (Supplementary Fig. 7), and different segmentation strategies (Supplementary Fig. 7).

Fig. 4: a Run time of solving the equations for a 1-s simulation on GPU (dt = 0.025 ms, 40,000 iterations in total). CoreNEURON: the parallel method used in CoreNEURON; DHS-4: DHS with four threads per neuron; DHS-16: DHS with 16 threads per neuron. b, c Visualization of the partitions produced by DHS-4 and DHS-16; each color indicates a single thread. During computation, each thread switches between different branches.

DHS yields cell-type-specific optimal partitions
To gain insight into the working mechanism of the DHS method, we visualized the partitioning process by mapping the compartments assigned to each thread (each color represents a single thread in Fig. 4b, c). The visualization shows that a single thread frequently switches between different branches (Fig. 4b, c). Interestingly, DHS generates aligned partitions for morphologically symmetrical neurons such as the striatal projection neuron (SPN) and the mitral cell (Fig. 4b, c). In contrast, it generates fragmented partitions for morphologically asymmetric neurons such as pyramidal neurons and Purkinje cells (Fig. 4b, c), suggesting that DHS divides the neural tree into many individual units (i.e., tree nodes) rather than whole branches. This cell-type-specific, fine-grained partitioning allows DHS to make full use of all available threads.

In summary, DHS with memory boosting provides a theoretically proven optimal solution for solving the linear equations in parallel, efficiently and without any loss of accuracy. Using this approach, we built the open-access DeepDendrite platform, which neuroscientists can use to implement their models without any specific knowledge of GPU programming. Below, we demonstrate how DeepDendrite can be used in neuroscience tasks. We also discuss the potential of the DeepDendrite framework for AI-related tasks in the Discussion section.
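As a concrete illustration of the data permutation that memory boosting relies on, the short NumPy sketch below reorders per-node arrays so that nodes scheduled in the same DHS step occupy consecutive memory locations. The function and argument names are our own, and the real implementation operates on CoreNEURON's internal data structures on the GPU rather than on NumPy arrays.

import numpy as np

def permute_for_memory_boost(partition, arrays):
    # partition: list of subsets of original node indices, one subset per
    #            elimination step (e.g., the output of a DHS-style scheduler).
    # arrays: dict of 1-D per-node arrays (diagonal, off-diagonal terms, rhs, ...).
    order = np.array([i for step in partition for i in step])  # new storage order
    new_index = np.empty_like(order)
    new_index[order] = np.arange(order.size)  # map: old node id -> new slot
    permuted = {name: a[order] for name, a in arrays.items()}
    return permuted, new_index

With this layout, the threads working on a given elimination step read one contiguous block of memory, the access pattern that lets consecutive threads be served by a small number of memory transactions.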
DHS enables spine-level modeling
Dendritic spines receive most of the excitatory inputs in cortical and hippocampal pyramidal neurons, striatal projection neurons, and other spiny cells, so their morphology and plasticity are essential for regulating neuronal excitability10,48,49,50,51. However, spines are so tiny (~1 μm long) that their voltage-dependent processes are very difficult to measure directly in experiments. Theoretical work is therefore critical for a comprehensive understanding of spine computation.

A single spine can be modeled with two parts: the spine head, where the synapses are located, and the spine neck, which connects the spine head to the dendrite52. Theory predicts that the very thin spine neck (0.1-0.5 μm in diameter) electrically isolates the spine head from its parent dendrite and thereby compartmentalizes the signals generated at the spine head53. However, a detailed model with spines fully distributed over the dendrites ("full-spine model") is computationally very expensive. A common compromise is to modify the membrane capacitance and resistance by a spine factor F54 instead of modeling all spines explicitly. Here, the spine factor approximates the effect of spines on the biophysical properties of the cell membrane54.

Inspired by the previous work of Eyal et al.51, we investigated how different spatial patterns of excitatory inputs formed on dendritic spines shape neuronal activity in a human pyramidal neuron model with explicitly modeled spines (Fig. 5a). Notably, Eyal et al. used the spine factor F to incorporate spines into the dendrites while only a few activated spines were explicitly attached to dendrites ("few-spine model" in Fig. 5a). The value of F in their model was computed from the dendritic area and spine area in the reconstructed data. Accordingly, we calculated the spine density from their reconstructed data to make our full-spine model consistent with Eyal's few-spine model. With the spine density set to 1.3 μm-1, the pyramidal neuron model contained about 25,000 spines without altering the model's original morphological and biophysical properties. We then repeated the previous experimental protocols with both the full-spine and the few-spine models, using the same synaptic input as in Eyal's work but attaching extra background noise to each sample. By comparing the somatic traces (Fig. 5b, c) and spike probabilities (Fig. 5d) of the full-spine and few-spine models, we found that the full-spine model is much leakier than the few-spine model. In addition, the spike probability triggered by the activation of clustered spines appeared to be more nonlinear in the full-spine model (solid blue line in Fig. 5d) than in the few-spine model (dashed blue line in Fig. 5d). These results indicate that the conventional F-factor method may underestimate the impact of dense spines on dendritic excitability and nonlinearity.

Fig. 5: a Experiment setup. We examine two major types of models: few-spine models and full-spine models. Few-spine models (two on the left) incorporate the spine area globally into the dendrites and only attach individual spines together with activated synapses. In full-spine models (two on the right), all spines are explicitly attached over the whole dendritic tree. We explore the effects of clustered and randomly distributed synaptic inputs on the few-spine and full-spine models, respectively. b Somatic voltages recorded for the cases in a. Colors of the voltage curves correspond to a; scale bar: 20 ms, 20 mV. c Color-coded voltage during the simulation at specific times.
Colors indicate the magnitude of voltage. d Somatic spike probability as a function of the number of simultaneously activated synapses (as in Eyal et al.'s work) for the four cases in a. Background noise is attached. e Run time of the experiments in d with different simulation methods. NEURON: the conventional NEURON simulator running on a single CPU core. CoreNEURON: the CoreNEURON simulator on a single GPU. DeepDendrite: DeepDendrite on a single GPU.

On the DeepDendrite platform, both the full-spine and the few-spine models achieve an 8-fold speedup compared with CoreNEURON on the GPU platform and a 100-fold speedup compared with serial NEURON on the CPU platform (Fig. 5e; Supplementary Table 1), while producing identical simulation results (Supplementary Figs. 4 and 8). The DHS method therefore enables explorations of dendritic excitability under more realistic anatomical conditions.

Discussion
In this work, we propose the DHS method to parallelize the computation of the Hines method55, and we mathematically demonstrate that DHS provides an optimal solution without any loss of precision. We implement DHS on the GPU hardware platform and use GPU memory boosting techniques to refine it (Fig. 3). When simulating a large number of neurons with complex morphologies, DHS with memory boosting achieves a 15-fold speedup compared with the GPU method used in CoreNEURON (Supplementary Table 1) and up to a 1,500-fold speedup compared with the serial Hines method on the CPU platform (Fig. 4; Supplementary Fig. 3 and Supplementary Table 1). Furthermore, we develop the GPU-based DeepDendrite framework by integrating DHS into CoreNEURON. Finally, as a demonstration of the capacity of DeepDendrite, we present a representative application: examining spine computation in a detailed pyramidal neuron model with 25,000 spines. Further in this section, we elaborate on how we have expanded the DeepDendrite framework to enable efficient training of biophysically detailed neural networks, in order to explore the hypothesis that dendrites improve robustness against adversarial attacks56. We show that DeepDendrite can support both neuroscience and AI-related detailed network simulations at unprecedented speed, thereby significantly promoting detailed simulation in neuroscience and its potential for AI exploration in the future.

Decades of effort have been invested in speeding up the Hines method with parallel methods. Early work mainly focused on network-level parallelization. In network simulations, each cell independently solves its corresponding linear equations with the Hines method. Network-level parallel methods distribute a network over multiple threads and parallelize the computation across cell groups, one cell group per thread57,58. With network-level methods, detailed networks can be simulated on clusters or supercomputers59. In recent years, GPUs have been used for detailed network simulation. Because a GPU contains massive numbers of computing units, one thread is usually assigned a single cell rather than a cell group35,60,61. With further optimization, GPU-based methods achieve much higher efficiency in network simulation. However, the computation inside each cell is still serial in network-level methods, so they cannot cope with the problem when the "Hines matrix" of each cell scales large.

Cellular-level parallel methods further parallelize the computation inside each cell.
The main idea of cellular-level parallel methods is to split each cell into several sub-blocks and parallelize the computation of those sub-blocks27,28. However, typical cellular-level methods (e.g., the "multi-split" method28) pay little attention to the parallelization strategy, and the lack of a fine parallelization strategy results in unsatisfactory performance. To achieve higher efficiency, some studies obtain finer-grained parallelization by introducing extra computational operations29,38,62 or by making approximations at some crucial compartments while solving the linear equations63,64. These finer-grained parallelization strategies can achieve higher efficiency but lack the full numerical accuracy of the original Hines method.

Unlike previous methods, DHS adopts the finest-grained parallelization strategy, i.e., compartment-level parallelization. By modeling the problem of "how to parallelize" as a combinatorial optimization problem, DHS provides an optimal compartment-level parallelization strategy. Moreover, DHS does not introduce any extra operations or value approximations, so it achieves the lowest computational cost while retaining the same numerical accuracy as the original Hines method.

Dendritic spines are the most abundant microstructures in the brain for projection neurons in the cortex, hippocampus, cerebellum, and basal ganglia. As spines receive most of the excitatory inputs in the central nervous system, electrical signals generated by spines are the main driving force for large-scale neuronal activity in the forebrain and cerebellum10,11. The structure of the spine, with an enlarged spine head and a very thin spine neck, leads to surprisingly high input impedance at the spine head, which could reach up to 500 MΩ according to experimental data combined with detailed compartmental modeling48,65. Because of this high input impedance, a single synaptic input can evoke a "giant" EPSP (~20 mV) at the level of the spine head48,66 and thereby boost NMDA currents and voltage-gated ion-channel currents in the spine11. However, in classic detailed compartmental models, all spines are replaced by the F coefficient modifying the dendritic cable geometry54. This approach may compensate for the leak and capacitive currents of the spines, but it cannot reproduce the high input impedance at the spine head, which may weaken excitatory synaptic inputs, particularly NMDA currents, thereby reducing the nonlinearity of the neuron's input-output curve. Our modeling results are in line with this interpretation.

On the other hand, the electrical compartmentalization of the spine is always accompanied by biochemical compartmentalization8,52,67, resulting in a drastic increase of internal [Ca2+] within the spine and a cascade of molecular processes involving synaptic plasticity of importance for learning and memory. Intriguingly, the biochemical processes triggered by learning, in turn, remodel the spine's morphology, enlarging (or shrinking) the spine head or elongating (or shortening) the spine neck, which significantly alters the spine's electrical properties67,68,69,70. Such experience-dependent changes in spine morphology, also referred to as "structural plasticity", have been widely observed in vivo in the visual cortex71,72, somatosensory cortex73,74, motor cortex75, hippocampus9, and basal ganglia76. They play a critical role in motor and spatial learning as well as memory formation.
However, due to the computational costs, nearly all detailed network models exploit the "F-factor" approach to replace actual spines and are thus unable to explore spine functions at the system level. By taking advantage of our framework and the GPU platform, we can run a few thousand detailed neuron models, each with tens of thousands of spines, on a single GPU, while remaining ~100 times faster than the traditional serial method on a single CPU (Fig. 5e). This enables the exploration of structural plasticity in large-scale circuit models across diverse brain regions.

Another critical issue is how to link dendrites to brain functions at the systems/network level. It is well established that dendrites can perform comprehensive computations on synaptic inputs owing to their enriched ion channels and local biophysical membrane properties5,6,7. For example, cortical pyramidal neurons can carry out sublinear synaptic integration at the proximal dendrite but progressively shift to supralinear integration at the distal dendrite77. Moreover, distal dendrites can produce regenerative events such as dendritic sodium spikes, calcium spikes, and NMDA spikes/plateau potentials6,78. Such dendritic events are widely observed in mouse and even human cortical neurons in vitro6,79, and they may support various logical operations6,79 or gating functions80,81. Recently, in vivo recordings in awake or behaving mice have provided strong evidence that dendritic spikes/plateau potentials are crucial for orientation selectivity in the visual cortex82, sensory-motor integration in the whisker system83,84, and spatial navigation in the hippocampal CA1 region85.

To establish causal links between dendrites and animal (including human) behavior, large-scale biophysically detailed models of neuronal circuits are a promising computational tool. However, running a large-scale detailed circuit model of 10,000-100,000 neurons typically requires the computing power of supercomputers, and it is even more challenging to fit such models to in vivo data, because fitting requires iterative simulations of the models. The DeepDendrite framework can directly support many state-of-the-art large-scale circuit models86,87,88 that were originally developed with NEURON. Moreover, with our framework, a single GPU card such as a Tesla A100 can readily support the operation of detailed circuit models of up to 10,000 neurons, thus providing an efficient and affordable way for regular laboratories to develop and fit their own large-scale detailed models.

Recent work on unraveling dendritic roles in task-specific learning has achieved remarkable results in two directions, i.e., solving challenging tasks such as the image-classification dataset ImageNet with simplified dendritic networks19, and exploring the full learning potential of more realistic neuron models20,21,22. However, there is a trade-off between model size and biological detail, as increases in network scale are often achieved by sacrificing neuron-level complexity20,21. Moreover, more detailed neuron models are less mathematically tractable and computationally more expensive89.

There has also been progress on the role of active dendrites in ANNs for computer-vision tasks. Iyer et al.90 proposed a novel ANN architecture with active dendrites, demonstrating competitive results in multi-task and continual learning.
Jones and Kording91 used a binary tree to approximate dendritic branching and provided valuable insights into the influence of tree structure on the computational capacity of single neurons. Bird et al.92 proposed a dendritic normalization rule based on biophysical behavior, offering an interesting perspective on the contribution of dendritic arbor structure to computation. While these studies offer valuable insights, they primarily rely on abstractions derived from spatially extended neurons and do not fully exploit the detailed biological properties and spatial information of dendrites. Further investigation is needed to unveil the potential of leveraging more realistic neuron models for understanding the shared mechanisms underlying brain computation and deep learning.

In response to these challenges, we developed DeepDendrite, a tool that uses the Dendritic Hierarchical Scheduling (DHS) method to significantly reduce computational costs and incorporates an I/O module and a learning module to handle large datasets. With DeepDendrite, we successfully implemented a three-layer hybrid neural network, the Human Pyramidal Cell Network (HPC-Net) (Fig. 6a, b). This network demonstrated efficient training in image classification tasks, achieving an approximately 25-fold speedup compared with training on a traditional CPU-based platform (Fig. 6f; Supplementary Table 1).

Fig. 6: a Illustration of the Human Pyramidal Cell Network (HPC-Net) for image classification. Images are transformed into spike trains and fed into the network model. Learning is triggered by error signals propagated from the soma to the dendrites. b Training with mini-batches. Multiple networks are simulated simultaneously with different images as inputs. The total weight update ΔW is computed as the average of the ΔWi from each network. c Comparison of the HPC-Net before and after training. Left, visualization of hidden-neuron responses to a specific input before (top) and after (bottom) training. Right, distribution of hidden-layer weights (from input to hidden layer) before (top) and after (bottom) training. d Workflow of the transfer adversarial attack experiment. We first generate adversarial samples of the test set on a 20-layer ResNet, then use these adversarial samples (noisy images) to test the classification accuracy of models trained on clean images. e Prediction accuracy of each model on adversarial samples after training for 30 epochs on the MNIST (left) and Fashion-MNIST (right) datasets. f Run time of training and testing for the HPC-Net. The batch size is set to 16. Left, run time of training one epoch. Right, run time of testing. Parallel NEURON + Python: training and testing on a single CPU with multiple cores, using 40-process-parallel NEURON to simulate the HPC-Net and extra Python code to support mini-batch training. DeepDendrite: training and testing the HPC-Net on a single GPU with DeepDendrite.

Moreover, it is widely recognized that the performance of artificial neural networks (ANNs) can be degraded by adversarial attacks93, i.e., intentionally crafted perturbations designed to fool ANNs. Intriguingly, an existing hypothesis suggests that dendrites and synapses may naturally defend against such attacks56. Our experimental results with HPC-Net lend support to this hypothesis: networks endowed with detailed dendritic structures showed increased resilience to transfer adversarial attacks94 compared with standard ANNs, as shown on the MNIST95 and Fashion-MNIST96 datasets (Fig. 6d, e).
This evidence implies that the inherent biophysical properties of dendrites could be pivotal in augmenting the robustness of ANNs against adversarial interference. Nonetheless, further studies are needed to validate these findings on more challenging datasets such as ImageNet97.

In conclusion, DeepDendrite has shown remarkable potential in image classification tasks, opening up exciting future directions and possibilities. To further advance DeepDendrite and the application of biologically detailed dendritic models in AI tasks, we may focus on developing multi-GPU systems and exploring applications in other domains, such as natural language processing (NLP), where dendritic filtering properties align well with the inherently noisy and ambiguous nature of human language. Challenges include testing scalability on larger-scale problems, understanding performance across various tasks and domains, and addressing the computational complexity introduced by additional biological principles, such as active dendrites. By overcoming these limitations, we can further advance the understanding and capabilities of biophysically detailed dendritic neural networks, potentially uncovering new advantages, enhancing their robustness against adversarial attacks and noisy inputs, and ultimately bridging the gap between neuroscience and modern AI.

Methods
Simulation with DHS
CoreNEURON35 (https://github.com/BlueBrain/CoreNeuron) uses the NEURON architecture25 and is optimized for both memory usage and computational speed. We implemented our Dendritic Hierarchical Scheduling (DHS) method in the CoreNEURON environment by modifying its source code. All models that can be simulated on GPU with CoreNEURON can also be simulated with DHS by executing the following command:

coreneuron_exec -d /path/to/models -e time --cell-permute 3 --cell-nthread 16 --gpu

The usage options are listed in Table 1.

Accuracy of the simulation using cellular-level parallel computation
To ensure the accuracy of the simulation, we first need to define the correctness of a cellular-level parallel algorithm, i.e., to judge whether it will generate identical solutions compared with proven correct serial methods, such as the Hines method used in the NEURON simulation platform. Based on the theory of parallel computing34, a parallel algorithm will yield a result identical to that of its corresponding serial algorithm if and only if the data-processing order in the parallel algorithm is consistent with the data dependency of the serial method. The Hines method has two symmetrical phases: triangularization and back-substitution. By analyzing the serial Hines method55, we find that its data dependency can be formulated as a tree structure, where the nodes of the tree represent the compartments of the detailed neuron model. In the triangularization process, the value of each node depends on its children nodes. In contrast, during the back-substitution process, the value of each node depends on its parent node (Fig. 1d). Thus, we can compute nodes on different branches in parallel, as their values do not depend on each other.
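The tree-structured dependency can be made explicit with a small serial solver sketch in Python. The coefficient convention used below (a[i] is node i's entry in its parent's equation, b[i] is the parent's entry in node i's equation) is our own illustrative choice and may differ from NEURON's internal layout; the point is only that triangularization proceeds children-before-parents and back-substitution parents-before-children.

import numpy as np

def hines_solve(parent, d, a, b, r):
    # parent[i] < i for all non-root nodes; parent[0] = -1 (root = soma).
    # d, a, b, r are NumPy arrays of per-node coefficients and right-hand sides.
    n = len(d)
    d = d.astype(float)
    r = r.astype(float)
    # Triangularization: eliminate each node from its parent's equation.
    # A node is touched only after all of its children (which have larger indices).
    for i in range(n - 1, 0, -1):
        p = parent[i]
        f = a[i] / d[i]
        d[p] -= f * b[i]
        r[p] -= f * r[i]
    # Back-substitution: a node can be solved only after its parent.
    v = np.empty(n)
    v[0] = r[0] / d[0]
    for i in range(1, n):
        v[i] = (r[i] - b[i] * v[parent[i]]) / d[i]
    return v

A compartment-level parallel scheme such as DHS reorders these loops so that independent nodes (on different branches) are handled by different threads, while preserving exactly these two dependency rules.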
Based on the data dependency of the serial Hines method, we propose three conditions that ensure a parallel method yields solutions identical to those of the serial Hines method: (1) the tree morphology and the initial values of all nodes are identical to those in the serial Hines method; (2) in the triangularization phase, a node can be processed if and only if all its children nodes have already been processed; (3) in the back-substitution phase, a node can be processed only if its parent node has already been processed. Once a parallel computing method satisfies these three conditions, it produces solutions identical to those of the serial method.

Computational cost of cellular-level parallel methods
To theoretically evaluate the run time, i.e., the efficiency, of serial and parallel computing methods, we introduce and formulate the concept of computational cost as follows: given a tree T and k threads (basic computing units) to perform triangularization, parallel triangularization is equivalent to dividing the node set V of T into n subsets, i.e., V = {V1, V2, ..., Vn}, where the size of each subset |Vi| ≤ k, i.e., at most k nodes can be processed at each step since there are only k threads. The triangularization phase then proceeds in the order V1 → V2 → ... → Vn, and nodes in the same subset can be processed in parallel. We therefore define n (the number of subsets in the partition of V) as the computational cost of the parallel computing method. In short, the computational cost of a parallel method is the number of steps it takes in the triangularization phase. Because back-substitution is symmetrical with triangularization, the total cost of the entire equation-solving phase is twice that of the triangularization phase.

Mathematical scheduling problem
Based on the simulation accuracy and the computational cost, we formulate the parallelization problem as a mathematical scheduling problem: given a tree T = {V, E} and a positive integer k, where V is the node set and E is the edge set, define a partition P(V) = {V1, V2, ..., Vn}, with |Vi| ≤ k for 1 ≤ i ≤ n, where |Vi| denotes the cardinality of subset Vi, i.e., the number of nodes in Vi, and for each node v ∈ Vi, all its children nodes {c | c ∈ children(v)} must lie in a previous subset Vj with 1 ≤ j < i. Our goal is to find an optimal partition P*(V) whose computational cost |P*(V)| is minimal.

Here, subset Vi consists of all nodes that are computed at the i-th step (Fig. 2e), so |Vi| ≤ k, meaning that we can compute at most k nodes at each step because the number of available threads is k. The restriction "for each node v ∈ Vi, all its children nodes {c | c ∈ children(v)} must lie in a previous subset Vj with 1 ≤ j < i" indicates that a node v can be processed only if all its child nodes have already been processed.

DHS implementation
We aim to find an optimal way to parallelize the computation of solving the linear equations for each neuron model by solving the mathematical scheduling problem above. To obtain the optimal partition, DHS first analyzes the topology and calculates the depth d(v) of every node v ∈ V. The following two steps are then executed iteratively until each node v ∈ V has been assigned to a subset: (1) find all candidate nodes and put them into the candidate set Q; a node is a candidate only if all its child nodes have been processed or it has no child nodes.
(2) If |Q| ≤ k, i.e., the number of candidate nodes is smaller than or equal to the number of available threads, remove all nodes from Q and put them into subset Vi; otherwise, remove the k deepest nodes from Q and add them to subset Vi. Mark these nodes as processed (Fig. 2d). After filling subset Vi, go back to step (1) to fill the next subset Vi+1.

Correctness proof for DHS
After applying DHS to a neural tree T = {V, E}, we obtain a partition P(V) = {V1, V2, ..., Vn}, with |Vi| ≤ k for 1 ≤ i ≤ n. Nodes in the same subset Vi are computed in parallel, so it takes n steps to perform triangularization and n steps to perform back-substitution. We now demonstrate that the reordering of the computation in DHS yields a result identical to that of the serial Hines method.

The partition P(V) obtained from DHS determines the computation order of all nodes of a neural tree. Below we show that the computation order determined by P(V) satisfies the correctness conditions. P(V) is obtained from the given neural tree T, and the operations in DHS modify neither the tree topology nor the values of the tree nodes (the corresponding values in the linear equations), so the tree morphology and the initial values of all nodes are unchanged, which satisfies condition 1: the tree morphology and initial values of all nodes are identical to those in the serial Hines method. In triangularization, nodes are processed from subset V1 to Vn. As shown in the implementation of DHS, all nodes in subset Vi are selected from the candidate set Q, and a node can be put into Q only if all its child nodes have been processed. So the child nodes of all nodes in Vi lie in {V1, V2, ..., Vi-1}, meaning that a node is only computed after all its children have been processed, which satisfies condition 2: in triangularization, a node can be processed if and only if all its child nodes have already been processed. In back-substitution, the computation order is the opposite of that in triangularization, i.e., from Vn to V1. As shown before, the child nodes of all nodes in Vi lie in {V1, V2, ..., Vi-1}, so the parent nodes of the nodes in Vi lie in {Vi+1, Vi+2, ..., Vn}, which satisfies condition 3: in back-substitution, a node can be processed only if its parent node has already been processed.

Optimality proof for DHS
The idea of the proof is that any other optimal solution can be transformed into the DHS solution without increasing the number of steps, which shows that the DHS solution is optimal. For each subset Vi in P(V), DHS moves the k (number of threads) deepest nodes from the corresponding candidate set Qi into Vi. If the number of nodes in Qi is smaller than k, all nodes are moved from Qi to Vi. For simplicity, we introduce a quantity denoting the depth sum of the k deepest nodes in Qi; all subsets in P(V) satisfy this max-depth criterion (Supplementary Fig. 6a). We now show that selecting the deepest nodes at each iteration yields an optimal partition. If there is an optimal partition P*(V) = {V*1, V*2, ...} containing subsets that do not satisfy the max-depth criterion, we can modify the subsets of P*(V) so that every subset consists of the deepest nodes from the candidate set while the number of subsets, |P*(V)|, remains the same after the modification.

Without loss of generality, we start from the first subset that does not satisfy the criterion, V*i.
There are two possible cases in which V*i fails to satisfy the max-depth criterion: (1) |V*i| < k while some valid candidate nodes in Qi are not included in V*i; (2) |V*i| = k but the nodes in V*i are not the k deepest nodes in Qi.

For case (1), because some candidate nodes are not put into V*i, these nodes must appear in subsequent subsets. Since |V*i| < k, we can move the corresponding nodes from the subsequent subsets into V*i, which does not increase the number of subsets and makes V*i satisfy the criterion (Supplementary Fig. 6b, top). For case (2), since |V*i| = k, the deeper nodes that were not moved from the candidate set into V*i must have been added to subsequent subsets (Supplementary Fig. 6b, bottom). These deeper nodes can be moved from the later subsets into V*i with the following method. Assume that, when filling V*i, a node v is picked while one of the k deepest nodes, v', is still left in Qi, so that v' is put into a subsequent subset V*j (j > i). We first move v from V*i to V*i+1 and then modify subset V*i+1 as follows: if |V*i+1| ≤ k and no node in V*i+1 is the parent of node v, stop modifying the later subsets; otherwise, modify V*i+1 as follows (Supplementary Fig. 6c): if the parent node of v is in V*i+1, move that parent node to V*i+2; otherwise, move the node with minimal depth from V*i+1 to V*i+2. After adjusting V*i+1, modify the subsequent subsets V*i+2, ..., V*j-1 with the same strategy. Finally, move v' from V*j to V*i.

With the modification strategy described above, we can replace all shallower nodes in V*i with the k deepest nodes in Qi while keeping the number of subsets, |P*(V)|, unchanged. We can modify, with the same strategy, all subsets in P*(V) that do not contain the deepest nodes. Finally, all subsets V*i ∈ P*(V) satisfy the max-depth criterion, and |P*(V)| is unchanged after the modification.

In conclusion, DHS generates a partition P(V) in which every subset Vi ∈ P(V) satisfies the max-depth condition. Any other optimal partition P*(V) can be modified so that its structure is the same as that of P(V), i.e., each subset consists of the deepest nodes of the candidate set, while |P*(V)| stays the same. Therefore, the partition P(V) obtained from DHS is one of the optimal partitions.

GPU implementation and memory boosting
To achieve high memory throughput, the GPU provides a memory hierarchy of (1) global memory, (2) cache, and (3) registers, where global memory has large capacity but low throughput, while registers have small capacity but high throughput. We aim to boost memory throughput by leveraging this memory hierarchy.

GPUs use the SIMT (Single-Instruction, Multiple-Thread) architecture. Warps are the primary scheduling units on the GPU (a warp is a group of 32 parallel threads), and a warp executes the same instruction on different data for its different threads46. Properly organizing the nodes is therefore essential for the threads grouped into warps, ensuring that DHS obtains the same results as the serial Hines method. When implementing DHS on the GPU, we first group all cells into multiple warps based on their morphologies; cells with identical morphologies are grouped into the same warp. We then apply DHS to all neurons, assigning the compartments of each neuron to multiple threads. Because neurons are grouped into warps, the threads belonging to the same neuron lie in the same warp. The intrinsic synchronization within a warp therefore keeps the computation order consistent with the data dependency of the serial Hines method.
Finally, when the threads in a warp load pre-aligned and consecutively stored data from global memory, they can make full use of the cache, which leads to high memory throughput, whereas accessing scattered data reduces memory throughput. After node reordering and thread rearrangement, we therefore permute the data in global memory so that its layout is consistent with the computation order, allowing warps to load consecutively stored data while running the program. Moreover, we place the necessary temporary variables in registers rather than in global memory. Registers have much higher throughput, so using registers further accelerates DHS.

Full-spine and few-spine biophysical models
We used a published human pyramidal neuron model51 with membrane capacitance cm = 0.44 μF cm-2, membrane resistance rm = 48,300 Ω cm2, and axial resistivity ra = 261.97 Ω cm. In this model, all dendrites were modeled as passive cables while the soma was active. The leak reversal potential was El = -83.1 mV. Ion channels such as Na+ and K+ channels were inserted at the soma and the initial axon, with reversal potentials ENa = 67.6 mV and EK = -102 mV, respectively. All these parameters were set as in the model of Eyal et al.51; for further details please refer to the published model (ModelDB, accession No. 238347).

In the few-spine model, the membrane capacitance and maximum leak conductance of the dendritic cables more than 60 μm away from the soma were multiplied by a spine factor F to approximate dendritic spines. In this model, F was set to 1.9. Only the spines that receive synaptic inputs were explicitly attached to the dendrites.

In the full-spine model, all spines were explicitly attached to the dendrites. We calculated the spine density from the reconstructed neuron in Eyal et al.51. The spine density was set to 1.3 μm-1, and each cell contained 24,994 spines on dendrites more than 60 μm away from the soma.

The morphologies and biophysical mechanisms of the spines were the same in the few-spine and full-spine models. The spine neck had length Lneck = 1.35 μm and diameter Dneck = 0.25 μm, whereas the length and diameter of the spine head were both 0.944 μm, i.e., the spine head area was 2.8 μm2. Both the spine neck and the spine head were modeled as passive cables with a reversal potential of -86 mV. The specific membrane capacitance, membrane resistance, and axial resistivity were the same as those of the dendrites.

Synaptic inputs
We studied neuronal excitability for both distributed and clustered synaptic inputs. All activated synapses were placed at the tip of the spine head. For distributed inputs, the activated spines were randomly distributed over the entire dendritic tree. For clustered inputs, each cluster consisted of 20 activated synapses uniformly distributed on a single, randomly selected dendritic branch. All synapses were activated simultaneously during the simulation. AMPA-based and NMDA-based synaptic currents were simulated as in the work of Eyal et al.51: the AMPA conductance was modeled as a double-exponential function and the NMDA conductance as a voltage-dependent double-exponential function. For the AMPA model, the rise and decay time constants were set to 0.3 and 1.8 ms; for the NMDA model, they were set to 8.019 and 34.9884 ms, respectively. The maximum conductances of AMPA and NMDA were 0.73 nS and 1.31 nS.

Background noise
We attached background noise to each cell to simulate a more realistic environment. The noise was implemented as Poisson spike trains with a constant rate of 1.0 Hz.
Each background spike train started at t = 10 ms and lasted until the end of the simulation. We generated 400 background spike trains for each cell and attached them to randomly selected synapses. The model and the specific parameters of the synaptic currents were the same as described in Synaptic inputs, except that the maximum conductance of NMDA was uniformly distributed from 1.57 to 3.275, resulting in a higher AMPA to NMDA ratio.

Exploring neuronal excitability
We studied the spike probability when multiple synapses were activated simultaneously. For distributed inputs, we tested 14 cases, ranging from 0 to 240 activated synapses. For clustered inputs, we tested 9 cases in total, activating from 0 to 12 clusters, each cluster consisting of 20 synapses. For each case of both distributed and clustered inputs, we computed the spike probability from 50 random samples. The spike probability was defined as the ratio of the number of samples in which the neuron fired to the total number of samples. All 1,150 samples were simulated simultaneously on our DeepDendrite platform, reducing the simulation time from days to minutes.

Performing AI tasks with the DeepDendrite platform
Conventional detailed neuron simulators lack two functionalities important for modern AI tasks: (1) alternately performing simulations and weight updates without heavy reinitialization, and (2) simultaneously processing multiple stimulus samples in a batch-like manner. Here we present the DeepDendrite platform, which supports both biophysical simulation and deep learning tasks with detailed dendritic models. DeepDendrite consists of three modules (Supplementary Fig. 5): (1) an I/O module; (2) a DHS-based simulation module; (3) a learning module. When training a biophysically detailed model to perform learning tasks, users first define the learning rule and then feed all training samples to the detailed model for learning. At each step during training, the I/O module picks a specific stimulus and its corresponding teacher signal (if needed) from the training samples and attaches the stimulus to the network model. The DHS-based simulation module then initializes the model and runs the simulation. After the simulation, the learning module updates all synaptic weights according to the difference between the model's responses and the teacher signals.

HPC-Net model
Image classification is a typical task in the field of AI: a model should learn to recognize the content of a given image and output the corresponding label. Here we present the HPC-Net, a network consisting of detailed models of human pyramidal neurons that can learn to perform image classification using the DeepDendrite platform.

HPC-Net has three layers, i.e., an input layer, a hidden layer, and an output layer. The neurons in the input layer receive spike trains converted from images as their input. Hidden-layer neurons receive the output of the input-layer neurons and deliver responses to the neurons in the output layer. The responses of the output-layer neurons are taken as the final output of HPC-Net. Neurons in adjacent layers are fully connected. For each image stimulus, we first convert each normalized pixel to a homogeneous spike train. For a pixel with coordinates (x, y) in the image, the corresponding spike train has a constant interspike interval τISI(x, y) (in ms), which is determined by the pixel value p(x, y) as shown in Eq. (1).

In our experiments, the simulation for each stimulus lasted 50 ms. All spike trains started at 9 + τISI ms and lasted until the end of the simulation.
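Because Eq. (1) is not reproduced in this text, the following Python sketch only illustrates the conversion pipeline described above; the inverse-proportional mapping from pixel value to interspike interval used here is a hypothetical stand-in for the paper's actual Eq. (1), and the function name and parameters are our own.

import numpy as np

def image_to_spike_trains(image, t_sim=50.0, t_offset=9.0, isi_scale=10.0):
    # image: 2-D array with pixel values already normalized to [0, 1].
    # Returns a dict mapping pixel coordinates (x, y) to spike-time arrays (ms).
    # ISI = isi_scale / p is a hypothetical placeholder for Eq. (1):
    # brighter pixels give shorter intervals, i.e., higher input rates.
    spike_trains = {}
    for (x, y), p in np.ndenumerate(image):
        if p <= 0:
            spike_trains[(x, y)] = np.array([])  # dark pixel: no spikes
            continue
        isi = isi_scale / p
        first = t_offset + isi                   # trains start at 9 + ISI ms
        spike_trains[(x, y)] = np.arange(first, t_sim, isi)
    return spike_trains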
Then we attached all spike trains to the input-layer neurons in a one-to-one manner. The synaptic current triggered by a spike arriving at time t0 follows a conductance-based synapse model in which v is the postsynaptic voltage, the reversal potential Esyn = 1 mV, the maximum synaptic conductance gmax = 0.05 μS, and the time constant τ = 0.5 ms.

Neurons in the input layer were modeled as passive single-compartment models with the following parameters: membrane capacitance cm = 1.0 μF cm-2, membrane resistance rm = 10^4 Ω cm2, axial resistivity ra = 100 Ω cm, and reversal potential of the passive cable El = 0 mV.

The hidden layer consists of a group of detailed human pyramidal neuron models that receive the somatic voltages of the input neurons. The morphology was taken from Eyal et al. [51], and all neurites were modeled as passive cables with specific membrane capacitance cm = 1.5 μF cm-2, membrane resistance rm = 48,300 Ω cm2, axial resistivity ra = 261.97 Ω cm, and reversal potential El = 0 mV. Each input neuron can make multiple synaptic connections at randomly selected locations on the dendrites of a hidden neuron. The current of the k-th synapse made by the i-th input neuron on hidden neuron j's dendrite is defined as in Eq. (4); it depends on the synaptic conductance, the synaptic weight, a ReLU-like somatic activation function, and the somatic voltage of the i-th input neuron at time t.

Neurons in the output layer were also modeled as passive single-compartment models, and each hidden neuron made exactly one synaptic connection to each output neuron. All specific parameters were set to the same values as for the input neurons. Synaptic currents activated by hidden neurons also take the form of Eq. (4).

Image classification with HPC-Net
For each input image, we first normalized all pixel values to the range 0.0-1.0, converted the normalized pixels to spike trains, and attached the trains to the input neurons. The somatic voltages of the output neurons are used to compute the predicted probability of each class, as shown in Eq. (6), where Pi is the probability of the i-th class predicted by HPC-Net, the mean somatic voltage of the i-th output neuron is taken from 20 ms to 50 ms, and C is the number of classes, which equals the number of output neurons. The class with the maximum predicted probability is the final classification result. In this paper, we built HPC-Net with 784 input neurons, 64 hidden neurons, and 10 output neurons.
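As an illustration of this readout, the sketch below converts the output neurons' somatic voltages into class probabilities. Eq. (6) is not reproduced here, so a softmax over the mean somatic voltages (averaged over the stated 20-50 ms window) is assumed; this is a standard choice that pairs naturally with the cross-entropy loss introduced in the next section, but the exact form should be taken from the paper.

```python
import numpy as np

def predicted_probabilities(v_out, t, t0=20.0, t1=50.0):
    """Map output-neuron somatic voltages to class probabilities.

    v_out: array of shape (C, T), somatic voltage of each of the C output
           neurons at the T recorded time points (mV).
    t:     array of shape (T,), the corresponding times (ms).
    """
    window = (t >= t0) & (t <= t1)
    v_mean = v_out[:, window].mean(axis=1)  # mean somatic voltage per output neuron
    z = v_mean - v_mean.max()               # shift for numerical stability
    return np.exp(z) / np.exp(z).sum()      # assumed softmax readout

# predicted_class = int(np.argmax(predicted_probabilities(v_out, t)))
```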
Synaptic plasticity rule for HPC-Net
Inspired by previous work [36], we use a gradient-based learning rule to train HPC-Net on the image classification task. The loss function is the cross-entropy, given in Eq. (7), where Pi is the predicted probability of class i and yi indicates the actual class of the stimulus image: yi = 1 if the input image belongs to class i and yi = 0 otherwise.

When training HPC-Net, we compute the update of the weight wijk (the synaptic weight of the k-th synapse connecting neuron i to neuron j) at each time step; after the simulation of each image stimulus, wijk is updated as shown in Eq. (8). Here η is the learning rate, and the update at time t depends on the somatic voltages of neurons i and j, on the current and conductance of the k-th synapse made by neuron i on neuron j, and on the transfer resistance rijk between the k-th connected compartment on neuron j's dendrites and neuron j's soma; ts = 30 ms and te = 50 ms are the start and end times of the learning window, respectively. For output neurons, the error term is computed as shown in Eq. (10). For hidden neurons, the error term is calculated from the error terms of the output layer, as given in Eq. (11). As the output neurons are single-compartment models, their transfer resistance equals the input resistance of the corresponding compartment; both transfer and input resistances are computed by NEURON.

Mini-batch training is a typical technique in deep learning for achieving higher prediction accuracy and faster convergence, and DeepDendrite supports it. When training HPC-Net with a mini-batch of size N, we make N copies of HPC-Net. During training, each copy is fed a different sample from the batch. DeepDendrite first computes the weight update for each copy separately; after all copies in the current batch have finished, the average weight update is computed and the weights of all copies are updated by this same amount, as sketched below.
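A minimal sketch of this batch-averaged update follows. The model-copy objects and their weight_update routine are hypothetical placeholders; the point is only that each copy computes its update (Eq. 8) on its own sample and the shared weights then move by the mean of those updates, once per batch.

```python
import numpy as np

def train_on_batch(weights, copies, samples):
    """One mini-batch step with batch size N = len(copies) = len(samples).

    weights: 1-D array of shared synaptic weights (all copies start identical).
    copies:  list of N model copies; copy.weight_update(sample) is a hypothetical
             routine returning that copy's accumulated weight update for `sample`.
    samples: list of N training samples, one per copy.
    """
    updates = [copy.weight_update(sample) for copy, sample in zip(copies, samples)]
    weights += np.mean(updates, axis=0)  # every copy is updated by this same amount
    return weights
```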
Robustness against adversarial attacks with HPC-Net
To demonstrate the robustness of HPC-Net, we tested its prediction accuracy on adversarial examples and compared it with an analogous ANN (one with the same 784-64-10 structure and ReLU activations; for an exact comparison with HPC-Net, each input neuron made only one synaptic connection to each hidden neuron). We first trained HPC-Net and the ANN on the original training set (clean images). We then added adversarial noise to the test set and measured the prediction accuracy of both models on this perturbed test set. We used Foolbox [98, 99] to generate the adversarial noise with the FGSM method [93]. The ANN was trained with PyTorch [100], and HPC-Net was trained with DeepDendrite. The adversarial noise was generated on a separate network model, a 20-layer ResNet [101], with the noise level varying from 0.02 to 0.2. We ran experiments on two typical datasets, MNIST [95] and Fashion-MNIST [96]. The results show that the prediction accuracy of HPC-Net is 19% and 16.72% higher than that of the analogous ANN on these two datasets, respectively.

Reporting summary
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.

Data availability
The data supporting the findings of this study are available within the paper, the Supplementary Information, and the Source Data files provided with this paper. The source code and data used to reproduce the results in Figs. 3-6 are available at https://github.com/pkuzyc/DeepDendrite. The MNIST dataset is publicly available at http://yann.lecun.com/exdb/mnist. The Fashion-MNIST dataset is publicly available at https://github.com/zalandoresearch/fashion-mnist. Source data are provided with this paper.

Code availability
The source code of DeepDendrite, as well as the models and code used to reproduce Figs. 3-6 of this study, are available at https://github.com/pkuzyc/DeepDendrite.

References
1. McCulloch, W. S. & Pitts, W. A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys. 5, 115-133 (1943).
2. LeCun, Y., Bengio, Y. & Hinton, G. Deep learning. Nature 521, 436-444 (2015).
3. Poirazi, P., Brannon, T. & Mel, B. W. Arithmetic of subthreshold synaptic summation in a model CA1 pyramidal cell. Neuron 37, 977-987 (2003).
4. London, M. & Häusser, M. Dendritic computation. Annu. Rev. Neurosci. 28, 503-532 (2005).
5. Branco, T. & Häusser, M. The single dendritic branch as a fundamental functional unit in the nervous system. Curr. Opin. Neurobiol. 20, 494-502 (2010).
6. Stuart, G. J. & Spruston, N. Dendritic integration: 60 years of progress. Nat. Neurosci. 18, 1713-1721 (2015).
7. Poirazi, P. & Papoutsi, A. Illuminating dendritic function with computational models. Nat. Rev. Neurosci. 21, 303-321 (2020).
8. Yuste, R. & Denk, W. Dendritic spines as basic functional units of neuronal integration. Nature 375, 682-684 (1995).
9. Engert, F. & Bonhoeffer, T. Dendritic spine changes associated with hippocampal long-term synaptic plasticity. Nature 399, 66-70 (1999).
10. Yuste, R. Dendritic spines and distributed circuits. Neuron 71, 772-781 (2011).
11. Yuste, R. Electrical compartmentalization in dendritic spines. Annu. Rev. Neurosci. 36, 429-449 (2013).
12. Rall, W. Branching dendritic trees and motoneuron membrane resistivity. Exp. Neurol. 1, 491-527 (1959).
13. Segev, I. & Rall, W. Computational study of an excitable dendritic spine. J. Neurophysiol. 60, 499-523 (1988).
14. Silver, D. et al. Mastering the game of Go with deep neural networks and tree search. Nature 529, 484-489 (2016).
15. Silver, D. et al. A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. Science 362, 1140-1144 (2018).
16. McCloskey, M. & Cohen, N. J. Catastrophic interference in connectionist networks: the sequential learning problem. Psychol. Learn. Motiv. 24, 109-165 (1989).
17. French, R. M. Catastrophic forgetting in connectionist networks. Trends Cogn. Sci. 3, 128-135 (1999).
18. Naud, R. & Sprekeler, H. Sparse bursts optimize information transmission in a multiplexed neural code. Proc. Natl Acad. Sci. USA 115, E6329-E6338 (2018).
19. Sacramento, J., Costa, R. P., Bengio, Y. & Senn, W. Dendritic cortical microcircuits approximate the backpropagation algorithm. In Advances in Neural Information Processing Systems 31 (NeurIPS 2018) (NeurIPS, 2018).
20. Payeur, A., Guerguiev, J., Zenke, F., Richards, B. A. & Naud, R. Burst-dependent synaptic plasticity can coordinate learning in hierarchical circuits. Nat. Neurosci. 24, 1010-1019 (2021).
21. Bicknell, B. A. & Häusser, M. A synaptic learning rule for exploiting nonlinear dendritic computation. Neuron 109, 4001-4017 (2021).
22. Moldwin, T., Kalmenson, M. & Segev, I. The gradient clusteron: a model neuron that learns to solve classification tasks via dendritic nonlinearities, structural plasticity, and gradient descent. PLoS Comput. Biol. 17, e1009015 (2021).
23. Hodgkin, A. L. & Huxley, A. F. A quantitative description of membrane current and its application to conduction and excitation in nerve. J. Physiol. 117, 500-544 (1952).
24. Rall, W. Theory of physiological properties of dendrites. Ann. N. Y. Acad. Sci. 96, 1071-1092 (1962).
25. Hines, M. L. & Carnevale, N. T. The NEURON simulation environment. Neural Comput. 9, 1179-1209 (1997).
26. Bower, J. M. & Beeman, D. In The Book of GENESIS: Exploring Realistic Neural Models with the General Neural Simulation System (eds Bower, J. M. & Beeman, D.) 17-27 (Springer New York, 1998).
27. Hines, M. L., Eichner, H. & Schürmann, F. Neuron splitting in compute-bound parallel network simulations enables runtime scaling with twice as many processors. J. Comput. Neurosci. 25, 203-210 (2008).
28. Hines, M. L., Markram, H. & Schürmann, F. Fully implicit parallel simulation of single neurons. J. Comput. Neurosci. 25, 439-448 (2008).
29. Ben-Shalom, R., Liberman, G. & Korngreen, A. Accelerating compartmental modeling on a graphical processing unit. Front. Neuroinform. 7, 4 (2013).
30. Tsuyuki, T., Yamamoto, Y. & Yamazaki, T. Efficient numerical simulation of neuron models with spatial structure on graphics processing units. In Proc. 2016 International Conference on Neural Information Processing (eds Hirose, A. et al.) 279-285 (Springer International Publishing, 2016).
31. Vooturi, D. T., Kothapalli, K. & Bhalla, U. S. Parallelizing Hines matrix solver in neuron simulations on GPU. In Proc. IEEE 24th International Conference on High Performance Computing (HiPC) 388-397 (IEEE, 2017).
32. Huber, F. Efficient tree solver for Hines matrices on the GPU. Preprint at https://arxiv.org/abs/1810.12742 (2018).
33. Korte, B. & Vygen, J. Combinatorial Optimization: Theory and Algorithms 6th edn (Springer, 2018).
34. Gebali, F. Algorithms and Parallel Computing (Wiley, 2011).
35. Kumbhar, P. et al. CoreNEURON: an optimized compute engine for the NEURON simulator. Front. Neuroinform. 13, 63 (2019).
36. Urbanczik, R. & Senn, W. Learning by the dendritic prediction of somatic spiking. Neuron 81, 521-528 (2014).
37. Ben-Shalom, R., Aviv, A., Razon, B. & Korngreen, A. Optimizing ion channel models using a parallel genetic algorithm on graphical processors. J. Neurosci. Methods 206, 183-194 (2012).
38. Mascagni, M. A parallelizing algorithm for computing solutions to arbitrarily branched cable neuron models. J. Neurosci. Methods 36, 105-114 (1991).
39. McDougal, R. A. et al. Twenty years of ModelDB and beyond: building essential modeling tools for the future of neuroscience. J. Comput. Neurosci. 42, 1-10 (2017).
40. Migliore, M., Messineo, L. & Ferrante, M. Dendritic Ih selectively blocks temporal summation of unsynchronized distal inputs in CA1 pyramidal neurons. J. Comput. Neurosci. 16, 5-13 (2004).
41. Hemond, P. et al. Distinct classes of pyramidal cells exhibit mutually exclusive firing patterns in hippocampal area CA3b. Hippocampus 18, 411-424 (2008).
42. Hay, E., Hill, S., Schürmann, F., Markram, H. & Segev, I. Models of neocortical layer 5b pyramidal cells capturing a wide range of dendritic and perisomatic active properties. PLoS Comput. Biol. 7, e1002107 (2011).
43. Masoli, S., Solinas, S. & D'Angelo, E. Action potential processing in a detailed Purkinje cell model reveals a critical role for axonal compartmentalization. Front. Cell. Neurosci. 9, 47 (2015).
44. Lindroos, R. et al. Basal ganglia neuromodulation over multiple temporal and structural scales: simulations of direct pathway MSNs investigate the fast onset of dopaminergic effects and predict the role of Kv4.2. Front. Neural Circuits 12, 3 (2018).
45. Migliore, M. et al. Synaptic clusters function as odor operators in the olfactory bulb. Proc. Natl Acad. Sci. USA 112, 8499-8504 (2015).
46. NVIDIA. CUDA C++ Programming Guide. https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html (2021).
47. NVIDIA. CUDA C++ Best Practices Guide. https://docs.nvidia.com/cuda/cuda-c-best-practices-guide/index.html (2021).
48. Harnett, M. T., Makara, J. K., Spruston, N., Kath, W. L. & Magee, J. C. Synaptic amplification by dendritic spines enhances input cooperativity. Nature 491, 599-602 (2012).
49. Chiu, C. Q. et al. Compartmentalization of GABAergic inhibition by dendritic spines. Science 340, 759-762 (2013).
50. Tønnesen, J., Katona, G., Rózsa, B. & Nägerl, U. V. Spine neck plasticity regulates compartmentalization of synapses. Nat. Neurosci. 17, 678-685 (2014).
51. Eyal, G. et al. Human cortical pyramidal neurons: from spines to spikes via models. Front. Cell. Neurosci. 12, 181 (2018).
52. Koch, C. & Zador, A. The function of dendritic spines: devices subserving biochemical rather than electrical compartmentalization. J. Neurosci. 13, 413-422 (1993).
53. Koch, C. Dendritic spines. In Biophysics of Computation (Oxford University Press, 1999).
54. Rapp, M., Yarom, Y. & Segev, I. The impact of parallel fiber background activity on the cable properties of cerebellar Purkinje cells. Neural Comput. 4, 518-533 (1992).
55. Hines, M. Efficient computation of branched nerve equations. Int. J. Bio-Med. Comput. 15, 69-76 (1984).
56. Nayebi, A. & Ganguli, S. Biologically inspired protection of deep networks from adversarial attacks. Preprint at https://arxiv.org/abs/1703.09202 (2017).
57. Goddard, N. H. & Hood, G. Large-scale simulation using parallel GENESIS. In The Book of GENESIS: Exploring Realistic Neural Models with the General Neural Simulation System (eds Bower, J. M. & Beeman, D.) 349-379 (Springer New York, 1998).
58. Migliore, M., Cannia, C., Lytton, W. W., Markram, H. & Hines, M. L. Parallel network simulations with NEURON. J. Comput. Neurosci. 21, 119 (2006).
59. Lytton, W. W. et al. Simulation neurotechnologies for advancing brain research: parallelizing large networks in NEURON. Neural Comput. 28, 2063-2090 (2016).
60. Valero-Lara, P. et al. cuHinesBatch: solving multiple Hines systems on GPUs, Human Brain Project. In Proc. 2017 International Conference on Computational Science 566-575 (IEEE, 2017).
61. Akar, N. A. et al. Arbor: a morphologically detailed neural network simulation library for contemporary high-performance computing architectures. In Proc. 27th Euromicro International Conference on Parallel, Distributed and Network-Based Processing (PDP) 274-282 (IEEE, 2019).
62. Ben-Shalom, R. et al. NeuroGPU: accelerating multi-compartment, biophysically detailed neuron simulations on GPUs. J. Neurosci. Methods 366, 109400 (2022).
63. Rempe, M. J. & Chopp, D. L. A predictor-corrector algorithm for reaction-diffusion equations associated with neural activity on branched structures. SIAM J. Sci. Comput. 28, 2139-2161 (2006).
64. Kozloski, J. & Wagner, J. An ultrascalable solution to large-scale neural tissue simulation. Front. Neuroinform. 5, 15 (2011).
65. Jayant, K. et al. Targeted intracellular voltage recordings from dendritic spines using quantum-dot-coated nanopipettes. Nat. Nanotechnol. 12, 335-342 (2017).
66. Palmer, L. M. & Stuart, G. J. Membrane potential changes in dendritic spines during action potentials and synaptic input. J. Neurosci. 29, 6897-6903 (2009).
67. Nishiyama, J. & Yasuda, R. Biochemical computation for spine structural plasticity. Neuron 87, 63-75 (2015).
68. Yuste, R. & Bonhoeffer, T. Morphological changes in dendritic spines associated with long-term synaptic plasticity. Annu. Rev. Neurosci. 24, 1071-1089 (2001).
69. Holtmaat, A. & Svoboda, K. Experience-dependent structural synaptic plasticity in the mammalian brain. Nat. Rev. Neurosci. 10, 647-658 (2009).
70. Caroni, P., Donato, F. & Muller, D. Structural plasticity upon learning: regulation and functions. Nat. Rev. Neurosci. 13, 478-490 (2012).
71. Keck, T. et al. Massive restructuring of neuronal circuits during functional reorganization of adult visual cortex. Nat. Neurosci. 11, 1162 (2008).
72. Hofer, S. B., Mrsic-Flogel, T. D., Bonhoeffer, T. & Hübener, M. Experience leaves a lasting structural trace in cortical circuits. Nature 457, 313-317 (2009).
73. Trachtenberg, J. T. et al. Long-term in vivo imaging of experience-dependent synaptic plasticity in adult cortex. Nature 420, 788-794 (2002).
74. Marik, S. A., Yamahachi, H., McManus, J. N., Szabo, G. & Gilbert, C. D. Axonal dynamics of excitatory and inhibitory neurons in somatosensory cortex. PLoS Biol. 8, e1000395 (2010).
75. Xu, T. et al. Rapid formation and selective stabilization of synapses for enduring motor memories. Nature 462, 915-919 (2009).
76. Albarran, E., Raissi, A., Jáidar, O., Shatz, C. J. & Ding, J. B. Enhancing motor learning by increasing the stability of newly formed dendritic spines in the motor cortex. Neuron 109, 3298-3311 (2021).
77. Branco, T. & Häusser, M. Synaptic integration gradients in single cortical pyramidal cell dendrites. Neuron 69, 885-892 (2011).
78. Major, G., Larkum, M. E. & Schiller, J. Active properties of neocortical pyramidal neuron dendrites. Annu. Rev. Neurosci. 36, 1-24 (2013).
79. Gidon, A. et al. Dendritic action potentials and computation in human layer 2/3 cortical neurons. Science 367, 83-87 (2020).
80. Doron, M., Chindemi, G., Muller, E., Markram, H. & Segev, I. Timed synaptic inhibition shapes NMDA spikes, influencing local dendritic processing and global I/O properties of cortical neurons. Cell Rep. 21, 1550-1561 (2017).
81. Du, K. et al. Cell-type-specific inhibition of the dendritic plateau potential in striatal spiny projection neurons. Proc. Natl Acad. Sci. USA 114, E7612-E7621 (2017).
82. Smith, S. L., Smith, I. T., Branco, T. & Häusser, M. Dendritic spikes enhance stimulus selectivity in cortical neurons in vivo. Nature 503, 115-120 (2013).
83. Xu, N.-l. et al. Nonlinear dendritic integration of sensory and motor input during an active sensing task. Nature 492, 247-251 (2012).
84. Takahashi, N., Oertner, T. G., Hegemann, P. & Larkum, M. E. Active cortical dendrites modulate perception. Science 354, 1587-1590 (2016).
85. Sheffield, M. E. & Dombeck, D. A. Calcium transient prevalence across the dendritic arbour predicts place field properties. Nature 517, 200-204 (2015).
86. Markram, H. et al. Reconstruction and simulation of neocortical microcircuitry. Cell 163, 456-492 (2015).
87. Billeh, Y. N. et al. Systematic integration of structural and functional data into multi-scale models of mouse primary visual cortex. Neuron 106, 388-403 (2020).
88. Hjorth, J. et al. The microcircuits of striatum in silico. Proc. Natl Acad. Sci. USA 117, 202000671 (2020).
89. Guerguiev, J., Lillicrap, T. P. & Richards, B. A. Towards deep learning with segregated dendrites. eLife 6, e22901 (2017).
90. Iyer, A. et al. Avoiding catastrophe: active dendrites enable multi-task learning in dynamic environments. Front. Neurorobot. 16, 846219 (2022).
91. Jones, I. S. & Kording, K. P. Might a single neuron solve interesting machine learning problems through successive computations on its dendritic tree? Neural Comput. 33, 1554-1571 (2021).
92. Bird, A. D., Jedlicka, P. & Cuntz, H. Dendritic normalisation improves learning in sparsely connected artificial neural networks. PLoS Comput. Biol. 17, e1009202 (2021).
93. Goodfellow, I. J., Shlens, J. & Szegedy, C. Explaining and harnessing adversarial examples. In 3rd International Conference on Learning Representations (ICLR) (ICLR, 2015).
94. Papernot, N., McDaniel, P. & Goodfellow, I. Transferability in machine learning: from phenomena to black-box attacks using adversarial samples. Preprint at https://arxiv.org/abs/1605.07277 (2016).
95. Lecun, Y., Bottou, L., Bengio, Y. & Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 86, 2278-2324 (1998).
96. Xiao, H., Rasul, K. & Vollgraf, R. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. Preprint at http://arxiv.org/abs/1708.07747 (2017).
97. Bartunov, S. et al. Assessing the scalability of biologically-motivated deep learning algorithms and architectures. In Advances in Neural Information Processing Systems 31 (NeurIPS 2018) (NeurIPS, 2018).
98. Rauber, J., Brendel, W. & Bethge, M. Foolbox: a Python toolbox to benchmark the robustness of machine learning models. In Reliable Machine Learning in the Wild Workshop, 34th International Conference on Machine Learning (2017).
99. Rauber, J., Zimmermann, R., Bethge, M. & Brendel, W. Foolbox Native: fast adversarial attacks to benchmark the robustness of machine learning models in PyTorch, TensorFlow, and JAX. J. Open Source Softw. 5, 2607 (2020).
100. Paszke, A. et al. PyTorch: an imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32 (NeurIPS 2019) (NeurIPS, 2019).
101. He, K., Zhang, X., Ren, S. & Sun, J. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 770-778 (IEEE, 2016).

Acknowledgements
This work was supported by the National Key R&D Program of China (No. 2018B030338001) to K.D. and T.H., the National Natural Science Foundation of China (No. 61825101) to Y.T., the Swedish Research Council (VR-M-2020-01652), the Swedish e-Science Research Centre (SeRC), EU/Horizon 2020 No. 945539 (HBP SGA3), and KTH Digital Futures to J.H.K. Computing resources for the simulations were provided by Swedish national infrastructure allocations.

This article is available under a Creative Commons Attribution 4.0 International (CC BY 4.0) licence.