Yichen Zhang, Gan He, Lei Ma, Xiaofei Liu, J. J. Johannes Hjorth, Alexander Kozlov, Kai He, Shenjian Zhang, Jeanette Hellgren Kotaleski, Yonghong Tian, Sten Grillner, Kai Du, Tiejun Huang

Abstract

Biophysically detailed multi-compartment models are powerful tools for exploring the computational principles of the brain, and they also serve as a theoretical framework for generating algorithms for artificial intelligence (AI) systems. However, their high computational cost severely limits applications in both neuroscience and AI. Here we present a Dendritic Hierarchical Scheduling (DHS) method that markedly accelerates such simulations. We prove theoretically that the DHS implementation is computationally optimal and accurate. This GPU-based method runs 2-3 orders of magnitude faster than the classic serial Hines method on a conventional CPU platform. We build a DeepDendrite framework, which integrates the DHS method with the GPU computing engine of the NEURON simulator, and demonstrate applications of DeepDendrite in neuroscience tasks: we investigate how spatial patterns of spine inputs affect neuronal excitability in a detailed human pyramidal neuron model with 25,000 spines. We further discuss the potential of DeepDendrite for AI applications.

Introduction

Deciphering the coding and computational principles of neurons is essential for neuroscience. The mammalian brain consists of thousands of different types of neurons, each with unique morphological and biophysical properties. The point-neuron abstraction, in which neurons are treated as simple summation units, is still widely applied in neural computation, particularly in neural network analysis. In recent years, modern artificial intelligence (AI) has exploited this principle and developed powerful tools such as artificial neural networks (ANNs). Beyond comprehensive computation at the single-neuron level, subcellular compartments such as neuronal dendrites can also act as independent computational units that perform nonlinear operations. Moreover, dendritic spines, the small protrusions that densely cover the dendrites of spiny neurons, can compartmentalize synaptic signals, separating them from their parent dendrites both ex vivo and in vivo.

Simulation with biophysically detailed neuron models provides a theoretical framework for linking biological details to computational principles. Such simulators allow us to model neurons with realistic dendritic morphologies, intrinsic ionic conductances, and extrinsic synaptic inputs. Cable theory, which describes the biophysical membrane properties of dendrites as passive cables, provides a mathematical account of how electrical signals enter and spread through complex neuronal processes.

Beyond its profound impact on neuroscience, biophysically detailed neuron models have recently been used to bridge the gap between neuronal structural and biophysical detail and AI. The dominant technique in the modern AI field is the ANN composed of point neurons, an analog of biological neural networks. Yet the human brain outperforms ANNs in domains involving more dynamic and noisy environments. Recent theoretical studies suggest that dendritic integration is essential for generating efficient learning algorithms that may surpass backprop in parallel information processing.
Moreover, a single detailed multi-compartment model can learn network-level nonlinear computations of point-neuron networks by adjusting only its synaptic strengths, demonstrating the full potential of detailed models for building more powerful brain-like AI systems. It is therefore a high priority to extend paradigms in brain-like AI from single detailed neuron models to large-scale biophysically detailed networks.

A long-standing challenge of the detailed simulation approach lies in its extraordinarily high computational cost, which has severely limited its application in neuroscience and AI. To improve efficiency, the classic Hines method reduces the time complexity of solving the linear equations from O(n^3) to O(n) and has been widely adopted as the core algorithm in popular simulators such as NEURON and GENESIS. However, when a simulation involves many biophysically detailed dendrites with dendritic spines, the matrix of the linear equations (the "Hines matrix") scales accordingly with the growing number of dendrites or spines (Fig. 1e), making the Hines method impractical, as it places a very heavy load on the whole simulation.

Fig. 1 | (a) A reconstructed layer-5 pyramidal neuron model and the mathematical formulation used with detailed neuron models. (b) Workflow of numerically simulating detailed neuron models. (c) An example of the linear equations in the simulation. (d) Data dependency of the Hines method when solving the linear equations. (e) The size of the Hines matrix scales with model complexity: the number of linear equations to solve increases markedly as models become more detailed. (f) Computational cost (steps taken in the equation-solving phase) of the serial Hines method on different types of neuron models. (g, h) Different parts of a neuron are assigned to different processing units in parallel methods (middle, right), shown in different colors. (i) Computational cost of three methods when solving the equations of a pyramidal model with spines. Run time indicates the time consumed by a 1 s simulation (solving the equations 40,000 times with a time step of 0.025 ms). p-Hines: parallel method in CoreNEURON (on GPU); Branch: branch-based parallel method (on GPU); DHS: Dendritic Hierarchical Scheduling method (on GPU).

Over the past decades, great progress has been made in accelerating the Hines method with cellular-level parallel methods, which parallelize the computation of different parts within each cell. However, current cellular-level parallel methods often lack an effective parallelization strategy, or lack sufficient numerical accuracy compared with the original Hines method.

Here we develop a fully automatic, numerically accurate, and optimized simulation tool that greatly improves computational efficiency and reduces computational cost. Moreover, this tool can be used seamlessly to build and test neural networks with biological details for machine learning and AI applications.
Using theories of parallel computing, we prove that our algorithm provides optimal scheduling without any loss of accuracy. In addition, we optimize DHS for modern GPU chips by exploiting the GPU memory hierarchy and memory access mechanisms. Together, DHS can accelerate computation 60-1,500 times (Supplementary Table 1) compared with the classic NEURON simulator, at identical accuracy.

To enable detailed dendritic simulation for use in AI, we next built the DeepDendrite framework, integrating the DHS-based CoreNEURON platform (an optimized computing engine for NEURON) as the simulation engine with two auxiliary modules (an I/O module and a learning module) that support dendritic learning algorithms during simulation. DeepDendrite runs on GPU hardware and supports both regular simulation tasks in neuroscience and learning tasks in AI.

Last but not least, we present several applications of DeepDendrite that address critical challenges in neuroscience and AI: (1) We show how spatial patterns of dendritic spine inputs affect neuronal activity in neurons bearing spines across their entire dendritic trees ("full-spine models"). DeepDendrite enables us to investigate neuronal computation in a simulated human pyramidal neuron model with ~25,000 dendritic spines. (2) In the Discussion, we also consider the potential of DeepDendrite in the context of AI, specifically for building ANNs with morphologically detailed human pyramidal neurons. All source code for DeepDendrite, the full-spine models, and the detailed dendritic network model is publicly available online (see Code Availability). Our open-source learning framework can readily be integrated with other dendritic learning rules, such as learning rules for nonlinear (fully active) dendrites, burst-dependent synaptic plasticity, and learning with spike prediction. Overall, our study provides a complete toolset with the potential to transform the current computational neuroscience ecosystem. Harnessing the power of GPU computing, we anticipate that these tools will facilitate system-level exploration of the computational principles of the brain, as well as promote the interaction between neuroscience and modern AI.

Results

Dendritic hierarchical scheduling (DHS)

Computing ionic currents and solving linear equations are the two critical phases when simulating biophysically detailed neurons; both are time-consuming and impose heavy computational loads. Fortunately, computing the ionic currents of each compartment is a fully independent process, so it parallelizes naturally on devices with massive numbers of parallel computing units, such as GPUs. Consequently, solving the linear equations becomes the remaining bottleneck for parallelization (Fig. 1a-f).

To address this bottleneck, cellular-level parallel methods have been developed, which accelerate single-cell computation by "splitting" a cell into several blocks that can be computed in parallel. However, such methods rely heavily on prior knowledge to generate practical strategies for how to split a single neuron into blocks (Fig. 1g; Supplementary Fig. 1). They therefore become less efficient for neurons with asymmetric morphologies, e.g., pyramidal neurons and Purkinje neurons.
We aim to develop a more efficient and precise parallel method for simulating biologically detailed neural networks. First, we establish accuracy criteria for cellular-level parallel methods: following the data dependency of the Hines method, we propose three conditions that guarantee a parallel method yields solutions identical to those of the serial Hines method (see Methods).

Based on simulation accuracy and computational cost, we then formulate the parallelization problem as a mathematical scheduling problem (see Methods). With k parallel threads, we can compute at most k nodes at each step, but a node may only be computed after all of its child nodes have been processed; our goal is to find a strategy that minimizes the number of steps for the whole procedure.

To generate an optimal partition, we propose a method named Dendritic Hierarchical Scheduling (DHS) (theoretical proofs are presented in the Methods). The DHS method involves two steps, analyzing the dendritic topology and finding the best partition: (1) Given a detailed model, we first obtain the corresponding dependency tree and compute the depth of each node (the depth of a node is its number of ancestors) on the tree (Fig. 2a-c). (2) After topology analysis, we search the candidate nodes and select the k deepest of them (a node is a candidate only if all of its child nodes have been processed). This procedure repeats until all nodes are processed (Fig. 2d); a code sketch is given after the example below.

Fig. 2 | DHS workflow. (a) DHS processes the deepest candidate nodes at each iteration. (b) Illustration of computing node depth for a compartmental model: the model is first converted into a tree structure, then the depth of each node is computed. (c) Topology analysis on different neuron models. Six neurons with different morphologies are shown; for each model the soma is chosen as the root of the tree, so node depth increases from the soma (0) to the distal dendrites. (d) Illustration of executing DHS on the model in (b) with four threads. Candidates: nodes that can be processed. Selected candidates: the k nodes chosen by DHS. Processed nodes: nodes processed in earlier steps. (e) Parallelization strategy obtained by DHS after the process in (d): DHS reduces serial node processing from 14 steps to 5 by distributing nodes across multiple threads. (f) Relative cost, i.e., the ratio of the computational cost of DHS to that of the serial Hines method, when DHS is applied with different numbers of threads to different types of models.

Take a simplified model with 15 compartments as an example: using the serial Hines method, it takes 14 steps to process all the nodes, whereas DHS with four parallel units partitions the nodes into five subsets (Fig. 2d, e). Because nodes in the same subset are processed in parallel, it takes only five steps to process all nodes with DHS.
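The greedy selection described above fits in a few lines. The following Python sketch illustrates the idea (it is not the DeepDendrite source); the hypothetical parent array encodes the dependency tree, and nodes are assumed to be numbered so that every child has a larger index than its parent:

import heapq
from collections import defaultdict

def dhs_partition(parent, k):
    # parent[i] is the parent index of compartment i (root has parent -1);
    # nodes are assumed numbered so that parent[i] < i.
    n = len(parent)
    children = defaultdict(list)
    for i, p in enumerate(parent):
        if p >= 0:
            children[p].append(i)
    depth = [0] * n                      # depth = number of ancestors
    for i in range(1, n):
        depth[i] = depth[parent[i]] + 1
    unprocessed = {i: len(children[i]) for i in range(n)}
    # max-heap on depth (heapq is a min-heap, so negate depths)
    candidates = [(-depth[i], i) for i in range(n) if unprocessed[i] == 0]
    heapq.heapify(candidates)
    steps = []
    while candidates:
        step = [heapq.heappop(candidates)[1]
                for _ in range(min(k, len(candidates)))]
        for node in step:                # parents may become candidates now
            p = parent[node]
            if p >= 0:
                unprocessed[p] -= 1
                if unprocessed[p] == 0:
                    heapq.heappush(candidates, (-depth[p], p))
        steps.append(step)
    return steps                         # steps[i] is the i-th parallel step

For a tree like the 15-compartment example, dhs_partition(parent, k=4) returns a five-step schedule in which every step holds at most four mutually independent nodes.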
Next, we applied the DHS method to six representative detailed neuron models (selected from ModelDB) with different numbers of threads (Fig. 2f), including cortical and hippocampal pyramidal neurons, a cerebellar Purkinje neuron, a striatal projection neuron (SPN), and an olfactory bulb mitral cell, covering principal neurons in sensory, cortical, and subcortical regions. We then measured the computational cost. The relative computational cost is defined as the ratio of the computational cost of DHS to that of the serial Hines method. The computational cost, i.e., the number of steps taken in the equation-solving phase, drops dramatically with increasing thread numbers. For example, with 16 threads, the computational cost of DHS is 7%-10% of that of the serial Hines method. Intriguingly, the DHS method reaches the lower bound of its computational cost for the neurons presented here when given 16 or even 8 parallel threads (Fig. 2f), suggesting that adding more threads does not further improve performance because of the dependencies between compartments.

Together, DHS enables automated analysis of dendritic topology and optimal partitioning for parallel computing. Notably, DHS finds the optimal partition before the simulation starts, so no extra computation is needed while solving the equations.

Accelerating DHS by GPU memory boosting

DHS computes each neuron with multiple threads, which consumes a large number of threads when performing neural network simulations. In theory, the many streaming processors (SPs) on a GPU should support efficient simulation of large-scale neural networks (Fig. 3a, b). However, we consistently observed that the efficiency of DHS dropped markedly as network size grew, possibly because of scattered data storage or the extra memory accesses caused by loading and writing intermediate results (Fig. 3c, left).

We solve this problem by GPU memory boosting, a method that increases memory throughput by exploiting the memory hierarchy and access mechanisms of the GPU. Given the GPU's memory-loading mechanism, consecutive threads loading aligned, consecutively stored data achieve high memory throughput, whereas accessing scattered data reduces it. To achieve high throughput, we first fix the computation order of the nodes and rearrange the threads according to the number of nodes assigned to them. Then we permute the data storage in global memory according to the computation order, i.e., nodes processed at the same step are stored consecutively in global memory. Furthermore, we use GPU registers to store intermediate results, which strengthens memory throughput further.
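The permutation step can be illustrated with a small NumPy sketch (an illustration of the layout change only; the names steps and arrays are hypothetical). Nodes that the DHS schedule processes in the same step are placed contiguously, so consecutive GPU threads read neighboring addresses:

import numpy as np

def permute_for_coalescing(steps, arrays):
    # Flatten the DHS schedule: new position j holds old node order[j],
    # so nodes computed in the same step become neighbors in memory.
    order = np.array([node for step in steps for node in step])
    new_index = np.empty_like(order)
    new_index[order] = np.arange(len(order))   # old id -> new position
    # Gather every per-node array into the new layout; tree links such as
    # the parent array must be remapped through new_index as well.
    return {name: arr[order] for name, arr in arrays.items()}, new_index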
Experiments on various numbers of pyramidal neurons with spines and on the typical neuron models (Fig. 3d; Supplementary Fig. 2) show that memory boosting achieves a 1.2-3.8x speedup compared with naive DHS.

Fig. 3 | GPU architecture and its memory hierarchy. (a) Each GPU contains massive numbers of processing units (streaming processors); different types of memory have different throughput. Streaming multiprocessors (SMs) each contain multiple streaming processors, registers, and an L1 cache. (b) Applying DHS to two neurons, each with four threads; during simulation, each thread executes on one streaming processor. (c) Memory optimization strategy on the GPU. Top: thread assignment and data storage of DHS before (left) and after (right) memory boosting. Bottom: an example of a single triangularization step when simulating the two neurons in (b). Processors send a data request to load data for each thread from global memory; without memory boosting (left), it takes seven transactions to load all requested data, plus a few extra transactions for intermediate results. (d) Run time of DHS (32 threads per cell) with and without memory boosting on various numbers of layer-5 pyramidal models with spines. (e, f) Speedup from memory boosting on multiple layer-5 pyramidal models with spines; memory boosting brings a 1.6-2x gain.

To test the performance of DHS with GPU memory boosting comprehensively, we selected six typical neuron models and evaluated the run time of solving the cable equations on massive numbers of each model (Fig. 4). We tested DHS with four threads (DHS-4) and sixteen threads (DHS-16) per neuron. Compared with the GPU method in CoreNEURON, DHS-4 and DHS-16 accelerate by about 5 and 15 times, respectively (Fig. 4a). Moreover, compared with the conventional serial Hines method in NEURON running on a single CPU thread, DHS accelerates the simulation by 2-3 orders of magnitude (Supplementary Fig. 3), while keeping identical numerical accuracy in the presence of dense spines (Supplementary Figs. 4 and 8), active dendrites (Supplementary Fig. 7), and different segmentation strategies (Supplementary Fig. 7).

Fig. 4 | (a) Run time of solving the equations for a 1 s simulation on GPU (dt = 0.025 ms, 40,000 iterations in total). CoreNEURON: the parallel method used in CoreNEURON; DHS-4: DHS with four threads per neuron; DHS-16: DHS with 16 threads per neuron. (b, c) Visualization of the partitions produced by DHS-4 and DHS-16; each color indicates a single thread.

DHS creates cell-type-specific optimal partitioning

To gain insight into the working mechanism of the DHS method, we visualized the partitioning process by mapping compartments onto each thread (each color represents a single thread in Fig. 4b, c). The visualization shows that a single thread often switches between different branches (Fig. 4b, c). Interestingly, DHS generates ordered partitions in morphologically symmetric neurons such as the striatal projection neuron (SPN) and the mitral cell, whereas it generates fragmented partitions in morphologically asymmetric neurons such as the pyramidal neurons and the Purkinje cell (Fig. 4b, c), indicating that DHS splits the neural tree at the scale of individual compartments (i.e., tree nodes) rather than branches. This cell-type-specific fine-grained partitioning enables DHS to fully exploit all available threads.

In short, DHS and memory boosting yield a theoretically proven optimal solution for solving linear equations in parallel, with unprecedented efficiency. Building on this principle, we constructed the open-access DeepDendrite platform, which neuroscientists can use to implement models without any specific GPU programming knowledge.

DHS enables spine-level modeling

Since dendritic spines receive most of the excitatory input to cortical and hippocampal pyramidal neurons, striatal projection neurons, etc., their morphologies and plasticity are essential for regulating neuronal excitability. However, spines are too small (~1 μm long) to measure directly in experiments with respect to voltage-dependent processes.
A single spine can be modeled with two compartments: the spine head, where the synapse is located, and the spine neck, which connects the spine head to the dendrite. Theory predicts that the very thin spine neck (0.1-0.5 μm in diameter) electrically isolates the spine head from its parent dendrite, thereby compartmentalizing signals generated at the spine head. However, a detailed model with spines fully distributed over the dendrites ("full-spine model") is computationally very expensive. A common compromise is to scale the membrane capacitance and resistance by a spine factor F instead of modeling all spines explicitly; the factor F approximates the effect of spines on the biophysical properties of the cell membrane.

Inspired by the previous work of Eyal et al., we investigated how different spatial patterns of excitatory inputs formed on dendritic spines shape neuronal activity in a human pyramidal neuron model with explicitly modeled spines (Fig. 5a). Notably, Eyal et al. employed the spine factor F to incorporate spines into dendrites, while only a few activated spines were explicitly attached to the dendrites ("few-spine model" in Fig. 5a). The value of F in their model was computed from the dendritic area and spine area in the reconstructed data. Accordingly, we calculated the spine density from their reconstructed data to make our full-spine model consistent with Eyal's few-spine model. With the spine density set to 1.3 μm-1, the pyramidal neuron model contained about 25,000 spines without altering the model's original morphological and biophysical properties. We then repeated the previous experimental protocols with both full-spine and few-spine models, using the same synaptic input as in Eyal's work but attaching extra background noise to each sample. By comparing the somatic traces (Fig. 5b, c) and spike probability (Fig. 5d) in the full-spine and few-spine models, we found that the full-spine model is much leakier than the few-spine model. In addition, the spike probability triggered by the activation of clustered spines appeared more nonlinear in the full-spine model (solid blue line in Fig. 5d) than in the few-spine model (dashed blue line in Fig. 5d). These results indicate that the conventional F-factor method may underestimate the impact of dense spines on dendritic excitability and nonlinearity.

Fig. 5 | (a) Experiment setup. We examine two major types of models: few-spine models and full-spine models. Few-spine models (two on the left) incorporate the spine area globally into the dendrites and explicitly attach only the spines carrying activated synapses. In full-spine models (two on the right), all spines are explicitly attached over the whole dendritic tree. We explore the effects of clustered and randomly distributed synaptic inputs on the few-spine and full-spine models, respectively. (b) Somatic voltages recorded for the cases in (a); colors of the voltage curves correspond to (a); scale bar: 20 ms, 20 mV. (c) Color-coded voltages during the simulation in (a) at specific times; colors indicate the magnitude of voltage. (d) Somatic spike probability as a function of the number of simultaneously activated synapses (as in Eyal et al.'s work) for the four cases in (a), with background noise attached. (e) Run time of the experiments in (a) with different simulation methods. NEURON: conventional NEURON simulator running on a single CPU core. CoreNEURON: CoreNEURON simulator on a single GPU. DeepDendrite: DeepDendrite on a single GPU.
On the DeepDendrite platform, both full-spine and few-spine models achieved an 8-fold speedup compared with CoreNEURON on the GPU platform and a 100-fold speedup compared with serial NEURON on the CPU platform (Fig. 5e; Supplementary Table 1), while keeping identical simulation results (Supplementary Figs. 4 and 8). The DHS method thus enables exploration of dendritic excitability under more realistic anatomical conditions.

Discussion

In this work, we propose the DHS method to parallelize the computation of the Hines method, and we mathematically demonstrate that DHS provides an optimal solution without any loss of precision. Next, we implement DHS on the GPU hardware platform and use GPU memory boosting to refine it (Fig. 3). When simulating large numbers of neurons with complex morphologies, DHS with memory boosting achieves a 15-fold speedup compared with the GPU method used in CoreNEURON (Supplementary Table 1) and up to a 1,500-fold speedup compared with the serial Hines method on the CPU platform (Fig. 4; Supplementary Fig. 3 and Supplementary Table 1). Furthermore, we develop the GPU-based DeepDendrite framework by integrating DHS into CoreNEURON. Finally, as a demonstration of the capacity of DeepDendrite, we present a representative application: examining spine computation in a detailed pyramidal neuron model with 25,000 spines. Later in this section, we elaborate on how we have extended the DeepDendrite framework to enable efficient training of biophysically detailed neural networks. To explore the hypothesis that dendrites improve robustness against adversarial attacks, we train our network on typical image classification tasks. We show that DeepDendrite can support both neuroscience simulations and AI-related detailed neural network tasks with unprecedented speed, thereby significantly advancing detailed neuroscience simulations and, potentially, future AI explorations.

Decades of effort have been invested in speeding up the Hines method with parallel methods. Early work mainly focused on network-level parallelization. In network simulations, each cell independently solves its corresponding linear equations with the Hines method. Network-level parallel methods distribute a network over multiple threads and parallelize the computation across cell groups, one group per thread. With network-level methods, we can simulate detailed networks on clusters or supercomputers. In recent years, GPUs have been used for detailed network simulation; because a GPU contains massive numbers of computing units, one thread is usually assigned to each cell rather than to a cell group. With further optimization, GPU-based methods achieve much higher efficiency in network simulation. However, in network-level methods the computation inside each cell remains serial, so they cannot cope with the problem when the Hines matrix of each cell scales large.

Cellular-level parallel methods further parallelize the computation inside each cell. Their main idea is to split each cell into several sub-blocks and parallelize the computation of those sub-blocks. However, typical cellular-level methods (e.g., the "multi-split" method) pay little attention to the parallelization strategy, and the lack of a fine-grained strategy results in unsatisfactory performance.
To achieve higher efficiency, some studies obtain finer-grained parallelization by introducing extra computational operations or by approximating some crucial compartments while solving the linear equations. These finer-grained strategies achieve higher efficiency but lack the numerical accuracy of the original Hines method.

Unlike previous methods, DHS adopts the finest-grained parallelization strategy, i.e., compartment-level parallelization. By casting the question of how to parallelize as a combinatorial optimization problem, DHS provides an optimal compartment-level parallelization strategy. Moreover, DHS introduces no extra operations or value approximations, so it achieves the lowest computational cost while retaining the numerical accuracy of the original Hines method.

Dendritic spines are the most abundant microstructures in the brain, covering projection neurons in the cortex, hippocampus, cerebellum, and basal ganglia. As spines receive most of the excitatory input in the central nervous system, electrical signals generated by spines are the main driving force of large-scale neuronal activity in the forebrain and cerebellum. The structure of the spine, with an enlarged head and a very thin neck, leads to surprisingly high input impedance at the spine head, up to 500 MΩ when estimated by combining experimental data with the detailed compartmental modeling approach. Due to such high input impedance, a single synaptic input can evoke a "gigantic" EPSP (~20 mV) at the spine head, thereby boosting NMDA currents and ion channel currents in the spine. However, in classic detailed compartmental models, all spines are replaced by the F coefficient modifying the dendritic cable geometry. This approach may compensate for the spines' leak and capacitance currents, but it cannot reproduce the high input impedance at the spine head, which may weaken excitatory synaptic inputs, particularly NMDA currents, thereby reducing the nonlinearity of the neuron's input-output curve. Our modeling results are in line with this interpretation.

On the other hand, the spine's electrical compartmentalization is always accompanied by biochemical compartmentalization, resulting in a drastic increase of internal [Ca2+] within the spine and a cascade of molecular processes involving synaptic plasticity of importance for learning and memory. Intriguingly, the biochemical processes triggered by learning in turn remodel the spine's morphology, enlarging (or shrinking) the spine head or elongating (or shortening) the spine neck, which significantly alters the spine's electrical capacity. Such experience-dependent changes in spine morphology, also referred to as "structural plasticity", have been widely observed in vivo in the visual cortex, somatosensory cortex, motor cortex, hippocampus, and basal ganglia. They play a critical role in motor and spatial learning as well as in memory formation. However, due to the computational costs, nearly all detailed network models use the F-factor approach in place of actual spines and are thus unable to explore spine function at the system level.
By taking advantage of our framework and the GPU platform, we can run a few thousand detailed neuron models, each with tens of thousands of spines, on a single GPU, while remaining ~100 times faster than the traditional serial method on a single CPU (Fig. 5e). This enables us to explore structural plasticity in large-scale circuit models across diverse brain regions.

Another critical issue is how to link dendrites to brain functions at the systems/network level. It is well established that dendrites can perform comprehensive computations on synaptic inputs owing to their enriched ion channels and local biophysical membrane properties. For example, cortical pyramidal neurons can perform sublinear synaptic integration at proximal dendrites but gradually shift to supralinear integration at distal dendrites. Moreover, distal dendrites can produce regenerative events such as dendritic sodium spikes, calcium spikes, and NMDA spikes/plateau potentials. Such dendritic events are widely observed in vitro in mouse and even human cortical neurons, and they may implement various logical operations or gating functions. Recently, in vivo recordings in awake or behaving mice have provided strong evidence that dendritic spikes/plateau potentials are crucial for orientation selectivity in the visual cortex, sensory-motor integration in the whisker system, and spatial navigation in the hippocampal CA1 region.

To establish the causal link between dendrites and animal (including human) patterns of behavior, large-scale biophysically detailed neural circuit models are a powerful computational tool. However, running a large-scale detailed circuit model of 10,000-100,000 neurons generally requires the computing power of supercomputers. It is even more challenging to optimize such models against in vivo data, as this requires iterative simulation of the models. The DeepDendrite framework can directly support many state-of-the-art large-scale circuit models that were initially developed with NEURON. Moreover, using our framework, a single GPU card such as a Tesla A100 can easily support detailed circuit models of up to 10,000 neurons, providing carbon-efficient and affordable plans for ordinary labs to develop and optimize their own large-scale detailed models.

Recent work unraveling dendritic roles in task-specific learning has achieved remarkable results in two directions: solving challenging tasks, such as the image classification dataset ImageNet, with simplified dendritic networks, and exploring the full learning potential of more realistic neurons. However, there is a trade-off between model size and biological detail, as increases in network scale often come at the expense of neuron-level complexity. Moreover, more detailed neuron models are less mathematically tractable and computationally more expensive.

There has also been progress on the role of active dendrites in ANNs for computer vision tasks. Iyer et al. proposed a novel ANN architecture with active dendrites, demonstrating competitive results in multi-task and continual learning. Jones and Kording used a binary tree to approximate dendritic branching, providing valuable insights into the influence of tree structure on a single neuron's computational capacity. Another study
proposed a dendritic normalization rule based on biophysical behavior, offering an interesting perspective on the contribution of dendritic arbor structure to computation. While these studies offer valuable insights, they rely primarily on abstractions of spatially extended neurons and do not fully exploit the detailed biological properties and spatial information of dendrites. Further investigation is needed to unveil the potential of leveraging more realistic neuron models for understanding the shared mechanisms underlying brain computation and deep learning.

In response to these challenges, we developed DeepDendrite, a tool that uses the Dendritic Hierarchical Scheduling (DHS) method to significantly reduce computational costs and incorporates an I/O module and a learning module to handle large datasets. With DeepDendrite, we implemented a three-layer hybrid neural network, the Human Pyramidal Cell Network (HPC-Net) (Fig. 6a, b). This network trains efficiently on image classification tasks, achieving approximately 25 times speedup compared with training on a traditional CPU-based platform (Fig. 6f; Supplementary Table 1).

Fig. 6 | (a) Illustration of the Human Pyramidal Cell Network (HPC-Net) for image classification. Images are transformed into spike trains and fed into the network model. Learning is triggered by error signals propagated from soma to dendrites. (b) Training with mini-batches: multiple networks are simulated simultaneously with different images as inputs, and the total weight update ΔW is computed as the average of the ΔWi from each network. (c) Comparison of the HPC-Net before and after training. Left: visualization of hidden-neuron responses to a specific input before (top) and after (bottom) training. Right: distribution of hidden-layer weights (input to hidden layer) before (top) and after (bottom) training. (d) Workflow of the transfer adversarial attack experiment. We first generate adversarial samples of the test set on a 20-layer ResNet, then use these adversarial samples (noisy images) to test the classification accuracy of models trained on clean images. (e) Prediction accuracy of each model on adversarial samples after training for 30 epochs on the MNIST (left) and Fashion-MNIST (right) datasets. (f) Run time of training and testing the HPC-Net with a batch size of 16. Left: run time of training one epoch. Right: run time of testing. Parallel NEURON + Python: training and testing on a single CPU with multiple cores, using 40-process-parallel NEURON to simulate the HPC-Net and extra Python code to support mini-batch training. DeepDendrite: training and testing the HPC-Net on a single GPU with DeepDendrite.

Additionally, it is widely recognized that the performance of artificial neural networks (ANNs) can be undermined by adversarial attacks, i.e., intentionally engineered perturbations devised to mislead ANNs. Intriguingly, an existing hypothesis suggests that dendrites and synapses may innately defend against such attacks. Our experimental results with HPC-Net lend support to this hypothesis: networks endowed with detailed dendritic structures demonstrated some increased resilience to transfer adversarial attacks compared with standard ANNs, as evident on the MNIST and Fashion-MNIST datasets (Fig. 6d, e). This evidence implies that the inherent biophysical properties of dendrites could be pivotal in augmenting the robustness of ANNs against adversarial interference.
Nonetheless, further studies are essential to validate these findings on more challenging datasets such as ImageNet.

In conclusion, DeepDendrite has shown remarkable potential in image classification tasks, opening up exciting future directions. To further advance DeepDendrite and the application of biologically detailed dendritic models in AI, we may focus on developing multi-GPU systems and exploring applications in other domains, such as natural language processing (NLP), where dendritic filtering properties align well with the inherently noisy and ambiguous nature of human language. Challenges include testing scalability on larger problems, understanding performance across various tasks and domains, and addressing the computational complexity introduced by novel biological principles such as active dendrites. By overcoming these limitations, we can further advance the understanding and capabilities of biophysically detailed dendritic neural networks, potentially uncovering new advantages, enhancing their robustness against adversarial attacks and noisy inputs, and ultimately bridging the gap between neuroscience and modern AI.

Methods

Simulation with DHS

The CoreNEURON simulator (https://github.com/BlueBrain/CoreNeuron) is based on the NEURON architecture and is optimized for both memory usage and computational speed. We implement our Dendritic Hierarchical Scheduling (DHS) method in the CoreNEURON environment by modifying its source code. All models that can be simulated on GPU with CoreNEURON can also be simulated with DHS by executing the following command:

coreneuron_exec -d /path/to/models -e time --cell-permute 3 --cell-nthread 16 --gpu

The usage options are listed in Table 1.

Accuracy of the simulation using cellular-level parallel computation

To ensure the accuracy of the simulation, we first need to define the correctness of a cellular-level parallel algorithm, i.e., whether it generates solutions identical to those of proven serial methods such as the Hines method used in the NEURON simulation platform. Based on the theory of parallel computing, a parallel algorithm yields a result identical to its corresponding serial algorithm if and only if the data processing order in the parallel algorithm is consistent with the data dependency of the serial method. The Hines method has two symmetric phases: triangularization and back-substitution. By analyzing the serial Hines method, we find that its data dependency can be formulated as a tree structure, where the nodes of the tree represent the compartments of the detailed neuron model. In triangularization, the value of each node depends on its child nodes; conversely, during back-substitution, the value of each node depends on its parent node (Fig. 1d). Nodes on different branches can therefore be computed in parallel, as their values do not depend on each other.
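To make the two phases concrete, here is a minimal Python sketch of the serial Hines sweep on a tree-structured system (an illustration, not NEURON's implementation). It assumes compartments are numbered so that every child has a larger index than its parent; a[i] is the off-diagonal coefficient of node i in its own row (pointing to the parent's column), and b[i] the reciprocal coefficient in the parent's row:

import numpy as np

def hines_solve(parent, a, b, d, rhs):
    # Solve the tree-structured ("Hines") system: row i holds diagonal d[i]
    # and off-diagonal a[i] in the parent's column; the parent's row holds
    # b[i] in column i. Assumes parent[i] < i, with parent[0] == -1 (root).
    n = len(d)
    d, rhs = d.copy(), rhs.copy()
    for i in range(n - 1, 0, -1):        # triangularization: leaves -> root
        p = parent[i]
        f = b[i] / d[i]
        d[p] -= f * a[i]
        rhs[p] -= f * rhs[i]
    x = np.empty(n)
    x[0] = rhs[0] / d[0]                 # back-substitution: root -> leaves
    for i in range(1, n):
        x[i] = (rhs[i] - a[i] * x[parent[i]]) / d[i]
    return x

Each elimination touches only a node and its parent, which is exactly why nodes on disjoint branches can be eliminated concurrently.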
Based on the data dependency of the serial Hines method, we propose three conditions that ensure a parallel method yields the same solutions as the serial Hines method: (1) the tree morphology and initial values of all nodes are the same as in the serial Hines method; (2) in the triangularization phase, a node can be processed if and only if all of its child nodes have already been processed; (3) in the back-substitution phase, a node can be processed only if its parent node has already been processed.

Computational cost of cellular-level parallel computing method

To theoretically evaluate the run time, i.e., the efficiency, of serial and parallel computing methods, we introduce and formulate the concept of computational cost as follows. Given a tree T and k threads (basic computational units) to perform triangularization, parallel triangularization amounts to dividing the node set V of T into n subsets, i.e., V = {V1, V2, ..., Vn}, where the size of each subset satisfies |Vi| <= k, since at most k nodes can be processed per step with only k threads. The triangularization phase processes the subsets in the order V1 -> V2 -> ... -> Vn, and nodes in the same subset can be processed in parallel. We therefore define |P(V)| = n, the number of subsets, as the computational cost of the parallel computing method; in short, the computational cost of a parallel method is the number of steps it takes in the triangularization phase. Because back-substitution is symmetric with triangularization, the total cost of the entire equation-solving phase is twice that of the triangularization phase.

Mathematical scheduling problem

Based on the simulation accuracy and computational cost, we formulate the parallelization problem as a mathematical scheduling problem. Given a tree T = {V, E}, where V is the node set and E is the edge set, and a positive integer k, define a partition P(V) = {V1, V2, ..., Vn} with |Vi| <= k for 1 <= i <= n, where |Vi| denotes the cardinality of subset Vi, i.e., the number of nodes in Vi, such that for each node v in Vi, all of its child nodes {c | c in children(v)} lie in a previous subset Vj with 1 <= j < i. Our goal is to find an optimal partition P*(V) whose computational cost |P*(V)| is minimal.

Here, subset Vi consists of all nodes computed at the i-th step (Fig. 2e), so |Vi| <= k expresses that we can compute at most k nodes per step because only k threads are available. The restriction that for each node v in Vi all child nodes {c | c in children(v)} must lie in a previous subset Vj (1 <= j < i) expresses that a node can be processed only after all of its child nodes have been processed.

DHS implementation

We aim to find an optimal way to parallelize the solving of the linear equations for each neuron model by solving the mathematical scheduling problem above. To obtain the optimal partition, DHS first analyzes the topology and calculates the depth d(v) for all nodes v in V. Then the following two steps are executed iteratively until every node v in V has been assigned to a subset: (1) find all candidate nodes and put them into the candidate set Q; a node is a candidate only if all of its child nodes have been processed, or if it has no child nodes; (2) if |Q| <= k, i.e., the number of candidate nodes is at most the number of available threads, remove all nodes from Q and put them into Vi; otherwise, remove the k deepest nodes from Q and put them into subset Vi. Label these nodes as processed (Fig. 2d). After filling subset Vi, return to step (1) to fill the next subset Vi+1.
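As a quick check that a DHS ordering reproduces the serial result, the sketch below eliminates nodes in the flattened DHS step order (children always precede parents, per condition 2) and back-substitutes in the reverse order (condition 3). It reuses the hypothetical dhs_partition sketch from the Results section; the toy tree and coefficients are made up for illustration:

import numpy as np

# A made-up 15-compartment tree (parent[i] < i, node 0 is the soma/root).
parent = np.array([-1, 0, 0, 1, 1, 2, 2, 3, 4, 5, 6, 7, 8, 9, 10])
rng = np.random.default_rng(0)
n = len(parent)
d = rng.uniform(2.0, 3.0, n)             # diagonally dominant system
a, b = rng.uniform(0.1, 0.5, n), rng.uniform(0.1, 0.5, n)
rhs = rng.normal(size=n)

def solve_in_order(order):
    dd, rr = d.copy(), rhs.copy()
    for i in order:                       # triangularization, children first
        p = parent[i]
        if p >= 0:
            f = b[i] / dd[i]
            dd[p] -= f * a[i]
            rr[p] -= f * rr[i]
    x = np.empty(n)
    for i in reversed(order):             # back-substitution, parents first
        p = parent[i]
        x[i] = (rr[i] - (a[i] * x[p] if p >= 0 else 0.0)) / dd[i]
    return x

serial_order = list(range(n - 1, -1, -1))                   # leaves -> root
dhs_order = [v for step in dhs_partition(parent, k=4) for v in step]
assert np.allclose(solve_in_order(serial_order), solve_in_order(dhs_order))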
Correctness proof for DHS

After applying DHS to a neural tree T = {V, E}, we obtain a partition P(V) = {V1, V2, ..., Vn}, |Vi| <= k, 1 <= i <= n. Nodes in the same subset are computed in parallel, taking n steps each for triangularization and back-substitution. We now demonstrate that the reordering of the computation in DHS yields a result identical to the serial Hines method.

The partition P(V) obtained from DHS determines the computation order of all nodes in the neural tree. Below we demonstrate that this order satisfies the correctness conditions. P(V) is obtained from the given neural tree T; the operations in DHS change neither the tree topology nor the values of the tree nodes (the corresponding values in the linear equations), so the tree morphology and the initial values of all nodes are unchanged, which satisfies condition 1: the tree morphology and initial values of all nodes are the same as in the serial Hines method. In triangularization, the computation proceeds from V1 to Vn. As shown in the DHS implementation, all nodes in subset Vi are selected from the candidate set Q, and a node enters Q only when all of its child nodes have been processed. Thus the child nodes of all nodes in Vi lie in {V1, V2, ..., Vi-1}, meaning a node is computed only after all of its children have been processed, which satisfies condition 2: in triangularization, a node can be processed if and only if all of its child nodes have already been processed. In back-substitution, the computation order is the reverse of that in triangularization, i.e., from Vn to V1. Since the child nodes of all nodes in Vi are in {V1, V2, ..., Vi-1}, the parent nodes of nodes in Vi are in {Vi+1, Vi+2, ..., Vn}, which satisfies condition 3: in back-substitution, a node can be processed only if its parent node has already been processed.

Optimality proof for DHS

The idea of the proof is that any other optimal solution can be transformed into the DHS solution without increasing the number of steps it requires, implying that the DHS solution is optimal. For each subset Vi in P(V), DHS moves the (at most k) deepest nodes from the corresponding candidate set Qi to Vi; if Qi contains fewer than k nodes, all nodes move from Qi to Vi. To simplify, we introduce Di, the sum of the depths of the k deepest nodes in Qi. All subsets in P(V) satisfy the max-depth criterion (Supplementary Fig. 6a): the total depth of the nodes in Vi equals Di, i.e., Vi takes the deepest available candidates. We then prove that selecting the deepest nodes at each iteration yields an optimal partition. If there is an optimal partition P*(V) = {V*1, V*2, ..., V*s} containing subsets that do not satisfy the max-depth criterion, we can modify the subsets of P*(V) so that every subset consists of the deepest nodes from its candidate set Q while the number of subsets |P*(V)| remains the same after modification.

Without loss of generality, we start from the first subset V*i that does not satisfy the criterion.
There are two possible cases in which V*i fails the max-depth criterion: (1) |V*i| < k while some valid candidate nodes in Qi are not placed in V*i; (2) |V*i| = k but the nodes in V*i are not the k deepest nodes in Qi.

For case (1), because some candidate nodes are not placed in V*i, these nodes must appear in subsequent subsets. As |V*i| < k, we can move the corresponding nodes from the subsequent subsets into V*i, which does not increase the number of subsets and makes V*i satisfy the criterion (Supplementary Fig. 6b, top). For case (2), |V*i| = k, and the deeper nodes that were not moved from the candidate set into V*i must appear in subsequent subsets (Supplementary Fig. 6b, bottom). These deeper nodes can be moved into V*i as follows. Assume that after filling V*i, a node v, one of the k deepest candidates, is still in Qi while a shallower node v' was placed in V*i instead; v will then appear in a subsequent subset V*j (j > i). We first move v' from V*i to V*i+1 and put v into V*i, then modify subset V*i+1 as follows: if |V*i+1| <= k and no node in V*i+1 is the parent of v', stop modifying the later subsets. Otherwise, modify V*i+1 as follows (Supplementary Fig. 6c): if the parent node of v' is in V*i+1, move this parent node to V*i+2; else move the node with minimum depth from V*i+1 to V*i+2. After adjusting V*i+1, modify the subsequent subsets V*i+2, V*i+3, ... with the same strategy. Finally, remove v from V*j.

With the modification strategy described above, we can replace all shallower nodes in V*i with the k deepest nodes in Qi while keeping the number of subsets, i.e., |P*(V)|, the same after modification. We can apply the same strategy to every subset of P*(V) that does not contain the deepest nodes. Finally, all subsets V*i in P*(V) satisfy the max-depth criterion, and |P*(V)| does not change under the modification.

In conclusion, DHS generates a partition P(V) in which every subset Vi satisfies the max-depth condition. For any other optimal partition P*(V), we can modify its subsets to give it the same structure as P(V), i.e., each subset consists of the deepest nodes in the candidate set, while keeping |P*(V)| the same. Hence the partition P(V) obtained from DHS is one of the optimal partitions.

GPU implementation and memory boosting

To achieve high memory throughput, the GPU uses a memory hierarchy of (1) global memory, (2) cache, and (3) registers, where global memory has large capacity but low throughput, while registers have low capacity but high throughput. The GPU uses the SIMT (single-instruction, multiple-thread) architecture. Warps are the basic scheduling units on the GPU (a warp is a group of 32 parallel threads); a warp executes the same instructions with different data on its threads. Correct ordering of the nodes is essential for this batching of computation into warps, to ensure that DHS obtains the same results as the serial Hines method. When implementing DHS on the GPU, we first group all cells into warps based on their morphologies; cells with similar morphologies are grouped into the same warp. We then apply DHS to all neurons, assigning the compartments of each neuron to several threads. Because neurons are grouped into warps, the threads of the same neuron belong to the same warp, so the intrinsic synchronization within a warp keeps the computation sequence consistent with the data dependency of the serial Hines method.
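A minimal sketch of this grouping step (illustrative only; the real implementation works on CoreNEURON's internal data structures, and the compartment-count key is our simplification of "similar morphologies"):

def group_cells_into_warps(n_compartments, threads_per_cell, warp_size=32):
    # Sort cells by a simple morphology key (compartment count) so that
    # cells sharing a warp follow similar DHS schedules, then pack them so
    # that all threads of one cell stay inside a single 32-thread warp.
    assert warp_size % threads_per_cell == 0
    cells_per_warp = warp_size // threads_per_cell
    order = sorted(range(len(n_compartments)),
                   key=lambda c: n_compartments[c])
    return [order[i:i + cells_per_warp]
            for i in range(0, len(order), cells_per_warp)]

# e.g., six cells at 16 threads each -> two cells per 32-thread warp
warps = group_cells_into_warps([1423, 902, 1407, 911, 1398, 905], 16)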
When a warp loads aligned and consecutively stored data from global memory, it can make full use of the cache, resulting in high memory throughput, whereas accessing scattered data reduces memory throughput. After compartment assignment and thread rearrangement, we therefore permute the data in global memory to match the computation order, so that warps load consecutively stored data during execution.

Full-spine and few-spine biophysical models

We used the published human pyramidal neuron model. The membrane capacitance cm = 0.44 μF cm-2, membrane resistance rm = 48,300 Ω cm2, and axial resistivity ra = 261.97 Ω cm. In this model, all dendrites were modeled as passive cables, while somas were active. The leak reversal potential El = -83.1 mV. Ion channels such as Na+ and K+ were inserted at the soma and initial axon, with reversal potentials ENa = 67.6 mV and EK = -102 mV, respectively. All these parameters were set as in the model of Eyal et al.; for more details, refer to the published model (ModelDB, accession no. 238347).

In the few-spine model, the membrane capacitance and maximum leak conductance of dendritic cables more than 60 μm from the soma were multiplied by a spine factor F to approximate dendritic spines. In this model, F was set to 1.9. Only the spines receiving synaptic inputs were explicitly attached to the dendrites.

In the full-spine model, all spines were explicitly attached to the dendrites. We calculated the spine density from the reconstructed neuron in Eyal et al. The spine density was set to 1.3 μm-1, and each cell contained 24,994 spines on dendrites more than 60 μm from the soma.

The morphologies and biophysical mechanisms of the spines were identical in the few-spine and full-spine models. The spine neck had length Lneck = 1.35 μm and diameter Dneck = 0.25 μm, while the length and diameter of the spine head were both 0.944 μm, i.e., the spine head area was 2.8 μm2. Both spine neck and spine head were modeled as passive cables with reversal potential El = -86 mV. The specific membrane capacitance, membrane resistance, and axial resistivity were the same as for the dendrites.

Synaptic inputs

We examined neuronal excitability for both distributed and clustered synaptic inputs. All activated synapses were attached to the tip of the spine head. For distributed inputs, the activated synapses were randomly distributed over all dendrites. For clustered inputs, each cluster consisted of 20 activated synapses randomly distributed on a single, randomly selected compartment. AMPA-based and NMDA-based synaptic currents were simulated as in Eyal et al.'s work: AMPA conductance was modeled as a double-exponential function, and NMDA conductance as a voltage-dependent double-exponential function. For AMPA, τrise and τdecay were set to 0.3 and 1.8 ms; for NMDA, τrise and τdecay were 8.019 and 34.9884 ms, respectively. The maximum conductances of AMPA and NMDA were 0.73 nS and 1.31 nS.
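For illustration, a single two-compartment spine with the dimensions above can be written in NEURON's Python interface roughly as follows. This is a minimal sketch, not the published model code: the stand-in parent dendrite, the use of NEURON's built-in Exp2Syn as an AMPA-like synapse, and the NetStim drive (anticipating the background noise described below) are our simplifications.

from neuron import h

dend = h.Section(name="dend")            # stand-in parent dendrite

def attach_spine(parent, loc, name):
    # Attach a passive two-compartment spine (neck + head) at parent(loc).
    neck = h.Section(name=f"{name}_neck")
    head = h.Section(name=f"{name}_head")
    neck.L, neck.diam = 1.35, 0.25       # um, from the full-spine model
    head.L, head.diam = 0.944, 0.944     # cylinder area ~2.8 um^2
    for sec in (neck, head):
        sec.insert("pas")
        for seg in sec:
            seg.pas.e = -86.0            # mV, spine leak reversal
    neck.connect(parent(loc))
    head.connect(neck(1))
    return neck, head

neck, head = attach_spine(dend, 0.5, "spine0")

# AMPA-like double-exponential synapse at the spine head tip
syn = h.Exp2Syn(head(1))
syn.tau1, syn.tau2 = 0.3, 1.8            # ms rise/decay, as in the text

# Poisson-like drive via NetStim (cf. the background noise below)
stim = h.NetStim()
stim.start, stim.number, stim.noise = 10, 1e9, 1
stim.interval = 1000.0                   # ms, i.e., 1 Hz
nc = h.NetCon(stim, syn)
nc.weight[0] = 0.73e-3                   # uS, i.e., 0.73 nS peak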
Background noise

We attached background noise to each cell to simulate a more realistic environment. Noise patterns were implemented as Poisson spike trains with a constant rate of 1.0 Hz, starting at t = 10 ms and lasting until the end of the simulation. We generated 400 noise spike trains for each cell and attached them to randomly selected synapses. The model and specific parameters of the synaptic currents were the same as described in Synaptic inputs, except that the maximum conductance of NMDA was uniformly distributed from 1.57 to 3.275 nS, resulting in a higher AMPA-to-NMDA ratio.

Exploring neuronal excitability

We examined the spike probability when multiple synapses were activated simultaneously. For distributed inputs, we tested 14 cases, ranging from 0 to 240 activated synapses. For clustered inputs, we tested 9 cases in total, activating from 0 to 12 clusters, each cluster consisting of 20 synapses. For each case of both distributed and clustered inputs, we computed the spike probability over 50 random samples; spike probability was defined as the fraction of samples in which the neuron fired. All 1,150 samples were simulated simultaneously on our DeepDendrite platform, reducing the simulation time from days to minutes.

Performing AI tasks with the DeepDendrite platform

Conventional detailed-neuron simulators lack two functionalities important for modern AI tasks: (1) alternately performing simulation and weight updates without heavy reinitialization, and (2) simultaneously processing multiple stimulus samples in a batch-like manner. Here we present the DeepDendrite platform, which supports both biophysical simulation and deep learning tasks with detailed dendritic models. DeepDendrite consists of three modules (Supplementary Fig. 5): (1) an I/O module; (2) a DHS-based simulation module; (3) a learning module. When training a biophysically detailed model on a learning task, users first define the learning rule and then feed all training samples to the detailed model. At each step during training, the I/O module picks a specific stimulus and its corresponding teacher signal (if needed) from the training samples and attaches the stimulus to the network model. The DHS-based simulation module then initializes the model and runs the simulation. After the simulation, the learning module updates all synaptic weights according to the difference between the model responses and the teacher signals. After training, the learned model can achieve performance comparable to an ANN. The testing phase is similar to training, except that all synaptic weights are fixed.

HPC-Net model

Image classification is a typical task in AI: a model must learn to recognize the content of a given image and output the corresponding label. Here we present the HPC-Net, a network of detailed human pyramidal neuron models that can learn to perform image classification tasks by utilizing the DeepDendrite platform. The HPC-Net has three layers: an input layer, a hidden layer, and an output layer. Input-layer neurons receive spike trains converted from images. Hidden-layer neurons receive the output of the input-layer neurons and deliver their responses to the output layer, whose responses are taken as the final output of the HPC-Net. Neurons in adjacent layers are fully connected. For each image stimulus, we first convert each normalized pixel to a homogeneous spike train: for the pixel at coordinates (x, y), the corresponding spike train has a constant interspike interval ISI(x, y) (in ms) determined by the pixel value p(x, y), as shown in Eq. (1):

ISI(x, y) = τ / p(x, y),    (1)

where τ is a constant scaling pixel intensity to firing rate (brighter pixels give shorter intervals).
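As a sketch of this encoding under the reconstruction of Eq. (1) above (the value of τ here is hypothetical; it is not given in the text):

import numpy as np

def pixel_to_spike_train(p, tau=10.0, t_end=50.0):
    # Convert a normalized pixel value p in (0, 1] to a homogeneous spike
    # train with constant interspike interval ISI = tau / p (ms).
    # Trains start at 9 + ISI ms and last until t_end, as in the text.
    if p <= 0:
        return np.array([])           # black pixels emit no spikes
    isi = tau / p
    return np.arange(9.0 + isi, t_end, isi)

spikes = pixel_to_spike_train(0.8)    # e.g., one spike every 12.5 ms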
In our experiment, the simulation of each stimulus lasted 50 ms. All spike trains started at 9 + ISI ms and lasted until the end of the simulation. We then attached the spike trains to the input-layer neurons in a one-to-one manner. The synaptic current triggered by a spike arriving at time t_0 is given by

I_syn(t) = g_max · e^(-(t - t_0)/τ) · (v - E_syn),

where v is the post-synaptic voltage, the reversal potential E_syn = 1 mV, the maximum synaptic conductance g_max = 0.05 μS, and the time constant τ = 0.5 ms.

Neurons in the input layer were modeled with a passive single-compartment model. The specific parameters were set as follows: membrane capacitance c_m = 1.0 μF cm⁻², membrane resistance r_m = 10⁴ Ω cm², axial resistivity r_a = 100 Ω cm, and reversal potential of the passive compartment E_l = 0 mV.

The hidden layer contains a group of human pyramidal neuron models that receive the somatic voltages of the input-layer neurons. The morphology was taken from Eyal et al.51, and all neurons were modeled with passive cables: membrane capacitance c_m = 1.5 μF cm⁻², membrane resistance r_m = 48,300 Ω cm², axial resistivity r_a = 261.97 Ω cm, and reversal potential of all passive cables E_l = 0 mV. Input neurons could make multiple connections to randomly selected locations on the dendrites of hidden neurons. The synaptic current activated by the k-th synapse of the i-th input neuron on the dendrite of neuron j is defined as in Eq. (4), where g_ijk is the synaptic conductance, W_ijk is the synaptic weight, and the conductance is driven by a ReLU-like somatic activation function applied to v_i(t), the somatic voltage of the i-th input neuron at time t.

Neurons in the output layer were also modeled with a passive single-compartment model, and each hidden neuron made only one synaptic connection to each output neuron. All specific parameters were set the same as those of the input neurons. Synaptic currents activated by hidden neurons also take the form of Eq. (4).

Image classification with HPC-Net
For each input image, we first normalized all pixel values to the range 0.0-1.0, then converted the normalized pixels to spike trains and attached them to the input neurons. The somatic voltages of the output neurons are used to compute the predicted probability of each class, as shown in Eq. (6), where p_i is the probability of the i-th class predicted by HPC-Net, v̄_i is the somatic voltage of the i-th output neuron averaged from 20 ms to 50 ms, and C is the number of classes, which equals the number of output neurons. The class with the maximum predicted probability is the final classification result. In this paper, we built HPC-Net with 784 input neurons, 64 hidden neurons, and 10 output neurons.

Synaptic plasticity rules for HPC-Net
Inspired by previous work36, we use a gradient-based learning rule to train HPC-Net on the image classification task. The loss function we use here is the cross-entropy, given in Eq. (7):

L = -Σ_i y_i · log(p_i),    (7)

where p_i is the predicted probability for class i and y_i indicates the actual class of the stimulus image: y_i = 1 if the input image belongs to class i, and y_i = 0 if not.
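For readability, here is a minimal sketch of the readout and loss of Eqs. (6) and (7); the softmax form of Eq. (6) is our assumption, based on the description of v̄_i and C above.

```python
# Sketch: predicted class probabilities from average output-neuron voltages
# (Eq. 6, assumed softmax) and the cross-entropy loss (Eq. 7).
import numpy as np

def predict_and_loss(v_mean, label):
    """v_mean: length-C array of somatic voltages averaged over 20-50 ms;
    label: index of the true class (one-hot target y)."""
    e = np.exp(v_mean - v_mean.max())  # numerically stable softmax
    p = e / e.sum()                    # Eq. (6): predicted probabilities
    loss = -np.log(p[label])           # Eq. (7): cross-entropy
    return p.argmax(), loss
```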
When training HPC-Net, we compute the update ΔW_ijk for the weight W_ijk (the synaptic weight of the k-th synapse connecting neuron i to neuron j) at each time step. After the simulation of each image stimulus, W_ijk is updated as shown in Eq. (8):

W_ijk ← W_ijk + η · Σ_{t = t_s}^{t_e} ΔW_ijk(t),    (8)

where η is the learning rate and ΔW_ijk(t) is the update value at time t. In the expression for ΔW_ijk(t), v_j and v_i are the somatic voltages of neurons j and i, respectively, I_ijk is the k-th synaptic current activated by neuron i on neuron j and g_ijk is its synaptic conductance, r_ijk is the transfer resistance from the compartment on neuron j's dendrite contacted by the k-th synapse of neuron i to neuron j's soma, and t_s = 30 ms and t_e = 50 ms are the start and end times of learning, respectively. For output neurons, the error term can be computed as shown in Eq. (10). For hidden neurons, the error term is calculated from the error terms of the output layer, as given in Eq. (11). Since all output neurons are single-compartment models, the transfer resistance equals the input resistance of the corresponding compartment; both transfer and input resistances are computed with NEURON.

Mini-batch training is a typical deep-learning technique for achieving higher prediction accuracy and faster convergence, and DeepDendrite supports it as well. For a batch size N_batch, we make N_batch copies of HPC-Net. During training, each copy is fed a different training sample from the batch. DeepDendrite first computes the weight update for each copy separately; after all copies in the current training batch are done, the average weight update is calculated, and the weights of all copies are updated by this same amount.

Robustness against adversarial attack with HPC-Net
To demonstrate the robustness of HPC-Net, we tested its prediction accuracy on adversarial samples and compared it with that of an analogous ANN (one with the same 784-64-10 structure and ReLU activation; for a fair comparison, in this experiment each input neuron of HPC-Net made only one synaptic connection to each hidden neuron). We first trained HPC-Net and the ANN on the original training set (clean images), then added adversarial noise to the test set and measured prediction accuracy on the noisy test set. We used Foolbox98,99 to generate adversarial noise with the FGSM method93. The ANN was trained with PyTorch100, and HPC-Net was trained with our DeepDendrite. For fairness, we generated the adversarial noise on a significantly different network model, a 20-layer ResNet101. The noise level ranged from 0.02 to 0.2. We experimented on two typical datasets, MNIST95 and Fashion-MNIST96. The results show that the prediction accuracy of HPC-Net is 19% and 16.72% higher than that of the analogous ANN on the two datasets, respectively.
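The adversarial evaluation pipeline can be sketched as follows; this assumes Foolbox 3.x with PyTorch models, the surrogate-network and tensor names are placeholders, and the intermediate epsilon values in the sweep are illustrative.

```python
# Sketch: craft FGSM noise on a separate surrogate network (transfer attack),
# then feed the perturbed images to the models under test.
import foolbox as fb

def adversarial_test_set(surrogate, images, labels, eps):
    """surrogate: a trained PyTorch model, e.g. a 20-layer ResNet;
    images/labels: torch tensors with pixel values in [0, 1]."""
    fmodel = fb.PyTorchModel(surrogate.eval(), bounds=(0.0, 1.0))
    attack = fb.attacks.FGSM()  # L-infinity fast gradient sign method
    _, clipped, _ = attack(fmodel, images, labels, epsilons=eps)
    return clipped              # perturbed copies of the test images

# Example: sweep noise levels, then measure the accuracy of HPC-Net and the
# analogous ANN on each returned set.
# for eps in (0.02, 0.05, 0.1, 0.2):
#     noisy = adversarial_test_set(resnet20, test_images, test_labels, eps)
```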
Reporting summary
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.

Data availability
The data that support the findings of this study are available within the paper, the Supplementary Information, and the Source Data files provided with this paper. The source code and data used to reproduce the results in Figs. 3–6 are available at https://github.com/pkuzyc/DeepDendrite. The MNIST dataset is publicly available at http://yann.lecun.com/exdb/mnist. The Fashion-MNIST dataset is publicly available at https://github.com/zalandoresearch/fashion-mnist.

Source data
Source data are provided with this paper.

Code availability
The source code of DeepDendrite, as well as the models and code used to reproduce Figs. 3–6 in this study, are available at https://github.com/pkuzyc/DeepDendrite.

References
McCulloch, W. S. & Pitts, W. A logical calculus of the ideas immanent in nervous activity.
LeCun, Y., Bengio, Y. & Hinton, G. Deep learning. Nature 521, 436–444 (2015).
Poirazi, P., Brannon, T. & Mel, B. W. Arithmetic of subthreshold synaptic summation in a model CA1 pyramidal cell. Neuron 37, 977–987 (2003).
London, M. & Häusser, M. Dendritic computation. Annu. Rev. Neurosci. 28, 503–532 (2005).
Branco, T. & Häusser, M. The single dendritic branch as a fundamental functional unit in the nervous system. Curr. Opin. Neurobiol. 20, 494–502 (2010).
Stuart, G. J. & Spruston, N. Dendritic integration: 60 years of progress. Nat. Neurosci. 18, 1713–1721 (2015).
Poirazi, P. & Papoutsi, A. Illuminating dendritic function with computational models. Nat. Rev. Neurosci. 21, 303–321 (2020).
Yuste, R. & Denk, W. Dendritic spines as basic functional units of neuronal integration.
Engert, F. & Bonhoeffer, T. Dendritic spine changes associated with hippocampal long-term synaptic plasticity.
Yuste, R. Dendritic spines and distributed circuits. Neuron 71, 772–781 (2011).
Yuste, R. Electrical compartmentalization in dendritic spines. Annu. Rev. Neurosci. 36, 429–449 (2013).
Rall, W. Branching dendritic trees and motoneuron membrane resistivity. Exp. Neurol. 1, 491–527 (1959).
Segev, I. & Rall, W. Computational study of an excitable dendritic spine. J. Neurophysiol. 60, 499–523 (1988).
Silver, D. et al. Mastering the game of Go with deep neural networks and tree search. Nature 529, 484–489 (2016).
Silver, D. et al. A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. Science 362, 1140–1144 (2018).
McCloskey, M. & Cohen, N. J. Catastrophic interference in connectionist networks: the sequential learning problem.
French, R. M. Catastrophic forgetting in connectionist networks. Trends Cogn. Sci. 3, 128–135 (1999).
Naud, R. & Sprekeler, H. Sparse bursts optimize information transmission in a multiplexed neural code. Proc. Natl Acad. Sci. USA 115, E6329–E6338 (2018).
Sacramento, J., Costa, R. P., Bengio, Y. & Senn, W. Dendritic cortical microcircuits approximate the backpropagation algorithm. In Advances in Neural Information Processing Systems 31 (NeurIPS 2018) (NeurIPS, 2018).
Payeur, A., Guerguiev, J., Zenke, F., Richards, B. A. & Naud, R. Burst-dependent synaptic plasticity can coordinate learning in hierarchical circuits.
Bicknell, B. A. & Häusser, M. A synaptic learning rule for exploiting nonlinear dendritic computation. Neuron 109, 4001–4017 (2021).
Moldwin, T., Kalmenson, M. & Segev, I. The gradient clusteron: a model neuron that learns to solve classification tasks via dendritic nonlinearities, structural plasticity, and gradient descent. PLoS Comput. Biol. 17, e1009015 (2021).
Hodgkin, A. L. & Huxley, A. F. A quantitative description of membrane current and its application to conduction and excitation in nerve. J. Physiol. 117, 500–544 (1952).
Rall, W. Theory of physiological properties of dendrites. Ann. N. Y. Acad. Sci. 96, 1071–1092 (1962).
Hines, M. L. & Carnevale, N. T. The NEURON simulation environment. Neural Comput. 9, 1179–1209 (1997).
Bower, J. M. & Beeman, D. in The Book of GENESIS: Exploring Realistic Neural Models with the GEneral NEural SImulation System (eds Bower, J. M. & Beeman, D.) 17–27 (Springer New York, 1998).
Hines, M. L., Eichner, H. & Schürmann, F. Neuron splitting in compute-bound parallel network simulations enables runtime scaling with twice as many processors.
Hines, M. L., Markram, H. & Schürmann, F. Fully implicit parallel simulation of single neurons. J. Comput. Neurosci. 25, 439–448 (2008).
Ben-Shalom, R., Liberman, G. & Korngreen, A. Accelerating compartmental modeling on a graphical processing unit.
Tsuyuki, T., Yamamoto, Y. & Yamazaki, T. Efficient numerical simulation of neuron models with spatial structure on graphics processing units. In Proc. 2016 International Conference on Neural Information Processing (eds Hirose, A. et al.) 279–285 (Springer International Publishing, 2016).
Vooturi, D. T., Kothapalli, K. & Bhalla, U. S. Parallelizing Hines matrix solver in neuron simulations on GPU. In Proc. IEEE 24th International Conference on High Performance Computing (HiPC) 388–397 (IEEE, 2017).
Huber, F. Efficient tree solver for Hines matrices on the GPU. Preprint at https://arxiv.org/abs/1810.12742 (2018).
Korte, B. & Vygen, J. Combinatorial Optimization: Theory and Algorithms 6th edn (Springer, 2018).
Gebali, F. Algorithms and Parallel Computing (Wiley, 2011).
Kumbhar, P. et al. CoreNEURON: an optimized compute engine for the NEURON simulator. Front. Neuroinform. 13, 63 (2019).
Urbanczik, R. & Senn, W. Learning by the dendritic prediction of somatic spiking. Neuron 81, 521–528 (2014).
Ben-Shalom, R., Aviv, A., Razon, B. & Korngreen, A. Optimizing ion channel models using a parallel genetic algorithm on graphical processors.
Mascagni, M. A parallelizing algorithm for computing solutions to arbitrarily branched cable neuron models.
McDougal, R. A. et al. Twenty years of ModelDB and beyond: building essential modeling tools for the future of neuroscience.
Migliore, M., Messineo, L. & Ferrante, M. Dendritic Ih selectively blocks temporal summation of unsynchronized distal inputs in CA1 pyramidal neurons. J. Comput. Neurosci. 16, 5–13 (2004).
Hemond, P. et al. Distinct classes of pyramidal cells exhibit mutually exclusive firing patterns in hippocampal area CA3b.
Hay, E., Hill, S., Schürmann, F., Markram, H. & Segev, I. Models of neocortical layer 5b pyramidal cells capturing a wide range of dendritic and perisomatic active properties. PLoS Comput. Biol. 7, e1002107 (2011).
Masoli, S., Solinas, S. & D'Angelo, E. Action potential processing in a detailed Purkinje cell model reveals a critical role for axonal compartmentalization. Front. Cell. Neurosci. 9, 47 (2015).
Lindroos, R. et al. Basal ganglia neuromodulation over multiple temporal and structural scales – simulations of direct pathway MSNs investigate the fast onset of dopaminergic effects and predict the role of Kv4.2.
Migliore, M. et al. Synaptic clusters function as odor operators in the olfactory bulb. Proc. Natl Acad. Sci. USA 112, 8499–8504 (2015).
NVIDIA. CUDA C++ Programming Guide. https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html (2021).
NVIDIA. CUDA C++ Best Practices Guide. https://docs.nvidia.com/cuda/cuda-c-best-practices-guide/index.html (2021).
Harnett, M. T., Makara, J. K., Spruston, N., Kath, W. L. & Magee, J. C. Synaptic amplification by dendritic spines enhances input cooperativity. Nature 491, 599–602 (2012).
Chiu, C. Q. et al. Compartmentalization of GABAergic inhibition by dendritic spines. Science 340, 759–762 (2013).
Tønnesen, J., Katona, G., Rózsa, B. & Nägerl, U. V. Spine neck plasticity regulates compartmentalization of synapses. Nat. Neurosci. 17, 678–685 (2014).
Eyal, G. et al. Human cortical pyramidal neurons: from spines to spikes via models. Front. Cell. Neurosci. 12, 181 (2018).
Koch, C. & Zador, A. The function of dendritic spines: devices subserving biochemical rather than electrical compartmentalization.
Koch, C. Dendritic spines. In Biophysics of Computation (Oxford University Press, 1999).
Rapp, M., Yarom, Y. & Segev, I. The impact of parallel fiber background activity on the cable properties of cerebellar Purkinje cells. Neural Comput. 4, 518–533 (1992).
Hines, M. Efficient computation of branched nerve equations. Int. J. Bio-Med. Comput. 15, 69–76 (1984).
Nayebi, A. & Ganguli, S. Biologically inspired protection of deep networks from adversarial attacks. Preprint at https://arxiv.org/abs/1703.09202 (2017).
Goddard, N. H. & Hood, G. Large-scale simulation using parallel GENESIS. In The Book of GENESIS: Exploring Realistic Neural Models with the GEneral NEural SImulation System (eds Bower, J. M. & Beeman, D.) 349–379 (Springer New York, 1998).
Migliore, M., Cannia, C., Lytton, W. W., Markram, H. & Hines, M. L. Parallel network simulations with NEURON.
Lytton, W. W. et al. Simulation neurotechnologies for advancing brain research: parallelizing large networks in NEURON.
Valero-Lara, P. et al. cuHinesBatch: solving multiple Hines systems on GPUs human brain project. In Proc. 2017 International Conference on Computational Science 566–575 (IEEE, 2017).
Akar, N. A. et al. Arbor: a morphologically-detailed neural network simulation library for contemporary high-performance computing architectures. In Proc. 27th Euromicro International Conference on Parallel, Distributed and Network-Based Processing (PDP) 274–282 (IEEE, 2019).
Ben-Shalom, R. et al. NeuroGPU: accelerating multi-compartment, biophysically detailed neuron simulations on GPUs. J. Neurosci. Methods 366, 109400 (2022).
Rempe, M. J. & Chopp, D. L. A predictor-corrector algorithm for reaction-diffusion equations associated with neural activity on branched structures. SIAM J. Sci. Comput. 28, 2139–2161 (2006).
Kozloski, J. & Wagner, J. An ultrascalable solution to large-scale neural tissue simulation. Front. Neuroinform. 5, 15 (2011).
Jayant, K. et al. Targeted intracellular voltage recordings from dendritic spines using quantum-dot-coated nanopipettes. Nat. Nanotechnol. 12, 335–342 (2017).
Palmer, L. M. & Stuart, G. J. Membrane potential changes in dendritic spines during action potentials and synaptic input.
Nishiyama, J. & Yasuda, R. Biochemical computation for spine structural plasticity. Neuron 87, 63–75 (2015).
Yuste, R. & Bonhoeffer, T. Morphological changes in dendritic spines associated with long-term synaptic plasticity.
Holtmaat, A. & Svoboda, K. Experience-dependent structural synaptic plasticity in the mammalian brain.
Caroni, P., Donato, F. & Muller, D. Structural plasticity upon learning: regulation and functions. Nat. Rev. Neurosci. 13, 478–490 (2012).
Keck, T. et al. Massive restructuring of neuronal circuits during functional reorganization of adult visual cortex. Nat. Neurosci. 11, 1162 (2008).
Hofer, S. B., Mrsic-Flogel, T. D., Bonhoeffer, T. & Hübener, M. Experience leaves a lasting structural trace in cortical circuits.
Trachtenberg, J. T. et al. Long-term in vivo imaging of experience-dependent synaptic plasticity in adult cortex. Nature 420, 788–794 (2002).
Marik, S. A., Yamahachi, H., McManus, J. N., Szabo, G. & Gilbert, C. D. Axonal dynamics of excitatory and inhibitory neurons in somatosensory cortex. PLoS Biol. 8, e1000395 (2010).
Xu, T. et al. Rapid formation and selective stabilization of synapses for enduring motor memories. Nature 462, 915–919 (2009).
Albarran, E., Raissi, A., Jáidar, O., Shatz, C. J. & Ding, J. B. Enhancing motor learning by increasing the stability of newly formed dendritic spines in the motor cortex.
Branco, T. & Häusser, M. Synaptic integration gradients in single cortical pyramidal cell dendrites. Neuron 69, 885–892 (2011).
Major, G., Larkum, M. E. & Schiller, J. Active properties of neocortical pyramidal neuron dendrites. Annu. Rev. Neurosci. 36, 1–24 (2013).
Gidon, A. et al. Dendritic action potentials and computation in human layer 2/3 cortical neurons. Science 367, 83–87 (2020).
Doron, M., Chindemi, G., Muller, E., Markram, H. & Segev, I. Timed synaptic inhibition shapes NMDA spikes, influencing local dendritic processing and global I/O properties of cortical neurons. Cell Rep. 21, 1550–1561 (2017).
Du, K. et al. Cell-type-specific inhibition of the dendritic plateau potential in striatal spiny projection neurons. Proc. Natl Acad. Sci. USA 114, E7612–E7621 (2017).
Smith, S. L., Smith, I. T., Branco, T. & Häusser, M. Dendritic spikes enhance stimulus selectivity in cortical neurons in vivo. Nature 503, 115–120 (2013).
Xu, N.-l. et al. Nonlinear dendritic integration of sensory and motor input during an active sensing task. Nature 492, 247–251 (2012).
Takahashi, N., Oertner, T. G., Hegemann, P. & Larkum, M. E. Active cortical dendrites modulate perception. Science 354, 1587–1590 (2016).
Sheffield, M. E. & Dombeck, D. A. Calcium transient prevalence across the dendritic arbour predicts place field properties. Nature 517, 200–204 (2015).
Markram, H. et al. Reconstruction and simulation of neocortical microcircuitry. Cell 163, 456–492 (2015).
Billeh, Y. N. et al. Systematic integration of structural and functional data into multi-scale models of mouse primary visual cortex. Neuron 106, 388–403 (2020).
Hjorth, J. et al. The microcircuits of striatum in silico. Proc. Natl Acad. Sci. USA 117, 202000671 (2020).
Guerguiev, J., Lillicrap, T. P. & Richards, B. A. Towards deep learning with segregated dendrites. eLife 6, e22901 (2017).
Iyer, A. et al. Avoiding catastrophe: active dendrites enable multi-task learning in dynamic environments. Front. Neurorobot. 16, 846219 (2022).
Jones, I. S. & Kording, K. P. Might a single neuron solve interesting machine learning problems through successive computations on its dendritic tree? Neural Comput. 33, 1554–1571 (2021).
Bird, A. D., Jedlicka, P. & Cuntz, H. Dendritic normalisation improves learning in sparsely connected artificial neural networks. PLoS Comput. Biol. 17, e1009202 (2021).
Goodfellow, I. J., Shlens, J. & Szegedy, C. Explaining and harnessing adversarial examples. In 3rd International Conference on Learning Representations (ICLR) (ICLR, 2015).
Papernot, N., McDaniel, P. & Goodfellow, I. Transferability in machine learning: from phenomena to black-box attacks using adversarial samples. Preprint at https://arxiv.org/abs/1605.07277 (2016).
LeCun, Y., Bottou, L., Bengio, Y. & Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 86, 2278–2324 (1998).
Xiao, H., Rasul, K. & Vollgraf, R. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. Preprint at http://arxiv.org/abs/1708.07747 (2017).
Bartunov, S. et al. Assessing the scalability of biologically-motivated deep learning algorithms and architectures. In Advances in Neural Information Processing Systems 31 (NeurIPS 2018) (NeurIPS, 2018).
Rauber, J., Brendel, W. & Bethge, M. Foolbox: a Python toolbox to benchmark the robustness of machine learning models. In Reliable Machine Learning in the Wild Workshop, 34th International Conference on Machine Learning (2017).
Rauber, J., Zimmermann, R., Bethge, M. & Brendel, W. Foolbox Native: fast adversarial attacks to benchmark the robustness of machine learning models in PyTorch, TensorFlow, and JAX. J. Open Source Softw. 5, 2607 (2020).
Paszke, A. et al. PyTorch: an imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32 (NeurIPS 2019) (NeurIPS, 2019).
He, K., Zhang, X., Ren, S. & Sun, J. Deep residual learning for image recognition. In Proc. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 770–778 (IEEE, 2016).

Acknowledgements
This work was supported by the National Key R&D Program of China (No. 2020AAA0130400) to K.D. and T.H., the National Natural Science Foundation of China (No. 61825101) to Y.T., and the Swedish Research Council (VR-M-2020-01652), the Swedish e-Science Research Centre (SeRC), EU/Horizon 2020 No. 945539 (HBP SGA3), KTH, and Digital Futures to J.H.K., J.K., and A.H.

This article is available under the CC BY 4.0 Deed (Attribution 4.0 International) license.