Authors: Yichen Zhang, Gan He, Lei Ma, Xiaofei Liu, J. J. Johannes Hjorth, Alexander Kozlov, Yutao He, Shenjian Zhang, Jeanette Hellgren Kotaleski, Yonghong Tian, Sten Grillner, Kai Du, Tiejun Huang

Abstract
Biophysically detailed multi-compartment models are powerful tools for exploring the computational principles of the brain, and they also serve as a theoretical framework for generating algorithms for artificial intelligence (AI) systems. However, their expensive computational cost severely limits applications in both the neuroscience and AI fields. The major bottleneck when simulating detailed compartment models is the ability of the simulator to solve large systems of linear equations. Here, we present the Dendritic Hierarchical Scheduling (DHS) method, which dramatically accelerates this process. We theoretically prove that the DHS implementation is computationally optimal and accurate. This GPU-based method runs 2-3 orders of magnitude faster than the classic serial Hines method on a conventional CPU platform. We built the DeepDendrite framework, which integrates the DHS method with the GPU computing engine of the NEURON simulator, and we demonstrate applications of DeepDendrite in neuroscience tasks. We explore how the spatial patterns of spine inputs affect neuronal excitability in a detailed human pyramidal neuron model with 25,000 spines. Furthermore, we provide a brief discussion of the potential of DeepDendrite for AI, in particular its ability to support AI-oriented tasks with detailed dendritic networks.

Introduction
Deciphering the coding and computational principles of neurons is essential for neuroscience. The mammalian brain consists of thousands of different types of neurons with unique morphological and biophysical properties. The point-neuron doctrine, in which neurons are treated as simple summing units, is still widely applied in neural computation, particularly in the analysis of neural networks [1-3]. However, beyond comprehensive computations at the single-neuron level, subcellular compartments such as neuronal dendrites can also perform nonlinear operations as independent computational units [4-7]. Furthermore, dendritic spines, the small protrusions that densely cover the dendrites of spiny neurons, can compartmentalize synaptic signals, allowing these signals to be segregated from the parent dendrites ex vivo and in vivo [8-11].

Simulations with biologically detailed neuron models provide a theoretical framework for linking biological details to computational principles. The multi-compartment modeling approach [12,13] allows us to model neurons with realistic dendritic morphologies, intrinsic ionic conductances, and extrinsic synaptic inputs. It builds on cable theory [12], which treats the biophysical membrane properties of dendrites as passive cables, providing a mathematical description of how electrical signals invade and propagate through complex neuronal processes. By combining cable theory with active biophysical mechanisms such as ion channels and excitatory and inhibitory synaptic currents, a detailed multi-compartment model can capture cellular and subcellular neuronal computations beyond experimental limits [4,7].

Beyond their profound impact on neuroscience, biologically detailed neuron models have recently been used to bridge the gap between neuronal structural and biophysical details and AI. The prevailing technique in the modern AI field is ANNs composed of point neurons, analogous to biological neural networks.
Although ANNs trained with the backpropagation-of-error (backprop) algorithm achieve remarkable performance in specialized applications, even beating top human professional players in the games of Go and chess [14,15], the human brain still outperforms ANNs in domains involving more dynamic and noisy environments [16,17]. Recent theoretical studies suggest that dendritic integration is key to creating efficient learning algorithms that potentially surpass backprop in parallel information processing [18-20]. Furthermore, a single detailed multi-compartment model can learn the nonlinear computations of a network of point neurons by adjusting only its synaptic strengths [21,22]. It is therefore a high priority to extend brain-like paradigms in AI from single detailed neuron models to large biologically detailed networks.

A long-standing challenge of the detailed-simulation approach lies in its extremely high computational cost, which has severely limited its application in neuroscience and AI [12,23,24]. To improve efficiency, the classic Hines method reduces the time complexity of equation solving from O(n^3) to O(n) and is widely applied as the base algorithm in popular simulators such as NEURON [25] and GENESIS [26]. However, this method processes each compartment sequentially. When a simulation involves many biophysically detailed dendrites with dendritic spines, the linear-equation matrix (the "Hines matrix") scales accordingly with the growing number of dendrites or spines (Fig. 1e), making the Hines method impractical, since it places a very heavy burden on the whole simulation.

Fig. 1: a Reconstructed layer 5 pyramidal neuron model and the mathematical formalism used with detailed neuron models. b Workflow of numerically simulating detailed neuron models; the equation-solving phase is the bottleneck of the simulation. c Example of the linear equations in a simulation. d Data dependency of the Hines method when solving the linear equations. e The number of linear systems of equations to be solved increases substantially as models grow more detailed. f Computational cost (steps taken in the equation-solving phase) of the serial Hines method on different types of neuron models. g Illustration of different solving methods. Different parts of the neuron are assigned to multiple processing units in the parallel methods (middle, right), shown in different colors. h Computational cost of three methods when solving the equations of the pyramidal model with spines. i Run time of different methods when solving the equations of 500 pyramidal models with spines. Run time indicates the time consumed for a 1-s simulation (solving the equations 40,000 times with a time step of 0.025 ms). p-Hines: parallel method in CoreNEURON (on GPU); Branch: branch-based parallel method (on GPU); DHS: Dendritic Hierarchical Scheduling (on GPU).

Over the past decades, enormous progress has been made in accelerating the Hines method with cellular-level parallel methods, which parallelize the computation of different parts within each cell [27-32]. However, current cellular-level parallel methods often lack an efficient parallelization strategy or sacrifice numerical accuracy compared with the original Hines method.

Here we develop a fully automated, numerically accurate, and optimized simulation tool that markedly improves computational efficiency and reduces computational cost.
Moreover, this simulation tool can be seamlessly adopted for building and testing biologically detailed neural networks for machine learning and AI applications. Using parallel computing theory [33,34], we show that our algorithm provides optimal scheduling without any loss of precision. Furthermore, we optimized DHS for state-of-the-art GPU chips by exploiting the GPU memory hierarchy and memory-access mechanisms. Together, DHS accelerates computation 60-1500-fold (Supplementary Table 1) compared with the classic NEURON simulator [25], with identical accuracy.

To make detailed dendritic simulations usable for AI, we established the DeepDendrite framework by integrating the DHS-embedded CoreNEURON platform (an optimized computing engine for NEURON [35]) as the simulation engine with two auxiliary modules (an I/O module and a learning module) that support dendritic learning algorithms during simulations. DeepDendrite runs on the GPU hardware platform and supports both regular simulation tasks in neuroscience and learning tasks in AI.

Last but not least, we present several applications of DeepDendrite that target critical challenges in neuroscience and AI: (1) We show how the spatial patterns of dendritic spine inputs affect neuronal activity in neurons with spines distributed across their entire dendritic trees (full-spine models). DeepDendrite allows us to explore neuronal computation in a simulated human pyramidal neuron model with ~25,000 dendritic spines. (2) In the Discussion, we also consider the potential of DeepDendrite in the context of AI, particularly for building ANNs with morphologically detailed human pyramidal neurons. All source code for DeepDendrite, the full-spine models, and the detailed dendritic network model is publicly available online (see Code Availability). Our open-source learning framework can readily integrate other dendritic learning rules, such as learning rules for nonlinear (fully active) dendrites [21], burst-dependent synaptic plasticity [20], and learning with spike prediction [36]. Overall, our study provides a complete set of tools that have the potential to change the current computational neuroscience community ecosystem. By leveraging the power of GPU computing, we envision that these tools will facilitate system-level explorations of the computational principles of the brain's fine structures, as well as promote the interaction between neuroscience and modern AI.

Results

Dendritic Hierarchical Scheduling (DHS)
Computing ionic currents and solving linear equations are the two critical phases in simulating biophysically detailed neurons; both are time-consuming and impose severe computational burdens. Fortunately, computing the ionic currents of each compartment is a fully independent process, so it can be naturally parallelized on devices with massive parallel computing units such as GPUs [37]. As a result, solving the linear equations becomes the remaining bottleneck for parallelization (Fig. 1a-f).

To overcome this obstacle, cellular-level parallel methods have been developed, which accelerate the computation of individual cells by "splitting" a single cell into several sub-blocks that can be computed in parallel [27,28,38]. However, such methods rely heavily on prior knowledge to generate practical strategies for how to split a single neuron into sub-blocks (Fig. 1g and Supplementary Fig. 1).
They therefore become less efficient for neurons with asymmetric morphologies, e.g., pyramidal neurons and Purkinje neurons.

We aimed to develop a more efficient and accurate parallel method for simulating biologically detailed neural networks. First, we established criteria for the accuracy of a cellular-level parallel method. Based on parallel computing theory [34], we propose three conditions that ensure a parallel method yields solutions identical to the serial-computing Hines method, according to the data dependency of the Hines method (see Methods). Then, to theoretically evaluate the run time, i.e., the efficiency, of serial and parallel computing methods, we introduce and formulate the concept of computational cost as the number of steps a method takes in the equation-solving phase (see Methods).

Based on simulation accuracy and computational cost, we formulate the parallelization problem as a mathematical scheduling problem (see Methods): given k parallel threads, we can compute at most k nodes at each step, but we must ensure that a node is computed only after all of its child nodes have been processed; our goal is to find the strategy that minimizes the number of steps for the whole procedure.

To generate the optimal partition, we propose a method termed Dendritic Hierarchical Scheduling (DHS); the theoretical proof is presented in Methods. The DHS method involves two steps: analyzing the dendritic topology and finding the best partition. (1) Given a detailed model, we first obtain its corresponding dependency tree and compute the depth of each node on the tree, where the depth of a node is the number of its ancestor nodes (Fig. 2a-c). (2) After the topological analysis, we search the candidate nodes and select at most the k deepest candidates, where a node is a candidate only if all of its child nodes have been processed (Fig. 2d).

Fig. 2: a DHS workflow. DHS processes the k deepest candidate nodes in each iteration. b Illustration of computing node depth on a simplified model. The model is first converted to a tree structure, and the depth of each node is then computed. Colors indicate different depth values. c Topological analysis of different neuron models. Six neurons with different morphologies are shown. For each model, the soma is selected as the root of the tree, so node depth increases from the soma (0) to the distal dendrites. d Illustration of performing DHS on the model in b with four threads. Candidates: nodes that can be processed. Selected candidates: nodes selected by DHS, i.e., the k deepest candidates. Processed nodes: nodes that were processed in previous steps. e The parallelization strategy obtained by DHS after the process in d. Each node is assigned to one of four parallel threads. DHS reduces the 14 steps of serial node processing to 5 by distributing nodes across multiple threads. f Relative cost, i.e., the ratio of the computational cost of DHS to that of the serial Hines method, when applying DHS with different numbers of threads to different types of models.

Consider a simplified model with 15 compartments as an example: with the serial-computing Hines method, it takes 14 steps to process all nodes, whereas DHS with four parallel units can divide the nodes into five subsets (Fig. 2d, e). Since nodes in the same subset can be processed in parallel, only five steps are needed to process all nodes with DHS.
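To make the scheduling step concrete, below is a minimal Python sketch of the DHS partitioning procedure described above (function and variable names are ours, not identifiers from the DeepDendrite code base). It computes node depths from a parent array and then repeatedly selects up to the k deepest candidates.

import heapq

def dhs_partition(parent, k):
    """Greedy DHS scheduling sketch: split tree nodes into ordered subsets
    of size <= k, where a node may only appear after all of its children.

    parent[i] is the parent index of node i (parent[root] == -1).
    Returns a list of subsets (lists of node indices), one per step."""
    n = len(parent)
    children = [[] for _ in range(n)]
    for v, p in enumerate(parent):
        if p >= 0:
            children[p].append(v)

    # depth = number of ancestors (the root/soma has depth 0);
    # assumes nodes are numbered root-first, so parent[v] < v
    depth = [0] * n
    for v in range(1, n):
        depth[v] = depth[parent[v]] + 1

    unprocessed = [len(children[v]) for v in range(n)]
    # initial candidates are the leaves; max-heap on depth via negated key
    heap = [(-depth[v], v) for v in range(n) if unprocessed[v] == 0]
    heapq.heapify(heap)

    partition = []
    while heap:
        step = [heapq.heappop(heap)[1] for _ in range(min(k, len(heap)))]
        partition.append(step)
        for v in step:                    # parents may become candidates
            p = parent[v]
            if p >= 0:
                unprocessed[p] -= 1
                if unprocessed[p] == 0:
                    heapq.heappush(heap, (-depth[p], p))
    return partition

The resulting list of subsets is computed once before the simulation starts; during simulation, the triangularization phase replays the subsets in order and the back-substitution phase replays them in reverse.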
We then applied the DHS method to six representative detailed neuron models (selected from ModelDB [39]) with different numbers of threads (Fig. 2f), including cortical and hippocampal pyramidal neurons [40,41], cerebellar Purkinje neurons [42], striatal projection neurons (SPNs) [43,44], and olfactory bulb mitral cells [45], covering the major principal neurons in sensory, cortical, and subcortical regions. We then measured the computational cost. The relative computational cost is defined here as the ratio of the computational cost of DHS to that of the serial Hines method. The computational cost, i.e., the number of steps taken in solving the equations, drops dramatically as the number of threads increases. For example, with 16 threads, the computational cost of DHS is 7%-10% of that of the serial Hines method. Interestingly, the DHS method reaches the lower bound of its computational cost for the neurons presented when given 16 or even 8 parallel threads (Fig. 2f), suggesting that adding more threads does not improve performance further, owing to the dependencies between compartments.

Together, we have created the DHS method, which automatically analyzes dendritic topology and finds the optimal partition for parallel computing. Notably, DHS finds the optimal partition before the simulation starts, so no extra computation is needed during equation solving.

Accelerating DHS with GPU memory boosting
DHS computes each neuron with multiple threads, which consumes an enormous number of threads when running network simulations. Graphics Processing Units (GPUs) consist of massive processing units (i.e., streaming processors, SPs; Fig. 3a, b) for parallel computing [46]. In theory, the many SPs on a GPU should support efficient simulation of large neural networks (Fig. 3c). However, we consistently observed that the efficiency of DHS decreased markedly as the network size increased, which could result from scattered data storage or from the extra memory accesses caused by loading and writing intermediate results (Fig. 3d, left).

Fig. 3: a GPU architecture and its memory hierarchy. Each GPU contains massive processing units (streaming processors). b Architecture of a streaming multiprocessor (SM). Each SM contains multiple streaming processors, registers, and an L1 cache. c Applying DHS to two neurons, each with four threads. During simulation, each thread runs on one streaming processor. d Memory optimization strategy on the GPU. Top panels: thread assignment and data storage of DHS, before (left) and after (right) memory boosting. Processors issue data requests to load the data for each thread from global memory. Without memory boosting (left), seven transactions are needed to load all the requested data, plus extra transactions for intermediate results. With memory boosting (right), only two transactions are needed to load all the requested data, and registers are used for intermediate results, which further improves memory throughput. e Run time of DHS (32 threads per cell) with and without memory boosting on multiple layer 5 pyramidal models with spines. f Speedup from memory boosting on multiple layer 5 pyramidal models with spines. Memory boosting yields a 1.6-2x speedup.

We solve this problem with GPU memory boosting, a method that increases memory throughput by exploiting the memory hierarchy and access mechanism of the GPU. According to the GPU memory-loading mechanism, consecutive threads loading aligned and consecutively stored data achieve high memory throughput, whereas accessing scattered data reduces it [46,47]. To achieve high throughput, we first align the computation order of the nodes and rearrange the threads according to the number of nodes assigned to them. We then rearrange the data storage in global memory in accordance with the computation order, i.e., nodes processed at the same step are stored contiguously in global memory. Moreover, we use GPU registers to store intermediate results, which further increases memory throughput.
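The following Python sketch illustrates the kind of step-major renumbering that memory boosting applies to the per-node arrays; it is our illustration of the idea, not the actual data-layout code of DeepDendrite.

import numpy as np

def step_major_permutation(partition, n):
    """Map old node indices to new ones so that all nodes computed at the
    same DHS step occupy consecutive addresses in global memory; consecutive
    GPU threads then load consecutive words in a single transaction."""
    perm = np.empty(n, dtype=np.int64)
    new_idx = 0
    for step in partition:        # `partition` as returned by dhs_partition()
        for v in sorted(step):
            perm[v] = new_idx
            new_idx += 1
    return perm

# Applied once before the simulation: reorder the Hines-matrix arrays
# (diagonal, off-diagonal coupling, right-hand side) with
# inv = np.argsort(perm), e.g. d, a, rhs = d[inv], a[inv], rhs[inv].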
Experiments on populations of pyramidal neurons with spines and on typical neuron models (Fig. 3e, f; Supplementary Fig. 2) show that memory boosting achieves a 1.2-3.8x speedup over naive DHS.

To comprehensively test the performance of DHS with GPU memory boosting, we selected six typical neuron models and evaluated the run time of solving the cable equations on large populations of each model (Fig. 4). We examined DHS with four threads (DHS-4) and sixteen threads (DHS-16) per neuron. Compared with the GPU method in CoreNEURON, DHS-4 and DHS-16 accelerate the computation roughly 5-fold and 15-fold, respectively (Fig. 4a). Moreover, compared with the conventional serial Hines method in NEURON running on a single CPU thread, DHS accelerates the simulation by 2-3 orders of magnitude (Supplementary Fig. 3), while retaining identical numerical accuracy in the presence of dense spines (Supplementary Figs. 4 and 8), active dendrites (Supplementary Fig. 7), and different segmentation strategies (Supplementary Fig. 7).

Fig. 4: a Run time of solving the equations for a 1-s simulation on the GPU (dt = 0.025 ms, 40,000 iterations in total). CoreNEURON: the parallel method used in CoreNEURON; DHS-4: DHS with four threads per neuron; DHS-16: DHS with 16 threads per neuron. b, c Visualization of the DHS-4 and DHS-16 partitions; each color indicates one thread.

DHS generates optimal cell-type-specific partitioning
To gain insight into how the DHS method works, we visualized the partitioning process by mapping compartments onto each thread (each color represents one thread in Fig. 4b, c). The visualization shows that a single thread often switches between different branches (Fig. 4b, c). Interestingly, DHS generates aligned partitions in morphologically symmetric neurons such as the striatal projection neuron (SPN) and the mitral cell (Fig. 4b, c). In contrast, it generates fragmented partitions in morphologically asymmetric neurons such as the pyramidal neurons and the Purkinje cell (Fig. 4b, c), indicating that DHS divides the neural tree at the scale of individual compartments (i.e., tree nodes) rather than at the scale of branches.

In summary, DHS and memory boosting provide a theoretically proven optimal solution for solving linear equations in parallel with unprecedented efficiency. Building on this principle, we constructed the open-access DeepDendrite platform, which neuroscientists can use to implement models without any GPU programming knowledge. Below, we demonstrate how DeepDendrite can be used in neuroscience tasks.

DHS enables spine-level modeling
Because dendritic spines receive most of the excitatory inputs to cortical and hippocampal pyramidal neurons, striatal projection neurons, and other cell types, their morphology and plasticity are crucial for regulating neuronal excitability [10,48-51]. However, spines are too small (~1 μm in length) for their voltage-dependent processes to be measured directly in experiments. A single spine can be modeled with two compartments: a spine head, where the synapses are located, and a spine neck that connects the head to the dendrite. Theory predicts that a very thin spine neck (0.1-0.5 μm in diameter) electrically isolates the spine head from its parent dendrite, thereby compartmentalizing the signals generated at the spine head [52,53].
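As a concrete illustration of this two-compartment description, the following NEURON+Python sketch attaches one passive spine (neck plus head) to a dendrite, using the spine geometry given in Methods. The code is our minimal example, not part of the DeepDendrite sources.

from neuron import h

dend = h.Section(name="dend")

def attach_spine(dend, loc):
    """Two-compartment spine per the Methods: neck 1.35 x 0.25 um,
    head 0.944 x 0.944 um (lateral area ~2.8 um^2), E_l = -86 mV."""
    neck, head = h.Section(name="neck"), h.Section(name="head")
    neck.L, neck.diam = 1.35, 0.25
    head.L, head.diam = 0.944, 0.944
    for sec in (neck, head):
        sec.insert("pas")
        for seg in sec:
            seg.pas.e = -86.0
    neck.connect(dend(loc))    # neck attaches to the dendritic shaft
    head.connect(neck(1))      # head sits on top of the neck
    return neck, head

neck, head = attach_spine(dend, 0.5)
syn = h.Exp2Syn(head(1))       # synapse placed at the spine-head tip

A cylinder 0.944 μm long and 0.944 μm in diameter has a lateral area of ~2.8 μm^2, matching the spine-head area used in the models.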
However, a detailed model with spines fully distributed over the dendrites (a "full-spine model") is computationally very expensive. Most detailed models therefore use a spine factor F [54] instead of modeling all spines explicitly; the spine factor approximates the effect of spines on the biophysical properties of the cell membrane [54].

Inspired by the previous work of Eyal et al. [51], we investigated how different spatial patterns of excitatory inputs formed on dendritic spines shape neuronal activity in a human pyramidal neuron model with explicitly modeled spines (Fig. 5a). Notably, Eyal et al. employed the spine factor F to incorporate spines into dendrites, while only a few activated spines were explicitly attached to the dendrites (the "few-spine model" in Fig. 5a). The value of F in their model was computed from the dendritic area and spine area in the reconstructed data. Accordingly, we calculated the spine density from their reconstructed data to make our full-spine model consistent with Eyal's few-spine model. With the spine density set to 1.3 μm^-1, the pyramidal neuron model contained about 25,000 spines, without altering the model's original morphological and biophysical properties. We then repeated the previous experimental protocols with both the full-spine and the few-spine models. We used the same synaptic input as in Eyal's work but attached extra background noise to each sample. By comparing the somatic traces (Fig. 5b, c) and spike probabilities (Fig. 5d) of the full-spine and few-spine models, we found that the full-spine model is much leakier than the few-spine model. In addition, the spike probability triggered by the activation of clustered spines appeared more nonlinear in the full-spine model (the solid blue line in Fig. 5d) than in the few-spine model (the blue line for the few-spine model in Fig. 5d). These results indicate that the conventional F-factor method may underestimate the impact of dense spines on dendritic excitability and nonlinearity.

Fig. 5: a Experiment setup. We examine two major types of models: few-spine models and full-spine models. Few-spine models (two on the left) incorporate the spine area globally into the dendrites and only attach individual spines together with activated synapses. In full-spine models (two on the right), all spines are explicitly attached over the whole dendritic tree. We explore the effects of clustered and randomly distributed synaptic inputs on the few-spine and full-spine models, respectively. b Somatic voltages recorded for the cases in a. The colors of the voltage curves correspond to a; scale bar: 20 ms, 20 mV. c Color-coded voltages during the simulation in a at specific times. Colors indicate the magnitude of the voltage. d Somatic spike probability as a function of the number of simultaneously activated synapses (as in Eyal et al.'s work) for the four cases in a, with background noise attached. e Run time of the experiments in a with different simulation methods. NEURON: conventional NEURON simulator running on a single CPU core. CoreNEURON: CoreNEURON simulator on a single GPU. DeepDendrite: DeepDendrite on a single GPU.
a b a a c b d a e d In the DeepDendrite platform, both full-spine and few-spine models achieved 8 times speedup compared to CoreNEURON on the GPU platform and 100 times speedup compared to serial NEURON on the CPU platform (Fig. ; Supplementary Table ) while keeping the identical simulation results (Supplementary Figs. and ). Therefore, the DHS method enables explorations of dendritic excitability under more realistic anatomic conditions. 5e 1 4 8 Discussion In this work, we propose the DHS method to parallelize the computation of Hines method and we mathematically demonstrate that the DHS provides an optimal solution without any loss of precision. Next, we implement DHS on the GPU hardware platform and use GPU memory boosting techniques to refine the DHS (Fig. ). When simulating a large number of neurons with complex morphologies, DHS with memory boosting achieves a 15-fold speedup (Supplementary Table ) as compared to the GPU method used in CoreNEURON and up to 1,500-fold speedup compared to serial Hines method in the CPU platform (Fig. • Dodatni fig. and Supplementary Table ). Furthermore, we develop the GPU-based DeepDendrite framework by integrating DHS into CoreNEURON. Finally, as a demonstration of the capacity of DeepDendrite, we present a representative application: examine spine computations in a detailed pyramidal neuron model with 25,000 spines. Further in this section, we elaborate on how we have expanded the DeepDendrite framework to enable efficient training of biophysically detailed neural networks. To explore the hypothesis that dendrites improve robustness against adversarial attacks Pokazujemo da DeepDendrite može podržati i simulacije neuroznanosti i AI-povezane detaljne neuronske mrežne zadatke bez presedana, čime se značajno promiču detaljne simulacije neuroznanosti i potencijalno za buduća istraživanja AI. 55 3 1 4 3 1 56 Decades of efforts have been invested in speeding up the Hines method with parallel methods. Early work mainly focuses on network-level parallelization. In network simulations, each cell independently solves its corresponding linear equations with the Hines method. Network-level parallel methods distribute a network on multiple threads and parallelize the computation of each cell group with each thread , . With network-level methods, we can simulate detailed networks on clusters or supercomputers . In recent years, GPU has been used for detailed network simulation. Because the GPU contains massive computing units, one thread is usually assigned one cell rather than a cell group , , . With further optimization, GPU-based methods achieve much higher efficiency in network simulation. However, the computation inside the cells is still serial in network-level methods, so they still cannot deal with the problem when the “Hines matrix” of each cell scales large. 57 58 59 35 60 61 Cellular-level parallel methods further parallelize the computation inside each cell. The main idea of cellular-level parallel methods is to split each cell into several sub-blocks and parallelize the computation of those sub-blocks , . However, typical cellular-level methods (e.g., the “multi-split” method ) pay less attention to the parallelization strategy. The lack of a fine parallelization strategy results in unsatisfactory performance. To achieve higher efficiency, some studies try to obtain finer-grained parallelization by introducing extra computation operations , , or making approximations on some crucial compartments, while solving linear equations , . 
These finer-grained parallelization strategies achieve higher efficiency but lack the numerical accuracy of the original Hines method.

Unlike previous methods, DHS adopts the finest-grained parallelization strategy, i.e., compartment-level parallelization. By casting the question of how to parallelize as a combinatorial optimization problem, DHS provides an optimal compartment-level parallelization strategy. Moreover, DHS introduces no extra operations or value approximations, so it achieves the lowest computational cost while retaining the numerical accuracy of the original Hines method.

Dendritic spines are the most abundant microstructures in the brain for projection neurons in the cortex, hippocampus, cerebellum, and basal ganglia. Because spines receive most of the excitatory inputs in the central nervous system, the electrical signals generated by spines are the main driving force for large-scale neuronal activity in the forebrain and cerebellum [10,11]. The structure of the spine, with an enlarged spine head and a very thin spine neck, leads to a surprisingly high input impedance at the spine head, which could be up to 500 MΩ according to combined experimental data and detailed compartmental modeling [48,65]. Owing to such high input impedance, a single synaptic input can evoke a "gigantic" EPSP (~20 mV) at the spine-head level [48,66], thereby boosting NMDA currents and ion-channel currents in the spine [11]. However, in classic single detailed compartment models, all spines are replaced by the coefficient F that modifies the dendritic cable geometry [54]. This approach can compensate for the leak and capacitance currents contributed by spines. Nevertheless, it cannot reproduce the high input impedance at the spine head, which may weaken excitatory synaptic inputs, especially NMDA currents, thereby reducing the nonlinearity of the neuron's input-output curve.

On the other hand, the spine's electrical compartmentalization is always accompanied by biochemical compartmentalization [8,52,67], resulting in a drastic increase of internal [Ca2+] within the spine and a cascade of molecular processes involving synaptic plasticity of importance for learning and memory. Intriguingly, the biochemical processes triggered by learning in turn remodel the spine's morphology, enlarging (or shrinking) the spine head or elongating (or shortening) the spine neck, which markedly alters the spine's electrical capacity [67-70]. Such experience-dependent changes in spine morphology, also referred to as "structural plasticity", have been widely observed in vivo in the visual cortex [71,72], somatosensory cortex [73,74], motor cortex [75], hippocampus [9], and basal ganglia [76]. They play a critical role in motor and spatial learning as well as in memory formation. However, because of the computational costs, nearly all detailed network models use the F-factor approach to replace actual spines and are thus unable to explore spine functions at the system level. By taking advantage of our framework and the GPU platform, we can run a few thousand detailed neuron models, each with tens of thousands of spines, on a single GPU while remaining ~100 times faster than the traditional serial method on a single CPU (Fig. 5e). This enables the exploration of structural plasticity in large-scale circuit models across diverse brain regions.

Another critical issue is how to link dendrites to brain functions at the systems/network level.
It is well established that dendrites can perform comprehensive computations on synaptic inputs owing to their enriched ion channels and local biophysical membrane properties [5-7]. For example, cortical pyramidal neurons can carry out sublinear synaptic integration at the proximal dendrite but progressively shift to supralinear integration at the distal dendrite [77]. Moreover, distal dendrites can produce regenerative events such as dendritic sodium spikes, calcium spikes, and NMDA spikes/plateau potentials [6,78]. Such dendritic events are widely observed in vitro in mouse and even human cortical neurons, and they may implement various logical operations [6,79] or gating functions [79,80]. Recently, in vivo recordings in awake or behaving mice have provided strong evidence that dendritic spikes/plateau potentials are crucial for orientation selectivity in the visual cortex [81], sensory-motor integration in the whisker system [82,83], and spatial navigation in the hippocampal CA1 region [84,85].

To establish the causal link between dendrites and the behavioral patterns of animals (including humans), large-scale biophysically detailed neural circuit models are a powerful computational tool. However, running a large-scale detailed circuit model of 10,000-100,000 neurons generally requires the computing power of supercomputers. It is even more challenging to optimize such models against in vivo data, as this requires iterative simulations of the models. The DeepDendrite framework can directly support many state-of-the-art large-scale circuit models [86-88] that were originally developed in NEURON. Moreover, using our framework, a single GPU card such as a Tesla A100 could easily support detailed circuit models of up to 10,000 neurons, providing carbon-efficient and affordable plans for ordinary labs to develop and optimize their own large-scale detailed models.

Recent work on unraveling the roles of dendrites in task-specific learning has achieved remarkable results in two directions: solving challenging tasks, such as the ImageNet image-classification benchmark, with simplified dendritic networks [20-22], and exploring the full learning potential of more realistic neuron models [19,20]. However, there is a trade-off between model size and biological detail, as increases in network scale often come at the expense of neuron-level complexity [20,21,89]. Moreover, more detailed neuron models are less mathematically tractable and more computationally expensive [21].

There has also been progress on the role of active dendrites in ANNs for computer-vision tasks. Iyer et al. [90] proposed a novel ANN architecture with active dendrites, demonstrating competitive results in multi-task and continual learning. Jones and Kording [91] used a binary tree to approximate dendritic branching and provided valuable insight into the impact of tree structure on the computational capability of single neurons. A related study [92] proposed a dendritic normalization rule based on biophysical behavior, offering an interesting perspective on the contribution of dendritic arbor structure to computation. While these studies offer valuable insights, they rely primarily on abstractions derived from spatially extended neurons and do not fully exploit the detailed biological properties and spatial information of dendrites. Further investigation is needed to unveil the potential of leveraging more realistic neuron models for understanding the shared mechanisms underlying brain computation and deep learning.
In response to these challenges, we developed DeepDendrite, a tool that uses the Dendritic Hierarchical Scheduling (DHS) method to drastically reduce computational costs and that incorporates an I/O module and a learning module to handle large datasets. With DeepDendrite, we implemented a three-layer hybrid neural network, the Human Pyramidal Cell Network (HPC-Net) (Fig. 6a, b). This network can be trained efficiently on image-classification tasks, achieving approximately a 25-fold speedup compared with training on a traditional CPU-based platform (Fig. 6f; Supplementary Table 1).

Fig. 6: a Illustration of the Human Pyramidal Cell Network (HPC-Net) for image classification. Images are transformed into spike trains and fed into the network model. Learning is triggered by error signals propagated from the soma to the dendrites. b Training with mini-batches. Multiple networks are simulated simultaneously with different images as inputs. The total weight update ΔW is computed as the average of the ΔWi from each network. c Comparison of the HPC-Net before and after training. Left: visualization of hidden-neuron responses to a specific input before (top) and after (bottom) training. Right: distribution of hidden-layer weights (from the input to the hidden layer) before (top) and after (bottom) training. d Workflow of the transfer adversarial attack experiment. We first generate adversarial samples of the test set on a 20-layer ResNet, then use these adversarial samples (noisy images) to test the classification accuracy of models trained on clean images. e Prediction accuracy of each model on adversarial samples after training for 30 epochs on the MNIST (left) and Fashion-MNIST (right) datasets. f Run time of training and testing the HPC-Net, with batch size 16. Left: run time of training for one epoch. Right: run time of testing. Parallel NEURON + Python: training and testing on a single CPU with multiple cores, using 40-process-parallel NEURON to simulate the HPC-Net and extra Python code to support mini-batch training. DeepDendrite: training and testing the HPC-Net on a single GPU with DeepDendrite.

Additionally, it is widely recognized that the performance of artificial neural networks (ANNs) can be undermined by adversarial attacks [93], i.e., intentionally engineered perturbations devised to mislead ANNs. Intriguingly, an existing hypothesis suggests that dendrites and synapses may innately defend against such attacks [56]. Our experimental results with the HPC-Net lend support to this hypothesis: networks endowed with detailed dendritic structures demonstrated increased resilience to transfer adversarial attacks [94] compared with standard ANNs, as evident on the MNIST [95] and Fashion-MNIST [96] datasets (Fig. 6d, e). This evidence implies that the inherent biophysical properties of dendrites could be pivotal in augmenting the robustness of ANNs against adversarial interference. Nonetheless, further studies are essential to validate these findings on more challenging datasets such as ImageNet [97].

In conclusion, DeepDendrite has shown remarkable potential in image-classification tasks, opening up exciting future directions and possibilities.
To further advance DeepDendrite and the application of biologically detailed dendritic models in AI tasks, we may focus on developing multi-GPU systems and on exploring applications in other domains, such as natural language processing (NLP), where dendritic filtering properties align well with the inherently noisy and ambiguous nature of human language. Challenges include testing scalability on larger-scale problems, understanding performance across various tasks and domains, and addressing the computational complexity introduced by novel biological principles, such as active dendrites. By overcoming these limitations, we can further advance the understanding and capabilities of biophysically detailed dendritic neural networks, potentially uncovering new advantages, enhancing their robustness against adversarial attacks and noisy inputs, and ultimately bridging the gap between neuroscience and modern AI.

Methods

Simulation with DHS
The CoreNEURON simulator [35] (https://github.com/BlueBrain/CoreNeuron) uses the NEURON [25] architecture and is optimized for both memory usage and computational speed. We implemented our Dendritic Hierarchical Scheduling (DHS) method in the CoreNEURON environment by modifying its source code. All models that can be simulated on the GPU with CoreNEURON can also be simulated with DHS by executing the following command:

coreneuron_exec -d /path/to/models -e time --cell-permute 3 --cell-nthread 16 --gpu

The usage options are listed in Table 1.

Accuracy of the simulation using cellular-level parallel computation
To ensure the accuracy of the simulation, we first need to define the correctness of a cellular-level parallel algorithm, i.e., to judge whether it generates solutions identical to proven-correct serial methods such as the Hines method used in the NEURON simulation platform. Based on the theory of parallel computing [34], a parallel algorithm yields a result identical to its corresponding serial algorithm if and only if the data-processing order in the parallel algorithm is consistent with the data dependency of the serial method. The Hines method has two symmetrical phases: triangularization and back-substitution. By analyzing the serial-computing Hines method [55], we find that its data dependency can be formulated as a tree structure in which the nodes represent the compartments of the detailed neuron model. In the triangularization phase, the value of each node depends on its child nodes; conversely, during back-substitution, the value of each node depends on its parent node (Fig. 1d). Thus, nodes on different branches can be computed in parallel, as their values do not depend on one another.

Based on the data dependency of the serial-computing Hines method, we propose three conditions that ensure a parallel method yields solutions identical to the serial-computing Hines method: (1) the tree morphology and the initial values of all nodes are identical to those in the serial Hines method; (2) in the triangularization phase, a node can be processed if and only if all of its child nodes have already been processed; (3) in the back-substitution phase, a node can be processed only if its parent node has already been processed.
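For reference, the serial method these conditions refer to can be sketched in a few lines of Python. This is our minimal illustration of the quasi-tridiagonal solve, with one diagonal entry d[i] per compartment, an upper coefficient a[i] coupling compartment i to its parent, and a lower coefficient b[i] appearing in the parent's row; the array names are ours, not those of the NEURON sources.

def hines_solve(parent, d, a, b, rhs):
    """Serial Hines solve on a tree-structured ("Hines") matrix.

    parent[i] < i gives the parent compartment of i (parent[0] == -1).
    Triangularization walks leaves -> root; back-substitution root -> leaves.
    Overwrites d and rhs; returns rhs as the solution (voltages)."""
    n = len(d)
    for i in range(n - 1, 0, -1):          # triangularization
        p = parent[i]
        f = b[i] / d[i]                    # eliminate x[i] from row p
        d[p] -= f * a[i]
        rhs[p] -= f * rhs[i]
    rhs[0] /= d[0]                         # back-substitution at the root
    for i in range(1, n):
        rhs[i] = (rhs[i] - a[i] * rhs[parent[i]]) / d[i]
    return rhs

The leaves-to-root loop makes the data dependency explicit: row p is touched only after each of its children i, which is exactly condition (2); the reverse loop realizes condition (3).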
Computational cost of a cellular-level parallel computing method
To theoretically evaluate the run time, i.e., the efficiency, of serial and parallel computing methods, we introduce and formulate the concept of computational cost as follows: given a tree T and k threads (basic computational units) to perform triangularization, parallel triangularization is equivalent to dividing the node set V of T into n subsets, i.e., P(V) = {V1, V2, ..., Vn}, where the size of each subset satisfies |Vi| ≤ k, i.e., at most k nodes can be processed at each step because there are only k threads. The triangularization phase follows the order V1 → V2 → ... → Vn, and nodes in the same subset can be processed in parallel. We therefore define |P(V)| (the size of the partition P(V), i.e., n here) as the computational cost of the parallel computing method. In short, the computational cost of a parallel method is the number of steps it takes in the triangularization phase. Because back-substitution is symmetrical with triangularization, the total cost of the entire equation-solving phase is twice that of the triangularization phase.

Mathematical scheduling problem
Based on simulation accuracy and computational cost, we formulate the parallelization problem as a mathematical scheduling problem. Given a tree T = {V, E} and a positive integer k, where V is the node set and E is the edge set, define a partition P(V) = {V1, V2, ..., Vn} with |Vi| ≤ k for 1 ≤ i ≤ n, where |Vi| denotes the cardinality of subset Vi, i.e., the number of nodes in Vi, and for each node v ∈ Vi, all its child nodes {c | c ∈ children(v)} must lie in previous subsets Vj with 1 ≤ j < i. Our goal is to find an optimal partition P*(V) whose computational cost |P*(V)| is minimal.

Here, subset Vi consists of all the nodes that will be computed at the i-th step (Fig. 2e), so |Vi| ≤ k expresses that we can compute at most k nodes at each step because the number of available threads is k. The restriction "for each node v ∈ Vi, all its child nodes {c | c ∈ children(v)} must lie in a previous subset Vj, where 1 ≤ j < i" expresses that a node v can be processed only if all of its child nodes have been processed.
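In display form, the scheduling problem reads as follows (our notation, restating the definitions above):

\begin{aligned}
\text{minimize}\quad & |P(V)| = n \\
\text{subject to}\quad & P(V) = \{V_1, V_2, \dots, V_n\},\;
  \textstyle\bigcup_{i=1}^{n} V_i = V,\; V_i \cap V_j = \emptyset \;(i \neq j), \\
& |V_i| \le k \quad (1 \le i \le n), \\
& \forall\, v \in V_i:\ \mathrm{children}(v) \subseteq V_1 \cup \dots \cup V_{i-1}.
\end{aligned}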
DHS implementation
We aim to find an optimal way to parallelize the computation of solving the linear equations for each neuron model by solving the mathematical scheduling problem above. To obtain the optimal partition, DHS first analyzes the topology and calculates the depth d(v) for every node v ∈ V. The following two steps are then executed iteratively until every node v ∈ V has been assigned to a subset: (1) find all candidate nodes and put them into the candidate set Q; a node is a candidate only if all of its child nodes have been processed or it has no child nodes. (2) If |Q| ≤ k, i.e., the number of candidate nodes is at most the number of available threads, remove all nodes from Q and put them into Vi; otherwise, remove the k deepest nodes from Q and add them to subset Vi. Label these nodes as processed (Fig. 2d). After filling subset Vi, return to step (1) to fill the next subset Vi+1.

Proof of correctness for DHS
After applying DHS to a neural tree T = {V, E}, we obtain a partition P(V) = {V1, V2, ..., Vn} with |Vi| ≤ k for 1 ≤ i ≤ n. Nodes in the same subset are computed in parallel, so triangularization and back-substitution each take n steps. We now demonstrate that the reordering of the computation in DHS yields results identical to the serial Hines method.

The partition P(V) obtained from DHS determines the computation order of all nodes in the neural tree. Below we show that the computation order determined by P(V) satisfies the correctness conditions. P(V) is obtained from the given neural tree T; the operations in DHS modify neither the tree topology nor the values of the tree nodes (the corresponding values in the linear equations), so the tree morphology and the initial values of all nodes are unchanged, which satisfies condition 1: the tree morphology and the initial values of all nodes are identical to those in the serial Hines method. In triangularization, nodes are processed from subset V1 to Vn. As shown in the DHS implementation, all nodes in subset Vi are selected from the candidate set Q, and a node can be put into Q only if all of its child nodes have been processed. Thus, the child nodes of all nodes in Vi lie in {V1, V2, ..., Vi-1}, meaning that a node is computed only after all of its children have been processed, which satisfies condition 2: in triangularization, a node can be processed if and only if all of its child nodes have already been processed. In back-substitution, the computation order is the reverse of triangularization, i.e., from Vn to V1. As shown above, the child nodes of all nodes in Vi lie in {V1, V2, ..., Vi-1}, so the parent nodes of the nodes in Vi lie in {Vi+1, Vi+2, ..., Vn}, which satisfies condition 3: in back-substitution, a node can be processed only if its parent node has already been processed.

Optimality proof for DHS
The idea of the proof is that any other optimal solution can be transformed into the DHS solution without increasing the number of steps it requires, which shows that the DHS solution is optimal. For each subset Vi in P(V), DHS moves the k (thread-count) deepest nodes from the corresponding candidate set Qi to Vi; if the number of nodes in Qi is smaller than k, all nodes are moved from Qi to Vi. To simplify, we introduce ui, the depth sum of the k deepest nodes in Qi. All subsets in P(V) satisfy the max-depth criterion (Supplementary Fig. 6a): the depth sum of the nodes in Vi equals ui. We now prove that selecting the deepest nodes in each iteration yields an optimal partition. Suppose there exists an optimal partition P*(V) = {V*1, V*2, ..., V*s} containing subsets that do not satisfy the max-depth criterion. We can then modify the subsets of P*(V) so that every subset consists of the deepest nodes from its candidate set while the number of subsets |P*(V)| remains unchanged.

Without loss of generality, we start from the first subset V*i that does not satisfy the criterion. There are two possible cases in which V*i violates the max-depth criterion: (1) |V*i| < k while some valid candidate nodes in Qi are not put into V*i; (2) |V*i| = k but the nodes in V*i are not the k deepest nodes in Qi.

For case (1), because some candidate nodes are not put into V*i, these nodes must appear in subsequent subsets. As |V*i| < k, we can move the corresponding nodes from the subsequent subsets into V*i, which does not increase the number of subsets and makes V*i satisfy the criterion (Supplementary Fig. 6b, top). For case (2), |V*i| = k, so the deeper nodes that were not moved from the candidate set into V*i must have been added to subsequent subsets (Supplementary Fig. 6b, bottom). These deeper nodes can be moved from the subsequent subsets into V*i by the following method.
Assume that after filling V*i, a node v was selected while one of the k deepest nodes, v', remains in Qi, so v' will be placed into a subsequent subset V*j (j > i). We first move v from V*i to V*i+1 and then modify subset V*i+1 as follows: if |V*i+1| ≤ k and none of the nodes in V*i+1 is the parent of node v, stop modifying the later subsets. Otherwise, modify V*i+1 as follows (Supplementary Fig. 6c): if the parent node of v is in V*i+1, move this parent node to V*i+2; otherwise, move the node with minimum depth from V*i+1 to V*i+2. After adjusting V*i+1, modify the subsequent subsets V*i+2, ..., V*j-1 with the same strategy. Finally, move v' from V*j to V*i.

With the modification strategy described above, we can replace all shallower nodes in V*i with the k deepest nodes in Qi while keeping the number of subsets, i.e., |P*(V)|, unchanged. We can modify the nodes with the same strategy for all subsets in P*(V) that do not contain the deepest nodes. In the end, all subsets V*i ∈ P*(V) satisfy the max-depth criterion, and |P*(V)| does not change under the modification.

In summary, DHS generates a partition P(V) in which every subset Vi ∈ P(V) satisfies the max-depth criterion, i.e., the depth sum of the nodes in Vi equals ui. Any other optimal partition P*(V) can be modified so that its structure matches that of P(V), i.e., each subset consists of the deepest nodes in the candidate set, while |P*(V)| stays the same. Hence, the partition P(V) obtained from DHS is one of the optimal partitions.

GPU implementation and memory boosting
To achieve high memory throughput, the GPU uses a memory hierarchy of (1) global memory, (2) cache, and (3) registers, where global memory has large capacity but low throughput, while registers have small capacity but high throughput. We aim to boost memory throughput by leveraging this hierarchy. The GPU uses the SIMT (Single-Instruction, Multiple-Thread) architecture, and warps are its basic scheduling units (a warp is a group of 32 parallel threads) [46]. Proper node alignment is essential when batching the computation into warps, to ensure that DHS yields results identical to the serial Hines method. When implementing DHS on the GPU, we first group all cells into multiple warps based on their morphologies, placing cells with similar morphologies in the same warp. We then apply DHS to all neurons, assigning the compartments of each neuron to multiple threads. Because the neurons are grouped into warps, the threads for a given neuron lie in the same warp, and the intrinsic synchronization within a warp keeps the computation order consistent with the data dependency of the serial Hines method. When a warp loads pre-aligned and consecutively stored data from global memory, it can fully exploit the cache, leading to high memory throughput, whereas accessing scattered data reduces it. Therefore, after assigning compartments and reordering threads, we rearrange the data in global memory to match the computation order, so that warps load consecutively stored data when executing the program.

Biophysical models of full-spine and few-spine neurons
We used the published human pyramidal neuron model [51]. The membrane capacitance cm = 0.44 μF cm-2, membrane resistance rm = 48,300 Ω cm2, and axial resistivity ra = 261.97 Ω cm. In this model, all dendrites were modeled as passive cables, while the somas were active. The leak reversal potential El = -83.1 mV.
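These passive parameters translate directly into NEURON's Python API; here is a minimal, hypothetical snippet (not taken from the published model files) showing how they would be set on a section:

from neuron import h

dend = h.Section(name="dend")
dend.Ra = 261.97                 # axial resistivity, ohm*cm
dend.cm = 0.44                   # specific membrane capacitance, uF/cm^2
dend.insert("pas")
for seg in dend:
    seg.pas.g = 1.0 / 48300.0    # leak conductance, S/cm^2 (r_m = 48,300 ohm*cm^2)
    seg.pas.e = -83.1            # leak reversal potential, mV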
Ion channels such as Na+ and K+ channels were inserted at the soma and the initial axon, with reversal potentials ENa = 67.6 mV and EK = -102 mV, respectively. All these parameters were set as in the model of Eyal et al. [51]; for more details, please refer to the published model (ModelDB, accession No. 238347).

In the few-spine model, the membrane capacitance and maximum leak conductance of the dendritic cables more than 60 μm from the soma were multiplied by a spine factor F to approximate the dendritic spines. In this model, F was set to 1.9. Only the spines that received synaptic inputs were explicitly attached to the dendrites.

In the full-spine model, all spines were explicitly attached to the dendrites. We calculated the spine density from the reconstructed neuron of Eyal et al. [51]. The spine density was set to 1.3 μm-1, and each cell contained 24,994 spines on the dendrites more than 60 μm from the soma.

The morphologies and biophysical mechanisms of the spines were the same in the few-spine and full-spine models. The spine-neck length Lneck = 1.35 μm and diameter Dneck = 0.25 μm, while the length and diameter of the spine head were both 0.944 μm, i.e., the spine-head area was set to 2.8 μm2. El = -86 mV. The specific membrane capacitance, membrane resistance, and axial resistivity were the same as those of the dendrites.

Synaptic inputs
We investigated neuronal excitability for both distributed and clustered synaptic inputs. All activated synapses were attached to the tip of the spine head. For distributed inputs, the activated synapses were randomly distributed over all dendrites. For clustered inputs, each cluster consisted of 20 activated synapses uniformly distributed on a single, randomly selected compartment. All synapses were activated simultaneously during the simulation. AMPA-based and NMDA-based synaptic currents were simulated as in Eyal et al.'s work. The AMPA conductance was modeled as a double-exponential function and the NMDA conductance as a voltage-dependent double-exponential function. For the AMPA model, the rise and decay time constants τrise and τdecay were set to 0.3 and 1.8 ms. For the NMDA model, τrise and τdecay were set to 8.019 and 34.9884 ms, respectively. The maximum conductances of AMPA and NMDA were 0.73 nS and 1.31 nS.
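The following Python sketch evaluates these conductance time courses with the parameters just listed. The double-exponential form and its peak normalization follow the standard convention; the magnesium-block factor for the NMDA voltage dependence is our assumption (Jahr-Stevens form), since the text only states that the NMDA conductance is voltage-dependent.

import numpy as np

def double_exp(t, t0, tau_rise, tau_decay, gmax):
    """Double-exponential conductance, normalized so the peak equals gmax."""
    tp = (tau_rise * tau_decay) / (tau_decay - tau_rise) \
         * np.log(tau_decay / tau_rise)                    # time to peak
    norm = 1.0 / (np.exp(-tp / tau_decay) - np.exp(-tp / tau_rise))
    g = gmax * norm * (np.exp(-(t - t0) / tau_decay)
                       - np.exp(-(t - t0) / tau_rise))
    return np.where(t >= t0, g, 0.0)

def mg_block(v, mg=1.0):
    """Assumed Jahr-Stevens magnesium block for the NMDA conductance."""
    return 1.0 / (1.0 + (mg / 3.57) * np.exp(-0.062 * v))

t = np.linspace(0.0, 100.0, 4001)                          # ms
g_ampa = double_exp(t, 10.0, 0.3, 1.8, 0.73)               # nS
g_nmda = double_exp(t, 10.0, 8.019, 34.9884, 1.31) * mg_block(-60.0)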
Background noise
We attached background noise to each cell to simulate a more realistic environment. Noise patterns were implemented as Poisson spike trains with a constant rate of 1.0 Hz. Each pattern started at tstart = 10 ms and lasted until the end of the simulation. We generated 400 noise spike trains for each cell and attached them to randomly selected synapses. The model and specific parameters of the synaptic currents were the same as described in "Synaptic inputs", except that the maximum conductance of NMDA was uniformly distributed from 1.57 to 3.275, resulting in a higher AMPA-to-NMDA ratio.

Exploring neuronal excitability
We investigated the spike probability when multiple synapses were activated simultaneously. For distributed inputs, we tested 14 cases, from 0 to 240 activated synapses. For clustered inputs, we tested 9 cases in total, activating from 0 to 12 clusters, where each cluster consisted of 20 synapses. For each case of both distributed and clustered inputs, we calculated the spike probability over 50 random samples. The spike probability was defined as the ratio of the number of neurons that fired to the total number of samples. All 1150 samples were simulated simultaneously on our DeepDendrite platform, reducing the simulation time from days to minutes.

Performing AI tasks with the DeepDendrite platform
Conventional detailed-neuron simulators lack two functionalities important for modern AI tasks: (1) alternately performing simulations and weight updates without heavy reinitialization, and (2) simultaneously processing multiple stimulus samples in a batch-like manner. Here we present the DeepDendrite platform, which supports both biophysical simulation and deep-learning tasks with detailed dendritic models. DeepDendrite consists of three modules (Supplementary Fig. 5): (1) an I/O module, (2) a DHS-based simulating module, and (3) a learning module. When training a biophysically detailed model on a learning task, users first define the learning rule and then feed all training samples to the detailed model. In each training step, the I/O module picks a specific stimulus and its corresponding teacher signal (if necessary) from the training samples and attaches the stimulus to the network model. The DHS-based simulating module then initializes the model and runs the simulation. After the simulation, the learning module updates all synaptic weights according to the difference between the model's responses and the teacher signals. After training, the learned model can achieve performance comparable to an ANN. The testing phase is similar to training, except that all synaptic weights are fixed.

HPC-Net model
Image classification is a typical task in the field of AI: a model should learn to recognize the content of a given image and output the corresponding label. Here we present the HPC-Net, a network consisting of detailed human pyramidal neuron models that can learn to perform image-classification tasks by utilizing the DeepDendrite platform. The HPC-Net has three layers: an input layer, a hidden layer, and an output layer. Neurons in the input layer receive spike trains converted from images as their input. Hidden-layer neurons receive the output of the input-layer neurons and deliver their responses to the neurons in the output layer. The responses of the output-layer neurons are taken as the final output of the HPC-Net. Neurons in adjacent layers are fully connected.

For each image stimulus, we first convert each normalized pixel into a homogeneous spike train. For the pixel at position (x, y) in the image, the corresponding spike train has a constant interspike interval ISI(x, y) (in ms), which is determined by the pixel value p(x, y) as shown in Eq. (1).

In our experiment, the simulation of each stimulus lasted 50 ms. All spike trains started at 9 + ISI ms and lasted until the end of the simulation. We then attached the spike trains to the input-layer neurons in a one-to-one manner. The synaptic current triggered by a spike arriving at time t0 is given by Eq. (2), where v is the post-synaptic voltage, the reversal potential Esyn = 1 mV, the maximum synaptic conductance gmax = 0.05 μS, and the time constant τ = 0.5 ms.
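The body of Eq. (2) is lost in this copy. A reconstruction consistent with the stated parameters (an assumption on our part; it matches NEURON's standard single-exponential synapse) would be:

I_{\mathrm{syn}}(t) = g_{\max}\, e^{-(t - t_0)/\tau}\,\bigl(v(t) - E_{\mathrm{syn}}\bigr),
\qquad t \ge t_0,

with gmax = 0.05 μS, τ = 0.5 ms, and Esyn = 1 mV as given above.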
The hidden layer contains a group of detailed human pyramidal neuron models51, receiving the somatic voltages of the input layer neurons; all hidden neurons were modeled with passive cables. The specific membrane capacitance c_m = 1.5 μF cm-2, membrane resistance r_m = 48,300 Ω cm², axial resistivity r_a = 261.97 Ω cm, and the reversal potential of all passive cables E_l = 0 mV. Input neurons could make multiple connections to randomly-selected locations on the dendrites of hidden neurons. The synaptic current activated by the k-th synapse of input neuron i on neuron j's dendrite is defined as in Eq. (4), where g_ijk is the synaptic conductance, W_ijk is the synaptic weight, and the synaptic drive is a ReLU-like activation function applied to the somatic voltage of input neuron i at time t.

Neurons in the output layer were also modeled as passive single compartments, and each hidden neuron made only one synaptic connection to each output neuron. All specific parameters were set the same as those of the input neurons. Synaptic currents activated by hidden neurons also take the form of Eq. (4).

Image classification with HPC-Net
For each input image, we first normalized all pixel values to 0.0-1.0, converted the normalized pixels to spike trains, and attached them to the input neurons. The somatic voltages of the output neurons are used to compute the predicted probability of each class, as shown in Eq. (6), where p_i is the probability of the i-th class predicted by HPC-Net, computed from the average somatic voltage of the i-th output neuron between 20 ms and 50 ms, and C is the number of classes, which equals the number of output neurons. The class with the maximum predicted probability is the final classification result. In this paper, we built HPC-Net with 784 input neurons, 64 hidden neurons, and 10 output neurons.

Synaptic plasticity rules for HPC-Net
Inspired by previous work36, we use a gradient-based learning rule to train HPC-Net to perform the image classification task. The loss function is the cross-entropy given in Eq. (7), L = -Σ_i y_i log p_i, where p_i is the predicted probability for class i and y_i indicates the actual class of the stimulus image: y_i = 1 if the input image belongs to class i and y_i = 0 otherwise.

When training HPC-Net, we compute the update for weight W_ijk (the synaptic weight of the k-th synapse connecting neuron i to neuron j) at each time step. After the simulation of each image stimulus, W_ijk is updated as shown in Eq. (8). Specifically, the update accumulates ΔW_ijk(t) over time, scaled by the learning rate, where v_j and v_i are the somatic voltages of neurons j and i, respectively, I_ijk is the k-th synaptic current activated by neuron i on neuron j, g_ijk is its synaptic conductance, R_ijk is the transfer resistance between the compartment on neuron j's dendrite where the synapse attaches and neuron j's soma, and t_s = 30 ms and t_e = 50 ms are the start and end times of the learning window, respectively. For output neurons, the error term can be calculated as shown in Eq. (10). For hidden neurons, the error term is calculated from the error terms of the output layer, as given in Eq. (11).

Since all output neurons are single compartments, the transfer resistance equals the input resistance of the corresponding compartment. Transfer and input resistances are computed by NEURON.

Mini-batch training is a typical method in deep learning for achieving higher prediction accuracy and faster convergence, and DeepDendrite supports it as well. For a batch size N_batch, we make N_batch copies of HPC-Net. During training, each copy is fed a different training sample from the batch, and DeepDendrite first computes the weight update for each copy separately. After all copies in the current batch have been processed, the average weight update is computed and the weights of all copies are updated by the same amount, as sketched below.
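The following is a schematic sketch of that accumulate-then-average scheme, not the authors' implementation. Eqs. (8), (10), and (11) are not reproduced in the text, so `local_grad` merely stands in for the per-synapse term (error term × synaptic current × transfer resistance), and the `net` methods used below (`reset_state`, `step`, `error_terms`, `syn_currents`, `transfer_resistances`) are hypothetical handles onto the simulator state.

```python
# Schematic sketch of mini-batch training over N_batch network copies.
import numpy as np

ETA = 1e-3                        # learning rate (illustrative value)
T_S, T_E, DT = 30.0, 50.0, 0.025  # learning window (ms) and time step (ms)

def local_grad(err, i_syn, r_transfer):
    """Hypothetical per-time-step contribution to dL/dW for every synapse."""
    return err * i_syn * r_transfer

def train_batch(copies, samples):
    """Run each network copy on its own sample, then apply one averaged
    weight update to every copy (mini-batch training)."""
    updates = []
    for net, sample in zip(copies, samples):
        net.reset_state()
        dw = np.zeros_like(net.weights)
        for t in np.arange(0.0, T_E, DT):
            net.step(sample, t)               # advance the simulation by DT
            if t >= T_S:                      # accumulate only inside [t_s, t_e]
                dw += DT * local_grad(net.error_terms(),
                                      net.syn_currents(),
                                      net.transfer_resistances())
        updates.append(dw)
    mean_dw = np.mean(updates, axis=0)        # average over the batch
    for net in copies:
        net.weights -= ETA * mean_dw          # identical update for every copy
```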
Robustness against adversarial attack with HPC-Net
To demonstrate the robustness of HPC-Net, we tested its prediction accuracy on adversarial samples and compared it with an analogous ANN (one with the same 784-64-10 structure and ReLU activation; for a fair comparison, each input neuron in HPC-Net made only one synaptic connection to each hidden neuron). We first trained HPC-Net and the ANN on the original training set (clean images). We then added adversarial noise to the test set and measured prediction accuracy on the noisy test set. We used Foolbox98,99 to generate adversarial noise with the FGSM method93. The ANN was trained with PyTorch100, and HPC-Net was trained with our DeepDendrite. For fairness, we generated the adversarial noise on a significantly different network model, a 20-layer ResNet101, with noise levels ranging from 0.02 to 0.2. We experimented on two typical datasets, MNIST95 and Fashion-MNIST96. The results show that the prediction accuracy of HPC-Net is 19% and 16.72% higher than that of the analogous ANN on the two datasets, respectively.
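The sketch below illustrates this transfer-attack protocol; it is not the authors' script. The noise is crafted with Foolbox's FGSM attack on a surrogate network; the paper uses a 20-layer ResNet, but a small MLP stands in here so the sketch runs standalone, and random tensors stand in for the MNIST test images.

```python
# Sketch of the transfer-attack evaluation under the assumptions above.
import torch
import torch.nn as nn
import foolbox as fb

surrogate = nn.Sequential(nn.Flatten(),
                          nn.Linear(784, 128), nn.ReLU(),
                          nn.Linear(128, 10)).eval()
fmodel = fb.PyTorchModel(surrogate, bounds=(0.0, 1.0))
attack = fb.attacks.FGSM()                     # L-inf fast gradient sign method

images = torch.rand(64, 1, 28, 28)             # placeholder test images
labels = torch.randint(0, 10, (64,))
epsilons = [0.02 * k for k in range(1, 11)]    # noise levels 0.02 .. 0.2

_, adv_batches, _ = attack(fmodel, images, labels, epsilons=epsilons)

def accuracy(model, x, y):
    """Score a classifier on perturbed images; for HPC-Net this step would run
    the DeepDendrite simulation instead of a PyTorch forward pass."""
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

for eps, adv in zip(epsilons, adv_batches):
    print(f"eps={eps:.2f}  surrogate accuracy={accuracy(surrogate, adv, labels):.3f}")
```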
Reporting summary
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.

Data availability
The data supporting the findings of this study are available within the paper, its Supplementary Information, and the Source Data files. The source code and data used to reproduce the results in Figs. 3-6 are available at https://github.com/pkuzyc/DeepDendrite. The MNIST dataset is publicly available at http://yann.lecun.com/exdb/mnist. The Fashion-MNIST dataset is publicly available at https://github.com/zalandoresearch/fashion-mnist. Source data are provided with this paper.

Code availability
The source code of DeepDendrite, together with the models and code used to reproduce Figs. 3-6 of this study, is available at https://github.com/pkuzyc/DeepDendrite.

References
McCulloch, W. S. & Pitts, W. A logical calculus of the ideas immanent in nervous activity.
LeCun, Y., Bengio, Y. & Hinton, G. Deep learning. Nature 521, 436-444 (2015).
Poirazi, P., Brannon, T. & Mel, B. W. Arithmetic of subthreshold synaptic summation in a model CA1 pyramidal cell.
London, M. & Häusser, M. Dendritic computation. Annu. Rev. Neurosci. 28, 503-532 (2005).
Branco, T. & Häusser, M. The single dendritic branch as a fundamental functional unit in the nervous system. Curr. Opin. Neurobiol. 20, 494-502 (2010).
Stuart, G. J. & Spruston, N. Dendritic integration: 60 years of progress. Nat. Neurosci. 18, 1713-1721 (2015).
Poirazi, P. & Papoutsi, A. Illuminating dendritic function with computational models. Nat. Rev. Neurosci. 21, 303-321 (2020).
Yuste, R. & Denk, W. Dendritic spines as basic functional units of neuronal integration. Nature 375, 682-684 (1995).
Engert, F. & Bonhoeffer, T. Dendritic spine changes associated with hippocampal long-term synaptic plasticity. Nature 399, 66-70 (1999).
Yuste, R. Dendritic spines and distributed circuits. Neuron 71, 772-781 (2011).
Yuste, R. Electrical compartmentalization in dendritic spines. Annu. Rev. Neurosci. 36, 429-449 (2013).
Rall, W. Branching dendritic trees and motoneuron membrane resistivity. Exp. Neurol. 1, 491-527 (1959).
Segev, I. & Rall, W. Computational study of an excitable dendritic spine. J. Neurophysiol. 60, 499-523 (1988).
Silver, D. et al. Mastering the game of Go with deep neural networks and tree search. Nature 529, 484-489 (2016).
Silver, D. et al. A general reinforcement learning algorithm that masters chess, shogi and Go through self-play. Science 362, 1140-1144 (2018).
McCloskey, M. & Cohen, N. J. Catastrophic interference in connectionist networks: the sequential learning problem.
French, R. M. Catastrophic forgetting in connectionist networks. Trends Cogn. Sci. 3, 128-135 (1999).
Naud, R. & Sprekeler, H. Sparse bursts optimize information transmission in a multiplexed neural code. Proc. Natl Acad. Sci. USA 115, E6329-E6338 (2018).
Sacramento, J., Costa, R. P., Bengio, Y. & Senn, W. Dendritic cortical microcircuits approximate the backpropagation algorithm. In Advances in Neural Information Processing Systems 31 (NeurIPS, 2018).
Payeur, A., Guerguiev, J., Zenke, F., Richards, B. A. & Naud, R. Burst-dependent synaptic plasticity can coordinate learning in hierarchical circuits.
Bicknell, B. A. & Häusser, M. A synaptic learning rule for exploiting nonlinear dendritic computation. Neuron 109, 4001-4017 (2021).
Moldwin, T., Kalmenson, M. & Segev, I. The gradient clusteron: a model neuron that learns to solve classification tasks via dendritic nonlinearities, structural plasticity, and gradient descent. PLoS Comput. Biol. 17, e1009015 (2021).
Hodgkin, A. L. & Huxley, A. F. A quantitative description of membrane current and its application to conduction and excitation in nerve.
Rall, W. Theory of physiological properties of dendrites. Ann. N. Y. Acad. Sci. 96, 1071-1092 (1962).
Hines, M. L. & Carnevale, N. T. The NEURON simulation environment. Neural Comput. 9, 1179-1209 (1997).
Bower, J. M. & Beeman, D. In The Book of GENESIS: Exploring Realistic Neural Models with the GEneral NEural SImulation System (eds Bower, J. M. & Beeman, D.) 17-27 (Springer New York, 1998).
Hines, M. L., Eichner, H. & Schürmann, F. Neuron splitting in compute-bound parallel network simulations enables runtime scaling with twice as many processors.
Hines, M. L., Markram, H. & Schürmann, F. Fully implicit parallel simulation of single neurons.
Ben-Shalom, R., Liberman, G. & Korngreen, A. Accelerating compartmental modeling on a graphical processing unit.
Tsuyuki, T., Yamamoto, Y. & Yamazaki, T. Efficient numerical simulation of neuron models with spatial structure on graphics processing units. In Proc. 2016 International Conference on Neural Information Processing (eds Hirose, A. et al.) 279-285 (Springer International Publishing, 2016).
Vooturi, D. T., Kothapalli, K. & Bhalla, U. S. Parallelizing Hines matrix solver in neuron simulations on GPU. In Proc. IEEE 24th International Conference on High Performance Computing (HiPC) 388-397 (IEEE, 2017).
Huber, F. Efficient tree solver for Hines matrices on the GPU. Preprint at https://arxiv.org/abs/1810.12742 (2018).
Korte, B. & Vygen, J. Combinatorial Optimization: Theory and Algorithms 6th edn (Springer, 2018).
Gebali, F. Algorithms and Parallel Computing (Wiley, 2011).
Kumbhar, P. et al. CoreNEURON: an optimized compute engine for the NEURON simulator. Front. Neuroinform. 13, 63 (2019).
Urbanczik, R. & Senn, W. Learning by the dendritic prediction of somatic spiking. Neuron 81, 521-528 (2014).
Ben-Shalom, R., Aviv, A., Razon, B. & Korngreen, A. Optimizing ion channel models using a parallel genetic algorithm on graphics processors.
Mascagni, M. A parallelizing algorithm for computing solutions to arbitrarily branched cable neuron models. J. Neurosci. Methods 36, 105-114 (1991).
McDougal, R. A. et al. Twenty years of ModelDB and beyond: building essential modeling tools for the future of neuroscience.
Migliore, M., Messineo, L. & Ferrante, M. Dendritic Ih selectively blocks temporal summation of unsynchronized distal inputs in CA1 pyramidal neurons. J. Comput. Neurosci. 16, 5-13 (2004).
Hemond, P. et al. Distinct classes of pyramidal cells exhibit mutually exclusive firing patterns in hippocampal area CA3b. Hippocampus 18, 411-424 (2008).
Hay, E., Hill, S., Schürmann, F., Markram, H. & Segev, I. Models of neocortical layer 5b pyramidal cells capturing a wide range of dendritic and perisomatic active properties. PLoS Comput. Biol. 7, e1002107 (2011).
Masoli, S., Solinas, S. & D'Angelo, E. Action potential processing in a detailed Purkinje cell model reveals a critical role for axonal compartmentalization.
Lindroos, R. et al. Basal ganglia neuromodulation over multiple temporal and structural scales - simulations of direct pathway MSNs investigate the fast onset of dopaminergic effects and predict the role of Kv4.2.
Migliore, M. et al. Synaptic clusters function as odor operators in the olfactory bulb. Proc. Natl Acad. Sci. USA 112, 8499-8504 (2015).
NVIDIA. CUDA C++ Programming Guide. https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html (2021).
NVIDIA. CUDA C++ Best Practices Guide. https://docs.nvidia.com/cuda/cuda-c-best-practices-guide/index.html (2021).
Harnett, M. T., Makara, J. K., Spruston, N., Kath, W. L. & Magee, J. C. Synaptic amplification by dendritic spines enhances input cooperativity. Nature 491, 599-602 (2012).
Chiu, C. Q. et al. Compartmentalization of GABAergic inhibition by dendritic spines. Science 340, 759-762 (2013).
Tønnesen, J., Katona, G., Rózsa, B. & Nägerl, U. V. Spine neck plasticity regulates compartmentalization of synapses. Nat. Neurosci. 17, 678-685 (2014).
Eyal, G. et al. Human cortical pyramidal neurons: from spines to spikes via models. Front. Cell. Neurosci. 12, 181 (2018).
Koch, C. & Zador, A. The function of dendritic spines: devices subserving biochemical rather than electrical compartmentalization. J. Neurosci. 13, 413-422 (1993).
Koch, C. Dendritic spines. In Biophysics of Computation (Oxford University Press, 1999).
Rapp, M., Yarom, Y. & Segev, I. The impact of parallel fiber background activity on the cable properties of cerebellar Purkinje cells. Neural Comput. 4, 518-533 (1992).
Hines, M. Efficient computation of branched nerve equations. Int. J. Bio-Med. Comput. 15, 69-76 (1984).
Nayebi, A. & Ganguli, S. Biologically inspired protection of deep networks from adversarial attacks. Preprint at https://arxiv.org/abs/1703.09202 (2017).
Goddard, N. H. & Hood, G. Large-scale simulation using parallel GENESIS. In The Book of GENESIS: Exploring Realistic Neural Models with the GEneral NEural SImulation System (eds Bower, J. M. & Beeman, D.) 349-379 (Springer New York, 1998).
Migliore, M., Cannia, C., Lytton, W. W., Markram, H. & Hines, M. L. Parallel network simulations with NEURON.
Lytton, W. W. et al. Simulation neurotechnologies for advancing brain research: parallelizing large networks in NEURON.
Valero-Lara, P. et al. cuHinesBatch: solving multiple Hines systems on GPUs for the Human Brain Project. In Proc. 2017 International Conference on Computational Science 566-575 (IEEE, 2017).
Akar, N. A. et al. Arbor - a morphologically-detailed neural network simulation library for contemporary high-performance computing architectures. In Proc. 27th Euromicro International Conference on Parallel, Distributed and Network-Based Processing (PDP) 274-282 (IEEE, 2019).
Ben-Shalom, R. et al. NeuroGPU: accelerating multi-compartment, biophysically detailed neuron simulations on GPUs.
Rempe, M. J. & Chopp, D. L. A predictor-corrector algorithm for reaction-diffusion equations associated with neural activity on branched structures. SIAM J. Sci. Comput. 28, 2139-2161 (2006).
Kozloski, J. & Wagner, J. An ultrascalable solution to large-scale neural tissue simulation. Front. Neuroinform. 5, 15 (2011).
Jayant, K. et al. Targeted intracellular voltage recordings from dendritic spines using quantum-dot-coated nanopipettes. Nat. Nanotechnol. 12, 335-342 (2017).
Palmer, L. M. & Stuart, G. J. Membrane potential changes in dendritic spines during action potentials and synaptic input. J. Neurosci. 29, 6897-6903 (2009).
Nishiyama, J. & Yasuda, R. Biochemical computation for spine structural plasticity. Neuron 87, 63-75 (2015).
Yuste, R. & Bonhoeffer, T. Morphological changes in dendritic spines associated with long-term synaptic plasticity. Annu. Rev. Neurosci. 24, 1071-1089 (2001).
Holtmaat, A. & Svoboda, K. Experience-dependent structural synaptic plasticity in the mammalian brain. Nat. Rev. Neurosci. 10, 647-658 (2009).
Caroni, P., Donato, F. & Muller, D. Structural plasticity upon learning: regulation and functions.
Keck, T. et al. Massive restructuring of neuronal circuits during functional reorganization of adult visual cortex. Nat. Neurosci. 11, 1162 (2008).
Hofer, S. B., Mrsic-Flogel, T. D., Bonhoeffer, T. & Hübener, M. Experience leaves a lasting structural trace in cortical circuits.
Trachtenberg, J. T. et al. Long-term in vivo imaging of experience-dependent synaptic plasticity in adult cortex. Nature 420, 788-794 (2002).
Marik, S. A., Yamahachi, H., McManus, J. N., Szabo, G. & Gilbert, C. D. Axonal dynamics of excitatory and inhibitory neurons in somatosensory cortex. PLoS Biol. 8, e1000395 (2010).
Xu, T. et al. Rapid formation and selective stabilization of synapses for enduring motor memories. Nature 462, 915-919 (2009).
Albarran, E., Raissi, A., Jáidar, O., Shatz, C. J. & Ding, J. B. Enhancing motor learning by increasing the stability of newly formed dendritic spines in the motor cortex. Neuron 109, 3298-3311 (2021).
Branco, T. & Häusser, M. Synaptic integration gradients in single cortical pyramidal cell dendrites. Neuron 69, 885-892 (2011).
Major, G., Larkum, M. E. & Schiller, J. Active properties of neocortical pyramidal neuron dendrites. Annu. Rev. Neurosci. 36, 1-24 (2013).
Gidon, A. et al. Dendritic action potentials and computation in human layer 2/3 cortical neurons. Science 367, 83-87 (2020).
Doron, M., Chindemi, G., Muller, E., Markram, H. & Segev, I. Timed synaptic inhibition shapes NMDA spikes, influencing local dendritic processing and global I/O properties of cortical neurons.
Du, K. et al. Cell-type-specific inhibition of the dendritic plateau potential in striatal spiny projection neurons. Proc. Natl Acad. Sci. USA 114, E7612-E7621 (2017).
Smith, S. L., Smith, I. T., Branco, T. & Häusser, M. Dendritic spikes enhance stimulus selectivity in cortical neurons in vivo. Nature 503, 115-120 (2013).
Xu, N.-l. et al. Nonlinear dendritic integration of sensory and motor input during an active sensing task. Nature 492, 247-251 (2012).
Takahashi, N., Oertner, T. G., Hegemann, P. & Larkum, M. E. Active cortical dendrites modulate perception. Science 354, 1587-1590 (2016).
Sheffield, M. E. & Dombeck, D. A. Calcium transient prevalence across the dendritic arbour predicts place field properties. Nature 517, 200-204 (2015).
Markram, H. et al. Reconstruction and simulation of neocortical microcircuitry. Cell 163, 456-492 (2015).
Billeh, Y. N. et al. Systematic integration of structural and functional data into multi-scale models of mouse primary visual cortex. Neuron 106, 388-403 (2020).
Hjorth, J. et al. The microcircuits of striatum in silico. Proc. Natl Acad. Sci. USA 117, 9554-9565 (2020).
Guerguiev, J., Lillicrap, T. P. & Richards, B. A. Towards deep learning with segregated dendrites. eLife 6, e22901 (2017).
Iyer, A. et al. Avoiding catastrophe: active dendrites enable multi-task learning in dynamic environments.
Jones, I. S. & Kording, K. P. Might a single neuron solve interesting machine learning problems through successive computations on its dendritic tree? Neural Comput. 33, 1554-1571 (2021).
Bird, A. D., Jedlicka, P. & Cuntz, H. Dendritic normalisation improves learning in sparsely connected artificial neural networks. PLoS Comput. Biol. 17, e1009202 (2021).
Goodfellow, I. J., Shlens, J. & Szegedy, C. Explaining and harnessing adversarial examples. In Proc. 3rd International Conference on Learning Representations (ICLR, 2015).
Papernot, N., McDaniel, P. & Goodfellow, I. Transferability in machine learning: from phenomena to black-box attacks using adversarial samples. Preprint at https://arxiv.org/abs/1605.07277 (2016).
LeCun, Y., Bottou, L., Bengio, Y. & Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 86, 2278-2324 (1998).
Xiao, H., Rasul, K. & Vollgraf, R. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. Preprint at http://arxiv.org/abs/1708.07747 (2017).
Bartunov, S. et al. Assessing the scalability of biologically-motivated deep learning algorithms and architectures. In Advances in Neural Information Processing Systems 31 (NeurIPS, 2018).
Rauber, J., Brendel, W. & Bethge, M. Foolbox: a Python toolbox to benchmark the robustness of machine learning models. In Reliable Machine Learning in the Wild Workshop, 34th International Conference on Machine Learning (2017).
Rauber, J., Zimmermann, R., Bethge, M. & Brendel, W. Foolbox Native: fast adversarial attacks to benchmark the robustness of machine learning models in PyTorch, TensorFlow, and JAX.
Paszke, A. et al. PyTorch: an imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32 (NeurIPS, 2019).
He, K., Zhang, X., Ren, S. & Sun, J. Deep residual learning for image recognition. In Proc. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 770-778 (IEEE, 2016).

Acknowledgements
The authors sincerely thank Dr. Rita Zhang, Daochen Shi, and colleagues at NVIDIA for valuable technical support on GPU computing. This work was supported by the National Key R&D Program of China (No. 2020AAA0130400) to K.D. and T.H., the National Natural Science Foundation of China (No. 6182588102) to Y.T., the Swedish Research Council (VR-M-2020-01652), the Swedish e-Science Research Centre (SeRC), the EU/Horizon 2020 Key-Area Research Program (No. 2018B030338001) to T.H., and the National Natural Science Foundation of China (No. 61825101) to Y.T.

This article is licensed under a CC BY 4.0 Deed (Attribution 4.0 International) license.