Authors: Yichen Zhang, Gan He, Lei Ma, Xiaofei Liu, J. J. Johannes Hjorth, Alexander Kozlov, Yutao He, Shenjian Zhang, Jeanette Hellgren Kotaleski, Yonghong Tian, Sten Grillner, Kai Du, Tiejun Huang

Abstract

Biophysically detailed multi-compartment models are powerful tools to explore computational principles of the brain and also serve as a theoretical framework to generate algorithms for artificial intelligence (AI) systems. However, the expensive computational cost severely limits their applications in both the neuroscience and AI fields. The major bottleneck when simulating detailed compartment models is the ability of a simulator to solve large systems of linear equations. Here we present the Dendritic Hierarchical Scheduling (DHS) method, which parallelizes this step without loss of accuracy; the resulting GPU-based implementation runs 2-3 orders of magnitude faster than the classic serial Hines method on a CPU platform.

Introduction

Understanding the coding and computational principles of neurons is a central goal of neuroscience. Mammalian brains are composed of more than thousands of different types of neurons with unique morphological and biophysical properties. Dendritic spines, tiny protrusions on the dendrites of neurons, receive most of the excitatory synaptic inputs and compartmentalize the resulting synaptic signals.

Simulations using biologically detailed neurons provide a theoretical framework for linking such biological details to computational principles. In the biophysically detailed multi-compartment framework, neurons are modeled with realistic dendritic morphologies, intrinsic ionic conductances, and extrinsic synaptic inputs. Cable theory describes the passive membrane properties of dendrites as electrical cables and captures how currents spread through complex neuronal processes. By combining cable theory with active biophysical mechanisms such as ion channels and excitatory and inhibitory synaptic currents, a detailed multi-compartment model can probe cellular and subcellular neuronal computations beyond experimental limitations.

Biologically detailed neuron models have recently drawn attention not only in neuroscience but also in AI, where neuronal structure and biophysical detail are largely absent: the prevalent technique in the modern AI field is artificial neural networks (ANNs) consisting of point neurons, an analog of biological neural networks.
Although ANNs with the "backpropagation-of-error" (backprop) algorithm achieve remarkable performance in specialized applications, even beating top human professional players in games of Go and chess, their point neurons capture little of the morphology and biophysics of real neurons. Dendritic integration, a major source of single-neuron computational power, is absent from such networks, and detailed multi-compartment models that do capture synaptic integration on dendrites remain rare in AI because of their computational cost.

That cost is dominated by the linear algebra. The membrane potentials of a multi-compartment model are governed by a large system of linear equations that must be solved at every time step. General Gaussian elimination requires O(n^3) operations, whereas the Hines method exploits the tree topology of the neuron to reduce the cost to O(n). Nevertheless, the Hines method is inherently serial within each neuron, so solving the linear equations still places a heavy load on the whole simulation as models scale up.

Fig. 1: A layer-5 pyramidal neuron model is reconstructed and formulated mathematically; simulating it requires solving linear equations at every time step. Under the Hines method, the data dependency of this solve forms a tree over the compartments (the Hines matrix), which the serial method processes one node at a time (left), whereas parallel methods distribute the compartments of a single neuron over multiple processing units (middle, right).

Over the past decades, cellular-level parallel methods have been developed that split the Hines computation of a single cell across processing units. However, existing cellular-level parallel methods often lack an efficient parallelization strategy or lack sufficient numerical accuracy compared to the original Hines method.
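As background, the system being solved has a well-known quasi-tridiagonal structure (this is the standard textbook formulation of the Hines scheme; the symbols below are ours, not this paper's). Implicit discretization of the cable equation couples each compartment i only to its parent p(i) and its children, giving one row per compartment:

a_i \, v_{p(i)} + d_i \, v_i + \sum_{j :\, p(j) = i} b_j \, v_j = r_i .

Triangularization eliminates each child into its parent via d_{p(i)} \leftarrow d_{p(i)} - b_i a_i / d_i and r_{p(i)} \leftarrow r_{p(i)} - b_i r_i / d_i, and back-substitution then recovers v_i = (r_i - a_i v_{p(i)}) / d_i from the root down. The child-before-parent and parent-before-child dependencies that any parallel method must respect are read directly off these two sweeps.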
Here we propose Dendritic Hierarchical Scheduling (DHS), an automatic, numerically accurate, and highly efficient method for parallelizing the equation-solving phase. Drawing on parallel computing theory, we cast the division of a neuron into parallel work units as a scheduling problem and derive an optimal schedule. We further optimize the implementation for GPUs, explicitly exploiting the GPU memory hierarchy. With these optimizations, DHS achieves a 60-1500x speedup over the serial Hines method used in the NEURON simulator.

We integrated DHS into CoreNEURON, the GPU computing engine of the NEURON simulator, to build DeepDendrite, a GPU-based framework for simulating and training biophysically detailed models. DeepDendrite, the full-spine models, and the detailed dendritic network models used in this work are available online (Code Availability). Our open-source learning framework can also be integrated with other dendritic learning rules, such as rules for nonlinear (fully active) dendrites and burst-dependent synaptic plasticity.

Results

Dendritic Hierarchical Scheduling (DHS) method

Computing ionic currents and solving linear equations are two critical phases when simulating biophysically detailed neurons; both are time-consuming and pose severe computational burdens. Fortunately, computing the ionic currents of each compartment is a fully independent process, so it can be naturally parallelized on devices with massive parallel computing units such as GPUs (Fig. 1a-f). Solving the linear equations, in contrast, is hard to parallelize because of the data dependency between compartments. To attack this bottleneck, cellular-level parallel methods accelerate single-cell computation by "splitting" a single cell into several blocks of compartments that can be computed in parallel. However, such splits have typically been chosen heuristically, which is less efficient for neurons with asymmetrical morphologies, e.g., pyramidal neurons and Purkinje neurons.
We therefore looked for a way to choose the split automatically, so that the computation within each neuron is parallelized while the exact arithmetic of the Hines method is preserved (Fig. 1g). To balance simulation accuracy and computational cost, we formulate the parallelization problem as a mathematical scheduling problem (Methods). In simple terms, we view a single neuron as a tree with many nodes (compartments) and ask how k parallel threads can process all nodes in as few steps as possible without violating the dependency between parents and children.

The DHS method proceeds in two steps, dendritic topology analysis and best-partition finding: (1) given a detailed model, we build its dependency tree and calculate the depth of each node (the depth of a node is the number of its ancestor nodes) on the tree (Fig. 2a); (2) after topology analysis, we iteratively collect the candidate nodes, i.e., those whose children have all been processed, and select the deepest ones, at most k per step, until every node has been assigned (Fig. 2b, c).

Following this workflow, DHS turns the serial node-processing order of the Hines method into a short sequence of parallel steps: in the illustrated example it reduces 14 serial steps to 5 parallel steps (Fig. 2d). To quantify efficiency, we define the relative cost of DHS as its computational cost divided by that of the serial Hines method; the lower the relative cost, the greater the acceleration.
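To make the two steps concrete, the scheduling loop can be sketched in a few lines of Python (our own illustration; the production implementation lives in the modified CoreNEURON source, and the function and variable names here are invented):

from collections import defaultdict

def dhs_partition(parent, k):
    """Greedy DHS scheduling. parent[i] is the parent of node i (-1 for the
    root), with parent[i] < i. Returns subsets; subset t holds the nodes
    processed in parallel at step t of triangularization."""
    n = len(parent)
    children = defaultdict(list)
    for i, p in enumerate(parent):
        if p >= 0:
            children[p].append(i)
    depth = [0] * n
    for i in range(1, n):
        if parent[i] >= 0:
            depth[i] = depth[parent[i]] + 1     # number of ancestors
    remaining = [len(children[i]) for i in range(n)]
    candidates = [i for i in range(n) if remaining[i] == 0]   # the leaves
    subsets = []
    while candidates:
        candidates.sort(key=lambda i: depth[i], reverse=True)
        step, candidates = candidates[:k], candidates[k:]     # k deepest
        subsets.append(step)
        for i in step:                  # a parent becomes a candidate once
            p = parent[i]               # its last child has been processed
            if p >= 0:
                remaining[p] -= 1
                if remaining[p] == 0:
                    candidates.append(p)
    return subsets

The computational cost of the schedule is len(subsets), the quantity that the optimality proof in Methods shows to be minimal.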
Fig. 2: The DHS workflow. The dependency tree of a model is analyzed and node depths are computed (a); at every step the deepest candidate nodes are selected, at most k at a time (b, c); the resulting subsets determine which nodes each thread processes at every step (d, e); (f) computational cost of DHS on the tested morphologies as a function of thread count.

We applied the DHS method to six representative neuron models selected from ModelDB, including cortical and hippocampal pyramidal neurons, striatal projection neurons (SPNs), and mitral cells. For each model we measured the computational cost of DHS with different numbers of threads per cell (Fig. 2f). DHS substantially reduced the cost for all tested morphologies, but the benefit saturated as the thread count grew, suggesting that adding more threads does not improve performance further because of the dependencies between compartments (Fig. 2f).

DHS thus automatically analyzes the dendritic topology and generates an optimal partition for parallel computing. It is worth noting that DHS finds the optimal partition before the simulation starts, so no additional computation is needed while solving the equations at run time.

GPU memory boosting

DHS splits each neuron over multiple threads, and GPUs supply massive numbers of such threads through their streaming processors (SPs; Fig. 3a, b). In theory, mapping DHS threads onto GPU SPs should therefore yield highly efficient simulation (Fig. 3c). However, we consistently observed that the efficiency of DHS decreased significantly as the network size grew, which could result from scattered data storage and the extra memory accesses caused by loading and writing intermediate results (Fig. 3d).

Fig. 3: GPU architecture and its memory hierarchy. A GPU contains massive processing units (stream processors) organized into streaming multiprocessors (SMs); each SM contains multiple streaming processors, registers, and an L1 cache. The figure illustrates DHS applied to two neurons with four threads each, together with the memory optimization strategy: the top panels show the thread assignment and data storage of DHS before (left) and after (right) memory boosting.
Bottom, an example of a single step of triangularization when simulating two neurons: the 32 threads of a warp read and write the data of the nodes processed in that step, so the layout of those data in global memory determines how many memory transactions the step costs.

To address this problem we propose GPU memory boosting, which optimizes data storage according to the GPU memory hierarchy and its access mechanism. Because a warp loads consecutive global-memory addresses in a single transaction, we rearrange the per-node data so that the nodes processed in the same step by the threads of a warp are stored contiguously, and we keep intermediate results close to the computing units instead of writing them back to scattered locations.

We first verified the approach on pyramidal neuron models with fully modeled spines as well as on the typical neuron models (Fig. 3e, f). To quantify the performance of DHS with GPU memory boosting, we then selected the six typical neuron models and simulated networks of increasing size, running DHS with 4 threads (DHS-4) and 16 threads (DHS-16) per cell. Across network sizes and cell types, DHS with memory boosting consistently outperformed both plain DHS and the cell-level parallel method used in CoreNEURON, and the advantage was largest for models with many compartments.
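The effect of memory boosting can be imitated with a toy re-layout (our illustration in NumPy; real coalescing gains depend on the hardware): nodes that the threads of a warp process in the same step are re-indexed so that their coefficients sit at consecutive addresses.

import numpy as np

def boost_layout(subsets, d, a, b, r, parent):
    """Permute per-node arrays so the nodes computed in the same DHS step
    are contiguous, turning one step's loads into coalesced accesses."""
    perm = np.array([i for step in subsets for i in step])  # step-major order
    remap = np.empty_like(perm)
    remap[perm] = np.arange(len(perm))          # old index -> new index
    new_parent = np.where(parent[perm] >= 0, remap[parent[perm]], -1)
    return new_parent, d[perm], a[perm], b[perm], r[perm]

The parent pointers are remapped with the same permutation so that the solver still walks the correct tree.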
DHS retained its advantage on models with active dendrites and under alternative splitting strategies (Supplementary Figs.).

Fig. 4: Performance and partition visualization. CoreNEURON: CoreNEURON's cell-level parallel method; DHS-4: DHS with 4 threads per neuron; DHS-16: DHS with 16 threads per neuron. Panels b and c visualize the DHS-4 and DHS-16 partitions, with each color marking the compartments mapped to one thread.

DHS generates cell-type-specific optimal partitioning

To examine the working mechanism of the DHS method, we visualized the partitioning process by mapping the compartments assigned to each thread onto the cell morphology (Fig. 4b, c). The partitions are cell-type specific: for morphologically symmetric neurons such as the striatal projection neuron (SPN) and the mitral cell, the threads divide the dendritic tree into branches of similar size (Fig. 4b), whereas for morphologically asymmetrical neurons such as pyramidal neurons and the Purkinje cell, DHS produces fine-grained, unbalanced divisions that nevertheless equalize the load across threads (Fig. 4c).

DHS and memory boosting thus provide a theoretically proven optimal solution for solving the linear equations in parallel with unprecedented efficiency. Using this principle, we built the open-access DeepDendrite platform, which neuroscientists can use to implement models without any specific GPU programming knowledge. Next, we demonstrate how DeepDendrite can be used in neuroscience tasks; we discuss the potential of the framework for AI-related tasks in the Discussion section.

DHS enables spine-level modeling

Dendritic spines cover the dendrites of cortical and hippocampal pyramidal neurons, striatal projection neurons, and other principal cells, and their morphology and plasticity are central to synaptic transmission, learning, and memory. Because a single neuron can carry tens of thousands of spines, each adding compartments to the linear system that must be solved at every time step, explicitly modeling every spine has so far been computationally prohibitive. Consequently, a model with fully distributed spines on its dendrites (a "full-spine model") is rarely simulated directly; instead, the spine membrane area is usually folded into the dendritic cables with a correction coefficient, the spine factor (F-factor).

Inspired by the previous work of Eyal et al., we investigated how different spatial patterns of excitatory inputs formed on dendritic spines shape neuronal activities in a human pyramidal neuron model with explicitly modeled spines (Fig. 5a). Eyal et al. used the F-factor to incorporate spines into the dendrites, while only a few activated spines were explicitly attached to the dendrites (the "few-spine model" in Fig. 5a).
The value of F in their model was computed from the dendritic area and spine area in the reconstructed data. Accordingly, we calculated the spine density from their reconstructed data to make our full-spine model consistent with Eyal's few-spine model. With the spine density set to 1.3 μm-1, the pyramidal neuron model contained about 25,000 spines without altering the model's original morphological and biophysical properties. We then repeated the previous experimental protocols with both the full-spine and the few-spine model, using the same synaptic input as in Eyal's work but attaching extra background noise to each sample. By comparing the somatic traces of the two models (Fig. 5b, c), we found that the full-spine model is much leakier than the few-spine model. In addition, the spike probability triggered by the activation of clustered spines appeared to be more nonlinear in the full-spine model (the solid blue line in Fig. 5d) than in the few-spine model (the dashed blue line in Fig. 5d). These results indicate that the conventional F-factor method may underestimate the impact of dense spines on dendritic excitability and nonlinearity.

Fig. 5: (a) Experiment setup. We examine two major types of models: few-spine models and full-spine models. Few-spine models (two on the left) incorporate the spine area globally into the dendrites and only attach individual spines together with activated synapses. In full-spine models (two on the right), all spines are explicitly attached over the whole dendrites. We explore the effects of clustered and randomly distributed synaptic inputs on the few-spine and full-spine models, respectively. (b) Somatic voltages recorded for the cases in (a); the colors of the voltage curves correspond to (a); scale bar: 20 ms, 20 mV. (c) Color-coded voltages during the simulation in (a) at specific times; colors indicate the magnitude of the voltage. (d) Somatic spike probability as a function of the number of simultaneously activated synapses (as in Eyal et al.'s work) for the four cases in (a), with background noise attached. (e) Run time of the experiments with different simulation methods. NEURON: conventional NEURON simulator running on a single CPU core. CoreNEURON: CoreNEURON simulator on a single GPU. DeepDendrite: DeepDendrite on a single GPU.

On the DeepDendrite platform, both the full-spine and the few-spine model achieved an 8-fold speedup compared to CoreNEURON on the GPU platform and a 100-fold speedup compared to serial NEURON on the CPU platform (Fig. 5e; Supplementary Table 1) while producing identical simulation results (Supplementary Figs. 4 and 8). The DHS method thus makes it practical to study dendritic excitability with realistic spine anatomy.

Discussion

In this work, we propose the DHS method to parallelize the computation of the Hines method, and we mathematically demonstrate that DHS provides an optimal solution without any loss of precision. We implement DHS on the GPU hardware platform and use GPU memory boosting to refine it (Fig. 3). When simulating a large number of neurons with complex morphologies, DHS with memory boosting achieves a 15-fold speedup (Supplementary Table 1) compared to the GPU method used in CoreNEURON and up to a 1500-fold speedup compared to the serial Hines method on the CPU platform (Fig. 4; Supplementary Fig. 3; Supplementary Table 1). We integrated DHS into CoreNEURON to build the GPU-based DeepDendrite framework.
We then demonstrated the capability of DeepDendrite with a representative neuroscience application: analyzing the effect of spine inputs in a pyramidal neuron model with 25,000 explicitly modeled spines. In this section we also describe how the DeepDendrite framework extends to learning in biophysically detailed neural networks: we train our network on typical image classification tasks. We show that DeepDendrite can support both neuroscience simulations and AI-related detailed neural network tasks with unprecedented speed, therefore significantly promoting detailed neuroscience simulations and, potentially, future AI explorations.

Decades of effort have been invested in speeding up the Hines method with parallel methods. Early work mainly focused on network-level parallelization. In network simulations, each cell independently solves its corresponding linear equations with the Hines method. Network-level parallel methods distribute a network over multiple threads and parallelize the computation across cell groups, one group per thread. With network-level methods, we can simulate detailed networks on clusters or supercomputers. In recent years, GPUs have been used for detailed network simulation; because a GPU contains massive computing units, one thread is usually assigned one cell rather than a cell group. With further optimization, GPU-based methods achieve much higher efficiency in network simulation. However, the computation inside each cell is still serial in network-level methods, so they cannot cope when the Hines matrix of each cell scales large.

Cellular-level parallel methods further parallelize the computation inside each cell. Their main idea is to split each cell into several sub-blocks and parallelize the computation of those sub-blocks. However, typical cellular-level methods (e.g., the "multi-split" method) pay less attention to the parallelization strategy, and the lack of a fine strategy results in unsatisfactory performance. To achieve higher efficiency, some studies obtain finer-grained parallelization by introducing extra computational operations or by making approximations at some crucial compartments while solving the linear equations. These finer-grained strategies achieve higher efficiency but lack the numerical accuracy of the original Hines method.

Unlike previous methods, DHS adopts the finest-grained parallelization strategy, i.e., compartment-level parallelization. By modeling the problem of "how to parallelize" as a combinatorial optimization problem, DHS provides an optimal compartment-level parallelization strategy. Moreover, DHS introduces no extra operations or value approximations, so it achieves the lowest computational cost while retaining the numerical accuracy of the original Hines method.

Dendritic spines are the most abundant microstructures in the brain, covering the projection neurons of the cortex, hippocampus, cerebellum, and basal ganglia. As spines receive most of the excitatory inputs in the central nervous system, electrical signals generated by spines are the main driving force of large-scale neuronal activities in the forebrain and cerebellum.
The structure of the spine, with an enlarged spine head connected to the dendrite by a very thin spine neck, leads to surprisingly high input impedance at the spine head, up to 500 MΩ according to estimates combining experimental data with the detailed compartment modeling approach. Due to such high input impedance, a single synaptic input can evoke a "gigantic" EPSP (~20 mV) at the spine-head level, thereby boosting NMDA currents and ion channel currents in the spine. However, classical detailed compartment models omit explicit spines, folding them into the dendrites with a coefficient that modifies the dendritic cable geometries. This approach may compensate for the leak currents and capacitance currents of spines, but it cannot reproduce the high input impedance at the spine head, which may weaken excitatory synaptic inputs, particularly NMDA currents, thereby reducing the nonlinearity of the neuron's input-output curve. Our modeling results are in line with this interpretation.

On the other hand, the spine's electrical compartmentalization is always accompanied by biochemical compartmentalization, and spine morphology changes with activity and experience. Such experience-dependent changes in spine morphology, also referred to as "structural plasticity", have been widely observed in vivo in the visual cortex, somatosensory cortex, motor cortex, hippocampus, and basal ganglia. They play a critical role in motor and spatial learning as well as in memory formation. However, due to the computational costs, nearly all detailed network models exploit the F-factor approach to replace actual spines and are thus unable to explore spine functions at the system level. By taking advantage of our framework and the GPU platform, we can run a few thousand detailed neuron models, each with tens of thousands of spines, on a single GPU, while running ~100 times faster than the traditional serial method on a single CPU (Fig. 5e). This enables the exploration of structural plasticity in large-scale circuit models across diverse brain regions.

Another critical issue is how to link dendrites to brain functions at the systems/network level. It is well established that dendrites can perform comprehensive computations on synaptic inputs thanks to enriched ion channels and local biophysical membrane properties. For example, cortical pyramidal neurons can carry out sublinear synaptic integration at the proximal dendrite but progressively shift to supralinear integration at the distal dendrite. Moreover, distal dendrites can produce regenerative events such as dendritic sodium spikes, calcium spikes, and NMDA spikes/plateau potentials. Such dendritic events are widely observed in mouse and even human cortical neurons in vitro, where they allow single neurons to perform logical operations or gating functions. Recently, in vivo recordings in awake or behaving mice have provided strong evidence that dendritic spikes/plateau potentials are crucial for orientation selectivity in the visual cortex, sensory-motor integration in the whisker system, and spatial navigation in the hippocampal CA1 region.
To establish the causal link between dendrites and animal (including human) patterns of behavior, large-scale biophysically detailed neural circuit models are a powerful computational tool. However, running a large-scale detailed circuit model of 10,000-100,000 neurons generally requires the computing power of supercomputers, and it is even more challenging to optimize such models against in vivo data, as this needs iterative simulations of the models. The DeepDendrite framework can directly support many state-of-the-art large-scale circuit models that were initially developed in NEURON. Moreover, with our framework, a single GPU card such as a Tesla A100 can support detailed circuit models of up to 10,000 neurons, thereby providing a carbon-efficient and affordable way for ordinary labs to develop and optimize their own large-scale detailed models.

Recent work on unraveling the dendritic roles in task-specific learning has achieved remarkable results in two directions: solving challenging tasks, such as the ImageNet image-classification dataset, with simplified dendritic networks, and exploring the full learning potential of more realistic neuron models. However, there is a trade-off between model size and biological detail, as gains in network scale are often bought by sacrificing neuron-level complexity. Moreover, more detailed neuron models are less mathematically tractable and computationally more expensive.

There has also been progress on the role of active dendrites in ANNs for computer vision tasks. Iyer et al. proposed a novel ANN architecture with active dendrites, demonstrating competitive results in multi-task and continual learning. Jones and Kording used a binary tree to approximate dendritic branching and provided valuable insights into the influence of tree structure on a single neuron's computational capacity. Bird et al. proposed a dendritic normalization rule based on biophysical behavior, offering an interesting perspective on the contribution of dendritic arbor structure to computation. While these studies offer valuable insights, they primarily rely on abstractions derived from spatially extended neurons and do not fully exploit the detailed biological properties and spatial information of dendrites. Further investigation is needed to unveil the potential of more realistic neuron models for understanding the shared mechanisms underlying brain computation and deep learning.

In response to these challenges, we developed DeepDendrite, a tool that uses the Dendritic Hierarchical Scheduling (DHS) method to significantly reduce computational costs and incorporates an I/O module and a learning module to handle large datasets. With DeepDendrite, we implemented a three-layer hybrid neural network, the Human Pyramidal Cell Network (HPC-Net) (Fig. 6a, b). This network trains efficiently on image classification tasks, achieving approximately 25 times speedup compared to training on a traditional CPU-based platform (Fig. 6f; Supplementary Table 1).

Fig. 6: (a, b) HPC-Net (Human Pyramidal Cell Network) for image classification. Images are transformed into spike trains and fed into the network model; learning is triggered by error signals propagated from the somas to the dendrites. Training proceeds with mini-batches: multiple networks are simulated simultaneously with different images as inputs.
The total weight update ΔW is computed as the average of the ΔWi from each network copy. (c) Workflow of the transfer adversarial attack experiment: we first generate adversarial samples of the test set on a 20-layer ResNet, then use these adversarial samples (noisy images) to test the classification accuracy of models trained on clean images. (d, e) Classification accuracy of models trained for 30 epochs on the MNIST (d) and Fashion-MNIST (e) datasets. (f) Run time of training and testing for the HPC-Net with batch size 16; left, run time of training one epoch; right, run time of testing. Parallel NEURON + Python: training and testing on a single CPU with multiple cores, using 40-process-parallel NEURON to simulate the HPC-Net and extra Python code to support mini-batch training. DeepDendrite: training and testing the HPC-Net on a single GPU with DeepDendrite.

Additionally, it is widely recognized that the performance of artificial neural networks (ANNs) can be undermined by adversarial attacks, whereas dendrites and synapses with rich biophysical properties have been hypothesized to confer robustness. Our experimental results with the HPC-Net lend support to this hypothesis: networks endowed with detailed dendritic structures demonstrated some increased resilience to transfer adversarial attacks compared to standard ANNs, as evident on the MNIST and Fashion-MNIST datasets (Fig. 6d, e). This implies that the inherent biophysical properties of dendrites could be pivotal in augmenting the robustness of ANNs against adversarial interference. Nonetheless, further studies are needed to validate these findings on more challenging datasets such as ImageNet.

Methods

Simulating with DHS

The CoreNEURON simulator (https://github.com/BlueBrain/CoreNeuron) uses the NEURON architecture and is optimized for both memory usage and computational speed. We implemented our Dendritic Hierarchical Scheduling (DHS) method in the CoreNEURON environment by modifying its source code. All models that can be simulated on GPU with CoreNEURON can also be simulated with DHS by executing the following command:

coreneuron_exec -d /path/to/models -e time --cell-permute 3 --cell-nthread 16 --gpu

Here --cell-permute 3 selects the DHS permutation and --cell-nthread sets the number of threads per cell; the configurations used in our experiments are listed in Supplementary Table 1.
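For models driven from NEURON's Python interface, the stock CoreNEURON handoff looks as follows (standard NEURON >= 7.8 API; the DHS mode itself, --cell-permute 3, exists only in the authors' modified CoreNEURON build, so this snippet shows only the unmodified route):

from neuron import h, coreneuron

h.load_file("stdrun.hoc")
h.cvode.cache_efficient(1)   # data layout required by CoreNEURON
coreneuron.enable = True     # hand the simulation off to CoreNEURON
coreneuron.gpu = True        # run the solver on the GPU
pc = h.ParallelContext()
pc.set_maxstep(10)
h.finitialize(-65)
pc.psolve(100)               # simulate 100 ms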
Accuracy of the simulation using cellular-level parallel computation

To ensure the accuracy of the simulation, we first need to define the correctness of a cellular-level parallel algorithm, i.e., to judge whether it generates solutions identical to those of proven correct serial methods, such as the Hines method used in the NEURON simulation platform. Based on the theory of parallel computing, a parallel algorithm yields a result identical to that of its corresponding serial algorithm if and only if the data processing order in the parallel algorithm is consistent with the data dependency of the serial method.

The Hines method has two symmetrical phases: triangularization and back-substitution. By analyzing the serial Hines method, we find that its data dependency can be formulated as a tree structure in which the nodes represent the compartments of the detailed neuron model. In the triangularization phase, the value of each node depends on its children; conversely, during back-substitution, the value of each node depends on its parent (Fig. 1d). Thus nodes on different branches can be computed in parallel, as their values are not dependent on each other.

Based on the data dependency of the serial Hines method, we propose three conditions that together guarantee that a parallel method yields solutions identical to those of the serial Hines method: (1) the tree morphology and the initial values of all nodes are identical to those in the serial Hines method; (2) in the triangularization phase, a node can be processed if and only if all of its children have already been processed; (3) in the back-substitution phase, a node can be processed only if its parent has already been processed. Once a parallel computing method satisfies these three conditions, it produces solutions identical to those of the serial method.
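Both phases, and the dependencies behind conditions (2) and (3), can be read off a reference implementation of the serial solver (a pedagogical Python sketch of the Hines scheme, not the CoreNEURON code; it assumes parent[i] < i with node 0 as the root):

def hines_solve(parent, d, a, b, r):
    """Serial Hines solver. Row i couples node i to parent[i]:
    a[i]*v[parent[i]] + d[i]*v[i] + sum of b[j]*v[j] over children j = r[i]."""
    n = len(parent)
    # triangularization: children are eliminated before their parent,
    # so node i may be processed only after all of its children
    for i in range(n - 1, 0, -1):
        p = parent[i]
        f = b[i] / d[i]
        d[p] -= f * a[i]
        r[p] -= f * r[i]
    # back-substitution: a node needs its parent's value first
    v = [0.0] * n
    v[0] = r[0] / d[0]
    for i in range(1, n):
        v[i] = (r[i] - a[i] * v[parent[i]]) / d[i]
    return v

Any reordering that keeps each child's elimination before its parent's (and each parent's substitution before its children's) performs exactly the same arithmetic, which is what the three conditions formalize.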
Computational cost of a cellular-level parallel computing method

To theoretically evaluate the run time, i.e., efficiency, of serial and parallel computing methods, we introduce the concept of computational cost as follows. Given a tree T and k threads (basic computational units) to perform triangularization, parallel triangularization amounts to dividing the node set V of T into an ordered sequence of n subsets, P(V) = {V1, V2, …, Vn}, where the size of each subset satisfies |Vi| ≤ k, i.e., at most k nodes can be processed in each step since there are only k threads. The triangularization phase follows the order V1 → V2 → … → Vn, and nodes in the same subset are processed in parallel. We therefore define |P(V)| (the number of subsets, i.e., n) as the computational cost of the parallel computing method: the number of steps it takes in the triangularization phase. Because back-substitution is symmetrical to triangularization, the total cost of the entire equation-solving phase is twice the cost of the triangularization phase.

Mathematical scheduling problem

Based on the notions of simulation accuracy and computational cost, we formulate the parallelization problem as a mathematical scheduling problem. Given a tree T = {V, E} and a positive integer k, where V is the node set and E is the edge set, define a partition P(V) = {V1, V2, …, Vn} with |Vi| ≤ k for 1 ≤ i ≤ n, where |Vi| denotes the cardinality of subset Vi, i.e., the number of nodes in Vi, such that for each node v ∈ Vi all of its children {c | c ∈ children(v)} lie in previous subsets Vj with 1 ≤ j < i. Our goal is to find an optimal partition P*(V) whose computational cost |P*(V)| is minimal.

Here, subset Vi contains the nodes processed at the i-th step (Fig. 2e), so |Vi| ≤ k expresses that at most k nodes can be computed per step because only k threads are available, and the restriction on children guarantees that a node is processed only after all of its children have been processed.

DHS implementation

We aim to find an optimal way to parallelize the computation of solving the linear equations of each neuron model by solving the mathematical scheduling problem above. To obtain the optimal partition, DHS first analyzes the topology and calculates the depth d(v) of every node v ∈ V. Then the following two steps are executed iteratively until every node v ∈ V has been assigned to a subset: (1) find all candidate nodes and put them into the candidate set Q, where a node is a candidate if all of its children have already been processed; (2) if |Q| ≤ k, i.e., the number of candidates does not exceed the number of available threads, remove all nodes from Q and put them into Vi; otherwise remove the k deepest nodes from Q and put them into Vi. Label these nodes as processed (Fig. 2d). After filling subset Vi, return to step (1) to fill the next subset Vi+1.

Correctness proof for DHS

Applying DHS to a neuron tree T = {V, E}, we obtain a partition P(V) = {V1, V2, …, Vn} with |Vi| ≤ k for 1 ≤ i ≤ n. Nodes in the same subset are computed in parallel, so triangularization and back-substitution each take n steps. We now show that the reordering of the computation in DHS yields a result identical to that of the serial Hines method, by verifying the three correctness conditions. First, the operations in DHS modify neither the tree topology nor the values of the tree nodes (the corresponding values in the linear equations), so the tree morphology and the initial values of all nodes are unchanged, satisfying condition 1. Second, in triangularization, nodes are processed from subset V1 to Vn; as shown in the implementation of DHS, all nodes in subset Vi are selected from the candidate set Q, and a node enters Q only when all of its children have been processed. The children of all nodes in Vi therefore lie in {V1, V2, …, Vi-1}, meaning that a node is computed only after all of its children, satisfying condition 2. Third, in back-substitution the computation order is reversed, from Vn to V1; since the children of the nodes in Vi lie in {V1, V2, …, Vi-1}, the parents of the nodes in Vi lie in {Vi+1, …, Vn}, so a node is processed only after its parent, satisfying condition 3.
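Conditions (2) and (3), together with the thread bound, are mechanical to check for any proposed partition, while condition (1) holds trivially for DHS because it only reorders the computation. A small validator (our illustration):

def is_valid_partition(subsets, parent, k):
    """True iff every subset holds at most k nodes, every node is scheduled
    exactly once, and every node comes after all of its children (the
    back-substitution order is simply the reverse)."""
    step_of = {}
    for t, subset in enumerate(subsets):
        if len(subset) > k:
            return False                  # more nodes than threads in a step
        for node in subset:
            step_of[node] = t
    if len(step_of) != len(parent):
        return False                      # missed or duplicated nodes
    return all(p < 0 or step_of[i] < step_of[p]   # child strictly earlier
               for i, p in enumerate(parent))

Partitions produced by the dhs_partition sketch above pass this check by construction.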
Optimality proof for DHS

The idea of the proof is that any other optimal solution can be transformed into the DHS solution without increasing the number of steps it requires, which shows that the DHS solution is optimal.

We say that a subset Vi satisfies the max-depth criteria if it consists of the deepest nodes of its candidate set Qi, i.e., it contains min(|Qi|, k) candidates and no candidate left outside Vi is deeper than a node inside it (Supplementary Fig. 6a). By construction, every subset produced by DHS satisfies the max-depth criteria. We now show that, given any optimal partition P*(V) = {V*1, V*2, …, V*s} containing subsets that do not satisfy the max-depth criteria, we can modify the subsets of P*(V) so that all of them consist of the deepest nodes of their candidate sets while the computational cost |P*(V)| remains the same.

Consider a subset V*i that does not satisfy the max-depth criteria. There are two possible cases: (1) |V*i| < k and there exist valid candidate nodes in Qi that were not put into V*i; (2) |V*i| = k but the nodes in V*i are not the deepest nodes of Qi.

For case (1), because some candidate nodes were not put into V*i even though |V*i| < k, we can move the corresponding nodes from subsequent subsets into V*i; this does not increase the number of subsets and makes V*i satisfy the criteria (Supplementary Fig. 6b).

For case (2), since |V*i| = k, the deeper candidates that were not moved from the candidate set into V*i must appear in subsequent subsets (Supplementary Fig. 6b). We can then modify P*(V) as follows. Assume that after filling V*i, node v is among the k deepest nodes of Qi but was not put into V*i, so that v lies in a subsequent subset V*j (j > i), while a shallower node v′ occupies V*i. We first move v′ from V*i to V*i+1 and then adjust V*i+1: if |V*i+1| ≤ k still holds, stop modifying the later subsets; otherwise (Supplementary Fig. 6c), if the parent node of v′ is in V*i+1, move this parent to V*i+2, and if not, move the node with minimum depth from V*i+1 to V*i+2. After adjusting V*i+1, modify the subsequent subsets V*i+2, V*i+3, … with the same strategy. Finally, move v from V*j to V*i.

With the modification strategy described above, we can replace all shallower nodes in V*i with the deepest nodes of Qi while keeping the number of subsets, i.e., |P*(V)|, unchanged. Applying the same strategy to every subset of P*(V) that does not contain the deepest nodes, all subsets V*i ∈ P*(V) come to satisfy the max-depth criteria with |P*(V)| unchanged.
Hence, for any optimal partition P*(V), we can modify its subsets to give it the same structure as the DHS partition P(V), i.e., each subset consists of the deepest nodes of its candidate set, while keeping |P*(V)| the same. The partition P(V) obtained from DHS is therefore one of the optimal partitions.

GPU implementation and memory boosting

GPU global memory offers large capacity but low throughput, whereas on-chip caches and registers are small but fast. GPUs follow the SIMT (single-instruction, multiple-thread) architecture, and warps are the basic scheduling units on the GPU (a warp is a group of 32 parallel threads). When the threads of a warp access consecutive global-memory addresses, the accesses are coalesced into a few transactions, whereas scattered accesses require many transactions. Memory boosting therefore permutes the per-node data so that the nodes processed by the threads of a warp in the same step are stored contiguously, and keeps intermediate results in fast on-chip storage wherever possible.

Full-spine and few-spine biophysical models

We used the human pyramidal neuron model of Eyal et al. The membrane capacitance cm = 0.44 μF cm-2, membrane resistance rm = 48,300 Ω cm2, and axial resistivity ra = 261.97 Ω cm. In this model, all dendrites were modeled as passive cables, while the somas were active. The leak reversal potential El = -83.1 mV. Ion channels such as Na+ and K+ channels were inserted at the soma and the initial axon, with reversal potentials ENa = 67.6 mV and EK = -102 mV, respectively. All these parameters were set as in the model of Eyal et al.; for more details please refer to the published model (ModelDB, accession No. 238347).

In the few-spine model, the membrane capacitance and maximum leak conductance of the dendritic cables more than 60 μm from the soma were multiplied by an F-factor of 1.9, and only the spines receiving activated synaptic inputs were explicitly attached to the dendrites.

In the full-spine model, all spines were explicitly attached to the dendrites. We calculated the spine density from the reconstructed neuron of Eyal et al.; with the spine density set to 1.3 μm-1, each cell contained 24,994 spines on the dendrites more than 60 μm from the soma.

Morphologies and biophysical mechanisms of the spines were the same in the few-spine and full-spine models. Each spine consisted of a neck, modeled as a cylinder of length Lneck = 1.35 μm and diameter Dneck = 0.25 μm, and a head with length and diameter 0.944 μm, giving a head membrane area of about 2.8 μm2. The leak reversal potential of the spines was El = -86 mV; their specific membrane capacitance, membrane resistance, and axial resistivity were the same as those of the dendrites.
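Both variants are straightforward to express with NEURON's Python API. The sketch below (our illustration: section names are invented, but the numerical values are the ones quoted above) scales a dendritic cable by the F-factor for the few-spine case and attaches one explicit neck-plus-head spine for the full-spine case:

from neuron import h

F = 1.9  # spine factor for dendritic cables > 60 um from the soma

dend = h.Section(name="dend")
dend.Ra = 261.97
dend.cm = 0.44 * F                   # few-spine: fold spine membrane in
dend.insert("pas")
for seg in dend:
    seg.pas.g = F / 48300.0          # g_pas = 1/rm, scaled by F
    seg.pas.e = -83.1

def attach_spine(parent_sec, x):
    """Full-spine variant: one explicit spine at location x of parent_sec."""
    neck = h.Section(name="spine_neck")
    neck.L, neck.diam = 1.35, 0.25
    head = h.Section(name="spine_head")
    head.L = head.diam = 0.944       # cylinder, membrane area ~2.8 um^2
    for sec in (neck, head):
        sec.Ra = 261.97
        sec.cm = 0.44
        sec.insert("pas")
        for seg in sec:
            seg.pas.g = 1.0 / 48300.0
            seg.pas.e = -86.0
    neck.connect(parent_sec(x))
    head.connect(neck(1))
    return neck, head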
Synaptic inputs

We investigated neuronal excitability for both distributed and clustered synaptic inputs; all activated synapses were attached to the tip of the spine head. For distributed inputs, the activated synapses were randomly distributed over all dendrites. For clustered inputs, each cluster consisted of 20 activated synapses uniformly distributed on a single, randomly selected compartment. All synapses were activated simultaneously during the simulation.

AMPA-based and NMDA-based synaptic currents were simulated as in Eyal et al.'s work: the AMPA conductance was modeled as a double-exponential function and the NMDA conductance as a voltage-dependent double-exponential function. For the AMPA model, τrise and τdecay were set to 0.3 and 1.8 ms; for the NMDA model, τrise and τdecay were 8.019 and 34.9884 ms, respectively. The maximum conductances of AMPA and NMDA were 0.73 nS and 1.31 nS.
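These conductance waveforms have a simple closed form. A plain-Python sketch (our notation; the magnesium-block factor is the commonly used Jahr-Stevens expression, an assumption on our part rather than a detail stated in the text):

import math

def dual_exp_g(t, gmax, tau_rise, tau_decay):
    """Double-exponential conductance, peak-normalized to gmax (times in ms)."""
    if t < 0:
        return 0.0
    tp = (tau_rise * tau_decay) / (tau_decay - tau_rise) \
         * math.log(tau_decay / tau_rise)                 # time of the peak
    norm = 1.0 / (math.exp(-tp / tau_decay) - math.exp(-tp / tau_rise))
    return gmax * norm * (math.exp(-t / tau_decay) - math.exp(-t / tau_rise))

def g_ampa(t):                                   # gmax = 0.73 nS
    return dual_exp_g(t, 0.73, 0.3, 1.8)

def g_nmda(t, v_mV, mg_mM=1.0):                  # gmax = 1.31 nS
    block = 1.0 / (1.0 + math.exp(-0.062 * v_mV) * mg_mM / 3.57)
    return block * dual_exp_g(t, 1.31, 8.019, 34.9884)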
Background noise

We also attached background noise to the cells to make the simulations more realistic. Noise started at tstart = 10 ms and lasted until the end of the simulation. We generated 400 noise spike trains for each cell and attached them to randomly-selected synapses. The model and specific parameters of the synaptic currents were the same as described in Synaptic inputs, except for the maximum NMDA conductance, which was set to 1.57-3.275 nS, giving a larger AMPA-to-NMDA ratio.

Exploring neuronal excitability

We investigated the spike probability when multiple synapses were activated simultaneously. For distributed inputs, we tested 14 cases, from 0 to 240 activated synapses. For clustered inputs, we tested 9 cases in total, activating from 0 to 12 clusters, respectively; each cluster consisted of 20 synapses. For each case in both distributed and clustered inputs, we calculated the spike probability with 50 random samples. Spike probability was defined as the ratio of the number of neurons that fired to the total number of samples. All 1150 samples were simulated simultaneously on our DeepDendrite platform, reducing the simulation time from days to minutes.

The DeepDendrite platform

Conventional neuron simulators lack two functionalities required by learning tasks: (1) alternating simulation and weight updates without heavy reinitialization, and (2) simultaneously processing multiple stimulus samples in a batch-like manner. Hence we present the DeepDendrite platform, which supports both biophysical simulation and performing deep learning tasks with detailed dendritic models. DeepDendrite consists of three modules (Supplementary Fig. ): (1) an I/O module; (2) a DHS-based simulating module; and (3) a learning module. When training a biophysically detailed model to perform learning tasks, users first define the learning rule, then feed all training samples to the detailed model for learning. In each step during training, the I/O module picks a specific stimulus and its corresponding teacher signal (if necessary) from all training samples and attaches the stimulus to the network model. Then, the DHS-based simulating module initializes the model and starts the simulation. After simulation, the learning module updates all synaptic weights according to the difference between model responses and teacher signals. After training, the learned model can achieve performance comparable to an ANN. The testing phase is similar to training, except that all synaptic weights are fixed.
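The training workflow described above can be summarized as a schematic loop. All names in this sketch are hypothetical placeholders for illustration, not DeepDendrite's actual API:

```python
# Schematic of one training epoch on the platform described above.
# Assumed object model: `model` wraps the detailed network, `samples`
# yields (stimulus, teacher) pairs, and `learning_rule` maps responses
# and teacher signals to a per-synapse weight change.
def train_epoch(model, samples, learning_rule):
    for stimulus, teacher in samples:
        model.attach_stimulus(stimulus)   # I/O module: pick and attach a sample
        model.initialize()                # DHS-based module: reset model state
        responses = model.simulate()      # run the biophysical simulation
        for syn in model.synapses:        # learning module: update weights
            syn.weight += learning_rule(responses, teacher, syn)

def test(model, samples):
    # Testing mirrors training but leaves all synaptic weights fixed.
    outputs = []
    for stimulus, _ in samples:
        model.attach_stimulus(stimulus)
        model.initialize()
        outputs.append(model.simulate())
    return outputs
```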
HPC-Net model

We built a network model, HPC-Net, out of neuron models with passive dendrites (described below) to perform image classification tasks.

HPC-Net has three layers, i.e., an input layer, a hidden layer, and an output layer. The neurons in the input layer receive spike trains converted from images as their input. Hidden-layer neurons receive the output of input-layer neurons and deliver their responses to neurons in the output layer. The responses of the output-layer neurons are taken as the final output of HPC-Net. Neurons between adjacent layers are fully connected.

Each input image is converted to a set of spike trains, one per input neuron. Each input neuron fires regularly with an inter-spike interval ISI(x, y) (in ms) that is determined by the corresponding pixel value p(x, y), as in Eq. (1). In our experiments, the simulation of each stimulus lasted 50 ms. Each spike train started at t = ISI ms and lasted until the end of the simulation. Then we attached all spike trains to the input-layer neurons in a one-to-one manner. Each spike arriving at time t0 triggers a synaptic current on the post-synaptic neuron, where v is the post-synaptic voltage and Esyn is the reversal potential; Esyn = 1 mV, the maximum synaptic conductance gmax = 0.05 μS, and the time constant τ = 0.5 ms.

Neurons in the input layer were modeled as passive single-compartment models. The parameters were: membrane capacitance cm = 1.0 μF cm-2, membrane resistance rm = 10^4 Ω cm2, axial resistivity ra = 100 Ω cm, and passive reversal potential El = 0 mV.

Neurons in the hidden and output layers had passive dendrites whose parameters were derived from the human pyramidal neuron model (ref. 51): membrane capacitance cm = 1.5 μF cm-2, membrane resistance rm = 48,300 Ω cm2, axial resistivity ra = 261.97 Ω cm, and reversal potential of all passive cables El = 0 mV. Synapses from the previous layer were attached to their dendrites. The conductance gijk of the k-th synapse of the i-th input neuron on neuron j's dendrite is defined as in Eq. (4), where wijk is the synaptic weight, f is the ReLU-like somatic activation function, and vi is the somatic voltage of the i-th input neuron at time t.

Image classification with HPC-Net

The prediction of HPC-Net is defined as in Eq. (6), where Pi is the probability of the i-th class predicted by HPC-Net, computed from the somatic voltage of the i-th output neuron from 20 ms to 50 ms, and C is the number of classes.

Synaptic plasticity rule in HPC-Net

We trained HPC-Net with a gradient-based synaptic plasticity rule (ref. 36). The loss L is defined as in Eq. (7), where Pi is the predicted probability of the i-th class and yi is the corresponding entry of the one-hot label, i.e., yi = 1 for the target class and yi = 0 otherwise.

During training, the weight wijk of the k-th synapse of the i-th presynaptic neuron on neuron j's dendrite is updated as in Eq. (8), where vj(t) is the somatic voltage of neuron j at time t, gijk(t) is the synaptic conductance defined above, and Rijk(t) describes the response of neuron j's soma to that synapse, as given in Eqs. (10) and (11). Updates are accumulated over the time window from ts = 30 ms to te = 50 ms.

Mini-batch training is a typical method in deep learning for achieving higher prediction accuracy and accelerating convergence. DeepDendrite also supports mini-batch training. When training HPC-Net with mini-batch size N, DeepDendrite maintains N copies of HPC-Net. During training, each copy is fed a different training sample from the batch. DeepDendrite first computes the weight update for each copy separately. After all copies in the current training batch are done, the average weight update is calculated and the weights in all copies are updated by this same amount.
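In pseudocode form, the mini-batch step amounts to averaging the per-copy updates and applying the same increment to every copy. A minimal sketch (the shapes and magnitudes are illustrative only, not HPC-Net's actual dimensions):

```python
import numpy as np

def minibatch_step(weights, per_copy_updates):
    """weights: shared weight array; per_copy_updates: list of N arrays,
    one weight-update tensor per copy of the network."""
    mean_update = np.mean(per_copy_updates, axis=0)  # average over the batch
    weights += mean_update                           # same amount for all copies
    return weights

w = np.zeros((64, 784))                              # e.g., a hidden-layer weight matrix
updates = [np.random.randn(64, 784) * 1e-3 for _ in range(8)]   # N = 8 copies
w = minibatch_step(w, updates)
```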
Robustness to adversarial attacks

To demonstrate the robustness of HPC-Net, we tested its prediction accuracy on adversarial samples and compared it with that of an analogous ANN (a 784-64-10 network with ReLU activations; for a fair comparison with our HPC-Net, each pair of neurons is connected by only one synapse). We trained both HPC-Net and the ANN on the original training set (clean images), then tested them on images corrupted by adversarial noise generated with the FGSM method (ref. 93) using the Foolbox toolbox (refs 98, 99). The ANN was trained with PyTorch (ref. 100), and HPC-Net was trained with our DeepDendrite. To keep the attack unbiased, we generated the adversarial noise on a significantly different network model, a 20-layer ResNet (ref. 101). With noise intensities ranging from 0.02 to 0.2 on the MNIST (ref. 95) and Fashion-MNIST (ref. 96) data sets, HPC-Net's prediction accuracy was 19% and 16.72% higher, respectively, than that of the analogous ANN.
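FGSM itself is a one-step perturbation along the sign of the input gradient of the loss. The following is a framework-free illustration with stand-in data; the study's actual pipeline used Foolbox attacks on a trained ResNet-20, which is not reproduced here:

```python
import numpy as np

def fgsm(x, grad_x, eps):
    """x: input image in [0, 1]; grad_x: gradient of the loss w.r.t. x;
    eps: noise intensity (the text tests eps in 0.02-0.2)."""
    x_adv = x + eps * np.sign(grad_x)
    return np.clip(x_adv, 0.0, 1.0)    # keep pixel values in a valid range

x = np.random.rand(28, 28)             # stand-in 28x28 image
g = np.random.randn(28, 28)            # stand-in loss gradient
x_adv = fgsm(x, g, eps=0.1)
```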
Reporting summary

Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.

Data availability

The data generated in this study, including the data shown in Figs. 3-6, are available at https://github.com/pkuzyc/DeepDendrite. The MNIST data set is publicly available at http://yann.lecun.com/exdb/mnist. The Fashion-MNIST data set is publicly available at https://github.com/zalandoresearch/fashion-mnist.

Code availability

The source code of DeepDendrite, together with the models and code for reproducing Figs. 3-6 of this study, is available at https://github.com/pkuzyc/DeepDendrite.

References

1. McCulloch, W. S. & Pitts, W. A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys. 5, 115-133 (1943).
2. LeCun, Y., Bengio, Y. & Hinton, G. Deep learning. Nature 521, 436-444 (2015).
3. Poirazi, P., Brannon, T. & Mel, B. W. Pyramidal neuron as two-layer neural network. Neuron 37, 977-987 (2003).
4. London, M. & Häusser, M. Dendritic computation. Annu. Rev. Neurosci. 28, 503-532 (2005).
5. Branco, T. & Häusser, M. The single dendritic branch as a fundamental functional unit in the nervous system. Curr. Opin. Neurobiol. 20, 494-502 (2010).
6. Stuart, G. J. & Spruston, N. Dendritic integration: 60 years of progress. Nat. Neurosci. 18, 1713-1721 (2015).
7. Poirazi, P. & Papoutsi, A. Illuminating dendritic function with computational models. Nat. Rev. Neurosci. 21, 303-321 (2020).
8. Yuste, R. & Denk, W. Dendritic spines as basic functional units of neuronal integration. Nature 375, 682-684 (1995).
9. Engert, F. & Bonhoeffer, T. Dendritic spine changes associated with hippocampal long-term synaptic plasticity. Nature 399, 66-70 (1999).
10. Yuste, R. Dendritic spines and distributed circuits. Neuron 71, 772-781 (2011).
11. Yuste, R. Electrical compartmentalization in dendritic spines. Annu. Rev. Neurosci. 36, 429-449 (2013).
12. Rall, W. Branching dendritic trees and motoneuron membrane resistivity. Exp. Neurol. 1, 491-527 (1959).
13. Segev, I. & Rall, W. Computational study of an excitable dendritic spine. J. Neurophysiol. 60, 499-523 (1988).
14. Silver, D. et al. Mastering the game of Go with deep neural networks and tree search. Nature 529, 484-489 (2016).
15. Silver, D. et al. A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. Science 362, 1140-1144 (2018).
16. McCloskey, M. & Cohen, N. J. Catastrophic interference in connectionist networks: the sequential learning problem. Psychol. Learn. Motiv. 24, 109-165 (1989).
17. French, R. M. Catastrophic forgetting in connectionist networks. Trends Cogn. Sci. 3, 128-135 (1999).
18. Naud, R. & Sprekeler, H. Sparse bursts optimize information transmission in a multiplexed neural code. Proc. Natl Acad. Sci. USA 115, E6329-E6338 (2018).
19. Sacramento, J., Costa, R. P., Bengio, Y. & Senn, W. Dendritic cortical microcircuits approximate the backpropagation algorithm. In Advances in Neural Information Processing Systems 31 (NeurIPS 2018) (NeurIPS, 2018).
20. Payeur, A., Guerguiev, J., Zenke, F., Richards, B. A. & Naud, R. Burst-dependent synaptic plasticity can coordinate learning in hierarchical circuits. Nat. Neurosci. 24, 1010-1019 (2021).
21. Bicknell, B. A. & Häusser, M. A synaptic learning rule for exploiting nonlinear dendritic computation. Neuron 109, 4001-4017 (2021).
22. Moldwin, T., Kalmenson, M. & Segev, I. The gradient clusteron: a model neuron that learns to solve classification tasks via dendritic nonlinearities, structural plasticity, and gradient descent. PLoS Comput. Biol. 17, e1009015 (2021).
23. Hodgkin, A. L. & Huxley, A. F. A quantitative description of membrane current and its application to conduction and excitation in nerve. J. Physiol. 117, 500-544 (1952).
24. Rall, W. Theory of physiological properties of dendrites. Ann. N. Y. Acad. Sci. 96, 1071-1092 (1962).
25. Hines, M. L. & Carnevale, N. T. The NEURON simulation environment. Neural Comput. 9, 1179-1209 (1997).
26. Bower, J. M. & Beeman, D. The Book of GENESIS: Exploring Realistic Neural Models with the General Neural Simulation System 17-27 (Springer New York, 1998).
27. Hines, M. L., Eichner, H. & Schürmann, F. Neuron splitting in compute-bound parallel network simulations enables runtime scaling with twice as many processors. J. Comput. Neurosci. 25, 203-210 (2008).
28. Hines, M. L., Markram, H. & Schürmann, F. Fully implicit parallel simulation of single neurons. J. Comput. Neurosci. 25, 439-448 (2008).
29. Ben-Shalom, R., Liberman, G. & Korngreen, A. Accelerating compartmental modeling on a graphical processing unit. Front. Neuroinform. 7, 4 (2013).
30. Tsuyuki, T., Yamamoto, Y. & Yamazaki, T. Efficient numerical simulation of neuron models with spatial structure on graphics processing units. In Proc. 2016 International Conference on Neural Information Processing (eds Hirose, A. et al.) 279-285 (Springer International Publishing, 2016).
31. Vooturi, D. T., Kothapalli, K. & Bhalla, U. S. Parallelizing Hines matrix solver in neuron simulations on GPU. In Proc. IEEE 24th International Conference on High Performance Computing (HiPC) 388-397 (IEEE, 2017).
32. Huber, F. Efficient tree solver for Hines matrices on the GPU. Preprint at https://arxiv.org/abs/1810.12742 (2018).
33. Korte, B. & Vygen, J. Combinatorial Optimization: Theory and Algorithms 6th edn (Springer, 2018).
34. Gebali, F. Algorithms and Parallel Computing (Wiley, 2011).
35. Kumbhar, P. et al. CoreNEURON: an optimized compute engine for the NEURON simulator. Front. Neuroinform. 13, 63 (2019).
36. Urbanczik, R. & Senn, W. Learning by the dendritic prediction of somatic spiking. Neuron 81, 521-528 (2014).
37. Ben-Shalom, R., Aviv, A., Razon, B. & Korngreen, A. Optimizing ion channel models using a parallel genetic algorithm on graphical processors. J. Neurosci. Methods 206, 183-194 (2012).
38. Mascagni, M. A parallelizing algorithm for computing solutions to arbitrarily branched cable neuron models. J. Neurosci. Methods 36, 105-114 (1991).
39. McDougal, R. A. et al. Twenty years of ModelDB and beyond: building essential modeling tools for the future of neuroscience. J. Comput. Neurosci. 42, 1-10 (2017).
40. Migliore, M., Messineo, L. & Ferrante, M. Dendritic Ih selectively blocks temporal summation of unsynchronized distal inputs in CA1 pyramidal neurons. J. Comput. Neurosci. 16, 5-13 (2004).
41. Hemond, P. et al. Distinct classes of pyramidal cells exhibit mutually exclusive firing patterns in hippocampal area CA3b. Hippocampus 18, 411-424 (2008).
42. Hay, E., Hill, S., Schürmann, F., Markram, H. & Segev, I. Models of neocortical layer 5b pyramidal cells capturing a wide range of dendritic and perisomatic active properties. PLoS Comput. Biol. 7, e1002107 (2011).
43. Masoli, S., Solinas, S. & D'Angelo, E. Action potential processing in a detailed Purkinje cell model reveals a critical role for axonal compartmentalization. Front. Cell. Neurosci. 9, 47 (2015).
44. Lindroos, R. et al. Basal ganglia neuromodulation over multiple temporal and structural scales—simulations of direct pathway MSNs investigate the fast onset of dopaminergic effects and predict the role of Kv4.2. Front. Neural Circuits 12, 3 (2018).
45. Migliore, M. et al. Synaptic clusters function as odor operators in the olfactory bulb. Proc. Natl Acad. Sci. USA 112, 8499-8504 (2015).
46. NVIDIA. CUDA C++ Programming Guide. https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html (2021).
47. NVIDIA. CUDA C++ Best Practices Guide. https://docs.nvidia.com/cuda/cuda-c-best-practices-guide/index.html (2021).
48. Harnett, M. T., Makara, J. K., Spruston, N., Kath, W. L. & Magee, J. C. Synaptic amplification by dendritic spines enhances input cooperativity. Nature 491, 599-602 (2012).
49. Chiu, C. Q. et al. Compartmentalization of GABAergic inhibition by dendritic spines. Science 340, 759-762 (2013).
50. Tønnesen, J., Katona, G., Rózsa, B. & Nägerl, U. V. Spine neck plasticity regulates compartmentalization of synapses. Nat. Neurosci. 17, 678-685 (2014).
51. Eyal, G. et al. Human cortical pyramidal neurons: from spines to spikes via models. Front. Cell. Neurosci. 12, 181 (2018).
52. Koch, C. & Zador, A. The function of dendritic spines: devices subserving biochemical rather than electrical compartmentalization. J. Neurosci. 13, 413-422 (1993).
53. Koch, C. Dendritic spines. In Biophysics of Computation (Oxford University Press, 1999).
54. Rapp, M., Yarom, Y. & Segev, I. The impact of parallel fiber background activity on the cable properties of cerebellar Purkinje cells. Neural Comput. 4, 518-533 (1992).
55. Hines, M. Efficient computation of branched nerve equations. Int. J. Bio-Med. Comput. 15, 69-76 (1984).
56. Nayebi, A. & Ganguli, S. Biologically inspired protection of deep networks from adversarial attacks. Preprint at https://arxiv.org/abs/1703.09202 (2017).
57. Goddard, N. H. & Hood, G. Large-scale simulation using parallel GENESIS. In The Book of GENESIS: Exploring Realistic Neural Models with the General Neural Simulation System (eds Bower, J. M. & Beeman, D.) 349-379 (Springer New York, 1998).
58. Migliore, M., Cannia, C., Lytton, W. W., Markram, H. & Hines, M. L. Parallel network simulations with NEURON. J. Comput. Neurosci. 21, 119-129 (2006).
59. Lytton, W. W. et al. Simulation neurotechnologies for advancing brain research: parallelizing large networks in NEURON. Neural Comput. 28, 2063-2090 (2016).
60. Valero-Lara, P. et al. cuHinesBatch: solving multiple Hines systems on GPUs Human Brain Project. In Proc. 2017 International Conference on Computational Science 566-575 (IEEE, 2017).
61. Akar, N. A. et al. Arbor—a morphologically-detailed neural network simulation library for contemporary high-performance computing architectures. In Proc. 27th Euromicro International Conference on Parallel, Distributed and Network-Based Processing (PDP) 274-282 (IEEE, 2019).
62. Ben-Shalom, R. et al. NeuroGPU: accelerating multi-compartment, biophysically detailed neuron simulations on GPUs. J. Neurosci. Methods 366, 109400 (2022).
63. Rempe, M. J. & Chopp, D. L. A predictor-corrector algorithm for reaction-diffusion equations associated with neural activity on branched structures. SIAM J. Sci. Comput. 28, 2139-2161 (2006).
64. Kozloski, J. & Wagner, J. An ultrascalable solution to large-scale neural tissue simulation. Front. Neuroinform. 5, 15 (2011).
65. Jayant, K. et al. Targeted intracellular voltage recordings from dendritic spines using quantum-dot-coated nanopipettes. Nat. Nanotechnol. 12, 335-342 (2017).
66. Palmer, L. M. & Stuart, G. J. Membrane potential changes in dendritic spines during action potentials and synaptic input. J. Neurosci. 29, 6897-6903 (2009).
67. Nishiyama, J. & Yasuda, R. Biochemical computation for spine structural plasticity. Neuron 87, 63-75 (2015).
68. Yuste, R. & Bonhoeffer, T. Morphological changes in dendritic spines associated with long-term synaptic plasticity. Annu. Rev. Neurosci. 24, 1071-1089 (2001).
69. Holtmaat, A. & Svoboda, K. Experience-dependent structural synaptic plasticity in the mammalian brain. Nat. Rev. Neurosci. 10, 647-658 (2009).
70. Caroni, P., Donato, F. & Muller, D. Structural plasticity upon learning: regulation and functions. Nat. Rev. Neurosci. 13, 478-490 (2012).
71. Keck, T. et al. Massive restructuring of neuronal circuits during functional reorganization of adult visual cortex. Nat. Neurosci. 11, 1162-1167 (2008).
72. Hofer, S. B., Mrsic-Flogel, T. D., Bonhoeffer, T. & Hübener, M. Experience leaves a lasting structural trace in cortical circuits. Nature 457, 313-317 (2009).
73. Trachtenberg, J. T. et al. Long-term in vivo imaging of experience-dependent synaptic plasticity in adult cortex. Nature 420, 788-794 (2002).
74. Marik, S. A., Yamahachi, H., McManus, J. N., Szabo, G. & Gilbert, C. D. Axonal dynamics of excitatory and inhibitory neurons in somatosensory cortex. PLoS Biol. 8, e1000395 (2010).
75. Xu, T. et al. Rapid formation and selective stabilization of synapses for enduring motor memories. Nature 462, 915-919 (2009).
76. Albarran, E., Raissi, A., Jáidar, O., Shatz, C. J. & Ding, J. B. Enhancing motor learning by increasing the stability of newly formed dendritic spines in the motor cortex. Neuron 109, 3298-3311 (2021).
77. Branco, T. & Häusser, M. Synaptic integration gradients in single cortical pyramidal cell dendrites. Neuron 69, 885-892 (2011).
78. Major, G., Larkum, M. E. & Schiller, J. Active properties of neocortical pyramidal neuron dendrites. Annu. Rev. Neurosci. 36, 1-24 (2013).
79. Gidon, A. et al. Dendritic action potentials and computation in human layer 2/3 cortical neurons. Science 367, 83-87 (2020).
80. Doron, M., Chindemi, G., Muller, E., Markram, H. & Segev, I. Timed synaptic inhibition shapes NMDA spikes, influencing local dendritic processing and global I/O properties of cortical neurons. Cell Rep. 21, 1550-1561 (2017).
81. Du, K. et al. Cell-type-specific inhibition of the dendritic plateau potential in striatal spiny projection neurons. Proc. Natl Acad. Sci. USA 114, E7612-E7621 (2017).
82. Smith, S. L., Smith, I. T., Branco, T. & Häusser, M. Dendritic spikes enhance stimulus selectivity in cortical neurons in vivo. Nature 503, 115-120 (2013).
83. Xu, N.-L. et al. Nonlinear dendritic integration of sensory and motor input during an active sensing task. Nature 492, 247-251 (2012).
84. Takahashi, N., Oertner, T. G., Hegemann, P. & Larkum, M. E. Active cortical dendrites modulate perception. Science 354, 1587-1590 (2016).
85. Sheffield, M. E. & Dombeck, D. A. Calcium transient prevalence across the dendritic arbour predicts place field properties. Nature 517, 200-204 (2015).
86. Markram, H. et al. Reconstruction and simulation of neocortical microcircuitry. Cell 163, 456-492 (2015).
87. Billeh, Y. N. et al. Systematic integration of structural and functional data into multi-scale models of mouse primary visual cortex. Neuron 106, 388-403 (2020).
88. Hjorth, J. J. J. et al. The microcircuits of striatum in silico. Proc. Natl Acad. Sci. USA 117, 9554-9565 (2020).
89. Guerguiev, J., Lillicrap, T. P. & Richards, B. A. Towards deep learning with segregated dendrites. eLife 6, e22901 (2017).
90. Iyer, A. et al. Avoiding catastrophe: active dendrites enable multi-task learning in dynamic environments. Front. Neurorobot. 16, 846219 (2022).
91. Jones, I. S. & Kording, K. P. Might a single neuron solve interesting machine learning problems through successive computations on its dendritic tree? Neural Comput. 33, 1554-1571 (2021).
92. Bird, A. D., Jedlicka, P. & Cuntz, H. Dendritic normalisation improves learning in sparsely connected artificial neural networks. PLoS Comput. Biol. 17, e1009202 (2021).
93. Goodfellow, I. J., Shlens, J. & Szegedy, C. Explaining and harnessing adversarial examples. In 3rd International Conference on Learning Representations (ICLR, 2015).
94. Papernot, N., McDaniel, P. & Goodfellow, I. Transferability in machine learning: from phenomena to black-box attacks using adversarial samples. Preprint at https://arxiv.org/abs/1605.07277 (2016).
95. Lecun, Y., Bottou, L., Bengio, Y. & Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 86, 2278-2324 (1998).
96. Xiao, H., Rasul, K. & Vollgraf, R. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. Preprint at http://arxiv.org/abs/1708.07747 (2017).
97. Bartunov, S. et al. Assessing the scalability of biologically-motivated deep learning algorithms and architectures. In Advances in Neural Information Processing Systems 31 (NeurIPS 2018) (NeurIPS, 2018).
98. Rauber, J., Brendel, W. & Bethge, M. Foolbox: a Python toolbox to benchmark the robustness of machine learning models. In Reliable Machine Learning in the Wild Workshop, 34th International Conference on Machine Learning (2017).
99. Rauber, J., Zimmermann, R., Bethge, M. & Brendel, W. Foolbox Native: fast adversarial attacks to benchmark the robustness of machine learning models in PyTorch, TensorFlow, and JAX. J. Open Source Softw. 5, 2607 (2020).
100. Paszke, A. et al. PyTorch: an imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32 (NeurIPS 2019) (NeurIPS, 2019).
101. He, K., Zhang, X., Ren, S. & Sun, J. Deep residual learning for image recognition. In Proc. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 770-778 (IEEE, 2016).
This article is licensed under a Creative Commons Attribution 4.0 International (CC BY 4.0) license.