
There is a Gulf between Science and Technology, But AI Could Bridge the Gap

by Shef Wang, September 8th, 2022

Too Long; Didn't Read

Discovering new science is really hard, and we are not even close to fully utilising the science we already have to build technology for a green tomorrow.



Science and Technology are often used interchangeably in everyday conversation. One could argue that this is because they are two sides of the same coin. Oversimplified, Science is the ‘reduction’ of nature to rules, and Technology is the ‘construction’ of nature from rules.


From Newton to the industrial revolutions to the emergence of computers and the Internet, paradigm-shifting technologies have always emerged quickly after the establishment of the corresponding science. These include everything we take for granted as part of modernity: internal combustion engines, light bulbs, and Wi-Fi, to name a few.


In the first half of the 1900s, quantum physics was established and was widely considered to be the ultimate science needed to explain everything — from the basic particles all the way up to life and galaxies. However, things didn’t go as planned. A century after Schrödinger wrote down the Nobel-winning equation that governs the movement of particles, we still don’t have a reliable way to convert this science into technologies — we still cannot rationally design a ligand for a specific disease (despite trillions of dollars in healthcare research); we still can’t design a viable lithium-metal battery (even though Sony commercialized the lithium-ion battery 30 years ago); and we still don’t have fusion energy (despite it being ’50 years away’ for more than 50 years).


Billions of dollars are spent on simulation software every year, which is supposed to help ‘construct’ nature from science; yet researchers still need to spend hundreds of billions more on wet labs to generate experimental results and ‘reduce’ them into patterns and rules. In areas such as drug design and materials science, research is still largely done by “trial and error”, not unlike the days of Thomas Edison a century ago.


Why the Gap?


In his 1972 article ‘More Is Different’, P. W. Anderson argues that finding the laws that govern nature is not sufficient for understanding nature. ‘Reductionism’ leads us to these laws, but that does not make the opposite path (‘constructionism’) trivial, or even feasible. Another Nobel laureate, Roald Hoffmann, echoed the same notion in his recent essay “Simulation vs. Understanding”.


As Paul Dirac famously said:

“The underlying physical laws necessary for the mathematical theory of a large part of physics and the whole of chemistry are thus completely known, and the difficulty is only that the exact application of these laws leads to equations much too complicated to be soluble.”


Even with powerful simplifications like the Kohn-Sham equations, solving the Schrödinger equation for a reality-relevant system (typically >100k atoms) has proven near-impossible, even on the largest supercomputers in the world.
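
For readers who want to see what “too complicated to be soluble” looks like, here is a minimal sketch in standard textbook notation (generic forms, not tied to any particular simulation code) of the many-body problem and the Kohn-Sham simplification:

```latex
% Many-body, time-independent Schrödinger equation: the wavefunction \Psi
% depends on the coordinates of all N electrons at once (a 3N-dimensional object).
\hat{H}\,\Psi(\mathbf{r}_1,\dots,\mathbf{r}_N) = E\,\Psi(\mathbf{r}_1,\dots,\mathbf{r}_N)

% Kohn-Sham reformulation: N coupled single-particle equations in ordinary
% 3D space, with the many-body complexity hidden inside v_{\mathrm{eff}}.
\left[-\frac{\hbar^2}{2m}\nabla^2 + v_{\mathrm{eff}}(\mathbf{r})\right]\phi_i(\mathbf{r})
  = \varepsilon_i\,\phi_i(\mathbf{r}), \qquad i = 1,\dots,N
```

The catch is that the effective potential depends on the orbitals themselves, so the equations must be solved self-consistently, and the cost grows steeply with the number of atoms.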


On the macro level, the pain is equally acute. Imagine if it took two hours to generate a forecast for the next hour’s weather. That forecast, however accurate, would be de facto useless. This, unfortunately, is still largely the reality today, despite the billions of dollars governments spend on large computational infrastructure.


“More is different” means that simply stacking up CPUs/GPUs will hit the wall of diminishing returns before yielding meaningful results.
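
A rough back-of-envelope illustration (the cubic exponent is the typical scaling of conventional Kohn-Sham DFT, not a benchmark of any specific code):

```latex
\mathrm{cost}(N) \;\propto\; N^{3}
\quad\Longrightarrow\quad
\frac{\mathrm{cost}(10^{5}\ \text{atoms})}{\mathrm{cost}(10^{3}\ \text{atoms})}
  \;\approx\; \left(\frac{10^{5}}{10^{3}}\right)^{3} \;=\; 10^{6}
```

A calculation that takes an hour at 1,000 atoms would take on the order of a century at 100,000 atoms on the same hardware, and buying 10x more GPUs claws back only one of those six orders of magnitude.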


In 2020, the Association for Computing Machinery awarded its prestigious Gordon Bell Prize to a team led by Princeton scientists and mathematicians who developed a way to address this very conundrum. The work started with the observation that the main challenge in quantum physics is efficiently solving high-dimensional equations, and that AI (machine learning) has, in recent years, demonstrated exactly this capability on similar challenges such as computer vision (lots of pixels -> high dimensions).


Hence, the team developed “Deep Potential Molecular Dynamics” (DeePMD), a framework that trains AI to learn and approximate the equations of quantum physics and predict the movement of particles at a scale of 100 million atoms (100x larger than the state of the art, and 10,000x faster). The team later expanded its capacity to 17 billion atoms.
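
The core idea can be sketched in a few lines of Python. This is a toy illustration of the Deep Potential concept — per-atom local-environment descriptors fed into a neural network that predicts per-atom energies, with forces obtained from the gradient — not the actual DeePMD-kit code; the descriptor here is deliberately simplified and all names are illustrative.

```python
import torch
import torch.nn as nn

class ToyDeepPotential(nn.Module):
    """Toy Deep-Potential-style model: per-atom descriptors -> per-atom
    energies -> total energy; forces come from the gradient w.r.t. positions."""

    def __init__(self, n_descriptors: int = 8, hidden: int = 32):
        super().__init__()
        self.n_descriptors = n_descriptors
        self.fitting_net = nn.Sequential(
            nn.Linear(n_descriptors, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),                  # per-atom energy contribution
        )

    def descriptors(self, positions: torch.Tensor) -> torch.Tensor:
        # Extremely simplified "local environment" descriptor: smooth functions
        # of inter-atomic distances (real DeePMD uses learned, symmetry-
        # preserving embeddings of each atom's neighbourhood).
        dist = torch.cdist(positions, positions)               # (N, N) distances
        dist = dist + torch.eye(len(positions)) * 1e6          # mask self-distances
        scales = torch.linspace(1.0, 4.0, self.n_descriptors)  # radial "shells"
        # Sum of Gaussians over neighbours for each radial scale -> (N, D)
        return torch.exp(-(dist.unsqueeze(-1) - scales) ** 2).sum(dim=1)

    def forward(self, positions: torch.Tensor):
        positions = positions.requires_grad_(True)
        energy = self.fitting_net(self.descriptors(positions)).sum()
        # Forces are minus the gradient of the total energy w.r.t. positions.
        forces = -torch.autograd.grad(energy, positions, create_graph=True)[0]
        return energy, forces

# Usage sketch: 5 atoms in 3D, predict energy and forces.
model = ToyDeepPotential()
coords = torch.rand(5, 3) * 3.0
energy, forces = model(coords)
print(energy.item(), forces.shape)  # scalar energy, (5, 3) force array
```

Once trained on a modest number of quantum-mechanical reference calculations, a model like this evaluates in roughly linear time with the number of atoms, which is what makes 100-million-atom molecular dynamics tractable.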


Since 2020, hundreds of papers using DeePMD have been published in Science, EES, PNAS and other top journals (see deepmodeling.org), across a wide variety of disciplines. Scientists from both computational and experimental backgrounds have embraced this new paradigm of research. This new AI has helped shed light on previously prohibitive areas, such as:


  • Particle physics, Earth science (high temperature, pressure, and other extreme conditions)

  • Protein folding, high-entropy alloys (complexity due to scale)

  • Battery solid-electrolyte interphase, catalysis (complexity due to interfaces)


These works might seem far removed from our daily lives, but they are in fact critical to building a prosperous, sustainable green future: they enable more efficient energy generation and storage, faster drug discovery, and more durable, environmentally friendly materials and products.


If we keep pushing for broader adoption of AI for Science, we might be able to build tomorrow’s technology with yesterday’s science.


*Originally published here*