#HackGenY Keynote

by David E. Weekly, January 24th, 2015
Keynote speech given to kick off Hack Generation Y on January 24, 2015

Welcome to Hack Generation Y!

But wait: what do we mean when we say “hack”? What’s going on here?

We mean to build new logical systems that go beyond what has been built before. That means more than quickly writing a fart-sound application. We’re counting on you to surprise us, to build things that weren’t anticipated.

And surprises are a good thing to optimize for, because they imply that you had one set of beliefs about the world and observed something contrary to those beliefs. They’re fantastic opportunities for learning. If life has become very unsurprising and boring, you either aren’t paying very much attention to it or have such vague beliefs about the world that your beliefs aren’t meaningfully predictive. I mean, heck, we still find gravity surprising — we find space probes in different places than they ought to be.

Good technology lends itself to surprises as well, largely in its ability to reprogram us.

Take writing. Writing is a technology. Your brain has portions devoted to language, Broca’s and Wernicke’s areas, but reading and writing were invented technologies. Your brain is plastic enough that it’s capable of quickly and systematically deciphering different symbols or glyphs and compositing them together into sounds, even if the system for doing so makes very little sense, like English, instead of something sensible like Spanish or, better yet, Korean. You can learn to transcribe Korean in about twenty minutes. You won’t have any idea what you’re saying, but you’ll write it roughly correctly.

Anyhow, writing allowed thoughts — any thoughts — to be preserved across space and time. You had no idea who would be directly impacted by your thoughts — they could come centuries after you were decomposing in the ground and completing the nitrogen cycle. Beowulf and Gilgamesh strove to live extraordinary lives in order to be remembered by the bards who passed legends on by word of mouth, but once a culture included writing, it was authors like Plato and Aristophanes who could truly live forever.

If we can say that writing allowed the preservation of a thought across space and time, then code similarly allows the preservation of a verb. Code performs a symbolic manipulation: there are some inputs to your program, and as a result your program produces some outputs. The program is a tool that performs an action. It is a verb.

If you have the right sort of program, it can take inputs from surprising sources. If its outputs are interesting, they may be consumed by other programs in turn. This concept is the basis for the Unix pipe: flowing data from one bit of code to the next. Unix allows these small pieces of code to be loosely joined through the casual convention of standard output formatting, much in the same way that the daemons of the Internet communicate via protocols that largely inherit this casual and reusable syntax.
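To make that “small pieces, loosely joined” idea concrete, here is a minimal sketch in Python of a filter written in the Unix style: it reads plain text on standard input and writes tab-separated word counts on standard output, so any other program can consume its output in turn. The script name and the sample pipeline are illustrative, not something from the talk.

```python
# wordfreq.py - an illustrative Unix-style filter: text in on stdin,
# "count<TAB>word" lines out on stdout, ready to be piped elsewhere.
import sys
from collections import Counter

def main() -> None:
    counts = Counter(word for line in sys.stdin for word in line.split())
    for word, count in counts.most_common():
        sys.stdout.write(f"{count}\t{word}\n")

if __name__ == "__main__":
    main()
```

Saved as wordfreq.py, it could sit in a pipeline such as cat speech.txt | python wordfreq.py | head, taking cat’s output and feeding head’s input, exactly the loose joining the pipe was designed for.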

If you want your ideas to live forever, write. If you want to be able to perform actions forever, code.

Here’s an even edgier thought: you are helping build the Borg. What you are contributing to is a combined intelligence. When you search Google, you’re not just asking a computer some questions; the answers were almost all written by humans. But those answers would be useless without a computer to guide you to the right ones. So much of what you do online today is consulting with a lovely synthesis of machine and human intelligence. The Singularity is already here — the Internet is smarter than any one of us, almost by definition. But it’s not humans versus computers, any more than we could describe the cells in your body as waging war against the mitochondria that power them. Instead, the two are already inextricably linked. Let’s be honest — how many people out there today could survive in the raw wild if they were tossed into it, naked, with no implements? We are already beings with a codependence on technology. Our tools master us even as we master them. The two cannot be disentangled.

Pierre Teilhard de Chardin and Vladimir Vernadsky saw this coming a hundred years ago with the rise of telegraphy: they foresaw the emergence of a global superintelligence, a sphere of thought or “noosphere” that could be studied as a separate layer of Earth, just as the biosphere can be studied as related to but distinct from the geosphere. And that’s from a Jesuit paleontologist and a Russian mineralogist!

Vannevar Bush saw much of this coming seventy years ago when, right after World War II, he penned “As We May Think”, which discusses the implications of having tools that powerfully enhance the mind available at all times, and of extending the species beyond the capabilities of an individual. Yeah, that’s basically Google Glass he sketched out there. In 1945.

But Bush missed a critical theme the noosphere emphasizes: the power of interconnected thought. Consider each of us as neurons within a larger brain. We take in some inputs, perform rational and creative work upon them, and produce some outputs. The more and broader the inputs you take in, the more precise and refined your processing, and the broader your distribution of outputs, the more critical a role you play within the brain that is our species.

I’ve been starting to look into this field called “Deep Learning”, and I strongly recommend that you do too, because it’s changing how we think about computing altogether. We’ve built a new kind of processing that works very differently from the classic “if this, then that” Von Neumann architecture; instead, it forms its own sense of how best to compute a good set of outputs given some inputs and a training set of data with known correct outputs. Lots of individually trainable neurons participate in multiple layers of cognition — it’s the depth of these layers that gives “Deep Learning” its name.

And the field has been advancing very quickly in the past few years with the introduction of convolutional neural networks capable of reasoning with surprising intuition about photographs and speech. You can hand a picture of a deer to a computer that has never seen a deer before and it will say “well, I see there an eye and there an eye and there a mouth and some legs, and based on that I would reason that there is a tall mammal with thin legs in the picture, and in the background I see a lot of things that are tall and green and so are probably trees, so I would reason I’m looking at the kind of tall thin mammal that is in a forest, and based on some reading I’ve done about the world, that’s probably a deer.” You can hand it another picture and it will say “Those are two elephants on a plain with a red Land Rover behind them” in plain English. The advances in the past few years have been just outstanding.

It’s a new way for computers to reason, and it’s very powerful. It’s clear that these Deep Learning networks can solve a surprisingly vast array of problems; you know how Google’s speech recognition is quite good? That’s a Deep Learning network at work right there. Getting smarter every day, too. You’re helping train it every time you talk to Google! These kinds of systems are better programmed on GPGPUs than CPUs; they will need new kinds of programmers writing new kinds of code that runs not on a handful of cores at once but on thousands. The good news about computers is that things are always moving so fast — you won’t get bored! But watch out, or before you know it some kids in high school will be coding circles around you. ;)
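To give a feel for what “layers of individually trainable neurons” means in practice, here is a toy sketch in Python (using numpy) of a two-layer network learning the XOR function from four labeled examples. It is an illustration of the idea only; the convolutional networks described above have far more layers, millions of weights, and run on GPUs.

```python
# A toy network, for illustration only: two layers of trainable "synapses"
# learning the XOR function from four labeled examples.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # known correct outputs

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # first layer of weights
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # second layer of weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):
    h = sigmoid(X @ W1 + b1)      # hidden-layer activations
    out = sigmoid(h @ W2 + b2)    # the network's current guess
    # Backpropagate the error and nudge every weight a little.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))  # approaches [[0.], [1.], [1.], [0.]]
```

The network is never told the rule for XOR; it is only shown inputs with known correct outputs and adjusts its weights until its guesses match, which is the training-set idea described above in miniature.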

I recently invested in a company called Chematria that is using these kinds of deep learning techniques to find new, safe drugs to combat diseases without a known cure, like Ebola. Their algorithm basically goes off and reads all the relevant scientific research ever done on any chemical and then tries fitting what it learned to the specifics of a given problem. If you do this right, you find a safe, known chemical that just so happens to have the kind of shape that blocks the mechanical action by which Ebola attaches to your cells. Boom, you just used an algorithm to find a cure for a disease. And this is just one application of deep learning.

One of the neat things I came across in researching Deep Learning networks was the idea that you can roughly judge the complexity of a system by the number of synapses it has, even more so than by how many neuron-equivalents it has. A mouse has about 71 million neurons but a hundred billion synapses. The human mind has a very large number of neurons — about 100 billion, or a thousand times as many as a mouse — but even more astonishingly, each neuron has as many as 15,000 synapse connections. The net result is that a three-year-old has a QUADRILLION synapses. That’s insane. A quadrillion. The best Deep Learning machines out there have the equivalent of about a billion, or as many synapses as a honeybee. We’ve got a ways to go before you can upload your brain to the cloud and have it process just like you do. ☺
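As a quick back-of-the-envelope check of those figures (they are the talk’s rough estimates, not precise neuroscience):

```python
# Rough orders of magnitude quoted above.
mouse_neurons = 71e6            # ~71 million neurons
mouse_synapses = 1e11           # ~100 billion synapses
human_neurons = 100e9           # ~100 billion neurons
synapses_per_neuron = 15_000    # up to ~15,000 connections per neuron
deep_net_synapses = 1e9         # ~a billion weights in the best current networks

print(human_neurons / mouse_neurons)        # ~1,400x the neurons of a mouse
print(human_neurons * synapses_per_neuron)  # ~1.5e15: on the order of a quadrillion synapses
print(human_neurons * synapses_per_neuron / deep_net_synapses)  # ~a million-fold gap to close
```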

But beyond making digital versions of your consciousness, there are some really big problems that are unsolved. If the power of a neural network is proportional to the number of synapses, perhaps we can draw a parallel: the power of the Internet scales rapidly with the number of people connected to it. After all, an Internet with just yourself is not a very exciting place to be. With just you and one friend, it’s a point-to-point medium, not that much more interesting than just talking. With three people, now things get a lot more interesting — you can talk to one friend, the other friend, or both friends at once. And the same goes for them. The interestingness of the network scales as the number of subgroups that can be formed — Reed’s Law, O(2^N) — which grows even faster than the number of distinct possible pairings (N^2), or Metcalfe’s Law.
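For a concrete sense of how much faster Reed’s Law grows than Metcalfe’s, here is a small illustrative calculation, counting subgroups of two or more people for Reed’s Law and distinct pairs for Metcalfe’s:

```python
# Metcalfe's Law: value ~ number of distinct pairs, N*(N-1)/2.
# Reed's Law: value ~ number of possible subgroups of 2+ people, 2**N - N - 1.

def metcalfe(n: int) -> int:
    return n * (n - 1) // 2

def reed(n: int) -> int:
    return 2**n - n - 1

for n in (2, 3, 5, 10, 20):
    print(f"N={n:2d}  pairs={metcalfe(n):8d}  subgroups={reed(n):8d}")
```

With three people there are 3 pairs but 4 possible groups; by twenty people the possible subgroups already outnumber the pairs by more than five thousand to one, which is the intuition behind what follows.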

If you really believe this, you’ll understand a moral mandate to get the whole world online. Every brain out there having interesting thoughts that go unheard, unable to build on the work of others already out there, is a waste, and we are all radically less intelligent for that person not being able to participate. So we need to get everyone online to realize the noosphere and to be able to solve really difficult problems with a synthesis of human and machine intelligence.

This is why I’m at Google and what I’m working on: finding ways to accelerate the pace at which we’re getting everyone online. Only about three billion people today are connected to the Internet, out of the 7.2 billion people who are out there. A lot of the things that you might guess are big barriers, like the Internet costing way too much money or simply not being available where people are, turn out not to be the biggest issues.

A lot of what we’re dealing with is more complex. In India, only one woman goes online for every three men who connect; there’s a bigger gap there than can be explained by economics or education alone. Are you sure you want to be perceived by your community as the sort of woman who goes online? That’s a question some have to deal with, and it can be challenging to answer in the affirmative with overly intrusive uncles, brothers, fathers, and cousins. Sri Lanka has some of the cheapest cellular retail data rates in the world, and yet only a fifth of the population is connected — it’s not strictly a matter of affordability, it’s a matter of whether the Internet is seen as “worth it.” In a lot of these places, the Internet seems like a frivolity or a luxury; it’s not clear what people will get out of it.

Or they don’t speak English. The Internet is a lot more exciting a place in English — you have this vast corpus to search when you go to Google; there are billions of videos to check out, millions of songs and applications for your phone, and a keyboard, QWERTY, that is about the same pretty much everywhere. Now, good luck if you have a language with its own custom script. The problem is not literacy per se — the species has made tremendous progress on literacy in the past decade or so, and there are well under a billion people who are illiterate today — the problem is the Internet’s poor support of languages other than English. Now, on the one hand, you can go and teach everyone English. That absolutely works, but I also think it’s kind of imperialistic and arrogant; it would be better if we could build tools that help people capture and share great content in any language.

This all is to give you a hint of the kind of problems we’re up against at Google working on Internet access. You need to build the infrastructure, doing crazy things like coating the entire stratosphere in balloons that speak Internet, but you also need to make sure that if you build it, people will come, which is to say making sure that you are building the kind of Internet that everyone wants.

And this gets to the nugget of what good product management is: understanding what people want, sometimes even better than they can express it themselves. Henry Ford may have once said, “If I had delivered what people had asked me for, I would have given them a faster horse.” If you just build what people tell you to build, you will tack lots of random features together and end up with a monstrosity nobody wants to use. But you can’t make a great product in an ivory tower, either. To build something great, you need to discover the common thread behind many people’s pain. That requires “getting out of the building” and talking with people to see what the problems are. The problems a person faces are always legitimate; you can’t tell someone they weren’t annoyed by something! But you can draw your own conclusions about how that situation could have been avoided or allayed.

A good product should make a person feel great. They should be able to understand clearly, in their own words, what the product does and quickly get to a “WHOA!” moment that makes them feel your product is something special. You should be able to practically measure that time-to-whoa. If it’s not clear what in your product will really blow people away or surprise people in a good way, you may need to rethink what you’re doing. Be bold. Make your users feel great and they will love you and spread your product in return.

So as you think about your hacks and what you want to build this weekend, I’d love for you all to ask how your tool will further empower others, extend human capability, and do something nigh impossible. Make your users into superheroes and you will do very well in this world.

I’m looking forward to seeing what you make.
