Author’s note: I have been thinking a lot since I wrote this article. I deliberately made it very aggressive because I wanted people to talk about it and to pay attention. But some of the aggression went too far and is not aligned with my values.
I want there to be a great future for those of us who (like myself) want to become posthumans. I want to encourage all humans to explore enhancing their health, intelligence and productivity. There is a real risk of being left behind if you do not do that. I also want all of humanity to share in an amazing, grand future, whether they choose to be trans/posthumans or not.
If we do this right, we will have essentially limitless resources so everyone can benefit. Human and posthuman grand futures are compatible.
To that end I edited the article and removed some of the language I feel does not reflect how I see the world. To be clear I am not in any way going back on my aggressive beliefs or goals. I just realized that I was wrong to think that these goals must be in opposition to the goals of others. There is plenty of awesome future for everyone.
I will write another article focused on this topic later on.
In my previous articles, I talked a lot about how to use biohacking to become more productive, healthier, and more intelligent.
Editor’s Note: This story contains some R-rated approaches to goal-hacking. We published it because we want readers to be informed of what’s actually happening in the technology industry. Proceed at your own risk.
But wait, those articles left a very important question unexplored: what to invest that productivity and intelligence in. And why?
This is an article on how I set goals and prioritize them. It is not a statement of “my goals are awesome and better than other goals.” Although that is obviously how I feel.
Read it as a personal story that you may find inspiring. Or not.
Beyond this, I think prioritization doesn’t matter. When I do deep work (you can read about how to maximize deep work in my previous article), I just pick whatever goal I find most exciting at any given moment. This makes deep work easier.
Here’s my long-term vision:
To build one of the platform companies that give us The Singularity. To help make us immortal posthuman gods that cast off the limits of our biology, and spread across the Universe. To be a leader in this transformation. To ensure that my values (freedom and pursuit of knowledge) lead the future. To personally spend that infinite future in an infinite pursuit of more knowledge and more freedom.
I arrived at this vision over a long time, as a combination of (1) fundamental philosophy, (2) logic and math, and (3) seeing that holding this vision makes me emotionally happy.
Let me walk you through this.
I grew up on science fiction, in a family of scientists and engineers that held knowledge and curiosity as the supreme values.
So to me knowledge + freedom to pursue it are more fundamental than other moral values.
As in “I deeply care about the right of sentient organisms to pursue knowledge/truth and be free. I don’t care about sentient organisms ending up equal. I strongly believe inequality will persist.”
This is a philosophical and moral view. I can come up with lots of smart-sounding “rational” explanations for it. E.g. “life at core is about information processing and local entropy minimization, so information is the supreme value of life.” Or “equality is unstable because evolution demands an endless iterative cycle where some variant of a system becomes stronger than others at something, and propagates until the next phase transition.”
But really this is just an axiom that I have. Most human rationalizations about why their moral value is THE correct one are just post-hoc confirmation bias.
Some people will read this and say things like “you are not moral at all, morality must include equality/religion/patriotism etc.” But the fact is, moral preferences are purely subjective. And change with time, culture and lots of other things.
Thinking that one of us, a trivial collection of molecules bumping against each other, has discovered an unshakable fundamental moral law of the universe seems naive.
Find your own axioms. Every branch of science is based on axioms. Nothing to be embarrassed about.
Some other time I will write a long separate post explaining why I believe superintelligence and immortality are coming soon. But here is a quick, off-topic overview of my reasoning.
We will be able to connect artificial neurons to the brain; this is already an engineering problem rather than a science problem.
The brain is plastic and will integrate these extra neurons; this will allow for an additional cloud neocortex plus additional physical resources (sensors, bodies, satellite backups).
These physical and mental resources can be part of one cohesive organism, because signals in electronics travel at a large fraction of lightspeed, versus roughly 120 m/s in myelinated neurons. Organism cohesiveness depends on latency, because low latency enables coordinated decision-making without splintering into autonomous separate agents.
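To put rough numbers on the latency point (a back-of-envelope sketch; the distances and speeds below are illustrative assumptions, not measurements):

```python
# Rough one-way latency comparison (illustrative numbers only).
C = 3.0e8          # speed of light in vacuum, m/s
FIBER = 0.66 * C   # typical signal speed in optical fiber, roughly 2/3 c
NEURON = 120.0     # fast myelinated axon, m/s

def one_way_ms(distance_m: float, speed_m_s: float) -> float:
    """One-way signal latency in milliseconds."""
    return distance_m / speed_m_s * 1000

print(f"across a brain (~0.15 m), neurons:     {one_way_ms(0.15, NEURON):.2f} ms")      # ~1.25 ms
print(f"across a continent (~4,000 km), fiber: {one_way_ms(4_000_000, FIBER):.1f} ms")  # ~20 ms
```

Fiber across a continent lands in the same ballpark as neurons across a single skull, which is why a datacenter-scale organism could still plausibly act as one agent.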
This organism will retain its original identity and consciousness if additional resources are added and integrated gradually.
Organisms like these will be truly immortal, superintelligent posthuman “gods” compared to humans today. I am not talking “strong Einstein robot.” I am talking “something that lives in a hundred datacenters, is backed up on satellites, is connected to billions of sensors, and has more brainpower than all humans who have ever lived combined.”
This is usually where people start saying that there will be legislation or pitchforks to stop this. Game-theoretically, stopping The Singularity is impossible.
The core reason has to do with the Prisoner’s Dilemma. The value of joining the coming posthuman revolution is too high: you get immortality and superintelligence for yourself, your children, and the people you love. Why would anyone give this up?
In order to stop The Singularity, society would need to become a totalitarian global dictatorship, in the face of overwhelming incentives to the contrary for each individual. And this would need to happen within the next 30–100 years.
Also, if society appoints people to try to ban it, the effort will most likely look like a badly coordinated group of people who do not understand what is going on. And those of us pursuing The Singularity will simply move to other countries.
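To make the incentive structure concrete, here is a toy payoff model (the numbers are completely invented; what matters is only the structure, where pursuing dominates abstaining no matter what everyone else does):

```python
# Toy model of the enforcement problem. Payoff numbers are made up;
# the point is that "pursue" beats "abstain" in every column, which is
# what makes a voluntary ban unstable without total global coercion.
payoffs = {
    # (my_choice, everyone_else): my_payoff
    ("pursue",  "pursue"):   10,  # I keep pace with the transition
    ("pursue",  "abstain"):  20,  # I get there first
    ("abstain", "pursue"):  -10,  # I am left behind
    ("abstain", "abstain"):   0,  # the status quo holds
}

for others in ("pursue", "abstain"):
    best = max(("pursue", "abstain"), key=lambda me: payoffs[(me, others)])
    print(f"if everyone else chooses {others}, my best response is {best}")
# Both lines print "pursue": a dominant strategy, so the only stable
# outcome absent enforced global coordination is universal pursuit.
```

(Strictly speaking, this payoff structure is a dominant-strategy game rather than a textbook Prisoner’s Dilemma, but the coordination-failure logic is the same.)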
I think the only scenario in which immortality and superintelligence do NOT happen this century is if humanity destroys itself. I do think there is a high chance of human extinction this century, and this is one of the reasons I think a human-controlled Singularity needs to be rushed as soon as possible.
I really hope this whole transition will be peaceful. If I reach my goals, I will be happy to share with others, as long as I don’t feel threatened by them. The Universe is obviously big enough for everyone. But if humans act to prevent posthumans from appearing, there will be a lot of violence from all sides.
I will write a much more detailed article about this in the future; aligning human and posthuman interests feels like a very interesting topic.
Anyway, back to goal-setting:
So from a mathematical viewpoint it is senseless for me to invest in anything other than my long-term vision.
Let’s look at two scenarios: (1) I fail (2) I succeed.
(1) If I fail, I will have spent my life in pursuit of something that has deep meaning to me. My long-term goal is the reason I get out of bed in the morning, my true north, my ikigai. And the meaning will stay until the very end. If it looks like I am too old and my goal is still too far off, I will just cryopreserve myself. Maybe I will be revived later, maybe not. But the point is that my pursuit will never be hopeless or disappointing. Even at the moment my conscious experience is paused or stopped, I will see a material probability of success.
(2) If I succeed, then becoming immortal and superintelligent will just be a small first step on an endless staircase up. My goal isn’t even immortality or superintelligence, it is pursuit of knowledge and freedom. And no matter how powerful we become, the universe will offer new interesting challenges. 500 years from now I want to be building a supercollider the size of the Solar System to model black holes. Or figuring out how to build a cohesive interstellar civilization given lightspeed communication latencies. Interesting things, with millions of years of exploration horizon ahead. Life will never be boring. Because my true goals are infinite.
That last bit is important to happiness. Many people set finite, achievable goals as their meaning of life. Make money. Buy house. Have children. And once they reach these goals, they are disappointed. The dopaminergic system adapts, as it is programmed to do.
The only way to be truly fulfilled is to have infinite goals that can be pursued forever, and to find this infinite pursuit fulfilling in itself.
So this is a win-win. Whether I fail or succeed, the process of being passionate about an infinite future vision is intensely emotionally satisfying.
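Here is the win-win logic as a toy expected-value calculation (the utility numbers are placeholders I made up purely to show the shape of the argument):

```python
# Toy expected-value model of an infinite goal. The key assumption is
# that the pursuit itself carries positive utility independent of the
# outcome, so the expected value is positive at ANY success probability.
def expected_value(p_success: float,
                   u_success: float = 1000.0,  # payoff if the vision works out
                   u_pursuit: float = 10.0     # value of the pursuit itself
                   ) -> float:
    return u_pursuit + p_success * u_success

for p in (0.0, 0.01, 0.5):
    print(f"p = {p:4.2f}: EV = {expected_value(p):7.1f}")
# Even at p = 0.00 the EV is +10: the meaning derived from the pursuit
# is collected whether or not the goal is ever reached.
```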
This whole post is just an illustration of how I found mine. I don’t know exactly how you can find yours. But I would start where I did: find your own axioms and build from there.
It is a long process and your vision could be very different from mine. But I don’t think a life without a long-term vision is worth living.
Also: if your vision is similar to the vision that a lot of people have (e.g. the aforementioned “make money, buy house, have children”), then you are probably just copying someone else.
My initial vision as a kid was “become rich.” It wasn’t mine; it was just planted there by the surrounding environment. I was fortunate enough to have made a lot of money when I was 22 (by selling my equity in one of the companies I started). I bought a Ferrari and a Bentley; lived between Mayfair in London and Sparrow Hills in Moscow; chugged Cristal at Jimmyz in Monaco and Tramp in London. It was pretty awesome for several months. And then it became a crushing disappointment. I was surrounded by people I did not respect, doing stuff I did not enjoy.
Because “become rich” in and of itself is a really stupid long-term vision, held by a lot of really stupid people. And it is really easy to catch it from them.
Your vision really needs to be yours. A contrarian vision is a hint that it really is yours. Because it has not been planted there by other people.
Don’t spend your life as a fucking sheep.
It is obvious that to achieve my initial 50-year-long step, I need to do certain things every day.
But how? I mean, I want to be one of the immortal posthuman gods. How do I do something today to advance that? Should I be speccing out functional digital neurons instead of writing this post?
The next step is simple.
Let’s start with an example of how I think. I want to be one of the posthuman gods of 2070. For that, I need to build one of the platform companies driving the transition. For that, I need resources and allies. For that, I need to communicate my ideas persuasively. For that, I need to write this post, right now.
And there we have it. A paragraph of text needs to be written right now. That paragraph enhances my probability of becoming one of the posthuman gods of 2070.
I think this way on a daily basis. Run through scenarios. Execute on actionable steps. Repeat.
The process is to (1) work backwards from the long-term vision (2) focus on instrumental goals (more below on what these are) (3) obsessively look for actionable steps at every timescale.
Obvious. You have your long-term vision. Keep asking “what Z bottlenecks my long-term vision?” and then “what Y bottlenecks Z?” And keep going: there is always a bottleneck.
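Here is that loop as a minimal code sketch (the example chain is hypothetical, invented for illustration; it is not my actual plan):

```python
# Walk backwards from the long-term vision by repeatedly asking
# "what bottlenecks this?" until something actionable today remains.
# The chain below is a made-up example.
bottleneck_of = {
    "long-term vision": "building an influential platform company",
    "building an influential platform company": "capital and a network of allies",
    "capital and a network of allies": "a reputation for clear thinking",
    "a reputation for clear thinking": "publishing one well-argued post today",
}

def chain_to_action(goal: str) -> list[str]:
    """Follow bottlenecks until no further one is known."""
    chain = [goal]
    while chain[-1] in bottleneck_of:
        chain.append(bottleneck_of[chain[-1]])
    return chain

for step in chain_to_action("long-term vision"):
    print("->", step)
# The last line printed is the thing to do right now; everything
# above it is the reason why it matters.
```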
The challenge is that this is a very speculative process. The world has ~infinite degrees of freedom. So the future is hard to predict.
This is where instrumental goals come in.
Instrumental goals are a simple and incredibly powerful idea, first proposed by AI theorists, that has clear applications for daily decision-making.
These are goals that are likely to help all long-term visions and all paths to them, no matter what the details are. Health, resources, focus, survival, intelligence, and a network of allies are all instrumental goals.
The power of instrumental goals is that they are extremely predictable and don’t change much over time.
Going back to my example: it is highly likely that the tech path to BCIs will be different from what I think. It is highly unlikely that persuasiveness, health, or wealth will turn out not to be useful.
Thus it is all but certain that raising testosterone or optimizing sleep advances my long-term vision. Some people think I biohack because I am afraid of death. I’m not. I think immortality is a boring foregone conclusion. The real reason I biohack is its instrumental value for my long-term vision.
The categories I consider instrumental to ~any long-term vision are the ones above: health, resources, focus, survival, intelligence, and a network of allies.
Obvious. Take your path-to-long-term-vision + your instrumental goals, and obsessively look for concrete actions to advance these goals at every timescale. When an opportunity to advance arises, jump on it.
I like using a simple framework for this: take each goal and look for actions that advance it at every timescale, from today out to the coming decades.
Note that I do not separate health, career, friends, etc. They are all instrumental to my long-term vision, and all are critical. Many people see work and life as separate. This is an illusion.
So, by now: you have a long-term vision, instrumental goals that serve it, and concrete actions at every timescale.
The last bit to prioritization is “keep removing everything that does not advance the long-term vision.”
That’s the general idea. The best way to explain this section is to share examples.
As always my mental Saul Goodman says “do not do illegal things outside of international waters.”
This is not an exhaustive list. But removing these made me vastly more effective at pursuing what actually matters.
I decided to remove this section. I originally included it just to be provocative. It doesn’t carry much real value in terms of the article. And it was an experiment in saying something unpopular that ultimately does not really reflect my values.
As a kid I played a lot of computer games, mostly RPGs and strategy (turn-based and RTS). I’m guessing 150 hours a month, comparable to a full-time job, for years, during a period of formative neural development. There’s no way this didn’t have an impact on how I turned out. So, some theoretical speculation:
1. I find the concept of heavily investing in long-term self-improvement (moving up the tech tree, gaining levels, powergaming by optimizing everything) extremely obvious and appealing. Powergaming is the way to win when difficulty is high.
I powergame real life.
2. I think there is always a way to win. You just have to find the right dialog option (which is always there). This is an extremely valuable trait, because most of the time persistence pays, rules bend, reality is negotiable, optimism is contagious, etc.
3. I suspect that managing huge complexity (e.g. Civ 2–3) at a very young age has significant benefits for intellectual development.
This is all speculation of course. No way to do a proper experiment. But I find thinking about this fascinating.
You can read about this in my other posts. I don’t want to repeat myself, but it includes things like deep conversations with friends, learning, writing, public speaking, meditating, sports, etc.
I freed my life of bullshit. And reinvested more into all of these.
Once again, I am not saying that you should have the same goal that I have. I am just sharing how I arrived at my goals. And a bunch of examples of how I think.
Thanks for reading. Comment/share if you found it interesting.