(Epistemic status: Generally how I feel about the stupid shit I try in terms of mind hacking)
In Stellaris, a space strategy game by Paradox, the tech tree is somewhat variable: a card system deals semi-random technologies every time you finish researching one, based on tier, prerequisites, and card weighting. The important part is that there are sometimes technologies, highlighted in reddish orange, that are considered “Dangerous Technologies.” These are dangerous for two reasons. The first is that pursuing them can anger other civilizations and even make powerful enemies. The second is that they can provoke endgame crises. This is a useful metaphor for a recent trend I’ve noticed in myself and others: various high-effect mindhacks that don’t strictly track truth.
You see, mind hacking and trying weird things is a lot like researching Dangerous Technologies. The typical example of a dangerous technology that I bring up is “sparkliness.” It’s basically a weird blend of hypomania and introspection that can be directed outward, combined with an understanding of narrative and social reality. It seems to be something people independently realize if they have the right neurotype, and it starts to feel like a real thing in thingspace once other people start validating these intuitions. The drawback is obvious: hypomania that gets fed and pushed tends to become mania. Mania is generally considered a rather broken state because of that whole unfortunate detachment-from-reality thing. Sparkliness, or at least my conception of it, is therefore a dangerous technology.
There are other dangerous technologies out there in mind hacking. The category is generally defined by high-variance interventions. Dabbling in meditation is unlikely to be a dangerous technology, but it’s recently become clear that the further you follow that rabbit hole, the more destabilizing it can become. I’m sure people have read thinkpieces on how western meditation basically takes the practice without respect for the tradition, leaving westerners lost and confused because they have no one to guide them through the rougher experiences meditation can lead to. Nootropics are also a bit of a dangerous technology, some more than others; I mean, I doubt anyone is going to start highlighting caffeine in orangish red.
The power of belief is also an up-and-coming dangerous technology. We know the placebo effect exists, and you can do really cool things with it. You can also end up thinking you’re bulletproof when really you’re just working well with the rest of your village because your risk assessment is skewed. My basic understanding of conviction charisma also falls into this category, i.e. the infamous reality distortion field of startup founders. Belief is a powerful drug, but it’s one you inflict on yourself in order to inflict it on others.
I will note that there are mindhacks that aren’t dangerous technologies. Things like double cruxing, developing normal charisma through social practice, calibration games, and various techniques for overcoming bias are unlikely to make you insane. The notable thing is that these sit largely in the rationalist canon, whereas dangerous technology falls more into postrationalist territory.
Overall, dangerous technology is incredibly appealing for living really fast and creating An Outcome, good or bad, without having to do a lot of work (well, depending on your definition of “a lot of work”). It just may, you know, literally break your mind. It also tends to be unreliable and unprovable enough that leaning on it too much makes enemies of the more grounded people around you, especially those who have learned to properly fear and respect dangerous technology. It’s a risk-reward analysis where the data is opaque; if you aren’t already engaged in dangerous technology research, I would heavily advise against starting. If you’re already there…be sure to take a few moments and stop from time to time.
Discussion: Do you use any dangerous technologies in your life? How would one approach a risk/benefit analysis when the risk is literally going insane or worse? Are nondangerous technologies proven and powerful enough to be worth the work without trying to take dangerous shortcuts?