Zooming in: Writing Sequence Number Two

“Another writing challenge!?” you ask. Fair enough. Maybe I’ll drop the word “challenge” to make it seem less grandiose. But it does seem that I deal better with work/research/thinking/writing sprints than with trying to produce a constant trickle of output. Even when I remembered that “write it down” is a good way of dealing with confusions and working out questions, most of my jumbly thoughts didn’t even make it into a short blog post but instead stayed colourful scribbles in my notebook, which are very easy to misinterpret when looking back after a while. In contrast, I’ve enjoyed looking back at last month’s series. Rereading the preliminary assessment post I wrote right at the end of that challenge, I would not only add that I like keeping my thoughts in such a nicely browsable format, but also that this is, indeed, a better way of clarifying my thoughts and staying motivated than anything else I’ve tried in the meantime (although it was fun to do some free-form exploration).

So, onward!

This time, I wanted a clearer theme – something to focus my attention on for maybe a week. (“I propose that a one-week, one-woman study be carried out to make progress on this issue” – I know, right). I felt the pull to delve deeper into some question, to maybe arrive at new insights instead of just ordering whatever thoughts I already had. At the same time, I had the urge to take a step back to get a better sense of underlying motivations and arguments for or against working on AI risk. Surely I can solve both AI risk questions and the question of whether to work on AI risk. Within a week, yep. Surely.

Here’s how I found a topic

I was about to write down how I got here for the benefit of those who are interested (given that I would certainly have been interested, one or two years ago, in getting an idea of how on earth other people arrive at their research questions); but then I thought it might be more fun to show instead of telling. Here is what flooded out of my head when I was like “um, where do I start?”

I particularly enjoy the prompt “What research areas are there, in general?”

That skeleton gave me a jumping-off point to figure out which questions provoked some sort of internal reaction (one that wasn’t “blah! Blah blah! Boring!”).

Going from there, I asked (because, of course, I felt drawn to the meta-questions):

Basically, at each step I was either like “What has to be true for this to be important?”, or “What would have to be true for it to not be important?”. Pretty basic, come to think of it (at least if you have spent time hanging around on LessWrong). This time, I dismissed those possibilities that seemed outside my value system (sure, I shouldn’t work on X-risk mitigation if existence was literally bad, but I’m not going to go that far meta and get stuck thinking about that, right?).

And maybe by this point you can guess that the question I want to delve into is:

How much can we do about the future?

Basically, assuming that existence is good and that I want future generations to exist, and finding it plausible that that’s overall more important than other problems in the world, I would still like to know if it is a thing that we, as collective humanity, actually have any power to influence.

Although I am aware of other people having thought about this, I don’t yet have a workable opinion of my own (which I notice from giving handwavey answers like “Surely thinking about it is better than not thinking about it, given that it’s so immensely important” when people ask me). And it recently occurred to me that thinking about things might not always be better than not thinking about them: not just because we should maybe think about other things instead (like how to make existing people happier), but also because thinking about a thing could lead us to do worse, once the time comes to act on it, than if we hadn’t thought about it at all. I’m not saying that this is the case, or even particularly probable, but I would kind of want to know if it were.

Wonderfully, this question is broad enough to allow me lots and lots of sub-questions and freedom to meander and orient myself. I think of them as follows:

  1. What can we know, and what do we actually know, about future events?
    1. How good were people in the past at predicting the future? (I expect to find that in hindsight, everyone thinks X was obvious, but that people at the time actually had no clue)
    2. What are our current methods for thinking about the future? Are they any good? (Or better for certain things than others, which I expect to be the case)
    3. Are there things that are literally unknowable? (And what does that mean?)
    4. Umm, something something cluelessness.
  2. To what extent are we able to cause good events to happen?
    1. Might our efforts prove counterproductive? (investigate the Chernobyl case, and maybe other disasters, and how people had tried to mitigate the risks beforehand)
    2. What are the practical implications of cluelessness concerns?
    3. Even if we know a whole lot about future risks, might their mitigation fail for weird political reasons? Has that happened before?

So, voilà, here’s my plan for the week. This time, I will try to spend ~2 hours per day reading and researching those questions, and ~1 hour the next day writing down my thoughts and putting them into a blog post. I expect to roughly go in order, but I reserve the right to pick whatever I feel like thinking about in the moment, and to completely change my mind on everything once I have read some more (there has to be some upside to writing just by and for myself).
