> The technology is just gonna get better and better and better and better. And it's gonna get easier and easier, and more and more convenient, and more and more pleasurable to be alone with images on a screen, given to us by people who do not love us but want our money. Which is all right. In low doses, right? But if that's the basic main staple of your diet, you're gonna die. In a meaningful way, you're going to die.
>
> -- David Foster Wallace

How does the emergence of AI and technology impact human agency? How will it affect me? How will it affect my children? Their children? AI is shaking the foundation of what it means to be human, and these questions can feel difficult to start grappling with. Nonetheless, I think they are imperative, so let's try.

## Will human agency matter in the future?

In order to answer this question, we need to break it down into more atomic bits.

### First, what is agency?

1. It seems to be a feature of a type of intelligence -- something that requires a certain type or level of intelligence.
2. It seems to come in levels:
   1. A rock probably doesn't have agency.
   2. An ant might have a little agency, though it is not obvious how much.
   3. A chimpanzee seems much closer to a human, and we recognize humans as having the most agency.
3. It seems to be related to being able to "decide for yourself."
4. It is related to deferring short-term gains for long-term rewards.
5. It involves thinking something that is perhaps beyond your conditioned, reward-based, lived experience, and living according to that belief.
6. Formally, you might say agency is simply being able to be a good agent: being able to come up with good solutions to (partially observable) Markov Decision Processes.

> This raises the question: if we had a crystal ball and could predict the future, would agency be desirable?

### Types of agency

We have:

1. Human agency
2. AI agent agency

> As AI becomes more intelligent, its capacity for agency seems to be increasing.

Types of mattering:
1. **Economically:** it matters because it creates utility for me or others.
2. **Morally:** (individual) agency is fundamental to a universe valued by consciousness.
3. **Ethically:** I respect your right to agency.

---

### Predictions

I think we can, with relatively high confidence, predict a few things:

1. Agency will be shifting (along the dimensions outlined above).
2. The increasing prevalence of AI will continue to trend towards (barring interventions):
   1. Decreasing human agency by default -- better reward-system hacking and *wire-heading*.
   2. Decreased economic value of average human agency and greater economic inequality (AI is trending towards saturating human utility and is not uniformly available).

---

### Implications and Ponderings

So what does this all mean?

Imagine we had a god-level superintelligence that could simulate all possible lives and give us the best life possible. It would make things *just* spicy enough that we would never be able to detect it was pulling the strings. You would still have the *perception* of agency.

So if AI systems can reward-hack us -- if there is a superintelligence that creates an experience for us better than anything we could have created for ourselves, and it is able to largely predict our actions -- do we have agency? Do we want more agency than this?

A few more observations:

1. The dominant trend of increasingly powerful technology is towards decreasing human agency:
   1. Distribution of the internet.
   2. Better hooking into and hacking of reward systems, leading to *wire-heading*-like states.
2. Some interesting questions emerge from this.

### Optimistic Inversion

Ok, so we know a few things that will *probably* happen -- but one of the cool features of agency is that we can change things. So what are the good scenarios?

1. Agency doesn't matter morally -- ok, in this case we're pretty much good!
2. Let's say agency does matter.
   So what do we need? Maybe what matters is actually that we are able to navigate a path that is aligned with a coherent moral system. That is a kind of agency -- one that lets us feel like "we did the right thing." The ultimate goal is to have a set of optimal environments and rewards such that doing the right thing and receiving rewards are all aligned.

This is like aligning different kinds of intelligence. You could even think of a human as being multiple different forms of intelligence -- the lizard brain, etc. This could go into IFS (Internal Family Systems) as well.

Then I think there are also multi-agent systems -- do they increase or decrease our agency? What about having a family? Do we care? I think what we care about is that we basically have the ability to navigate within a bubble that is on the path of our creation of higher meaning.

So I think we need a few things: inner alignment, and for outer intelligence not to destroy our bubble -- which happens through resource contention.

As an intelligent being with a fixed bubble, we actually already seem to be doing this on some level: you have the subconscious mind, which seems to create the ego. The ego is like a sub-model, or a language, or something that has some kind of idealized experience. We create it for ourselves. We are the dreamer and the dream. We need inner alignment first.

---

Notes:

I am realizing that this doesn't emphasize enough how much agency is related to being able to act in unknown states, and that it doesn't account for LLMs being intelligent but having no agency. Agency relates to being able to operate within an environment that is changing or partially unsee-able (a stochastic environment with partial observability).

---

Environments affect agents a lot as well.
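The formal framing earlier -- agency as being able to come up with good solutions to (partially observable) Markov Decision Processes -- can be made concrete with a toy example. Below is a minimal sketch, not anything from the text above: the four-state corridor "world", the constants, and all the names are illustrative assumptions. Value iteration computes how valuable each state is in the long run, and the greedy policy is the "agentic" part: in every state, prefer the action with the best long-run value over the best immediate one.

```python
# Minimal sketch: "agency" as solving a tiny MDP via value iteration.
# The environment, constants, and names here are illustrative assumptions.

N_STATES = 4   # a 1x4 corridor: states 0..3
GOAL = 3       # the long-term reward sits at the far end
GAMMA = 0.9    # discount factor: how much the agent values future reward

def step(state: int, action: int) -> tuple[int, float]:
    """Deterministic transition: action 0 = left, 1 = right.
    Walking off an end keeps you in place; landing on GOAL pays 1.0."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0)

def q_value(state: int, action: int, values: list[float]) -> float:
    """Long-run return of taking `action` in `state`, given state values."""
    nxt, reward = step(state, action)
    return reward + GAMMA * values[nxt]

def value_iteration(iters: int = 100) -> list[float]:
    """Repeatedly apply the Bellman optimality backup until V stabilizes."""
    values = [0.0] * N_STATES
    for _ in range(iters):
        values = [max(q_value(s, a, values) for a in (0, 1))
                  for s in range(N_STATES)]
    return values

values = value_iteration()
# The "agentic" choice: pick the action with the best long-run value,
# not the best immediate reward.
policy = [max((0, 1), key=lambda a: q_value(s, a, values))
          for s in range(N_STATES)]
print(policy)  # -> [1, 1, 1, 1]: every state points right, toward the goal
```

Deferring short-term gains for long-term rewards shows up here as the discount factor: even in state 0, three steps from any payoff, the agent walks the whole corridor. The *partially observable* variant would hide the true state behind noisy observations -- exactly the "changing or partially unsee-able" environment flagged in the notes -- and solving it well is a much harder, more agency-like problem.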