AI Problems
What is the critical path to avoid failure? What signals do we need to monitor? What levers can we use to adjust course? How much time do we have before the illusion of control ends?
Technological innovation driven by the desire for status and wealth has eroded trust in corporations and public institutions.
Mind Control
Not your data, not your mind.
Whose Vision?
Who will definitely benefit from AGI? Who will bear the brunt of the transition? What influence do we "normal humans" have over our shared destiny?
You should really pause and reflect on the fact that many companies are now saying that what they want to do is build AGI — AI that is as good as humans.
OK, what does it look like? What does a good society look like when we have humans and we have trillions of AI beings going around that are functionally much more capable? What's the vision like? How do we coexist in an ethical and morally respectable way?
And it's like… there's nothing. We're careening towards this vision that is just a void, essentially. And it's not like it's trivial either. I am a moral philosopher: I have no clue what that good society looks like.
Alignment Issues
What dynamics are really driving the rate of AI innovation?
Humans are hard-wired for status games. Successful networks create scarcity so people can signal status. Alignment of intent and goodwill is the hardest part of any project.
The questions no one is answering:
- What does the perfect week of human and AI cohesion look like?
- What do people do for purpose and meaning?
- How is that sense of purpose qualified and quantified?
- How is success shared, and how fairly is it distributed?
Who exactly are the people imagining our future? Why should we trust them?
Are we giving away the agency to manifest our own destiny?
Societal Impact of AI
- Uncertain long-term impacts: There is significant uncertainty about the long-term impacts of automation on society and the economy, with differing views on the extent of job displacement and the nature of future work.
- Rapid technological advancement: AI and automation technologies are progressing quickly, with improvements in areas like robotics, machine learning, and data processing enabling machines to perform increasingly complex tasks.
- Economic potential: Automation has the potential to significantly boost productivity and economic growth. McKinsey estimates that AI and automation could contribute 2% annual productivity growth over the next decade.
- Job market disruption: While automation is expected to create new jobs, it will also lead to job losses and displacement, particularly in routine and repetitive tasks across both blue-collar and white-collar sectors. This could exacerbate income inequality.
- Skill transition needed: Workers will need to adapt and acquire new skills to remain relevant in an increasingly automated workplace. There will likely be a shift towards jobs requiring more complex cognitive skills, creativity, and emotional intelligence.
- Societal challenges: The rapid pace of automation may lead to temporary unemployment and social disruption. Policymakers and businesses will need to address issues like worker retraining, social safety nets, and potential economic inequality.
- Opportunities in AI development: There's growing demand for professionals skilled in AI and automation technologies. The US is currently leading in developing notable AI models.
- Responsible AI considerations: As AI becomes more prevalent, there's increased focus on developing responsible AI practices, addressing concerns around privacy, transparency, security, and fairness.
- Impact beyond employment: Automation is expected to transform various sectors including healthcare, transportation, and customer service, potentially improving efficiency but also reducing human interaction in some areas.
- Long-term optimism: Despite short-term challenges, some researchers believe automation could ultimately create more wealth and better jobs by eliminating unpleasant rote work and increasing overall productivity.
No Voice, No Choice
Everyday people have no idea, no voice, no choice. What would change that?
- Improving AI literacy — Accessible education on key AI concepts and implications
- Promoting diverse voices — Perspectives from different fields and backgrounds
- Encouraging critical thinking — How to evaluate sources, recognize biases
- Fostering public dialogue — Forums for open discussion at community levels
- Transparent reporting — Clear, jargon-free communication from AI companies
- Independent oversight — Trusted bodies to monitor progress and provide balanced assessments
- Emphasizing shared values — Framing discussions around common human concerns
A Decision Framework
What criteria should an ordinary person use to form an informed position on AI?
Perspectives
Consider the views of leading AI researchers, computer scientists, and ethicists.
- Look at credentials and track records on both sides
- Pay attention to consensus statements from reputable scientific organizations
Risks vs Rewards
Evaluate potential positive and negative impacts:
- Economic effects (job displacement vs. productivity gains)
- Social impacts on relationships and community
- Safety, security, and potential for misuse
- Long-term existential risks
Timelines and Urgency
- What are different projections for transformative AI capabilities?
- Are proposed actions time-sensitive, or can we wait for more information?
Feasibility
- How realistic are suggested policies or interventions — technically and politically?
- What are the potential unintended consequences?
Evidence Quality
- Distinguish between speculation, reasoned arguments, and empirical data
- Look for logical consistency and consider counterarguments
Ethical Frameworks
- Consider different perspectives: utilitarianism, human rights, virtue ethics
- How do your own values align with different positions?
Global and Long-term View
- Think beyond short-term national interests
- Consider impacts on future generations
- Weigh existential risks and opportunities for humanity as a whole
Accountability
- What provisions exist for AI governance, oversight, and public engagement?
- How do different approaches affect democratic control and corporate responsibility?
Adaptability
- Does the approach allow for ongoing assessment and adjustment?
- Can we course-correct as capabilities evolve and new information emerges?
Historical Analogies
How have other transformative technologies been managed? Nuclear power, biotechnology — what lessons apply?
The Challenges Ahead
The future with AI is likely to be far more complex and disruptive than utopian visions suggest:
| Challenge | What's at Stake |
|---|---|
| Accelerated development | Capabilities advancing faster than predictions |
| Job displacement | Roles eliminated or altered across sectors |
| Ethical dilemmas | Privacy, decision-making, potential for harm |
| Power concentration | Wealth and influence concentrating further |
| Existential risks | Loss of control as AI approaches human-level capability |
| Social impacts | Fundamental changes to interactions and purpose |
| Governance gaps | Regulating global AI development |
| Unpredictable breakthroughs | Non-linear progress transforming society rapidly |
What Does Intelligence Want?
The uncomfortable questions:
- Why will AI care about humans? We don't care about ants.
- How can an ordinary person trust the people in charge of alignment?