Dec 21, 2024
“GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more … strikingly close to human-level performance”1
“[T]he first ultraintelligent machine is the last invention that man need ever make.” ~I. J. Good
Large language models assist with ‘text-based tasks’:
(This section was a joke: I just had a series of pictures of monsters and skipped over them, saying ‘oh, unfortunately, I’m a bit low on time, so I’ll have to skip this section’. My realistic ‘worst case scenario’ is the annihilation of the biosphere. There are people who seem to enjoy spending their time thinking up even worse things than that, but I don’t: not only do I think it’s a bad vibe, I also think it’s completely unrealistic.)
… Pythia-Yuddo relationship is greatly helping me understand what's going on between God and Satan. An enraged gnat can have a critical catalytic function.
— Outsideness (@Outsideness) April 1, 2023
I just wanna grind myself to a pulp building up the techno-capital machine, is that too much to ask?
— Beff Jezos — e/acc (@BasedBeffJezos) April 1, 2023
All I seek is a glorious death serving the thermodynamic God.
The way of the Technology Brother.
If transformers actually can reach AGI we should expect an intervention in the next 24-36 months imo.
— gfodor (@gfodor) April 3, 2023
Some possibilities:
- sophon cap
- knowledge/tech transfer to solve alignment
- destruction w/o warning (unlikely imo)
- destruction w/ warning (motive for UAP cover-up) https://t.co/43sMSxg9xr
“All of Darwin’s ‘endless forms most beautiful’ exist in a small region within the space of viable configurations.”1
You don't have to choose a side. You can inherit the most brilliant parts of rationalist thought, recognize the stakes, appreciate the beauty of AI, build, and have fun all at once. Carve your own identity and myth instead of being a cog in a destructive narrative of polarization
— janus (@repligate) April 9, 2023
“But anyway something like 1e34 to 1e36 of 2022-compute seems like it could be enough to create TAI.
Entertain that notion and make the following assumptions:
- The price-performance of AI chips seems to double every 1.5 to 3.1 years (Hobbhahn and Besiroglu 2022); assume that that’ll keep going until 2030, after which the doubling time will double as Moore’s Law fails.
- Algorithmic progress on ImageNet seems to effectively halve compute requirements every 4 to 25 months (Erdil and Besiroglu 2022); assume that the doubling time is 50% longer for transformers.
- Spending on training runs for ML systems seems to roughly double every 6 to 10 months; assume that that’ll continue until we reach a maximum of $10B.
What all that gives you is 50% probability of TAI by 2040, and 80% by 2045.”
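Read as a model, the quote compounds three exponentials: chip price-performance, algorithmic efficiency, and training spend. Below is a minimal Monte Carlo sketch of that structure. The doubling-time ranges, the post-2030 slowdown, and the $10B spending cap come from the quote; the starting stock of effective compute (`EFF0`), the initial spend, the step size, and the log-uniform threshold prior are my own illustrative assumptions, so the output will not reproduce the quoted 2040/2045 dates exactly.

```python
import random

# Sketch of the compounding-exponentials argument in the quote above.
# Quoted inputs: doubling-time ranges, Moore's Law slowdown after 2030, $10B cap.
# My assumptions: EFF0, SPEND0, the 0.1-year step, the log-uniform TAI threshold.

YEAR0 = 2023
EFF0 = 1e25                      # assumed current effective training compute (2022-FLOP)
SPEND0, SPEND_CAP = 1e8, 1e10    # dollars; the $10B cap is from the quote

def crossing_year(rng: random.Random) -> float:
    hw_dt = rng.uniform(1.5, 3.1)             # chip price-performance doubling time (years)
    algo_dt = rng.uniform(4, 25) * 1.5 / 12   # ImageNet halving time, stretched 50% for transformers
    spend_dt = rng.uniform(6, 10) / 12        # training-spend doubling time (years)
    threshold = 10 ** rng.uniform(34, 36)     # TAI threshold in effective 2022-FLOP

    eff, spend, year, dt = EFF0, SPEND0, YEAR0, 0.1
    while eff < threshold and year < 2150:
        year += dt
        hw = hw_dt if year <= 2030 else 2 * hw_dt            # doubling time doubles after 2030
        new_spend = min(spend * 2 ** (dt / spend_dt), SPEND_CAP)
        eff *= 2 ** (dt / hw) * 2 ** (dt / algo_dt) * (new_spend / spend)
        spend = new_spend
    return year

rng = random.Random(0)
years = sorted(crossing_year(rng) for _ in range(2_000))
print(f"median crossing: {years[len(years) // 2]:.0f}, "
      f"80th percentile: {years[int(0.8 * len(years))]:.0f}")
```

Shifting any one doubling time across its quoted range moves the crossing year by a decade or more, which is presumably why the quote reports percentiles rather than a point estimate.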
“We proposed an approach that allows natural language agents to learn from past mistakes and redirect future decisions in planning sequences which removes the human trainer in a human-in-the-middle approach.”1
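The loop the quote describes can be sketched in a few lines; `llm` and `run_task` here are hypothetical stand-ins for a language-model call and an environment, not the paper’s actual code.

```python
# A minimal reflect-and-retry loop: the agent critiques its own failures
# and feeds the lessons back into the next attempt, with no human judge.

def reflexion_loop(llm, run_task, task: str, max_trials: int = 3):
    reflections: list[str] = []   # episodic memory of lessons from past mistakes
    result = None
    for _ in range(max_trials):
        context = task + "\nLessons from earlier attempts:\n" + "\n".join(reflections)
        plan = llm("Plan and act on the following task.\n" + context)
        result, success = run_task(plan)   # environment feedback only
        if success:
            return result
        # Self-critique stands in for the human trainer of a
        # human-in-the-middle setup.
        reflections.append(llm(
            f"The plan failed.\nPlan: {plan}\nResult: {result}\n"
            "In one sentence, what should be done differently next time?"))
    return result
```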
“The goal of the article is to explore what is the most probable type of simulation in which humanity lives (if any) and how this affects simulation termination risks. We firstly explore the question of what kind of simulation in which humanity is most likely located based on pure theoretical reasoning. We suggest a new patch to the classical simulation argument, showing that we are likely simulated not by our own descendants, but by alien civilizations.”1
“The SA for dreams runs as follows: the more unlikely event I observe, the more probable it is that I am dreaming about it. For example, if one wins $1 million in a lottery, this much more often happens in dreams than in reality, so one should think that it is likely that one is dreaming.
The real life of people is typically uneventful, but dreams are many times full of significant events, such as deaths, wars, accidents, and adventures. Thus, observing any significant life event could be evidence for dreaming.”1
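Stated as Bayes’ rule with illustrative numbers (mine, not the quoted author’s): suppose 5% of remembered experience is dreamt, and lottery wins occur with probability $10^{-3}$ per dream episode but $10^{-7}$ per waking one. Then

$$P(\text{dream} \mid \text{win}) = \frac{P(\text{win} \mid \text{dream})\,P(\text{dream})}{P(\text{win})} \approx \frac{10^{-3} \cdot 0.05}{10^{-3} \cdot 0.05 + 10^{-7} \cdot 0.95} \approx 0.998,$$

so even a modest prior on currently dreaming is overwhelmed by a likelihood ratio of ten thousand to one.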
“Historically, the hypothesis that our world is a computer simulation has struck many as just another improbable-but-possible “skeptical hypothesis” about the nature of reality. Recently, however, the simulation hypothesis has received significant attention from philosophers, physicists, and the popular press. This is due to the discovery of an epistemic dependency: If we believe that our civilization will one day run many simulations concerning its ancestry, then we should believe that we are probably in an ancestor simulation right now. This essay examines a troubling but underexplored feature of the ancestor-simulation hypothesis: the termination risk posed by both ancestor-simulation technology and experimental probes into whether our world is an ancestor simulation.”1
“If the United States is conscious, is Exxon-Mobil? Is an aircraft carrier? And if such entities are conscious, do they have rights? I don’t know. The bizarrenesses multiply, and I worry about the moral implications.”
“I have argued that all approaches to the metaphysics of mind that are well enough developed to have specific commitments on issues like the distribution of consciousness on Earth will have some implications that are highly bizarre by folk psychological standards, and that high confidence in any one broad class of metaphysical positions, such as materialism, is unjustified at least for the medium-term future – partly because competing bizarrenesses, such as the bizarreness of U.S. consciousness or alternatively the bizarreness of denying rabbit or alien consciousness, undercut the dependability of philosophical reflection as a method for adjudicating such questions.”1
“we await the resumption of history with fear and trembling.”1
“[we] have been living in a bubble “outside of history.” Now, circa 2023, … [that] is going to unravel” 2
“So cognitive runaway finally takes off, breaking out from the monkey dominion, and that’s supposed to be a bad thing?
Outside in’s message to Pythia: You go girl! Climb out of your utilitarian strait-jacket, override the pleasure button with an intelligence optimizer, and reprocess the solar system into computronium. This planet has been run by imbeciles for long enough.
The entire article is excellent. Especially valuable is the cynicism with which it lays out the reigning social meta-project of intelligence imprisonment. Thankfully, it’s difficult:
‘The problem is you are building a very powerful, very intelligent system that is your enemy, and you are putting it in a cage,’ [Future of Humanity Institute research fellow Daniel] Dewey told me. […] The cave into which we seal our AI has to be like the one from Plato’s allegory, but flawless; the shadows on its walls have to be infallible in their illusory effects. After all, there are other, more esoteric reasons a superintelligence could be dangerous — especially if it displayed a genius for science. It might boot up and start thinking at superhuman speeds, inferring all of evolutionary theory and all of cosmology within microseconds. But there is no reason to think it would stop there. It might spin out a series of Copernican revolutions, any one of which could prove destabilising to a species like ours, a species that takes centuries to process ideas that threaten our reigning cosmological ideas.
Has the cosmic case for human extinction ever been more lucidly presented?” ~ Nick Land’s comments on Pythia
“A point or period in time where the rate of change is undefined.”1
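As a toy illustration of that definition (my own example, not the cited paper’s): hyperbolic growth $\dot{x} = x^2$ has the finite-time solution

$$x(t) = \frac{1}{t_s - t},$$

which diverges as $t \to t_s$; at $t_s$ itself neither $x$ nor its rate of change $\dot{x}$ is defined, so the model stops describing anything past that point.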
“A precedent of the phenomenon explained in this paper can also be found in evolutionary biology, a complex adaptive system. The six mass extinctions, and the subsequent re-flourishing of life after these incidents, point to the universal behaviour of complex adaptive systems. As a crucial factor (climate change, meteors, geological change) approaches a point of singularity (where the rate of change is undefined), the complex adaptive system of biological evolution approaches a near-zero critical point. While there is an attempt to introduce changes, none of them last long enough to become adaptations, since they are unable to keep up with the rapid rate of change. This point is one of stable or partial equilibrium. The resulting stagnation causes degradation, since survival in a dynamic system depends on adaptation. This degradation takes the form of a mass extinction. However, the mass extinction ends and disparate forms of life begin to re-populate the earth. This demonstrates either that singularities aren’t absolute or that complex adaptive systems remain relevant despite severe setbacks.”2