impossiblewizardry: (Default)

Nick Bostrom:

It is possible that efforts to contemplate some risk area—say, existential risk—will do more harm than good. One might suppose that thinking about a topic should be entirely harmless, but this is not necessarily so. If one gets a good idea, one will be tempted to share it; and in so doing one might create an information hazard. Still, one likes to believe that, on balance, investigations into existential risks and most other risk areas will tend to reduce rather than increase the risks of their subject matter.

"one likes to believe" is not a reasoned argument. Arguably this author's writing was part of the chain of events leading to the creation of OpenAI, which Yudkowsky says "trashed humanity's chance of survival" (though he doesn't mention Bostrom as a cause)

I have no particular reason to believe that's true. In any case, although some pieces of writing have a large impact, it's hard to know what it will be in advance.

Jessica Taylor's post about her experience at CFAR talks about the intersection of unverifiable claims of future negative impacts and ordinary power, like how you don't want your boss or your friends to be mad at you.

I was discouraged from writing a blog post estimating when AI would be developed, on the basis that a real conversation about this topic among rationalists would cause AI to come sooner, which would be more dangerous (the blog post in question would have been similar to the AI forecasting work I did later, here and here; judge for yourself how dangerous this is). This made it hard to talk about the silencing dynamic; if you don’t have the freedom to speak about the institution and limits of freedom of speech, then you don’t have freedom of speech.
(Is it a surprise that, after over a year in an environment where I was encouraged to think seriously about the possibility that simple actions such as writing blog posts about AI forecasting could destroy the world, I would develop the belief that I could destroy everything through subtle mental movements that manipulate people?)

Can only speculate whether the motivation for telling Taylor not to write the blog post was really the one given, but there's reason to suspect. The idea is just that someone might have ordinary petty reasons not to want you to do something, and they're in a position to make pronouncements on long-term consequences, so they say the consequences are bad. For a blatant example see Justinian's decree against homosexuality, which cites Sodom and Gomorrah and says of course we don't want that to happen here; of course that's not the real reason he disapproves of homosexuality.

The important thing here is that there's really no way to like, prove that it won't cause god to destroy the city? I mean... well, now we generally think that kind of thing doesn't happen. But with like, "this blog post could lead to unfriendly AI", it's like, yeah, sure, that could happen. But like... we're multiplying a very small number by a very large number. Probability this blog post makes the difference times badness of disaster. Anyone with experience in numerical computing can tell you there's an issue there. It leaves plenty of room for superstition, and for the accepted answer to just be social consensus, which someone in power has the power to shape.
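
To put toy numbers on it (these are invented by me, not anyone's actual estimates): when the probability factor is tiny and only known to within several orders of magnitude, the product inherits all of that uncertainty.

```python
# Expected harm = (probability this post makes the difference) * (badness of disaster).
# All numbers here are made up for illustration.
badness = 1e15  # hypothetical badness of the disaster, arbitrary units

# Three equally hand-wavy estimates of the tiny probability,
# spanning ten orders of magnitude:
for p in (1e-20, 1e-15, 1e-10):
    print(f"p = {p:.0e}  ->  expected harm = {p * badness:.0e}")
# The "answer" ranges from negligible to enormous depending entirely on a
# factor nobody can actually measure.
```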

impossiblewizardry: (Default)
  • Evolution by mutation and natural selection
  • Quantum mechanics
  • Relativity

When you picture the origins of these theories it's Darwin making sketches of birds or Einstein scribbling equations.

We've done a lot since then, like the human genome project and the standard model, and the character of the research seems to have changed. Big projects where each person plays a small part, expensive equipment like particle accelerators and space telescopes, and computers.

I want to argue though that the "one person with a notebook" type research never really stopped, not just because people had to do that to explain particle accelerator results, but because people kept making scientific advances that in principle could have been made in the early 20th century.

  • Density functional theory, originating in the 1960s, Nobel Prize 1998. It's the basis of the computer programs used to get wavefunctions for large systems. But it's also just a convenient mathematical reformulation of quantum mechanics. For example you can derive a theoretical basis for something like Pauling's electronegativity scale. I imagine Pauling would have loved that, though idk of any comment from him.
  • The coalescent process, originating in the 1980s. It's what's used in computer programs for data-based inference of stuff like how long ago a SNP originated or whether it was selected for. But it's also just an extremely convenient way to derive some of the basic evolution math stuff. Fisher would have loved it had he lived to see it.
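
A toy sketch of the kind of convenience I mean (my example, with made-up numbers): in the coalescent, two lineages in a Wright-Fisher population of N haploid individuals pick the same parent in any given generation with probability 1/N, so the time to their common ancestor is geometric with mean N, and you can recover basic quantities by a one-line argument or by simulation.

```python
import random

random.seed(0)
N = 500  # haploid population size (made up)

def pairwise_coalescence_time():
    """Generations back until two lineages pick the same parent."""
    t = 0
    while True:
        t += 1
        if random.random() < 1 / N:  # coalescence probability per generation
            return t

times = [pairwise_coalescence_time() for _ in range(20000)]
mean_time = sum(times) / len(times)
print(mean_time)  # close to N, the textbook coalescent result
```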

These are mathematical reformulations of existing theories, which turned out to be very useful for computational work, but which are also of independent value. It seems to me they could have been thought of at any time during the 20th century, but in fact were thought of pretty late. Basically, when Fisher, Einstein, Heisenberg, etc. were working out the foundations of the new 20th century theories, they didn't go as far as they could have with their pencils and paper. There was still plenty of worthwhile pencil-and-paper work left to do throughout the 20th century. And maybe even up to today, I don't know.

Oh, I'd add a third bullet point related to general relativity, if I knew anything about it. But there must be something. There's been an explosion of GR practice to explain all the weird stuff we see with modern telescopes, and it's hard for me to believe it has much resemblance to Einstein's GR practice.

I do think that these mathematical reformulations are events of comparable insight and impact to the original discoveries of the theories. They're why we can actually do stuff with the theories. I don't think we would care nearly so much about quantum mechanics if it didn't turn out that it could be applied.

impossiblewizardry: (Default)

a regression model is not a generative model

a regression model, plus an estimated probability distribution for the features, is a generative model

But that's a generative model of (feature vector, response) pairs. If the features are generated from some underlying object, it's not a generative model of that object...

... unless you can invert the featurization process and find an object with those features
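
A minimal numpy sketch of the point (all parameters invented): the regression alone only gives you p(y | x); bolt on a fitted feature distribution p(x) and you can sample (feature, response) pairs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fitted regression model p(y | x): y ~ Normal(w*x + b, sigma^2)
w, b, sigma = 2.0, 1.0, 0.5

# Hypothetical estimated feature distribution p(x): x ~ Normal(mu_x, sd_x^2).
# This is the extra ingredient that makes the pair generative.
mu_x, sd_x = 3.0, 1.5

def sample_pair():
    x = rng.normal(mu_x, sd_x)        # draw features from p(x)
    y = rng.normal(w * x + b, sigma)  # then the response from p(y | x)
    return x, y

pairs = [sample_pair() for _ in range(1000)]
```

Without the p(x) piece there is nowhere to start the sampling; with it, you have a generative model of (feature, response) pairs, but still not of whatever upstream object the features were computed from.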

impossiblewizardry: (Default)

some things it's weird to me to call a "theorem", like the Hellmann-Feynman theorem or the Rao-Blackwell theorem. Rao-Blackwell's proof is just one application of Jensen's inequality. It's not a mathematical achievement, a unit of math that should be remembered so we don't have to go through the proof every time. Rather it's an expository achievement. After Blackwell chose to explain the concept of a sufficient statistic in this way, there's been no going back: that's how it's always explained. (not the only time Blackwell did this!)
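
For concreteness, here's the whole Jensen step as I'd write it (my notation): for a convex loss $L$, an estimator $\delta(X)$, and a sufficient statistic $T$, set $\delta^*(T) = \mathbb{E}[\delta(X) \mid T]$. Then

```latex
% conditional Jensen's inequality, since L(\theta, \cdot) is convex:
L(\theta, \delta^*(T)) = L\big(\theta, \mathbb{E}[\delta(X) \mid T]\big)
                       \le \mathbb{E}\big[L(\theta, \delta(X)) \mid T\big]
% taking expectations of both sides:
\mathbb{E}\big[L(\theta, \delta^*(T))\big] \le \mathbb{E}\big[L(\theta, \delta(X))\big]
```

Sufficiency is only needed so that $\delta^*$ is a genuine estimator, i.e. doesn't depend on $\theta$.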

impossiblewizardry: (Default)

i got a visceral lesson in the power of compression that you kids won't get, cause i remember when i first got a CD player that could decode mp3's, and started burning CDs with >100 songs on them, thinking, like, what the fuck.... like the correspondence between CD and album had been so firm for me, and I saw it shattered and never forgot

impossiblewizardry: (Default)

ok rereading the (second) transformation party chapter now that i have context.

Ashley met Elliot when she saw him transforming into a girl. So now I have some context for why this is important to Elliot... his ex Sarah had been a bit weirded out by that, and I think she said in the first transformation party that she didnt want their first kiss to be when theyre gender switched. But to Ashley this is part of the appeal, and their first kiss is when Elliot is gender switched.

Ok, cool, Elliot has a girlfriend who's really into transformation magic. So how come their third date is just like... normal? When they met it was exciting and supernatural, so how come now Ashley doesn't want to talk about magic?

so... ashley realizes that her fantasies are not things she'd want to happen in real life. Like her transformation fic includes like, sexy noncon transformation.

first of all, that means she feels she cant be open about the subject with elliot--she doesnt really want to share this side of herself with someone she really likes but doesnt know. So the most exciting side of her relationship to elliot to her, is also a subject she's avoiding.

Second, between date 2 and date 3, ashley found out she's gonna be a wizard. She's going to be able to do these transformation spells, while at the same time feeling like she's the last person who should be trusted with them.

It's complicated and elliot doesnt get it at all, he just knows ashley has magic on her mind but doesnt want to talk about it with him, and so he gets tedd to have the party so she'll have other people to talk to.

ok i think thats it... theres so much backstory to everything in this comic... a lot of effort to load context

impossiblewizardry: (Default)

questionable content robots are a fantasy about a better life for humans, whether or not they were intended that way. Like, May having a shitty robot body and spending all her money on repairs is basically just someone trapped in poverty by a chronic health condition. And she has a gofundme. Except it permanently solved the problem--she buys a new body, which works (she's healthy) (and also she's like two feet taller and has huge boobs cause she wanted that too). Like this is a vision of how much better things could be, a reminder of how bad they are now

impossiblewizardry: (Default)

scientism: metaphysical debates are stupid because as someone with an engineering degree i know the right answers

positivism: metaphysical debates are stupid because the questions are meaningless

if you translate all arguments into social conflicts, they can look the same

impossiblewizardry: (Default)

fermat's last theorem and irrationality of ζ(3) were both important in the 90s... yes the ζ(3) proof was 70s but bear with me

they're both unsolved problems from the beginning of number theory. Not really comparable because fermats last theorem was pursued much harder, and because the proof was deeper and resulted in widely applicable theory.

BUT... here's where the 90s come in... the 90s is when the hard part of the ζ(3) proof was automated. Part (all?) of this was Zeilberger's algorithm, which you may have used without realizing it in a computer algebra system. A different kind of generality.
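
You can poke at this machinery yourself: sympy ships Gosper's algorithm, the indefinite-hypergeometric-summation core that Zeilberger's creative-telescoping algorithm builds on (examples are mine and trivial, nothing to do with the ζ(3) proof):

```python
from sympy import symbols, simplify
from sympy.concrete.gosper import gosper_sum

k, n = symbols('k n', integer=True, nonnegative=True)

# closed forms found purely by symbol manipulation:
s = gosper_sum(k, (k, 0, n))      # sum of 0..n
g = gosper_sum(2**k, (k, 0, n))   # geometric sum

assert simplify(s - n * (n + 1) / 2) == 0
assert simplify(g - (2**(n + 1) - 1)) == 0
```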

and zeilberger pushed this politically, politicized judgments of what's "real math". You may have heard, its tragic what you learned in your math classes, just symbol manipulation and procedures, whereas real math is deep and about ideas and there's barely any numbers in the formulas. No, said zeilberger, the symbol manipulation is the real math, and its cool and shouldnt be disparaged.

i kind of wonder, how much more successful would this political move have been, had it not been for the fermats last theorem proof? Would we just take for granted, ah the increasing abstraction of 20th century math didnt connect with real problems, it turns out we should all obsess over symbol manipulation procedures and achieve generality with symbolic computation algorithms rather than with theorems? Idk probably this was just one of many things going on, i mean i feel like penrose recently getting the nobel prize is another big confirmation of the value of super abstract 20th century math although tbh idk what im talking about cause idk much about penroses work

impossiblewizardry: (Default)

Francis Crick - What Mad Pursuit:

Biologists must constantly keep in mind that what they see was not designed, but rather evolved. It might be thought, therefore, that evolutionary arguments would play a large part in guiding biological research, but this is far from the case. It is difficult enough to study what is happening now. To try to figure out exactly what happened in evolution is even more difficult. Thus evolutionary arguments can usefully be used as hints to suggest possible lines of research, but it is highly dangerous to trust them too much. It is all too easy to make mistaken inferences unless the process involved is already very well understood.

impossiblewizardry: (Default)

basically, if you have some evolutionary reason for expecting something, ask yourself:

  • what do you know about the selection pressures during the time the trait was evolving? Do you know what would have been selected for, and how strongly? If it's something related to hunting, do you know how the species hunted at that time? If it's something related to mating, do you know how the species' mating rituals worked at that time? Not just a vague idea, but well enough to know what makes the difference in success vs failure, and what is negligible compared to other factors

  • what do you know about the available genetic variation at that time? Are you sure alleles with the effect you're thinking of actually existed? Or were more common than alleles with different effects that responded to the same selection pressure?

Garbage in garbage out.

impossiblewizardry: (Default)

my own theory is that the discomfort of thinking is mostly a response to the danger of not attending to the environment, and it's reduced by quiet, being alone or with people you trust, not multitasking, not having anything urgent coming up, not feeling like there's someone who hates you, a lack of clutter and well-cleared paths for walking and moving (if you move your arm you wont bump your elbow etc), and anything else that gives you reason to believe its safe to ignore your environment

impossiblewizardry: (Default)

Look this is spoilers for the movie oblivion, but it's a silly movie anyway...

the girl REPLACES HER HUSBAND WITH A CLONE TWICE

like her husband is just... replaceable! There's a lot of him! When something bad happens she gets a new one!

ok here's another one. In El Goonish Shive, Nanase and Elliot break up. Later, Nanase realizes she's a lesbian, and though she really liked Elliot she was never really physically attracted to him. Then she gets together with Elliot's female copy, Ellen. Isn't that convenient!

So many problems you can solve by copying people

impossiblewizardry: (Default)
I finally get the intuition behind Bohr's derivation of the Rydberg constant. Not his original one; the one that I like, which he published a few years later.

It makes sense if I think about, what is the Rydberg constant, and what is Planck's constant.

Planck's constant tells us the scale where the world is quantum. In an oscillating electric field (such as being exposed to light) with frequency ν, electrons won't continuously gain energy as you'd expect classically, but will instead randomly acquire a quantum of energy h ν. To get some idea of what this implies physically: the way Millikan did his measurement of h was by illuminating a sample, and measuring the kinetic energy of the excited electrons that shot out of the sample: h is the slope of their kinetic energy as a function of ν.

The Rydberg constant, R, also tells us the scale where the world is quantum, but specifically for the hydrogen atom. You can define it in multiple ways, but I'm defining it as the Rydberg energy. It appears in Rydberg's law, which long predates quantum mechanics, but with quantum mechanics in mind you can interpret it as the work required to ionize hydrogen from its ground state: R = 13.6 eV.

Our goal is to figure out the relationship between them. We at least know that R has to be a decreasing function of h. Classically (h=0), R should be infinity, because the potential well of a hydrogen atom is infinitely deep. Since h>0, R is finite. So it'd make sense for h to be in the denominator of R, so that the h→0 limit gives you infinite R. h is a measure of how quantum the world is. R is a measure of how non-quantum the hydrogen atom is.

The way we find this relationship is to consider a high-n (high energy level) limit of the hydrogen atom. The great thing about the hydrogen atom is that the high-n limit is classical: the spacing of energy levels approaches 0. This is not the case for example with the harmonic oscillator, where the energy levels are always equally spaced. In the classical high-n limit, the electron will be in an elliptical orbit around the proton, following Kepler's laws.

The key to relating R and h is the frequency. In the classical limit, two different frequencies have to be equal:

  • The frequency of the electromagnetic radiation absorbed or emitted
  • The frequency of rotation of the electron

Both depend on n and R through Rydberg's law. The radiation frequency of course is given by Rydberg's law (specifically, consider a transition between adjacent energy levels, with large n). And the frequency of rotation depends on the total energy of the system, which you can get by interpreting Rydberg's law.

They both depend on n in the same way, so n will cancel.

HOWEVER. ONLY the frequency of radiation has h in it (through change in energy = hν). The frequency of rotation is just from classical mechanics and h is not involved, just R.

By setting these equal to each other, you're equating a quantity depending on R, with a quantity depending on both h and R. So you're deriving a relationship between our two measures of quantum-ness, h and R.

And that's how you solve for R, the Rydberg constant, in terms of h and a bunch of constants related to protons, electrons, and electromagnetism. You're deriving how quantum the hydrogen atom is, from how quantum the world is, plus some facts about the hydrogen atom.

THAT'S how Bohr's derivation of the Rydberg constant really works, set out clearly in his later derivation, but obscured in his original derivation by a bunch of trivial algebra converting between the 1/n² formula for the energy levels and other equivalent specifications of the energy levels.
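
To check that this story actually closes numerically, here's a sketch in plain Python (constants rounded from standard SI values; the orbital-frequency formula is the generic circular Coulomb orbit result, my choice of presentation rather than anything from Bohr's papers):

```python
import math

# SI constants (rounded)
h    = 6.62607015e-34   # Planck constant, J*s
m_e  = 9.1093837e-31    # electron mass, kg
e    = 1.60217663e-19   # elementary charge, C
eps0 = 8.8541878e-12    # vacuum permittivity, F/m
k_C  = 1 / (4 * math.pi * eps0)  # Coulomb constant

# Bohr's answer: R = m e^4 / (8 eps0^2 h^2)
R = m_e * e**4 / (8 * eps0**2 * h**2)
print(R / e)  # ~13.6 eV, the ionization energy of hydrogen

# The correspondence argument at large n:
n = 1000
E_n = R / n**2  # binding energy at level n, read off from Rydberg's law

# classical frequency of a circular Coulomb orbit with that energy
nu_orbit = (math.sqrt(2) / math.pi) * E_n**1.5 / (k_C * e**2 * math.sqrt(m_e))

# radiation frequency for the n -> n-1 transition, from Rydberg's law
nu_rad = (R / h) * (1 / (n - 1)**2 - 1 / n**2)

print(nu_rad / nu_orbit)  # -> 1 as n grows: the two frequencies agree
```

The ratio only comes out to 1 because R was built from h the way Bohr says; plug in a different R and the two frequencies disagree at every n.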

impossiblewizardry: (Default)

ok i was excited that the currently posting It's Walky! storyline had Ultra Car because I've read all the other publicly available Ultra Car content

it turns out all she does is sit in the parking lot during the climactic fight scene:

They'll probably let me know if they need me.

which is perfect actually. Ultra Car is unhelpful. That's why it's so special when she is helpful, it shows she cares!

(like, with someone else you could wonder whether they care or if they're just doing it cause they feel like they're supposed to, but with Ultra Car you can rule out that second possibility)

impossiblewizardry: (Default)

ok my idea is: no more optimization in R. However, keep doing linear regression and random forest in R. Like the really simple baseline models. But actually optimizing parameters of a model, do that in PyTorch or something.

Learning the boundaries of how much I can get away with doing in R is important. Like doing something in R that gets too complex is how I've made myself miserable many times in the past.

impossiblewizardry: (Default)

Eliezer Yudkowsky:

Level Zero skill: When you make a mistake, rather than rehearsing the action you wish you'd taken, rehearse the plausible thought process that you wish had led up to that action. Like, if I fail to notice confusion, I imagine what it would have felt like at the time to notice the small note of discord, promote it to my attention, and reason about it using general heuristics that would have led to the right conclusion. Basically you want to visualize what it would have felt like, experientially, to be the nearest person to you who would have avoided that mistake using general thought processes and without any advance foreknowledge.

Ziz:

It's also important that you test whatever heuristics you come up with on other past experiences where the opposite choice was correct and you chose it. "Sure this heuristic would have saved me from trusting this untrustworthy person, but would it have also prevented me from trusting these other trustworthy people I did trust?"

This is what I love about programming computers compared to training myself. Computers make "rerun on all previously considered cases" easy, but for my own life it's difficult and time-consuming
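
A sketch of what the computer version looks like (the cases and the rule are entirely made up): the proposed heuristic has to pass the good calls you actually made, not just flag the bad ones.

```python
# (red flags observed at the time, whether trusting them turned out OK)
past_cases = [
    ({"breaks_promises", "isolates_you"}, False),
    ({"blunt"}, True),
    ({"breaks_promises"}, False),
    ({"disagrees_openly", "blunt"}, True),
]

def proposed_heuristic(red_flags):
    """Candidate rule: trust unless a serious flag is present."""
    return "breaks_promises" not in red_flags and "isolates_you" not in red_flags

# rerun on ALL previously considered cases, both kinds:
for flags, trusting_was_ok in past_cases:
    assert proposed_heuristic(flags) == trusting_was_ok
```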

impossiblewizardry: (Default)

Lucretius knew there was a copper age before the iron age, and that humans were hunter-gatherers before agriculture.

He got a lot of things wrong too (he thought the period of hunter gatherer tribes was preceded by a period of solitary humans iirc--nope!). Still it's way better than, for example, the narrative in genesis.

impossiblewizardry: (Default)

unlike some of you, I don't have a special organ just for getting fucked in. I work with what I have.

impossiblewizardry: (Default)

I still don't really get evolution. Like, I have no reason to believe it's feasible within the age of the earth, except that it actually happened. I have no theory where I can plug in life as we know it and get as output "that can evolve in a couple billion years, doesn't take a trillion"
