Alignment Research and Intelligence Enhancement by BLT

anon_hyni said in #3949 1mo ago:

https://substack.com/home/post/p-159429010

I always like reading what Ben has to say because he's careful and a good thinker and writer on important topics. I largely agreed with his criticism of the AI doomers' failed strategy (the main effect of which has plausibly been to speed up the dangerous kind of AI progress). I'm not surprised the rationalists have some disagreements, either. But the interesting payload here is that the core rationalists are now mostly focused on eugenic intelligence enhancement as their path to progress.

On one hand, as Ben says, this is great. If we could produce a cohort of thoroughbred eugenic geniuses, that would be an amazing breakthrough, throwing the spear of mankind beyond where it has previously been. You don't have to believe that its effect would fall primarily on the AI safety problem to believe that it would be great. Ben offered some very reasonable cautions about why he doesn't expect this to be as easy as they seem to think, but cheered them on anyways.

In that spirit, while cheering on our friends working on "intelligence enhancement", "superbabies" and all this wonderful stuff, I noticed one note of caution that was entirely missing from Ben's response, which seemed the obvious one to me: there is no reason, and I haven't even seen an argument, why intelligence enhancement would differentially favor AI alignment research over capabilities research. Certainly right now there's probably more total intelligence in capabilities research, and probably the peak intellects too. So it's not an empirical correlation.

The alignment people seem to have this idea baked in very deep that more intelligence means more strategic rationality means specifically a focus on the alignment problem. Is this self-flattery? Or is it downstream of their general worldview premise that a super-advanced AI would at some level of intelligence also eventually turn all efforts towards alignment? Or is there some specific reason they expect increased intelligence to shift the balance in favor of alignment?

(It goes without saying that the idea of AI alignment as a valid concept at all is highly questionable, but even granting that it's possible, why should it be aided by higher intelligence?)

referenced by: >>3958

anon_wove said in #3950 1mo ago:

BLT is my fav sandwich

anon_qyru said in #3958 1mo ago:

>>3949
>But the interesting payload here is that the core rationalists are now mostly focused on eugenic intelligence enhancement as their path to progress.
I'm ootl, is this their main plan now? From the bits I hear it sounds like they're still really into technical alignment and also trying to get more resources into technical alignment. But I know the upstream sentiment among the MIRI folk has been hopeless for a while.

Maybe it means the rats' actual timelines are getting longer, even if everyone is talking about them getting shorter, because longer timelines mean a lot more time for bio intelligence enhancement?

referenced by: >>3960

anon_guzo said in #3960 1mo ago:

>>3958

> Maybe it means the rats actual timelines are getting longer, even if everyone is talking about them getting shorter

Yeah "eugenic intelligence enhancement" makes no sense for the AI 2027 crowd.

Evolution can occur surprisingly rapidly, as previously discussed (see e.g. dog breeding), but significant changes still require multiple generations. Even with the most sci-fi iterated-embryo-selection technology (which does not exist at all today), it would take 20+ years for the first enhanced generation to be born and reach adulthood.

Yeah "eugenic intell received

You must login to post.