
A pinpoint brain with less than a million neurons, somehow capable of mammalian-level problem-solving.


anon 0x3a said in #474 2y ago: 22

What is the Darwinist cope for the origin of spider webs?

referenced by: >>478 >>480


anon 0x3c said in #477 2y ago: 33

>>473
This is a great article. I have long considered that the general intelligence algorithms that even very primitive-seeming animals (e.g. spiders) have are just way beyond anything we've yet put into computers. This implies the whole "scale is all you need" paradigm is total cope. No, your transformer with a trillion neurons isn't going to outsmart a spider with a million neurons any more than your transformer with a billion neurons could. It's just not the right *qualitative shape*.

This reminds me of a story James A. Donald once wrote, of a spider making a calculated jump from his shovel to a woodpile in a way that could not possibly have been preprogrammed by evolution. He immediately inferred a bunch of interesting consequences around AI, Moravec's paradox, and human psychology. I unfortunately can't find it now, but here are some of his thoughts in the general area:

https://blog.reaction.la/economics/no-real-ai-progress/
https://blog.reaction.la/science/moravecs-paradox-rna-and-uploads/


anon 0x3d said in #478 2y ago: 22

>>474
Well, given the apparent intelligence of spiders, and the clever uses they could come up with for sticky and stringy substances, it's not hard to imagine how one gets from the ability to sticky your enemy to the ability to tie him up, to the ability to trap him, to the ability to make complex traps, to the diversity of spider webs we see. Where do spiders get sticky stuff? I don't know. I note that many bugs have such things, often related to how they lay eggs or whatever. The rest is just protein tweaks, good old fashioned cleverness, and morphological and psychological changes doubling down on the most successful strategies.


anon 0x3f said in #480 2y ago: 11

>>474
More Kolmogorov than Darwin (though the two eventually merge [0]): the patterns of webs require only short programs to generate, and thus can fit into the tape-brains of spiders, as hunting behavior(s) can fit into ants [1].

[0] https://www.pnas.org/doi/full/10.1073/pnas.2113883119

[1] https://www.sciencedirect.com/science/article/abs/pii/S0925231211007375
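To make the "short program" point concrete, here is a toy sketch (my own illustration, not from [0]): an orb-web-like geometry falls out of a couple dozen lines, radial spokes plus a spiral, so the pattern itself carries very little algorithmic information.

```python
# Toy sketch (my own, not from [0]): an orb-web-like pattern from a
# short program -- radial spokes plus an Archimedean spiral -- as an
# illustration that the pattern has low Kolmogorov complexity.
import math

def orb_web(spokes=12, step=0.05):
    """Return line segments ((x1, y1), (x2, y2)) for a stylized web."""
    segments = []
    # Radial spokes from the hub out to the rim.
    for i in range(spokes):
        a = 2 * math.pi * i / spokes
        segments.append(((0.0, 0.0), (math.cos(a), math.sin(a))))
    # Capture spiral: connect successive points, advancing one spoke per step.
    prev, a, r = None, 0.0, 0.0
    while r < 1.0:
        pt = (r * math.cos(a), r * math.sin(a))
        if prev is not None:
            segments.append((prev, pt))
        prev = pt
        a += 2 * math.pi / spokes
        r += step
    return segments

print(len(orb_web()), "segments from a ~20-line generator")
```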


anon 0x40 said in #483 2y ago: 55

>"Portia is a really weedy little spider and has to spend ages planning a careful attack. But its eyesight and trial-and-error approach means it can tackle any sort of web spider it comes across, even ones it has never met before," says Harland.

This kind of careful, plotting cleverness is the kind that gets really dangerous at high intelligence levels. It can just sit there for ages coming up with an attack that will work. What if it had petaflops to play with?

referenced by: >>502


anon 0x41 said in #484 2y ago: 22

>Many psychologists would baulk at granting such abilities to cats or even chimps, never mind spiders.

This prejudice is really offensive. Are these people retarded? I mean it's understandable in this case, but cats? Sometimes I feel like no one else has ever met an animal. It's impossible to believe that these creatures are reflexive automatons without interiority if you just watch one for a while. Is this refusal to believe in animal intelligence as something separate from advanced verbal bullshitting the reason AI companies are so focused on the electric wordcel?

referenced by: >>502


anon 0x47 said in #502 2y ago: 22

>>497
>>484
>>483
Sorry, these quotes are from the PDF article linked in the OP:

https://www.rifters.com/real/articles/Sinclair%20ZX80%20spiders.pdf


anon 0x49 said in #504 2y ago: 22

where do scientists get this kind of arrogance from?

> Their nervous systems were supposed to be capable of no more than hard-wired reflexes, and certainly no one would talk in terms of thinking, planning, trial-and-error learning, attention span or - shudder - consciousness

I'm tempted to say STEMlordist materialism makes it impossible to conceive of these things, but I think this kind of attitude might be ancient. I can't remember how the Greek philosophers thought of animals. But this is the kind of thing where doctors justify not using anesthesia on infants by claiming infants don't feel pain, and similar inhuman psychotic nonsense.

As for consciousness, it's clear that the spider is conscious: https://www.nursingcenter.com/ncblog/october-2022/level-of-consciousness
>1. Alert: the patient opens their eyes spontaneously, looks at you when spoken to in a normal voice, responds appropriately to stimuli, and movements are purposeful.

It is alert. It opens its eyes spontaneously, responds appropriately to stimuli, and its movements are purposeful. It of course doesn't look at you when spoken to, because it is a spider, but that part isn't really relevant.

And before you object: that's the only notion of "consciousness," the condition of *being conscious*, that is meaningful. All else is being dumb and thinking that just because you turned an adjective into a noun, there must be some object or some kind of stuff that the noun now refers to.

referenced by: >>1829


anon 0x52 said in #534 2y ago: 00

I think a lot of this hinges on the question “what kind of thing is a brain?”

We really don’t know at this point, because most neuroscience is fake, although there are corners of neuroscience that look less fake, including the free energy stuff and harmonic stuff. Most neuroscientists are politically biased and academia is a pretty awful web of incentives, which makes me pessimistic about whether neuroscience will magically fix itself.

I’d claim that we need an alternative fresh branch of neuroscience, similar to how we need this in e.g. sociology, anthropology, etc. The neuroscience of today is upstream of the normative models of tomorrow, and these models may get used by neurotech and AI for various significant purposes. It would be good to offer a reasonable, heterodox-friendly foundation with room for views such as gender differences, somatic stuff, the importance and limits and mechanisms of intuition, etc.


anon 0x53 said in #536 2y ago: 11

it's simple. a brain is that big mushy organ in your skull which coordinates the nervous system. most nervous system activity either is in this organ, or goes from or to this organ, to or from other parts of the body. other animals have organs like this. anything else is a pseudoscientific extension of the notion of brain.


anon 0x56 said in #542 2y ago: 11

We can say a computer is a hard, rock-like object that can process information and talk to other computers, but we leave enormous value on the table by not digging in deeper. Turing machines, von Neumann architecture, 7 network layers, stacks, compilers, high-level vs low-level languages, buffers and buffer overflows, type errors, race conditions, speculative execution, memory paging, ASICs vs general-purpose computing — these things offer deep insight on "what is a computer".

Finding such analogues for brains, or finding great arguments for why such analogues don’t exist, seems important.

referenced by: >>544


anon 0x57 said in #544 2y ago: 22

>>542
Yeah we don't know brains as well as computers because we're not brain engineers, but there must be such detail. "Mushy nervous system organ" or even "control center" is good as babby's first description, but it's too conservative. I think what the epistemically conservative anon is getting at is that people overconfidently declare much more speculative things without particularly solid grounding, but that doesn't mean we shouldn't try.

I tend to follow the "every organ is simple" theory. The heart pumps blood. The circulatory system conducts it. The muscles produce force. The stomach breaks down food. Etc. The brain pumps information, knowledge, and action signals. How?

We don't have the basic Turing machine model of intelligence (or whatever we want to call what the brain does), if such a model exists. That would be a good start. What is the basic theory of how sensor signals can be integrated into a world model, plans made according to goals, and actions coordinated? Of course, if we had such a theory we could probably build one on a computer...
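Not that we have that theory, but for concreteness, here is the shape of the loop I mean as a sketch (entirely my own toy framing, not an established model). The open question is what actually belongs inside update() and plan(), not the loop structure itself.

```python
# A minimal sense -> model -> plan -> act loop (my own toy framing,
# not an established theory of what brains do). All names here are
# illustrative placeholders.
from dataclasses import dataclass, field

@dataclass
class Agent:
    world_model: dict = field(default_factory=dict)  # integrated beliefs
    goal: str = "reach_prey"

    def update(self, observation: dict) -> None:
        """Fold new sensor data into the world model."""
        self.world_model.update(observation)

    def plan(self) -> list:
        """Pick an action sequence the model predicts achieves the goal."""
        if self.world_model.get("prey_visible"):
            return ["orient", "approach", "strike"]
        return ["scan"]

    def act(self, actions: list) -> None:
        for a in actions:
            print("doing:", a)

agent = Agent()
agent.update({"prey_visible": True, "distance": 3})
agent.act(agent.plan())
```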


anon 0x5b said in #565 2y ago: 44

Stephen Wolfram's result on the simplest Universal Turing Machine may be relevant to thinking about what's going on with spiders. Wolfram conjectured (and Alex Smith proved in 2007) that a Turing machine with only 2 states and 3 symbols is universal, capable of literally any possible computation. (Not any function, of course, just any computable function.)

Given that result, it's not hard to believe that a spider could have a great deal of intelligence with a few thousand neurons.

Note: I am not claiming that a spider's neurons form a Turing Machine or anything like that! My point is without prejudice to an account of how spiders actually work.

It's simply saying that it's easy to believe that intelligence like a spider's need not track a metric such as number of neurons.
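For anyone who hasn't seen how small such a machine is, here is a generic simulator (not Wolfram's actual 2,3 rule table, which is in his writeup; the example machine below is a trivial unary incrementer of my own). The thing to notice is that the whole "brain" is a tiny rule table, and everything else lives on the tape.

```python
# Generic Turing machine simulator (NOT Wolfram's 2,3 rule table --
# see his writeup for that). Point: the control "brain" is a tiny
# rule table; all the working state lives on the tape.
def run(rules, tape, state, steps):
    """rules: {(state, symbol): (new_state, write, move)}; move is -1 or +1."""
    tape = dict(enumerate(tape))  # sparse tape; unwritten cells read as 0
    pos = 0
    for _ in range(steps):
        key = (state, tape.get(pos, 0))
        if key not in rules:
            break  # no rule: halt
        state, write, move = rules[key]
        tape[pos] = write
        pos += move
    return [tape[i] for i in sorted(tape)]

# Example machine (2 rules): walk right over 1s, write a 1 on the
# first blank, then halt.
inc = {
    ("go", 1): ("go", 1, +1),
    ("go", 0): ("done", 1, +1),
}
print(run(inc, [1, 1, 1, 0, 0], "go", 20))  # -> [1, 1, 1, 1, 0]
```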

referenced by: >>1822


anon 0x2af said in #1822 12mo ago: 33

>>565
That's interesting. Of course a universal TM's power depends on having a program and data segment stored somewhere, and the universality comes from those being arbitrarily large. Neurons do computation, but they also hold state. What can a UTM do with only a few thousand slots on the tape? In any case it's plausible that a very small and simple machine could be fully generally intelligent, with the number of neurons scaling only speed and memory.
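Back-of-envelope on the "few thousand slots" point (the numbers below are arbitrary, my own): a bounded tape technically makes the machine a finite automaton, but the configuration space is so vast that the bound means nothing in practice.

```python
# Back-of-envelope (arbitrary numbers, my own): a machine with a
# bounded tape is technically finite, but count its distinct
# configurations: states x head positions x possible tape contents.
states, symbols, cells = 2, 3, 2000
configs = states * cells * symbols ** cells
print(f"~10^{len(str(configs)) - 1} distinct configurations")
# Prints roughly 10^957 -- "finite" is no practical limitation.
```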

This is why I don't believe the scaling hypothesis for AGI (that scale is all you need). It has been somewhat conflated with the bitter lesson (that more general algorithms that learn their domain from data and compute are more successful than more specialized algorithms with baked in knowledge that therefore don't scale with data and compute), but this is inappropriate. I think you could develop full generality on relatively smaller problems with small compute. The key is getting the right algorithms.


anon 0x2b0 said in #1829 12mo ago: 22

>>504
> I can't remember how the greek philosophers thought of animals.

Aristotle described animals as having souls* capable of knowledge derived from the senses and the capability to use that knowledge to support the movements characteristic of their species.

*"Soul" just means the dynamic form of a living being. It is not a thing existing apart from the living body. For Aristotle, plants also have souls, just with functions more limited than those of animals.

