
Who’s Behind All the ‘Pussy in Bio’ on Twitter? A little something on what we're up against.


anon 0x117 said in #1184 14mo ago: 77

This caught my eye as an example of the problems that plague public and especially anonymous social media. I hope this platform will become a serious piece of our social infrastructure, and that's going to mean eventually dealing with not only idiots, but also spammers, shills, fedposters, and even darker things (pic related). These all have the commonality of being undesirable content from undesirable people, at high volume, adversarially concocted to be labor-intensive to identify. This will be especially important if there are serious factions and powers upset about the conversations we have here, which I expect there will be if we say anything of consequence.

This little piece of the technological frontier appeals greatly to me. If we could maintain a platform with lots of free, high-quality human discourse, we'd have something of immense importance. Some of the most important moments in history have been driven by exactly that. The basic problem, then, is to efficiently and accurately verify the good-faith humanity of posters, and the quality of posts.

I've been thinking about how to do this for this platform, and I've got a few ideas. I'm curious what other ideas you guys have, and how well you think these will work in practice:

* First of all, we have our system of community-driven curation of posts and posters (already in operation). The quality of each post is estimated from the votes, interactions, and poster. High quality posts last longer, low quality posts get deleted. If your interactions (and posting) predict quality well, then the system trusts you more. It's a self-supervised learning system, seeded in its quality estimates by the administrator's judgement but substantially relying on the consensus of high-taste users. The fact that we verify/enforce the trustworthiness and taste of users means we can empower most users to participate in moderation. I believe this will allow us to rapidly identify and delete bad-faith content. There's nothing more frustrating than being able to identify spam as a user but not being empowered to delete it as a moderator. This solves that problem; if you can reliably discriminate, you have power.

* As the spammers begin to arrive, we have the capability to restrict new unverified users (until they prove themselves) to certain threads, to early threads, and to restrict their ability to post images and new threads. This should contain the problem if there is one, and raise the cost of building up and then losing the reputation to do the most consequential spam. Some anons on 4chan were suggesting a proof of work scheme to make spam computationally intensive, but this is one better; you have to prove yourself with the work of making worthwhile posts.

* Eventually, I want to build a vouching system where, if you happen to know that someone else is a good-faith poster, you can vouch for them. If they are subsequently found to be high quality and good faith, you gain reputation from that. If they are found to be a problem, you lose. Combined with the community curation, this should make it basically impossible for undesirables to take root, let alone stick around. Whole communities of undesirables that vouch each other in can be purged automatically by the fact that no one else likes them. This will be another way of raising the good-faith baseline so that hostiles stand out more and get purged faster. The trick here will be designing ways for us to identify each other as friends worthy of vouching on what is primarily an anonymous platform.

* Given the above systems, there will be a niche for moderation bots that use features of posts, posters, posting styles, content, privileged back-end information, and other clues to discriminate quality and trash. The automatic taste estimation systems will give us a very powerful architecture to integrate these without worrying too much about how they interact, whether we're over-weighting certain features, or whatever.
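To make the curation and vouching mechanics above concrete, here is a minimal sketch in Python. Everything in it is a hypothetical illustration, not the actual implementation: the names (`User`, `post_quality`, `update_trust`, `vouch`, `resolve_vouch`), the learning rate, and the stake sizes are all assumptions.

```python
class User:
    """A poster with a scalar trust score (assumed representation)."""
    def __init__(self, trust: float = 0.1):
        self.trust = trust  # new, unvouched users start with little influence


def post_quality(votes: dict) -> float:
    """Trust-weighted average of votes (+1 good, -1 bad), in [-1, 1].
    `votes` maps User -> vote. The real system would also fold in
    interactions and the poster's own reputation."""
    total = sum(u.trust for u in votes)
    if total == 0:
        return 0.0
    return sum(u.trust * v for u, v in votes.items()) / total


def update_trust(user: User, vote: int, consensus: float, lr: float = 0.05) -> None:
    """The self-supervised step: trust rises when a user's vote predicted
    the settled consensus, and falls when it didn't."""
    user.trust = max(0.0, user.trust + lr * vote * consensus)


def vouch(voucher: User, newcomer: User, boost: float = 0.05) -> None:
    """A vouch bootstraps the newcomer's influence immediately..."""
    newcomer.trust += boost


def resolve_vouch(voucher: User, outcome: float, stake: float = 0.02) -> None:
    """...and is settled later: the voucher gains or loses trust with the
    vouched user's eventual standing (`outcome` in [-1, 1])."""
    voucher.trust = max(0.0, voucher.trust + stake * outcome)
```

Note the failure mode this shape handles: a clique of spammers vouching each other in only gets the small bootstrap boost, and their trust can then only grow through `update_trust`, which requires agreeing with the broader consensus. A community that no one else likes decays together.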

What do you guys think?

referenced by: >>1613


anon 0x12d said in #1221 14mo ago: 44

>I believe this will allow us to rapidly identify and delete bad-faith content.
>As the spammers begin to arrive, we have the capability to restrict new unverified users (until they prove themselves) to certain threads, to early threads, and to restrict their ability to post images and new threads.
>Given the above systems, there will be a niche for moderation bots that use features of posts, posters, posting styles, content, privileged back-end information, and other clues to discriminate quality and trash.

Is it possible to do some wargaming? Like, a platoon of spammers who are vouching for themselves tries to post retarded (but harmless) stuff and there is a test of how long it takes for them to be repelled.

>Given the above systems, there will be a niche for moderation bots that use features of posts, posters, posting styles, content, privileged back-end information, and other clues to discriminate quality and trash.

Stylometry has come up in discussions before (>>329), but trying to anonymize ourselves by all writing in a similar style might be way too restrictive. Using stylometry for moderation bots, e.g. to detect spambots and distasteful activity from humanoids, seems more realistic and reasonable. Stylometry is also helpful for detecting friends on the forum.
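For flavor, a toy version of what a stylometry votebot might do: build function-word frequency profiles per post and compare them, so a "platoon" of sockpuppets with near-identical profiles stands out. The marker list and any similarity threshold are assumptions for illustration; real stylometry uses hundreds of features.

```python
import math
import re
from collections import Counter

# Hypothetical marker set; function words are hard to fake consistently.
MARKERS = ["the", "of", "and", "to", "a", "in", "that", "is", "it", "for"]


def style_vector(text: str) -> list:
    """Relative frequency of each marker word in the text."""
    words = re.findall(r"[a-z']+", text.lower())
    n = max(len(words), 1)
    counts = Counter(words)
    return [counts[m] / n for m in MARKERS]


def cosine(u: list, v: list) -> float:
    """Cosine similarity between two style vectors; 1.0 means identical."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    if nu == 0 or nv == 0:
        return 0.0
    return dot / (nu * nv)
```

Many "different" accounts whose pairwise similarity sits suspiciously close to 1.0 would be a feature the curation system could weigh, without being the sole verdict.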

>Some anons on 4chan were suggesting a proof of work scheme to make spam computationally intensive, but this is one better; you have to prove yourself with the work of making worthwhile posts.
>>1136

I think the Goldhaber article justifies why computation itself cannot be the foundation of repelling spam: if compute power is all that's needed to create more spam, then that's what compute will be used for. Having the powerful (by virtue of their taste) hold the authority to determine where attention should be directed brings us closer to the nature (physis?) of intellectual leadership in older societies.

referenced by: >>1222 >>1604


anon 0x116 said in #1222 14mo ago: 77

>>1221
>Is it possible to do some wargaming?
Yes but not yet please. We are still in alpha around here. When the time comes we can invite our friends from 4chan to stress test the system.

Stylometry votebots are on the roadmap as per OP. Trust the plan. We might want an anti-stylometry assist feature when composing posts that highlights words or phrases which are statistically identifying, but this is not on the roadmap.
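One way such an assist could work: score each word in a draft by how much more characteristic it is of your past writing than of the forum baseline, and highlight the top offenders. The log-odds scoring and the shape of the frequency tables here are my assumptions, sketched for illustration only.

```python
import math
import re


def identifying_words(draft: str, your_freq: dict, forum_freq: dict, top: int = 5) -> list:
    """Rank words in `draft` by log-odds of appearing in your writing vs.
    the forum baseline. `your_freq` and `forum_freq` map word -> relative
    frequency (hypothetical inputs). Returns up to `top` words that lean
    toward identifying you (positive score)."""
    eps = 1e-6  # smoothing so unseen words don't blow up the ratio
    words = set(re.findall(r"[a-z']+", draft.lower()))
    scored = []
    for w in words:
        score = math.log((your_freq.get(w, 0.0) + eps) /
                         (forum_freq.get(w, 0.0) + eps))
        scored.append((score, w))
    return [w for score, w in sorted(scored, reverse=True)[:top] if score > 0]
```

The composer would then underline those words so you can swap them out before posting.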

> Having the powerful (by virtue of their taste) hold the authority to determine where attention should be directed brings us closer to the nature (physis?) of intellectual leadership in older societies.

Precisely.


anon 0x22d said in #1613 13mo ago: 88

>>1184
>I hope this platform will become a serious piece of our social infrastructure, ... and that's going to mean eventually dealing with... darker things (sites being shut down by the hosting platforms).

The current curation system is really cool and seems to work well so far, and the voucher system seems like a great idea for onboarding new users, but there still is a core issue of being shut down. Unironically Urbit could provide help here. It is completely peer to peer, meaning everyone could run their own version of the app so that it is impossible to shut down.

Something to think about and perhaps contact them about if the situation ever becomes dire. I'm sure there are a handful of Urbiters that would love to help. A very adjacent and aligned community.

referenced by: >>1621 >>1634


anon 0x231 said in #1621 13mo ago: 77

>>1613

Nothing against Urbit, but there are many technical options between what admin is doing at present and Urbit. One can independently host, etc. Even individuals can easily go much further than they usually do, e.g. the methods here:

https://sive.rs/ti

Since admin is making a serious project of this, he could go further.

referenced by: >>1634


anon 0x116 said in #1634 13mo ago: 77

>>1613
1. Urbit is not designed to be uncensorable, nor is it particularly useful for our purposes.
2. I'm not worried about this escalating to the point that the powers that be are trying to shut us down at the internet infrastructure level. I will personally rein you guys in far short of that point. That's my job as admin, actually, to channel the potential of the forum in productive ways that don't just pick fights with intelligence agencies. That's not why we're here. I hope I don't need to remind you that fedposting is not allowed.
3. Our web host is great and will not shut us down without a court order.
4. The Daily Stormer, "the most censored publication in history", is still up and is the first search result for its own name (though Inspire and Dabiq, two other obvious canaries, are down afaict).
5. If push comes to shove we will launch an uncensorable overlay network. Philosophy is non-negotiable.

>>1621
Great link. Deserves its own thread.

--admin

