
RE: Community Antiabuse Discord Bot

in #antiabuse • 2 months ago
Plagiarism, spam, identity theft, comment farming for rewards, recycling content, and piece of shit (POS). Note that the community defines what the definitions and categories are.
Should POS fall under an "Abuse Detection" project? I would have thought it was up to the Hive userbase to DV it if they were so inclined, and only if they came across it, rather than a concerted effort to track it down and DV it. And then there's the question of "Who says it's shit?" See, I think 99.9% of memes are shit, but lots of people seem to like them. 🤷‍♂️

This is under the assumption that a community can make up its own categories of what it finds abusive. If you post memes and one pocket of the community says it's abuse, then it is abuse. This is still a decentralized tool for any dev who can make their own bot for their community.

That's why the bot isn't available to the public yet. It's just a prototype, because my blacklist may contain categories or names that other groups wouldn't consider worthy of being on their blacklists to begin with.

You need to have some of your own.
People who walk funny
Vegans
People who make noises by blowing leaves wedged between their thumbs

Having an umbrella term like Disliked would solve that. I'm sure I can remember what I dislike about someone when I see their posts.

But on a serious note, I'd like to see more communities out there take charge of their own antiabuse efforts, and this tool enables them to get started and keep their own records. Decentralized blacklists aren't new; we can do exactly that, but we end up making enemies on chain, since notifications can give away to the other person that you've blacklisted them.

So do you intend it to only blacklist (ban) them from the community that's using the bot, then? Surely they're going to know when nobody interacts with them? Is there an appeals process?

So do you intend it to only blacklist (Ban) them from the community that's using the bot then?

It's up to the community if they want to collaborate with other communities when it comes to their list making.

Surely they're going to know when nobody interacts with them? Is there an appeals process?

It's up to the community to handle these too. Again, instead of a funded antiabuse project, the tool enables more communities to run their own decentralized antiabuse initiative. A dev from a community can piece together their own bot from the info here and start their own gimmick.

The major difference here is that information is now more accessible: there is a paper trail showing why a user has been blacklisted, readily visible to mods and anyone inquiring. Members of an organization can call upon the bot on Discord whenever an inquiry is made, so the problem of being understaffed goes away.
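As a rough sketch of that paper-trail idea (this is not the author's actual bot; the record fields, account names, and the `why` query are all hypothetical), a community's blacklist could pair each account with its community-defined category and the evidence behind it, so a Discord command can answer "why is this user blacklisted?" on demand:

```python
from dataclasses import dataclass, field

@dataclass
class BlacklistEntry:
    account: str
    category: str  # community-defined, e.g. "plagiarism", "comment farming"
    evidence: list[str] = field(default_factory=list)  # links/notes forming the paper trail

class Blacklist:
    def __init__(self) -> None:
        self._entries: dict[str, BlacklistEntry] = {}

    def add(self, account: str, category: str, evidence: list[str]) -> None:
        self._entries[account] = BlacklistEntry(account, category, evidence)

    def why(self, account: str) -> str:
        """The kind of reply a bot command could give to an inquiry."""
        entry = self._entries.get(account)
        if entry is None:
            return f"@{account} is not on this community's blacklist."
        proof = "; ".join(entry.evidence) or "no evidence recorded"
        return f"@{account}: {entry.category} ({proof})"

# Hypothetical usage: a mod records an entry, anyone can later ask why.
bl = Blacklist()
bl.add("spammer123", "comment farming", ["link-to-flagged-comment-thread"])
print(bl.why("spammer123"))
```

Because the evidence travels with the entry, any mod (or the bot itself) can answer inquiries without the person who made the original call being online.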

Again, it's a tool that just empowers communities to run their own antiabuse program and to define their own abuse terms however they want.

Yeah, I saw the data output and thought, "Aha! Reasons why. Now that's a step in the right direction."


@adamada, you've been given LUV from @dickturpin.

Check the LUV in your H-E wallet. (1/1)