Hi Lemmy,
I’ve created a bot that I envision helping the Lemmy community, although admittedly it might be a problem that doesn’t need solving. Big social media platforms like Meta have buildings full of people moderating content; keeping internet content safe can be a very laborious and arduous task. I’ve not really seen anything on Lemmy that is offensive, so I think the moderation is pretty good (or you peeps in the community are being great). If content were questionable, then this AI bot, LemmyNanny, could be a good start at adding robot eyes to moderation.
About https://lemmynanny.ca/
While creating this bot I felt like it was producing some interesting responses. As LemmyNanny ran on my tablet, the console output kept my attention: I wanted to see what the AI was processing next, what it was seeing, and what it was thinking. I’m a web dev by trade, so I extended LemmyNanny with some webhooks to push output elsewhere (which could be good for logging if it’s actually used as envisioned).
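If you’re curious what a push might look like, here’s a minimal sketch; the endpoint URL and payload shape are hypothetical, not LemmyNanny’s actual interface:

```typescript
// Minimal sketch of pushing a moderation verdict to a webhook.
// The endpoint URL and payload shape here are hypothetical,
// not LemmyNanny's actual interface.
type ModerationEvent = {
  postUrl: string;       // the Lemmy post or comment being reviewed
  verdict: "yes" | "no"; // whether the model flagged it as reportable
  reasoning: string;     // the model's full response text
};

async function pushToWebhook(event: ModerationEvent): Promise<void> {
  const res = await fetch("https://example.com/lemmynanny-log", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(event),
  });
  if (!res.ok) {
    console.error(`Webhook push failed: ${res.status}`);
  }
}
```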
But yea, it’s just a few weeks of me musing and hacking something together for the community. Hopefully you find it novel if nothing else; maybe make a drinking game out of it 🤷‍♂️.
Direct link to repository is here: https://github.com/IsaaacD/LemmyNanny
Sure, people might downvote here, but engineers care about facts. Have you tried testing this in a real-world setting? Working with moderators? What feedback did you get?
Right now this is experimental. You can’t just use AI and automatically expect it to do a better job than the established methods.
Yea, I agree this is experimental. It’s not meant to replace anyone, but I’ve been using it to sort of “browse” Lemmy. If moderators or admins want to use it to moderate a community or instance, then that’s great, but if not, it was just like 3 weekends of playing around. I had fun, and if it dies like the rest of my pet projects, that’s fine too.
Is this type of AI the best one for moderation?
What are some example responses from the bot? Also, how do you interpret the bot’s response to identify reportable content? I only found a condition that checks whether the response starts with “yes”.
This one was surprising in that it sort of broke the prompt to be a bit more heartfelt.
And yea, there’s a user-configurable prompt in the settings, but hardcoded at the end of the prompt is an instruction to start the response with “Yes” (to report) or “No”. Ollama does support tool calling, but I just wanted to use the models I wanted, even if tool usage isn’t part of them.
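Roughly, the flow looks like this. This is an illustrative sketch, not the exact code; the function names and prompt wording are mine, and it assumes a local Ollama server with its standard non-streaming /api/generate endpoint:

```typescript
// Illustrative sketch of the yes/no verdict flow, not LemmyNanny's
// exact code. Assumes a local Ollama server and its standard
// non-streaming /api/generate endpoint.
const VERDICT_SUFFIX =
  'Start your answer with "Yes" if this content should be reported, or "No" if it should not.';

async function isReportable(userPrompt: string, content: string): Promise<boolean> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3", // whichever model you've pulled locally
      prompt: `${userPrompt}\n\n${content}\n\n${VERDICT_SUFFIX}`,
      stream: false,
    }),
  });
  const { response } = await res.json();
  // The only contract is the hardcoded suffix above: report iff the
  // model's reply starts with "yes" (case-insensitive).
  return response.trim().toLowerCase().startsWith("yes");
}
```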
I don’t want to discourage the effort you put into trying to build something useful, so I’m trying to say this nicely… you’d probably get more interest if you built a moderation tool that doesn’t involve AI.
Lemmy is probably an unlikely crowd to look to for a positive reception of an AI nanny bot. Even less so on .ml, and even less so in an open source community.
I’m sure your intentions are good, but Lemmy doesn’t need more AI.
Fucking clankers.