Basically a deer with a human face. Despite probably being some sort of magical nature spirit, his interests are primarily in technology, politics, and science fiction.

Spent many years on Reddit before joining the Threadiverse as well.

  • 0 Posts
  • 38 Comments
  • Joined 2 years ago
  • Cake day: March 3rd, 2024

  • FaceDeer@fedia.io to me_irl@lemmy.world · ↑ 3 / ↓ 5 · 18 days ago

    If the art doesn’t look good by whatever standards you have, then it doesn’t look good. Whether it’s not-good AI-generated or not-good human-generated doesn’t matter.

    Just look at the picture, and if you like it then like it. This moral panic about Abominable Intelligence’s supposedly soulless touch is pointless.


  • FaceDeer@fedia.io to me_irl@lemmy.world · ↑ 4 / ↓ 5 · 18 days ago

    I don’t inflict it on myself, because when I see a piece of art that looks really neat I go “ooh, that looks really neat” rather than “wait, I need to dig around to find out whether I’m supposed to like this or not.”

    People have to actively choose to make themselves miserable in the way this comic depicts. That’s what I mean when I say it’s self-inflicted.

  • For every news article you read?

    That’s the point here. AI can automate tedious tasks. I could have a button in my browser that, when clicked, tells the AI to follow up on an article’s sources and confirm that they actually say what the article claims they say (roughly the kind of helper sketched at the end of this comment). It can highlight the ones that don’t. It can add notes when a source is inherently questionable (environmental projections from a fossil fuel think tank, for example). It can highlight claims that don’t have a source at all, and do a web search to try to find one.

    These are all things I can do myself by hand, sure. I do that sometimes when an article seems particularly important or questionable, but it takes a lot of time and effort. I would much rather have an AI do the grunt work of going through all of that and highlighting problem areas for me to check up on myself. Even if it makes mistakes sometimes, that’s still going to give me a far more thoroughly checked and vetted view of the news than my existing process does.

    Did you look at the link I gave you about how this sort of automated fact-checking has worked out on Wikipedia? Or was it too much hassle to follow the link manually, read through it, and verify whether it actually supported or detracted from my argument?
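
    To make that concrete, here’s a rough sketch of what the helper behind that button might look like. It’s a minimal sketch only: the function names are made up, and verify_claim_against_source is just a placeholder for whatever model call you’d actually wire in.

```python
import re
import requests


def extract_source_links(article_html: str) -> list[str]:
    """Pull the outbound links (the article's cited sources) out of raw HTML."""
    return re.findall(r'href="(https?://[^"]+)"', article_html)


def verify_claim_against_source(claim: str, source_text: str) -> str:
    """Placeholder for the LLM step: ask a model whether source_text
    supports, contradicts, or never mentions the claim.
    Should return "supported", "contradicted", or "not found"."""
    raise NotImplementedError("wire this up to whatever model you use")


def review_article(article_url: str, claims: list[str]) -> dict[str, list[str]]:
    """Fetch an article's cited sources and flag claims each one fails to back up."""
    article_html = requests.get(article_url, timeout=10).text
    problems: dict[str, list[str]] = {}
    for link in extract_source_links(article_html):
        try:
            source_text = requests.get(link, timeout=10).text
        except requests.RequestException:
            problems.setdefault(link, []).append("source unreachable")
            continue
        for claim in claims:
            verdict = verify_claim_against_source(claim, source_text)
            if verdict != "supported":
                problems.setdefault(link, []).append(f"{verdict}: {claim}")
    return problems
```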