  • It’s the same trap that execs fall into when thinking they can replace humans with AI

    Gen AI doesn’t “think” for itself. All the models do is answer “given the text X in front of me, what’s the most probable response to spit out?” They have no concept of memory or anything like that. Even chat convos are a bit of a hack: all that happens is that all the text in the convo up to that point gets thrown in as X. It’s why chat window limits exist in LLMs — there’s a limit to how much text they can throw in for X before the model shits itself.

    That doesn’t mean they can’t be useful. They’re semi-decent at taking human input and translating it into programmatic calls (something you previously had to do with clunky NLP libraries). They’re also okay at summarizing info.

    But the chatbots and the garbage hype around them have people convinced that these are things they’re not. And every company is starting to learn the hard way that there are hard limits to what these things can do.
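
    The “chat is just concatenated text” point above can be sketched in a few lines. Everything here is a hypothetical stand-in — `complete()` represents whatever completion API you’d actually call, and `MAX_CHARS` stands in for the model’s real context window:

    ```python
    # Naive chat loop: the model has no memory of its own. Every turn we
    # rebuild one big prompt string X from the whole conversation so far.
    # complete() is a hypothetical stand-in for a real LLM completion call.

    MAX_CHARS = 8000  # stand-in for the model's context window limit

    def complete(prompt: str) -> str:
        # Placeholder: a real implementation would send `prompt` to a model.
        return f"[model reply to {len(prompt)} chars of context]"

    def chat_turn(history: list[str], user_msg: str) -> str:
        history.append(f"User: {user_msg}")
        prompt = "\n".join(history)
        # The "chat window limit": drop the oldest turns once X gets too big,
        # which is why long conversations silently forget their beginnings.
        while len(prompt) > MAX_CHARS and len(history) > 1:
            history.pop(0)
            prompt = "\n".join(history)
        reply = complete(prompt)
        history.append(f"Assistant: {reply}")
        return reply

    history: list[str] = []
    chat_turn(history, "Hello!")
    chat_turn(history, "What did I just say?")  # works only because it's still in X
    ```

    The model only “remembers” your earlier message because it’s literally pasted back into X; trim it out of the prompt and it’s gone.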

  • “What I’ve seen with AI is an erosion of trust.”

    Mozilla is not going to train its own giant LLM anytime soon. But there’s still an AI Mode coming to Firefox next year, which Enzor-DeMeo says will offer users their choice of model and product, all in a browser they can understand and from a company they can trust. “We’re not incentivized to push one model or the other,” he says. “So we’re going to try to go to market with multiple models.” Some will be open-source models available to anyone.

    This is such an out-of-touch non-answer.

    People don’t oppose AI changes because they’re locked into a model. In fact, most AI products I use for my job already let you choose a fucking model.

    People hate them because

    A) 90% of the time they’re useless, and the remaining 10% they’re detrimental to the product experience

    B) Ethical concerns about training off of artists’ and authors’ work, as well as the environmental impact. EDIT: also the general trend of trying to replace humans with AI.

    C) Not wanting to play into the fucking arms race the billionaire class is manufacturing

    D) The times they could be useful, they risk being either hilariously wrong or dangerously wrong. And no amount of training or GPU manufacturing is gonna fix that.

    Absolutely none of this is addressed by the CEO. I’m sure he has to say this because of the fucking tulip-craze money swirling around all this, but it doesn’t make it any less tone-deaf or futile.