cross-posted from: https://lemmy.world/post/44699253

This is clearly a sign that the product failed to draw in enough customers and its viability was overhyped.

Hopefully, it is the start of the AI bubble bursting.

      • queermunist she/her@lemmy.ml
        19 hours ago

        Robots aren’t like software: it’s immediately obvious when they don’t work the way they’re advertised, whereas chatbots can trick people into thinking they’re way more useful than they actually are. The “fake it till you make it,” “move fast and break things” ethos of tech doesn’t work when there’s actual, physical evidence that shit’s busted.

        • StupidBrotherInLaw@lemmy.world
          17 hours ago

          Unpopular Opinion Incoming

          I was assigned at work to evaluate a few LLMs for potential adoption, so I spent a solid week doing so.

          Most of the “AI is broken and doesn’t work” on here is solid echo chamber cope. It’s more competent than several of my coworkers, though it’s thankfully not ready to replace knowledge workers as it requires a knowledge baseline to best direct it and evaluate its answers.

          I still advised against using it for multiple reasons, including ethics, but much of Lemmy is playing make believe about the actual capabilities of LLMs.

          • Erdalion@lemmy.world
            16 hours ago

            Mind telling us what it is that you do? I heard similar things being said in the Plain English podcast last week (and the host was pretty anti-AI before) and I’m starting to wonder if certain jobs are going to be more affected than others.

            Or are your coworkers just bad at what they do? :P When I was working tech support, there were people who were worse at their jobs than the bots of the time, let alone LLMs, I swear.

            • StupidBrotherInLaw@lemmy.world
              11 hours ago

              Electrical engineering. My mentioned coworkers are competent but more junior in the field. We did a miniature internal study and found the best models provided accurate, relevant information on the first prompt about 90% of the time when asked to explain or verify concepts. The remainder consisted of hallucinations or misunderstood queries.

              They struggled with questions that instead required complex problem-solving, providing some mixture of appropriate solutions, overly complex but still functional solutions, and hallucinated shite.

              I recommended that we do not move forward with adopting AI in any capacity. While it has some utility for basic information retrieval and fact checking, it still requires someone with sufficient knowledge to quickly evaluate the quality of its output. Helpful for someone who knows what they’re doing, dangerous 10% of the time for someone who does not. I also highlighted the ethical concerns, many of which my peers were unaware of.

          • queermunist she/her@lemmy.ml
            15 hours ago

            Cool anecdote. Every time we actually see real data, though, the numbers don’t reflect much in the way of productivity gains, increased efficiency, or better output. People say that LLMs are useful because they feel useful, but we aren’t seeing actual usefulness. The most recent study out of Duke University observes “a productivity paradox, in which perceived productivity gains are larger than measured productivity gains, likely reflecting a delay in revenue realizations.”

            A delay. Sure.

            • StupidBrotherInLaw@lemmy.world
              edited · 12 hours ago

              I really appreciate your dismissive, arrogant tone. Casually waving away my anecdote while providing even less substance to support your own point really added to it.

              But hey, it got you those “supporting the echo chamber by dunking on dissent” up votes, and that’s what we’re all here for, right?

      • schema@lemmy.world
        19 hours ago

        Correct, though there is still good news in a way: OpenAI is rapidly running out of money. So much so that they have to pick and choose one thing over another.

        They would have done the robot thing anyway, but the fact that they had to shut something else down for it shows that the massive deficit is starting to affect them pretty heavily.

        Maybe I’m just coping, but IMO the cracks are getting bigger and bigger.