• melsaskca@lemmy.ca · +2 · 2 hours ago

    Should we trust a researcher whose brain got fried? Did they remember to do the old double-blind setup before the frying of the brains occurred?

  • Comet79@lemmy.world · +22/-3 · 8 hours ago

    1980: TVs will fry your brain

    1990: Videogames will fry your brain

    2000: Computers will fry your brain

    2010: Smartphones will fry your brain

    2020: AI will fry your brain

    Any takes for the 2030s?

    • flying_sheep@lemmy.ml · +3 · 7 hours ago

      And before that, books and comics. But LLMs are different: they pretend to be your friend but actually just encourage whatever you come up with. You can easily fry people’s brains by being their sycophant, and now everyone can subscribe to one.

  • HertzDentalBar@lemmy.blahaj.zone · +8 · 8 hours ago

    I fucking hate this AI shit, but I’ll admit I end up using Gemini (knowing it’s wrong sometimes). It’s like how I’d use Google, just for more complex asks instead of simple search queries. I couldn’t imagine using it beyond that, other than a follow-up or two.

    It’s just a chatbot that has access to info. Who goes onto their cable company’s website and befriends the chatbot?

    • Boingboing_r@lemmy.world · +4 · 4 hours ago

      I have found Google search to be getting progressively worse, whereas I can type out a question to Gemini and get better results than Google search returns. It’s annoying that Google search has gotten so bad, and DuckDuckGo will return something interesting but not relevant. So Gemini is my Google search nowadays.

    • WIZARD POPE💫@lemmy.world · +1 · 3 hours ago

      I’ve used GPT a couple of times when I was searching the web and forums for well over an hour and found nothing relevant enough to work. The issue got solved in 5-10 minutes.

      • nooch@lemmy.vg · +3 · 2 hours ago

        They enshittified the search so now using the chatbot is more useful. The search just returns slop and even fake slop forums.

        • WIZARD POPE💫@lemmy.world · +1 · 2 hours ago

          I mean, I have been using DDG for years now. I just could not find the right answer for my specific issue on my specific Linux distro, and AI was sadly just faster.

    • architect@thelemmy.club · +1/-2 · 1 hour ago

      You can’t see that this is the same kind of propaganda your grandparents repeated about computers, just aimed at AI now?

      Besides, why are colleges passing illiterate students? That’s the actual problem.

      • FlyingCircus@lemmy.world · +1 · 18 minutes ago

        There’s a tiny difference between then and now called scientific evidence. These are actual scientific studies saying that using AI results in lower cognitive abilities.

  • mechoman444@lemmy.world · +7/-5 · edited · 11 hours ago

    Studies show that using a bulldozer to plow a field decreases the farmer’s muscle density after just one day of use.

    Christ. What a load of shit.

  • zebidiah@lemmy.ca · +12/-3 · 21 hours ago

    AI is like a dog looking at itself in a mirror.

    Some dogs are smart, and understand that this is a tool and that it is there to help you see things better… Some dogs are fucking morons and think their reflection is another dog, and they wanna fuck and fight…

    There are a ton of good use cases for ai, and none of them include coquettish sexbots or drawings of me as a Simpson or a Ghibli sketch.

    • partofthevoice@lemmy.zip · +3/-1 · edited · 17 hours ago

      How do you know the dogs which want to fuck and fight aren’t the smarter ones?

      What if the other dogs don’t recognize the reflection as anything meaningful — not a tool, a reflection, …? In that case, at least the “dumb” dogs figured out that something’s up.

      Edit: I was anthropomorphizing by equating nonchalant reactions with understanding well enough not to care. There are many reasons a particular dog may not fight a mirror. In particular, they may just rely less on vision to determine whether something is alive or not. That would not indicate understanding, though… it would indicate the dog’s understandably passive approach to things which don’t seem to have any significance. Closer to a lack of awareness than an actual understanding of any kind.

  • texture@lemmy.world · +25 · 1 day ago

    i think reading the title of this post hurt my brain. like what are we doing here? making medical claims using sensationalist and meaningless language… seems unhelpful

  • melfie@lemmy.zip · +8 · 24 hours ago

    I think the key point is that you’re not outsourcing critical thinking to LLMs, but are instead using them as a tool to do grunt work that you could’ve done yourself, just more slowly. This means constantly being critical of everything the LLM does: asking questions, asking for links to credible sources, asking it to provide info to help evaluate the pros and cons of multiple approaches, with you making the decisions and learning along the way. Overall, any work an LLM produces that will have your name on it should be work you entirely understand and agree with. For coding, I find agent markdown files especially helpful for making sure the LLM follows my desired practices without me constantly making it refactor.
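
    As an illustration, a minimal agent file might look something like this (filename conventions vary by tool; AGENTS.md is one common choice, and the rules here are only examples):

    ```markdown
    # AGENTS.md (illustrative example)

    ## Code style
    - Prefer small, focused functions with type hints and docstrings.
    - Do not add new dependencies without asking first.

    ## Workflow
    - Run the test suite before proposing any change.
    - Summarize every change so a human can verify it before merging.
    ```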

    Largely, my assumption at this point is that LLMs may not always be around, so I definitely don’t want to be left holding the bag with a bunch of slop I can’t manage on my own. I think I’ll feel better when I can run open weight models on my own hardware that are fully competitive with cloud models. With models like Qwen 3.6 27B, it seems we are getting closer to that.

  • Buffalox@lemmy.world · +110/-5 · edited · 2 days ago

    According to a new study by researchers at Carnegie Mellon, MIT, Oxford, and UCLA,

    Study should be solid I guess.

    participants who were given AI assistants (in this case, a chatbot powered by OpenAI’s GPT-5 model) would have the aid pulled from them without warning during the test

    Wow, interesting idea. 👍

    where they had their assistant removed, the AI group saw the solve rate fall off a cliff. They had a solve rate about 20% lower

    And even worse IMO:

    They also had nearly double the skip rate, meaning they simply chose not to solve the questions.

    This seems very alarming IMO, because it indicates they lost some of their ability to think constructively about how to actually solve a problem!

    I know there have always been some who cried wolf every time new technology became available, like calculators and computers. Even dictionaries were once claimed to be harmful!
    But maybe this time there is a real danger, because AI takes away a lot of the need to actually think creatively and constructively. And that’s an ability we must not lose.

    The last paragraph of the article is even worse, as it mentions 2 studies that show these effects are also long term!

    • Ioughttamow@fedia.io · +55/-1 · 2 days ago

      When driving somewhere, if I set out with the mindset that I can’t rely on GPS, I can usually wing it and figure out where to go when a hiccup occurs. If I don’t, then I have a lot of trouble getting into that pathfinding mode when needed… similar to this, maybe?

      • Brave Little Hitachi Wand@feddit.uk · +19 · 2 days ago

        Yeah, exactly. Although technology sometimes lets you do more, you’re actively de-skilling at the same time. When we invented the written word, yes, it legitimately made everything better, but we also lost oral traditions and the capacity to memorize large volumes of storytelling, songs, and histories. Now you can burn the books, and the knowledge dies. It’s a real risk.

        Everything is like this. Every technology has a cost beyond its price, and making a decision of whether to use it or not will always be in error unless you think about what you’re losing in the process.

    • scarabic@lemmy.world · +26/-2 · 2 days ago

      Changing the terms of the test in the middle of it, without warning, is disruptive. I’m not convinced it “fried their brains.” The same would happen with a calculator suddenly removed during the middle of an exam.

      • Buffalox@lemmy.world · +4/-1 · 1 day ago

        You are disregarding the last paragraph, where 2 other studies showed similar results, without having the “disruptive” factor.

        • scarabic@lemmy.world · +6/-2 · 22 hours ago

          Here’s that last paragraph. Microsoft’s finding actually sounds like it does have the disruptive factor: people are trained to use AI and then it is removed. And finally, finally, in the very last sentence of the entire article, we get the one piece of information that’s been missing the entire time: doctors perform better with AI help, but then worse than ever without it.

          My conclusion? Let people have AI and perform better with it.

          Carpenters trained on power tools will suddenly perform worse with hand tools than carpenters who were never given power tools. But if they are given power tools, they can build homes faster.

          No shit?

          The findings are also in line with a study Microsoft published last year that looked at cognitive decline among knowledge workers, which found that the more people lean on AI, the worse they perform when asked to work without support. It also echoes a study out of Poland, which found that while doctors are better at spotting cancer risks with AI assistance, they perform worse than the no-AI baseline once that assistance is removed.

          • Buffalox@lemmy.world · +1/-7 · edited · 19 hours ago

            Carpenters trained on power tools will suddenly perform worse with hand tools than carpenters who were never given power tools.

            Now you are just making shit up. None of these examples are about people being trained on AI. The comparison would be a carpenter who, after using power tools for 10 minutes, suddenly becomes worse at using the traditional tools he is trained to use.

            Your claim is baseless; there is no evidence for it, which makes it an unreasonable assumption based on your prejudice alone, and it should not be believed.

            Let people have AI and perform better with it.

            Again, a very loaded statement; nobody is preventing anybody from using AI based on this research. But maybe people are not really performing better, or at least not always; it may depend on the task.

            Your logic is fundamentally flawed and inconsistent, and you seem unable to see this as a potential problem, so much so that it reeks of you having an agenda.

            Your flawed logic and prejudice do not beat 3 research papers.

              • Buffalox@lemmy.world · +1/-6 · 18 hours ago

                Yes, of course the article reporting on a research paper has an agenda, and not the random guy ignoring the evidence to contradict it, with absolutely zero to show for his argument and clearly flawed logic.

                All I hear is the laugh of ignorance.

                • scarabic@lemmy.world · +4 · 15 hours ago

                  Ah yes, Gizmodo, arbiter of scientific truths. Their agenda is clear: to get you to click, typically with an outragey clickbait headline that reinforces your favorite narrative.

                  You need to learn the difference between debating someone and shouting at them that they have no argument, no logic, no evidence, and ill motivations. I can think of a couple other things you also need to do, but I’ll keep it PG.

      • howrar@lemmy.ca · +5/-1 · 1 day ago

        Or any task change really. You tell me that I’m here for a writing task, then halfway through it becomes a math test? There’s no way I’m doing anywhere near as well as if they told me what was happening ahead of time.

    • NeilNuggetstrong@lemmy.world · +22 · 2 days ago

      If I use AI for my personal coding projects, I’ve found that if the task is unsolvable by the AI model, I’m not able to sit down and do it myself until the next day. It’s like I’ve got to reset my brain.

      If I want to save time and use AI for a specific part of the code, it probably saves me 5 hours of work. But then I spend 5 hours yelling at the AI to try to get it to actually solve it. The next day I’ll just fix it myself in 2 hours.

      • Sockenklaus@sh.itjust.works · +2 · 2 hours ago

        But what you’re describing is not that uncommon, even without AI. Oftentimes, when trying to solve a complex problem and getting nowhere, you have to reset your brain by doing something fundamentally different, or get a good night of sleep; after that you solve the problem easily.

        Maybe what you’re experiencing is not AI-related at all.

        • NeilNuggetstrong@lemmy.world · +2 · 2 hours ago

          You’re probably right, but I think it’s made worse by AI. Jumping into the code after 3 hours of Claude doing the dirty work feels like an impossibility.

      • Buffalox@lemmy.world · +4/-1 · 1 day ago

        That sounds a lot like what the studies show. And IMO that sounds like a serious problem.

        • NeilNuggetstrong@lemmy.world · +2 · 1 day ago

          I’m really just tricking my brain to think I’m being more productive lmao.

          But then again, some of the stuff I’m working on is in principle quite easy to do, but is also outside of my skill set; for these cases I benefit from using AI.

          IMO the challenge is knowing how and when to use AI. Small companies using AI correctly can probably benefit massively from it, although it’s risky.

    • Chloé 🥕@lemmy.blahaj.zone · +11 · 2 days ago

      there have always been some who cried wolf every time new technology has become available, like calculators and computers

      and they kinda have a point, really. people got worse at memorizing stuff by heart when writing was invented, and people got worse at mental arithmetic when calculators were invented.

      but those tools allowed many things that were simply not possible before. a calculation that takes me 2 minutes in wolfram alpha could take hours if not days to solve by hand!

      ai, meanwhile, or at least the ai we’re sold, does not offer significant advantages (at best it saves a few minutes), at the cost of making us worse at thinking, a skill that is absolutely essential to have… and of course, that’s the point. the tech oligarchs want us to be dependent on their extremely expensive products.

      • Buffalox@lemmy.world · +5/-1 · 1 day ago

        and people got worse at mental arithmetic when calculators were invented.

        That may be true, but that is a much more limited problem, than losing some of our ability for critical thinking and problem solving in general.

        ai, meanwhile, or at least the ai we’re sold, does not offer significant advantages

        This is very true; AI is even shown to hallucinate and give incorrect and harmful solutions. A calculator does NOT do that.
        So not only is AI a danger to our critical thinking, we actually need critical thinking MORE when using AI.

        • architect@thelemmy.club · +1 · 1 hour ago

          A calculator will do that if you don’t know how to enter things correctly, and I promise you, plenty of people don’t know the rules of simple math.

      • badgermurphy@lemmy.world · +2 · 1 day ago

        But they’re using the hell out of it, too, right? They’re exactly the types of people that love and use it the most: managers and owners.

    • FauxLiving@lemmy.world · +8/-3 · 1 day ago

      This paper shows that a person who has performed a task 12 times performs better than a person who has never performed the same task.

      They also do not properly control for context switching, which is a well-known source of performance loss.

      It’s a paper on arXiv; it hasn’t been peer-reviewed or published.

      • Buffalox@lemmy.world · +3/-1 · 1 day ago

        No, the test is not training; that’s a weird thing to claim. The switch is what is tested, and you disregard that 2 other tests have shown similar results: an actual decline in critical thinking and problem solving.

        • FauxLiving@lemmy.world · +2/-1 · 13 hours ago

          Here is the paper: https://ai-project-website.github.io/AI-assistance-reduces-persistence/

          No the test is not training, that’s a weird thing to claim.

          The control group solved 12 questions manually and then the 3 test questions manually. The AI group solved 0 questions manually before the 3 manual test questions. One group had 12 more manual math tasks to prepare for the manual math test; the other group had 0 and also had to context switch.

          The AI-assisted group was dealt a context switch, which results in a pretty severe performance loss. A context switch causes a performance loss of around 40% according to this paper from the APA, which was peer-reviewed and published and is also the most cited paper on the topic: https://www.apa.org/pubs/journals/releases/xhp274763.pdf

          The AI-assisted group also did not have 12 questions to adjust to the new context, like the control group did. If they wanted to wipe out the context switching performance loss they should have kept asking questions to see if, after 12 questions, the AI-assisted group had a similar performance.

          The switch is what is tested, and you disregard that 2 other tests have shown similar results.

          No, they did not switch what was tested. Here is an image from the actual paper.

          They were given 12 tasks with one group using AI and another doing mental math and then 3 tasks doing mental math. One group had 12 more tasks worth of preparation than the other.

          Nothing, not even the article in the OP, says that they did math and then swapped to reading for the test.

          They did 3 different experiments; in each, they gave 12 tasks and then disabled the AI for one group and gave 3 more tasks as a test. At no point did they ask 12 math questions and then finish with 3 reading questions, or vice versa. They did 2 experiments using math tasks and 1 experiment using reading comprehension tasks.

          So one group had 15 math tasks and one group had 12 ‘how to ask an AI’ tasks and then 3 math questions.

          They also did not control for context switching losses, which is a well documented (see the APA paper) effect. The proper control would be to continue asking questions so the AI group also had 12 math tasks before the test.
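
          Schematically, the arms look like this (a rough sketch based on my reading above; the third arm is the control I’m proposing, not something from the paper):

          ```python
          # Rough sketch of the experimental arms (illustrative, not the paper's code).
          # Each list is the sequence of tasks a participant works through.
          control_group = ["manual"] * 12 + ["manual"] * 3  # no tool change before the 3-task test
          ai_group      = ["ai"] * 12 + ["manual"] * 3      # abrupt context switch right at the test

          # Proposed control: give the AI group 12 manual tasks to re-acclimate first,
          # so any remaining gap could not be blamed on the context switch alone.
          proposed_arm  = ["ai"] * 12 + ["manual"] * 12 + ["manual"] * 3
          ```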

          There’s a reason that this is published on arXiv and not in a peer-reviewed journal. Designing a poor-quality experiment doesn’t tell you anything useful, even if you do multiple different versions of the same experiment.

          This paper demonstrates a lack of a proper control group, specifically a failure to control for context switching performance loss.

          • Buffalox@lemmy.world · +3/-2 · 7 hours ago

            The picture you posted contradicts your claims. The 2 groups are getting the same questions, but one has AI assistance and the other does not.
            Again you fail to show anything to support your claims.

        • Womble@piefed.world · +3 · 1 day ago

          The switch is what is being tested, yes, but it is not clear whether what is being measured in the switch is “AI fried their brains” rather than “context switching in the middle of a test”. If they wanted to make that point, it would be useful to run the maths test with a calculator group who also got it yanked halfway through; that way we could see what proportion of the effect is over-dependence on AI eroding critical thinking and what proportion is having your methods disrupted mid-task.

          • Buffalox@lemmy.world · +1/-1 · 1 day ago

            The calculator test might be good for comparison, and I’m pretty sure that, given the same amount of time, a group allowed to use calculators for half the test would solidly outperform a group not using calculators at all.

            I was in 5th grade in 1975, and we were the first class to get calculators in 5th grade, which became the standard for many years after.
            I have never heard complaints about students being less capable of understanding basic math problems because they use calculators, although the idea of using calculators in schools was heavily debated. It’s similar to people not getting worse at spelling from using a dictionary.

        • iglou@programming.dev · +1 · 1 day ago

          Not training, no, but warm-up. And no, it is not about critical thinking; it’s about reading comprehension and calculations.

    • redsand@infosec.pub · +5 · 2 days ago

      Also, and this is the big one for me: it’s 10% wrong on average. That’s really bad. 1 in 10 Google Gemini answers is bullshit.
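
      A quick back-of-the-envelope shows how fast that compounds (a sketch assuming a flat 10% error rate and independent errors, both simplifications):

      ```python
      # Chance that at least one answer in a session is wrong,
      # assuming each answer is independently wrong 10% of the time.
      for n in (1, 5, 10, 20):
          p_any_wrong = 1 - 0.9 ** n
          print(f"{n:>2} answers -> {p_any_wrong:.0%} chance of at least one error")
      # -> 10%, 41%, 65%, 88%
      ```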

      • Buffalox@lemmy.world · +1/-1 · 1 day ago

        And the ability to think critically to detect it declines. So it’s doubly harmful!

      • Buffalox@lemmy.world · +5/-5 · 1 day ago

        A calculator is not the same problem; it doesn’t reduce our general ability to think critically.

        • derAbsender@piefed.social · +2/-1 · 1 day ago

          As the study defines critical thinking: yes, it does.

          The study claims, essentially, that relying on a machine that solves a problem for you lessens your critical thinking skills.

          Their definition of “critical thinking” is just, at least to me, way off.

          Just because I can comprehend stuff I read, for example, does not show critical thinking. It just shows I can repeat shit I read adequately.

          It’s just bad science.

        • iglou@programming.dev · +3/-2 · 1 day ago

          The studies referenced are about calculations, reading comprehension and work performance, not critical thinking.

          The article is, like many, a bad one. It generalises what it should not.

          • Buffalox@lemmy.world · +3/-3 · 1 day ago

            The sessions lasted about 10 minutes, suggesting that those who decided to rely heavily on AI to solve problems for them abandoned their critical thinking abilities in a matter of minutes.

            • iglou@programming.dev · +4/-2 · 1 day ago

              As I said, this is a bad article. The experiment does not suggest that at all. The study does not mention critical thinking.

              I’d say, however, that the proliferation of shitty news websites has caused readers to lose their critical thinking.

              • Buffalox@lemmy.world · +1/-5 · 1 day ago

                In academia it is normal not to directly spell out things that are obvious to a person with academic knowledge of the subject; research papers are meant for scholars, who are supposed to be able to read and understand the consequences for themselves.

                So you can’t use the fact that something isn’t spelled out as an argument, if it can be easily derived by a person who understands the subject.
                Research papers do not spell out every possible consequence of their findings.

                • iglou@programming.dev · +2 · 1 day ago

                  It isn’t spelled out because it is not a logical conclusion at all. Nothing in this test requires critical thinking to achieve.

                  Why are you defending an obviously terribly written article?

  • Rioting Pacifist@lemmy.world · +51/-6 · 2 days ago

    The test seems kind of dogshit; you could make the same argument against any tool. Calculators or even abacuses would have the same effect.

    I’m made to use it for work, and it does speed up some tasks. However, for some stuff it ends up being like in the experiment: not doing the work the first time means the whole process takes longer in the end.

    • FauxLiving@lemmy.world · +17/-1 · 1 day ago

      To add to this, we already know that context switching causes a loss in performance.

      A person who’s thinking about how to solve a problem one way and then has to suddenly think about solving it in another way will perform worse.

      https://medium.com/@codewithmunyao/the-hidden-cost-of-context-switching-why-your-most-productive-hours-are-disappearing-43c5b501de19

      The Neuroscience Behind the Pain

      Context switching isn’t just annoying — it’s neurologically expensive. When you shift from debugging a race condition to answering emails, your brain doesn’t simply “change tabs.” It goes through a complex process:

      - Memory consolidation: Storing your current mental model

      - Attention disengagement: Breaking focus from the current task

      - Cognitive reloading: Building a new mental model for the next task

      - Re-engagement: Getting back into flow

      Research from Carnegie Mellon shows that even brief interruptions can increase task completion time by up to 23%. For complex cognitive work like programming, this cost multiplies dramatically.

      Here’s another article from CMU discussing the same thing: https://www.sei.cmu.edu/blog/addressing-the-detrimental-effects-of-context-switching-with-devops/

      What this study shows is that a person who is faced with an unexpected context switch performs worse on a task than a person who has spent the last 12 questions performing the task the same way.

      This exact problem would happen if you replaced AI with a calculator, or made a person swap from using paper to doing mental math. The problem here is context switching, not AI.

      The way to ensure that the problem is AI and not the context switch would be to continue the test and see if the first group reverts back to baseline after 12 questions. 12 questions is how long the control group had to become acclimated to the task before their last context swap at the start of the test.

      Also, of note: this is a paper on arXiv. It is not published, so it has not gone through a peer-review process, which would certainly catch the failure to set a proper control group.

      • chunes@lemmy.world · +9 · 1 day ago

        Context switching isn’t just X — it’s Y.

        Are we sure this was written by a human?

          • chunes@lemmy.world · +2 · 1 day ago

            Thanks.

            And I’m all for em dashes. After all, I started using them after reading enough books. It’s just that particular construct that strikes me as especially LLM-y.

            • luciferofastora@feddit.org · +6 · 1 day ago

              AI was trained on human writing. If it produces a certain tone, then that’s probably a result of the material that was favoured in training it. That construction was common in human writing before it became common in AI too.

              What makes it stick out is when AI uses it in contexts where humans normally wouldn’t, but this kind of assertion is common in scientific papers and articles. It would make sense to train an AI on scientific writing, since that tone sounds authoritative and like you have some idea of what you’re talking about.

              So I don’t think this is an LLM-construct; it’s an instance of the original style that LLMs copy.

            • FauxLiving@lemmy.world · +1/-1 · 1 day ago

              I’d like to see a study on that, I see it mentioned so much it’s almost achieved meme status.

              It could very well be a Baader–(👀)Meinhof phenomenon.

  • SunshineJogger@feddit.org · +11 · 1 day ago

    I really do see the issue with AI. I see people around me outsource thinking to it too much. Like, literally. As if they are happy that a machine can make their life choices for them. This is extremely worrying. It’s about how people use it.

    • minorkeys@sh.itjust.works · +1 · 8 hours ago

      Thinking is hard, and people would prefer to feel instead. When you just have to vibe with your AI that thinks for you, people will absolutely use it and disempower themselves under the illusion of empowerment. They will infantilize themselves and end up being treated like the children they want to be.

    • ExLisper@lemmy.curiana.net · +1/-1 · 1 day ago

      I always thought recommendation algorithms would do this, but the progress stopped at some point. We had apps recommending videos, music, feeds, news and so on for a long time, but it never evolved into recommended careers or recommended places to live: not in the sense where some algorithm that tracks you all the time tells you what your next important life choice should be. I don’t know anyone who’s using AI like that yet, but I can see it happening in the future.

  • iglou@programming.dev · +9/-2 · 1 day ago

    Those are important studies, but nothing shocking. The conclusion to draw from them is the same one we’ve drawn for all technologies that have improved our lives to some degree: without them, we tend either to be incompetent, because losing access to them isn’t worth planning for, or to be demotivated, because why would we deprive ourselves of technology that makes our work so much less exhausting?

    It doesn’t necessarily remove our capacity to think (and the article falsely generalises to critical thinking); it shifts what kind of thinking we do.

    If AI is as good as or better than I am at writing code, then I’ll switch my brain to doing only the orchestrating and architecture rather than the writing of the code. And yes, if you remove AI, then the switch will cause me to perform worse than I did before AI, but not permanently, only until I get used to it again.

    If an AI is better than a doctor at finding cancer indicators, then the doctor will focus their mind on finding solutions only, rather than splitting it between detection and solution.

    This is not new, not bad, and I’ll even go to the extent of saying it’s a great use of AI: humans evolved for specialization. The less varied the tasks are, the better we are at the subset we specialize in. That’s what has driven our rapid technological and societal advances over the past millennia.

    But AI has many issues and many detrimental applications as well, so don’t take this comment as a full endorsement of AI.

  • lechekaflan@lemmy.world · +9/-1 · 1 day ago

    I don’t want it; all it does is negate years of learned experience and the ability to organically formulate ideas.

  • nonentity@sh.itjust.works · +16/-4 · 2 days ago

    I’ll never understand how an explosively imprecise, statistically luke-warm, grey goo extrusion sphincter could ever be mistaken for intelligence.

    AI doesn’t exist, it’s a vacuous marketing term.

    LLMs have a vanishingly narrow set of legitimate, defensible use cases, but their output is intrinsically inaccurate and should never be used without supervision from relevant domain experts.

    • texture@lemmy.world · +2 · 1 day ago

      there’s plenty of legitimate use cases. your comment just sounds like you’re repeating what everyone else says about it.

      • nonentity@sh.itjust.works · +1 · 1 day ago

        There could be many use cases, and some of them may even be legitimate, but I’m yet to observe any with broad applicability, and they should only ever be wielded by a responsible, expert adult.

        • texture@lemmy.world · +1 · 1 day ago

          it’s a multitool; its applicability needn’t be broad imo. anyway, i’m glad we aren’t speaking in absolutes.

    • dogzilla@masto.deluma.biz · +3/-5 · 2 days ago

      @nonentity @technology I think the problem with your framing is that it implies humans are not also “explosively imprecise, statistically luke-warm, grey goo extrusion sphincter(s)”. We weren’t exactly living in a perfect world prior to AI, and all AI does is regurgitate what humans created. AI isn’t really changing the character of anything, and in several domains I’d argue it’s improving the baseline (coding, for one).

      • nonentity@sh.itjust.works · +7/-1 · 2 days ago

        It’s telling that you assumed the description applied exclusively to LLMs.

        No one who persists in labelling LLMs as ‘AI’ should be treated as an authority on the subject, and I’d argue it’s one of the greatest indicators of how little they comprehend the situation.

        • astronaut_sloth@mander.xyz · +3/-1 · 2 days ago

          THANK YOU! I studied AI in school, and it always bothers me when people think that LLMs are the only facet of AI. Between 2022 and 2024, I had a knee-jerk reaction of explaining that AI is more than LLMs and that LLMs are really a small subset of the entire universe of AI, yadda yadda yadda. Now I’ve given up and just roll my eyes as someone tells me about the cool new Claude skill they built.

          What’s funnier is people think I hate LLMs. That couldn’t be further from the truth; they are a fantastically interesting and innovative technology! “Attention is All You Need” is a great paper, and super impactful. I just hate that people are outsourcing their thinking to a chatbot and neglect the rest of my field of study.

          • howrar@lemmy.ca · +1 · 1 day ago

            LLMs are still a facet of AI though. It sounds like they’re saying it shouldn’t be categorized as AI at all.

        • Em Adespoton@lemmy.ca · +1/-2 · 2 days ago

          I’m confused. Aren’t you the one who referred to LLMs in a thread that was conflating LLMs with AI? The parent’s comment seems to be right on point.

          It’s kind of like how we’ve lost the war on hacking.

          Large language models fall under the current definition of artificial intelligence just as much as Cyc or Cog did in their day, or various expert systems and machine learning models, diffusion models, etc.

          Pretty much any non-deterministic inference engine can be classified as an AI, including LLMs.