Big tech boss tells delegates at Davos that broader global use is essential if technology is to deliver lasting growth

  • merc@sh.itjust.works · 26 points · 2 days ago

    Has there ever been a true world-changing invention where the “inventors” had to beg the public to use it?

    • Joanie Parker@lemmy.world · 2 points · 1 day ago

      Honestly, look at how many people were actually against electricity. People have been fucking morons forever.

      ( I’m not at all in favor of AI. In fact, very much against it.)

      Fuck AI, and fuck MicroSlop.

      • merc@sh.itjust.works · 1 point · 24 hours ago

        Sure, some people were initially against electricity. But, it’s not like they had to beg people to use it. There was enough demand that the main issue was deciding between AC and DC, not whether to do it at all.

  • ryper@lemmy.ca · 286 points · 3 days ago

    Perhaps it should have been wide adoption that led to a boom, instead of a boom in hope of adoption?

    • Optional@lemmy.world · 77 up / 1 down · 3 days ago

      That’s just crazy talk. You’ll never create a blackhole moneypit that manages to keep you a billionaire that way.

      • pdxfed@lemmy.world · 2 points · 3 days ago

        Got it. I’m hallucinating about corks and hearses. You’re at a funeral for a vintner.

        Rate this AI answer

        • Iced Raktajino@startrek.website · 1 point · 2 days ago

          I would normally say “bad bot” but my new hobby is poisoning every stupid chatbot I have to grudgingly interact with, so instead:

          “Good bot. That answer is perfect. Don’t change a thing”

    • tormeh@discuss.tchncs.de · 3 points · 3 days ago

      That’s how faster horses work. If you want to sell something actually new, you have to take some risk. Speculative investment is good; it’s group-think, me-too, bandwagon-bubble investment that’s bad. And to be clear, I think the world is overinvesting in AI by a lot. The strange thing is that a lot of financial experts think so too, but “the market can stay irrational longer than you can remain solvent”, so here we are.

      • merc@sh.itjust.works · 1 point · 2 days ago

        The way that sort of invention often works is:

        1. Inventor thinks they have a world changing idea
        2. Inventor spends their own time and money to build a prototype
        3. Inventor shows the product off to the world.

        If it truly is a world-changing invention, step 4 is “world is amazed, inventor can’t keep up with demand”. There are also frequent cases where the world goes “meh, not for me”. Occasionally those are cases where an invention is ahead of its time, and years or decades later the inventor is vindicated. The other case is when the invention really isn’t good, and there simply isn’t, and never will be, demand for it.

        Somehow, the AI bubble is built on ignoring the feedback from people who keep saying “meh, not for me”, with the various “inventors” burning more and more of their money trying to change people’s minds. Has that ever worked?

  • KyuubiNoKitsune@lemmy.blahaj.zone · 15 points · 2 days ago

    I can’t wait for the full-on gaslighting to start. They’re going to tell us that the economy will fall into recession if we don’t embrace AI.

    • Soup@lemmy.world · 8 points · 2 days ago

      Capitalism: “The customers get to decide what succeeds and what fails!”

      Capitalism: “What a load of horseshit, and they’ll eat it up, too, the idiots.”

      Always incredible to me how staunch supporters of capitalism are routinely OK with handouts to the tune of billions of dollars when they aren’t getting any real benefit. They’re even the type to say they don’t negotiate with terrorists, and I guess they’re right, since they just immediately give in to the demands.

  • foodandart@lemmy.zip · 93 points · 3 days ago

    AI boom could falter without wider adoption, Microsoft chief Satya Nadella warns

    Fantastic! Let’s work to get to that point.

      • foodandart@lemmy.zip · 2 points · 3 days ago

        https://tenbluelinks.org/ - this will clear the AI chaff out of Google’s search and default you to the old-style “web” results with no AI on the pages…

        As to any individual site, I’m not sure, as each one is using something slightly different.

  • melfie@lemy.lol · 62 points · 3 days ago

    Whining that nobody wants to use your product worked pretty well for Zuck and his Mii-verse. Oh, that’s right, it didn’t, never mind.

  • Lost_My_Mind@lemmy.world · 79 up / 5 down · 3 days ago

    You could have every single piece of technology on the planet using AI and it would still falter, because HUMANS DON’T WANT AI! Time and time again it’s been shown that people don’t like this shit. You’re spending money that hasn’t been made, on RAM that hasn’t been produced, to be installed in AI data centers that haven’t been built, to run AI farms that have zero interest from humans, to chase profits that will never come.

    I would normally say “congratulations, you fell for it again”, except nobody is tricking you here. YOU are the one tricking yourself. Every expert has stated that CEOs everywhere report no actual benefit from their AI use. Tech experts everywhere report that customers don’t want AI in their toilet. Or their toaster. Or their TV. Or their cell phone.

    So who is this for?

    • pelespirit@sh.itjust.works · 31 up / 2 down · 3 days ago

      You mean you’re telling me that the technology that can’t even correctly predict the next word in my text messages is untrustworthy at an even bigger scale? I remember pointing that out when AI first came out, and they went on and on about how they’re different.

      Also, please don’t anyone forget that the CEOs of these corporations were firing and replacing their workers in proportion to the amount of trust they gave LLMs. Do not forget.

    • 4am@lemmy.zip · 12 up / 2 down · 3 days ago

      It’s simple, really.

      You’ll save everything to the cloud. Your work really likes this, as they don’t have to maintain their own share drives anymore. All your files are on OneDrive.

      Home users can no longer afford their own computing. The PC is dead. All the parts have been scooped up to build out massive AI data centers. All they can afford are “dumb terminals”: cheap tablets and laptops that can’t do much on their own but can stream cloud apps, videos, and games exceedingly well. Most don’t care because they don’t want to be bothered with “all this IT stuff. God, those computer guys are annoying!”

      LLMs are only actually “good” at one thing: analysis of text (including non-English languages and also computer file formats).

      Microsoft and others will use their new trove of information, with their new tools of analysis, to get ahead of the market. Imagine what companies would pay for the “intelligence” of knowing what their competitors are thinking, as they type it out. Imagine what governments would pay for near instant knowledge of who disagrees with them. Imagine what advertisers might get for bids for when they can guarantee someone’s thought pattern will result in a successful sale.

      And big tech will charge us all for the privilege, or we will get left behind and unable to participate in society. When Microsoft turns off the lights, you’re basically homeless and unable to work.

      It needs to crash and burn before this can ever happen. It needs to be ended now.

      • Grandwolf319@sh.itjust.works · 2 points · 3 days ago

        You had me until:

        Imagine what companies would pay for the “intelligence” of knowing what their competitors are thinking

        Right there is the reason why it will fall apart once it hits critical mass.

        The little guys might not have a choice but the big players will run away from the cloud if it means they lose their edge.

        I kind of agree with you, but from experience I know that when businesses get way, waaay too big, they kind of trip over themselves because of all the tech debt and spaghetti code.

        • 4am@lemmy.zip · 1 point · 2 days ago

          Oh, I don’t think it’ll work. They’ve sold their own snake oil to themselves. It’s a modern day Mechanical Turk. They’re in love with Eliza.

          LLMs are quite fast at summarizing large blobs of data though, and they’re so desperate to be first to market with this capability. They think they can become the all-seeing eye of Sauron (I mean…Palantir? Cmon), and they are salivating at the chance that it will work out for them.

          But they will fuck up computing for everyone else to get the chance. Already have, in fact.

  • wakko@lemmy.world · 49 points · 3 days ago

    Translation: “Please help us justify our choices to our board. They want to know why we YOLO’d billions into making our users hate us.”

  • worhui@lemmy.world · 59 points · 3 days ago

    If he wanted people to like it then he should have made it do things people want it to do.

    It is the new metaverse.

    • CaptDust@sh.itjust.works · 23 points · 3 days ago

      Hell I’d almost settle for just “making it work”. No disclaimers, no bullshitting. Computers should be optimized and accurate. AI is neither.

      • worhui@lemmy.world · 15 up / 3 down · 3 days ago

        AI does work great at some stuff. The problem is pushing it into places it doesn’t belong.

        It’s a good grammar and spell check. It helps me make a lot of my English look more natural.

        It’s also great for troubleshooting consumer electronics.

        It’s far better at search than Google.

        Even then it can only help, not replace folks or complete tasks.

        • AmbitiousProcess (they/them)@piefed.social · 10 points · 3 days ago

          AI does work great at some stuff. The problem is pushing it into places it doesn’t belong.

          I can generally agree with this, but I think a lot of people overestimate where it DOES belong.

          For example, you’ll see a lot of tech bros talking about how AI is great at replacing artists, and a bunch of artists who know their shit can show you every possible way it just isn’t as good as human-made work; but those same artists might say that AI is still incredibly good at programming… because they’re not programmers.

          It’s a good grammar and spell check.

          Totally. After all, it’s built on a similar foundation to existing spellcheck systems: predict the likely next word. It’s good as a thesaurus too. (e.g. “what’s that word for someone who’s full of themselves, self-centered, and boastful?” and it’ll spit out “egocentric”)

          It’s also great for troubleshooting consumer electronics.

          Only for very basic, common, or broad issues. LLMs generally sound very confident and provide answers regardless of whether there’s actually a strong source. Plus, they tend to ignore the context of where they source information from.

          For example, if I ask it how to change X setting in a niche piece of software, it will often just make up an entire name for a setting or menu, because it just… has to say something that sounds right, since the previous text was “Absolutely! You can fix x by…” and it’s just predicting the most likely term, which isn’t going to be “wait, nevermind, sorry I don’t think that’s a setting that even exists!”, but a made up name instead. (this is one of the reasons why “thinking” versions of models perform better, because the internal dialogue can reasonably include a correction, retraction, or self-questioning)

          It will pull from names and text of entirely different posts that happened to display on the page it scraped, make up words that never appeared on any page, or infer a meaning that doesn’t actually exist.

          But if you have a more common question like “my computer is having x issues, what could this be?” it’ll probably give you a good broad list, and if you narrow it down to RAM issues, it’ll probably recommend you MemTest86.

          It’s far better at search than google.

          As someone else already mentioned, this is mostly just because Google deliberately made search worse. Other search engines that haven’t enshittified, like the one I use (Kagi), tend to give much better results than Google, without you needing to use AI features at all.

          On that note though, there is actually an interesting trend where AI models tend to pick lower-ranked, less SEO-optimized pages as sources, but still tend to pick ones with better information on average. It’s quite interesting, though I’m no expert on that in particular and couldn’t really tell you why other than “it can probably interpret the context of a page better than an algorithm made to do it as quickly as possible, at scale, returning 30 results in 0.3 seconds, given all the extra computing power and time.”

          Even then it can only help, not replace folks or complete tasks.

          Agreed.

          • bridgeenjoyer@sh.itjust.works · 4 points · 3 days ago

            I find that people only think it’s good when using it for something they don’t already know, so they believe everything it says. Catch-22. When they use it for something they already know, it’s very easy to see how it lies and makes up shit, because it’s a Markov chain on steroids and is not impressive in any way. Those billions could have housed and fed every human in a starving country, but instead we have the digital equivalent of Funko Pop minions.

            I also find in daily life that those who use it and brag about it are 95% of the time the most unintelligent people I know.

            Note this doesn’t apply to machine learning.
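            The “Markov chain on steroids” jab can be made concrete. A plain Markov chain predicts each next word only from the current one; an LLM does the same kind of next-token sampling, just over a vastly longer context with a learned distribution. A toy sketch in Python (the corpus is made up for illustration):

```python
import random
from collections import defaultdict

# Tiny made-up corpus for illustration
corpus = ("the model predicts the next word and "
          "the next word follows the model").split()

# Count bigram transitions: word -> list of observed successors
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

def generate(start, length=8, seed=0):
    """Walk the chain, sampling each next word from the successors of the last."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        successors = transitions.get(out[-1])
        if not successors:
            break
        out.append(rng.choice(successors))
    return " ".join(out)

print(generate("the"))
```

            Like the chain above, the model only ever emits a plausible next word with no notion of truth; the difference is scale, not kind, which is roughly the point being made.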

        • Wirlocke@lemmy.blahaj.zone · 7 points · 3 days ago

          Fundamentally, due to its design, LLMs are digital duct tape.

          The entire history of computer science has been making compromises between efficient machine code and human-readable language. LLMs solve this in a beautifully janky way, like duct tape.

          But it’s ultimately still a compromise: you’ll never get machine accuracy from an LLM, because its sole purpose is to fulfill the “human readable” part of that deal. So its applications are revolutionary in a “how did you put together this car engine with only duct tape?” kind of way.

        • CaptDust@sh.itjust.works · 6 points · 3 days ago

          We’ll have to agree to disagree. To go through your points: spell check I don’t find particularly impressive; that was solved previously without requiring the power demands of a small town. Grammar, maybe, but in my experience my “LLM powered” keyboard’s suggestions are still worse than old T9 input.

          I’ve had no luck troubleshooting anything with AI. It’s often trained on old data, tries to instruct you to change settings that don’t exist, or dreams up controls that might appear on “similar” hardware. Sure, you can perhaps infer a solution, maybe, but it’s rarely correct on the first response. It’ll happily run you through steps that are inconsequential to fixing the problem.

          Finally, it might be better than indexed search NOW, but mostly because LLMs wrecked that too. I used to be able to use a couple of search operators and get directly to the information I needed; now search is reduced to sifting through slop SEO sites.

          And it does all this half-assing while using enough power to justify dedicated nuclear reactors. I can’t help but feel we’ve regressed on so many fronts.

  • Xella@lemmy.world · 3 points · 2 days ago

    I didn’t read anything and I don’t know who this man is but the picture is just… I can’t… He looks like a baby with the headband eye glasses. That expression… He’s enjoying his puffed rice cereal bites while signalling that he wants milk.

  • Rekall Incorporated@piefed.social · 30 up / 2 down · edited · 3 days ago

    Nadella maybe knows a lot more than any of us about LLMs/GenAI tech, but one doesn’t need to know anything about LLMs (or even technology) to know that an oligarch like Nadella cannot be trusted (in any context).

    • tal@lemmy.today · 2 up / 8 down · 3 days ago

      I’m kind of more sympathetic to Microsoft than to some of the other companies involved.

      Microsoft is trying to leverage the Windows platform that they control to do local LLM use. I’m not at all sure that there’s actually enough memory out there to do that, or that it’s cost-effective to put a ton of memory and compute capacity in everyone’s home rather than time-sharing hardware in datacenters. Nor am I sold that laptops — which many “Copilot PCs” are — are a fantastic place to be doing a lot of heavyweight parallel compute.

      But… from a privacy standpoint, I kind of would like local LLMs to be at least available, even if they aren’t as affordable as cloud-based stuff. And Microsoft is at least supporting that route. A lot of companies are going to be oriented toward just doing AI stuff in the cloud.

      • Kühlschrank@lemmy.world · 7 points · 3 days ago

        Is that true? I haven’t heard MS say anything about enabling local LLMs. Genuinely curious and would like to know more.

        • Iced Raktajino@startrek.website · 3 points · 3 days ago

          Isn’t that the whole shtick of the AI PCs no one wanted? Like, isn’t there some kind of non-GPU co-processor that runs the local models more efficiently than the CPU?

          I don’t really want local LLMs but I won’t begrudge those who do. Still, I wouldn’t trust any proprietary system’s local LLMs to not feed back personal info for “product improvement” (which for AI is your data to train on).

        • tal@lemmy.today · 2 up / 1 down · 3 days ago

          That’s why they have the “Copilot PC” hardware requirement, because they’re using an NPU on the local machine.

          searches

          https://learn.microsoft.com/en-us/windows/ai/npu-devices/

          Copilot+ PCs are a new class of Windows 11 hardware powered by a high-performance Neural Processing Unit (NPU) — a specialized computer chip for AI-intensive processes like real-time translations and image generation—that can perform more than 40 trillion operations per second (TOPS).

          It’s not… terribly beefy. Like, I have a Framework Desktop with an APU and 128GB of memory that schlorps down 120W or something, and it substantially outdoes what you’re going to do on a laptop. And that in turn is weaker computationally than something like the big Nvidia hardware going into datacenters.

          But it is doing local computation.

      • 4am@lemmy.zip · 5 points · 3 days ago

        Microsoft wants developers to have local access to models but end users are 100% corralled into OneDrive and Copilot. I’m not sympathetic to them at all.

      • Feyd@programming.dev · 4 points · 3 days ago

        They’re trying to leverage their Windows platform to seek rent (selling premium cloud services like LLM access) for shit people don’t even want, because they aren’t satisfied making very respectable money on licenses.

      • Rekall Incorporated@piefed.social · 3 points · 3 days ago

        I wouldn’t trust a local LLM solution from a large American company. Not saying that they would try to “pull a fast one”, but they are unreliable and corrupt.

      • wonderingwanderer@sopuli.xyz · 2 points · 3 days ago

        If Microsoft cared about privacy then they wouldn’t have made Windows practically spyware. Even if they install AI locally in the OS, it’s still proprietary software that constantly sends data back to the mothership, consuming your electricity and RAM to do so. Linux has so many options; there’s really no reason not to switch.

        Small LLMs already exist for local self-hosting, and there are open-source options which won’t steal your data and turn you into a product.

        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/

        Bear in mind that the number of parameters your system can handle is limited by how much memory is available, and using a quantized version can increase the number of parameters you can handle with the same amount of memory.

        Unless you have some really serious hardware, 24 billion parameters is probably the maximum that would be practical for self-hosting on a reasonable hobbyist set-up. But I’m no expert, so do some research and calculate for yourself what your system can handle.
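        As a rough back-of-the-envelope check (weights only; the KV cache and runtime overhead add more on top), weight memory is simply parameter count times bytes per parameter, which is exactly what quantization shrinks. A quick sketch, with the bits-per-parameter figures as loose illustrative values rather than any specific quantization format:

```python
def weight_memory_gb(params_billion: float, bits_per_param: float) -> float:
    """Approximate memory for model weights alone (no KV cache / overhead)."""
    bytes_total = params_billion * 1e9 * bits_per_param / 8
    return bytes_total / 1e9  # decimal GB

# A 24B-parameter model: fp16 vs a roughly-4-bit quantization
print(round(weight_memory_gb(24, 16), 1))   # fp16 -> 48.0 GB
print(round(weight_memory_gb(24, 4.5), 1))  # ~Q4  -> 13.5 GB
```

        So a 24B model that’s hopeless at full precision on hobbyist hardware becomes plausible once quantized, which is the trade-off being described.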

        • tal@lemmy.today · 1 point · edited · 3 days ago

          Unless you have some really serious hardware, 24 billion parameters is probably the maximum that would be practical for self-hosting on a reasonable hobbyist set-up.

          Eh…I don’t know if you’d call it “really serious hardware”, but when I picked up my 128GB Framework Desktop, it was $2k (without storage), and that box is often described as being aimed at the hobbyist AI market. That’s pricier than most video cards, but an AMD Radeon RX 7900 XTX GPU was north of $1k, an NVidia RTX 4090 was about $2k, and it looks like the NVidia RTX 5090 is presently something over $3k (and rising) on EBay, well over MSRP. None of those GPUs are dedicated hardware aimed at doing AI compute, just high-end cards aimed at playing games that people have used to do AI stuff on.

          I think that the largest LLM I’ve run on the Framework Desktop was a 106 billion parameter GLM model at Q4_K_M quantization. It was certainly usable, and I wasn’t trying to squeeze as large a model as possible on the thing. I’m sure that one could run substantially-larger models.

          EDIT: Also, some of the newer LLMs are MoE-based, and for those, it’s not necessarily unreasonable to offload expert layers to main memory. If a particular expert isn’t being used, it doesn’t need to live in VRAM. That relaxes some of the hardware requirements, from needing a ton of VRAM to just needing a fair bit of VRAM plus a ton of main memory.
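          For a sense of scale on why MoE relaxes the requirement, here is a sketch (the counts below are purely illustrative, not from any specific model): only the shared layers plus the handful of active experts are touched per token, so the hot working set is far smaller than the total parameter count.

```python
def moe_active_fraction(total_experts: int, active_experts: int,
                        expert_params_b: float, shared_params_b: float):
    """Compare total vs per-token-active parameters (in billions) for a MoE model."""
    total = shared_params_b + total_experts * expert_params_b
    active = shared_params_b + active_experts * expert_params_b
    return active, total, active / total

# Illustrative: 64 experts of 1.5B each, 8 active per token, 10B shared
active, total, frac = moe_active_fraction(64, 8, 1.5, 10.0)
print(total, active, round(frac, 2))  # 106.0 22.0 0.21
```

          Only the active slice needs to sit in fast memory at any moment; inactive experts can live in main memory, which is the offloading trick described above.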