Is it? I keep hearing people parrot this, but what big advancements have we actually made because of AI?
As a developer, I keep hearing this, but all I see is low-quality software that's smoke and mirrors. Pumping out low-quality code at a high pace is worse than producing less code of higher quality.
Dude, ChatGPT just solved an Erdős problem a few days ago, and Mythos is exploiting decade-old, previously undiscovered 0-days in OSes and is capable of pivoting 0-day Firefox bugs into full-blown root access.
Yeah, I get that the viral “how many 'r’s are in strawberry” stuff is funny, but the idea that historical issues with transformers are preventing them from accelerating peak capabilities way beyond what most experts thought possible only a few years ago is borderline delusional.
The field is moving so fast at this point that if you're basing any sense of limitations on even a ~2-month-old sample, your conclusions are likely out of date.
They aren’t a silver bullet for everything (yet), but at the tasks transformers are starting to be specialized for, they're already well past the average practitioner.
I’ve been writing software for well over a decade, and modern agents do a better job than I would around 90% of the time. Yes, I’ll occasionally need to bring up issues with their work, but at this point, about half the time I think they've made a mistake, it turns out I was the one who was wrong.
It's only been like this for around the last 3-4 months.
Dude, it’s not even worth it. These crazy people want to live in their own realities. No matter how much you explain something, they’ll continue to believe whatever lets them feel morally superior, even if it’s completely naive.
Oh, did it solve it? You didn’t really provide any sources, so I had to look it up myself.
And in the example from 2 days ago, it just applied an existing formula in a different context.
Which is helpful, for sure, but I wouldn’t say that solves it.
‘Just’? It’s been an open problem for decades, one that mathematicians have tried to solve over all that time.
And now it is solved.
Because ChatGPT applied something no human ever thought to try.
And Terence Tao and the other mathematicians that have reviewed it say it’s solved. But I guess someone should let them know that grandwolf319 doesn’t consider it solved?
I didn’t say it’s not solved; I said ChatGPT didn’t solve it, it just gave a hint.
Literally name any industry, and AI has vastly pushed it forward. It’s way too big to type here. Just off the top of my head: climate, pharmaceuticals, other biomedical fields (neuroscience, genetics, medical advances in every possible body system), energy (that alone has THOUSANDS of huge advances), science in general (astrophysics, geophysics, chemistry, agriculture, I mean every single scientific field). I’m listing every field I can think of, because it’s that pervasive.
The most visible advances, which are mostly in business/productivity for the sake of making money, are arguably the least important. They matter most to a capitalist society that values profit over all else, but that’s a recipe for collapse, which is where we’re quickly headed.
You’re getting a lot of downvotes - I think it might help if you explained that you're using a different sort of AI rather than LLMs or gen AI.
People on this site are crazy, I understand. They see “AI” and instantly assume it’s all Palantir self-targeting murder drones. No amount of explaining will change crazy people’s minds, and they want to live in their own reality because it makes them feel morally superior.
I use all kinds of models, including diffusion models (generative), vision transformers, LSTMs, CNNs, and all kinds of classical ML methods. It really doesn’t matter whether I say what the models are or not.
You are not helping your cause by emo-venting here. Go back up and re-read the OP title - I’ll wait.
So long as people have anxiety over AI issues, including ethics and water usage, the people asking questions have a firm foundation for their statements. Why not (gently) invite them in, to know what you do? Curiosity is an amazingly adaptive trait in humanity, and they might be genuinely ready to listen to a well-intentioned answer. But you are turning them away not so much with the content as with the tone of your responses, essentially proving them right that pro-AI advocates froth at the mouth about how AI will overtake humans rather than use logical argumentation. And why put forth Musk’s words here, on the Threadiverse?
If you can keep your head while the rest of the world loses theirs…
First, read all the responses. My initial tone was fine, but something like 10 different posters were foaming at the mouth, saying I personally am killing people because I work in the general space. There’s no reasoning with people like that.
That might actually be true… but then you were the one who tried to reason with them, weren't you? And now your words will echo on; years from now, people can look up this old thread, see your back-and-forth fighting, and find that nothing has changed.
Do whatever you want - I am not a moderator here. I just thought I would offer the perspective that your words may be working counter to the aim you first set out to achieve, before you got frustrated, lost your cool, and thereby lost your ability to influence people any further here.
You think AI has made improvements to our climate???
Can’t believe I read this on lemmy
lol please, go research something before making claims about it. No, I’m not talking about datacenters fucking over the water supply or using fossil fuels; that’s obviously bad. Literally right now, go google “AI used in climate science”. Just go do it. You’ll learn.
Are we talking about machine learning, which has been around for a decade, or generative AI? People usually mean the latter. Machine learning isn’t what caused the AI craze.
I’m honestly curious how an LLM could improve the climate in any way.
And IMO, leaving the datacenters out is kind of a bad-faith argument; they’re the only reason it’s everywhere. It wouldn’t be a problem if it were basically a new computation tool used by niche professions.
I know I’m being pedantic, but machine learning has been around for many decades.
go google what I said
That is not how socializing on the internet works. You make the claim, you back it up, or you get discredited for not doing so.
lol
🙄👍