And all that data, by definition, excludes: “AI” is not built on “all of humankind’s knowledge” but on whatever a mostly Western view of the world deems relevant. Cultures outside that framework, including those that rely on more oral forms of keeping history and knowledge, are not represented. Even if those groups are not actively excluded (which, again, they very often are), there are huge populations who simply are not seen by the data and do not get a say in how they are represented. Or, if they are represented, it’s only as problems: think about unsheltered people, for example.

The right loves those patterns because they confirm their prejudices: ask an image generator for a picture of two people kissing and you will most often get a heterosexual couple, often white, because that’s what the training data looks like. That makes “AI” perfect for creating the kind of idealized, fictional “past” that fascists love to allude to (“make America great again”), a past that never existed but that supposedly needs to be saved or restored (we’ll get back to that later).

We can pretty easily determine the short-term purpose of “AI”: the destruction of labor power.

This dismantling happens on multiple levels, each attacking the foundations that allow those forms of organization to take place.

The first level is very individualistic: by pointing at “the AI” that can supposedly replace a worker, that worker is pressured into working harder and not asking for raises or other improvements to their working conditions. Even though “AI” cannot do your job, the threat itself is useful to employers: it undermines your individual power and your sense of being valuable as a worker.

The second level attacks the idea of solidarity and connection: because “AI” will not replace you (again, “AI” cannot replace the absolute majority of workers!) “but someone using AI will”. This sets up a kind of Thunderdome in which we all have to fight each other for scraps/jobs. The framing implies that you should not unionize and connect with your fellow workers but instead see them as your enemies, as the people who will take your job and your ability to provide for your family. We know this dynamic: it’s exactly how the right presents migration as an “attack”. It also normalizes violence, again turning all of existence into an endless fight against one another (unless you are one of the few people in power, of course).

The third level is somewhat more devious, because it makes us dissolve those social bonds ourselves. An example: if I use an “AI” to generate an illustration instead of hiring a designer, I am saying that while my skills and labor have value, those of the designer do not. This implicitly cuts my ability to form bonds of solidarity with designers, whose work and livelihood I have declared irrelevant. It makes me put myself above my fellow workers, workers who face the same struggles as me, who are my comrades. But no more.

    • yucandu@lemmy.world

      I did read the whole article; it feels rambling, vague, and incoherent, full of these classic talking points.

      Like congratulations, in talking about AI, you somehow managed to shoehorn in - unhoused people, MAGA, labour value, worker solidarity, migration, the normalization of violence… you write like a tweaker.

      • naevaTheRat@lemmy.dbzer0.comOP

        Well, on your whole “it’s just what’s more common” point: this entrenching of the “norm” as what is true is specifically one of the criticisms raised, so it seems strange to offer it as a defense. Further, much of the training data for all models is in English and reflects global-north values and perspectives, particularly European and American, because that is what has been digitised.

        He also writes about the issues with centralisation and control, disinformation, lack of consent, and the undermining of government accountability and the devaluing of institutions and transparency. It seems weird to dismiss all of this as “just use a distillation of chatgpt run by a Chinese company”.