• 0 Posts
  • 6 Comments
Joined 3 years ago
Cake day: June 12th, 2023


  • If you take data, and effectively do extremely lossy compression on it, there is still a way for that data to theoretically be recovered.

    This is extremely wrong, and since your entire argument rests on this single sentence’s accuracy, I’m going to focus on it.

    It’s very, very easy to apply lossy compression to some data and wind up with something unrecognizable. Actual lossy compression algorithms are a tight balancing act: they try to get rid of just the right amount of just the right pieces of data so that the result is still satisfactory.

    LLMs are designed with no such restriction, and any single entry in a large data set is mathematically unrecoverable. The only way these large models reproduce anything is through heavy replication in the data set, such that enough of the “compressed” data makes it through. There’s a reason the examples in every article about this are culturally significant works.
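    To make the unrecoverability point concrete, here’s a toy sketch of my own (not a real codec): a “compressor” that keeps only the top two bits of each byte. Because many distinct inputs collapse to the same output, no decompressor can ever tell you which input you started with.

```python
def lossy_compress(data: bytes) -> bytes:
    # Keep only the top 2 bits of each byte; the other 6 bits are gone forever.
    return bytes(b & 0b11000000 for b in data)

# Two completely different strings collapse to the identical output,
# so recovering the original from the compressed form is impossible.
assert lossy_compress(b"hello world") == lossy_compress(b"jazzy+quilt")
```

    Real lossy codecs are far cleverer about *which* bits to throw away, but the many-to-one mapping is the same, which is exactly why individual training entries can’t be pulled back out.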

  • VoterFrog@lemmy.world to Microblog Memes@lemmy.world · La_Brea_V2.0.exe
    21 days ago

    Doesn’t work either

    The text you provided translates to:
    “But what about typing like this?”. This style of writing involves replacing standard Latin letters with similar-looking characters from other alphabets or adding diacritical marks (accents, tildes, umlauts) available in the Unicode standard.
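    For what it’s worth, the diacritic flavor of this trick is reversible even without an LLM. A sketch using Python’s standard-library unicodedata module (the example string is my own): NFKD normalization splits accented letters into a base letter plus a combining mark and folds compatibility forms like fullwidth characters, so stripping the combining marks recovers plain text. (True cross-alphabet homoglyphs, e.g. Cyrillic а for Latin a, aren’t compatibility-equivalent and would need a lookup table instead.)

```python
import unicodedata

def deobfuscate(text: str) -> str:
    # NFKD decomposes accented letters into base letter + combining mark
    # and folds compatibility characters (e.g. fullwidth w -> w);
    # dropping the combining marks leaves the plain letters behind.
    decomposed = unicodedata.normalize("NFKD", text)
    return "".join(c for c in decomposed if not unicodedata.combining(c))

print(deobfuscate("ｗhát ábõüt týpìng lïkè thïs?"))
# -> "what about typing like this?"
```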


  • VoterFrog@lemmy.world to Microblog Memes@lemmy.world · La_Brea_V2.0.exe
    21 days ago

    Yeah, much like the thorn, LLMs are more than capable of recognizing when they’re being fed Markov gibberish. Try it yourself. I asked one to summarize a bunch of keyboard auto-complete junk.

    The provided text appears to be incoherent, resembling a string of predictive text auto-complete suggestions or a corrupted speech-to-text transcription. Because it lacks a logical grammatical structure or a clear narrative, it cannot be summarized in the traditional sense.

    I’ve tried the same with posts with the thorn in them, and it’ll explain that the person writing the post is being cheeky, and still successfully summarizes the information. These aren’t real techniques for LLM poisoning.
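    If anyone wants to reproduce the experiment, here’s a toy first-order Markov generator (my own sketch, corpus and names made up) that produces exactly this kind of locally-plausible, globally-incoherent autocomplete junk:

```python
import random

def markov_babble(corpus: str, n_words: int = 20, seed: int = 0) -> str:
    # First-order word-level Markov chain: each word is chosen based only
    # on the word before it, so any two adjacent words look plausible
    # while the text as a whole has no grammatical structure or narrative.
    rng = random.Random(seed)
    words = corpus.split()
    table: dict[str, list[str]] = {}
    for prev, cur in zip(words, words[1:]):
        table.setdefault(prev, []).append(cur)
    word = rng.choice(words)
    out = [word]
    for _ in range(n_words - 1):
        # Fall back to a random word when the current one has no successor.
        word = rng.choice(table.get(word, words))
        out.append(word)
    return " ".join(out)
```

    Feed the output of something like this to an LLM and ask for a summary; in my experience it simply tells you the text is incoherent, as quoted above.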