It stands to reason that if you have access to an LLM’s training data, you can influence what’s coming out the other end of the inscrutable AI’s network. The obvious guess is that…
some random pixels have totally nonsensical / erratic colors,
assuming you could poison a model enough for it to reproduce this, then it would just also produce occasional stray pixels that you wouldn't notice either.
That’s not how it works. You poison the image by tweaking some pixels in ways that are basically imperceptible to a human viewer, but the AI sees something wildly different, with high confidence. So you might see a cat while the AI sees a big titty goth gf and thinks it’s a cat; now when you ask the AI for a cat, it confidently draws you a picture of a big titty goth gf.
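For the curious, here’s a minimal sketch of that trick as a targeted adversarial perturbation. Everything in it is illustrative — the model, the epsilon budget, and the target class id are assumptions, and real training-data poisoning tools are more involved — but the flavor is the same: tiny, bounded pixel changes that flip what the model sees.

```python
# Hypothetical sketch: nudge an image toward a *target* class with
# changes so small a human can't see them (a targeted PGD-style attack).
# Assumes PyTorch + torchvision; the model and class ids are illustrative.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()

def poison(image, target_class, epsilon=4 / 255, alpha=1 / 255, steps=20):
    """Return `image` plus a perturbation bounded by +/-epsilon per pixel
    that pushes the model's prediction toward `target_class`."""
    original = image.clone().detach()
    x = original.clone()
    target = torch.tensor([target_class])
    for _ in range(steps):
        x.requires_grad_(True)
        loss = F.cross_entropy(model(x), target)
        (grad,) = torch.autograd.grad(loss, x)
        # Step *against* the gradient: lower loss = more "target_class".
        x = x.detach() - alpha * grad.sign()
        # Project back into the epsilon ball so the edit stays invisible.
        x = original + (x - original).clamp(-epsilon, epsilon)
        x = x.clamp(0.0, 1.0)  # keep valid pixel values
    return x.detach()

# You see a cat; the model now leans toward whatever class 291 is.
cat = torch.rand(1, 3, 224, 224)       # stand-in for a real cat photo
not_a_cat = poison(cat, target_class=291)
print(model(not_a_cat).argmax(dim=1))  # drifts toward the target class
```

Real poisoning tools do something fancier so the effect survives resizing and retraining, but the core move is this bounded, gradient-guided nudge that humans can’t see and the model can’t ignore.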
Ah, yes, the large limage model.
…what if I WANT a big titty goth gf?
You better stay away from mine, Romeo.