• Turun@feddit.de · 5 months ago

    There would still need to be a corpus of text and some supervised training of a model on that text in order to “recognize” with some level of confidence what the text represents, right?

    Correct. The CLIP encoder is trained on images paired with their corresponding descriptions, so it learns the names of the things that appear in images.
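
    For concreteness, here is a minimal sketch of how that pairing shows up at inference time, assuming the Hugging Face transformers library and one public CLIP checkpoint (the image path and captions are made up for illustration): the encoder scores how well each caption matches the image.

    ```python
    # Minimal sketch: score captions against an image with a CLIP encoder.
    # Model name is one public checkpoint; any CLIP variant works the same way.
    from PIL import Image
    import torch
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    image = Image.open("room.jpg")  # hypothetical photo of an empty room
    texts = ["an empty room", "a room with an elephant in it"]

    inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = model(**inputs)

    # logits_per_image holds the similarity of the image to each caption.
    # Training pulls matching image/caption pairs together, which is how the
    # model ends up "knowing" the names for things it sees.
    probs = outputs.logits_per_image.softmax(dim=-1)
    print(dict(zip(texts, probs[0].tolist())))
    ```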

    And now it is obvious why this prompt fails: there are no images of empty rooms tagged as “no elephants”. This can be fixed by adding a negative prompt, which subtracts the concept of “elephants” from the image during the automagical denoising steps.
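
    A hedged sketch of that fix, assuming the diffusers library (the checkpoint name is just one example): the negative prompt is encoded by the same CLIP text encoder and used in classifier-free guidance, so each denoising step is steered away from the unwanted concept.

    ```python
    # Minimal sketch: generate an empty room while subtracting "elephant".
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "stabilityai/stable-diffusion-2-1",  # example checkpoint, any SD model works
        torch_dtype=torch.float16,
    ).to("cuda")

    # Under the hood the negative prompt replaces the empty/unconditional
    # embedding in classifier-free guidance, roughly:
    #   noise = noise_neg + scale * (noise_pos - noise_neg)
    # so the sampler moves away from "elephant" at every step.
    image = pipe(
        prompt="a photo of an empty room",
        negative_prompt="elephant",
    ).images[0]
    image.save("empty_room.png")
    ```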