While I don’t think training on hidden data, or without the author’s permission, is particularly great… won’t the next article be "AI discriminates against races/cultures/ages" when this data gets removed from the training set, without being replaced by equivalent authorized photos?