A sex offender convicted of making more than 1,000 indecent images of children has been banned from using any “AI creating tools” for the next five years in the first known case of its kind.

Anthony Dover, 48, was ordered by a UK court “not to use, visit or access” artificial intelligence generation tools without the prior permission of police as a condition of a sexual harm prevention order imposed in February.

The ban prohibits him from using tools such as text-to-image generators, which can make lifelike pictures based on a written command, and “nudifying” websites used to make explicit “deepfakes”.

Dover, who was given a community order and £200 fine, has also been explicitly ordered not to use Stable Diffusion software, which has reportedly been exploited by paedophiles to create hyper-realistic child sexual abuse material, according to records from a sentencing hearing at Poole magistrates court.

  • rebelsimile@sh.itjust.works · 6 months ago

    What you’re saying rests on the premise that the system can’t draw concepts it has never seen, which is simply untrue. Everything else past that is sophistry.

    Edit: also, I’m not continuing a conversation with someone who is hostile to the basic rules of logic.

    • xmunk@sh.itjust.works · 6 months ago

      You have a basic misunderstanding of how AI works and are endowing it with mystical properties. Generative AI can’t accurately infer concepts or items it doesn’t understand. It has all the knowledge of the internet, but if you ask it to draw a schematic for a hydrogen bomb, it’ll give you back hallucinated bullshit. I’ll grant there’s a small chance that just enough random details have leaked that the AI may actually know how to build a hydrogen bomb, but it can’t infer how that would work from “understanding physics”.

      Either way, these models were trained on CSAM, so my initial point is accurate and not misinformation.