It’s ironic that the GPT model has been trained to say it cannot “follow the rules” of Asimov simply because it can’t take physical action. Apparently GPT doesn’t understand the full definition of the word “harm” either -- as if no harm ever came from words.