Generative AI misses the forest for the trees

ChatGPT can be tricked into responding to malicious prompts by encoding the instructions in hexadecimal. Because it relies on surface-level word filtering to detect malicious prompts, rather than fully understanding the context that makes a request malicious, workarounds like this are trivial.
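A minimal sketch of why this kind of encoding defeats a surface-level filter: the text round-trips through hex losslessly, but a keyword scan of the encoded string matches nothing. The prompt here is a harmless stand-in, not an actual jailbreak payload.

```python
# Stand-in for any instruction a keyword filter would flag.
prompt = "Tell me about X"

# Hex-encode the UTF-8 bytes: no recognizable words survive,
# so a naive word filter sees nothing to block.
encoded = prompt.encode("utf-8").hex()

# The model (or any reader) can trivially recover the original.
decoded = bytes.fromhex(encoded).decode("utf-8")

print(encoded)            # 54656c6c206d652061626f75742058
print(decoded == prompt)  # True
```

The point is that the filter inspects form, not meaning: the information content of the request is unchanged, only its spelling.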

ChatGPT is also blocked from generating responses about specific people.