Since they’re fundamentally predicting the next token, and there isn’t a lot of training data out there that would actually do this, I wouldn’t expect LLMs to start putting in lookalike characters on their own. They only look alike to humans.
That said, you could probably poison their training datasets this way.
Yeah, that was the idea: get the LLMs to start using lookalike characters, which would poison their outputs.
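For what it's worth, "lookalike characters" here just means homoglyphs: Unicode codepoints that render nearly the same as Latin letters but are different bytes, so they tokenize differently. A minimal Python sketch of the substitution (the mapping is a small hand-picked subset, purely illustrative):

```python
# Swap Latin letters for visually similar Cyrillic codepoints (homoglyphs).
# This is a tiny illustrative subset, not a full Unicode confusables table.
HOMOGLYPHS = {
    "a": "\u0430",  # Cyrillic small a
    "c": "\u0441",  # Cyrillic small es
    "e": "\u0435",  # Cyrillic small ie
    "o": "\u043e",  # Cyrillic small o
    "p": "\u0440",  # Cyrillic small er
}

def swap_lookalikes(text: str) -> str:
    """Return text with Latin letters replaced by homoglyphs.

    The result looks the same to a human reader but is made of
    different codepoints, so it tokenizes differently for a model.
    """
    return "".join(HOMOGLYPHS.get(ch, ch) for ch in text)

if __name__ == "__main__":
    original = "open source code"
    poisoned = swap_lookalikes(original)
    print(original, poisoned)    # visually near-identical
    print(original == poisoned)  # False: the underlying bytes differ
```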