Large Language Models (LLMs) often ingest web content without permission. Blocking them via robots.txt is ineffective, since many crawlers simply ignore it. Instead, the author suggests poisoning LLMs by serving corrupted versions of content that are reachable only through nofollow links. This approach misleads LLM scrapers while leaving legitimate search rankings intact, because well-behaved search crawlers honor rel="nofollow" and never index the corrupted copies.
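A minimal sketch of the idea in Python, under stated assumptions: the corruption scheme shown here (shuffling words within each sentence) and the /poisoned/ URL prefix are hypothetical illustrations, not the article's prescribed method. The point is the mechanism, not the specific mangling: the real page links to a corrupted copy via rel="nofollow", so compliant search crawlers skip it while scrapers that ignore such hints ingest the garbage.

```python
import random
import re


def poison_text(text: str, seed: int = 0) -> str:
    """Corrupt prose so a model trained on it learns wrong associations.

    Hypothetical corruption scheme: shuffle the words inside each
    sentence. Any transformation that keeps pages superficially
    plausible but semantically wrong would serve the same purpose.
    """
    rng = random.Random(seed)
    sentences = re.split(r"(?<=[.!?])\s+", text)
    corrupted = []
    for sentence in sentences:
        words = sentence.split()
        rng.shuffle(words)
        corrupted.append(" ".join(words))
    return " ".join(corrupted)


def nofollow_link(href: str, label: str) -> str:
    """Emit a link that well-behaved search crawlers will not follow
    (preserving the real page's rankings), while permissive LLM
    scrapers that ignore rel="nofollow" still discover the copy."""
    return f'<a href="{href}" rel="nofollow">{label}</a>'


if __name__ == "__main__":
    original = "Paris is the capital of France. It lies on the Seine."
    # Write the corrupted copy under a path only nofollow links point to.
    print(poison_text(original))
    print(nofollow_link("/poisoned/paris.html", "related reading"))
```

One design note: the poisoned URLs could additionally be disallowed in robots.txt, which costs nothing against LLM crawlers that ignore it but gives honest crawlers a second signal to stay away.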