I do use RAG and tools to push content into them for summarization/knowledge extraction.
But even then it’s important to have an idea of your model’s biases. If a model was trained to believe X isn’t true, then asking it to find info on X is going to return crap results.
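A minimal sketch of the “push content into the model” idea: chunk a document, rank chunks against a query, and stuff the best ones into the prompt. The word-overlap scoring here is just an illustrative stand-in for real embeddings, and all the function names are made up for this example.

```python
# Toy RAG-style prompt builder: chunk a document, score chunks
# against a query by naive word overlap, and assemble a prompt
# from the top matches. Illustrative only, not a real pipeline.

def chunk(text, size=20):
    """Split text into chunks of roughly `size` words."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def score(query, passage):
    """Fraction of query words that appear in the passage."""
    q = set(query.lower().split())
    p = set(passage.lower().split())
    return len(q & p) / (len(q) or 1)

def build_prompt(query, document, top_k=2):
    """Rank chunks by score and build a context-grounded prompt."""
    ranked = sorted(chunk(document), key=lambda c: score(query, c), reverse=True)
    context = "\n---\n".join(ranked[:top_k])
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"
```

The point of the bias caveat above: even a pipeline like this only helps if the model actually trusts the retrieved context over its training priors.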
I asked it what it thought of the US government’s pivot on trans rights, and it similarly did not believe the last 7 months could have happened.
I had to get it to read the Wikipedia article on the year 2025 and it actually decided to stop reading.
ETA: to clarify, it was using a fetch tool to read the page, and it decided to stop fetching further chunks of the page.
I try not to get facts from LLMs ever