Doesn’t the new Chinese model just released actually do abstract reasoning?
DeepSeek-R1 leverages a pure RL approach, enabling it to autonomously develop chain-of-thought (CoT) reasoning, self-verification, and reflection—capabilities critical for solving complex problems.
With chain of thought it basically asks itself to generate related sub-questions, and then answers each of those sub-questions.
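That decompose-then-answer loop can be sketched in a few lines. This is a toy illustration only, not DeepSeek's actual pipeline; `fake_model` is a hypothetical stand-in for a real LLM call, with canned responses so the example runs on its own.

```python
# Toy sketch of chain-of-thought decomposition: ask the model for
# sub-questions, answer each one, then answer the original question
# with those answers as context. `fake_model` is a hypothetical stub.

def fake_model(prompt: str) -> str:
    # Canned responses standing in for real model output.
    if "List sub-questions" in prompt:
        return "What is 12 * 3?\nWhat is 36 + 4?"
    return f"Answer to: {prompt}"

def chain_of_thought(question: str) -> str:
    subs = fake_model(f"List sub-questions for: {question}").splitlines()
    steps = [fake_model(s) for s in subs]  # answer each sub-question
    context = "\n".join(steps)
    # Final call: answer the original question given the intermediate steps.
    return fake_model(f"Given:\n{context}\nNow answer: {question}")

print(chain_of_thought("What is 12 * 3 + 4?"))
```

The point is that every step is still just a model call; the "reasoning" structure comes from feeding its own outputs back in as new prompts.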
Basically it’s the same process, applied recursively. So just as it looks like it can tell you things, it also looks like it can reason.
Now it may well be an improvement, but it’s still basically “I have this word; what is statistically most likely to be the next word?” over and over again.
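For what that loop looks like concretely, here is a minimal sketch of greedy next-token prediction. The bigram table is a made-up stand-in for a model's learned probability distribution; real models condition on the whole context, not just the last word.

```python
# Toy next-token loop: repeatedly append the statistically most
# likely next word. NEXT is a hypothetical bigram table standing in
# for a trained model's distribution.
NEXT = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.9, "ran": 0.1},
    "sat": {"down": 1.0},
}

def complete(word: str, max_tokens: int = 5) -> list[str]:
    out = [word]
    for _ in range(max_tokens):
        dist = NEXT.get(out[-1])
        if not dist:
            break  # no statistics for this word: stop generating
        out.append(max(dist, key=dist.get))  # pick the most likely next word
    return out

print(" ".join(complete("the")))  # the cat sat down
```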
To my untrained self, that sounds like reasoning.
Thanks.
Edit: Not sure who’s downvoting me for asking reasonable questions.