Excellent, these “fallacies” are exactly as I expected - made up, misunderstanding my comment (I did not call SMBC “bad”), and overall just trying to look like criticism instead of being one. Completely worthless - but I sure can see why right wingers are embracing it!
It’s funny how you think AI will help refine people’s ideas, but you actually just delegated your thinking to it and let it do it worse than you could (if you cared). That’s why I don’t feel like getting any deeper into explaining why the AI response is garbage, I could just as well fire up GPT on my own and paste its answer, but it would be equally meaningless and useless as yours.
Saying it’ll be boring comics missed the entire point.
So what was the point exactly? I re-read that part of your comment and you’re talking about “strong ideas”, whatever that’s supposed to be without any actual context?
Saying it is the same as Google is pure ignorance of what it can do.
I did not say it’s the same as Google; in fact I said it’s worse than Google, because it can add a hallucinated summary or reinterpretation of the source. I’ve tested a solid number of LLMs over time; I’ve seen what they produce. You can either provide examples that show that they do not hallucinate, that they have access to sources that are more reliable than what shows up on Google, or you can again avoid any specific examples, just expecting people to submit to the revolutionary tech without any questions, accuse me of complete ignorance and, no less, compare me with anti-immigrant crowds (I honestly have no idea what’s supposed to be similar between these two viewpoints? I don’t live in a country with particularly developed anti-immigrant stances, so maybe I’m missing something here?).
The people who buy into it will get into these types of ignorant and short-sighted statements just to prove things that just are not true. But they’ve bought into the hype and need to justify it.
“They’ve bought into the hype and need to justify it”? Are you sure you’re talking about the anti-AI crowd here? Because that’s exactly how one would describe a lot of the pro-AI discourse. Like, many pro-AI people literally BUY into the hype by buying access to better AI models or invest in AI companies, the very real hype is stoked by these highly valued companies and some of the richest people in the world, and the hype leads the stock market and the objectively massive investments into this field.
But actually those who “buy into the hype” are the average people who just don’t want to use this tech? Huh? What does that have to do with the concept of “hype”? Do you think hype is simply any trend that doesn’t agree with your viewpoints?
Hype flows in both directions. Right now the hype from most is finding issues with ChatGPT. It did find the fallacies based on what it was asked to do. It worked as expected. You act like this is fire and forget. Given what this output gave me, I can easily keep working this to get better and better arguments. I can review the results and clarify and iterate. I did copy and paste just to show an example. First, I wanted to be honest with the output and not modify it. Second is an effort thing. I just feel like you can’t honestly tell me that within 10 seconds having that summary is not beneficial. I didn’t supply my argument to the prompt, only yours. If I submitted my argument it would be better.
hype noun (1)
especially : promotional publicity of an extravagant or contrived kind
You’re abusing the meaning of “hype” in order to make the two sides appear the same, because you do understand that “hype” really describes the pro-AI discourse much better.
It did find the fallacies based on what it was asked to do.
It didn’t. Put the text of your comment back into GPT and tell it to argue why the fallacies are misidentified.
You act like this is fire and forget.
But you did fire and forget it. I don’t even think you read the output yourself.
First I wanted to be honest with the output and not modify it.
Or maybe you were just lazy?
Personally, I’m starting to find these copy-pasted AI responses insulting. They have the “let me Google that for you” sort of smugness about them. I can put the text in ChatGPT myself and get the same shitty output, you know. If you can’t be bothered to improve it, then there’s absolutely no point in pasting it.
Given what this output gave me, I can easily keep working this to get better and better arguments.
That doesn’t sound terribly efficient. Polishing a turd, as they say. These great successes of AI are never actually visible or demonstrated, they’re always put off - the tech isn’t quite there yet, but it’s just around the corner, just you wait, just one more round of asking the AI to elaborate, just one more round of polishing the turd, just a bit more faith on the unbelievers’ part…
I just feel like you can’t honestly tell me that within 10 seconds having that summary is not beneficial.
Oh sure I can tell you that, assuming that your argumentative goals are remotely honest and you’re not just posting stupid AI-generated criticism to waste my time. You didn’t even notice one banal way in which AI misinterpreted my comment (I didn’t say SMBC is bad), and you’d probably just accept that misreading in your own supposed rewrite of the text. Misleading summaries that you have to spend additional time and effort double checking for these subtle or not so subtle failures are NOT beneficial.
OK, let’s give a test here. Let’s start with understanding logic. Give me a paragraph and let’s see if it can find any logical fallacies. You can provide the paragraph. Only constraint is that the context has to exist within the paragraph.