Unfortunately there is no way to combat this, and it seems like the end of the internet we once knew. Even with "proof of human" technology, people could still paste whatever AI-generated text they wanted under their "real" account.
This has likely been going on since the first ChatGPT was released.
Discussion (212 points, 1 day ago, 144 comments) https://news.ycombinator.com/item?id=43806940
The subreddit has question-askers give feedback on whether their view was changed, and the askers are aware that their responses may appear publicly. This makes me wonder if "appeal to identity" is especially effective, at least superficially if not in substance. The fine-tuning might've been picking up on this.
> I think the reason I find this so upsetting is that, despite the risk of bots, I like to engage in discussions on the internet with people in good faith. The idea that my opinion on an issue could have been influenced by a fake personal anecdote invented by a research bot is abhorrent to me.
I like Simon's musings in general, but are we not way past this point already? It is completely and totally inevitable that if you try to engage in discussions on the internet, you will be influenced by fake personal anecdotes invented by LLMs. The only difference here is they eventually disclosed it, but aren't various state and political actors already doing this in spades, undisclosed?
I keep seeing this take, and it makes me mad. "The house is on fire, didn't you expect people to start burning to death? People will inevitably die, so why discuss it when it happens?"
Engineering is fundamentally about exercising the power of intelligence to change something in the physical world. Posts to the effect of "<bad thing> is inevitable and unstoppable, so it isn't worth talking about" strike me as the opposite of the hacker ethos!
I think the other thing to keep discussing is that using an LLM to manipulate people's emotions without disclosure, whether for research or anything else, is unethical.
By the way, people die in house fires from toxic smoke inhalation and lack of oxygen. In response, engineers created smoke detectors, devices that reduce the risk of fires from electrical shorts and gas leaks, and fire suppression systems.
People still die because they didn't replace batteries, ignored warnings on electrical cords and devices, or left candles and other heat sources unattended. We discuss these events as reminders that accidents kill when warnings are ignored, that inattentiveness lets failures propagate, and that rare events still kill innocent people.
Maybe this will motivate people to meet in person rather than relying only on online anecdotes, at least until that too is corrupted by cyber brain augmentation and in-person propaganda actors.
Given how much of people's worldview now comes from online media, in-person meetings are still colored by views skewed by online sources. Such physical meetings would likely end up reinforcing the corruption!
I see this as further discounting the importance of anecdotes and personal experiences when making decisions that affect populations.
Yes, we know that personal stories can be compelling, and communicating with someone whose experiences differ from ours can be enlightening. Still, before applying those lessons to larger groups, we should remember that individual experiences do not capture the entire population.
Sure, but that doesn't mean I'm not furious when it happens.
"This project yields important insights, and the risks (e.g. trauma etc.) are minimal." They can't possibly measure the insights or claim that the trauma is minimal.
> The idea that my opinion on an issue could have been influenced by a fake personal anecdote invented by a research bot is abhorrent to me.
Then stop basing your opinions on personal anecdotes from complete strangers. This is nothing new.
Imagine a conversation about good options for message queues, and someone pipes in with this:
"I've been a sysadmin operating RabbitMQ and Redis for five years. I've found Redis to be a great deal less trouble to administer than Rabbit, and I've never lost any data."
See why I care about this?
This is a bad example. A good sysadmin should fact-check and do testing themselves instead of relying on what other people say.
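For what it's worth, "test it yourself" is cheap here. Below is a minimal smoke-test sketch in Python, assuming redis-py and pika are installed, both brokers are running locally with default settings, and the queue name is made up for illustration; it only checks basic round-tripping, not durability under failure:

    # Hypothetical smoke test: assumes local Redis and RabbitMQ with defaults.
    # Only verifies that one message survives an enqueue/dequeue round trip.
    import redis
    import pika

    # Redis as a simple queue: LPUSH to enqueue, BRPOP to dequeue (blocking).
    r = redis.Redis(host="localhost", port=6379)
    r.lpush("smoke_test_q", b"hello")
    res = r.brpop("smoke_test_q", timeout=5)  # returns (key, value) or None
    assert res is not None and res[1] == b"hello"

    # RabbitMQ: declare a durable queue, publish one message, fetch it back.
    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    ch = conn.channel()
    ch.queue_declare(queue="smoke_test_q", durable=True)
    ch.basic_publish(exchange="", routing_key="smoke_test_q", body=b"hello")
    method, props, body = ch.basic_get(queue="smoke_test_q", auto_ack=True)
    assert body == b"hello"
    conn.close()

A real evaluation would also kill each broker mid-flight and check what survives a restart, but the point stands: verifying a stranger's claim takes a dozen lines, not five years of anecdote.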
More of the same? Reddit's genesis included fake accounts and content. I don't doubt that upvotes and the front page are fully curated:
https://economictimes.indiatimes.com/magazines/panache/reddi...
We all have an expectation that these message boards are like the forums of the 2000s, but that's just not true and hasn't been for a long time. It seems we will never see that internet again, because AI was the atomic bomb dropped on all this astroturfing and engineered content. Educating people away from these synthetic forums appears nearly impossible.