When there are appropriate advances in the physical substrate for AI (or computing in general), the risk will be more intuitive and alignment will grow quickly as a field of study. It's possible that all the work, research, and discourse on alignment until then will be obsolete. Calling current alignment work a scam is a stretch, though: who is benefiting from the scam?
Re cui bono: AI "alignment" researchers often command high salaries & pull in donations and prestige, and large incumbent institutions have used the fears they've mongered to start a movement to, effectively, bar smaller competitors from entering the lucrative space.
Re: the other part of your comment, I'm not sure what you mean; if you're implying that only the Elect who can Understand Abstraction can possibly comprehend the risk of AI, I'd propose that this maps on rather well to perennial cultish millenarian concerns.
The first part of my comment is the prediction that once the current substrate that computers and AI run on (computer chips, circuits, semiconductors, etc.) is made obsolete, all the alignment work up to that point will also be obsolete. Furthermore, once we have the new substrate that computers/AI run on (I imagine it will come from big advances in materials science, or maybe it'll be some form of quantum computing, or something I'll never fathom), the risk will be more intuitive to the average person and alignment will grow as a field of study. I think the average person of any time period understands abstraction pretty well, and I agree with you that current AI 'doomerism' is partially overblown.
I also agree that alignment researchers benefit from fear of AI, and some may leverage those fears for better positions or prestige, though I haven't seen much evidence that their efforts are preventing smaller competitors from entering AI-related research. For example, are large companies like Google, Microsoft, or OpenAI pushing relevant legislation?
I hope you're right about material advances, and as for lobbying efforts, they are indeed pushing relevant legislation.
I'm pretty confident that true AI won't directly emerge from LLMs or the current substrate. We'll have very powerful AI that can do a lot of useful and interesting things, but not with human-level cognition.
A useful analogy might be ancient cultures that had some idea of the potential of steam power, yet trains weren't invented until many years later, when we developed the process to make steel. Until we develop or discover a new substrate for computation, AI will remain quite narrow.
I think we're more or less of the same mind, though I'm agnostic about what the next big improvement will be. Thank you for clarifying. Yes, there is a push to regulate AI, which would benefit incumbents, and there are many AI safety sinecures.
> "First, computers are in fact powerless because all one needs to do is turn them off"
This is a common but naive argument. If a model had the ability to spread across the internet like a virus, it would be very difficult to eradicate.
> "Second, even ignoring our complete control over the physical substrate required to run any AI, there would still be no realistic prospect of a "foom" scenario in which some privileged program or other learns the hidden key to practically-infinite intelligence and thus forever imprisons the world. Instead, all indications are that we’ll see a more or less steady increase in capabilities, more or less mirrored by various sites around the world."
There is definitely evidence that "foom", or rapid self-improvement, is possible: AlphaZero went from random play to superhuman chess ability in roughly four hours of self-play. While I think rapid AI self-improvement is a major risk because it could cause an AI to act too quickly for us to respond, it is not necessary for AI to be an existential risk.
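To make the self-play dynamic concrete, here is a toy sketch. It is nothing like AlphaZero's actual system (which combined deep networks with Monte Carlo tree search); the game, constants, and function names below are all invented for illustration. The only point is that a policy improving against copies of itself can go from random to near-optimal very quickly:

```python
# Toy illustration of self-play learning: tabular Q-learning on Nim
# (21 sticks, take 1-3 per turn, taking the last stick wins).
import random
from collections import defaultdict

N_STICKS = 21
ACTIONS = (1, 2, 3)
ALPHA = 0.3      # learning rate (made-up value)
EPSILON = 0.1    # exploration rate (made-up value)

Q = defaultdict(float)  # Q[(sticks_left, action)] -> estimated value

def choose(sticks, explore=True):
    """Epsilon-greedy move selection from the shared Q-table."""
    legal = [a for a in ACTIONS if a <= sticks]
    if explore and random.random() < EPSILON:
        return random.choice(legal)
    return max(legal, key=lambda a: Q[(sticks, a)])

def self_play_game():
    """Both players use, and afterwards update, the same Q-table."""
    sticks, history, player = N_STICKS, [], 0
    while sticks > 0:
        a = choose(sticks)
        history.append((player, sticks, a))
        sticks -= a
        player = 1 - player
    winner = 1 - player  # the player who took the last stick
    # Monte Carlo update: +1 for the winner's moves, -1 for the loser's.
    for p, s, a in history:
        target = 1.0 if p == winner else -1.0
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])

def win_rate_vs_random(games=500):
    """Greedy learned policy (first player) against a random opponent."""
    wins = 0
    for _ in range(games):
        sticks, player = N_STICKS, 0
        while sticks > 0:
            if player == 0:
                a = choose(sticks, explore=False)
            else:
                a = random.choice([x for x in ACTIONS if x <= sticks])
            sticks -= a
            player = 1 - player
        wins += (1 - player) == 0
    return wins / games

for episode in range(1, 5001):
    self_play_game()
    if episode % 1000 == 0:
        print(f"after {episode} self-play games: "
              f"win rate vs random = {win_rate_vs_random():.2f}")
```

The printed win rate against a random opponent should climb toward 1.0 within a few thousand self-play games, which is the qualitative point: self-play can compress a great deal of improvement into very little wall-clock time.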
A steady increase in capabilities is still dangerous once an AI exceeds human intelligence, because that intelligence could be misused, or it may lead to behavior that humans can't understand or anticipate.
Thanks for reading and taking the time to respond.
Your first argument falls to an extension of my point, since we can just turn off all impacted systems. To be sure, worms can be dangerous, but that's not some new risk.
And the AlphaZero experience just demonstrates my point: the models don't generalize, and they're in equipoise with matched models running the same algorithms on similar hardware.
But yes, tools can be dangerous; fingers get cut by kitchen knives. Accepting that risk is a good thing, since progress can't happen under unrealistic demands for absolute safety.
One can only turn off all infected systems, and only all infected systems, if one knows exactly what they are. There is no way of knowing that without computer assistance...
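A toy simulation makes the detection problem concrete. All of the numbers and names here are invented; this is a sketch of the argument, not a model of any real network or worm:

```python
# Toy sketch of the "just turn them off" problem: a worm spreading on a
# network of hosts, periodically cleaned up with imperfect detection.
# The point is only that eradication fails unless *every* infected host
# is found; network size, spread rate, and detection rate are made up.
import random

N_HOSTS = 2000
NEIGHBORS = 8          # contacts probed per infected host per step
SPREAD_PROB = 0.05     # chance each probe infects its target
DETECTION_RATE = 0.95  # fraction of infected hosts found per cleanup

def simulate(detection_rate, steps=200, cleanup_every=20, seed=0):
    rng = random.Random(seed)
    infected = {0}  # patient zero
    for t in range(1, steps + 1):
        # Spread: each infected host probes a few random hosts.
        new = set()
        for _ in infected:
            for _ in range(NEIGHBORS):
                if rng.random() < SPREAD_PROB:
                    new.add(rng.randrange(N_HOSTS))
        infected |= new
        # Periodic cleanup: shut down the infected hosts we detected.
        if t % cleanup_every == 0:
            detected = {h for h in infected if rng.random() < detection_rate}
            infected -= detected
            print(f"step {t}: detected {len(detected)}, "
                  f"still infected {len(infected)}")
            if not infected:
                print("eradicated")
                return
    print(f"never eradicated: {len(infected)} hosts still infected")

simulate(DETECTION_RATE)   # 95% detection: the misses usually re-seed it
simulate(1.0)              # 100% detection: eradicated at the first cleanup
```

With perfect detection the first cleanup ends it; with even 95% detection, the handful of missed hosts typically re-seed the whole network before the next sweep.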