What do you mean by not too far? Because I can see a LOT of issues that can't be solved easily. Even with maximum wishful thinking and a lot of handwaving, it's not possible by a long shot.
Or do you mean not too far, like within the next 1000 years? Cause yeah, I could believe that.
Also, AI doesn't mean general intelligence, i.e. intelligence like human intelligence. Don't be fooled by the PR and hype going around; LLMs aren't general intelligence.