For fuck’s sake, it’s just an algorithm. It’s not capable of becoming sentient.
Have I lost it, or has everyone become an idiot?
Crude reductionist beliefs such as humans being nothing more than “meat computers” and/or “stochastic parrots” have certainly contributed to the belief that a sufficiently elaborate LLM treat printer would be at least as valid a person as an actual living human.
If I call you a meat computer, or a stochastic parrot, or say “ape” enough times, the algorithm will, by comparison, seem closer to sentience.
Well, no, owls are smart. But yes, in terms of idiocy, very few go lower than “Silicon Valley techbro”.
I don’t know where everyone is getting these in-depth understandings of how and when sentience arises. To me, it seems plausible that simply increasing processing power for a sufficiently general algorithm produces sentience. I don’t believe in a soul, or that organic matter has special properties that allow sentience to arise.
I could maybe get behind the idea that LLMs can’t be sentient, but you generalized to all algorithms, as if human thought were somehow qualitatively different from a sufficiently advanced algorithm.
Even if we find the limits of LLMs and figure out that sentience can’t arise there (I don’t know how this would be proven, but let’s say it was), you’d still somehow have to prove that algorithms can’t produce sentience, and that only the magical fairy dust in our souls produces sentience.
That’s not something that I’ve bought into yet.
How is that plausible? The human brain has more processing power than a snake’s, which has more power than a bacterium’s (equivalent of a) brain. Those two things are still experiencing consciousness/sentience. Bacteria will look out for their own interests; will ChatGPT do that? No, ChatGPT is a perfect slave, just like every computer program ever written.
ChatGPT : freshman-year “hello world” program
human being : amoeba
(the “:” symbol means the left side is being analogized to the right side)
A human is a sentience made up of trillions of unicellular consciousnesses.
ChatGPT is a program made up of trillions of data points. But they’re still just data points, which have no sentience or consciousness.
Both are something much greater than the sum of their parts, but in a human’s case, those parts were sentient/conscious to begin with. Amoebas will reproduce and kill and eat just like us; our lung cells and nephrons and so on are basically little tiny specialized amoebas. ChatGPT doesn’t…do anything. It has no will.
Well, my (admittedly postgrad) work in biology gives me the impression that the brain has a lot more parts to consider than a language-trained machine does. Hell, most living creatures don’t even have language.
It just screams marketing scam. I’m not against the idea of AI, although from an ethical standpoint I question bringing life into this world for the purpose of using it like a tool. You know, slavery. But I don’t think this is what they’re doing. I think they’re just trying to sell the next Google AdSense.
Notice the distinction in my comments between an LLM and other algorithms; that’s a key point that you’re ignoring. The idea that other commenters have is that, for some reason, there is no input that could produce the output of human thought other than the magical fairy dust that exists within our souls. I don’t believe this. I think a sufficiently advanced input could arrive at the holistic output of human thought. It doesn’t have to be an LLM.
Who said that?