There was a study along these lines: researchers working with Dan Dennett (a professor of philosophy) took Dennett's writings and fine-tuned GPT-3 on them. They then posed 10 philosophical questions for which Dennett's own answers were on record. Each question was asked repeatedly until the generated answer was no longer than Dennett's original answer minus five characters.
For each question, four GPT-3 answers and one answer from Dennett himself were taken.
People were then asked to pick the answer that Dennett had actually given.
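To make the procedure concrete, here is a minimal Python sketch of the re-sampling rule described above, i.e. re-asking the question until the model's answer is no longer than Dennett's original answer minus five characters. The `generate` callable is a placeholder for a call into whatever fine-tuned model is used; this is an illustration of the rule, not the authors' actual pipeline.

```python
# Sketch of the length rule: re-ask the question until the model's answer is
# at most len(original_answer) - 5 characters. `generate` is a placeholder for
# a call into a fine-tuned model; this is not the study's actual code.

from typing import Callable, Optional


def sample_length_matched_answer(
    question: str,
    original_answer: str,
    generate: Callable[[str], str],
    max_attempts: int = 20,
) -> Optional[str]:
    """Return a generated answer no longer than the original minus 5 characters."""
    limit = len(original_answer) - 5
    for _ in range(max_attempts):
        candidate = generate(question).strip()
        if len(candidate) <= limit:
            return candidate
    return None  # the model never produced a short enough answer


if __name__ == "__main__":
    # Dummy generator standing in for the real model, just to show usage.
    dummy = lambda q: "Consciousness is not a single inner show; it is many drafts."
    print(sample_length_matched_answer("What is consciousness?", "x" * 80, dummy))
```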
Participants.
We recruited three sets of participants:
* 98 online research participants with college degrees from the online research platform Prolific,
* 302 respondents who followed a link from my blog,
* 25 experts on Dennett's work, nominated by and directly contacted by Dennett and/or Strasser.
Hardly anyone could reliably tell the real answer from the generated ones.
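For perspective on what "distinguishing" means here: with five options per question, pure guessing would pick Dennett's real answer 20% of the time. The sketch below shows how a group's hit rate could be compared against that baseline; the counts are invented placeholders, not the study's reported numbers (those are in the linked post).

```python
# Chance baseline for the five-way forced choice is 1 in 5 (20%). The counts
# below are invented placeholders, NOT the study's results; they only show how
# a hit rate could be checked against guessing.

from math import comb


def p_at_least(successes: int, trials: int, p: float = 0.2) -> float:
    """One-sided P(X >= successes) for X ~ Binomial(trials, p)."""
    return sum(
        comb(trials, k) * p**k * (1 - p) ** (trials - k)
        for k in range(successes, trials + 1)
    )


# Hypothetical example: 60 correct identifications out of 250 judgments.
print(60 / 250)             # observed hit rate: 0.24
print(p_at_least(60, 250))  # chance of doing at least this well by guessing
```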
Link: https://schwitzsplinters.blogspot.com/2022/07/results-computerized-philosopher-can.html
For those who don't feel like clicking through, here are the conclusions:
Reflections
I want to emphasize: This is not a Turing test! Had experts been given an extended opportunity to interact with GPT-3, I have no doubt they would soon have realized that they were not interacting with the real Daniel Dennett. Instead, they were evaluating only one-shot responses, which is a very different task and much more difficult.
Nonetheless, it's striking that our fine-tuned GPT-3 could produce outputs sufficiently Dennettlike that experts on Dennett's work had difficulty distinguishing them from Dennett's real answers, and that this could be done mechanically with no meaningful editing or cherry-picking.
As the case of LaMDA suggests, we might be approaching a future in which machine outputs are sufficiently humanlike that ordinary people start to attribute real sentience to machines, coming to see them as more than "mere machines" and perhaps even as deserving moral consideration or rights. Although the machines of 2022 probably don't deserve much more moral consideration than do other human artifacts, it's likely that someday the question of machine rights and machine consciousness will come vividly before us, with reasonable opinion diverging. In the not-too-distant future, we might well face creations of ours so humanlike in their capacities that we genuinely won't know whether they are non-sentient tools to be used and disposed of as we wish or instead entities with real consciousness, real feelings, and real moral status, who deserve our care and protection.
If we don't know whether some of our machines deserve moral consideration similar to that of human beings, we potentially face a catastrophic moral dilemma: Either deny the machines humanlike rights and risk perpetrating the moral equivalents of murder and slavery against them, or give the machines humanlike rights and risk sacrificing real human lives for empty tools without interests worth the sacrifice.
In light of this potential dilemma, Mara Garza and I (2015, 2020) have recommended what we call "The Design Policy of the Excluded Middle": Avoid designing machines if it's unclear whether they deserve moral consideration similar to that of humans. Either follow Joanna Bryson's advice and create machines that clearly don't deserve such moral consideration, or go all the way and create machines (like the android Data from Star Trek) that clearly should, and do, receive full moral consideration.