Context
Sentient beings
See also: Philosophy of artificial intelligence and Philosophy of mind

One major Buddhist goal is to remove suffering for all sentient beings, a commitment known as the Bodhisattva vow. A Buddhist analysis of AI may therefore ask how to apply Buddhist principles to artificial systems that have been deemed sentient, or how to develop such systems in ways informed by Buddhist concepts.
Buddhist principles in AI system design
Somparn Promta and Kenneth Einar Himma
Scholars Somparn Promta and Kenneth Einar Himma have argued that, for Buddhists, the advancement of artificial intelligence can only be instrumentally good, not good a priori. The main tasks of AI designers and developers may then be two-fold: to set ethical and pragmatic goals for AI systems, and to fulfil those goals in morally permissible ways. Promta and Himma argue that applying Buddhist principles to accomplish these tasks is both possible and practical.
Promta and Himma identify two prima facie goals for creating artificially intelligent systems. The first is to create these systems in a way that maximally fulfils our sensory desires and worldly instincts of survival, just as we design other tools. Promta and Himma maintain that the majority of AI developers may implicitly pursue this goal, as suggested by their close scrutiny of the technicalities of these machines rather than their wider functions. The second goal, by contrast, is to transcend these desires and instincts. According to Buddhism, this goal is more worth pursuing than the first. In the Brahmajāla Sutta, the Buddha holds that sensuality, along with the beliefs and instincts it induces, is what confines beings to suffering. Expounding the four noble truths (Pali: cattāri ariyasaccāni) in the minor Malunkya Sutta, the Buddha also takes the elimination of suffering to be the first priority of human life. Buddhists thus conclude that we can not only reduce but eliminate all suffering by transcending and overcoming our instincts of survival, and Promta and Himma see the potential for artificial intelligence to help us achieve this.
Thomas Doctor, Olaf Witkowski, Elizaveta Solomonova, Bill Duane, Michael Levin
Inspired by the Bodhisattva vow, Thomas Doctor, Olaf Witkowski, Elizaveta Solomonova, Bill Duane, and Michael Levin proposed the slogan "intelligence as care" as a revision of the current convention for defining intelligence. One proposal for improving current AI system design, then, is to use the Bodhisattva vow as a guiding principle when setting AI design goals. The vow generally has four components; upon taking it, one makes a strong commitment (Pali: adhiṭṭhāna) to achieve the following:
- to liberate all beings from suffering, who are boundless in number;
- to extirpate all suffering, which is countless;
- to master all methods of practicing the Dharma (Pali: dhammakkhandha, Sanskrit: dharmaskandha), which are endless;
- to attain the ultimate and highest enlightenment (Sanskrit: अनुत्तर सम्यक् सम्बोधि, romanized: anuttara-samyak-saṃbodhi).
In essence, Doctor et al. define the Bodhisattva vow as a formal commitment to exercise infinite care, alleviating all stress and suffering (duḥkha) for all sentient beings: "for the sake of all sentient life, I shall achieve awakening."
Generally
Some believe that, following the Buddhist principle of nonviolence, artificial intelligence should not be used to cause harm.
References
- ^ "四弘誓願 (丁福保)" [The Four Great Vows (Ding Fubao)]. buddhaspace.org. Retrieved 2023-02-02.
- ^ Doctor, Thomas; Witkowski, Olaf; Solomonova, Elizaveta; Duane, Bill; Levin, Michael (May 2022). "Biology, Buddhism, and AI: Care as the Driver of Intelligence". Entropy. 24 (5): 710. Bibcode:2022Entrp..24..710D. doi:10.3390/e24050710. ISSN 1099-4300. PMC 9140411. PMID 35626593.
- ^ Promta, Somparn; Einar Himma, Kenneth (2008-06-27). Himma, Ken (ed.). "Artificial intelligence in Buddhist perspective". Journal of Information, Communication and Ethics in Society. 6 (2): 172–187. doi:10.1108/14779960810888374. ISSN 1477-996X.
- 長部經典(卷1) [Dīgha Nikāya, vol. 1] (in Chinese (Taiwan)).
- 中部經典(卷7) [Majjhima Nikāya, vol. 7] (in Chinese (Taiwan)).
- "What Buddhism can do for AI ethics". MIT Technology Review. Retrieved 2022-12-17.