Human-based computation

Article snapshot taken from Wikipedia under the Creative Commons Attribution-ShareAlike license.

Human-based computation (HBC), human-assisted computation, ubiquitous human computing or distributed thinking (by analogy to distributed computing) is a computer science technique in which a machine performs its function by outsourcing certain steps to humans, usually as microwork. This approach uses differences in abilities and alternative costs between humans and computer agents to achieve symbiotic human–computer interaction. For computationally difficult tasks such as image recognition, human-based computation plays a central role in training deep learning-based artificial intelligence systems. In this case, human-based computation has been referred to as human-aided artificial intelligence.

In traditional computation, a human employs a computer to solve a problem; a human provides a formalized problem description and an algorithm to a computer, and receives a solution to interpret. Human-based computation frequently reverses the roles; the computer asks a person or a large group of people to solve a problem, then collects, interprets, and integrates their solutions. This turns hybrid networks of humans and computers into "large scale distributed computing networks", where code is partially executed in human brains and on silicon-based processors.

Early work

Human-based computation (apart from the historical meaning of "computer") research has its origins in the early work on interactive evolutionary computation (EC). The idea behind interactive evolutionary algorithms has been attributed to Richard Dawkins; in the Biomorphs software accompanying his book The Blind Watchmaker (Dawkins, 1986), the preference of a human experimenter is used to guide the evolution of two-dimensional sets of line segments. In essence, this program asks a human to be the fitness function of an evolutionary algorithm, so that the algorithm can use human visual perception and aesthetic judgment to do something that a normal evolutionary algorithm cannot do. However, it is difficult to get enough evaluations from a single human if we want to evolve more complex shapes. Victor Johnston and Karl Sims extended this concept by harnessing the power of many people for fitness evaluation (Caldwell and Johnston, 1991; Sims, 1991). As a result, their programs could evolve beautiful faces and pieces of art appealing to the public. These programs effectively reversed the common interaction between computers and humans. In these programs, the computer is no longer an agent of its user, but instead a coordinator aggregating the efforts of many human evaluators. These and other similar research efforts became the topic of research in aesthetic selection or interactive evolutionary computation (Takagi, 2001); however, the scope of this research was limited to outsourcing evaluation and, as a result, it did not fully explore the potential of outsourcing.
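The human-as-fitness-function loop described above can be sketched in a few lines of illustrative Python. The machine handles variation, while selection is delegated to a callback standing in for the human judge; the genome encoding and all names here are assumptions for illustration, not Dawkins's actual Biomorphs program.

```python
import random

def evolve_interactively(select, genome_len=8, pop_size=6, generations=10, seed=0):
    """Interactive evolutionary loop: the machine proposes and mutates
    candidate genomes (flat lists of floats standing in for line-segment
    parameters), while fitness evaluation is delegated to `select`, a
    callback standing in for the human who picks the preferred candidate
    each generation."""
    rng = random.Random(seed)
    population = [[rng.uniform(-1, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        parent = select(population)                        # human aesthetic judgment
        population = [parent] + [
            [gene + rng.gauss(0, 0.1) for gene in parent]  # mutated offspring
            for _ in range(pop_size - 1)
        ]
    return select(population)
```

In a Biomorphs-style program the `select` step would render each genome and wait for a person to click one; for automated testing it can be any deterministic function, e.g. `lambda pop: max(pop, key=sum)`.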

A concept of the automatic Turing test pioneered by Moni Naor (1996) is another precursor of human-based computation. In Naor's test, the machine can control the access of humans and computers to a service by challenging them with a natural language processing (NLP) or computer vision (CV) problem in order to identify the humans among them. The set of problems is chosen so that, at present, no algorithmic solution to them is both effective and efficient. If such an algorithm existed, it could easily be performed by a computer, thus defeating the test. In fact, Moni Naor was modest in calling this an automated Turing test. The imitation game described by Alan Turing (1950) did not propose using CV problems; it proposed only a specific NLP task, while the Naor test identifies and explores a large class of problems, not necessarily from the domain of NLP, that could be used for the same purpose in both automated and non-automated versions of the test.

Finally, human-based genetic algorithm (HBGA) encourages human participation in multiple different roles. Humans are not limited to the role of evaluator or some other predefined role, but can choose to perform a more diverse set of tasks. In particular, they can contribute their innovative solutions into the evolutionary process, make incremental changes to existing solutions, and perform intelligent recombination. In short, HBGA allows humans to participate in all operations of a typical genetic algorithm. As a result, HBGA can process solutions for which there are no computational innovation operators available, for example, natural languages. Thus, HBGA obviated the need for a fixed representational scheme that was a limiting factor of both standard and interactive EC. These algorithms can also be viewed as novel forms of social organization coordinated by a computer, according to Alex Kosorukoff and David Goldberg.
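A minimal sketch of the HBGA idea, with every genetic operator a pluggable callback that, in a real system, would be a request routed to a human contributor. All names and the control flow are illustrative assumptions, not Kosorukoff's actual implementation; solutions are plain strings, showing why no fixed representational scheme is needed.

```python
import random

class HumanBasedGA:
    """Human-based genetic algorithm sketch: innovation (contribute,
    mutate, recombine) and selection are all delegated to callbacks
    standing in for human participants."""

    def __init__(self, contribute, mutate, recombine, select):
        self.contribute = contribute   # human proposes a brand-new solution
        self.mutate = mutate           # human makes an incremental change
        self.recombine = recombine     # human intelligently combines two solutions
        self.select = select           # human ranks solutions; must return >= 2 survivors

    def run(self, pop_size=4, generations=3, rng=None):
        rng = rng or random.Random(0)
        population = [self.contribute() for _ in range(pop_size)]
        for _ in range(generations):
            survivors = self.select(population)
            children = [
                self.mutate(rng.choice(survivors)) if rng.random() < 0.5
                else self.recombine(*rng.sample(survivors, 2))
                for _ in range(pop_size - len(survivors))
            ]
            population = survivors + children
        return self.select(population)[0]
```

With toy "humans" (e.g. `mutate=lambda s: s + "!"`, `select` keeping the two longest strings), the loop runs end to end on free-form text, which a conventional GA with a fixed bit-string encoding could not represent.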

Classes of human-based computation

Human-based computation methods combine computers and humans in different roles. Kosorukoff (2000) proposed a way to describe the division of labor in computation that groups human-based methods into three classes. The following table uses the evolutionary computation model to describe four classes of computation, three of which rely on humans in some role. For each class, a representative example is shown. The classification is in terms of the roles (innovation or selection) performed in each case by humans and computational processes. This table is a slice of a three-dimensional table; the third dimension defines whether the organizational function is performed by humans or a computer. Here it is assumed to be performed by a computer.

Division of labor in computation

  Selection agent \ Innovation agent   Computer                        Human
  Computer                             Genetic algorithm               Computerized tests
  Human                                Interactive genetic algorithm   Human-based genetic algorithm

Classes of human-based computation from this table can be referred to by two-letter abbreviations: HC, CH, HH. Here the first letter identifies the type of agent performing innovation, and the second letter specifies the type of selection agent. In some implementations (wiki is the most common example), human-based selection functionality might be limited; this can be indicated with a lowercase h.

Methods of human-based computation

  • (HC) Darwin (Vyssotsky, Morris, McIlroy, 1961) and Core War (Jones, Dewdney, 1984). These are games in which several programs written by people compete in a tournament (computational simulation) in which the fittest programs survive. Authors of the programs copy, modify, and recombine successful strategies to improve their chances of winning.
  • (CH) Interactive EC (Dawkins, 1986; Caldwell and Johnston, 1991; Sims, 1991). IEC enables the user to create an abstract drawing only by selecting his or her favorite images, so the human performs only the fitness computation while the software performs the innovative role. Simulated breeding introduces no explicit fitness, just selection, which is easier for humans.
  • (HH2) Wiki (Cunningham, 1995) enabled multiple users to edit web content, i.e. it supported two types of human-based innovation (contributing a new page and making incremental edits to it). However, the selection mechanism was absent until 2002, when the wiki was augmented with a revision history allowing unhelpful changes to be reversed. This provided a means for selection among several versions of the same page and turned the wiki into a tool supporting collaborative content evolution (which would be classified as a human-based evolution strategy in EC terms).
  • (HH3) Human-based genetic algorithm (Kosorukoff, 1998) uses both human-based selection and three types of human-based innovation (contributing new content, mutation, and recombination). Thus, all operators of a typical genetic algorithm are outsourced to humans (hence the name human-based). This idea was extended in 2011 to integrating crowds with a genetic algorithm to study creativity.
  • (HH1) Social search applications accept contributions from users and attempt to use human evaluation to select the fittest contributions, which rise to the top of the list. These use one type of human-based innovation. Early work was done in the context of HBGA. Digg and Reddit are popular examples. See also collaborative filtering.
  • (HC) Computerized tests. A computer generates a problem and presents it to a user in order to evaluate them. For example, CAPTCHA tells human users from computer programs by presenting a problem that is supposedly easy for a human and difficult for a computer. While CAPTCHAs are effective security measures for preventing automated abuse of online services, the human effort spent solving them is otherwise wasted. The reCAPTCHA system makes use of these human cycles to help digitize books by presenting words from scanned old books that optical character recognition cannot decipher.
  • (HC) Interactive online games: These are programs that extract knowledge from people in an entertaining way.
  • (HC) "Human Swarming" or "Social Swarming". The UNU platform for human swarming establishes real-time closed-loop systems around groups of networked users modeled after biological swarms, enabling human participants to behave as a unified collective intelligence.
  • (NHC) Natural Human Computation involves leveraging existing human behavior to extract computationally significant work without disturbing that behavior. NHC is distinguished from other forms of human-based computation in that, rather than asking humans to perform novel computational tasks, it takes advantage of previously unnoticed computational significance in existing behavior.
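The reCAPTCHA mechanism mentioned in the computerized-tests item above (a control word with a known answer paired with a word OCR could not read, plus vote aggregation across users) can be sketched as follows. This is an illustrative reconstruction under simplifying assumptions, not the actual system, which used multiple OCR engines, weighting, and image distortion.

```python
def recaptcha_style_check(control_word, answer_control, answer_unknown,
                          votes, threshold=3):
    """Solving the control word proves the solver is likely human;
    their reading of the unknown word is then recorded as a vote in
    `votes` (a per-word tally shared across users). Once `threshold`
    votes agree on one transcription, the word counts as digitized.

    Returns (passed_captcha, agreed_transcription_or_None)."""
    if answer_control.strip().lower() != control_word.lower():
        return False, None            # failed the control: reject, record nothing
    reading = answer_unknown.strip().lower()
    votes[reading] = votes.get(reading, 0) + 1
    if votes[reading] >= threshold:
        return True, reading          # enough independent humans agree
    return True, None
```

Each call both gates access (the security function) and, as a side effect, harvests one unit of otherwise-wasted human transcription work, which is the essence of the HC class.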

Incentives to participation

In different human-based computation projects people are motivated by one or more of the following.

  • Receiving a fair share of the result
  • Direct monetary compensation (e.g. in Amazon Mechanical Turk, ChaCha Search guide, Mahalo.com Answers members)
  • Opportunity to participate in the global information economy
  • Desire to diversify their activity (e.g. "people aren't asked in their daily lives to be creative")
  • Esthetic satisfaction
  • Curiosity, desire to test if it works
  • Volunteerism, desire to support a cause of the project
  • Reciprocity, exchange, mutual help
  • Desire to be entertained with the competitive or cooperative spirit of a game
  • Desire to communicate and share knowledge
  • Desire to share a user innovation to see if someone else can improve on it
  • Desire to game the system and influence the final result
  • Enjoyment
  • Increasing online reputation/recognition

Many projects have explored various combinations of these incentives. More information about the motivation of participants in these projects can be found in Kosorukoff and von Hippel.

Human-based computation as a form of social organization

Viewed as a form of social organization, human-based computation often turns out, surprisingly, to be more robust and productive than traditional organizations. The latter depend on obligations to maintain their more or less fixed structure and to remain functional and stable. Each of them is similar to a carefully designed mechanism with humans as its parts. However, this limits the freedom of their human employees and subjects them to various kinds of stress. Most people, unlike mechanical parts, find it difficult to adapt to fixed roles that best fit the organization. Evolutionary human-computation projects offer a natural solution to this problem: they adapt organizational structure to human spontaneity, accommodate human mistakes and creativity, and utilize both in a constructive way. This leaves their participants free from obligations without endangering the functionality of the whole, making people happier. There are still some challenging research problems to be solved before the full potential of this idea can be realized.

The algorithmic outsourcing techniques used in human-based computation are much more scalable than the manual or automated techniques used traditionally to manage outsourcing. It is this scalability that allows the effort to be easily distributed among thousands (or more) of participants. It was suggested recently that this mass outsourcing is sufficiently different from traditional small-scale outsourcing to merit a new name: crowdsourcing. However, others have argued that crowdsourcing ought to be distinguished from true human-based computation. Crowdsourcing does indeed involve the distribution of computation tasks across a number of human agents, but Michelucci argues that this is not sufficient for it to be considered human computation. Human computation requires not just that a task be distributed across different agents, but also that the set of agents across which the task is distributed be mixed: some of them must be humans, but others must be traditional computers. It is this mixture of different types of agents in a computational system that gives human-based computation its distinctive character. Some instances of crowdsourcing do indeed meet this criterion, but not all of them do.

Human computation organizes workers through a task market with APIs, task prices, and software-as-a-service protocols that allow employers/requesters to receive data produced by workers directly into their IT systems. As a result, many employers attempt to manage workers automatically through algorithms rather than responding to workers on a case-by-case basis or addressing their concerns. Responding to workers is difficult to scale to the employment levels enabled by human computation microwork platforms. Workers in the Mechanical Turk system, for example, have reported that human computation employers can be unresponsive to their concerns and needs.

Applications

Further information: List of crowdsourcing projects

Human assistance can be helpful in solving any AI-complete problem, which by definition is a task that is infeasible for computers but feasible for humans. Specific practical applications include:

Criticism

Human-based computation has been criticized as exploitative and deceptive with the potential to undermine collective action.

In social philosophy it has been argued that human-based computation is an implicit form of online labour. The philosopher Rainer Mühlhoff distinguishes five different types of "machinic capture" of human microwork in "hybrid human-computer networks": (1) gamification, (2) "trapping and tracking" (e.g. CAPTCHAs or click-tracking in Google search), (3) social exploitation (e.g. tagging faces on Facebook), (4) information mining and (5) click-work (such as on Amazon Mechanical Turk). Mühlhoff argues that human-based computation often feeds into deep learning-based artificial intelligence systems, a phenomenon he analyzes as "human-aided artificial intelligence".

See also

References

  1. Shahaf, Dafna; Amir, Eyal (28 March 2007). "Towards a Theory of AI Completeness" (PDF). Retrieved 12 May 2022.
  2. Mühlhoff, Rainer (2019-11-06). "Human-aided artificial intelligence: Or, how to run large computations in human brains? Toward a media sociology of machine learning". New Media & Society. 22 (10): 1868–1884. doi:10.1177/1461444819885334. ISSN 1461-4448. S2CID 209363848.
  3. The term "computer" is used here in its modern sense, not in the historical sense of a human computer.
  4. Turing, Alan M. (1950). "Computing Machinery and Intelligence" (PDF). Retrieved 12 May 2022.
  5. Fogarty, Terence C. (20 August 2003). "Automatic concept evolution". The Second IEEE International Conference on Cognitive Informatics, 2003. Proceedings. p. 89. doi:10.1109/COGINF.2003.1225961. ISBN 0-7695-1986-5. S2CID 30299981. Retrieved 21 June 2021.
  6. von Ahn, Luis (22 August 2012), Human Computation, vol. Google Tech Talk July 26, 2006, archived from the original on 2021-12-19, retrieved 2019-11-22. Cited after Mühlhoff, Rainer (2019). "Human-aided artificial intelligence: Or, how to run large computations in human brains? Toward a media sociology of machine learning". New Media & Society: 146144481988533. doi:10.1177/1461444819885334. ISSN 1461-4448.
  7. Gentry, Craig; Ramzan, Zulfikar; Stubblebine, Stuart. "Secure Distributed Human Computation" (PDF). Retrieved 12 May 2022.
  8. Gentry, Craig; Ramzan, Zulfikar; Stubblebine, Stuart (2005). "Secure Distributed Human Computation". Lecture Notes in Computer Science. Vol. 3570. pp. 328–332. doi:10.1007/11507840_28. ISBN 978-3-540-26656-3. Retrieved 12 May 2022.
  9. Herdy, Michael (1996). "Evolution strategies with subjective selection". Basic Concepts of Evolutionary Computation. Vol. 1141. pp. 22–31. doi:10.1007/3-540-61723-X_966. ISBN 9783540706687. Retrieved 12 May 2022.
  10. Dawkins, Richard. "The Blind Watchmaker". Retrieved 12 May 2022.
  11. Johnston, Victor. "Method and apparatus for generating composites of human faces". Archived from the original on October 14, 2013. Retrieved 12 May 2022. U.S. patent 5,375,195
  12. Sims, Karl P. "Computer system and method for generating and mutating objects by iterative evolution". Archived from the original on October 14, 2013. Retrieved 12 May 2022. U.S. patent 6,088,510
  13. Naor, Moni. "Verification of a human in the loop or Identification via the Turing Test". Retrieved 12 May 2021.
  14. Kosorukoff, A. (2001). "Human based genetic algorithm". IEEE International Conference on Systems, Man and Cybernetics. Vol. 5. pp. 3464–3469. doi:10.1109/ICSMC.2001.972056. ISBN 0-7803-7087-2. S2CID 13839604. Retrieved 12 May 2022.
  15. Fogarty, Terence C.; Hammond, Michelle O. "Co-operative OuLiPian (Ouvroir de littérature potentielle) Generative Literature Using Human-Based Evolutionary Computing" (PDF). Retrieved 12 May 2022.
  16. Takagi, Hideyuki (September 2001). "Interactive evolutionary computation: fusion of the capabilities of EC optimization and human evaluation". Proceedings of the IEEE. 89 (9): 1275–1296. doi:10.1109/5.949485. hdl:2324/1670053. S2CID 16929436. Retrieved 12 May 2022.
  17. "Evolutionary Computation as a Form of Organization". pp. 965–972 (PDF). Archived from the original (PDF) on 7 July 2011. Retrieved 12 May 2022.
  18. Unemi, Tastsuo (1998). "A Design of Multi-Field User Interface for Simulated Breeding". Proceedings of the Korean Institute of Intelligent Systems Conference: 489–494. Retrieved 12 May 2022.
  19. Yu, Lixiu; Nickerson, Jeffrey V. (May 7, 2011). Cooks or cobblers?: Crowd creativity through combination. pp. 1393–1402. doi:10.1145/1978942.1979147. ISBN 9781450302289. S2CID 11287874. Retrieved 12 May 2022.
  20. von Ahn, Luis; Maurer, Benjamin; McMillen, Colin; Abraham, David; Blum, Manuel (12 September 2008). "reCAPTCHA: Human-Based Character Recognition via Web Security Measures" (PDF). Retrieved 12 May 2022.
  21. Burgener, Robin. "20Q . net. Twenty Questions. The neural-net on the Internet. Play Twenty Questions". Archived from the original on 29 February 2000. Retrieved 12 May 2022.
  22. von Ahn, Luis; Dabbish, Laura. "Labeling Images with a Computer Game" (PDF). Retrieved 12 May 2022.
  23. von Ahn, Luis; Kedia, Mihir; Blum, Manuel. "Verbosity: A Game for Collecting Common-Sense Facts" (PDF). Retrieved 12 May 2022.
  24. von Ahn, Luis; Ginosar, Shiri; Kedia, Mihir; Liu, Ruoran; Blum, Manuel. "Improving Accessibility of the Web with a Computer Game" (PDF). Retrieved 12 May 2022.
  25. von Ahn, Luis (19 July 2011). "Method for labeling images through a computer game". Retrieved 12 May 2022.U.S. patent 7,980,953
  26. Rosenberg, Louis B. "Human Swarms: a real-time paradigm for Collective intelligence" (PDF). University of Michigan College of LSA. Retrieved 12 May 2021.
  27. "Swarms: a real-time paradigm for Collective intelligence". Archived from the original on 27 October 2015. Retrieved 12 May 2022.
  28. Sunstein, Cass R. (August 16, 2006). "Infotopia: How Many Minds Produce Knowledge". SSRN 924249. Retrieved 12 May 2022.
  29. Malone, Thomas W.; Laubacher, Robert; Dellarocas, Chrysanthos (February 3, 2009). "Harnessing Crowds: Mapping the Genome of Collective Intelligence". doi:10.2139/ssrn.1381502. hdl:1721.1/66259. S2CID 110848079. SSRN 1381502. Retrieved 12 May 2022. {{cite journal}}: Cite journal requires |journal= (help)
  30. "Human Swarms, a real-time method for collective intelligence". Archived from the original on October 27, 2015. Retrieved October 12, 2015.
  31. "Swarms of Humans Power A.I. Platform : Discovery News". Archived from the original on June 21, 2015. Retrieved June 21, 2015.
  32. Estrada, Daniel, and Jonathan Lawhead, "Gaming the Attention Economy" in The Springer Handbook of Human Computation, Pietro Michelucci (ed.), (Springer, 2014)
  33. Schriner, Andrew; Oerther, Daniel (2014). "No Really, (Crowd) Work is the Silver Bullet". Procedia Engineering. 78 (2014): 224–228. doi:10.1016/j.proeng.2014.07.060.
  34. (Q&A) Your Assignment: Art
  35. Kosorukoff, Alexander. "Social classification structures. Optimal decision making in an organization" (PDF). Archived from the original (PDF) on 7 July 2011. Retrieved 12 May 2022.
  36. Von Hippel, Eric. "Democratizing Innovation". Retrieved 12 May 2022.
  37. von Hippel, Eric (2005). Democratizing Innovation. Book collections on Project MUSE. MIT Press. ISBN 978-0-262-00274-5. Retrieved 17 June 2024.
  38. Kosorukoff, Alexander; Goldberg, David (2002). "Evolutionary Computation as a Form of Organization" (PDF). Archived from the original (PDF) on 7 July 2011. Retrieved 12 May 2022.
  39. Howe, Jeff (June 2006). "The Rise of Crowdsourcing". Wired. Retrieved 12 May 2022.
  40. Michelucci, Pietro. Handbook of Human Computation. Retrieved 12 May 2022.
  41. Irani, Lilly (2015). "The Cultural Work of Microwork". New Media & Society. 17 (5): 720–739. doi:10.1177/1461444813511926. S2CID 377594.
  42. Irani, Lilly; Silberman, Six (2013). "Turkopticon". Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. Chi '13. pp. 611–620. doi:10.1145/2470654.2470742. ISBN 9781450318990. S2CID 207203679.
  43. US 7599911, Manber, Udi & Chang, Chi-Chao, "Method and apparatus for search ranking using human input and automated ranking", published 2009-10-06, assigned to Yahoo! Inc. 
  44. "Method and apparatus for search ranking using human input and automated ranking". Retrieved 12 May 2022.
  45. Zittrain, Jonathan (July 20, 2019). "Minds for Sale". Retrieved 12 May 2022.
  46. Jafarinaimi, Nassim (February 7, 2012). Exploring the character of participation in social media: the case of Google Image Labeler. pp. 72–79. doi:10.1145/2132176.2132186. ISBN 9781450307826. S2CID 7094199. Retrieved 12 May 2022.
  47. Mühlhoff, Rainer (2020). "Human-aided artificial intelligence: Or, how to run large computations in human brains? Toward a media sociology of machine learning". New Media & Society. 22 (10): 1868–1884. doi:10.1177/1461444819885334. S2CID 209363848.
  48. Mühlhoff, Rainer (2019-11-06). "Human-aided artificial intelligence: Or, how to run large computations in human brains? Toward a media sociology of machine learning". New Media & Society. 22 (10): 1868–1884. doi:10.1177/1461444819885334. ISSN 1461-4448. S2CID 209363848.
  49. Mühlhoff, Rainer. "Human-aided artificial intelligence: Or, how to run large computations in human brains? Toward a media sociology of machine learning" (PDF). Retrieved 12 May 2022.