Comparing differences of trust, collaboration and communication between human-human vs human-bot teams: an experimental study

Authors

  • Ruchika Jain, Delhi Technological University, Bawana Rd, Shahbad Daulatpur Village, Rohini, New Delhi, Delhi
  • Naval Garg, Delhi Technological University, Bawana Rd, Shahbad Daulatpur Village, Rohini, New Delhi, Delhi
  • Shikha N. Khera, Delhi Technological University, Bawana Rd, Shahbad Daulatpur Village, Rohini, New Delhi, Delhi

DOI:

https://doi.org/10.23726/cij.2022.1387

Keywords:

Human-bot, Trust, Collaboration, Communication

Abstract

As machines enter the workplace, organizations are working to build collaboration between machines and humans. However, the literature offers limited understanding of how human-machine collaboration differs from human-human collaboration. Using an experimental design, this study examined differences in trust, collaboration and communication between two kinds of teams: human-bot teams and human-only teams. This setup was chosen because of the limited availability of bots capable of expressing collaboration. The findings highlight differences in communication and collaboration between humans and bots as teammates, while no differences were found in the trust experienced by the human participants. The originality of the research lies in its focus on collaboration as a process and an outcome rather than on team performance.

Published

2022-12-22

How to Cite

Jain, R., Garg, N., & Khera, S. N. (2022). Comparing differences of trust, collaboration and communication between human-human vs human-bot teams: an experimental study. CERN IdeaSquare Journal of Experimental Innovation, 7(2), 8–16. https://doi.org/10.23726/cij.2022.1387

Issue

Vol. 7 No. 2 (2022)

Section

Original Articles