Comparing differences of trust, collaboration and communication between human-human vs human-bot teams: an experimental study


  • Ruchika Jain, Delhi Technological University, Shahbad Daulatpur Village, Bawana Rd, Rohini, New Delhi, Delhi
  • Naval Garg, Delhi Technological University, Shahbad Daulatpur Village, Bawana Rd, Rohini, New Delhi, Delhi
  • Shikha N Khera, Delhi Technological University, Shahbad Daulatpur Village, Bawana Rd, Rohini, New Delhi, Delhi



Human-bot, Trust, Collaboration, Communication


As machines enter the workplace, organizations work toward building their collaboration with humans. However, the literature offers only a limited understanding of how human-machine collaboration differs from human-human collaboration. Using an experimental design, this study examined differences in trust, collaboration and communication between two types of teams: human-bot teams and human-only teams. This setup was chosen because of the limited availability of bots that express collaboration. The findings highlight differences in communication and collaboration between humans and bots as teammates; no differences were found in the trust experienced by humans. The originality of the research lies in its focus on collaboration as a process and an outcome, rather than on team performance.


How to Cite

Jain, R., Garg, N., & Khera, S. N. (2022). Comparing differences of trust, collaboration and communication between human-human vs human-bot teams: an experimental study. CERN IdeaSquare Journal of Experimental Innovation, 7(2), 8–16.
