[
{
"title": "The purposeful presentation of ai teammates: Impacts on human acceptance and perception",
"author": [
{
"given": "Christopher Flathmann"
},
{
"given": "Beau G Schelble"
},
{
"given": "Nathan J McNeese"
},
{
"given": "Bart Knijnenburg"
}
],
"URL": "https://scholar.google.com/citations?view_op=view_citation&hl=en&user=G1CnZ38AAAAJ&cstart=30&pagesize=10&sortby=pubdate&citation_for_view=G1CnZ38AAAAJ:SdhP9T11ey4C",
"abstract": "OBJECTIVE: >The paper reports on two empirical studies that provide the first examination into how the presentation of an AI teammate\u2019s identity, responsibility, and capability impacts humans\u2019 perception surrounding AI teammate adoption before interacting as teammates. Study 1\u2019s results indicated that AI teammates are accepted when they share equal responsibility on a task with humans, but other perceptions such as job security generally decline the more responsibility AI teammates have. Study 1 also revealed that identifying an AI as a tool instead of a teammate can have small benefits to human perceptions of job security and adoption. Study 2 revealed that the negative impacts of increasing responsibility can be mitigated by presenting AI teammates\u2019 capabilities as being endorsed by coworkers and one\u2019s own past experience. This paper discusses how to use these results to best balance the presentation of AI teammates\u00a0\u2026",
"issued": {
"date-parts": [
[
"2023"
]
]
}
},
{
"title": "Adapting to the human: A systematic review of a decade of human factors research on adaptive autonomy",
"author": [
{
"given": "Allyson I Hauptman"
},
{
"given": "Christopher Flathmann"
},
{
"given": "Nathan J McNeese"
}
],
"URL": "https://scholar.google.com/citations?view_op=view_citation&hl=en&user=G1CnZ38AAAAJ&pagesize=10&sortby=pubdate&citation_for_view=G1CnZ38AAAAJ:35r97b3x0nAC",
"abstract": "OBJECTIVE: >This systematic review provides an understanding of existing human factors research on adaptive autonomy, its design, its impacts, and its definition. We conducted a search on <i>adaptive autonomy</i> and additional relevant search terms in four databases, which produced an initial 245 articles. The application of inclusion and exclusion criteria produced a total of 60 articles for in-depth review. Through a collaborative coding process and analysis, we extracted triggers for and types of autonomy adaptations, as well as human factors dependent variables that have been studied in previous adaptive autonomy research. Based on this analysis, we present a definition of <i>adaptive autonomy</i> for use in human factors artificial intelligence research, as well as a comprehensive review of existing research contributions, notable research gaps, and the application of adaptive autonomy.",
"issued": {
"date-parts": [
[
"2024"
]
]
}
},
{
"title": "Recommendations with benefits: exploring explanations in information sharing recommender systems for temporary teams",
"author": [
{
"given": "Geoff Musick"
},
{
"given": "Allyson I Hauptman"
},
{
"given": "Christopher Flathmann"
},
{
"given": "Nathan J McNeese"
},
{
"given": "Bart P Knijnenburg"
}
],
"URL": "https://scholar.google.com/citations?view_op=view_citation&hl=en&user=G1CnZ38AAAAJ&cstart=20&pagesize=10&sortby=pubdate&citation_for_view=G1CnZ38AAAAJ:PR6Y55bgFSsC",
"abstract": "OBJECTIVE: >Increased use of collaborative technologies and agile teamwork models has led to a greater need for temporary teams. Unfortunately, they lack the normal team formation processes that traditional teams use. Information sharing recommender systems can be used to share information about team members amongst the team; however, these systems rely on the team members themselves to disclose valuable information. While prior research has shown that an effective way to encourage user disclosure is through explanations to the user about what benefits they will gain from disclosure, the timing of such explanations has yet to be consideblack. In a between-subjects study with 150 participants, we assessed the content and timing of explanations on levels of disclosure in temporary teams. Our results indicate that providing benefit-related explanations during the time of disclosure can increase user disclosure, and\u00a0\u2026",
"issued": {
"date-parts": [
[
"2023"
]
]
}
},
{
"title": "Human-autonomy Teaming: Need for a guiding team-based framework?",
"author": [
{
"given": "Christopher Flathmann"
},
{
"given": "Nathan J McNeese"
},
{
"given": "Eduardo Salas"
}
],
"URL": "https://scholar.google.com/citations?view_op=view_citation&hl=en&user=G1CnZ38AAAAJ&cstart=30&pagesize=10&sortby=pubdate&citation_for_view=G1CnZ38AAAAJ:olpn-zPbct0C",
"abstract": "OBJECTIVE: >Whereas high-performance teamwork has been studied empirically for 70 years, a new form of teaming is on the rise. Enabled through the rapid progression of artificial intelligence, a human-autonomy team (HAT) involves one or more autonomous computerized agents collaborating with humans on interdependent tasks toward the achievement of a common goal. Whereas research on HATs is exploding in recent years, that research has not strongly embraced the vast literature, theory, and methods already developed in the all-human teaming literature. Moreover, definitional and construct validity issues, in terms of what constitutes a HAT, persist in the literature. In the current article we offer construct clarity and we integrate the Input-Mediator-Output model from the high-performance teaming literature to help future researchers classify the variables under study, theorize deeper, and consolidate findings across\u00a0\u2026",
"issued": {
"date-parts": [
[
"2023"
]
]
}
},
{
"title": "Psychosocial Portraits of Participation in a Virtual World: A Comparative Analysis of Roles and Motives Across Three Different Professional Development Subreddits",
"author": [
{
"given": "Subhasree Sengupta"
},
{
"given": "Jasmina Tacheva"
},
{
"given": "Nathan McNeese"
}
],
"URL": "https://scholar.google.com/citations?view_op=view_citation&hl=en&user=G1CnZ38AAAAJ&cstart=10&pagesize=10&sortby=pubdate&citation_for_view=G1CnZ38AAAAJ:_Re3VWB3Y0AC",
"abstract": "OBJECTIVE: >Work and learning are essential facets of our existence, yet women continue to face multiple restrictions that hinder and impede their professional outcomes. These restrictions are especially pronounced in the technical domains of Information technology and Computer science. This paper explores the power of informal online communities to act as a collective shield of care and support in resisting and disrupting gender-based barriers. By comparing three professional development forums on Reddit, we explore the emergent social roles and how these engender community extending support, solidarity, and collective enrichment. Through a novel exploration of psychosocial linguistic markers, we identify four roles and outline key signatures delineating differing motives, intent, and commitment to the community. Expanding prior research that distinguishes between communal and agentic dispositions of actors in\u00a0\u2026",
"issued": {
"date-parts": [
[
"2024"
]
]
}
},
{
"title": "Privacy and Trust in HCI",
"author": [
{
"given": "Bart P Knijnenburg"
},
{
"given": "Nathan J McNeese"
}
],
"URL": "https://scholar.google.com/citations?view_op=view_citation&hl=en&user=G1CnZ38AAAAJ&pagesize=10&sortby=pubdate&citation_for_view=G1CnZ38AAAAJ:-_dYPAW6P2MC",
"abstract": "OBJECTIVE: >This chapter presents a high-level overview of HCI-relevant research on privacy and trust. While this chapter mostly presents timeless aspects of privacy and trust, it aims to offer a primer targeted at researchers and practitioners who work on contemporary computing technologies, in particular artificial intelligence (AI) systems and social technologies. The two main components of this primer are a section covering theoretical perspectives that can be employed to conceptualize privacy and trust, and a section covering the methods that can be employed to measure these concepts. Specific opportunities for HCI researchers and practitioners are provided throughout this chapter.",
"issued": {
"date-parts": [
[
"2024"
]
]
}
},
{
"title": "What you say vs what you do: Utilizing positive emotional expressions to relay AI teammate intent within human-AI teams",
"author": [
{
"given": "Rohit Mallick"
},
{
"given": "Christopher Flathmann"
},
{
"given": "Wen Duan"
},
{
"given": "Beau G Schelble"
},
{
"given": "Nathan J McNeese"
}
],
"URL": "https://scholar.google.com/citations?view_op=view_citation&hl=en&user=G1CnZ38AAAAJ&pagesize=10&sortby=pubdate&citation_for_view=G1CnZ38AAAAJ:uJ-U7cs_P_0C",
"abstract": "OBJECTIVE: >With the expansive growth of AI\u2019s capabilities in recent years, researchers have been tasked with developing and improving human-centered AI collaborations, necessitating the creation of human-AI teams (HATs). However, the differences in communication styles between humans and AI often prevent human teammates from fully understanding the intent and needs of AI teammates. One core difference is that humans naturally leverage a positive emotional tone during communication to convey their confidence or lack thereof to convey doubt in their ability to complete a task. Yet, this communication strategy must be explicitly designed in order for an AI teammate to be human-centered. In this mixed-methods study, 45 participants completed a study examining how human teammates interpret the behaviors of their AI teammates when they express different positive emotions via specific words/phrases. Quantitative\u00a0\u2026",
"issued": {
"date-parts": [
[
"2024"
]
]
}
},
{
"title": "Understanding the impact and design of AI teammate etiquette",
"author": [
{
"given": "Christopher Flathmann"
},
{
"given": "Nathan J McNeese"
},
{
"given": "Beau Schelble"
},
{
"given": "Bart Knijnenburg"
},
{
"given": "Guo Freeman"
}
],
"URL": "https://scholar.google.com/citations?view_op=view_citation&hl=en&user=G1CnZ38AAAAJ&cstart=40&pagesize=10&sortby=pubdate&citation_for_view=G1CnZ38AAAAJ:XiVPGOgt02cC",
"abstract": "OBJECTIVE: >Technical and practical advancements in Artificial Intelligence (AI) have led to AI teammates working alongside humans in an area known as human-agent teaming. While critical past research has shown the benefit to trust driven by the incorporation of interaction rules and structures (i.e. etiquette) in both AI tools and robotic teammates, research has yet to explicitly examine etiquette for digital AI teammates. Given the historic importance of trust within human-agent teams, the identification of etiquette\u2019s impact within said teams should be paramount. Thus, this study empirically evaluates the impact of AI teammate etiquette through a mixed-methods study that compares AI teammates that either adhere to or ignore traditional etiquette standards for machine systems. The quantitative results show that traditional etiquette adherence leads to greater trust, perceived performance of the AI, and perceived performance of\u00a0\u2026",
"issued": {
"date-parts": [
[
"2023"
]
]
}
},
{
"title": "Adapt and overcome: Perceptions of adaptive autonomous agents for human-AI teaming",
"author": [
{
"given": "Allyson I Hauptman"
},
{
"given": "Beau G Schelble"
},
{
"given": "Nathan J McNeese"
},
{
"given": "Kapil Chalil Madathil"
}
],
"URL": "https://scholar.google.com/citations?view_op=view_citation&hl=en&user=G1CnZ38AAAAJ&cstart=50&pagesize=10&sortby=pubdate&citation_for_view=G1CnZ38AAAAJ:LPZeul_q3PIC",
"abstract": "OBJECTIVE: >Rapid advances in AI technologies have caused teams to explore the use of AI agents as full, active members of the team. The complex environments that teams occupy require human team members to constantly adapt their behaviors, and thus the ability of AI teammates to similarly adapt to changing situations significantly enhances the team\u2019s chances to succeed. In order to design such agents, it is important that we understand not only how to identify the amount of autonomous control AI agents have over their decisions, but also how changes to this control cognitively affects the rest of the team. Professional organizations often break their work cycles into phases that set limits on the team members\u2019 actions, and we propose that a similar process could be used to define the autonomy levels of AI teammates. Cyber incident response is an ideal context for this proposal, as we were able to use incident response\u00a0\u2026",
"issued": {
"date-parts": [
[
"2023"
]
]
}
},
{
"title": "Towards ethical AI: Empirically investigating dimensions of AI ethics, trust repair, and performance in human-AI teaming",
"author": [
{
"given": "Beau G Schelble"
},
{
"given": "Jeremy Lopez"
},
{
"given": "Claire Textor"
},
{
"given": "Rui Zhang"
},
{
"given": "Nathan J McNeese"
},
{
"given": "Richard Pak"
}
],
"URL": "https://scholar.google.com/citations?view_op=view_citation&hl=en&user=G1CnZ38AAAAJ&cstart=10&pagesize=10&sortby=pubdate&citation_for_view=G1CnZ38AAAAJ:5Ul4iDaHHb8C",
"abstract": "OBJECTIVE: >Determining the efficacy of two trust repair strategies (apology and denial) for trust violations of an ethical nature by an autonomous teammate.BACKGROUND: >While ethics in human-AI interaction is extensively studied, little research has investigated how decisions with ethical implications impact trust and performance within human-AI teams and their subsequent repair.METHODS: >Forty teams of two participants and one autonomous teammate completed three team missions within a synthetic task environment. The autonomous teammate made an ethical or unethical action during each mission, followed by an apology or denial. Measures of individual team trust, autonomous teammate trust, human teammate trust, perceived autonomous teammate ethicality, and team performance were taken.",
"issued": {
"date-parts": [
[
"2024"
]
]
}
},
{
"title": "Refocusing human-AI interaction through a teamwork lens",
"author": [
{
"given": "Christopher Flathmann"
},
{
"given": "Beau G Schelble"
},
{
"given": "Nathan J McNeese"
}
],
"URL": "https://scholar.google.com/citations?view_op=view_citation&hl=en&user=G1CnZ38AAAAJ&cstart=40&pagesize=10&sortby=pubdate&citation_for_view=G1CnZ38AAAAJ:tkaPQYYpVKoC",
"abstract": "OBJECTIVE: >Alongside the rapid development and progression of AI algorithms, parallel efforts have been made to apply AI algorithms to human-facing systems. Over the last few decades, computational systems have been mostly relegated to automating basic and repetitive tasks alongside humans, thus creating human\u2013automation interaction (Lee & See, 2004). However, the last few years have seen major strides made in artificial intelligence (AI) allowing for more dynamic and complex problems to be solved. What separates the technologies is that automation is often task-oriented and lacks the flexibility to handle changes in the task it was designed for (known as brittleness), but AI has the capability to handle tasks with dynamic features and may even have the potential to handle multiple tasks in a dynamic environment, which also makes it more capable of working alongside humans in these environments (Wynne & Lyons\u00a0\u2026",
"issued": {
"date-parts": [
[
"2023"
]
]
}
},
{
"title": "Dynamical measurement of team resilience",
"author": [
{
"given": "David AP Grimm"
},
{
"given": "Jamie C Gorman"
},
{
"given": "Nancy J Cooke"
},
{
"given": "Mustafa Demir"
},
{
"given": "Nathan J McNeese"
}
],
"URL": "https://scholar.google.com/citations?view_op=view_citation&hl=en&user=G1CnZ38AAAAJ&cstart=20&pagesize=10&sortby=pubdate&citation_for_view=G1CnZ38AAAAJ:eq2jaN3J8jMC",
"abstract": "OBJECTIVE: >Resilient teams overcome sudden, dynamic changes by enacting rapid, adaptive responses that maintain system effectiveness. We analyzed two experiments on human-autonomy teams (HATs) operating a simulated remotely piloted aircraft system (RPAS) and correlated dynamical measures of resilience with measures of team performance. Across both experiments, HATs experienced automation and autonomy failures, using a Wizard of Oz paradigm. Team performance was measured in multiple ways, using a mission-level performance score, a target processing efficiency score, a failure overcome score, and a ground truth resilience score. Novel dynamical systems metrics of resilience measured the timing of system reorganization in response to failures across RPAS layers, including vehicle, controls, communications layers, and the system overall. Time to achieve extreme values of reorganization and novelty\u00a0\u2026",
"issued": {
"date-parts": [
[
"2023"
]
]
}
},
{
"title": "Investigating the effects of perceived teammate artificiality on human performance and cognition",
"author": [
{
"given": "Beau G Schelble"
},
{
"given": "Christopher Flathmann"
},
{
"given": "Nathan J McNeese"
},
{
"given": "Thomas O\u2019Neill"
},
{
"given": "Richard Pak"
},
{
"given": "Moses Namara"
}
],
"URL": "https://scholar.google.com/citations?view_op=view_citation&hl=en&user=G1CnZ38AAAAJ&cstart=30&pagesize=10&sortby=pubdate&citation_for_view=G1CnZ38AAAAJ:sSrBHYA8nusC",
"abstract": "OBJECTIVE: >Teammates powered by artificial intelligence (AI) are becoming more prevalent and capable in their abilities as a teammate. While these teammates have great potential in improving team performance, empirical work that explores the impacts of these teammates on the humans they work with is still in its infancy. Thus, this study explores how the inclusion of AI teammates impacts both the performative abilities of human-AI teams in addition to the perceptions those humans form. The current study found that participants perceiving their third teammate as artificial performed worse than those perceiving them as human. Furthermore, these performance differences were significantly moderated by the task\u2019s difficulty, with participants in the AI teammate condition significantly outperforming participants perceiving a human teammate in the highest difficulty task, which diverges from previous human-AI teaming literature\u00a0\u2026",
"issued": {
"date-parts": [
[
"2023"
]
]
}
},
{
"title": "The Role of Autonomy Levels and Contextual Risk in Designing Safer AI Teammates",
"author": [
{
"given": "Allyson I Hauptman"
},
{
"given": "Beau G Schelble"
},
{
"given": "Christopher Flathmann"
},
{
"given": "Nathan J McNeese"
}
],
"URL": "https://scholar.google.com/citations?view_op=view_citation&hl=en&user=G1CnZ38AAAAJ&pagesize=10&sortby=pubdate&citation_for_view=G1CnZ38AAAAJ:evX43VCCuoAC",
"abstract": "OBJECTIVE: >As AI becomes more intelligent and autonomous, the concept of human-AI teaming has become more realistic and attractive. Despite the promises of AI teammates, human-AI teams face new, unique challenges. One such challenge is the declining ability of human team members to detect and respond to AI failures as they become further removed from the AI\u2019s decision-making loop. In this study, we conducted virtual experiments with twelve experts in two different teaming contexts, cyber incident response and medical triage, to understand how contextual risk impacts human teammate situational awareness and failure performance over a human-AI team\u2019s action cycle. Our results indicate that situational awareness is more closely tied to context, while failure performance is more closely tied to the team\u2019s action cycle. These results provide the foundation for future research into using contextual risk in determining\u00a0\u2026",
"issued": {
"date-parts": [
[
"2024"
]
]
}
},
{
"title": "Selective Sharing is Caring: Toward the Design of a Collaborative Tool to Facilitate Team Sharing.",
"author": [
{
"given": "Geoff Musick"
},
{
"given": "Beau G Schelble"
},
{
"given": "Rohit Mallick"
},
{
"given": "Nathan J McNeese"
}
],
"URL": "https://scholar.google.com/citations?view_op=view_citation&hl=en&user=G1CnZ38AAAAJ&cstart=40&pagesize=10&sortby=pubdate&citation_for_view=G1CnZ38AAAAJ:5ugPr518TE4C",
"abstract": "OBJECTIVE: >Temporary teams are commonly limited by the amount of experience with their new teammates, leading to poor understanding and coordination. Collaborative tools can promote teammate team mental models (eg, teammate attitudes, tendencies, and preferences) by sharing personal information between teammates during team formation. The current study utilizes 89 participants engaging in real-world temporary teams to better understand user perceptions of sharing personal information. Qualitative and quantitative results revealed unique findings including: 1) Users perceived personality and conflict management style assessments to be accurate and sharing these assessments to be helpful, but had mixed perceptions regarding the appropriateness of sharing; 2) Users of the collaborative tool had higher perceptions of sharing in terms of helpfulness and appropriateness; and 3) User feedback highlighted the need for tools to selectively share less data with more context to improve appropriateness and helpfulness while reducing the amount of time to read.",
"issued": {
"date-parts": [
[
"2023"
]
]
}
},
{
"title": "The Effect of AI Teammate Ethicality on Trust Outcomes and Individual Performance in Human-AI Teams.",
"author": [
{
"given": "Beau G Schelble"
},
{
"given": "Caitlin Lancaster"
},
{
"given": "Wen Duan"
},
{
"given": "Rohit Mallick"
},
{
"given": "Nathan J McNeese"
},
{
"given": "Jeremy Lopez"
}
],
"URL": "https://scholar.google.com/citations?view_op=view_citation&hl=en&user=G1CnZ38AAAAJ&cstart=40&pagesize=10&sortby=pubdate&citation_for_view=G1CnZ38AAAAJ:J-pR_7NvFogC",
"abstract": "OBJECTIVE: >This study improves the understanding of trust in human-AI teams by investigating the relationship of AI teammate ethicality on individual outcomes of trust (ie, monitoring, confidence, fear) in AI teammates and human teammates over time. Specifically, a synthetic task environment was built to support a three-person team with two human teammates and one AI teammate (simulated by a confederate). The AI teammate performed either an ethical or unethical action in three missions, and measures of trust in the human and AI teammate were taken after each mission. Results from the study revealed that unethical actions by the AT had a significant effect on nearly all of the outcomes of trust measured and that levels of trust were dynamic over time for both the AI and human teammate, with the AI teammate recovering trust in Mission 1 levels by Mission 3. AI ethicality was mostly unrelated to participants\u2019 trust in their fellow human teammates but did decrease perceptions of fear, paranoia, and skepticism in them, and trust in the human and AI teammate was not significantly related to individual performance outcomes, which both diverge from previous trust research in human-AI teams utilizing competency-based trust violations.",
"issued": {
"date-parts": [
[
"2023"
]
]
}
},
{
"title": "The complex relationship of AI ethics and trust in human\u2013AI teaming: insights from advanced real-world subject matter experts",
"author": [
{
"given": "Jeremy Lopez"
},
{
"given": "Claire Textor"
},
{
"given": "Caitlin Lancaster"
},
{
"given": "Beau Schelble"
},
{
"given": "Guo Freeman"
},
{
"given": "Rui Zhang"
}
],
"URL": "https://scholar.google.com/citations?view_op=view_citation&hl=en&user=G1CnZ38AAAAJ&cstart=30&pagesize=10&sortby=pubdate&citation_for_view=G1CnZ38AAAAJ:dTyEYWd-f8wC",
"abstract": "OBJECTIVE: >Human-autonomy teams will likely first see use within environments with ethical considerations (e.g., military, healthcare). Therefore, we must consider how to best design an ethical autonomous teammate that can promote trust within teams, an antecedent to team effectiveness. In the current study, we conducted 14 semi-structured interviews with US Air Force pilots on the topics of autonomous teammates, trust, and ethics. A thematic analysis revealed that the pilots see themselves serving a parental role alongside a developing machine teammate. As parents, the pilots would feel responsible for their machine teammate\u2019s behavior, and their unethical actions may not lead to a loss of trust. However, once the pilots feel their teammate has matured, their unethical actions would likely lower trust. To repair that trust, the pilots would want to understand their teammate\u2019s processing, yet they are concerned about their\u00a0\u2026",
"issued": {
"date-parts": [
[
"2023"
]
]
}
},
{
"title": "Balancing the Scales of Explainable and Transparent AI Agents within Human-Agent Teams",
"author": [
{
"given": "Sarvesh Sawant"
},
{
"given": "Rohit Mallick"
},
{
"given": "Camden Brady"
},
{
"given": "Kapil Chalil Madathil"
},
{
"given": "Nathan McNeese"
},
{
"given": "Jeff Bertrand"
}
],
"URL": "https://scholar.google.com/citations?view_op=view_citation&hl=en&user=G1CnZ38AAAAJ&cstart=30&pagesize=10&sortby=pubdate&citation_for_view=G1CnZ38AAAAJ:dQ2og3OwTAUC",
"abstract": "OBJECTIVE: >With the progressive nature of Human-Agent Teams becoming more and more useful for high-quality work output, there is a proportional need for bi-directional communication between teammates to increase efficient collaboration. This need is centered around the well-known issue of innate mistrust between humans and artificial intelligence, resulting in sub-optimal work. To combat this, computer scientists and humancomputer interaction researchers alike have presented and refined specific solutions to this issue through different methods of AI interpretability. These different methods include explicit AI explanations as well as implicit manipulations of the AI interface, otherwise known as AI transparency. Individually these solutions hold considerable merit in repairing the relationship of trust between teammates, but also have individual flaws. We posit that the combination of different interpretable mechanisms\u00a0\u2026",
"issued": {
"date-parts": [
[
"2023"
]
]
}
},
{
"title": "Responsible Computing",
"author": [
{
"given": "KR Fleischmann"
},
{
"given": "A McMillan-Major"
},
{
"given": "EM Bender"
},
{
"given": "B Friedman"
},
{
"given": "D Saxena"
}
],
"URL": "https://scholar.google.com/citations?view_op=view_citation&hl=en&user=G1CnZ38AAAAJ&cstart=20&pagesize=10&sortby=pubdate&citation_for_view=G1CnZ38AAAAJ:7T2F9Uy0os0C",
"abstract": "OBJECTIVE: >With great computing power must come responsible computing! Computing now impacts so many areas of our lives that a journal devoted to exploring the ethical and societal implications of computing is essential. Computing professionals must be at the forefront of raising questions and conducting research about how the technologies we help develop can best serve humanity in a responsible way.BACKGROUND: >The ACM Journal on Responsible Computing (JRC) aims to foster an interdisciplinary conversation connecting researchers and practitioners across a wide range of fields, including but not limited to computing, information, ethics, law, policy, communication, anthropology, sociology, psychology, political science, economics, and science and technology studies. Our vision for JRC is that it will be a home for outstanding research and a valued resource. We especially hope to attract articles that bring a convergent\u00a0\u2026",
"issued": {
"date-parts": [
[
"2024"
]
]
}
},
{
"title": "Towards leveraging AI-Based moderation to address emergent harassment in social virtual reality",
"author": [
{
"given": "Kelsea Schulenberg"
},
{
"given": "Li"
},
{
"given": "Guo Freeman"
},
{
"given": "Samaneh Zamanifard"
},
{
"given": "Nathan J McNeese"
}
],
"URL": "https://scholar.google.com/citations?view_op=view_citation&hl=en&user=G1CnZ38AAAAJ&cstart=40&pagesize=10&sortby=pubdate&citation_for_view=G1CnZ38AAAAJ:WA5NYHcadZ8C",
"abstract": "OBJECTIVE: >Extensive HCI research has investigated how to prevent and mitigate harassment in virtual spaces, particularly by leveraging human-based and Artificial Intelligence (AI)-based moderation. However, social Virtual Reality (VR) constitutes a novel social space that faces both intensified harassment challenges and a lack of consensus on how moderation should be approached to address such harassment. Drawing on 39 interviews with social VR users with diverse backgrounds, we investigate the perceived opportunities and limitations for leveraging AI-based moderation to address emergent harassment in social VR, and how future AI moderators can be designed to enhance such opportunities and address limitations. We provide the first empirical investigation into re-envisioning AI\u2019s new roles in innovating content moderation approaches to better combat harassment in social VR. We also highlight important\u00a0\u2026",
"issued": {
"date-parts": [
[
"2023"
]
]
}
},
{
"title": "The impact of training on human\u2013autonomy team communications and trust calibration",
"author": [
{
"given": "Craig J Johnson"
},
{
"given": "Mustafa Demir"
},
{
"given": "Nathan J McNeese"
},
{
"given": "Jamie C Gorman"
},
{
"given": "Alexandra T Wolff"
},
{
"given": "Nancy J Cooke"
}
],
"URL": "https://scholar.google.com/citations?view_op=view_citation&hl=en&user=G1CnZ38AAAAJ&cstart=20&pagesize=10&sortby=pubdate&citation_for_view=G1CnZ38AAAAJ:u9iWguZQMMsC",
"abstract": "OBJECTIVE: >This work examines two human\u2013autonomy team (HAT) training approaches that target communication and trust calibration to improve team effectiveness under degraded conditions.BACKGROUND: >Human\u2013autonomy teaming presents challenges to teamwork, some of which may be addressed through training. Factors vital to HAT performance include communication and calibrated trust.METHODS: >Thirty teams of three, including one confederate acting as an autonomous agent, received either entrainment-based coordination training, trust calibration training, or control training before executing a series of missions operating a simulated remotely piloted aircraft. Automation and autonomy failures simulating degraded conditions were injected during missions, and measures of team communication, trust, and task efficiency were collected.",
"issued": {
"date-parts": [
[
"2023"
]
]
}
},
{
"title": "Language, Camera, Autonomy! Prompt-engineered Robot Control for Rapidly Evolving Deployment",
"author": [
{
"given": "Jacob P Macdonald"
},
{
"given": "Rohit Mallick"
},
{
"given": "Allan B Wollaber"
},
{
"given": "Jaime D Pe\u00f1a"
},
{
"given": "Nathan McNeese"
},
{
"given": "Ho Chit Siu"
}
],
"URL": "https://scholar.google.com/citations?view_op=view_citation&hl=en&user=G1CnZ38AAAAJ&cstart=10&pagesize=10&sortby=pubdate&citation_for_view=G1CnZ38AAAAJ:Fu2w8maKXqMC",
"abstract": "OBJECTIVE: >The Context-observant LLM-Enabled Autonomous Robots (CLEAR) platform offers a general solution for large language model (LLM)-enabled robot autonomy. CLEAR-controlled robots use natural language to perceive and interact with their environment: contextual description deriving from computer vision and optional human commands prompt intelligent LLM responses that map to robotic actions. By emphasizing prompting, system behavior is programmed without manipulating code, and unlike other LLM-based robot control methods, we do not perform any model fine-tuning. CLEAR employs off-the-shelf pre-trained machine learning models for controlling robots ranging from simulated quadcopters to terrestrial quadrupeds. We provide the open-source CLEAR platform, along with sample implementations for a Unity-based quadcopter and Boston Dynamics Spot\u00ae robot. Each LLM used, GPT-3.5, GPT-4, and\u00a0\u2026",
"issued": {
"date-parts": [
[
"2024"
]
]
}
},
{
"title": "Evaluating Cross-Training\u2019s Impact on Perceived Teaming Outcomes for Human-AI Teams",
"author": [
{
"given": "Caitlin Lancaster"
},
{
"given": "Hanna Gilreath"
},
{
"given": "Rohit Mallick"
},
{
"given": "Nathan J McNeese"
}
],
"URL": "https://scholar.google.com/citations?view_op=view_citation&hl=en&user=G1CnZ38AAAAJ&pagesize=10&sortby=pubdate&citation_for_view=G1CnZ38AAAAJ:VLnqNzywnoUC",
"abstract": "OBJECTIVE: >The rapid integration of artificial intelligence (AI) across various industries has given rise to human-AI teams (HATs), where collaboration between humans and AI may leverage their unique strengths. However, these teams often face performance challenges due to mismatches between human expectations and AI capabilities, hindering the effectiveness of these future workforce teams. Addressing these discrepancies, team training, particularly cross-training, has emerged as a promising intervention to align expectations and enhance team dynamics. This study explores the efficacy of different cross-training approaches and human/AI team role assignments on team training reactions and perceived task performance in an advertising co-creation task. The findings suggest that cross-training significantly improves both training reactions and task performance perceptions. By extending traditional team training methods\u00a0\u2026",
"issued": {
"date-parts": [
[
"2024"
]
]
}
},
{
"title": "A Comment on \u201cCan You Outsmart the Robot? An Unexpected Path to Work Meaningfulness\u201d by Bernadeta Go\u0161Tautait\u0117, Irina Liubert\u0117, Sharon K. Parker, and Ilona Bu\u010di\u016bnien\u0117: Calling \u2026",
"author": [
{
"given": "Thomas A O\u2019Neill"
},
{
"given": "Christopher Flathmann"
},
{
"given": "Nathan J McNeese"
},
{
"given": "Samantha K Jones"
},
{
"given": "Beau G Schelble"
}
],
"URL": "https://scholar.google.com/citations?view_op=view_citation&hl=en&user=G1CnZ38AAAAJ&cstart=10&pagesize=10&sortby=pubdate&citation_for_view=G1CnZ38AAAAJ:kzcrU_BdoSEC",
"abstract": "OBJECTIVE: >The article \u201cCan You Outsmart the Robot? An Unexpected Path to Work Meaningfulness\u201d by Go\u0161tautait\u0117, Liubert\u0117, Parker, and Bu\u010di\u016bnien\u0117 (2023) examines how people handled the integration of industrial robots into two Lithuanian manufacturing facilities. We applaud the authors for this work and we hope scholars will continue to examine how humans can work in a healthy way alongside robots, artificial intelligence (AI), automation, and disembodied agent peers both now and in the future of work. In particular, much more fieldwork like this is needed to help guide our understanding of modern technological appropriation by humans. The article by Go\u0161tautait\u0117 et al.(2023) was concerned with human work meaningfulness in the context of low-reliability robots, as well as automation and autonomy more broadly. The authors offer several interesting observations about their findings that have important implications to\u00a0\u2026",
"issued": {
"date-parts": [
[
"2024"
]
]
}
},
{
"title": "Public Perceptions, Critical Awareness and Community Discourse on AI Ethics: Evidence from an Online Discussion Forum",
"author": [
{
"given": "Subhasree Sengupta"
},
{
"given": "Swapnil Srivastava"
},
{
"given": "Nathan McNeese"
}
],
"URL": "https://scholar.google.com/citations?view_op=view_citation&hl=en&user=G1CnZ38AAAAJ&cstart=10&pagesize=10&sortby=pubdate&citation_for_view=G1CnZ38AAAAJ:JQOojiI6XY0C",
"abstract": "OBJECTIVE: >As Artificial Intelligence (AI) become increasingly ingrained into society, ethical and regularity concerns become critical. Given the vast array of philosophical considerations of AI ethics, there is a pressing need to understand and balance public opinion and expectations of how AI ethics should be defined and implemented, such that it centers the voice of experts and non-experts alike. This investigation explores a subreddit r/aiethics through a multi-methodological, multi-level approach. The analysis yields six conversational themes, sentiment trends, and emergent roles that elicit narratives associated with expanding implementation, policy, critical literacy, communal preparedness, and increased awareness towards combining technical and social aspects of AI ethics. Such insights can help to distill necessary considerations for the practice of AI ethics beyond scholarly traditions and how informal spaces (such as virtual channels) can and should act as avenues of learning, raising critical consciousness, bolstering connectivity, and enhancing narrative agency on AI ethics.",
"issued": {
"date-parts": [
[
"2024"
]
]
}
},
{
"title": "\u201cIt\u2019s Everybody\u2019s Role to Speak Up... But Not Everyone Will\u201d: Understanding AI Professionals\u2019 Perceptions of Accountability for AI Bias Mitigation",
"author": [
{
"given": "Caitlin M Lancaster"
},
{
"given": "Kelsea Schulenberg"
},
{
"given": "Christopher Flathmann"
},
{
"given": "Nathan J McNeese"
},
{
"given": "Guo Freeman"
}
],
"URL": "https://scholar.google.com/citations?view_op=view_citation&hl=en&user=G1CnZ38AAAAJ&cstart=10&pagesize=10&sortby=pubdate&citation_for_view=G1CnZ38AAAAJ:ZuybSZzF8UAC",
"abstract": "OBJECTIVE: >In this paper, we investigate the perceptions of AI professionals for their accountability for mitigating AI bias. Our work is motivated by calls for socially responsible AI development and governance in the face of societal harm but a lack of accountability across the entire socio-technical system. In particular, we explore a gap in the field stemming from the lack of empirical data needed to conclude how real AI professionals view bias mitigation and why individual AI professionals may be prevented from taking accountability even if they have the technical ability to do so. This gap is concerning as larger responsible AI efforts inherently rely on individuals who contribute to designing, developing, and deploying AI technologies and mitigation solutions. Through semi-structured interviews with AI professionals from diverse roles, organizations, and industries working on development projects, we identify that AI professionals\u00a0\u2026",
"issued": {
"date-parts": [
[
"2024"
]
]
}
},
{
"title": "Leveraging Artificial Intelligence to Promote Awareness in Augmented Reality Systems",
"author": [
{
"given": "Wangfan Li"
},
{
"given": "Rohit Mallick"
},
{
"given": "Carlos Toxtli-Hernandez"
},
{
"given": "Christopher Flathmann"
},
{
"given": "Nathan J McNeese"
}
],
"URL": "https://scholar.google.com/citations?view_op=view_citation&hl=en&user=G1CnZ38AAAAJ&pagesize=10&sortby=pubdate&citation_for_view=G1CnZ38AAAAJ:tKAzc9rXhukC",
"abstract": "OBJECTIVE: >Recent developments in artificial intelligence (AI) have permeated through an array of different immersive environments, including virtual, augmented, and mixed realities. AI brings a wealth of potential that centers on its ability to critically analyze environments, identify relevant artifacts to a goal or action, and then autonomously execute decision-making strategies to optimize the reward-to-risk ratio. However, the inherent benefits of AI are not without disadvantages as the autonomy and communication methodology can interfere with the human's awareness of their environment. More specifically in the case of autonomy, the relevant human-computer interaction literature cites that high autonomy results in an \"out-of-the-loop\" experience for the human such that they are not aware of critical artifacts or situational changes that require their attention. At the same time, low autonomy of an AI system can limit the human's own autonomy with repeated requests to approve its decisions. In these circumstances, humans enter into supervisor roles, which tend to increase their workload and, therefore, decrease their awareness in a multitude of ways. In this position statement, we call for the development of human-centered AI in immersive environments to sustain and promote awareness. It is our position then that we believe with the inherent risk presented in both AI and AR/VR systems, we need to examine the interaction between them when we integrate the two to create a new system for any unforeseen risks, and that it is crucial to do so because of its practical application in many high-risk environments.",
"issued": {
"date-parts": [
[
"2024"
]
]
}
},
{
"title": "Understanding and Mitigating Challenges for Non-Profit Driven Indie Game Development to Innovate Game Production",
"author": [
{
"given": "Guo Freeman"
},
{
"given": "Li"
},
{
"given": "Nathan Mcneese"
},
{
"given": "Kelsea Schulenberg"
}
],
"URL": "https://scholar.google.com/citations?view_op=view_citation&hl=en&user=G1CnZ38AAAAJ&cstart=40&pagesize=10&sortby=pubdate&citation_for_view=G1CnZ38AAAAJ:bnK-pcrLprsC",
"abstract": "OBJECTIVE: >Non-profit driven indie game development represents a growing open and participatory game production model as an alternative to the traditional mainstream gaming industry. However, this community is also facing and coping with tensions and dilemmas brought by its focus on artistic and cultural values over economic benefits. Using 28 interviews with indie game developers with a non-profit agenda across various cultures, we investigate the challenges non-profit driven indie game developers face, which mainly emerge in their personal or collaborative labor and their endeavors to secure sustainable resources and produce quality products. Our investigation extends the current HCI knowledge of the democratization of technology and its impact on the trajectory of innovating, designing, and producing future (gaming) technologies. These insights may help increase the opportunities for and retention of previously\u00a0\u2026",
"issued": {
"date-parts": [
[
"2023"
]
]
}
},
{
"title": "Communication Strategies in Human-Autonomy Teams During Technological Failures",
"author": [
{
"given": "Julie L Harrison"
},
{
"given": "Shiwen Zhou"
},
{
"given": "Matthew J Scalia"
},
{
"given": "David AP Grimm"
},
{
"given": "Mustafa Demir"
},
{
"given": "Nathan J McNeese"
}
],
"URL": "https://scholar.google.com/citations?view_op=view_citation&hl=en&user=G1CnZ38AAAAJ&cstart=10&pagesize=10&sortby=pubdate&citation_for_view=G1CnZ38AAAAJ:uLbwQdceFCQC",
"abstract": "OBJECTIVE: >This study examines low-, medium-, and high-performing Human-Autonomy Teams\u2019 (HATs\u2019) communication strategies during various technological failures that impact routine communication strategies to adapt to the task environment.BACKGROUND: >Teams must adapt their communication strategies during dynamic tasks, where more successful teams make more substantial adaptations. Adaptations in communication strategies may explain how successful HATs overcome technological failures. Further, technological failures of variable severity may alter communication strategies of HATs at different performance levels in their attempts to overcome each failure.METHODS: >HATs in a Remotely Piloted Aircraft System-Synthetic Task Environment (RPAS-STE), involving three team members, were tasked with photographing targets. Each triad had two randomly assigned participants in navigator and photographer\u00a0\u2026",
"issued": {
"date-parts": [
[
"2024"
]
]
}
},
{
"title": "Examining the impact of varying levels of AI teammate influence on human-AI teams",
"author": [
{
"given": "Christopher Flathmann"
},
{
"given": "Beau G Schelble"
},
{
"given": "Patrick J Rosopa"
},
{
"given": "Nathan J McNeese"
},
{
"given": "Rohit Mallick"
}
],
"URL": "https://scholar.google.com/citations?view_op=view_citation&hl=en&user=G1CnZ38AAAAJ&cstart=30&pagesize=10&sortby=pubdate&citation_for_view=G1CnZ38AAAAJ:AXPGKjj_ei8C",
"abstract": "OBJECTIVE: >The implementation of AI teammates is creating a wealth of research that examines how AI teammates impact human-AI teams. However, AI teammates themselves are not static, and their roles and responsibilities in human-AI teams are likely to change as technologies advance in the coming years. As a result of this advancement, AI teammates will gain influence in teams, which refers to their ability to change and manipulate a team\u2019s shared resources. This study uses a mixed-methods experiment to examine how the amount of influence AI teammates have on a team\u2019s shared resources can impact the team outcomes of human teammate performance, teammate perceptions, and whole-team perception. Results indicate that AI teammates that increase their influence on shared resources over time can stagnate the improvement of human performance, but AI teammates that decrease their influence on shared\u00a0\u2026",
"issued": {
"date-parts": [
[
"2023"
]
]
}
},
{
"title": "Knowing Unknown Teammates: Exploring Anonymity and Explanations in a Teammate Information-Sharing Recommender System",
"author": [
{
"given": "Geoff Musick"
},
{
"given": "Elizabeth S Gilman"
},
{
"given": "Wen Duan"
},
{
"given": "Nathan J McNeese"
},
{
"given": "Bart Knijnenburg"
}
],
"URL": "https://scholar.google.com/citations?view_op=view_citation&hl=en&user=G1CnZ38AAAAJ&cstart=20&pagesize=10&sortby=pubdate&citation_for_view=G1CnZ38AAAAJ:5awf1xo2G04C",
"abstract": "OBJECTIVE: >A growing organizational trend is to utilize ad-hoc team formation which allows for teams to intentionally form based on the member skills required to accomplish a specific task. Due to the unfamiliar nature of these teams, teammates are often limited by their understanding of one another (e.g., teammate preferences, tendencies, attitudes) which limits the team's functioning and efficiency. This study conceptualizes and investigates the use of a teammate information-sharing recommender system which selectively shares interpersonal recommendations between unfamiliar teammates (e.g., \"Your voice may be overshadowed by this teammate when making decisions...\") to promote teammate understanding. Through a mixed-methods approach involving 105 participants working on actual unfamiliar teams, this study explores how presentation elements such as anonymity and explanations influence system perceptions\u00a0\u2026",
"issued": {
"date-parts": [
[
"2023"
]
]
}
},
{
"title": "Development of a Real-Time Trust/Distrust Metric Using Interactive Hybrid Cognitive Task Analysis",
"author": [
{
"given": "Shiwen Zhou"
},
{
"given": "Xioyun Yin"
},
{
"given": "Matthew J Scalia"
},
{
"given": "Ruihao Zhang"
},
{
"given": "Jamie C Gorman"
},
{
"given": "Nathan J McNeese"
}
],
"URL": "https://scholar.google.com/citations?view_op=view_citation&hl=en&user=G1CnZ38AAAAJ&cstart=30&pagesize=10&sortby=pubdate&citation_for_view=G1CnZ38AAAAJ:VL0QpB8kHFEC",
"abstract": "OBJECTIVE: >While there is increased interest in how trust spreads in Human Autonomy Teams (HATs), most trust measurements are subjective and do not examine real-time changes in trust. To develop a trust metric that consists of objective variables influenced by trust/distrust manipulations, we conducted an Interactive hybrid Cognitive Task Analysis (IhCTA) for a Remotely Piloted Aerial System (RPAS) HAT. The IhCTA adapted parts of the hybrid Cognitive Task Analysis (hCTA) framework. In this paper, we present the four steps of the IhCTA approach, including 1) generating a scenario task overview, 2) generating teammate-specific event flow diagrams, 3) identifying interactions and interdependencies impacted by trust/distrust manipulations, and 4) processing RPAS variables based on the IhCTA to create a metric. We demonstrate the application of the metric through a case study that examines how the influence of specific\u00a0\u2026",
"issued": {
"date-parts": [
[
"2023"
]
]
}
},
{
"title": "Leveraging Artificial Intelligence for Team Cognition in Human-AI Teams",
"author": [
{
"given": "Beau Gregory Schelble"
},
{
"given": "Nathan McNeese"
},
{
"given": "Guo Freeman"
},
{
"given": "Bart Knijnenburg"
}
],
"URL": "https://scholar.google.com/citations?view_op=view_citation&hl=en&user=G1CnZ38AAAAJ&cstart=20&pagesize=10&sortby=pubdate&citation_for_view=G1CnZ38AAAAJ:_Ybze24A_UAC",
"abstract": "OBJECTIVE: >Advances in artificial intelligence (AI) technologies have enabled AI to be applied across a wide variety of new fields like cryptography, art, and data analysis. Several of these fields are social in nature, including decision-making and teaming, which introduces a new set of challenges for AI research. While each of these fields has its unique challenges, the area of human-AI teaming is beset with many that center around the expectations and abilities of AI teammates. One such challenge is understanding team cognition in these human-AI teams and AI teammates\u2019 ability to contribute towards, support, and encourage it. Team cognition is defined as any cognitive activity among the team members regarding their shared knowledge of the team and task, including concepts such as shared task or team mental models, team situation awareness, or schema similarity. Team cognition is fundamental to effective teams, as it is\u00a0\u2026",
"issued": {
"date-parts": [
[
"2023"
]
]
}
},
{
"title": "Stepping out of the shadow of human-human teaming: Crafting a unique identity for human-autonomy teams",
"author": [
{
"given": "Nathan J McNeese"
},
{
"given": "Christopher Flathmann"
},
{
"given": "Eduardo Salas"
}
],
"URL": "https://scholar.google.com/citations?view_op=view_citation&hl=en&user=G1CnZ38AAAAJ&cstart=20&pagesize=10&sortby=pubdate&citation_for_view=G1CnZ38AAAAJ:9vf0nzSNQJEC",
"abstract": "OBJECTIVE: >While the rapid expansion of human-autonomy teaming can and should be aided by using historically beneficial theories from human-human teams, the advent of human-autonomy teaming should also possess a unique identity that explicitly denotes the specific advantages, limitations, and considerations these teams will present to society. As such, continuing to frame human-autonomy teams as technologically advanced human-human teams might not only restrict the global effectiveness of these teams but doing so might also stagnate these teams from being well perceived and respected by humans. However, recent efforts to differentiate human-autonomy teams have prioritized the differentiation from human-autonomy interactions rather than human-human teams. The following article further differentiates human-autonomy teams from human-human teams by first discussing the core differentiating\u00a0\u2026",
"issued": {
"date-parts": [
[
"2023"
]
]
}
},
{
"title": "Human factors considerations for the context-aware design of adaptive autonomous teammates",
"author": [
{
"given": "Allyson I Hauptman"
},
{
"given": "Rohit Mallick"
},
{
"given": "Christopher Flathmann"
},
{
"given": "Nathan J McNeese"
}
],
"URL": "https://scholar.google.com/citations?view_op=view_citation&hl=en&user=G1CnZ38AAAAJ&pagesize=10&sortby=pubdate&citation_for_view=G1CnZ38AAAAJ:fEOibwPWpKIC",
"abstract": "OBJECTIVE: >Despite the gains in performance that AI can bring to human-AI teams, they also present them with new challenges, such as the decline in human ability to respond to AI failures as the AI becomes more autonomous. This challenge is particularly dangerous in human-AI teams, where the AI holds a unique role in the team\u2019s success. Thus, it is imperative that researchers find solutions for designing AI team-mates that consider their human team-mates\u2019 needs in their adaptation logic. This study explores adaptive autonomy as a solution to overcoming these challenges. We conducted twelve contextual inquiries with professionals in two teaming contexts in order to understand how human teammate perceptions can be used to determine optimal autonomy levels for AI team-mates. The results of this study will enable the human factors community to develop AI team-mates that can enhance their team\u2019s performance while\u00a0\u2026",
"issued": {
"date-parts": [
[
"2024"
]
]
}
},
{
"title": "Understanding the influence of AI autonomy on AI explainability levels in human-AI teams using a mixed methods approach",
"author": [
{
"given": "Allyson I Hauptman"
},
{
"given": "Beau G Schelble"
},
{
"given": "Wen Duan"
},
{
"given": "Christopher Flathmann"
},
{
"given": "Nathan J McNeese"
}
],
"URL": "https://scholar.google.com/citations?view_op=view_citation&hl=en&user=G1CnZ38AAAAJ&pagesize=10&sortby=pubdate&citation_for_view=G1CnZ38AAAAJ:tzM49s52ZIMC",
"abstract": "OBJECTIVE: >An obstacle to effective teaming between humans and AI is the agent\u2019s\" black box\" design. AI explanations have proven benefits, but few studies have explored the effects that explanations can have in a teaming environment with AI agents operating at heightened levels of autonomy. We conducted two complementary studies, an experiment and participatory design sessions, investigating the effect that varying levels of AI explainability and AI autonomy have on the participants\u2019 perceived trust and competence of an AI teammate to address this research gap. The results of the experiment were counter-intuitive, where the participants actually perceived the lower explainability agent as both more trustworthy and more competent. The participatory design sessions further revealed how a team\u2019s need to know influences when and what teammates need explained from AI teammates. Based on these findings, several\u00a0\u2026",
"issued": {
"date-parts": [
[
"2024"
]