Alex Richter on Computer Supported Collaborative Work, webs of participation, and human-AI collaboration in the metaverse (AC Ep65)
“Trust is a key ingredient when you look into Explainable AI; it’s about how we can build trust toward these systems.”
– Alex Richter
About Alex Richter
Alexander Richter is Professor of Information Systems at Victoria University of Wellington in New Zealand, where he has also served as Inaugural Director of the Executive MBA and Associate Dean. He specializes in the transformative impact of IT in the workplace. He has published more than 100 articles in leading academic journals and conferences, winning several best paper awards, and his work has been covered by many major news outlets. He also has extensive industry experience and has led over 25 projects funded by companies and organizations, including the European Union.
Website: www.alexanderrichter.name
University Website: people.wgtn.ac.nz/alex.richter
LinkedIn: Alexander Richter
Twitter: @arimue
Publications (Google Scholar): Alexander Richter
Publications (ResearchGate): Alexander Richter
What you will learn
- The significance of CSCW in human-centered collaboration
- Trust as a cornerstone of explainable AI
- Emerging technologies enhancing human-AI teamwork
- The role of context in sense-making with AI tools
- Shifts in organizational structures due to AI integration
- The importance of inclusivity in AI applications
- Foresight and future thinking in the age of AI
Episode Resources
Transcript
Ross: Alex, it’s wonderful to have you on the show.
Alex Richter: Thank you for having me, Ross.
Ross: Your work is fascinating, and many strands of it are extremely relevant to amplifying cognition. So let’s dive in and see where we can get to. You were just saying to me a moment ago that the origins of a lot of your work are around what you call CSCW. So, what is that, and how has that provided a framework for your work?
Alex: Yeah, CSCW (Computer-Supported Cooperative Work) or Computer-Supported Collaborative Work is the idea that we put the human at the center and want to understand how they work. And now, for quite a few years, we’ve had more and more emerging technologies that can support this collaboration. The idea of this research field is that we work together in an interdisciplinary way to support human collaboration, and now more and more, human-AI collaboration. What fascinates me about this is that you need to understand the IT part of it—what is possible—but more importantly, you need to understand humans from a psychological perspective, understanding individuals, but also how teams and groups of people work. So, from a sociological perspective, and then often embedded in organizational practices or communities. There are a lot of different perspectives that need to be shared to design meaningful collaboration.
Ross: As you say, the technologies and potential are changing now, but taking a broader look at Computer-Supported Collaborative Work, are there any principles or foundations around this body of work that inform the studies that have been done?
Alex: I think there are a couple of recurring themes. There are actually different traditions. For my own history, I’m part of the European tradition. When I was in Munich, Zurich, and especially Copenhagen, there’s a strong Scandinavian tradition. For me, the term “community” is quite important—what it means to be part of a community. That fits nicely with what I experienced during my time there with the culture. Another term that always comes back to me in various forms is “awareness.” The idea is that if we want to work successfully, we need to have a good understanding of what others are doing, maybe even what others think or feel. That leads to other important ingredients of successful collaboration, like trust, which is currently a very important topic in human-AI collaboration. A lot of what I see is that people are concerned about trust—how can we build it? For me, that’s a key ingredient. When you look into Explainable AI, it’s about how we can build trust toward these systems. But ultimately, originally, trust between humans is obviously very important. Being aware of what others are doing and why they’re doing it is always crucial.
Ross: You were talking about Computer-Supported Collaborative Work, and I suppose that initial framing was around collaborative work between humans. Have you seen any technologies that support greater trust or awareness between humans, and so facilitate collaboration through computers?
Alex: In my own research, an important upgrade was when we had Web 2.0 or social software, or social media—there are many terms for it, like Enterprise 2.0—but basically, these awareness streams and the simplicity of the platforms made it easy to post and share. I think there were great concepts before, but finally, thanks to Ajax and other technologies, these ideas were implemented. The technology wasn’t brand new, but it was finally accessible, and people could use the internet and participate. That got me excited to do a PhD and to share how this could facilitate better collaboration.
Ross: I love that phrase, “web of participation.” Your work came to my attention because you and some of your students or colleagues did a literature review on human-AI teams and some of the success factors, challenges, and use cases. What stood out to you in that paper regarding current research in this space?
Alex: I would say there’s a general trend in academia where more and more research is being published, and speed is very important. AI excites so many people, and many colleagues are working on it. One of the challenges is getting an overview of what has already been done. For a PhD student, especially the first author of the paper you mentioned—Chloe—it was important for her to understand the existing body of work. Her idea is to understand the emergence of human-AI teams and how AI is taking on some roles and responsibilities previously held by humans. This changes how we work and communicate, and it ultimately changes organizational structures, even if not formally right away. For example, communication structures are already changing. This isn’t surprising—it has happened before with social software and social media. But I find it interesting that there isn’t much research on the changes in these structures, likely due to the difficulty in accessing data. There’s a lot of research on the effects of AI—both positive and negative. I don’t have one specific study in mind, but what’s key is to be aware of the many different angles to look at. That was the purpose of the literature review—to get a broader, higher-level perspective of what’s happening and the emerging trends.
Ross: Absolutely. We’ll share the link to that in the show notes. With that broader view, are there any particularly exciting directions we need to explore to advance human-AI teams?
Alex: One pattern I noticed from my previous research in social media is that when people look at these tools, it’s not immediately clear how to use them. We call these “use cases,” but essentially, it’s about what you can do with the tool. Depending on what you do, you can assess the benefits, risks, and so on. What excites me is that it depends heavily on context—my experience, my organization, my department, and my way of working. A lot of sense-making is going on at an individual level: how can I use generative AI to be more productive or efficient, while maintaining balance and doing what feels right? These use cases are exciting because, when we conducted interviews, we saw a diverse range of perspectives based on the department people worked in and the use cases they were familiar with. Some heard about using AI for ideation and thought, “That’s exciting! Let’s try that.” Others heard about using chatbots for customer interactions, but they heard negative examples and were worried. They said, “We should be careful.” There are obviously concerns about ethics and privacy as well, but it really depends on the context. Ultimately, the use cases help us decide what is good for us and what to prioritize.
Ross: So there’s kind of a discovery process, where at an organizational level, you can identify use cases to instruct people on and deploy, with safeguards in place. But it’s also a sense-making process at the individual level, where people are figuring out how to use these tools. Everyone talks about training and generative AI, but maybe it’s more about facilitating the sense-making process to discover how these tools can be used individually.
Alex: Absolutely. You have to experience it for yourself and learn. It’s good to be aware of the risks, but you need to get involved. Otherwise, it’s hard to discuss it theoretically. It’s like it was before with social media—if you had a text input field, you could post something. For a long time, in our research domain, we tried to make sense of it based on functions, but especially with AI, the functions are not immediately clear. That’s why we invest so much effort into transparency—making it clearer what happens in the background, what you can do with the tool, and where the limitations lie.
Ross: So, we’re talking about sense-making in terms of how we use these tools. But if we’re talking about amplifying cognition, can we use generative AI or other tools to assist our own sense-making across any domain? How can we support better human sense-making?
Alex: I think one point is that generative AI obviously can create a lot for us—that’s where the term comes from—but it’s also a very easy-to-use interface for accessing a lot of what’s going on. From my personal experience with ChatGPT and others like Google Gemini, it provides a very easy-to-use way of accessing all this knowledge. So, when you think about the definition of generative AI, there may be a smaller definition—like it’s just for generating content—but for me, the more impactful effect is that you can use it to access many other AI tools and break down the knowledge in ways that are easier to use and consume.
Ross: I think there are some people who are very skilled at that—they’re using generative AI very well to assist in their sense-making or learning. Others are probably unsure where to start, and there are probably tools that could facilitate that. Are there any approaches that can help people be better at sense-making, either generally or in a way that’s relevant to a particular learning style?
Alex: I’m not sure if this is where you’re going, but when you said that, I thought about the fact that we all have individual learning styles. What I find interesting about generative AI is that it’s quite inclusive. I had feedback from Executive MBA students who, for example, are neurodivergent learners. They told me it’s helpful for them because they can control the speed of how they consume the information. Sometimes, they go through it quickly because they’re really into it, and other times, they need it broken down. So, you’re in the driver’s seat. You decide how to consume the information—whether that’s in terms of speed or complexity. I think that’s a very important aspect of learning and sense-making in general. So yeah, inclusivity is definitely a dimension worth considering.
Ross: Well, to your point around consuming information, I like the term “assimilating” information because it suggests the information is becoming part of your knowledge structures. So, we’ve talked about individual sense-making. Is there a way we can frame this more broadly, to help facilitate organizational sense-making?
Alex: Yeah, we’re working with several companies, and I have one specific example in mind where we tried to support the organizational sense-making process by first creating awareness. When we talk about AI, we might be discussing different things. The use cases can help us reach common ground. By the way, “common ground” is another key CSCW concept. For successful collaboration, you need to look in the same direction, right? And you need to know what that direction is. Defining a set of use cases can ensure you’re discussing the same types of AI usage. You can then discuss the specific benefits as an organization, and use cases help you prioritize. Of course, you also need to be aware of the risks. One insight I got from a focus group during the implementation of generative AI in this company was that they had some low-risk use cases, but the more exciting ones were higher-risk. They agreed to pursue both. They wanted to start with some low-key use cases they knew would go smoothly in terms of privacy and ethics, but they also wanted to push boundaries with higher-risk use cases while creating awareness of the risks. They got top-level support and made sure everyone, including the workers’ council, was on board. So, that’s one way of using use cases—to balance higher-risk but potentially more beneficial options with safer, low-risk use cases.
Ross: Sense-making relates very much to foresight. Company leadership needs to make strategic decisions in a fast-changing world, and they need to make sense of their business environment—what are the shifts, what’s the competition, what are the opportunities? Foresight helps frame where you see things going. Effective foresight is fueled by sense-making. Does any of your work address how to facilitate useful foresight, whether individually or organizationally?
Alex: Yes. Especially with my wife, Shahper—who is also an academic—and a few other colleagues, we thought, early last year when ChatGPT had a big impact, “Why was this such a surprise?” AI is not a new topic. When you look around, obviously now it’s more of a hype, but it’s been around for a long time. Some of the concepts we’re still discussing now come from the 1950s and 60s. So, why was it so surprising? I think it’s because the way we do research is mainly driven by trying to understand what has happened. There’s a good reason for that because we can learn a lot from the past. But if ChatGPT taught us one thing, it’s that we also need to look more into the future. In our domain—whether it’s CSCW or Information Systems Research—we have the tools to do that. Foresight or future thinking is about anticipating—not necessarily predicting—but preparing for different scenarios. That’s exciting, and I hope we’ll see more of this type of research. For example, we presented a study at a conference in June where we looked at human-AI collaboration in the metaverse, whatever that is. It’s not just sitting in front of a screen with ChatGPT but actually having avatars talking to us, interacting with us, and at some point, having virtual teams where it’s no longer a big difference whether I’m communicating with a human or an AI-based avatar.
Ross: One of the first thoughts that comes to mind is if we have a metaverse where a team has some humans represented by avatars and some AI avatars, is it better for the AI avatars to be as human-like as possible, or would it be better for them to have distinct visual characteristics or communication styles that are not human-like?
Alex: That’s a great question. One of my PhD students, Bayu, thought a bit about this. His topic is actually visibility in hybrid work, and he found that avatars will play a bigger role. Avatars have been around for a while, depending on how you define them. In a recent study we presented, we tried to understand how much fidelity you need for an avatar. Again, it depends on the use case—sorry to be repetitive—but understanding the context is essential. We’re extending this toward AI avatars. There’s a recent study from colleagues at the University of Sydney, led by Mike Seymour, and they found that the more human-like an AI avatar is, the more trustworthy it appears to people. That seems intuitive, but it contradicts earlier studies that suggested people don’t like AI that is too human-like because it feels like it’s imitating us. One term used in this context is the “uncanny valley.” But Mike Seymour’s study is worth watching. They present a paper using an avatar that is so human-like that people commented on how relatable it felt. As technology advances, and as we as humans adjust our perceptions, we may become more comfortable with human-like avatars. But again, this varies depending on the context. Do we want AI to make decisions about bank loans, or healthcare, for example? We’ll see many more studies in this area, and as perceptions change, so will our ideas about what we trust and how transparent AI needs to be. Already, some chatbots are so human-like that it’s not immediately clear whether you’re interacting with a human or a bot.
Ross: A very interesting space. To wrap up, what excites you the most right now? Where will you focus your energy in exploring the possibilities we’ve been discussing?
Alex: What excites me most right now is seeing how organizations—companies, governmental organizations, and communities—are making sense of what’s happening and trying to find their way. What I like is that there isn’t a one-size-fits-all approach, especially not in this context. Here in New Zealand, I love discussing cultural values with my Executive MBA students and how our society, which is very aware of values and community, can embrace AI differently from other cultures. Again, it comes back to context—cultural context, in this case. It’s exciting to see diverse case studies where sometimes we get counterintuitive or contradictory effects depending on the organization. We won’t be able to address biases in AI as long as we don’t address biases in society. How can we expect AI to get things right if we as a society don’t get things right? This ties back to the very beginning of our conversation about CSCW. It’s important for CSCW to also include sociologists to understand society, how we develop, and how this shapes technology. Maybe, in the long run, technology will also contribute to shaping society. That will keep me busy, I think.
Ross: Absolutely. As you say, this is all about humanity—technology is just an aid. Thank you so much for your time and insights. I’m fascinated by your work and will definitely keep following it.
Alex: Thank you very much, Ross. Thanks for having me.