Seven Themes for Effective Human-AI Teams
People-in-the-loop, collaborative decision-making, relationship-driven AI, character as a differentiator, the ritual imperative, and the need for transparency, fairness, and trust
Welcome to Culturescapes by Kursat Ozenc. This newsletter examines culture through the design lens. It discusses big ideas and small practices for crafting better cultures in our personal, work, and social spheres and their intersection with technology.
The key idea behind this issue is the emergence of human-AI teams and what they mean for workplace culture.
As we leverage AI-powered agents like ChatGPT, Claude, and Copilot in our everyday tasks, we're not just using tools but entering the realm of human-AI teams. Moving from tools to teams is an exciting shift, with companies like Asana boldly declaring 'A.I. is no longer a tool; it's a teammate' and Microsoft positioning Copilot as an A.I. companion. The potential for innovation and collaboration is truly inspiring.
You might be asking what it means to have a human-AI team. Imagine that you have a team of five people and an AI teammate. You all meet every day to discuss what to tackle, review the progress of ongoing tasks, and solve issues on the go. Your AI teammate lives in your communication channels, attends meetings, creates files, makes progress on tasks, and so forth. It's an active, participating member with its own demeanor and working style.
I surveyed the most recent research on human-AI teaming and identified seven themes that will be critical in shaping human-AI work cultures in the years ahead. These themes include people-in-the-loop, collaborative decision-making, relationship-driven interaction, character as a differentiator, ritual imperative, the need for transparency, fairness, and trust.
In identifying these themes, I also tapped into Richard Hackman's conditions for high-performing teams: compelling direction, interdependency, enabling structures (norms, precise tasks), expert coaching, and supportive context.
Let's dive into the themes!
1. People and their A.I. mates are in the loop
First and foremost, it's essential to recognize that humans will continue to be in the driver's seat in human-AI teams. The 'human-in-the-loop' concept means people are instrumental in training, operating, and shaping AI/ML models, ensuring they perform well and ethically. This role will extend to our A.I. teammates, with companies using relatable metaphors like apprentice and companion to underscore people's significance in this dynamic. To do a good job as a coach, we must have our values and principles in place (see themes 6 and 7).
2. Human and AI are a Good Yin and Yang
People and A.I. have distinct strengths. We're good at intuition, abductive reasoning, and empathy. A.I. is good at analytical thinking, parsing and analyzing tons of data in a flash. With these two complementary skill sets, human-AI teammates will act like power couples, collaboratively deciding and acting on tasks. This collaboration could redefine job roles and responsibilities, with humans focusing on tasks requiring creativity and empathy and A.I. handling more analytical tasks. An attractive research space in that collaboration is dynamic role-switching between the partners (when you hand off decision-making to the other party and vice versa) and its boundaries. For example, in a driving scenario, you switch driver roles with your autopilot based on specific triggers.
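The role-switching idea above can be sketched in a few lines of code. This is a minimal, hypothetical illustration: the trigger names (`confidence`, `stakes`) and the 0.8 threshold are my own assumptions, not from any real autopilot or agent framework.

```python
def decide_driver(confidence: float, stakes: str) -> str:
    """Return which partner should hold decision-making authority.

    confidence: the AI's self-reported confidence in its recommendation (0-1).
    stakes: 'low' or 'high' -- how costly a wrong decision would be.
    """
    # High-stakes or low-confidence situations hand control back to the human.
    if stakes == "high" or confidence < 0.8:
        return "human"
    # Routine, high-confidence decisions stay with the AI teammate.
    return "ai"

print(decide_driver(0.95, "low"))   # routine task: AI keeps the wheel
print(decide_driver(0.95, "high"))  # high stakes: human takes over
```

The interesting design question is less the code than the triggers themselves: which signals should prompt a handoff, and who gets to define them.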
3. Human-AI Teaming will be Relationship-driven, marked by how well humans and A.I. know each other
The more humans and A.I. understand each other's capabilities and limitations, the better they can collaborate and make informed decisions. For this understanding, language will be critical. There's a lot of hype around prompt engineering, and I would like to frame it instead as a relationship-building challenge. When you meet a new person, you start the conversation carefully, observing what resonates with the other person and what makes the exchange click. Similarly, with your A.I., you build rapport by giving it context, roles, and tasks. The quality of your work improves as you communicate clearly and articulate what you need. Your A.I. teammate will develop similar relational attributes as it becomes more familiar with your tone and word choices.
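The context-roles-tasks framing can be made concrete with a small sketch. The function and template below are hypothetical, not tied to any particular model's API; they simply show how a structured prompt gives an AI teammate the background a new human colleague would also need.

```python
def build_prompt(role: str, context: str, task: str) -> str:
    """Assemble a structured prompt from role, context, and task."""
    return (
        f"Role: You are {role}.\n"
        f"Context: {context}\n"
        f"Task: {task}"
    )

prompt = build_prompt(
    role="a project coordinator on a five-person design team",
    context="We ship a weekly newsletter; drafts are due on Thursdays.",
    task="Summarize this week's open action items in three bullets.",
)
print(prompt)
```

The point is the habit, not the template: the more shared context you give, the more your A.I. teammate's responses resemble those of a colleague who knows how you work.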
4. A.I. agents will differentiate with their character traits
A.I. agents will also differentiate themselves with their relatability and character traits. Just look at Claude and ChatGPT; they have very different character traits, and you can tell from their responses. For instance, Claude might be more formal in her daily interactions, while ChatGPT might be more neutral. This level of character design is currently in its infancy, mostly limited to switching voice and gender, but it could become far more specific and rich. For instance, you could create your A.I.'s character in a character design studio by giving a few inputs (combine my favorite childhood cartoon character with my favorite professor and add sprinkles of my personality traits).
5. Need for Human-AI Norms and Rituals
Richard Hackman discusses enabling structure as one of the pillars of effective teams. By enabling structure, he refers to having clear roles, effective communication channels, and well-defined processes. This structure helps organize the team's efforts and resources effectively. We will need new human-AI norms and rituals to facilitate these processes and structures. For instance, when your AI makes a mistake, you might give her feedback to correct it, but you also exercise empathy and understanding. How would your AI teammate respond when her teammates make a mistake? Will she go ballistic and report you to the department head, or will she be understanding and exercise empathy? We must define these nuanced interactions to support a welcoming workplace.
6. Trust and Transparency
Trust and transparency are not just buzzwords; they're the foundation of effective teams, and the same holds for human-AI teams. To build this trust, our A.I. teammates must be able to explain their decisions and the processes that led to them. While this may be a challenge for current A.I. models, this level of explainability is crucial for ethical practices and maintaining trust, especially in critical moments. This commitment to transparency ensures a reliable and secure work environment.
7. Fairness & Safety
Finally, your A.I. teammate must be fair and safe for everyone on the team. Fairness can take many forms in this context, from including everyone in a communication thread to ensuring everyone is credited for the work output.
The transition to leveraging AI in the workplace is steady, but it will take time to realize its full potential as a team member. Asana's most recent State of AI report gives good signals on which industries are adopting AI and integrating it into daily work. The tech industry, by nature, is leading the way.
The key takeaway from these themes is that a human-centered lens needs to shape the what, how, and why of human AI teaming. In the upcoming issues, I will double-click on several themes; stay tuned.
What I am reading and listening to
If A.I. Can Do Your Job, Maybe It Can Replace Your C.E.O.
Defining human-AI teaming the human-centered way: a scoping review and network analysis
This is a wrap for this issue. Until next time, take good care of yourself and your loved ones! And Happy Father's Day to all the dads and children out there :)