October 4, 2024

The Contract Network

Embracing AI as a New Colleague: Lessons from Stanford Physicians and GPT

AI has become an undeniable force reshaping how knowledge workers operate across industries. From lawyers to doctors, AI has moved from a futuristic concept to a present reality that professionals must navigate. One thing is clear: working with AI isn’t about merely treating it as a tool or an enhancement—it’s about learning to collaborate with a new kind of peer. As AI becomes more capable, its role shifts with the task at hand, from performing routine functions to acting as a valuable thought partner. But getting the most out of AI requires overcoming real challenges, as a recent study of physicians’ interaction with AI vividly illustrates.

Stanford Physician Study: A Reality Check on AI Collaboration

A recent study titled “Influence of a Large Language Model on Diagnostic Reasoning: A Randomized Clinical Vignette Study” examined how well licensed physicians could integrate AI (in this case, GPT-4) into their diagnostic workflow. The study’s results offer an important insight: even when AI is incredibly capable, professionals struggle to use it effectively if they don’t know how to interact with it.

The study randomized physicians into two groups—those who had access to GPT-4 alongside traditional diagnostic resources, and those who used conventional resources alone. While AI improved diagnostic efficiency and even outperformed human doctors in final diagnosis accuracy, it did not significantly enhance the doctors’ diagnostic reasoning. This wasn’t because AI was inadequate; it was because the physicians didn’t know how best to engage with the AI.

AI as More Than a Tool

The lesson from this study is that AI can take on multiple roles, but professionals must understand how to collaborate with it to extract the most value. In some cases, AI can handle smaller tasks—such as sorting through data, flagging anomalies, or even suggesting potential diagnoses, much like a junior assistant. Yet, AI is also capable of taking on the role of a peer or thought partner, contributing fresh perspectives or helping uncover patterns that might go unnoticed by a human.

In the Stanford study, for example, GPT-4 was able to make highly accurate final diagnoses, surpassing the performance of both physician groups. However, because the physicians didn’t fully understand how to engage GPT-4—how to ask it the right questions or when to trust its output—they missed out on AI’s full potential. This situation is not unique to medicine; it’s a challenge every knowledge-based industry is facing as AI becomes more integrated into workflows.

The Challenges of Working with AI

The real challenge lies not just in knowing how to allocate tasks between humans and AI, but in learning to collaborate with it: understanding which roles AI can best fill, and when human oversight or expertise is still required. This isn’t simply about automating repetitive tasks, but about redefining workflows to allow AI to contribute in meaningful ways.

Here’s where many professionals encounter friction:

  • Task Allocation: AI can handle routine tasks—summarizing documents, analyzing datasets, or generating preliminary outputs. However, deciding which tasks are appropriate for AI and which require human input can be tricky. In the Stanford study, physicians struggled to know when to rely on GPT-4’s suggestions and when to step in with their own judgment.
  • Effective Communication: AI doesn’t just “know” what you want it to do. Like any new colleague, you need to learn how to give it clear instructions. In the case of GPT-4, this means developing “AI communication skills,” often referred to as prompt engineering—framing your queries or instructions in ways that allow AI to generate useful, actionable outputs. The physicians in the study lacked these skills, which limited how effectively they could use GPT-4.
  • Recognizing Fallibility on Both Sides: AI isn’t perfect, and it will make mistakes. But humans aren’t infallible either. The collaboration between AI and humans needs to be built on the understanding that both will bring their strengths and weaknesses to the table. In the Stanford study, while GPT-4 sometimes outperformed the physicians, it was also prone to errors that could have been caught with proper oversight. Similarly, physicians made mistakes that AI could have mitigated had they trusted it more.
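Prompt engineering is easier to grasp with a concrete sketch. Purely as an illustration—none of this code comes from the study, and the field names and wording are hypothetical—here is a minimal Python helper that turns a clinical vignette into a structured prompt, spelling out the role, the task, and the desired output format rather than asking a vague question:

```python
def build_diagnostic_prompt(vignette: str, question: str) -> str:
    """Assemble a structured prompt for a diagnostic query.

    A vague prompt ("What does this patient have?") tends to produce
    vague answers; stating the role, the case, the task, and the
    expected output format gives the model far more to work with.
    """
    return (
        "You are assisting a licensed physician with differential diagnosis.\n"
        "Case vignette:\n"
        f"{vignette.strip()}\n\n"
        f"Task: {question.strip()}\n"
        "List the three most likely diagnoses, and for each one name "
        "the findings that support it and the findings that argue against it."
    )

prompt = build_diagnostic_prompt(
    "58-year-old man with two days of fever, productive cough, "
    "and right-sided pleuritic chest pain.",
    "What is the most likely diagnosis, and what should be ruled out?",
)
print(prompt)
```

The helper itself is trivial; the point is the habit it encodes. Physicians in the study who typed a bare question got bare answers, while a structured request like the one above invites the model to show its reasoning—exactly the kind of output a clinician can then check and refine.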

Embracing AI as a Thought Partner

The potential for AI to act as a thought partner is where the most profound shift will happen. We’re not just asking AI to do tasks for us—we’re asking it to help us think better, challenge our assumptions, and offer new insights. This is where AI moves beyond a mere tool and starts to act like a colleague.

But this requires knowledge workers to adopt a new mindset. We must be open to learning how to interact with AI and be patient with the process. Just like any new colleague, AI will make mistakes, and so will we as we learn how to work with it. The key is to develop the skills needed to collaborate effectively, from knowing how to prompt AI to understanding when its insights are valuable versus when they need further human refinement.

Overcoming the Resistance to AI

A common barrier professionals face is the instinct to resist this new workflow dynamic. “If I have to double-check everything, why use AI at all?” or “I don’t have time to learn something new” are familiar refrains. But the Stanford study demonstrates that this learning curve is not optional—it’s an essential part of adapting to the new reality where AI will be a permanent part of the workforce.

Yes, you will need to double-check AI’s work initially. Yes, you will have to invest time in learning how to prompt and interact with AI effectively. But the long-term payoff is clear: collaboration with AI—when done right—leads to better efficiency, sharper insights, and, ultimately, better outcomes.

Moving Forward: Openness and Patience

The integration of AI into knowledge work requires a beginner’s mindset. It’s not about layering AI onto traditional workflows and calling it an upgrade. It’s about rethinking how work is done altogether. The doctors in the Stanford study were highly skilled in medicine, but not in AI collaboration, which limited their ability to fully leverage GPT-4. For all of us, the path forward involves embracing discomfort, recognizing that this new colleague, AI, will change the way we work, and that this change is often for the better.

We must be open to learning, willing to experiment, and patient with both the machine and ourselves as we navigate this new dynamic. The future of work isn’t about humans or AI—it’s about humans and AI, working together as colleagues.