Shelly Fan in Singularity Hub:
Emergency doctors make high-stakes decisions in fast-paced, often chaotic situations. They have to figure out which patient most urgently needs care, what’s wrong, and what to do next. AI could lend a hand. In a series of challenging scenarios, OpenAI’s o1-preview model matched or exceeded doctors in clinical reasoning.

Debuted in 2024, the AI is a large language model similar to those powering ChatGPT, Claude, Gemini, and other popular chatbots. But at its debut, o1-preview stood apart in its ability to “think” through problems before answering. Such reasoning models explore multiple strategies, check themselves, and revise answers before offering a conclusion. This is a little closer to how humans solve problems.

Given case reports from an established database, o1-preview diagnosed the problem nearly 89 percent of the time. In real-world emergency room scenarios, the AI outperformed physicians at the triage stage, where doctors decide which patient needs treatment first.
…This doesn’t mean that o1-preview is ready for the clinic or is about to replace physicians. Instead of a human-versus-machine spectacle, the study was more focused on setting a higher bar for systems designed to work alongside people. Like everyone else, doctors are incorporating AI into their work. Whether that improves or hinders care is an open question.
More here.
