“Garbage in, garbage out.” So goes the computing adage: the results rendered by “thinking” machines will be flawed if the information fed into them is erroneous.
However, generative AI, with its apparent ability to self-correct, challenges the cliché, Dennis L. McWilliams, a partner at Santé Ventures, told Fierce Healthcare. Santé Ventures, an early-stage venture capital firm, specializes in healthtech, medtech and biotech investments.
“One of the interesting things when you look at large language models like ChatGPT, they seem to have the ability to overcome that,” McWilliams said. “Not to get too technical, once they work through a certain threshold of training, the algorithms learn to sort out the garbage.”
By contrast, that’s not how it works with current computer algorithms that might evaluate X-rays, McWilliams said. “If your data set is messy, your algorithms are going to be messy. And if your data set has inherent biases, your algorithm is going to have biases,” he said. “In generative AI—at least a preliminary view of these really large data sets—that, to some extent, gets easier to manage.”