You’re probably aware that machine learning (ML) has a reproducibility problem. Hundreds of preprints and papers are published every week in the ML space, but too many can’t be replicated or validated. As a result, they amount to little more than hype, and they compromise trust and sustainability in the field.
The reproducibility problem is not new, and efforts have been made to tackle it by enabling the sharing of code repositories and trained models. That said, anyone who has tried to replicate or validate an ML paper knows that it’s still surprisingly difficult and time-consuming. …
Diagnostic machine learning algorithms are already outperforming physicians in a range of specialties — ophthalmology, radiology, and dermatology, among others. We’ve seen these algorithms surpass humans in their ability to classify retinal fundus images, chest X-rays, and melanomas. So why do we rarely see doctors using these models in the day-to-day practice of medicine?
Often, the missing piece is interpretability: a model’s ability to explain why it produced a given output. “Black box” models, which simply provide a prediction with no explanation, are likely to face challenges in building user trust, even if their performance…
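As a concrete illustration of what “opening the box” can look like, here is a minimal sketch of gradient-based saliency (vanilla gradients) in PyTorch. The pretrained ResNet and the random input tensor are stand-ins I’ve assumed for a real diagnostic model and a real medical image:

```python
import torch
import torchvision.models as models

# Stand-in for a trained diagnostic model (assumption: any
# differentiable image classifier works the same way here).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Stand-in for a preprocessed input image, e.g. a chest X-ray.
x = torch.rand(1, 3, 224, 224, requires_grad=True)

score = model(x).max()  # logit of the top predicted class
score.backward()        # gradient of that score w.r.t. each pixel

# Pixels with large gradient magnitude are the ones the prediction
# is most sensitive to -- a crude, visual "why" for the output.
saliency = x.grad.abs().max(dim=1).values  # shape: (1, 224, 224)
```

Saliency maps are only one of many interpretability techniques, but even this simple one turns a bare prediction into something a clinician can sanity-check against the image.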
In this post, we’ll learn: …
First, let’s define artificial intelligence and machine learning.
In this post, we train a GAN to generate fake chest X-rays and look at how small changes to learning rates can have a big impact on model quality.
Generative Adversarial Networks (GANs) are trained to generate fake data that can pass as real by pitting two models against each other: a generator (G) and a discriminator (D).
The generator never sees any real data. It produces a sample and is penalized if the discriminator can tell that it’s fake. Using feedback from the discriminator, it learns what kinds of output can pass as real.
The discriminator’s job is a lot easier. Its…
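To make the adversarial loop concrete, here is a minimal sketch of one training step in PyTorch. The tiny fully connected networks, the 784-pixel image size, the batch size, and the learning rates are illustrative assumptions, not the setup from this post:

```python
import torch
import torch.nn as nn

# Tiny stand-in networks (assumption: a real chest X-ray GAN would use
# convolutional architectures, not two-layer MLPs on 784 pixels).
G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(),
                  nn.Linear(256, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))

# Learning rates are illustrative; as noted above, small changes here
# can make or break training.
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.rand(32, 784)  # stand-in for a batch of real images

# Discriminator step: real images labeled 1, fakes labeled 0.
fake = G(torch.randn(32, 64)).detach()  # detach: don't update G here
d_loss = (bce(D(real), torch.ones(32, 1)) +
          bce(D(fake), torch.zeros(32, 1)))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Generator step: G's only signal is whether D was fooled.
fake = G(torch.randn(32, 64))
g_loss = bce(D(fake), torch.ones(32, 1))  # reward D saying "real"
opt_g.zero_grad()
g_loss.backward()
opt_g.step()
```

Note that the generator’s loss depends only on the discriminator’s verdict on fake samples (the “never sees real data” property described above), and that the two learning rates are exactly the kind of knob whose small changes can swing training between convergence and collapse.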
I write about machine learning and medicine. M.D. candidate at Emory School of Medicine; ex-Google software engineer.