This article in Science argues against what its authors call "research exceptionalism": setting aside the usual standards of scientific rigor in biomedical research in the name of moving faster during an urgent, time-sensitive crisis like the COVID-19 pandemic.
I agree with much of the article's premise -- the rigor is there for a reason, and it matters for avoiding false leads and the harm done by ineffective or unsafe treatments. Observational data are tricky to interpret: their inherent biases make it hard to pull apart the causal relationship between treatment and response, which is why well-designed studies are needed to evaluate new treatments. And there is plenty of sloppy science and medical advice out there -- from the early French hydroxychloroquine "study" to the Santa Clara serology study with its many methodological problems. Both took root in the minds of the public and policymakers before peer review could identify their flaws and put the results in proper context.
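To make the point about observational biases concrete, here is a minimal sketch (in Python, with entirely made-up counts) of confounding by indication: because sicker patients are more likely to receive a treatment, the crude comparison can make a treatment that helps within every severity stratum look harmful overall.

```python
# Toy illustration of confounding by indication. All counts are invented
# for illustration; they are not from any real COVID-19 study.

def recovery_rate(recovered, total):
    """Fraction of patients who recovered."""
    return recovered / total

# Hypothetical counts: (recovered, total) by severity stratum and arm.
# Severe patients are far more likely to be treated.
data = {
    "mild":   {"treated": (18, 20), "untreated": (64, 80)},
    "severe": {"treated": (40, 80), "untreated": (8, 20)},
}

# Crude (pooled) rates ignore severity entirely.
crude = {}
for arm in ("treated", "untreated"):
    rec = sum(data[s][arm][0] for s in data)
    tot = sum(data[s][arm][1] for s in data)
    crude[arm] = recovery_rate(rec, tot)

# Crude comparison: treatment looks harmful (58% vs 72%) ...
print(f"crude: treated {crude['treated']:.0%} vs untreated {crude['untreated']:.0%}")

# ... but within each severity stratum, treatment looks beneficial.
for s in data:
    t = recovery_rate(*data[s]["treated"])
    u = recovery_rate(*data[s]["untreated"])
    print(f"{s}: treated {t:.0%} vs untreated {u:.0%}")
```

This is exactly the trap a well-designed randomized study avoids: randomization breaks the link between disease severity and who gets treated.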
However, I also think the article does not address the specific challenges of a moment like this one. In the middle of a surge, with literally millions of people suffering from a disease that has no known treatment, we cannot simply wait for definitive study results before acting. Patients are arriving at hospitals, and clinicians have to try SOMETHING. Quickly assembled observational or single-arm trials are certainly not the best science, but at least putting treatment under a protocol provides some regularity: documented inclusion and exclusion criteria, prespecified endpoints to measure, and so on. That makes these suboptimal studies still better than undocumented clinical practice improvised on the fly. This is "wartime medicine": people on the front lines have no choice but to act on the best available evidence as they learn, hopefully documenting and sharing information along the way so that we can learn what we can. It is messy and difficult, but necessary in times like these.
We should clearly strive to design rigorous comparative studies, and to coordinate efforts through national and multinational working groups to gain statistical power and avoid redundancy. But we also need ways to process the imperfect information coming from clinical practice and less well-designed studies -- to sift through the noise and extract the signal. We need statisticians who are experts in causal inference and meta-analysis to roll up their sleeves and find ways to pull information together across the many data sets recording results during this pandemic, adjusting for biases as best they can while carefully communicating the uncertainty of their conclusions.
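As one small illustration of the kind of evidence synthesis this calls for, the sketch below pools per-study effect estimates with a simple inverse-variance fixed-effect meta-analysis. The function name and the numbers are hypothetical, not from the article; a real pandemic-era synthesis would also need random-effects models and explicit bias adjustment for the observational sources.

```python
import math

def pool_fixed_effect(estimates, std_errors):
    """Inverse-variance fixed-effect pooling of per-study effect estimates
    (e.g. log odds ratios). Returns (pooled estimate, pooled standard error).
    Each study is weighted by 1/SE^2, so precise studies count for more."""
    weights = [1.0 / se ** 2 for se in std_errors]
    total_w = sum(weights)
    pooled = sum(w * est for w, est in zip(weights, estimates)) / total_w
    pooled_se = math.sqrt(1.0 / total_w)
    return pooled, pooled_se

# Hypothetical log-odds-ratio estimates from three small studies
estimates = [-0.40, -0.10, -0.25]
std_errors = [0.30, 0.20, 0.25]

pooled, se = pool_fixed_effect(estimates, std_errors)
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
print(f"pooled log-OR = {pooled:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```

Note how the pooled confidence interval is narrower than any single study's: this is the statistical power gained by coordinating across data sets rather than letting each small study stand alone.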