Large language models like GPT-3 make it possible to recast all natural language processing tasks as text generation problems. As we move towards this paradigm of NLP in scientific and biomedical applications, we need to ask whether these models are safe enough to guarantee the factuality of their outputs. Currently, they are not. But what is factuality? Are humans factual? Is science factual? This talk presents initial conceptual work on how we might teach text generation models to be (more) factual, drawing on models of how humans and institutions ensure factuality.