Large language models (LLMs) are widely used to generate supervised training data as well as survey and experimental data. However, this study suggests that crowdsourcing, a common method for collecting human annotations, may itself be affected by crowd workers' use of LLMs. The concern is that workers can boost their productivity and income by delegating tasks to LLMs, so that datasets intended to capture human responses instead contain machine-generated text, invalidating downstream results. The study estimates that 33-46% of crowd workers used LLMs when completing an abstract summarization task on Amazon Mechanical Turk. The authors call on platforms, researchers, and crowd workers to find ways to ensure that human data remain human.
https://arxiv.org/abs/2306.07899