Finetuning works well if we make sure the finetuning dataset is highly relevant to the style/domain of the evaluation we are using.

AlpaGasus [10]: the authors directly study how much finetuning data is necessary for an LLM to perform well on various downstream tasks.

Bibliography:

[1] Wei, Jason, et al. "Finetuned language models are zero-shot learners."