Fine-tuning large language models (LLMs) on niche text corpora has emerged as a crucial step in enhancing their performance on specialized technical tasks. This article investigates fine-tuning strategies for LLMs applied to research text. We evaluate the impact of several variables, including the training data, the model architecture, and the hyperparameter configuration.