KnowTuning: Knowledge-aware Fine-tuning for Large Language Models
Yougang Lyu et al.
This paper introduces KnowTuning (Knowledge-aware Fine-tuning), a two-stage method for improving the fine-grained and coarse-grained knowledge awareness of LLMs. It targets a common failure mode on complex knowledge-intensive tasks: fine-tuned models often generate incomplete, non-factual, or illogical answers. The first stage performs fine-grained knowledge augmentation; the second performs coarse-grained knowledge comparison, training the model to prefer answers that are more complete, factual, and logical.
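A minimal sketch of the coarse-grained comparison stage, assuming it is trained with a DPO-style preference objective over answer pairs that differ in knowledge quality. The per-dimension pair construction and all tensor values below are hypothetical placeholders, not the paper's actual data pipeline.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """DPO-style objective: push the policy to prefer the
    knowledge-reliable answer over the degraded one."""
    pi_ratio = policy_chosen_logps - policy_rejected_logps
    ref_ratio = ref_chosen_logps - ref_rejected_logps
    return -F.logsigmoid(beta * (pi_ratio - ref_ratio)).mean()

# Toy usage: one preference pair per knowledge dimension, with
# made-up sequence log-probabilities from policy and reference models.
dims = ["completeness", "factuality", "logicality"]
policy_chosen   = torch.tensor([-12.3, -10.1, -11.7])
policy_rejected = torch.tensor([-11.9, -13.4, -12.2])
ref_chosen      = torch.tensor([-12.5, -10.8, -11.9])
ref_rejected    = torch.tensor([-11.5, -12.9, -12.0])
loss = dpo_loss(policy_chosen, policy_rejected, ref_chosen, ref_rejected)
print(f"preference loss over {dims}: {loss.item():.4f}")
```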
MobiLLM: Server-Assisted Fine-Tuning for Mobile LLMs
Liang Li et al.
MobiLLM is a server-assisted framework that enables efficient fine-tuning of large language models directly on mobile devices while preserving data privacy. The device keeps the frozen backbone and runs only forward passes; backpropagation for the trainable modules is offloaded to a remote server, with activations quantized before transfer to cut communication cost. This design achieves up to 4× lower memory usage and 2.3× faster training, making billion-scale LLM fine-tuning practical on resource-limited devices.
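A minimal sketch of the device/server split under stated assumptions: int8 symmetric activation quantization, a single trainable linear adapter standing in for the server-side modules, and a direct function call standing in for the network transport. None of these specifics come from the paper.

```python
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 64))
for p in backbone.parameters():
    p.requires_grad_(False)  # frozen backbone stays on the device

adapter = nn.Linear(64, 10)  # hypothetical trainable module on the server
opt = torch.optim.AdamW(adapter.parameters(), lr=1e-4)

def device_forward(x):
    """Device side: forward pass only through the frozen backbone,
    then int8 quantization of the activations to be transmitted."""
    with torch.no_grad():
        h = backbone(x)
    scale = h.abs().max() / 127.0
    q = torch.clamp((h / scale).round(), -127, 127).to(torch.int8)
    return q, scale  # this payload is what would go over the network

def server_step(q, scale, labels):
    """Server side: dequantize, run the trainable module, and
    backpropagate there; the device never holds gradient state."""
    h = q.to(torch.float32) * scale
    loss = nn.functional.cross_entropy(adapter(h), labels)
    opt.zero_grad()
    loss.backward()  # gradients touch only the adapter parameters
    opt.step()
    return loss.item()

x, y = torch.randn(8, 64), torch.randint(0, 10, (8,))
print(server_step(*device_forward(x), y))
```

Because the activations arrive detached from the backbone, the server's backward pass covers only the small trainable module, which is what keeps device memory flat in this design.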