Transfer learning has become a prevalent machine learning technique owing to its ability to learn predictive knowledge from limited training data. In existing works, collection and collaboration are the two major approaches to improving transfer learning performance. Although the effectiveness of these approaches has been validated in extensive experiments, it lacks the support of theoretical analysis; consequently, how to enhance transfer learning effectively remains an open problem. In light of this, in this paper we systematically study methods for improving transfer learning performance, in order to provide guidelines for applying transfer learning in real applications. Through rigorous proofs, we draw critical conclusions that clarify the motivation for adopting collection and collaboration, the performance gap between the two approaches, and the impact of data-sharing strategies on transfer learning under collaboration. These conclusions further lay a theoretical foundation for future research on transfer learning.