Abstract
In this paper we relate university licensing revenues to both university research expenditures and characteristics of the university and the university technology transfer office. We apply the Hausman–Taylor estimator for panel data with time-invariant explanatory variables and the Arellano–Bover dynamic panel model to unbalanced panels for the years 1991–2003 and balanced panels for the years 1995–2003. We find conflicting evidence regarding the short-term impacts of research expenditures on licensing revenues. On the other hand, both early initiation of technology transfer programs and staff size increase expected licensing revenues. Staff size and early entry appear to be substitutes, however. One-year lagged licensing revenue has strong predictive power for current licensing revenue. Further research is necessary to analyze changes in technology transfer office efficiency over time and the contribution of technology transfer to larger university missions.
Notes
In the data set we use below, Stanford ranked 13th and Columbia 28th in the size of their annual research budgets (measured as medians). However, Columbia was 2nd and Stanford 3rd in terms of annual licensing revenues (again measured as medians).
HT show that the necessary and sufficient condition for identification of \( (\beta, \gamma) \) is that the matrix \( \left( \begin{array}{c} \mathbf{X}_{it}' \\ \mathbf{Z}_{i}' \end{array} \right) P_{A} \left( \begin{array}{cc} \mathbf{X}_{it} & \mathbf{Z}_{i} \end{array} \right) \) be non-singular. This condition holds in our estimation.
Recently Plümper and Troeger (2007) have proposed an alternative three-stage estimator for models with time-invariant variables in panel data. Based on Monte Carlo simulations, Plümper and Troeger suggest that the finite sample properties of their estimator may make it more efficient than the Hausman–Taylor estimator. We applied this model to our data. Results for technology transfer variables were quite similar to those provided by the Hausman–Taylor estimator. Results for research investment and university type differed somewhat, as they also do for the dynamic panel model, discussed below, to which we applied an estimator developed by Arellano and Bover (1995).
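The paper does not provide code, but the core Hausman–Taylor idea — instrumenting time-varying regressors with their within-group deviations and using group means of the exogenous time-varying variables as instruments for the time-invariant ones — can be illustrated with a simplified, unweighted 2SLS on simulated data. This is a sketch only: all variable names and the simulated design are hypothetical, and the efficient HT estimator additionally applies a GLS (random-effects) transformation that is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 500, 5                      # 500 simulated "universities", 5 years
g = np.repeat(np.arange(n), T)     # group (university) index for each row

# Unobserved university effect, correlated with the endogenous regressor
alpha = rng.normal(size=n)
x_exog = rng.normal(size=n * T)              # time-varying, exogenous
x_endo = rng.normal(size=n * T) + alpha[g]   # time-varying, correlated with alpha
z_exog = rng.normal(size=n)[g]               # time-invariant, exogenous
y = 1.0 * x_exog + 2.0 * x_endo + 3.0 * z_exog + alpha[g] + rng.normal(size=n * T)

X = np.column_stack([x_exog, x_endo, z_exog])

def group_mean(v):
    """Group means of v, broadcast back to the full panel."""
    return (np.bincount(g, weights=v) / T)[g]

# HT instruments: within-group deviations of all time-varying regressors,
# the time-invariant exogenous variable, and group means of the exogenous x.
Z = np.column_stack([x_exog - group_mean(x_exog),
                     x_endo - group_mean(x_endo),
                     z_exog,
                     group_mean(x_exog)])

# Unweighted 2SLS: project X on the instrument space, then regress y on X_hat
X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
beta = np.linalg.lstsq(X_hat, y, rcond=None)[0]
print(beta)   # close to the true coefficients (1, 2, 3)
```

Because the within deviations are orthogonal to the group-constant effect, the instruments remain valid even though the level of `x_endo` is correlated with it; the simulated estimates recover the true coefficients up to sampling error.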
Fiscal year refers to the institution’s fiscal year for both AUTM and NSF data. Therefore, the AUTM and NSF data should correspond for a given institution and year, but they will not necessarily be consistent across institutions.
In 2003, AUTM surveyed 232 US universities, with a response rate of 71%. The universities surveyed are the employers of AUTM members. Among sampled universities, AUTM followed up most intensively with universities in NSF’s top 100 in research expenditures; these universities are therefore represented more heavily. The sample size (168) and response rate (58%) were smallest in the first year of the survey, 1991. On average, 207 universities were sampled per year, with an average response rate of 64%. The NSF data come from a census of all science and engineering institutions that grant doctoral degrees, plus those granting masters or bachelors degrees with total research expenditures greater than $150,000 in the previous year. This population ranges from 461 to 625 institutions, and the response rate ranges from 90 to 99.6% during the years for which these data are available (1992–2003).
Licensing revenues and research and development budgets were deflated using an R&D deflator maintained by the Economic Research Service, Department of Agriculture. This index is very similar to indices calculated by Huffman and Evenson (2006) and an index maintained by the National Institutes of Health for deflating health research expenditures.
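Deflation of this kind is a simple rescaling of each nominal series by the price index for its year. The sketch below uses made-up index values purely for illustration; the actual ERS R&D deflator values are not reproduced here.

```python
import pandas as pd

# Hypothetical nominal licensing revenues (millions of dollars) and an
# R&D price index (base year 2000 = 100); values are illustrative only.
df = pd.DataFrame({
    "year": [1999, 2000, 2001],
    "nominal_revenue": [10.0, 12.0, 15.0],
    "rd_deflator": [97.5, 100.0, 102.8],
})

# Real (constant-dollar) revenue: nominal value divided by (index / 100)
df["real_revenue"] = df["nominal_revenue"] / (df["rd_deflator"] / 100.0)
print(df)
```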
Only 30% of universities had a complete panel for all 13 years in the study. However, 60% of universities reported in all years, or in all but one year, after 1994. We imputed licensing and staff size data for the 25 universities that were missing AUTM data in only one year after 1994. Because results did not vary substantially depending on whether these universities were excluded, we included them in the results presented here. See the “Appendix” for the data imputation procedures.
References
Adams, J., & Griliches, Z. (1996). Measuring science: An exploration. Proceedings of the National Academy of Sciences, USA, 93, 12664–12670.
Adams, J. D., & Griliches, Z. (1998). Research productivity in a system of universities. Annales D’Économie et de Statistique, 49(50), 127–162.
Amemiya, T., & MaCurdy, T. E. (1986). Instrumental-variable estimation of an error components model. Econometrica, 54, 869–881.
Arellano, M., & Bond, S. (1991). Some tests of specification for panel data: Monte Carlo evidence and an application to employment equations. Review of Economic Studies, 58, 277–297.
Arellano, M., & Bover, O. (1995). Another look at the instrumental variable estimation of error-components models. Journal of Econometrics, 68, 29–51.
Breusch, T. S., Mizon, G. E., & Schmidt, P. (1989). Efficient estimation using panel data. Econometrica, 57, 695–700.
Busch, L., Allison, R., Harris, C., Rudy, A., Shaw, B. T., Ten Eyck, T., et al. (2004). External review of the collaborative research agreement between Novartis Agricultural Discovery Institute, Inc. and The Regents of the University of California. East Lansing, MI: Institute for Food and Agricultural Standards, Michigan State University.
Chapple, W., Lockett, A., Siegel, D., & Wright, M. (2005). Assessing the relative performance of UK university technology transfer offices: Parametric and non-parametric evidence. Research Policy, 34, 369–384.
Colyvas, J., Crow, M., Gelijns, A., Mazzoleni, R., Nelson, R. R., Rosenberg, N., et al. (2002). How do university inventions get into practice? Management Science, 48, 61–72.
Coupé, T. (2003). Science is golden: Academic R&D and university patents. Journal of Technology Transfer, 28, 31–46.
David, P. A. (2004). Can “Open Science” be protected from the evolving regime of IPR protections? Journal of Institutional and Theoretical Economics, 160, 9–34.
Foltz, J. D., Barham, B. L., & Kim, K. (2007). Synergies or trade-offs in university life sciences research. American Journal of Agricultural Economics, 89, 353–367.
Hall, B. H., Griliches, Z., & Hausman, J. A. (1986). Patents and R and D: Is there a lag? International Economic Review, 27, 265–283.
Hausman, J. A., & Taylor, W. E. (1981). Panel data and unobservable individual effects. Econometrica, 49, 1377–1398.
Helfat, C. E., Finkelstein, S., Mitchell, W., Peteraf, M. A., Singh, H., Teece, D. J., et al. (2007). Dynamic capabilities: Understanding strategic change in organizations. Oxford, UK: Blackwell.
Henderson, R., Jaffe, A. B., & Trajtenberg, M. (1998). Universities as a source of commercial technology: A detailed analysis of university patenting, 1965–1988. The Review of Economics and Statistics, 80, 119–127.
Huffman, W. E., & Evenson, R. E. (2006). Science for agriculture: A long-term perspective (2nd ed.). Ames, IA: Iowa State University Press.
Jaffe, A. B. (2000). The US patent system in transition: Policy innovation and the innovation process. Research Policy, 29, 531–557.
Jensen, R., & Thursby, M. (2001). Proofs and prototypes for sale: The licensing of university inventions. American Economic Review, 91, 240–259.
Just, R. E., & Huffman, W. E. (2006). The role of patents, royalties, and public-private partnering in university funding. In M. Holt & J.-P. Chavas (Eds.), Exploring frontiers in applied economic analysis: Essays in honor of Stanley R. Johnson. Berkeley, CA: Berkeley Electronic Press. Available on-line at http://www.bepress.com/sjohnson/. Accessed 9 March 2009.
Mowery, D. C., Nelson, R. R., Sampat, B. N., & Ziedonis, A. A. (2001). The growth of patenting and licensing by US universities: An assessment of the effects of the Bayh–Dole act of 1980. Research Policy, 30, 99–119.
Mowery, D. C., & Sampat, B. N. (2001). University patents and patent policy debates in the USA, 1925–1980. Industrial and Corporate Change, 10, 781–814.
Murray, F., & Stern, S. (2007). Do formal intellectual property rights hinder the free flow of scientific knowledge? An empirical test of the anti-commons hypothesis. Journal of Economic Behavior and Organization, 63, 648–687.
National Research Council (NRC). (1996). Colleges of agriculture at the land grant universities: Public service and public policy. Washington, DC: National Academy Press.
Nelson, R. R. (2001). Observations on the post-Bayh–Dole rise of patenting at American universities. Journal of Technology Transfer, 26, 13–19.
Oehmke, J. F., & Schimmelpfennig, D. E. (2004). Quantifying structural change in US agriculture: The case of research and productivity. Journal of Productivity Analysis, 21, 297–315.
Pakes, A., & Griliches, Z. (1984). Estimating distributed lags in short panels with an application to the specification of depreciation patterns and capital stock constructs. Review of Economic Studies, 51, 243–262.
Pardey, P. G. (1989). The agricultural knowledge production function: An empirical look. Review of Economics and Statistics, 71, 453–461.
Plümper, T., & Troeger, V. E. (2007). Efficient estimation of time-invariant and rarely changing variables in finite sample panel analyses with unit fixed effects. Political Analysis, 15, 124–139.
Siegel, D. S., Veugelers, R., & Wright, M. (2007). Technology transfer offices and commercialization of university intellectual property: Performance and policy implications. Oxford Review of Economic Policy, 23, 640–660.
Siegel, D. S., Waldman, D., & Link, A. (2003). Assessing the impact of organizational practices on the relative productivity of university technology transfer offices: An exploratory study. Research Policy, 32, 27–48.
Thursby, J. G., Jensen, R., & Thursby, M. C. (2001). Objectives, characteristics and outcomes of university licensing: A survey of major US universities. Journal of Technology Transfer, 26, 59–72.
Thursby, J. G., & Kemp, S. (2002). Growth and productive efficiency of university intellectual property licensing. Research Policy, 31, 109–124.
Trune, D. R., & Goslin, L. N. (1998). University technology transfer programs: A profit/loss analysis. Technological Forecasting and Social Change, 57, 197–204.
Acknowledgments
We are grateful to Dr. June Blalock of the Office of Technology Transfer, Agricultural Research Service, US Department of Agriculture, for interesting discussions that provided us with some of the ideas for this article. We are also grateful for the comments of an anonymous referee. All conclusions are our own, and do not necessarily represent the views of the Economic Research Service or the US Department of Agriculture. Any errors, too, are our own.
Appendix: Imputations used in the construction of the data set
1.1 Licensing revenue and staff size
Licensing revenue and staff size were imputed for 25 universities that were missing only one observation between 1995 and 2003. Licensing revenue was imputed by regressing licensing revenue on lagged licensing revenue among universities with complete data and using these coefficients to predict the missing values. Where there were insufficient data to predict missing values in this way, the university’s mean revenue was used instead. Staff size was imputed as the mean of staff size in the year before and the year after the missing observation.
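The two imputation rules can be sketched as follows on a toy panel. All university names and values below are hypothetical, and the regression is fit with a simple least-squares line rather than the authors’ actual specification.

```python
import numpy as np
import pandas as pd

# Toy panel: licensing revenue and staff size by university and year;
# NaN marks the single missing observation to be imputed.
panel = pd.DataFrame({
    "univ": ["A"] * 4 + ["B"] * 4,
    "year": [1995, 1996, 1997, 1998] * 2,
    "revenue": [1.0, 1.2, 1.5, 1.9, 2.0, 2.2, np.nan, 2.9],
    "staff": [3.0, 3.0, 4.0, 4.0, 5.0, 6.0, np.nan, 8.0],
})
panel["lag_revenue"] = panel.groupby("univ")["revenue"].shift(1)

# 1. Fit revenue_t = a + b * revenue_{t-1} on complete cases ...
train = panel.dropna(subset=["revenue", "lag_revenue"])
b, a = np.polyfit(train["lag_revenue"], train["revenue"], 1)

# 2. ... and predict the missing revenue from the lagged value.
miss = panel["revenue"].isna() & panel["lag_revenue"].notna()
panel.loc[miss, "revenue"] = a + b * panel.loc[miss, "lag_revenue"]

# 3. Staff size: mean of the year before and the year after, within university.
adjacent_mean = panel.groupby("univ")["staff"].transform(
    lambda s: (s.shift(1) + s.shift(-1)) / 2)
panel["staff"] = panel["staff"].fillna(adjacent_mean)
```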
1.2 “Early” technology transfer programs
The variable “early” identifies universities that first had at least a one-half full-time-equivalent (FTE) licensing or technology transfer employee before 1980. This variable was created using the AUTM “progyear” variable, which was missing for 17 universities included in the final analysis. Eleven of these universities, however, never reported a technology transfer staff size greater than 0.5 FTE. “Early” was therefore coded as 0 for these universities, on the assumption that a university that never reached 0.5 FTE during 1991–2003 also did not do so at any time before 1980. For the six universities with missing program year data and more than 0.5 FTE at some time during 1991–2003, we contacted the university directly and confirmed that in all cases the technology transfer program began after 1979. Sensitivity analyses showed that excluding these universities due to the missing data, or including them with “early” coded as 0, made little difference to the final estimates.
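The coding logic above amounts to a few vectorized rules. The sketch below uses invented records; `max_fte` (the largest staff size a university reported during 1991–2003) is a hypothetical helper column, not an AUTM variable.

```python
import numpy as np
import pandas as pd

# Illustrative records: "progyear" is the AUTM program start year (NaN when
# missing); "max_fte" is the largest staff size reported during 1991-2003.
univs = pd.DataFrame({
    "univ": ["A", "B", "C", "D"],
    "progyear": [1975, 1992, np.nan, np.nan],
    "max_fte": [4.0, 2.0, 0.25, 1.5],
})

# early = 1 if the program reached 0.5 FTE before 1980 (NaN compares False)
univs["early"] = (univs["progyear"] < 1980).astype(int)

# Missing progyear but never >= 0.5 FTE during 1991-2003: assume not early.
univs.loc[univs["progyear"].isna() & (univs["max_fte"] < 0.5), "early"] = 0

# Missing progyear with >= 0.5 FTE: flag for direct follow-up with the
# university (in the paper, all such programs turned out to begin after 1979).
univs["needs_followup"] = univs["progyear"].isna() & (univs["max_fte"] >= 0.5)
```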
1.3 University of California system
Licensing data for individual University of California campuses were taken from the University of California Office of Technology Transfer Annual Reports, fiscal years 1996–2003. The sum of campus-level data did not necessarily equal system-level data as reported to AUTM, because the Annual Reports double counted inventions on which researchers from more than one campus worked. We therefore rescaled the campus-level data so that they sum to the system-level data from AUTM. Additionally, not all variables in the AUTM data were included in the annual reports. In particular, the annual reports did not include campus-level staffing data, likely because only four campuses have their own licensing offices, and even those campuses also rely on the system’s technology transfer staff. Staff size was therefore assigned to individual campuses in proportion to each campus’s total research and development expenditures. While licensing revenue was comparable to AUTM data from 1998 to 2003, it was not reported for the years 1995–1997. However, campus-level adjusted gross income (AGI), which is licensing revenue less payments to joint holders, was available for all years. In the years for which data were available, only UC San Francisco had substantial payments to joint holders. Therefore, AGI was used in place of licensing revenue for all campuses except UCSF, for which licensing revenue was set equal to AGI plus total UC payments to joint holders. As a result, UCSF may have slightly inflated revenue data, while UC Berkeley, UC San Diego, and UC Los Angeles may have slightly deflated revenue data in some years.
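Both adjustments — rescaling campus figures to match the system total, and allocating system staff in proportion to R&D expenditures — are simple proportional operations. The figures below are invented for illustration and do not reproduce actual UC data.

```python
import pandas as pd

# Hypothetical campus-level revenues from the UC annual reports (which may
# double count multi-campus inventions) and the AUTM system-level total.
campus_revenue = pd.Series({"UCB": 30.0, "UCSF": 60.0, "UCSD": 30.0})
system_revenue = 100.0   # AUTM figure for the whole UC system

# Rescale so campus figures sum exactly to the system total.
scaled_revenue = campus_revenue * (system_revenue / campus_revenue.sum())

# Allocate system-level staff to campuses in proportion to R&D expenditures.
campus_rd = pd.Series({"UCB": 500.0, "UCSF": 800.0, "UCSD": 700.0})
system_staff = 40.0
campus_staff = system_staff * campus_rd / campus_rd.sum()
```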
1.4 R&D zeros
Research and development expenditures data were missing for sub-disciplines in some years for a small percentage of universities. After examining the universities that were missing these data, we concluded that expenditures were almost certainly below $1,000 in these years, and we therefore treated these universities as having $0 expenditures in these sub-disciplines in these years.
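In data-handling terms this is a straightforward fill of missing values with zero. The column name and values below are illustrative only.

```python
import numpy as np
import pandas as pd

# Illustrative sub-discipline R&D expenditures (thousands of dollars);
# NaN marks sub-disciplines a university did not report in a given year.
rd = pd.DataFrame({
    "univ": ["A", "A", "B"],
    "year": [2000, 2001, 2000],
    "astronomy_rd": [120.0, np.nan, np.nan],
})

# Missing entries judged to be below $1,000 are treated as $0 expenditures.
rd["astronomy_rd"] = rd["astronomy_rd"].fillna(0.0)
```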
Heisey, P.W., Adelman, S.W. Research expenditures, technology transfer activity, and university licensing revenue. J Technol Transf 36, 38–60 (2011). https://doi.org/10.1007/s10961-009-9129-z