I am an associate professor in the Computer and Information Sciences Department and a JP Morgan Chase Faculty Fellow in the Institute for Financial Services Analytics. I also hold a joint appointment in the Electrical and Computer Engineering Department. My research interests are in high-performance computing, machine learning, predictive analytics, and the application of these technologies to hard problems.
I was one of the first researchers to apply machine learning to improving compilers. This field has since grown rapidly to include hundreds of authors from major universities and technology companies. Compilers typically contain many heuristics that solve hard problems approximately and efficiently, and finding heuristics that perform well across a broad range of applications and processors is one of the most difficult tasks compiler writers face. My research uses machine learning techniques to construct compiler optimization heuristics automatically. I have shown that this approach can remove the human entirely from the loop of heuristic design. My work on applying machine learning to compiler optimization received the NSF CAREER award.
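To give a flavor of the idea, here is a minimal, purely illustrative sketch (not the published system): an inlining heuristic is learned as a decision stump over two hypothetical call-site features, `callee_size` and `call_count`, with labels that would in practice come from measuring real programs.

```python
# Illustrative sketch of a learned compiler heuristic.
# Each training example pairs hypothetical call-site features
# (callee_size, call_count) with a label: 1 if inlining helped, 0 if not.
# In a real system these labels come from profiling actual programs.
examples = [
    ((10, 500), 1), ((20, 300), 1), ((15, 400), 1),
    ((200, 5), 0), ((150, 2), 0), ((300, 10), 0),
]

def fit_stump(data):
    """Exhaustively pick the (feature, threshold) split with fewest errors."""
    best = None  # (errors, feature_index, threshold)
    for f in range(2):
        for t in sorted({x[f] for x, _ in data}):
            # Predict "inline" when the feature value is below the threshold.
            errors = sum((x[f] < t) != bool(y) for x, y in data)
            if best is None or errors < best[0]:
                best = (errors, f, t)
    return best[1], best[2]

feature, threshold = fit_stump(examples)

def should_inline(callee_size, call_count):
    """The induced heuristic, ready to drop into a compiler pass."""
    return (callee_size, call_count)[feature] < threshold
```

On this toy data the stump learns to inline small callees; the point is only that the heuristic's decision rule is induced from data rather than hand-written.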
* Automatic Construction of Inlining Heuristics using Machine Learning
Sameer Kulkarni, John Cavazos, Christian Wimmer, and Douglas Simon.
Book on Automatic Software Tuning
* Software Automatic Tuning: From Concepts to State-of-the-Art Results.
Editors: Ken Naono, Keita Teranishi, John Cavazos, and Reiji Suda.
Automatic performance tuning is a software paradigm that enables software to achieve high performance in any computing environment. Its methodologies have been developed over the past decade, and the field is now growing rapidly in scope and applicability as well as in its scientific and technological methods. Software developers and researchers in scientific and technical computing, high-performance database systems, optimizing compilers, high-performance systems software, and low-power computing will find this book an invaluable reference to this powerful paradigm.
* FinanceBench is a benchmark suite for those who work with financial code and want to evaluate how particular code paths can be offloaded to accelerators.
* PolyBench/ACC is a collection of benchmark kernels that exhibit parallelism and have been ported to accelerators using a variety of parallel programming languages.
This research was generously sponsored by the National Science Foundation, DARPA, JP Morgan, and Google.