
J. Travis Johnston

Postdoctoral researcher

University of Delaware

College of Engineering

Department of Computer and Information Sciences

Global Computing Laboratory


Office: 203 Smith Hall

email: travisj 'at' udel.edu

phone: (302) 831-6370


Curriculum Vitae

A complete CV is available for download here.


Research Interests

As an undergraduate student

My first research project was with Dr. Hardy (professor emeritus), with whom I worked on a series of molecular dynamics simulations; the project was my first experience with scientific computation.  Later, I worked in a physics lab with Dr. Stephen Ducharme studying ferroelectric thin films.  I ran simulations to study the crystalline structure of these films, and while there I also designed and built a small crystal oven (for lack of a better term) that enabled us to do temperature-dependent spectroscopy studies, along with software to control the device and automate the data collection process.  At the same time, I was working with Dr. Stephen Hartke on my undergraduate thesis, which focused on a problem in combinatorics motivated by observations of gene rearrangement in evolutionary biology.  I was also privileged to spend a summer working at the National Security Agency in a mathematics summer employment program (MSEP, which has since undergone many name changes), where I worked on a computational problem related to image processing.

As a graduate student

I worked with Dr. Linyuan Lu on several problems in extremal graph theory.  My dissertation focuses on generalizing methodologies and results from Turán-type problems on graphs and uniform hypergraphs to non-uniform hypergraphs.  During my Ph.D. studies, I came to appreciate how powerful computers can be when solving problems.  I also came to understand that a problem or algorithm can often be stated easily, yet the computation remains prohibitively expensive in terms of time and resources.  One of my favorite examples of such a problem is the computation of Ramsey numbers: does there exist a simple graph on, say, 45 vertices that contains neither a clique on 5 vertices (5 vertices which are all mutually connected by an edge) nor an anti-clique on 5 vertices (5 vertices with no edges between them)?  The simple, naive algorithm is to search all possible graphs on 45 vertices.  The problem is that there are $2^{\binom{45}{2}} = 2^{45\cdot 22}$ such graphs.  Even if one could generate a billion such graphs every second and determine whether or not each contained the clique or anti-clique, it would still take about $3\times 10^{281}$ years to complete the computation (assuming the end result was that no such graph exists).
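The arithmetic behind this estimate is easy to verify with a short Python sketch (the billion-graphs-per-second rate is the same optimistic assumption as above):

```python
# Back-of-the-envelope check of the Ramsey search space: a simple graph on
# 45 labeled vertices has C(45, 2) = 990 possible edges, so there are
# 2**990 such graphs.
from math import comb

n_vertices = 45
n_edges = comb(n_vertices, 2)        # 990 = 45 * 22 possible edges
n_graphs = 2 ** n_edges              # about 1.05e298 graphs

graphs_per_second = 10 ** 9          # optimistic: one billion graphs per second
seconds_per_year = 60 * 60 * 24 * 365

years = n_graphs / (graphs_per_second * seconds_per_year)
print(f"{years:.1e}")                # roughly 3e+281 years
```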

Current research

I am presently working with Dr. Michela Taufer in the Global Computing Laboratory at the University of Delaware on several projects centered around big data analytics for scientific computation.

With Mohammad Alsulmi, a Ph.D. student, I have been working on the optimization of MapReduce frameworks.  We demonstrated that surrogate-based modeling can effectively model and predict the runtimes of given configurations.  Using the model, we are able to predict, with a high degree of accuracy, optimal parameter settings for specific applications, allowing users to get the best performance out of the hardware they have available for their computations.

I am also working with Dylan Chapp (a graduate student at the University of Delaware) on a project related to numerical reproducibility.  The goal of that project is to explore the effects of threading on numerical scientific computing.  When large computations are done in parallel, there is a high cost associated with enforcing any determinism, which means we cannot affordably fix the order in which operations (like addition, subtraction, multiplication, and division) are performed.  Mathematically, addition is commutative and associative, so the order in which one sums up a list of numbers has no effect on the outcome.  When dealing with floating point arithmetic, however, this is not true.  Essentially, we seek to describe how much error can be introduced by the non-determinism of the threading and how that error can be reduced in the most cost-effective way.

Finally, I am working closely with Boyu Zhang, who has just completed her Ph.D.  We developed, and set on a rigorous footing, a technique for in-situ data analysis of protein folding trajectories.  Our technique is designed to capture similar conformational geometries among frames of a trajectory as well as to isolate which portions of the protein are changing most significantly.  We also look at the formation and movement of rigid substructures in the protein (i.e., alpha helices and beta sheets).  To be a true in-situ data analysis, the cost of the computation must be light enough not to interfere with the ongoing simulation: CPU usage must be minimal, the memory footprint small, and little or no communication between nodes required.
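The floating-point reproducibility issue mentioned above can be illustrated in a few lines of Python (the values here are chosen purely for illustration):

```python
# Real-number addition is associative, but floating-point addition is not,
# so the order in which parallel threads combine partial sums can change
# the final result.
a, b, c = 1e16, -1e16, 1.0

left_to_right = (a + b) + c    # a and b cancel first, then c is added -> 1.0
right_to_left = a + (b + c)    # c is absorbed into b before a cancels it -> 0.0

print(left_to_right, right_to_left)
assert left_to_right != right_to_left
```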

Publications

Computer Science

Discrete Mathematics

Physics and Chemistry