CISC 879/PHYS 838

Group #1


Self-Scheduling Algorithm for Trajectory Calculations: Field Lines

Advisor: S. Parhi
Members:
(in Alphabetical Order)
Salim Khan
Rui Liu
Gang Qin
Zhicheng Yan

 

1. Introduction


The study of solar wind turbulence is important for several reasons: first, it provides information on the coupling between the coronal plasma and the Earth's magnetosphere, which has its own distinct magnetic field; second, it helps us understand the transport of energetic charged particles throughout the solar-terrestrial environment.
 

The ability to trace magnetic field lines can serve as a useful tool in studying solar wind turbulence (Matthaeus et al. 1995). However, it is generally difficult to do the tracing from analytic solutions, because there are very few such solutions in the presence of a turbulent magnetic field. As a result, we solve the equations numerically (Qin et al. 2000). A major drawback of the numerical approach is the intensive computation required, so we adopted a parallel programming technique to solve the large-scale problem in a reasonable amount of time.
 

Our work consists of three steps: first, we choose the transverse fluctuation model to represent the physical phenomena; second, we simulate the trajectories of a few magnetic field lines; and finally, we derive the magnetic field geometry by calculating a large number of field lines in parallel with the aid of the Message Passing Interface (MPI). The last phase of our project is the most relevant to the class, and this report focuses primarily on that aspect.
 
 

2. Physical Background

Field lines, or flux tubes, describe the magnetic field, which permeates all of space. A field line is everywhere tangent to the local magnetic field B.
 

Here we use what is commonly referred to as a slab-2D model to describe the magnetic field. In this model, the total magnetic field is written as

B = B_0 e_z + B_slab(z) + B_2D(x, y),

where B_0 is a constant, and B_2D(x, y) and B_slab(z) are both perpendicular to e_z.

The 2D component B_2D(x, y) can be obtained through an inverse FFT (Fast Fourier Transform) of

B_x(k_xm, k_yn) = A [S(k_xm, k_yn)]^(1/2) exp(i φ_mn),

B_y(k_xm, k_yn) = A [S(k_xm, k_yn)]^(1/2) exp(i ψ_mn),

where S(k_xm, k_yn) = 1 + [(k_xm^2 + k_yn^2) l_c^2]^(-5/6), φ_mn and ψ_mn are random phases, A is a constant, and l_c is a measure of the perpendicular correlation length of the turbulence.
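
As an illustration, the spectral amplitudes might be filled in as in the sketch below before being handed to an inverse-FFT routine (Fortran 90 free form; the wavenumber grid, the treatment of the k = 0 mode, and the routine name fill_2d_spectrum are illustrative choices rather than our production code, and the inverse-FFT call itself is omitted because it depends on the library used):

    ! Fill the 2D spectral amplitudes B_x(k_x, k_y) and B_y(k_x, k_y)
    ! with amplitude A*sqrt(S) and independent random phases, ready to
    ! be passed to an inverse-FFT routine.
    subroutine fill_2d_spectrum(nx, ny, a, lc, bxk, byk)
      implicit none
      integer, intent(in) :: nx, ny
      double precision, intent(in) :: a, lc
      complex(kind(0.0d0)), intent(out) :: bxk(nx, ny), byk(nx, ny)
      double precision, parameter :: twopi = 6.283185307179586d0
      double precision :: kx, ky, s, phi, psi
      integer :: m, n

      do n = 1, ny
         do m = 1, nx
            kx = dble(m - 1)              ! illustrative wavenumber grid
            ky = dble(n - 1)
            if (m == 1 .and. n == 1) then
               s = 0.0d0                  ! leave out the k = 0 mean mode
            else
               s = 1.0d0 + ((kx**2 + ky**2)*lc**2)**(-5.0d0/6.0d0)
            end if
            call random_number(phi)
            call random_number(psi)
            phi = twopi*phi               ! random phase for B_x
            psi = twopi*psi               ! random phase for B_y
            bxk(m, n) = a*sqrt(s)*cmplx(cos(phi), sin(phi), kind(0.0d0))
            byk(m, n) = a*sqrt(s)*cmplx(cos(psi), sin(psi), kind(0.0d0))
         end do
      end do
    end subroutine fill_2d_spectrum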

Similarly, the slab component B_slab(z) can be obtained through an inverse FFT of

B_x(k_n) = A [S(k_n)]^(1/2) exp(i φ_n),

B_y(k_n) = A [S(k_n)]^(1/2) exp(i ψ_n),

where S(k_n) = (1 + k_n^2 l_c^2)^(-5/6), φ_n and ψ_n are random phases, A is a constant, and l_c is a measure of the parallel correlation length of the turbulence.

Magnetic field lines are then obtained by integrating the following system of equations:

dx / ds = B_x / |B|,

dy / ds = B_y / |B|,

dz / ds = B_z / |B|,

where s is the arc length along the field line measured from the starting position.
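
In code, the right-hand side of this system is just the unit vector along the local field. A minimal sketch is given below (Fortran 90 free form; purely for illustration the field is reduced to the mean field B_0 e_z, whereas the real calculation evaluates the full slab-2D field defined above):

    ! Right-hand side of the field-line equations: dr/ds = B/|B|.
    ! For illustration the field here is just the mean field B_0 e_z;
    ! the real code adds the slab and 2D fluctuations, so b would
    ! depend on the position r = (x, y, z).
    subroutine fieldline_rhs(s, r, drds)
      implicit none
      double precision, intent(in)  :: s, r(3)
      double precision, intent(out) :: drds(3)
      double precision, parameter   :: b0 = 1.0d0
      double precision :: b(3), bmag

      b = (/ 0.0d0, 0.0d0, b0 /)          ! placeholder: mean field only
      bmag = sqrt(b(1)**2 + b(2)**2 + b(3)**2)
      drds = b / bmag
    end subroutine fieldline_rhs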
 

3. Numerical Method

To solve this kind of ODE (ordinary differential equation), commonly written as dy/dx = f(x, y), we use the Runge-Kutta method. Its characteristic feature is the use of trial steps at the midpoint of an interval to cancel out lower-order error terms. The relevant equation we use is:

y_(n+1) = y_n + (1/6) k_1 + (1/3) k_2 + (1/3) k_3 + (1/6) k_4 + O(h^5),

where k_1 = h f(x_n, y_n), k_2 = h f(x_n + (1/2)h, y_n + (1/2)k_1), k_3 = h f(x_n + (1/2)h, y_n + (1/2)k_2), k_4 = h f(x_n + h, y_n + k_3), and h is a small step in x.
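
As an illustration, one such Runge-Kutta step for the field-line system of Section 2 might look like the sketch below (Fortran 90 free form, fixed step h, calling the fieldline_rhs sketch given earlier; our production code instead adjusts the step size adaptively):

    ! One fourth-order Runge-Kutta update of a field-line position
    ! r = (x, y, z) along the arc length s, following the formula
    ! above with k_i = h * f evaluated at the trial points.
    subroutine rk4_step(r, s, h)
      implicit none
      double precision, intent(inout) :: r(3)
      double precision, intent(in)    :: s, h
      double precision :: k1(3), k2(3), k3(3), k4(3)

      call fieldline_rhs(s,           r,             k1);  k1 = h*k1
      call fieldline_rhs(s + 0.5d0*h, r + 0.5d0*k1,  k2);  k2 = h*k2
      call fieldline_rhs(s + 0.5d0*h, r + 0.5d0*k2,  k3);  k3 = h*k3
      call fieldline_rhs(s + h,       r + k3,        k4);  k4 = h*k4

      r = r + k1/6.0d0 + k2/3.0d0 + k3/3.0d0 + k4/6.0d0
    end subroutine rk4_step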
 

4. Computational Plan

The magnetic field lines whose trajectories we intend to trace possess the physical property of mutual exclusivity, i.e., the field lines do not cross each other, so each line can be traced independently of the others. This property suggests that the numerical solution should be embarrassingly parallel.

However, we are still left with design choices in the implementation of the parallel algorithm. For reasons described later, the computation time per magnetic field line depends on the curvature of the field line: in general, the more the field line zigzags, the longer it takes to trace. In this scenario, load balancing becomes an important criterion for the efficiency of the parallel algorithm. Toward this end, we considered three load-balancing methods to offset the variability in per-line computation time.

The first strategy was static load balancing. Working on the assumption that the curvature of neighboring field lines does not vary excessively along their trajectories, we distributed neighboring field lines alternately among the processors, as sketched below. This method involves no communication between the processors. However, we could not guarantee that our assumption about neighboring field lines would hold once turbulence is introduced into the magnetic field, which prevented us from bounding the efficiency of the method.
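
In its simplest form, this static scheme is a round-robin assignment of line indices to processors, as in the following fragment (rank and nprocs are assumed to come from MPI_COMM_RANK and MPI_COMM_SIZE, and nlines is the total number of field lines):

    ! Static round-robin assignment: processor 'rank' traces every
    ! nprocs-th field line; no communication is needed.
    do line = rank + 1, nlines, nprocs
       ! ... trace field line number 'line' and write out its trajectory ...
    end do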

The second strategy was dynamic load balancing, with a designated master processor performing a run-time distribution of the field lines to slave processors. Initially, the master processor assigns one field line to each slave processor, and the slave processors trace their assigned lines. Upon completion, a slave processor writes the results it has obtained to a file and informs the master processor of its availability. The master processor, in turn, assigns a new field line to that slave processor, and continues to do so until all the field lines have been distributed. Even though there is a slight communication overhead compared to the static method, this solution bounds the worst-case efficiency; a sketch of the dispatch loop follows.
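
The sketch below shows this master/slave dispatch using the Fortran 90 MPI module bindings (the line count nlines, the message tags, and the placeholder comment for the actual tracing are illustrative, not our production code; it assumes at least one slave and nlines no smaller than the number of slaves):

    ! Master/slave dynamic distribution of field lines.  Rank 0 hands
    ! out one field-line index at a time; each slave traces its line,
    ! writes its own output file, and reports back for more work.
    ! An index of 0 tells a slave to stop.
    program dispatch
      use mpi
      implicit none
      integer, parameter :: nlines = 900   ! illustrative number of field lines
      integer :: rank, nprocs, ierr, line, next, src
      integer :: status(MPI_STATUS_SIZE)

      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)

      if (rank == 0) then
         ! Master: give every slave one line to start with.
         do src = 1, nprocs - 1
            call MPI_SEND(src, 1, MPI_INTEGER, src, 1, MPI_COMM_WORLD, ierr)
         end do
         ! Hand out remaining lines (or a stop signal) as slaves finish.
         do line = nprocs, nlines + nprocs - 1
            call MPI_RECV(src, 1, MPI_INTEGER, MPI_ANY_SOURCE, 2, &
                          MPI_COMM_WORLD, status, ierr)
            next = merge(line, 0, line <= nlines)
            call MPI_SEND(next, 1, MPI_INTEGER, src, 1, MPI_COMM_WORLD, ierr)
         end do
      else
         ! Slave: trace assigned lines until told to stop.
         do
            call MPI_RECV(line, 1, MPI_INTEGER, 0, 1, MPI_COMM_WORLD, &
                          status, ierr)
            if (line == 0) exit
            ! ... trace field line number 'line' and write its points to a file ...
            call MPI_SEND(rank, 1, MPI_INTEGER, 0, 2, MPI_COMM_WORLD, ierr)
         end do
      end if

      call MPI_FINALIZE(ierr)
    end program dispatch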

Finally, we attempted a more refined approach to dynamic load balancing by dividing each individual field line into smaller slices, which were in turn distributed across the processors. This approach involves a tradeoff: the larger the slice, the closer the scheme comes to the second strategy, but with additional communication; the smaller the slice, the better the load balance, but the greater the communication time between the processors.

Given these performance characteristics, we chose the second strategy, in which the master processor dynamically distributes entire magnetic field lines to the slave processors. The calculation is carried out in the following manner. To trace a field line, a slave processor starts from a point on the initial surface (s = s_0 = 0) and advances to a new point on a nearby surface (s = s_1 = s_0 + Δs_0) by the Runge-Kutta method. It then starts from the new point (s = s_1) and advances to the next point on another nearby surface (s = s_2 = s_1 + Δs_1). This process is repeated until the desired number of points is obtained, and the final field line is formed by connecting these points.
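
A sketch of this stepping loop for a single field line, reusing the rk4_step routine sketched in Section 3, is given below (the argument list and the fixed step ds are illustrative; the actual code chooses the step size adaptively):

    ! Trace a single field line by repeated Runge-Kutta steps along the
    ! arc length s, storing npts points.  A fixed step ds is used here
    ! for simplicity.
    subroutine trace_field_line(r0, npts, ds, traj)
      implicit none
      double precision, intent(in)  :: r0(3), ds
      integer, intent(in)           :: npts
      double precision, intent(out) :: traj(3, npts)
      double precision :: r(3), s
      integer :: i

      r = r0                        ! point on the initial surface, s = 0
      s = 0.0d0
      do i = 1, npts
         call rk4_step(r, s, ds)    ! advance to the nearby surface s + ds
         s = s + ds
         traj(:, i) = r             ! the field line connects these points
      end do
    end subroutine trace_field_line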

In general, the computation of the magnetic field by the inverse FFT subroutine uses a considerable amount of memory, especially for large grid sizes. Each slave processor therefore needs access to a large amount of memory in order to call the subroutine at the beginning of its calculation. Fortunately, this was not a major concern for us because we used the slab-2D model: if nx, ny, nz denote the number of grid points in the x, y, z directions respectively, the total memory requirement drops from nx*ny*nz in the general case to nx*ny + nz.
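
For example, with nx = ny = nz = 1024, the general case would require storing about 1024^3 ≈ 1.1 x 10^9 grid values per field component, whereas the slab-2D model needs only 1024^2 + 1024 ≈ 1.05 x 10^6, a reduction of roughly three orders of magnitude.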
 
 

5. Roadblocks


As it turned out, the majority of our time was spent not in crafting the parallel algorithms we had devised, but in getting the MPI code we had written to compile. Our starting point was the serial implementation running on the VMS machines at the Bartol Research Institute. After inserting MPI code blocks to parallelize the code, we attempted to port the solution to the CIS department's DEC Alpha cluster. We were unsuccessful, primarily because the Fortran 77 compiler on the DEC Alpha cluster did not produce object code that behaved the same as that produced by the Fortran 77 compiler on the VMS machines. After spending a lot of time on this, we gave up on the DEC Alpha cluster and decided to run our parallel code on Wulfie, the Linux Beowulf cluster at the Bartol Research Institute. Once again, we had considerable difficulty compiling the Fortran MPI source code. After making changes to the underlying serial code, we were finally able to get the parallel code to run on Wulfie.

The numerical approach to solving for the magnetic field line trajectories uses the Runge-Kutta method to perform the integration. The Runge-Kutta implementation incorporated into our solution calculates the step size dynamically. Since the step size is entirely data dependent, we are unable to give a detailed theoretical analysis of the computation involved.
 

6. Discussion and Conclusion

Table 1 and Fig. 1 show the total time in seconds taken to compute the magnetic field trajectories for 100, 300, and 900 magnetic field lines as the number of slave processors is increased. It is easy to see that the time taken to perform a typical calculation with a fixed number of magnetic field lines decreases nearly in inverse proportion to the number of processors, i.e., the speedup is close to linear. This is very encouraging, because when Bartol acquires 128 processors, which we expect to happen in a few weeks, we will be able to simulate a million magnetic field lines in just a couple of hours. This will unravel the evolution of magnetic field lines starting from their origin on the solar surface. Such a gigantic task has never been undertaken before. It will shed light on several sets of observational data that are not well understood from theoretical calculations.
 
 

Figs. 2, 3 and 4 show how the magnetic field lines evolve in the z direction when the variance of the magnetic field fluctuations is 0.1, 0.5 and 0.9, respectively. A simple calculation comparing the stretching of the field lines in Fig. 2 and Fig. 3 indicates that the stretching is in the right proportion to the corresponding magnetic field variances. This is a good test confirming that our MPI additions to the serial code are correct and that the algorithm gives the correct results after being ported from the VMS to the Linux machines. The curves would actually look like straight lines if the true length of the z axis were specified. Our plan is to extend this to 2D and finally 3D calculations, in which we will see the actual evolution in 3D geometry. We will be able to see how a turbulent magnetic field behaves as it evolves under a variety of parameter constraints. This will have large implications for space plasma physics in understanding various turbulent phenomena associated with the solar and terrestrial magnetic fields.
 

References


1. Matthaeus, W. H., P. C. Gray, D. H. Pontius, Jr., and J. W. Bieber, Spatial structure and field-line diffusion in transverse magnetic turbulence, Phys. Rev. Lett. 75, 2136, 1995.

2. Qin, G., W. H. Matthaeus and J. W. Bieber, Test particle simulations of scattering and spatial diffusion in magnetostatic turbulence, AGU Spring Meeting, Washington, DC, 2000.