Get Required Packages 

 

The code requires MPI libraries to handle data transfer between the sub-domains. It is possible to run just one MPI task on a single CPU, but the MPI libraries are still required. To run the code on a personal computer, download and install the MPI and HDF5 (parallel version) libraries.

 

https://www.open-mpi.org -- openmpi (a free MPI library)  

https://www.hdfgroup.org -- HDF5 (install the parallel version)  

 

The easiest option is to use a package manager (e.g., Homebrew for Mac users) to install openmpi and HDF5. On a cluster, you may be able to load precompiled modules for MPI and a compatible version of HDF5. 

 

Modify the Code 

 

Generally, a unique simulation can be produced by modifying only the following three files:    

Makefile   

parameters.F90   

setup_PROBLEM_NAME.F90  

 

The role of each variable in the Makefile and in parameters.F90 should be clear from the text that appears next to it. The main structure of a setup is shown in setup_template.F90. Additional setup-specific subroutines can be added to a setup file and called from the main subroutines listed in setup_template.F90. Predefined subroutines from other modules (e.g., help_setup.F90 and memory.F90) can also be called from the setup file after including those modules. A schematic sketch of this layout is shown below.
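For orientation only, a setup file is a Fortran source whose entry points are called by the main code. The sketch below is purely schematic: InitUser appears in the shock example later on this page, but the module name, the variable, and the helper routine are hypothetical, and the real interface is the one in setup_template.F90.

module setup_my_problem
   implicit none
   ! Illustrative setup-specific variable (hypothetical, not part of the template).
   real :: drift_speed = 0.1
contains

   ! InitUser: where setup-specific parameters are initialized
   ! (the shock example on this page changes its parameters in this subroutine).
   subroutine InitUser
      call SetDriftSpeed(0.2)
   end subroutine InitUser

   ! A setup-specific helper; such routines can be added freely and
   ! called from the main subroutines listed in setup_template.F90.
   subroutine SetDriftSpeed(v)
      real, intent(in) :: v
      drift_speed = v
   end subroutine SetDriftSpeed

end module setup_my_problem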

 

Compile and Run 

 

The code can be compiled by typing the command "make" at the shell prompt/Terminal in the code folder. If the compilation succeeds, it produces an executable called "PICTOR". Use mpirun, or a job submission script on a computing cluster, to run the executable. For example,

 

mpirun -n 4 ./PICTOR 

 

will launch 4 parallel MPI tasks to run the simulation. Note that parameters.F90 must be configured to use all of the MPI tasks (set the values of nSubDomainsX, nSubDomainsY, and nSubDomainsZ appropriately), as in the sketch below.
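A minimal sketch of the relevant lines, assuming the usual convention that the product of the three values equals the number of MPI tasks; the exact declarations in parameters.F90 may differ.

! Hypothetical excerpt from parameters.F90: a 2 x 2 x 1 domain
! decomposition, i.e., 4 sub-domains for "mpirun -n 4".
integer, parameter :: nSubDomainsX = 2
integer, parameter :: nSubDomainsY = 2
integer, parameter :: nSubDomainsZ = 1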

 

Using GPUs 

The GPU modules are written mostly in CUDA Fortran, and the PGI Fortran compiler is required to compile the code. The following changes are required in the Makefile to produce a GPU-compatible executable.

--Change the FC variable in the Makefile to pgf90 (or ftn on a Cray machine)

--Set GPU_EXCLUSIVE=y

--Set GPU-compatible FFLAGS

 

Compile the code and run the executable PICTOR. Note that the code is designed to use only one GPU per MPI task. If there are multiple GPUs on a machine, they must be used in exclusive mode (e.g., the --gpu-compute=exclusive option may be required in a Slurm submission script).

 

 

Output 

Parameters 

The simulation parameters, most of which are defined in parameters.F90, are saved in a file named "param".

 

Field Data 

Fld_$TIMESTEP$ files contain the values of the field quantities over the entire grid. Two adjacent elements in each field data matrix are separated by resgrid grid cells. The variable names and the corresponding field quantities are as follows:

 

     EM Fields  

          Electric field : Ex, Ey, Ez

          Magnetic field : Bx, By, Bz

          Current        : Jx, Jy, Jz   (saved only if save_tot_curr=.true. in parameters.F90)

 

     Field quantities derived from the particles

          In the following, $FlvID$ is the ID of each species (by default $FlvID$ = 1 is for the ions and $FlvID$ = 2 is for the electrons).

          The averaging <> includes all the particles within a square (2D) or cube (3D) of size resgrid.

          Charge Flux (<Charge x Velocity>) : Jx$FlvID$, Jy$FlvID$, Jz$FlvID$

          Charge Density (<Charge>)         : D$FlvID$

 

Particle Data

Prtl_$TIMESTEP$ files contain the data for a selected number of particles, namely those whose tag attribute is a non-zero integer. By default, 1 in every psave_ratio particles at the beginning of the simulation is assigned a unique tag. The x, y, and z arrays contain the spatial coordinates of the particles, and the u, v, w arrays contain the components of their three-velocities (three-velocity = particle's Lorentz factor x velocity / c). An array named "flv" contains integer IDs which are unique for each species, including the test particles. The data corresponding to a particle appears at the same index in all these arrays, but the ordering of particles within the arrays is essentially random. The local field quantities can also be saved for each particle. If the logical variable "save_prtl_local_fld" in parameters.F90 is set to .true., then the pEx, pEy, pEz arrays contain the local electric field and the pBx, pBy, pBz arrays contain the local magnetic field at each particle's position. Additionally, if the logical variable "save_prtl_local_curr" is set to .true., then pJx, pJy, and pJz contain the net current at the particle's position.
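Because u, v, and w store (Lorentz factor) x (velocity)/c, a particle's Lorentz factor and physical velocity follow directly from the stored components (u, v, w):

\gamma = \sqrt{1 + u^2 + v^2 + w^2}, \qquad \frac{v_x}{c} = \frac{u}{\gamma}, \quad \frac{v_y}{c} = \frac{v}{\gamma}, \quad \frac{v_z}{c} = \frac{w}{\gamma}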

 

Spectra

spec_$TIMESTEP$ files contain spectra of the particles. The particles are binned according to their speed and their Lorentz factor. The array "vbin" ("gbin") contains the edges of the speed (Lorentz factor) bins, and the matrix "vspec" ("gspec") contains the number of particles in each speed (Lorentz factor) bin.

The size of the "vspec" ("gspec") matrix = size of the "vbin" ("gbin") array x the number of species.

 

 

Visualization and Analysis

 

Currently, a basic version of the visualization and analysis scripts, written in Matlab, is included with the main code. The scripts are in the folder named "VIS". A much larger set of scripts for detailed visualization/analysis will be released soon.

 

 

Examples 

 

Weibel instability

Counter-streaming plasma beams are known to be unstable. If the beams are moving at moderately relativistic speeds, the Weibel instability produces filamentary structures nearly parallel to the flow (Rahul Kumar et al. 2015, ApJ 806, 165). In setup_weibel.F90, two counter-streaming neutral beams are initialized in a homogeneous and periodic box. The SETUP variable in the Makefile is currently set to "setup_weibel" to use this setup. Run the executable, and the output files will be stored in a subfolder named "data". One can use the Matlab script OneFld2D.m in the VIS folder to visualize the field quantities. At the 1000th time step, the out-of-plane magnetic field should look similar to the accompanying figure.

A non-relativistic perpendicular shock

A shock can be produced by reflecting a supersonic plasma flow off a piston (conducting wall). This setup is implemented in setup_shock.F90 (see Rahul Kumar et al. 2018 for a general description of the shock setup). To use this setup, change the SETUP variable in the Makefile to "setup_shock" and then compile and run the code. At the 4800th time step, the structure of the out-of-plane magnetic field (B_z) should look similar to the accompanying figure. The simulation parameters for the shock simulations can be changed in the "InitUser" subroutine in setup_shock.F90.

Physical Units

 

 

The physical quantities obtained from the simulations can be expressed in a dimensionless form. 

The spatial scales can be normalized to the electron skin depth (1 grid cell = 1/compe electron skin depths). Time is normalized to the inverse electron plasma frequency:

time x electron plasma frequency = step number x c/compe
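As a purely illustrative example, suppose parameters.F90 were set up with compe = 10 (grid cells per electron skin depth) and a numerical speed of light c = 0.45 (in grid cells per time step); both values are hypothetical here. Output written at time step 1000 would then correspond to

t \, \omega_{pe} = 1000 \times \frac{0.45}{10} = 45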

The following normalizations hold for the electric and the magnetic fields:

( E^2 / 4\pi ) / (total mass density x c^2) = E^2_code / ((mi + me) * epc * c^2)

( B^2 / 4\pi ) / (total mass density x c^2) = B^2_code / ((mi + me) * epc * c^2)

 

The numerical values of all the quantities on the right-hand side of the equations can be obtained from the output files.  

 

  


 

 
