
### Description

In this part of the project you will decompose your 2D material into row blocks (splitting along the Y direction) and use MPI to perform a single heat equation simulation over a very large region in parallel. The same constant boundary conditions and finite difference heat equation from part 1 will be used.

Because the heat equation update requires information from one neighbor on every side of a point, every process must allocate a padded array that can accommodate one ghost row above and one ghost row below its block. Before each time step, each process must send the 0th row of the unpadded part of its array to the previous rank (except rank 0) and send the last row of the unpadded part of its array to the next rank (except the last rank). Rank 0's top padding row and the last rank's bottom padding row are instead filled with the boundary value.

While this part closely follows the previous class materials on domain decomposition and ghost regions, it is *highly recommended* that you sketch out a small example with a few processes using pencil and paper before attempting to code it up. This part of the project does not depend on part 2. This project has many parts, so note that you are expected to *push your code at least 8 times* throughout the project with meaningful commit messages.
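As a rough guide, here is a minimal sketch of one way the per-step ghost-row exchange could look using `MPI_Sendrecv`, assuming each rank stores its block in a padded `(nRowsLocal + 2) x nCols` array of doubles with the 0th and last rows reserved as ghost rows. The function and variable names (`exchangeGhostRows`, `u`, `nRowsLocal`, `nCols`, `bcValue`) are illustrative assumptions, not part of the assignment, and your own code may be organized differently.

```c
/* Sketch only: exchange ghost rows before a time step, assuming a padded
 * (nRowsLocal + 2) x nCols block per rank, stored row-major in u, where
 * row 0 and row nRowsLocal + 1 are the ghost rows. */
#include <mpi.h>

void exchangeGhostRows(double *u, int nRowsLocal, int nCols,
                       double bcValue, int rank, int nProcs, MPI_Comm comm)
{
    int above = (rank == 0)          ? MPI_PROC_NULL : rank - 1;
    int below = (rank == nProcs - 1) ? MPI_PROC_NULL : rank + 1;

    /* Send my first real row up; receive my bottom ghost row from below. */
    MPI_Sendrecv(&u[1 * nCols],                nCols, MPI_DOUBLE, above, 0,
                 &u[(nRowsLocal + 1) * nCols], nCols, MPI_DOUBLE, below, 0,
                 comm, MPI_STATUS_IGNORE);

    /* Send my last real row down; receive my top ghost row from above. */
    MPI_Sendrecv(&u[nRowsLocal * nCols],       nCols, MPI_DOUBLE, below, 1,
                 &u[0],                        nCols, MPI_DOUBLE, above, 1,
                 comm, MPI_STATUS_IGNORE);

    /* Ranks on the global edges fill their ghost row with the constant
     * boundary value instead of receiving it from a neighbor. */
    if (rank == 0)
        for (int j = 0; j < nCols; j++) u[j] = bcValue;
    if (rank == nProcs - 1)
        for (int j = 0; j < nCols; j++) u[(nRowsLocal + 1) * nCols + j] = bcValue;
}
```

Passing `MPI_PROC_NULL` for the missing neighbor on the edge ranks turns those sends and receives into no-ops, so the communication calls need no special-casing; paired `MPI_Send`/`MPI_Recv` or nonblocking calls would work just as well, as long as the exchange completes before the update.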

### Deliverables


1. **Makefile** (provided, you'll edit)

2. **weak\_scaling.sh** (not provided)

3. **checkPt_scaling.sh** (not provided)

4. **results/slurm-weak.out** (not provided, you'll rename from slurm-######.out)

5. **results/slurm-checkpt.out** (not provided, you'll rename from slurm-######.out)

6. **writeup/writeup.md** (not provided) where you'll put your writeup

7. **writeup/references.md** (not provided) where you'll list all your references and any collaborators you discussed the project with

8. **writeup/README.md** (provided, no edits needed)

9. **writeup/fig/README.md** (provided, no edits needed)

10. **writeup/fig/weak.png** (not provided, you'll put in writeup/fig folder) showing timing of your weak scaling test results

11. **writeup/fig/checkpt.png** (not provided, you'll put in writeup/fig folder) showing timing of your check point scaling results

12. **writeup/fig/pointPlotsPar###.png** for ###=000, 001, 005, and 049 (not provided, you'll move from results to writeup/fig folder) showing your small simulation test run in parallel on 4 processes

13. **test/README.md** (provided, no edits needed)

14. **test/readPlotSnaps.py** (provided, no edits needed) Python script to plot the snapshots of the small simulation cases

15. **test/unitTestMatSer.c** (provided, no modifications needed) so you can verify that your part 1 code still works in serial and perhaps get inspiration for debugging your parallel code

16. **test/unitTestCheckPtSer.c** (provided, no modifications needed) so you can verify that your part 1 code still works in serial and perhaps get inspiration for debugging your parallel code

17. **test/unitTestSimSer.c** (provided, no modifications needed) so you can verify that your part 1 code still works in serial and perhaps get inspiration for debugging your parallel code

18. **test/pointSimSer.c** (provided, no modifications needed) main program to run a small simulation in serial that records all time snapshots

