1. Sum of primes:
http://people.sc.fsu.edu/%7Ejburkardt/presentations/fdi_2008_lecture8.pdf#page=16
Add up the prime numbers from 2 to N.
Each of P processors will simply take about 1/P of the range of
numbers to check, and add up the primes locally.
When it's done, it will send the partial result to processor 0.
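A minimal sketch of that plan in C with MPI (the limit N and the simple trial-division primality test are just illustrative choices): each process checks about 1/P of the range, sums its primes locally, and sends the partial sum to process 0.

#include <mpi.h>
#include <stdio.h>

static int is_prime(long n) {
    if (n < 2) return 0;
    for (long d = 2; d * d <= n; d++)
        if (n % d == 0) return 0;
    return 1;
}

int main(int argc, char *argv[]) {
    const long N = 1000;          /* upper limit, assumed for illustration */
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each process takes roughly 1/P of the range 2..N. */
    long lo = 2 + rank * (N - 1) / size;
    long hi = 2 + (rank + 1) * (N - 1) / size - 1;
    long local_sum = 0;
    for (long n = lo; n <= hi; n++)
        if (is_prime(n)) local_sum += n;

    if (rank != 0) {
        /* Send the partial result to process 0. */
        MPI_Send(&local_sum, 1, MPI_LONG, 0, 0, MPI_COMM_WORLD);
    } else {
        long total = local_sum, part;
        for (int p = 1; p < size; p++) {
            MPI_Recv(&part, 1, MPI_LONG, p, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            total += part;
        }
        printf("Sum of primes in [2, %ld] = %ld\n", N, total);
    }
    MPI_Finalize();
    return 0;
}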
2. Deadlock avoidance when using blocking send and receive in order.
http://www.cs.ucsb.edu/~hnielsen/cs140/mpi-deadlocks.html
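A minimal sketch of the ordering idea (assuming an even number of processes paired as neighbors): even ranks send first and then receive, odd ranks do the reverse, so no two processes are ever both stuck in a blocking call waiting for each other.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank, size, tag = 0;
    float mydata, yourdata;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    mydata = (float)rank;

    /* Pair up neighbors: 0<->1, 2<->3, ... (assumes size is even). */
    int partner = (rank % 2 == 0) ? rank + 1 : rank - 1;
    if (rank % 2 == 0) {
        MPI_Send(&mydata, 1, MPI_FLOAT, partner, tag, MPI_COMM_WORLD);
        MPI_Recv(&yourdata, 1, MPI_FLOAT, partner, tag, MPI_COMM_WORLD, &status);
    } else {
        MPI_Recv(&yourdata, 1, MPI_FLOAT, partner, tag, MPI_COMM_WORLD, &status);
        MPI_Send(&mydata, 1, MPI_FLOAT, partner, tag, MPI_COMM_WORLD);
    }
    printf("rank %d got %f from rank %d\n", rank, yourdata, partner);
    MPI_Finalize();
    return 0;
}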
- In MPI, when does a non-blocking recv message return?
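A non-blocking receive (MPI_Irecv) returns immediately, before the message has necessarily arrived; the receive only completes when MPI_Wait (or a successful MPI_Test) is later called on the request. A minimal fragment (assumes MPI is initialized and that rank 1 posts a matching send):

float yourdata;
MPI_Request req;
MPI_Status status;
int tag = 0;
/* Returns at once; yourdata must not be read until the request completes. */
MPI_Irecv(&yourdata, 1, MPI_FLOAT, 1, tag, MPI_COMM_WORLD, &req);
/* ... useful computation can overlap the communication here ... */
MPI_Wait(&req, &status);   /* blocks until the message has actually arrived */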
- What is an MPI communicator? What is MPI_COMM_WORLD?
A communicator defines a group of processes that are allowed to communicate with each other; it is simply a way for the user to organize processes into sub-groups. MPI_COMM_WORLD is the predefined communicator containing all of the processes started with the program.
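One common way to build such sub-groups (a sketch, assumed to run after MPI_Init) is MPI_Comm_split; here MPI_COMM_WORLD is split into two sub-communicators by rank parity:

int world_rank, sub_rank;
MPI_Comm subcomm;
MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
/* "color" (here, rank parity) selects which sub-group a process joins. */
MPI_Comm_split(MPI_COMM_WORLD, world_rank % 2, world_rank, &subcomm);
MPI_Comm_rank(subcomm, &sub_rank);   /* this process's rank within the sub-group */
MPI_Comm_free(&subcomm);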
- In MPI, what is a process rank?
- True or false: In MPI you set the number of processes when you write the source code.
False. The number of processes is specified at execution time (for example, as an argument to mpiexec/mpirun), not in the source code.
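A minimal illustration (assuming a typical mpiexec/mpirun launcher): the same executable is started with however many processes are requested on the command line, and each process discovers its rank and the total process count at run time.

#include <mpi.h>
#include <stdio.h>

/* Launched as, e.g.:  mpiexec -n 4 ./hello   -- the 4 is chosen at run time. */
int main(int argc, char *argv[]) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many processes were started */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id: 0..size-1 */
    printf("process %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}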
- Give a short piece of pseudocode that illustrates the master/slave programming model.
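One possible answer, sketched as C-like MPI pseudocode (chunk, mychunk, partial, and do_work are hypothetical placeholders): the master (rank 0) hands out work and collects results; every other rank is a slave/worker.

if (rank == 0) {                       /* master: hand out work, collect results */
    for (int s = 1; s < size; s++)
        MPI_Send(&chunk[s], 1, MPI_INT, s, 0, MPI_COMM_WORLD);
    for (int s = 1; s < size; s++) {
        MPI_Recv(&partial, 1, MPI_DOUBLE, s, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        total += partial;
    }
} else {                               /* slave: do the assigned work, report back */
    MPI_Recv(&mychunk, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    partial = do_work(mychunk);        /* hypothetical work routine */
    MPI_Send(&partial, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
}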
- Explain if the following MPI code segment is correct or not, and why:
Process 0 executes:
MPI_Recv(&yourdata, 1, MPI_FLOAT, 1, tag, MPI_COMM_WORLD, &status);
MPI_Send(&mydata, 1, MPI_FLOAT, 1, tag, MPI_COMM_WORLD);
Process 1 executes:
MPI_Recv(&yourdata, 1, MPI_FLOAT, 0, tag,MPI_COMM_WORLD, &status);
MPI_Send(&mydata, 1, MPI_FLOAT, 0, tag, MPI_COMM_WORLD);
Incorrect. Both processes start with a blocking receive, so each waits for the other to send and neither ever reaches its send: the system deadlocks.
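One way to fix it (a sketch): reverse the order on one of the two processes so that a send meets a receive.

/* Process 0: */
MPI_Recv(&yourdata, 1, MPI_FLOAT, 1, tag, MPI_COMM_WORLD, &status);
MPI_Send(&mydata, 1, MPI_FLOAT, 1, tag, MPI_COMM_WORLD);
/* Process 1: */
MPI_Send(&mydata, 1, MPI_FLOAT, 0, tag, MPI_COMM_WORLD);
MPI_Recv(&yourdata, 1, MPI_FLOAT, 0, tag, MPI_COMM_WORLD, &status);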
- Suppose that process 0 has variable A, and process 1 also has a variable A. Write MPI-like pseudocode to exchange these values between the processes.
P0:
send(P1, A)
receive(P1, A)      // safe: A has already been sent, so it may be overwritten
P1:
receive(P0, tmp)    // use a temporary so P1's A is not overwritten before it is sent
send(P0, A)
A = tmp
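An alternative (a sketch, assuming A is a single float and rank is known): MPI_Sendrecv_replace issues the matched send and receive as one call on both processes, so neither the call ordering nor buffering can cause deadlock, and no explicit temporary is needed.

int other = 1 - rank;    /* exchange between ranks 0 and 1 */
MPI_Sendrecv_replace(&A, 1, MPI_FLOAT, other, 0, other, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
/* A now holds the other process's original value */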
- Explain the purpose of each of the library calls listed (a sketch using most of them follows the list).
· MPI_Init
· MPI_Finalize
· MPI_Comm_rank
· MPI_Comm_size
· MPI_Send
· MPI_Recv
· MPI_Barrier
· MPI_Bcast
· MPI_Scatter
· MPI_Gather
· MPI_Reduce
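MPI_Send and MPI_Recv are the point-to-point send and receive used in the examples above; the remaining calls appear in the sketch below, with each one's purpose noted in a comment (the data and sizes are only illustrative).

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[]) {
    int rank, size;
    MPI_Init(&argc, &argv);                  /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's rank (id) */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of processes */

    int n = 4;                               /* items per process (assumed) */
    MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);   /* root sends n to everyone */

    int *all = NULL;
    if (rank == 0) {                         /* root builds the full array */
        all = malloc(n * size * sizeof(int));
        for (int i = 0; i < n * size; i++) all[i] = i;
    }
    int *mine = malloc(n * sizeof(int));
    /* split the root's array into equal pieces, one per process */
    MPI_Scatter(all, n, MPI_INT, mine, n, MPI_INT, 0, MPI_COMM_WORLD);

    int local = 0, total = 0;
    for (int i = 0; i < n; i++) local += mine[i];

    MPI_Barrier(MPI_COMM_WORLD);             /* all processes wait here (illustration only) */

    /* combine the partial sums into one total at the root */
    MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    int *sums = NULL;
    if (rank == 0) sums = malloc(size * sizeof(int));
    /* collect each process's local sum into an array at the root */
    MPI_Gather(&local, 1, MPI_INT, sums, 1, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        printf("total = %d\n", total);
        free(all); free(sums);
    }
    free(mine);
    MPI_Finalize();                          /* shut the MPI runtime down */
    return 0;
}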
- What is an MPI derived datatype and when would you use one? Give an example.
Derived datatypes are new datatypes built from the basic MPI datatypes (and from other derived types). They describe non-contiguous or mixed-type data, such as a column of a matrix or the fields of a C struct, so that it can be sent or received in a single message.
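For instance (a sketch), MPI_Type_vector can describe one column of a row-major matrix, which is not contiguous in memory, so the whole column can be sent with a single MPI_Send:

double a[4][4];
MPI_Datatype column;
MPI_Type_vector(4, 1, 4, MPI_DOUBLE, &column);   /* 4 blocks of 1 element, stride 4 */
MPI_Type_commit(&column);
MPI_Send(&a[0][1], 1, column, 1, 0, MPI_COMM_WORLD);  /* sends column 1 to rank 1 */
MPI_Type_free(&column);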
- In MPI, when does a blocking recv message return?
It returns only after a matching message has arrived and been copied into the receive buffer.
- True or false: You can write a program using MPI that will run across all of the cores of your multicore computer in parallel. Also, if this is possible, indicate if you think this is a good way to write the program. You must justify your answer to receive credit.
- Discuss marshalling in MPI. http://books.google.com/books?id=LLdekoUxmr0C&pg=PA86&lpg=PA86&dq=MPI+marshalling&source=bl&ots=aLn5ivDP2i&sig=ElkL0CwR55hVES-tQJjoAKBIIaI&hl=en&ei=U3nITNq7NYrAsAPqytW9DQ&sa=X&oi=book_result&ct=result&resnum=10&ved=0CE0Q6AEwCQ#v=onepage&q=MPI%20marshalling&f=false
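Besides derived datatypes, MPI also supports explicit marshalling with MPI_Pack/MPI_Unpack: mixed data are packed into one contiguous buffer, sent as MPI_PACKED, and unpacked on the other side. A minimal sketch (assumes two processes, with rank 0 sending to rank 1):

char buf[64];
int pos = 0, n = 10;
double x = 3.14;
/* Sender (rank 0): pack an int and a double into one buffer, then send it. */
MPI_Pack(&n, 1, MPI_INT, buf, sizeof(buf), &pos, MPI_COMM_WORLD);
MPI_Pack(&x, 1, MPI_DOUBLE, buf, sizeof(buf), &pos, MPI_COMM_WORLD);
MPI_Send(buf, pos, MPI_PACKED, 1, 0, MPI_COMM_WORLD);

/* Receiver (rank 1): receive the packed buffer and unpack in the same order. */
MPI_Recv(buf, sizeof(buf), MPI_PACKED, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
pos = 0;
MPI_Unpack(buf, sizeof(buf), &pos, &n, 1, MPI_INT, MPI_COMM_WORLD);
MPI_Unpack(buf, sizeof(buf), &pos, &x, 1, MPI_DOUBLE, MPI_COMM_WORLD);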
- When does a blocking send return?
16. Programming question 1: http://www.cs.usfca.edu/~peter/cs220/mt1_old_key#page=6.pdf
17. Programming question 2: http://www.cs.usfca.edu/~peter/cs220/mt1_key#page=4.pdf