Monday, October 18, 2010

MPI Programming Exam Questions

Feel free to give answers to unanswered questions as comments


1.  Sum of primes:

http://people.sc.fsu.edu/%7Ejburkardt/presentations/fdi_2008_lecture8.pdf#page=16

Add up the prime numbers from 2 to N.
Each of P processors will simply take about 1/P of the range of
numbers to check, and add up the primes locally.
When it's done, it will send the partial result to processor 0.
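
A minimal sketch of that plan in C (the cyclic distribution, the trial-division primality test, and the explicit sends to rank 0 are my own choices, not taken from the slides):

#include <stdio.h>
#include <mpi.h>

/* Trial-division primality test; fine for small N. */
static int is_prime(int n) {
    int d;
    if (n < 2) return 0;
    for (d = 2; d * d <= n; d++)
        if (n % d == 0) return 0;
    return 1;
}

int main(int argc, char *argv[]) {
    int rank, size, i, p, n = 1000;
    long local_sum = 0, total, partial;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each process checks about 1/P of [2, n] and sums the primes it finds. */
    for (i = 2 + rank; i <= n; i += size)
        if (is_prime(i))
            local_sum += i;

    if (rank == 0) {
        total = local_sum;
        /* Collect the partial sums from every other process. */
        for (p = 1; p < size; p++) {
            MPI_Recv(&partial, 1, MPI_LONG, p, 0, MPI_COMM_WORLD, &status);
            total += partial;
        }
        printf("Sum of primes from 2 to %d is %ld\n", n, total);
    } else {
        MPI_Send(&local_sum, 1, MPI_LONG, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}

Compile with mpicc and run with, for example, mpirun -np 4.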

2.  Deadlock avoidance when using blocking send and receive in order.

http://www.cs.ucsb.edu/~hnielsen/cs140/mpi-deadlocks.html


3.  In MPI, when does a non-blocking recv return?
          
4.  What is an MPI communicator?  What is MPI_COMM_WORLD?
            Two processes must belong to a common communicator in order to communicate. A communicator is simply a way for the user to organize processes into sub-groups. All processes can communicate in the shared communicator known as MPI_COMM_WORLD.
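
For illustration, a minimal sketch that builds sub-groups with MPI_Comm_split (splitting into even and odd ranks is just an assumed example):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int world_rank, sub_rank;
    MPI_Comm sub_comm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* Split MPI_COMM_WORLD into two sub-communicators: even ranks and odd ranks. */
    MPI_Comm_split(MPI_COMM_WORLD, world_rank % 2, world_rank, &sub_comm);
    MPI_Comm_rank(sub_comm, &sub_rank);

    printf("world rank %d has rank %d in its sub-communicator\n", world_rank, sub_rank);

    MPI_Comm_free(&sub_comm);
    MPI_Finalize();
    return 0;
}
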
5.  In MPI, what is a process rank?
          
6.  True or false:  In MPI you set the number of processes when you write the source code.
           False. The number of processes is specified at execution time (for example, with mpirun -np).
7.  Give a short piece of pseudocode that illustrates the master/slave programming model.
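
One possible answer (my own sketch, not part of the original post), written as actual C rather than pseudocode: the master hands one task to each slave and collects the results.

#include <stdio.h>
#include <mpi.h>

#define WORK_TAG   1
#define RESULT_TAG 2

int main(int argc, char *argv[]) {
    int rank, size, i;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {                     /* master */
        double task = 0.0, result;
        for (i = 1; i < size; i++)       /* hand one task to each slave */
            MPI_Send(&task, 1, MPI_DOUBLE, i, WORK_TAG, MPI_COMM_WORLD);
        for (i = 1; i < size; i++) {     /* collect the results as they arrive */
            MPI_Recv(&result, 1, MPI_DOUBLE, MPI_ANY_SOURCE, RESULT_TAG,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("got %f\n", result);
        }
    } else {                             /* slave */
        double task, result;
        MPI_Recv(&task, 1, MPI_DOUBLE, 0, WORK_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        result = task + rank;            /* placeholder for the real work */
        MPI_Send(&result, 1, MPI_DOUBLE, 0, RESULT_TAG, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}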

8.  Explain if the following MPI code segment is correct or not, and why:
Process 0 executes:
MPI_Recv(&yourdata, 1, MPI_FLOAT, 1, tag, MPI_COMM_WORLD, &status);
MPI_Send(&mydata, 1, MPI_FLOAT, 1, tag, MPI_COMM_WORLD);
Process 1 executes:
MPI_Recv(&yourdata, 1, MPI_FLOAT, 0, tag, MPI_COMM_WORLD, &status);
MPI_Send(&mydata, 1, MPI_FLOAT, 0, tag, MPI_COMM_WORLD);

Not correct: each process posts a blocking receive first and waits for a send that the other process can never reach, so both block forever and the system deadlocks.
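
One standard fix is to break the symmetry so that one process sends before it receives. A minimal hedged sketch (run with exactly two processes; MPI_Sendrecv or non-blocking calls would also work):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, tag = 0;
    float mydata = 1.0f, yourdata = 0.0f;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {            /* process 0 now sends first ... */
        MPI_Send(&mydata, 1, MPI_FLOAT, 1, tag, MPI_COMM_WORLD);
        MPI_Recv(&yourdata, 1, MPI_FLOAT, 1, tag, MPI_COMM_WORLD, &status);
    } else if (rank == 1) {     /* ... while process 1 still receives first */
        MPI_Recv(&yourdata, 1, MPI_FLOAT, 0, tag, MPI_COMM_WORLD, &status);
        MPI_Send(&mydata, 1, MPI_FLOAT, 0, tag, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}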

9.  Suppose that process 0 has variable A, and process 1 also has a variable A. Write MPI-like pseudocode to exchange these values between the processes.
          P0:
          send(P1, A)
          receive(P1, A)

          P1:
          receive(P0, B)
          send(P0, A)
          A = B
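
In real MPI code the same exchange is often written with MPI_Sendrecv, which pairs the send and the receive so neither process can deadlock. A minimal sketch (the variable names and the two-process assumption are mine):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, partner;
    float A, received;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    A = (float) rank;                 /* each process starts with its own A */
    partner = (rank == 0) ? 1 : 0;    /* run with exactly two processes */

    /* Send my A and receive the partner's A in one call; no ordering problem. */
    MPI_Sendrecv(&A, 1, MPI_FLOAT, partner, 0,
                 &received, 1, MPI_FLOAT, partner, 0,
                 MPI_COMM_WORLD, &status);
    A = received;

    printf("process %d now has A = %f\n", rank, A);
    MPI_Finalize();
    return 0;
}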


10.  Explain the purpose of each of the library calls listed.
·         MPI_Init
·         MPI_Finalize
·         MPI_Comm_rank
·         MPI_Comm_size
·         MPI_Send
·         MPI_Recv
·         MPI_Barrier
·         MPI_Bcast
·         MPI_Scatter
·         MPI_Gather
·         MPI_Reduce
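
A small toy program (my own illustration, not an exam answer) that exercises most of these calls in one place:

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, size, i, n = 0;
    int *data = NULL;
    int mine, doubled, sum;

    MPI_Init(&argc, &argv);                      /* start up MPI */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);        /* which process am I? */
    MPI_Comm_size(MPI_COMM_WORLD, &size);        /* how many processes? */

    if (rank == 0) {
        n = size;                                /* one value per process */
        data = malloc(n * sizeof(int));
        for (i = 0; i < n; i++) data[i] = i + 1;
    }

    MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);            /* everyone learns n */
    MPI_Scatter(data, 1, MPI_INT, &mine, 1, MPI_INT,
                0, MPI_COMM_WORLD);                           /* one value each */

    doubled = 2 * mine;                                       /* local "work" */
    MPI_Barrier(MPI_COMM_WORLD);                              /* wait for everyone */

    /* Point-to-point: non-root processes report to rank 0 with Send/Recv. */
    if (rank != 0) {
        MPI_Send(&doubled, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    } else {
        int tmp;
        for (i = 1; i < size; i++)
            MPI_Recv(&tmp, 1, MPI_INT, i, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    MPI_Gather(&doubled, 1, MPI_INT, data, 1, MPI_INT, 0, MPI_COMM_WORLD);
    MPI_Reduce(&doubled, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        printf("sum of doubled values = %d\n", sum);
        free(data);
    }

    MPI_Finalize();                              /* shut down MPI */
    return 0;
}
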
11.  What is an MPI derived datatype and when would you use one?  Give an example.
                Derived datatypes are datatypes built up from the basic MPI datatypes. They let you describe non-contiguous or mixed-type data (for example, a column of a row-major matrix, or the fields of a C struct) so it can be sent or received in a single message.
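
For instance, a hedged sketch that uses MPI_Type_vector to send one column of a row-major matrix in a single message (the matrix size and variable names are assumptions; run with at least two processes):

#include <stdio.h>
#include <mpi.h>

#define ROWS 4
#define COLS 5

int main(int argc, char *argv[]) {
    int rank, i;
    float matrix[ROWS][COLS];
    float column[ROWS];
    MPI_Datatype column_type;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* ROWS blocks of 1 float, COLS floats apart: one column of a row-major matrix. */
    MPI_Type_vector(ROWS, 1, COLS, MPI_FLOAT, &column_type);
    MPI_Type_commit(&column_type);

    if (rank == 0) {
        for (i = 0; i < ROWS * COLS; i++)
            matrix[i / COLS][i % COLS] = (float) i;
        /* Send column 2 as a single message instead of ROWS separate sends. */
        MPI_Send(&matrix[0][2], 1, column_type, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(column, ROWS, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        for (i = 0; i < ROWS; i++)
            printf("column[%d] = %f\n", i, column[i]);
    }

    MPI_Type_free(&column_type);
    MPI_Finalize();
    return 0;
}
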
12.  In MPI, when does a blocking recv return?
               It blocks until a matching message has arrived and been copied into the receive buffer, and only then returns.
13.  True or false:  You can write a program using MPI that will run across all of the cores of your multicore computer in parallel.  Also, if this is possible, indicate if you think this is a good way to write the program.  You must justify your answer to receive credit.
14.  Discuss marshalling in MPI.
               See: http://books.google.com/books?id=LLdekoUxmr0C&pg=PA86&lpg=PA86&dq=MPI+marshalling&source=bl&ots=aLn5ivDP2i&sig=ElkL0CwR55hVES-tQJjoAKBIIaI&hl=en&ei=U3nITNq7NYrAsAPqytW9DQ&sa=X&oi=book_result&ct=result&resnum=10&ved=0CE0Q6AEwCQ#v=onepage&q=MPI%20marshalling&f=false
15.  When does a blocking send return?
           A blocking MPI_Send returns as soon as the send buffer can safely be reused. Most implementations buffer small messages, so the calling process can continue even if the destination has not yet posted a matching receive; for large messages the call may block until the receive is posted. This buffering is implementation-dependent, so correct programs should not rely on it.
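
Because that buffering is implementation-dependent, code that must not stall on a send can use a non-blocking send instead; a small hedged sketch (the function and its arguments are my own illustration):

#include <mpi.h>

/* Sketch of a send that does not depend on system buffering: the caller
   continues immediately and only waits when the buffer must be reused.
   dest, tag and mydata are assumed to be set up by the surrounding code. */
void send_without_stalling(float *mydata, int dest, int tag)
{
    MPI_Request request;

    MPI_Isend(mydata, 1, MPI_FLOAT, dest, tag, MPI_COMM_WORLD, &request);
    /* ... do other work that does not modify *mydata ... */
    MPI_Wait(&request, MPI_STATUS_IGNORE);   /* now *mydata may be reused */
}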

16. Programming question 1: http://www.cs.usfca.edu/~peter/cs220/mt1_old_key.pdf#page=6
17. Programming question 2: http://www.cs.usfca.edu/~peter/cs220/mt1_key.pdf#page=4