This kind of parallelism is based on the need to process large amounts of data in the same way. Well-known examples are found in the area of image processing (raytracing, frame analysis), the queens problem discussed in a later chapter, and database transaction systems. The common principle is that the same program code processes different portions of the data on several independently working nodes. The corresponding 'farm' software architecture usually consists of one 'master' process, which distributes the data, and several 'worker' processes that do the actual work.
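A minimal sketch of such a farm, using Python's multiprocessing module as a stand-in for the message-passing environment (the function names `work` and `farm` are illustrative, not from the original):

```python
from multiprocessing import Pool

def work(chunk):
    # Worker: apply the same code to its portion of the data
    # (a stand-in for, e.g., raytracing one image tile).
    return sum(x * x for x in chunk)

def farm(data, n_workers=4, chunk_size=100):
    # Master: split the data into chunks and distribute them
    # to the workers, then collect the partial results.
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with Pool(n_workers) as pool:
        results = pool.map(work, chunks)
    return sum(results)

if __name__ == "__main__":
    print(farm(list(range(1000))))
```

Note that the master only moves data and merges results; all computation happens in identical workers, which is what makes this pattern easy to scale.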
If data parallelism is applicable to your sequential application, it will definitely be the best choice: adapting the application requires little work, deadlocks are rare, and system performance is high. Try to keep the communication/computation ratio at about 1:10 (split the application into chunks of roughly 0.1 to 10 seconds of computation each) and you will reach processor loads beyond 90%, even on large networks of up to 1000 nodes (assuming your application can be split into at least 1000 chunks). If the computation times of the subproblems vary over a wide range, try splitting the application into smaller parts so the workers stay evenly loaded.
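The 1:10 rule of thumb follows from a simple model in which each worker alternates between communicating and computing; the model below (my own sketch, ignoring master-side bottlenecks and load imbalance) shows why that ratio yields roughly 90% processor load:

```python
def efficiency(t_comm, t_comp):
    # Simple model: the fraction of time a worker spends computing,
    # assuming communication and computation do not overlap.
    return t_comp / (t_comm + t_comp)

# A 1:10 communication/computation ratio (e.g. 0.1 s of communication
# per 1.0 s of computation) gives about 91% processor load.
print(round(efficiency(0.1, 1.0), 3))  # 0.909
```

Shrinking the chunks below this range increases the relative communication cost; enlarging them helps efficiency but limits how many nodes can be kept busy.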