AIM :- To build a multithreaded application using blocking I/O in Java to download a file. Please don't suggest non-blocking I/O; I have been told to use blocking I/O.
Issue :- My code works fine when a client machine downloads a file hosted on a server. The issue is that my server seeds the file using multiple threads. In every case the received file has exactly the right length, but it appears corrupted. For example, when I download a PDF, every page is only partially written (each page contains a fragment of the original content). When I download a song, it plays to the end but has bursts of noise throughout.
Question 1 :- How can I make the download produce a file that opens/plays/reads properly? What multithreading-related issue do I need to resolve here?
My Code :-
Server Multi-threading code ::::
import java.io.BufferedInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStream;
import java.rmi.RemoteException;
import java.rmi.server.UnicastRemoteObject;
public class FileServer extends UnicastRemoteObject implements FileServerInitialise{
private String file="";
public FileServer() throws RemoteException{
super();
}
public void setFile(String f){
file=f;
//System.out.println("Length in setFile = "+f);
}
@Override
public boolean login(FileClientInitialise fci) throws RemoteException {
try {
InputStream is = new BufferedInputStream(new FileInputStream(file));
long len = new File(file).length();
System.out.println("Length of File = "+len);
WorkerThread wt1=new WorkerThread(0,len/2,fci,is,file);
wt1.setName("Worker Thread 1");
WorkerThread wt2=new WorkerThread(len/2+1,len,fci,is,file);
wt2.setName("Worker Thread 2");
//WorkerThread wt3=new WorkerThread(2*len/4+1,3*len/4,fci,is,file);
//wt3.setName("Worker Thread 3");
//WorkerThread wt4=new WorkerThread(3*len/4+1,len,fci,is,file);
//wt4.setName("Worker Thread 4");
wt1.start();
wt2.start();
//wt3.start();
//wt4.start();
wt1.join();
wt2.join();
//wt3.join();
//wt4.join();
return true;
}
catch (InterruptedException iex) {
System.err.println(iex.getMessage());
return false;
}
catch (IOException ioex) {
System.err.println(ioex.getMessage());
return false;
}
}
}
Client Downloading code ::::
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.rmi.RemoteException;
public class FileClient implements FileClientInitialise {
public static int count = 1;
public static File f;
public static FileOutputStream fos;
public static RandomAccessFile raf;
public static long pointer;
public FileClient (String filename) throws RemoteException, IOException {
super();
FileClient.f= new File(filename);
FileClient.fos = new FileOutputStream(f, true);
//FileClient.raf= new RandomAccessFile(f,"rwd");
FileClient.pointer=0;
}
@Override
public boolean sendData(String filename, byte[] data, int len, String threadName) throws RemoteException{
try{
FileClient.fos.write(data,0,len);
FileClient.fos.flush();
//FileClient.raf.seek(FileClient.pointer);
//FileClient.raf.write(data,0, len);
//FileClient.pointer=raf.getFilePointer();
System.out.println("Done writing data...");
//fos.close();
return true;
}catch(Exception e){
System.err.println(e.getMessage());
return false;
}
}
}
Question 2 :- Also, should I use RandomAccessFile to achieve the same? Would it be better? I tried it and it runs very slowly (almost 10 times slower). And if I were to use RandomAccessFile, should I create a separate object for each thread? How should I use it, if it is advised in this case?
If code isn't possible, please give me a technical description; code isn't necessary in the answer.
As others have already mentioned in the comments, sharing one input stream between multiple threads and letting them write concurrently is a poor approach, and it is what corrupts the file.
The approach I took in my own multithreaded distributed file-server project was to keep the server multithreaded but make the threads run sequentially: each thread accesses the input stream in a synchronized, one-at-a-time manner. This did not corrupt the file on the client side, and surprisingly it was also effective performance-wise.
NOTE, before taking any action on this answer :-
I benchmarked the code at the time to make sure that what I state here really is optimal. I believe it was helped by having 4 logical processors (cores), which reduced the overhead of running multiple threads (even though they access the stream one at a time).
People may argue that this is a bad or ugly approach, but I found it very helpful for file-server seeding. My roughly 40 MB PDF file on a
Linux server [Intel(R) Core(TM) 2 Duo CPU E4600 @ 2.40GHz processor, CPU(s): 2]
was copied to the file-client in about 33-34 seconds on average over 4-5 test runs. When I increased the thread count to 8-10 threads, performance dropped to about 36-38 seconds. With a single-threaded server, the same file took 45-50 seconds. So performance improved as threads were added up to a point, and was best in the range of 4-6 threads. Despite the overhead of maintaining several threads, where one might expect a single thread to win, the 4-6 thread configuration was, surprisingly, the optimum.
So my suggestion is to have 4-6 threads access the input stream sequentially. In my benchmarks this was optimal, despite the usual concerns about multithreading overhead.
For your code, I'd suggest the following change :-
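A minimal, self-contained sketch of the idea follows. All threads share one sequential input stream and a single lock; each worker holds the lock while sending its portion, so bytes leave the stream (and arrive at the client) strictly in file order no matter which thread the scheduler picks first. The class and method names here are illustrative, and a local ByteArrayOutputStream stands in for your RMI fci.sendData call:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class SequentialSeeder {
    // One lock shared by all workers: only one thread touches the
    // stream at a time, so chunks are read and sent in file order.
    private static final Object STREAM_LOCK = new Object();

    static class Worker extends Thread {
        private final InputStream in;
        private final OutputStream out; // stand-in for fci.sendData(...)
        private final long bytesToSend;

        Worker(InputStream in, OutputStream out, long bytesToSend) {
            this.in = in;
            this.out = out;
            this.bytesToSend = bytesToSend;
        }

        @Override
        public void run() {
            byte[] buf = new byte[8192];
            long remaining = bytesToSend;
            // Hold the lock for this thread's whole portion, so the
            // shared stream is consumed strictly sequentially.
            synchronized (STREAM_LOCK) {
                try {
                    while (remaining > 0) {
                        int n = in.read(buf, 0, (int) Math.min(buf.length, remaining));
                        if (n == -1) break; // end of stream
                        // In the RMI version this would be:
                        // fci.sendData(file, buf, n, getName());
                        out.write(buf, 0, n);
                        remaining -= n;
                    }
                } catch (IOException e) {
                    System.err.println(e.getMessage());
                }
            }
        }
    }

    // Splits the data into one portion per thread; the last thread
    // picks up the remainder bytes.
    public static byte[] send(byte[] data, int threads) throws InterruptedException {
        InputStream in = new ByteArrayInputStream(data);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        long chunk = data.length / threads;
        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            long len = (i == threads - 1) ? data.length - chunk * i : chunk;
            workers[i] = new Worker(in, out, len);
        }
        for (Thread t : workers) t.start();
        for (Thread t : workers) t.join();
        return out.toByteArray();
    }
}
```

Note that because the stream position itself carries the ordering, the client can keep appending with a plain FileOutputStream; no per-thread RandomAccessFile and no seek bookkeeping are needed with this scheme.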