Data loss on concurrent file write in Camel


I am using Camel for file operations. The system runs in a cluster environment.

Let's say we have 4 instances: instance A, instance B, instance C, and instance D.

Folder structure:

Input folder: c:/app/input

Output folder: c:/app/output

All 4 instances point to the same input folder location. Per the business requirement, 8 files are placed in the input folder, and the output is a single consolidated file. Camel is losing data when the instances concurrently write to the output file.

Route:

from("file://c:/app/input")
    .setHeader(Exchange.FILE_NAME, simple("output.txt"))
    .to("file://c:/app/output?fileExist=Append")
    .end();

Kindly help me resolve this issue. Is there such a thing as a write lock in Camel to avoid concurrent file writers? Thanks in advance.

You can use the doneFile option of the file component; see http://camel.apache.org/file2.html for more information.
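As a minimal sketch of that option (assuming the upstream system that drops the 8 input files can also write a "<name>.done" marker once each file is complete; the class name ConsolidateRoute is illustrative), the consumer only picks up a file after its matching done file exists:

import org.apache.camel.Exchange;
import org.apache.camel.builder.RouteBuilder;

public class ConsolidateRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // Only consume a file once a matching "<name>.done" marker appears,
        // so half-written input files are never picked up.
        from("file://c:/app/input?doneFileName=${file:name}.done")
            .setHeader(Exchange.FILE_NAME, simple("output.txt"))
            .to("file://c:/app/output?fileExist=Append");
    }
}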

Avoid reading files currently being written by another application

beware jdk file io api bit limited in detecting whether application writing/copying file. , implementation can different depending on os platform well. lead camel thinks file not locked process , start consuming it. therefore have own investigation suites environment. camel provides different readlock options , donefilename option can use. see section consuming files folders others drop files directly.
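For example, a minimal sketch using the readLock option on the consumer (the class name, check interval, and timeout values are illustrative assumptions and should be tuned for your environment):

import org.apache.camel.Exchange;
import org.apache.camel.builder.RouteBuilder;

public class LockedConsumerRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // readLock=changed waits until the file size stops changing before
        // consuming it; the interval and timeout values are illustrative only.
        from("file://c:/app/input"
                + "?readLock=changed"
                + "&readLockCheckInterval=1000"
                + "&readLockTimeout=10000")
            .setHeader(Exchange.FILE_NAME, simple("output.txt"))
            .to("file://c:/app/output?fileExist=Append");
    }
}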

