hadoop - How to run multiple map-reduce jobs in parallel


I have four different map-reduce jobs operating on the same data set, and a Hadoop cluster for storage and processing of that data. My question is: how can I run these 4 jobs in parallel, rather than sequentially, from a Linux shell? I have read about workflow-management systems such as Apache Oozie. In my situation, do I need to use Oozie, or is there an easier way? Any help is appreciated. Thanks!
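Since the four jobs are independent, one simple approach that avoids a workflow manager is to background each job from a shell script and `wait` for all of them. Below is a minimal sketch of that pattern; the `run_job` function and its `echo` body are placeholders standing in for a real `hadoop jar <your-jar> <your-main-class> <input> <output>` invocation, and the jar/class/path names are assumptions you would replace with your own:

```shell
#!/bin/sh
# Sketch: run four independent MapReduce jobs in parallel by backgrounding
# each one and then waiting for all of them to finish.

run_job() {
    # Placeholder for a real submission, e.g.:
    #   hadoop jar myjob.jar com.example.MyJob "$1" "$2"
    # (jar name, class name, and paths here are hypothetical)
    echo "job on input=$1 output=$2"
}

# Each `&` detaches the job so the next line runs immediately.
run_job /data /out1 &
run_job /data /out2 &
run_job /data /out3 &
run_job /data /out4 &

wait    # blocks until every backgrounded job has exited
echo "all four jobs finished"
```

Note that each `hadoop jar` client process stays alive until its job completes, so `wait` gives you a natural synchronization point before any downstream step. Whether the four jobs actually execute concurrently on the cluster then depends on the scheduler and available slots, not on the shell. Oozie (with a fork/join workflow) becomes worthwhile when you need dependencies, retries, or scheduled runs rather than a one-off parallel launch.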
