Redesign oozie internal queue #561
Comments
A few points:
My take is that we should fix unique command queuing; that will solve most, if not all, of the issues.
Queue uniqueness is already implemented, and it certainly reduces how often the problem occurs. However, as part of concurrency control we re-queue the same command with a 500ms delay at the head of the queue. In a highly loaded system the same command can be re-queued over and over, causing a livelock-like situation. A similar situation has caused serious trouble in production.
Well, then the solution would be to use a separate queue that exclusively services coordinator input checks. In that case the thread pool would be the only throttling mechanism and no concurrency re-queueing would happen.
So there would be two queues: one for coordinator input checks (queue 1) and another for the rest of the commands (queue 2).
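To make that concrete, here is a minimal sketch of the two-queue layout with a dedicated thread pool per queue. The class name and the pool/queue sizes are made up for illustration and are not the actual Oozie CallableQueueService; in practice the sizes would be site configuration values.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: two independent queues/pools so coordinator input checks
// neither starve nor are starved by the rest of the commands.
public class TwoQueueSketch {

    // Queue 1: coordinator input checks only. The pool size is the only
    // throttling, so no concurrency re-queueing is needed.
    private final ThreadPoolExecutor inputCheckPool = new ThreadPoolExecutor(
            10, 10, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>(5000));

    // Queue 2: all other commands.
    private final ThreadPoolExecutor commandPool = new ThreadPoolExecutor(
            30, 30, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>(10000));

    public void queueInputCheck(Runnable inputCheck) {
        inputCheckPool.execute(inputCheck);
    }

    public void queueCommand(Runnable command) {
        commandPool.execute(command);
    }
}
```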
Can we discuss the other approach too, namely keeping the queue in the DB? If we want to implement a hot-hot or load-balancing system (a possible future direction), I think the DB approach will help with that.
It seems to me that the re-queueing logic is not correct: it should not alter the order, but simply ignore the duplicate queueing and leave the original entry in its existing place in the queue. A default thread pool size of 120 is a bit too high for a default value; that should be a site configuration value. The optimum size of the thread pool is determined by the load on your system and the hardware/OS resources you have. IMO, a database would be overkill. I would not replace the existing in-memory solution with a DB solution; rather, I'd leverage the fact that services are pluggable and have a DB solution as well. Still, I'd suggest you test your current load with a DB solution. Regarding the comment that a DB approach would be good for a hot-hot solution: load distribution for an in-memory solution could easily be handled by processing only IDs that satisfy JOBID MOD ${LIVE_OOZIE_INSTANCES} == ${OOZIE_INSTANCE_ID}; the number of live instances and the instance ID would be dynamically generated/stored in ZooKeeper (which would also be needed to provide distributed lock support).
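To illustrate the modulo-based distribution, here is a rough sketch; the ZooKeeper pieces are left out, and the class, its fields, and the job-id parsing are assumptions made for illustration only.

```java
// Hypothetical sketch of JOBID MOD ${LIVE_OOZIE_INSTANCES} == ${OOZIE_INSTANCE_ID}:
// each Oozie instance only processes the jobs it "owns". The live-instance count
// and this instance's id are assumed to be maintained in ZooKeeper (not shown).
public class JobOwnership {

    private final int liveOozieInstances; // e.g. size of a ZooKeeper group membership
    private final int oozieInstanceId;    // e.g. derived from a sequential ZooKeeper node

    public JobOwnership(int liveOozieInstances, int oozieInstanceId) {
        this.liveOozieInstances = liveOozieInstances;
        this.oozieInstanceId = oozieInstanceId;
    }

    /** Returns true if this instance should process the given job. */
    public boolean ownsJob(String jobId) {
        // Oozie job ids start with a numeric sequence, e.g. "0000123-130515183802537-oozie-joe-W";
        // that leading number is stable and spreads jobs evenly across instances.
        long sequence = Long.parseLong(jobId.substring(0, jobId.indexOf('-')));
        return sequence % liveOozieInstances == oozieInstanceId;
    }
}
```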
How could we ensure that the re-queuing will not disturb the ordering?
You'd have a UniqueQueue implementation that keeps a set of element IDs. The add/offer methods of the UniqueQueue first check whether the element is in the ID set; if it is, the add/offer is a NOP, and if it is not, they add the element to both the queue and the ID set. The poll/take/remove methods have to remove the element from the ID set as well. All of this has to be done with the proper level of synchronization/locking to avoid race conditions.
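A minimal sketch of that idea, assuming queued elements expose a stable id; the class and the getEntityKey name are hypothetical and not the actual Oozie code.

```java
import java.util.HashSet;
import java.util.LinkedList;
import java.util.Queue;
import java.util.Set;

// Hypothetical UniqueQueue sketch: offer becomes a NOP when an element with the
// same id is already queued, so a duplicate enqueue never changes the position
// of the original element. Ids are released again on poll.
public class UniqueQueue<E extends UniqueQueue.Identifiable> {

    /** Queued elements must expose a stable id used for de-duplication. */
    public interface Identifiable {
        String getEntityKey();
    }

    private final Queue<E> queue = new LinkedList<>();
    private final Set<String> queuedIds = new HashSet<>();

    /** Enqueues the element unless one with the same id is already queued (then NOP). */
    public synchronized boolean offer(E element) {
        if (!queuedIds.add(element.getEntityKey())) {
            return false; // duplicate: ignore, the original keeps its place in the queue
        }
        return queue.offer(element);
    }

    /** Removes the head of the queue (or returns null if empty) and releases its id. */
    public synchronized E poll() {
        E element = queue.poll();
        if (element != null) {
            queuedIds.remove(element.getEntityKey());
        }
        return element;
    }

    public synchronized int size() {
        return queue.size();
    }
}
```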
We have had a lot of issues related to the Oozie internal queue, including queue overflow as well as re-queuing the same heavily used commands to avoid starvation. There are other situations too. These problems become very obvious under very high load.
I would like to open up the discussion to find a better architectural design for the longer term, considering very high-load situations.
The following proposals are meant to initiate the discussion; they range from a complete overhaul to adjusting the current design:
Implement the queue in the DB:
Pros: Persistence. Useful in a hot-hot or load-balancing setup. A single source of truth. Different levels of ordering can be applied as needed through SQL. No need to worry about queue size. No need to recreate the queue on every restart, so the recovery service might be less busy.
Cons: Extra DB access overhead.
A middle approach could be to keep an in-memory cache with strict conditions; the details could be discussed later. (A rough sketch of a DB-backed queue is included below, after the proposals.)
Re-queuing the same commands (which is used for throttling) should be redesigned. In this case, make sure the re-queuing puts the command back in its original place, not at the end of the queue. I know this breaks the queue semantics, so we might need to use a different data structure.
Currently, queuing the same command at the end creates a starvation/livelock-like situation.
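For proposal 1, here is a rough sketch of what a DB-backed queue could look like. The DbCommandQueue class, the COMMAND_QUEUE table, and its columns are invented purely for illustration; a real design would also need row claiming/locking for multiple consumers and cleanup of completed rows.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

// Hypothetical DB-backed queue sketch. A unique key on command_key would give
// queue uniqueness for free, and ordering is expressed directly in SQL.
public class DbCommandQueue {

    private final Connection conn;

    public DbCommandQueue(Connection conn) {
        this.conn = conn;
    }

    /** Enqueue a command; a unique constraint on command_key rejects duplicates. */
    public void enqueue(String commandKey, String commandType, int priority) throws SQLException {
        String sql = "INSERT INTO COMMAND_QUEUE (command_key, command_type, priority, created_time) "
                   + "VALUES (?, ?, ?, CURRENT_TIMESTAMP)";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, commandKey);
            ps.setString(2, commandType);
            ps.setInt(3, priority);
            ps.executeUpdate(); // duplicate key -> SQLException the caller can ignore
        }
    }

    /** Fetch the next batch of command keys to execute; ordering is a plain ORDER BY. */
    public List<String> nextBatch(int batchSize) throws SQLException {
        String sql = "SELECT command_key FROM COMMAND_QUEUE "
                   + "ORDER BY priority DESC, created_time ASC LIMIT ?"; // LIMIT syntax varies by DB
        List<String> keys = new ArrayList<>();
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setInt(1, batchSize);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    keys.add(rs.getString(1));
                }
            }
        }
        return keys;
    }
}
```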
Comments?
Regards,
Mohammad