PoolTimeoutException when renewing connection #43
I have seen one more case with the same PoolTimeoutException. We started the first node of the scheduler application, and after some time we tried to start another node. Both nodes started, but on one of the nodes we got a PoolTimeoutException, and at that time some connections on the Cassandra server were in the "TIME_WAIT" state. What could be the reason? Any help is appreciated, thanks.
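For background on the error itself: Astyanax raises "Timed out waiting for connection" when a caller cannot borrow a connection from a host's pool within the configured wait time, which can happen when every pooled connection is busy or stuck. The sketch below shows the pool settings involved, assuming you build the Astyanax context directly; the scheduler library configures its own context internally, so the cluster, keyspace, and pool names here are only illustrative.

import com.netflix.astyanax.AstyanaxContext;
import com.netflix.astyanax.Keyspace;
import com.netflix.astyanax.connectionpool.NodeDiscoveryType;
import com.netflix.astyanax.connectionpool.impl.ConnectionPoolConfigurationImpl;
import com.netflix.astyanax.connectionpool.impl.CountingConnectionPoolMonitor;
import com.netflix.astyanax.impl.AstyanaxConfigurationImpl;
import com.netflix.astyanax.thrift.ThriftFamilyFactory;

public class PoolConfigSketch {
    public static Keyspace connect() {
        // A borrow waits up to maxTimeoutWhenExhausted ms when all
        // maxConnsPerHost connections to a host are already checked out.
        AstyanaxContext<Keyspace> context = new AstyanaxContext.Builder()
                .forCluster("SchedulerCluster")              // illustrative name
                .forKeyspace("scheduler")                    // illustrative name
                .withAstyanaxConfiguration(new AstyanaxConfigurationImpl()
                        .setDiscoveryType(NodeDiscoveryType.RING_DESCRIBE))
                .withConnectionPoolConfiguration(new ConnectionPoolConfigurationImpl("SchedulerPool")
                        .setPort(9160)                       // Thrift port seen in netstat
                        .setSeeds("127.0.0.1:9160")
                        .setMaxConnsPerHost(10)              // connections per Cassandra host
                        .setConnectTimeout(2000)             // ms to open a Thrift socket
                        .setSocketTimeout(10000)             // ms per Thrift operation
                        .setMaxTimeoutWhenExhausted(60000))  // matches latency=60001 in the trace
                .withConnectionPoolMonitor(new CountingConnectionPoolMonitor())
                .buildKeyspace(ThriftFamilyFactory.getInstance());
        context.start();
        return context.getClient(); // getEntity() on older Astyanax versions
    }
}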
Hi, here is what I found while debugging this case.
We had one node of the scheduler app running, and then planned to add an additional node and test the scheduler functions.
============== Log Started ====================================
============== Log End ====================================
root@csd12:~# netstat -an | grep 9160
20.300.1.1 -> node 1, with the scheduler application only. Node 2's connections are in the ESTABLISHED state, node 1's connections are under TIME_WAIT, and node 1 keeps trying to get a new connection, which always returns PoolTimeoutException. @danwenzel any help on this problem, thanks.
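If the pool on node 1 really is exhausted, the Astyanax connection pool monitor should show borrowed connections that are never returned. A rough sketch of checking that, assuming a CountingConnectionPoolMonitor was registered on the context as in the earlier sketch (the counter getters are the ones I recall from the Astyanax ConnectionPoolMonitor interface; verify against the version the scheduler pulls in):

import com.netflix.astyanax.connectionpool.impl.CountingConnectionPoolMonitor;

public class PoolStatsSketch {
    // Assumes the same monitor instance that was passed to
    // AstyanaxContext.Builder.withConnectionPoolMonitor(...).
    public static void logPoolStats(CountingConnectionPoolMonitor monitor) {
        long borrowed = monitor.getConnectionBorrowedCount();
        long returned = monitor.getConnectionReturnedCount();
        // Borrowed minus returned approximates connections currently checked out;
        // if that number sits at maxConnsPerHost, every new request waits and
        // eventually fails with "Timed out waiting for connection".
        System.out.println("borrowed=" + borrowed
                + " returned=" + returned
                + " in-flight=" + (borrowed - returned)
                + " poolExhaustedTimeouts=" + monitor.getPoolExhaustedTimeoutCount());
    }
}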
@DWvanGeest @davidrusu any help with the above-mentioned problem? We are in staging and preparing to release this application, so any help is most welcome. Thanks.
Hi PagerDuty team, I have enclosed the full log. @DWvanGeest @davidrusu - whenever rebalancing happens I get the following exceptions, and after that the scheduler application stops working. Please look at the stack trace, thanks.
The scheduler starts and works properly; the problem appears whenever a Cassandra connection on port 9160 is renewed. Where could the problem be, in Cassandra or in the scheduler library?
The following exception is thrown:
java.util.concurrent.ExecutionException: com.netflix.astyanax.connectionpool.exceptions.PoolTimeoutException: PoolTimeoutException: [host=127.0.0.1(127.0.0.1):9160, latency=60001(60001), attempts=2]Timed out waiting for connection
Caused by: com.netflix.astyanax.connectionpool.exceptions.PoolTimeoutException: PoolTimeoutException: [host=127.0.0.1(127.0.0.1):9160, latency=60001(60001), attempts=2]Timed out waiting for connection
thanks
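The "Timed out waiting for connection" in the trace above is raised on the client side, when no connection can be borrowed from the Astyanax pool within the configured wait time. One way to narrow this down between Cassandra and the scheduler library is to borrow a connection on 9160 with plain Astyanax, outside the scheduler. A minimal sketch, assuming a Keyspace built as in the earlier configuration example:

import com.netflix.astyanax.Keyspace;
import com.netflix.astyanax.connectionpool.exceptions.ConnectionException;

public class ThriftProbeSketch {
    // If this simple metadata call also times out, the problem is between this
    // host and Cassandra's Thrift port (9160); if it succeeds while the
    // scheduler still fails, the issue is more likely in how the scheduler's
    // pool is sized or how it renews connections.
    public static void probe(Keyspace keyspace) throws ConnectionException {
        System.out.println("Connected to keyspace: "
                + keyspace.describeKeyspace().getName());
    }
}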