The analysis server has HTTP API endpoints to fetch the contents of GTFS feeds, which are stored and retrieved with gtfs-lib.
At one point on a long-running Analysis server we saw the following failure, which brought down the whole Analysis JVM:
16:22:17.871 [qtp1685549173-5086] INFO com.conveyal.gtfs.BaseGTFSCache - Processed GTFS was found cached locally
OpenJDK 64-Bit Server VM warning: Attempt to deallocate stack guard pages failed.
OpenJDK 64-Bit Server VM warning: Attempt to deallocate stack guard pages failed.
OpenJDK 64-Bit Server VM warning: Attempt to deallocate stack guard pages failed.
OpenJDK 64-Bit Server VM warning: Attempt to deallocate stack guard pages failed.
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00007f6d3867d000, 12288, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 12288 bytes for committing reserved memory.
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00007f7574dac000, 12288, 0) failed; error='Cannot allocate memory' (errno=12)
[thread 140142449444608 also had an error]
# An error report file with more information is saved as:
# /home/ec2-user/hs_err_pid30661.log
16:22:42.725 [qtp1685549173-5086] ERROR com.conveyal.gtfs.GTFSFeed - Could not construct db from file.
java.io.IOError: java.io.IOException: Map failed
at org.mapdb.Volume$MappedFileVol.makeNewBuffer(Volume.java:632) ~[analysis-v5.3.0-be9ee30.jar:1.1]
at org.mapdb.Volume$MappedFileVol.<init>(Volume.java:537) ~[analysis-v5.3.0-be9ee30.jar:1.1]
at org.mapdb.Volume.volumeForFile(Volume.java:202) ~[analysis-v5.3.0-be9ee30.jar:1.1]
at org.mapdb.Volume$1.createPhysVolume(Volume.java:281) ~[analysis-v5.3.0-be9ee30.jar:1.1]
at org.mapdb.StoreDirect.<init>(StoreDirect.java:203) ~[analysis-v5.3.0-be9ee30.jar:1.1]
at org.mapdb.DBMaker.extendStoreDirect(DBMaker.java:971) ~[analysis-v5.3.0-be9ee30.jar:1.1]
at org.mapdb.DBMaker.makeEngine(DBMaker.java:758) ~[analysis-v5.3.0-be9ee30.jar:1.1]
at org.mapdb.DBMaker.make(DBMaker.java:701) ~[analysis-v5.3.0-be9ee30.jar:1.1]
at com.conveyal.gtfs.GTFSFeed.constructDB(GTFSFeed.java:905) [analysis-v5.3.0-be9ee30.jar:1.1]
at com.conveyal.gtfs.GTFSFeed.<init>(GTFSFeed.java:883) [analysis-v5.3.0-be9ee30.jar:1.1]
at com.conveyal.gtfs.BaseGTFSCache.retrieveAndProcessFeed(BaseGTFSCache.java:211) [analysis-v5.3.0-be9ee30.jar:1.1]
at com.conveyal.gtfs.BaseGTFSCache.access$000(BaseGTFSCache.java:40) [analysis-v5.3.0-be9ee30.jar:1.1]
at com.conveyal.gtfs.BaseGTFSCache$1.load(BaseGTFSCache.java:97) [analysis-v5.3.0-be9ee30.jar:1.1]
at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3524) [analysis-v5.3.0-be9ee30.jar:1.1]
at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2250) [analysis-v5.3.0-be9ee30.jar:1.1]
at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2133) [analysis-v5.3.0-be9ee30.jar:1.1]
at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2046) [analysis-v5.3.0-be9ee30.jar:1.1]
at com.google.common.cache.LocalCache.get(LocalCache.java:3963) [analysis-v5.3.0-be9ee30.jar:1.1]
at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3967) [analysis-v5.3.0-be9ee30.jar:1.1]
at com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4952) [analysis-v5.3.0-be9ee30.jar:1.1]
at com.conveyal.gtfs.BaseGTFSCache.get(BaseGTFSCache.java:182) [analysis-v5.3.0-be9ee30.jar:1.1]
at com.conveyal.gtfs.api.ApiMain.getFeedSource(ApiMain.java:75) [analysis-v5.3.0-be9ee30.jar:1.1]
at com.conveyal.taui.controllers.GraphQLController.lambda$fetchFeeds$2(GraphQLController.java:167) [analysis-v5.3.0-be9ee30.jar:1.1]
at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) ~[na:1.8.0_171]
at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382) ~[na:1.8.0_171]
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) ~[na:1.8.0_171]
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) ~[na:1.8.0_171]
at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708) ~[na:1.8.0_171]
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[na:1.8.0_171]
at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499) ~[na:1.8.0_171]
at com.conveyal.taui.controllers.GraphQLController.fetchFeeds(GraphQLController.java:188) [analysis-v5.3.0-be9ee30.jar:1.1]
at graphql.execution.ExecutionStrategy.resolveField(ExecutionStrategy.java:41) ~[analysis-v5.3.0-be9ee30.jar:1.1]
at graphql.execution.SimpleExecutionStrategy.execute(SimpleExecutionStrategy.java:18) ~[analysis-v5.3.0-be9ee30.jar:1.1]
at graphql.execution.ExecutionStrategy.completeValue(ExecutionStrategy.java:89) ~[analysis-v5.3.0-be9ee30.jar:1.1]
at graphql.execution.ExecutionStrategy.completeValueForList(ExecutionStrategy.java:128) ~[analysis-v5.3.0-be9ee30.jar:1.1]
at graphql.execution.ExecutionStrategy.completeValueForList(ExecutionStrategy.java:97) ~[analysis-v5.3.0-be9ee30.jar:1.1]
at graphql.execution.ExecutionStrategy.completeValue(ExecutionStrategy.java:62) ~[analysis-v5.3.0-be9ee30.jar:1.1]
at graphql.execution.ExecutionStrategy.resolveField(ExecutionStrategy.java:47) ~[analysis-v5.3.0-be9ee30.jar:1.1]
at graphql.execution.SimpleExecutionStrategy.execute(SimpleExecutionStrategy.java:18) ~[analysis-v5.3.0-be9ee30.jar:1.1]
at graphql.execution.Execution.executeOperation(Execution.java:60) ~[analysis-v5.3.0-be9ee30.jar:1.1]
at graphql.execution.Execution.execute(Execution.java:33) ~[analysis-v5.3.0-be9ee30.jar:1.1]
at graphql.GraphQL.execute(GraphQL.java:78) ~[analysis-v5.3.0-be9ee30.jar:1.1]
at com.conveyal.taui.controllers.GraphQLController.handleQuery(GraphQLController.java:63) [analysis-v5.3.0-be9ee30.jar:1.1]
at spark.ResponseTransformerRouteImpl$1.handle(ResponseTransformerRouteImpl.java:47) ~[analysis-v5.3.0-be9ee30.jar:1.1]
at spark.http.matching.Routes.execute(Routes.java:61) ~[analysis-v5.3.0-be9ee30.jar:1.1]
at spark.http.matching.MatcherFilter.doFilter(MatcherFilter.java:130) ~[analysis-v5.3.0-be9ee30.jar:1.1]
at spark.embeddedserver.jetty.JettyHandler.doHandle(JettyHandler.java:50) ~[analysis-v5.3.0-be9ee30.jar:1.1]
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1568) ~[analysis-v5.3.0-be9ee30.jar:1.1]
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) ~[analysis-v5.3.0-be9ee30.jar:1.1]
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) ~[analysis-v5.3.0-be9ee30.jar:1.1]
at org.eclipse.jetty.server.Server.handle(Server.java:564) ~[analysis-v5.3.0-be9ee30.jar:1.1]
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:317) ~[analysis-v5.3.0-be9ee30.jar:1.1]
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251) ~[analysis-v5.3.0-be9ee30.jar:1.1]
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:279) ~[analysis-v5.3.0-be9ee30.jar:1.1]
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:110) ~[analysis-v5.3.0-be9ee30.jar:1.1]
at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:124) ~[analysis-v5.3.0-be9ee30.jar:1.1]
at org.eclipse.jetty.util.thread.Invocable.invokePreferred(Invocable.java:128) ~[analysis-v5.3.0-be9ee30.jar:1.1]
at org.eclipse.jetty.util.thread.Invocable$InvocableExecutor.invoke(Invocable.java:222) ~[analysis-v5.3.0-be9ee30.jar:1.1]
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:294) ~[analysis-v5.3.0-be9ee30.jar:1.1]
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:199) ~[analysis-v5.3.0-be9ee30.jar:1.1]
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:673) ~[analysis-v5.3.0-be9ee30.jar:1.1]
at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:591) ~[analysis-v5.3.0-be9ee30.jar:1.1]
at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_171]
Caused by: java.io.IOException: Map failed
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:940) ~[na:1.8.0_171]
at org.mapdb.Volume$MappedFileVol.makeNewBuffer(Volume.java:629) ~[analysis-v5.3.0-be9ee30.jar:1.1]
... 62 common frames omitted
Caused by: java.lang.OutOfMemoryError: Map failed
at sun.nio.ch.FileChannelImpl.map0(Native Method) ~[na:1.8.0_171]
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:937) ~[na:1.8.0_171]
... 63 common frames omitted
[thread 140106077222656 also had an error]
16:22:42.730 [qtp1685549173-5058] INFO com.conveyal.taui.AnalysisServer - GET /api/jobs by [email protected] of conveyal
So a GraphQL query is trying to use MapDB via gtfs-lib, and MapDB is trying to allocate memory outside the Java heap but cannot. The machine has several more GB of memory than the maximum JVM heap size we allow, so something outside the Java heap must be filling up that remaining memory.
The referenced JVM error report file contains:
[ec2-user@ip-10-0-222-98 ~]$ head -n 500 hs_err_pid30661.log
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 12288 bytes for committing reserved memory.
# Possible reasons:
# The system is out of physical RAM or swap space
# In 32 bit mode, the process size limit was hit
# Possible solutions:
# Reduce memory load on the system
# Increase physical memory or swap space
# Check if swap backing store is full
# Use 64 bit Java on a 64 bit OS
# Decrease Java heap size (-Xmx/-Xms)
# Decrease number of Java threads
# Decrease Java thread stack sizes (-Xss)
# Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
#
# Out of Memory Error (os_linux.cpp:2655), pid=30661, tid=0x00007f6d3877d700
#
# JRE version: OpenJDK Runtime Environment (8.0_171-b10) (build 1.8.0_171-b10)
# Java VM: OpenJDK 64-Bit Server VM (25.171-b10 mixed mode linux-amd64 compressed oops)
The "Dynamic libraries" section at the end of that report (not reproduced here) appears to list all files memory-mapped by this process; in any case it contains a line for every MapDB file of every feed ever opened. So we suspect that we are not properly closing MapDBs, and that they are not deallocating memory or unmapping files, leading to an off-Java-heap memory leak.
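A quick way to check this hypothesis on the running server would be to count the process's memory-mapped regions over time. On Linux, every MapDB volume a GTFSFeed keeps open contributes entries to /proc/self/maps, and one plausible cause of mmap failing with errno=12 while physical RAM is still free is hitting the kernel's vm.max_map_count limit (default 65530). A minimal diagnostic sketch (Linux-only; the ".db" suffix filter is an assumption about the MapDB file extension):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

public class MapCount {
    public static void main(String[] args) throws IOException {
        // Each line in /proc/self/maps is one mapped region of this process.
        List<String> maps = Files.readAllLines(Paths.get("/proc/self/maps"));
        // Regions backed by files ending in ".db" would be MapDB volumes
        // (hypothetical filter; adjust to whatever extension gtfs-lib uses).
        long dbMappings = maps.stream().filter(l -> l.endsWith(".db")).count();
        System.out.println("total mapped regions: " + maps.size());
        System.out.println("file-backed .db mappings: " + dbMappings);
    }
}
```

If the total climbs steadily as feeds are opened and never falls, that would confirm mappings are leaking rather than being released on eviction.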
Currently we appear to close the MapDBs in the finalize() method of GTFSFeed. I have read many times that finalize() should never be relied on: you can never be sure if or when it will be called. We should instead explicitly close the MapDBs when GTFSFeeds are evicted from the cache.
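The real change would presumably attach a RemovalListener to the Guava LoadingCache that BaseGTFSCache already uses (it is visible in the stack trace above), closing the evicted feed's MapDB at that point. The pattern can be sketched with a stdlib LRU map; FeedHandle here is a hypothetical stand-in for GTFSFeed, whose close() would call db.close() on the underlying MapDB:

```java
import java.io.Closeable;
import java.util.LinkedHashMap;
import java.util.Map;

public class ClosingFeedCache {
    static class FeedHandle implements Closeable {
        boolean closed = false;
        // In the real class this would close the MapDB, unmapping its files.
        @Override public void close() { closed = true; }
    }

    // An access-order LinkedHashMap evicts its eldest entry once capacity is
    // exceeded; we close the evicted feed deterministically at that moment,
    // instead of hoping finalize() runs before native memory is exhausted.
    static Map<String, FeedHandle> newCache(int maxEntries) {
        return new LinkedHashMap<String, FeedHandle>(16, 0.75f, true) {
            @Override protected boolean removeEldestEntry(Map.Entry<String, FeedHandle> eldest) {
                if (size() > maxEntries) {
                    eldest.getValue().close();
                    return true;
                }
                return false;
            }
        };
    }

    public static void main(String[] args) {
        Map<String, FeedHandle> cache = newCache(2);
        FeedHandle a = new FeedHandle();
        cache.put("feedA", a);
        cache.put("feedB", new FeedHandle());
        cache.put("feedC", new FeedHandle()); // evicts and closes feedA
        System.out.println("feedA closed: " + a.closed); // prints "feedA closed: true"
    }
}
```

With Guava the equivalent hook is CacheBuilder.removalListener(...), which fires on eviction and explicit invalidation; the listener body would be the same close() call.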
This may or may not be related to the following errors we saw earlier in the logs. These might deserve a separate ticket: for some reason some feeds have zero service dates, and we don't handle that situation gracefully:
16:21:44.297 [qtp1685549173-5058] INFO com.conveyal.gtfs.BaseGTFSCache - Returning processed GTFS from S3
16:21:44.298 [qtp1685549173-5058] ERROR com.conveyal.taui.AnalysisServer - UNKNOWN Index: 0, Size: 0
java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at java.util.ArrayList.rangeCheck(ArrayList.java:657)
at java.util.ArrayList.get(ArrayList.java:433)
at com.conveyal.taui.models.Bundle$FeedSummary.setServiceDates(Bundle.java:81)
at com.conveyal.taui.controllers.BundleController.setBundleServiceDates(BundleController.java:254)
at com.conveyal.taui.controllers.BundleController.getBundle(BundleController.java:226)
at spark.ResponseTransformerRouteImpl$1.handle(ResponseTransformerRouteImpl.java:47)
at spark.http.matching.Routes.execute(Routes.java:61)
at spark.http.matching.MatcherFilter.doFilter(MatcherFilter.java:130)
at spark.embeddedserver.jetty.JettyHandler.doHandle(JettyHandler.java:50)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1568)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at org.eclipse.jetty.server.Server.handle(Server.java:564)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:317)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:279)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:110)
at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:124)
at org.eclipse.jetty.util.thread.Invocable.invokePreferred(Invocable.java:128)
at org.eclipse.jetty.util.thread.Invocable$InvocableExecutor.invoke(Invocable.java:222)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:294)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:199)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:673)
at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:591)
at java.lang.Thread.run(Thread.java:748)
-> GET /api/bundle/a0f1ab5facc64f8e9884e01617c5989c by [email protected] of sbb
16:21:44.298 [qtp1685549173-5058] ERROR com.conveyal.taui.AnalysisServer - null
com.conveyal.taui.AnalysisServerException
at com.conveyal.taui.AnalysisServerException.unknown(AnalysisServerException.java:67)
at com.conveyal.taui.controllers.BundleController.getBundle(BundleController.java:228)
at spark.ResponseTransformerRouteImpl$1.handle(ResponseTransformerRouteImpl.java:47)
at spark.http.matching.Routes.execute(Routes.java:61)
at spark.http.matching.MatcherFilter.doFilter(MatcherFilter.java:130)
at spark.embeddedserver.jetty.JettyHandler.doHandle(JettyHandler.java:50)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1568)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at org.eclipse.jetty.server.Server.handle(Server.java:564)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:317)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:279)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:110)
at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:124)
at org.eclipse.jetty.util.thread.Invocable.invokePreferred(Invocable.java:128)
at org.eclipse.jetty.util.thread.Invocable$InvocableExecutor.invoke(Invocable.java:222)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:294)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:199)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:673)
at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:591)
at java.lang.Thread.run(Thread.java:748)
16:21:44.301 [qtp1685549173-5078] INFO com.conveyal.gtfs.BaseGTFSCache - Attempting to download cached GTFS MapDB.
16:21:46.365 [qtp1685549173-5078] INFO com.conveyal.gtfs.BaseGTFSCache - Returning processed GTFS from S3
16:21:46.493 [qtp1685549173-5078] ERROR com.conveyal.taui.AnalysisServer - GRAPHQL Exception while fetching data: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0 -> GET /api/graphql by [email protected] of sbb
16:21:46.493 [qtp1685549173-5078] ERROR com.conveyal.taui.AnalysisServer - null
com.conveyal.taui.AnalysisServerException
at com.conveyal.taui.AnalysisServerException.graphQL(AnalysisServerException.java:48)
at com.conveyal.taui.controllers.GraphQLController.handleQuery(GraphQLController.java:68)
at spark.ResponseTransformerRouteImpl$1.handle(ResponseTransformerRouteImpl.java:47)
at spark.http.matching.Routes.execute(Routes.java:61)
at spark.http.matching.MatcherFilter.doFilter(MatcherFilter.java:130)
at spark.embeddedserver.jetty.JettyHandler.doHandle(JettyHandler.java:50)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1568)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at org.eclipse.jetty.server.Server.handle(Server.java:564)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:317)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:279)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:110)
at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:124)
at org.eclipse.jetty.util.thread.Invocable.invokePreferred(Invocable.java:128)
at org.eclipse.jetty.util.thread.Invocable$InvocableExecutor.invoke(Invocable.java:222)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:294)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:199)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:673)
at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:591)
at java.lang.Thread.run(Thread.java:748)
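From the trace, Bundle.FeedSummary.setServiceDates apparently indexes into a list of service dates that can be empty for a zero-service feed, producing the IndexOutOfBoundsException above. A defensive sketch of the idea (all names hypothetical; the real method signature may differ):

```java
import java.time.LocalDate;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ServiceDateGuard {
    // Returns {firstDate, lastDate}, or null when the feed has no service
    // at all, instead of blindly calling get(0) on an empty list.
    static LocalDate[] serviceDateRange(List<LocalDate> dates) {
        if (dates == null || dates.isEmpty()) return null; // zero-service feed
        List<LocalDate> sorted = new ArrayList<>(dates);
        Collections.sort(sorted);
        return new LocalDate[] { sorted.get(0), sorted.get(sorted.size() - 1) };
    }

    public static void main(String[] args) {
        System.out.println(serviceDateRange(Collections.emptyList())); // prints "null" rather than throwing
    }
}
```

The caller (BundleController in this trace) would then need to surface "this feed has no service dates" as a validation warning rather than an unknown server error.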