
Cloud mount goes away #50
Open
zjpleau opened this issue Sep 17, 2020 · 3 comments

zjpleau commented Sep 17, 2020

I have been using this container for nearly 3 years now and it has been great. However, a few times in the last month I have seemingly lost the mount to Google Drive and the container needed to be restarted. The log (docker logs -f cloud-media-scripts) doesn't show anything. Is there any way to enable debug logging, or another place to look for info? Thanks!
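In the meantime I may set up a crude host-side watchdog that restarts the container whenever the mount stops responding. This is just a sketch; the mount path and the log file are placeholders for wherever the merged mount is exposed on my host:

#!/bin/bash
# Watchdog sketch: restart the container if the Google Drive mount stops responding.
# MOUNT_DIR is a placeholder; point it at wherever the merged mount shows up on the host.
MOUNT_DIR=/cloud-media-scripts/media
CONTAINER=cloud-media-scripts

if ! timeout 30 ls "$MOUNT_DIR" > /dev/null 2>&1; then
    echo "$(date): $MOUNT_DIR not responding, restarting $CONTAINER" >> /var/log/mount-watchdog.log
    docker restart "$CONTAINER"
fi

Running that from cron every few minutes would at least turn a dead mount into a short outage instead of a manual restart, but I'd still like to find the root cause.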


zjpleau commented Sep 18, 2020

After digging I managed to find the mongod.log file:

mongod.log.2020-09-17.log

So basically it drops the mount after MongoDB logs "serverStatus was very slow"? I know that my rmlocal process runs overnight and overlaps with Plex's usual nightly maintenance window, but I never had this happen until recently.
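In case it helps, the slow-status warnings are easy to pull out of that log to see how often they line up with the rmlocal window (using the rotated log file attached above):

# Count the slow serverStatus warnings and list their timestamps.
grep -c 'serverStatus was very slow' mongod.log.2020-09-17.log
grep 'serverStatus was very slow' mongod.log.2020-09-17.log | cut -d' ' -f1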


zjpleau commented Oct 1, 2020

Just had it happen again. rmlocal completed 1 hour before this occurred in the mongod.log:


2020-10-01T02:15:46.116-0700 I COMMAND  [conn5] command admin.$cmd command: ping { ping: 1 } numYields:0 reslen:37 locks:{} protocol:op_query 362ms
2020-10-01T02:15:46.379-0700 I COMMAND  [ftdc] serverStatus was very slow: { after basic: 141, after asserts: 222, after backgroundFlushing: 232, after connections: 252, after dur: 252, after extra_info: 353, after globalLock: 434, after locks: 484, after network: 565, after opLatencies: 968, after opcounters: 1281, after opcountersRepl: 1281, after repl: 1351, after security: 1351, after storageEngine: 1452, after tcmalloc: 1513, after transportSecurity: 1523, after wiredTiger: 4947, at end: 5845 }
2020-10-01T02:15:49.750-0700 I COMMAND  [conn4] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:189 locks:{} protocol:op_query 769ms
2020-10-01T02:15:54.000-0700 I -        [conn4] end connection 127.0.0.1:52818 (3 connections now open)
2020-10-01T02:15:55.783-0700 I COMMAND  [ftdc] serverStatus was very slow: { after basic: 80, after asserts: 111, after backgroundFlushing: 211, after connections: 373, after dur: 383, after extra_info: 564, after globalLock: 1644, after locks: 2643, after network: 3177, after opLatencies: 3208, after opcounters: 3208, after opcountersRepl: 3208, after repl: 3248, after security: 3298, after storageEngine: 3460, after tcmalloc: 3802, after transportSecurity: 3823, after wiredTiger: 4891, at end: 6336 }
2020-10-01T02:15:57.206-0700 I COMMAND  [conn1] command plexdrive.api_objects command: find { find: "api_objects", filter: { parents: "0ADFKLpiEBfOFUk9PVA", name: "media" }, skip: 0, limit: 1, batchSize: 1, singleBatch: true } planSummary: IXSCAN { parents: 1 } keysExamined:25 docsExamined:25 cursorExhausted:1 numYields:6 nreturned:1 reslen:365 locks:{ Global: { acquireCount: { r: 14 } }, Database: { acquireCount: { r: 7 } }, Collection: { acquireCount: { r: 7 } } } protocol:op_query 9699ms
2020-10-01T02:16:00.686-0700 I NETWORK  [thread1] connection accepted from 127.0.0.1:45954 #6 (3 connections now open)
2020-10-01T02:16:00.908-0700 I COMMAND  [conn5] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:189 locks:{} protocol:op_query 2014ms
2020-10-01T02:16:01.771-0700 I -        [conn5] end connection 127.0.0.1:52824 (3 connections now open)
2020-10-01T02:16:03.492-0700 I NETWORK  [thread1] connection accepted from 127.0.0.1:45956 #7 (3 connections now open)
2020-10-01T02:16:11.181-0700 I COMMAND  [ftdc] serverStatus was very slow: { after basic: 70, after asserts: 191, after backgroundFlushing: 191, after connections: 191, after dur: 191, after extra_info: 222, after globalLock: 242, after locks: 262, after network: 262, after opLatencies: 413, after opcounters: 413, after opcountersRepl: 413, after repl: 524, after security: 524, after storageEngine: 726, after tcmalloc: 7015, after transportSecurity: 7035, after wiredTiger: 9290, at end: 10340 }
2020-10-01T02:16:14.835-0700 I COMMAND  [conn6] command admin.$cmd command: getnonce { getnonce: 1 } numYields:0 reslen:65 locks:{} protocol:op_query 815ms
2020-10-01T02:16:24.664-0700 I COMMAND  [PeriodicTaskRunner] task: DBConnectionPool-cleaner took: 175ms
2020-10-01T02:16:27.522-0700 I COMMAND  [conn7] command admin.$cmd command: getnonce { getnonce: 1 } numYields:0 reslen:65 locks:{} protocol:op_query 4808ms
2020-10-01T02:16:28.884-0700 I COMMAND  [PeriodicTaskRunner] task: UnusedLockCleaner took: 114ms
2020-10-01T02:16:40.266-0700 I COMMAND  [PeriodicTaskRunner] task: DBConnectionPool-cleaner took: 415ms
2020-10-01T02:16:42.486-0700 I COMMAND  [conn6] command admin.$cmd command: isMaster { ismaster: 1 } numYields:0 reslen:189 locks:{} protocol:op_query 10298ms
2020-10-01T02:16:55.730-0700 I COMMAND  [conn7] command admin.$cmd command: ping { ping: 1 } numYields:0 reslen:37 locks:{} protocol:op_query 4114ms
2020-10-01T02:16:59.882-0700 I COMMAND  [ftdc] serverStatus was very slow: { after basic: 30, after asserts: 71, after backgroundFlushing: 71, after connections: 81, after dur: 81, after extra_info: 2869, after globalLock: 2980, after locks: 4941, after network: 7180, after opLatencies: 9180, after opcounters: 9322, after opcountersRepl: 9322, after repl: 12102, after security: 12205, after storageEngine: 17035, after tcmalloc: 18618, after transportSecurity: 19465, after wiredTiger: 34756, at end: 37106 }
2020-10-01T02:17:03.853-0700 I -        [conn7] AssertionException handling request, closing client connection: 6 socket exception [SEND_ERROR] for 127.0.0.1:45956
2020-10-01T02:17:03.896-0700 I -        [conn6] AssertionException handling request, closing client connection: 6 socket exception [SEND_ERROR] for 127.0.0.1:45954
2020-10-01T02:17:07.533-0700 I -        [conn6] end connection 127.0.0.1:45954 (3 connections now open)
2020-10-01T02:17:07.533-0700 I -        [conn7] end connection 127.0.0.1:45956 (3 connections now open)
2020-10-01T02:17:13.078-0700 I COMMAND  [conn1] command plexdrive.api_objects command: find { find: "api_objects", filter: { parents: "0BzFKLpiEBfOFaXk5OWtiZGpuOGs", name: "tv" }, skip: 0, limit: 1, batchSize: 1, singleBatch: true } planSummary: IXSCAN { parents: 1 } keysExamined:49 docsExamined:49 cursorExhausted:1 numYields:11 nreturned:1 reslen:371 locks:{ Global: { acquireCount: { r: 24 } }, Database: { acquireCount: { r: 12 } }, Collection: { acquireCount: { r: 12 } } } protocol:op_query 64960ms
2020-10-01T02:17:13.294-0700 I -        [conn1] end connection 127.0.0.1:35798 (1 connection now open)
2020-10-01T02:17:13.423-0700 I NETWORK  [thread1] connection accepted from 127.0.0.1:46066 #8 (1 connection now open)
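
For what it's worth, the "at end" totals in those serverStatus lines show how long the whole status collection took; a quick one-liner against the pasted log (nothing container-specific about it):

# Print the five worst serverStatus collection times in milliseconds.
# The last entry above is around 37 seconds (at end: 37106).
grep 'serverStatus was very slow' mongod.log | grep -o 'at end: [0-9]*' | sort -t: -k2 -n | tail -5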


zjpleau commented Dec 1, 2020

Just happened again, this time well before the Plex maintenance period. I have a feeling that Plexdrive is crashing when Plex runs intro detection on a ton of remote files. I might have to try one of the newer builds of Plexdrive separate from this container.
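Before swapping Plexdrive out I will probably leave a probe running so I can line up the exact moment the mount stalls with Plex's intro detection activity. Rough sketch, with the mount path again being a placeholder for my setup:

# Log a line whenever listing the mount takes longer than 30 seconds or fails outright.
MOUNT_DIR=/cloud-media-scripts/media
while true; do
    if ! timeout 30 ls "$MOUNT_DIR" > /dev/null 2>&1; then
        echo "$(date): listing $MOUNT_DIR hung or failed"
    fi
    sleep 60
done >> /var/log/mount-probe.log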
