
Lingering Uploader thread from previous deploy after new deploy #14

Open
crspybits opened this issue Jun 6, 2021 · 6 comments


crspybits commented Jun 6, 2021

I just tailed logs on the server running on AWS, and see:

[2021-06-06T17:43:03.777Z] [DEBUG] [RepeatingTimer.swift:43 deinit] RepeatingTimer: deinit
[2021-06-06T17:43:33.777Z] [DEBUG] [PeriodicUploader.swift:31 schedule()] PeriodicUploader: About to run Uploader
[2021-06-06T17:43:33.777Z] [DEBUG] [DebugAlloc.swift:22 create()] [CREATE: Uploader] Created: 47972; destroyed: 47970
[2021-06-06T17:43:33.777Z] [DEBUG] [Uploader.swift:104 deinit] Uploader: deinit
[2021-06-06T17:43:33.777Z] [DEBUG] [DebugAlloc.swift:27 destroy()] [DESTROY: Uploader] Created: 47972; destroyed: 47971
[2021-06-06T17:43:33.777Z] [DEBUG] [Uploader.swift:141 run()] Attempting to get lock...
[2021-06-06T17:43:33.784Z] [ERROR] [Database.swift:52 init(showStartupInfo:)] Failure connecting to mySQL server syncserver-dev2.<SNIP>.us-west-2.rds.amazonaws.com: Failure: 2005 Unknown MySQL server host 'syncserver-dev2.<SNIP>.us-west-2.rds.amazonaws.com' (0)
[2021-06-06T17:43:33.784Z] [INFO] [Database.swift:97 close()] CLOSING DB CONNECTION: opened: 47973; closed: 47973
[2021-06-06T17:43:33.784Z] [ERROR] [PeriodicUploader.swift:38 schedule()] failedConnectingDatabase

This is ongoing: it's coming from a running process, not stale logs. Note that it references the mySQL server syncserver-dev2.<SNIP>.us-west-2.rds.amazonaws.com. That is a stale reference: the AWS mySQL RDS instance was only present for a brief interval while I was migrating. It is no longer running, so it's not surprising that accesses to it fail.

What is surprising is that there is a reference to this old server at all. The currently deployed Server.json configuration does not contain a reference to this mySQL RDS instance.
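A likely mechanism for the stale host: the server presumably decodes Server.json once at startup and keeps the result in memory, so a process (or thread) launched under the previous deploy holds on to the old database host regardless of what the currently deployed file says. A minimal sketch of that kind of startup loading, with hypothetical field names (not the actual SyncServer Server.json schema):

```swift
import Foundation

// Hypothetical sketch: configuration is decoded once at launch and then
// held in memory, so a long-lived process keeps whatever host it started
// with. Field names are illustrative, not the real Server.json schema.
struct ServerConfig: Codable {
    let mySQLHost: String
    let mySQLDatabase: String
}

func loadConfig(from data: Data) throws -> ServerConfig {
    try JSONDecoder().decode(ServerConfig.self, from: data)
}
```

Under this assumption, redeploying a new Server.json changes nothing for any already-running process that loaded the old one.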

I believe this is occurring as a result of:

a) The manner in which I'm deploying updates to the server using AWS Elastic Beanstalk tools, and
b) The new architecture of the server which runs a timer-based thread to do processing of deferred uploads.

Somehow that timer-based thread is surviving the deploy of the new server, at least in some cases.
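For reference, the timer-based architecture in (b) can be sketched roughly as follows: a repeating DispatchSourceTimer whose work keeps firing until the object is deallocated or its process dies. Names here are illustrative; this is not the actual PeriodicUploader code.

```swift
import Dispatch

// Illustrative sketch (not the actual PeriodicUploader implementation):
// a repeating DispatchSourceTimer that fires its work on a background
// queue until the worker is deallocated or its process is killed. If the
// old server process is still alive after a deploy, so is this timer.
final class PeriodicWorker {
    private let timer: DispatchSourceTimer

    init(intervalMilliseconds: Int,
         queue: DispatchQueue = .global(),
         work: @escaping () -> Void) {
        timer = DispatchSource.makeTimerSource(queue: queue)
        timer.schedule(deadline: .now() + .milliseconds(intervalMilliseconds),
                       repeating: .milliseconds(intervalMilliseconds))
        timer.setEventHandler(handler: work)
        timer.resume()
    }

    deinit {
        // Cancelling here means the periodic work only stops when the
        // worker is released or the hosting process terminates.
        timer.cancel()
    }
}
```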


crspybits commented Jun 6, 2021

Some detail about how I'm deploying server updates on AWS.

I'm using eb deploy with a new bundle.zip, i.e., a new deploy artifact as specified in .elasticbeanstalk/config.yml. This deploy artifact contains the new Server.json and a reference to the new Docker container.

My config.yml file includes these lines:

  aws:autoscaling:launchconfiguration:
    IamInstanceProfile: aws-elasticbeanstalk-ec2-role
    InstanceType: t2.micro
    EC2KeyName: amazon1
  aws:autoscaling:asg:
    MaxSize: '1'
  aws:elasticbeanstalk:environment:
    EnvironmentType: LoadBalanced
    LoadBalancerType: classic
    ServiceRole: aws-elasticbeanstalk-service-role

EB deployment policies are listed here: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.deploy-existing-version.html

(From that reference) "By default, your environment uses all-at-once deployments. If you created the environment with the EB CLI and it's a scalable environment (you didn't specify the --single option), it uses rolling deployments."

The following is from the AWS EB Web UI for my server:
[Screenshot: Screen Shot 2021-06-06 at 2 10 15 PM]

Clearly, it's using rolling deployments.

Still from that reference above: "Rolling – Avoids downtime and minimizes reduced availability, at a cost of a longer deployment time. Suitable if you can't accept any period of completely lost service. With this method, your application is deployed to your environment one batch of instances at a time. Most bandwidth is retained throughout the deployment."

I think this means that with rolling deployments, the same EC2 instances are reused, and the new Docker container is just deployed to those instances. That's consistent with other parts of the docs I'm reading, such as: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.rolling-version-deploy.html#environments-cfg-rollingdeployments-method

Rolling updates vs. deployment: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.rollingupdates.html

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options-general.html#command-options-general-autoscalingupdatepolicyrollingupdate

"To maintain full capacity during deployments, you can configure your environment to launch a new batch of instances before taking any instances out of service. This option is known as a rolling deployment with an additional batch. When the deployment completes, Elastic Beanstalk terminates the additional batch of instances." (https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.rolling-version-deploy.html)

https://stackoverflow.com/questions/38656595/difference-between-rolling-rolling-with-additional-batch-and-immutable-deployme

@crspybits

Conclusion: I believe if new EC2 instances are launched with each deployment, I'll not have this problem. It looks like rolling updates are one way to do this.


crspybits commented Jun 6, 2021

"Deployment option namespaces", including RollingWithAdditionalBatch; see https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.rolling-version-deploy.html

The following example is there too:

option_settings:
  aws:elasticbeanstalk:command:
    DeploymentPolicy: RollingWithAdditionalBatch
    BatchSizeType: Fixed
    BatchSize: 5

For aws:elasticbeanstalk:command, see:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options-general.html#command-options-general-elasticbeanstalkcommand

@crspybits

Here are the results of the deploy:

MacBook-Pro-4:neebla-02-production chris$ ./deploy.sh 
Alert: The platform version that your environment is using isn't recommended. There's a recommended version in the same platform branch.

Uploading neebla-02-production/app-210606_145849.zip to S3. This may take a while.
Upload Complete.
2021-06-06 20:58:50    INFO    Environment update is starting.      
2021-06-06 20:59:30    INFO    Rolling with Additional Batch deployment policy enabled. Launching  a new batch of 1 additional instance(s).
2021-06-06 21:01:24    INFO    Batch 1: 1 EC2 instance(s) [i-0ee6788a6f604b1c0] launched. Deploying application version 'app-210606_145849'.
2021-06-06 21:02:00    INFO    Successfully pulled crspybits/syncserver-runner:1.10.5
2021-06-06 21:02:03    INFO    Successfully built aws_beanstalk/staging-app
2021-06-06 21:02:13    INFO    Docker container afb403f5a66a is running aws_beanstalk/current-app.
2021-06-06 21:04:20    INFO    Batch 1: Completed application deployment.
2021-06-06 21:04:20    INFO    Command execution completed on 1 of 2 instances in environment.
2021-06-06 21:05:26    INFO    Terminating excess instance(s): [i-00c4f808416901a91].
2021-06-06 21:05:29    INFO    Excess instance(s) terminated.       
2021-06-06 21:05:31    INFO    New application version was deployed to running EC2 instances.
2021-06-06 21:05:32    INFO    Environment update completed successfully.

@crspybits

I used eb ssh to connect into the EC2 instance, and the server logs now look right too. No more extraneous Uploader.


crspybits commented Jun 6, 2021

So this has patched the issue. But I'd still like to know why a timer-based thread in my server can survive Docker container updates on an EC2 instance. I asked a question about this: https://stackoverflow.com/questions/67863943/swift-server-timer-based-thread-survives-docker-container-redeploy-on-aws
