Remove AWS infrastructure for code upload challenge if disapproved #4377
base: master
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

```
@@ Coverage Diff @@
##           master    #4377      +/-   ##
==========================================
- Coverage   72.93%   69.30%   -3.63%
==========================================
  Files          83       20      -63
  Lines        5368     3574    -1794
==========================================
- Hits         3915     2477    -1438
+ Misses       1453     1097     -356
```

See 64 files with indirect coverage changes. Continue to review the full report in Codecov by Sentry.
Added some high-level comments.
apps/challenges/aws_utils.py
Outdated
```diff
@@ -1061,6 +1061,257 @@ def scale_resources(challenge, worker_cpu_cores, worker_memory):
         return e.response


+def detach_policies_and_delete_role(challenge, iam):
```
All these should be called inside each other in sequence (check the aws utils for EKS creation).
All of these should be celery tasks (check the aws utils for EKS creation).
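The suggested relay (each Celery task performing its own cleanup and then kicking off the next, mirroring the EKS-creation utilities) can be sketched with plain functions standing in for `@app.task`-decorated tasks; the task names below are illustrative, not the actual EvalAI tasks:

```python
# Sketch of the task-relay pattern: each step does its own cleanup and,
# only on success, hands off to the next step. With Celery, each function
# would be decorated with @app.task and the hand-off would be .delay(challenge).
calls = []  # records execution order, for illustration only

def delete_iam_roles(challenge):
    calls.append("iam")
    # ... IAM cleanup would happen here ...
    delete_efs(challenge)  # with Celery: delete_efs.delay(challenge)

def delete_efs(challenge):
    calls.append("efs")
    # ... EFS cleanup would happen here ...
    delete_eks_cluster(challenge)  # with Celery: delete_eks_cluster.delay(challenge)

def delete_eks_cluster(challenge):
    calls.append("eks")
    # Last step in the chain: nothing left to relay to.

delete_iam_roles({"pk": 1})
print(calls)  # ['iam', 'efs', 'eks']
```

Because each step only triggers its successor after its own work succeeds, a failure mid-chain leaves later resources untouched for a retry.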
Fixed with new commits
apps/challenges/aws_utils.py
Outdated
```python
challenge_aws_keys = get_aws_credentials_for_challenge(challenge.pk)
iam_client = get_boto3_client("iam", challenge_aws_keys)
eks_client = get_boto3_client("eks", challenge_aws_keys)
efs_client = get_boto3_client("efs", challenge_aws_keys)
ec2_client = get_boto3_client("ec2", challenge_aws_keys)
elb_client = get_boto3_client("elb", challenge_aws_keys)
elbv2_client = get_boto3_client("elbv2", challenge_aws_keys)
```
This is incorrect. This is not how we do things in the AWS utils during EKS cluster creation. I recommend you read that code.
Oh wait, I get what you mean: the sequence. You want me to do it in sequence. Okay, never mind, I got it.
Here is the expectation. See this method: https://github.com/Cloud-CV/EvalAI/blob/da7e0680c11a8e60c335c821902211a06ed581de/apps/challenges/aws_utils.py#L1361C1-L1475C15 Now, if you have delete infra, and suppose it starts with deletion of IAM, followed by deletion of EFS, here is what you would write:

```python
@app.task
def destroy_eks_cluster(challenge):
    """
    Destroy EKS cluster. Starts with deletion of EKS and Nodegroup roles, and relays to deletion of EFS.

    Arguments:
        instance {<class 'django.db.models.query.QuerySet'>} -- instance of the model calling the post hook
    """
    from .models import ChallengeEvaluationCluster
    from .serializers import ChallengeEvaluationClusterSerializer
    from .utils import get_aws_credentials_for_challenge

    for obj in serializers.deserialize("json", challenge):
        challenge_obj = obj.object
        challenge_aws_keys = get_aws_credentials_for_challenge(challenge_obj.pk)
        client = get_boto3_client("iam", challenge_aws_keys)
        eks_role_arn = ...
        try:
            <LOGIC FOR EKS CLUSTER ROLE DELETION>
        except ClientError as e:
            logger.exception(e)
            return
        waiter = client.get_waiter("role_deleted")  # TODO: Find correct argument
        waiter.wait(<TODO>)
        node_group_role_name = "evalai-code-upload-nodegroup-role-{}".format(
            environment_suffix
        )
        node_group_arn_role = ...
        try:
            <LOGIC FOR EKS NODEGROUP ROLE DELETION>
        except ClientError as e:
            logger.exception(e)
            return
        waiter = client.get_waiter("role_exists")
        waiter.wait(RoleName=node_group_role_name)
        # Delete custom ECR all access policy attached to node_group_role
        ecr_all_access_policy_arn = ...
        try:
            <LOGIC TO DELETE CUSTOM ECR ALL ACCESS POLICY>
        except ClientError as e:
            logger.exception(e)
            return
        # Remove these details from the evaluation cluster on the backend
        try:
            challenge_evaluation_cluster = ChallengeEvaluationCluster.objects.get(
                challenge=challenge_obj
            )
            serializer = ChallengeEvaluationClusterSerializer(
                challenge_evaluation_cluster,
                data={
                    "eks_arn_role": "",
                    "node_group_arn_role": "",
                    "ecr_all_access_policy_arn": "",
                },
                partial=True,
            )
            if serializer.is_valid():
                serializer.save()
            # Delete EFS
            delete_efs.delay(challenge)
        except Exception as e:
            logger.exception(e)
            return
```
Added some comments.
Also rebase your branch on top of master.
apps/challenges/models.py
Outdated
```python
# if the challenge:
#     - the challenge model created
#     - the challenge is disapproved by admin
#     - the challenge is docker based
#     - the challenge is not remote evaluation
# then removed the aws infrastructure for the code upload challenge (if exists)
```
Remove these comments.
Actually, I think there is a change needed in this one. Check the name of this method: create_eks_cluster_or_ec2_for_challenge. This means that we should be creating a new method delete_eks_cluster_for_challenge with only the new changes you have added here. Can you please change that?
Sure thing
apps/challenges/aws_utils.py
Outdated
```python
    return
try:
    serializer = ChallengeEvaluationClusterSerializer(
        challenge_evaluation_cluster,
        data={"eks_arn_role": "", "node_group_arn_role": ""},
        partial=True,
    )
```
Have you checked the ChallengeEvaluationCluster Django model? Is this all that we should be emptying here? What are the other attributes of the model that should be cleared up?
I remember that we delete 3 things: 2 roles and 1 created policy. Check if any of these details are in the challenge evaluation cluster.
That's true. I have updated the script to remove the policy and set the policy ARN to blank.
That's true, there is another attribute that I should clear, the ecr_all_access_policy_arn. Thanks a lot @gchhablani
```python
else:
    time.sleep(5)
    logger.info(
        f"Waiting for mount targets to be deleted for EFS {efs_id}"
    )
```
Can you check for a better way than time.sleep()? Is this even needed? Also, why are we deleting the mount target? Is this needed for deletion of EFS?
Hi there, currently get_waiter allows boto3 to dynamically wait for deletion (or carry out tasks). However, at the moment boto3 doesn't support get_waiter for mount target deletion, so the usage of time.sleep is justified.
To answer your question about deleting the mount target: we do need to delete the mount targets before removing the EFS.
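Since boto3 ships no waiter for mount-target deletion, the polling loop amounts to a generic wait-until helper. A minimal sketch of that pattern (the predicate, timeout, and interval values are illustrative; the EFS call in the docstring shows how it would plug in):

```python
import time

def wait_until(predicate, timeout=300, interval=5):
    """Poll `predicate` until it returns True or `timeout` seconds elapse.

    Stands in for a missing boto3 waiter, e.g.:
        wait_until(lambda: not efs.describe_mount_targets(
            FileSystemId=efs_id)["MountTargets"])
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False

# Illustration with a counter standing in for the AWS call:
state = {"remaining": 3}

def mount_targets_gone():
    # Pretend one mount target disappears per poll.
    state["remaining"] -= 1
    return state["remaining"] <= 0

assert wait_until(mount_targets_gone, timeout=10, interval=0)
```

Unlike a bare `while True: time.sleep(5)` loop, this bounds the wait, so a mount target stuck in `deleting` cannot hang the Celery worker forever.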
```python
# Optionally delete the security group if no longer needed
if challenge_evaluation_cluster.efs_security_group_id:
    ec2.delete_security_group(
        GroupId=challenge_evaluation_cluster.efs_security_group_id
    )
```
Did you make sure this attribute exists?
```python
eks.get_waiter("nodegroup_deleted").wait(
    clusterName=cluster_name, nodegroupName=nodegroup
)
```
Can we not have this get_waiter instead of time.sleep where you use it?
As I mentioned, get_waiter has no support for mount target deletion.
```python
serializer = ChallengeEvaluationClusterSerializer(
    challenge_evaluation_cluster,
    data={
        "name": "",
    },
    partial=True,
)
```
No changes needed for nodegroup?
Currently there is no attribute for the nodegroup. The code for nodegroup creation also doesn't save any details for the nodegroup, so I have to retrieve them manually:

```python
@app.task
def create_eks_nodegroup(challenge, cluster_name):
    """
    Creates a nodegroup when an EKS cluster is created by the EvalAI admin

    Arguments:
        instance {<class 'django.db.models.query.QuerySet'>} -- instance of the model calling the post hook
        cluster_name {str} -- name of eks cluster
    """
    from .utils import get_aws_credentials_for_challenge

    for obj in serializers.deserialize("json", challenge):
        challenge_obj = obj.object
    environment_suffix = "{}-{}".format(challenge_obj.pk, settings.ENVIRONMENT)
    nodegroup_name = "{}-{}-nodegroup".format(
        challenge_obj.title.replace(" ", "-")[:20], environment_suffix
    )
    challenge_aws_keys = get_aws_credentials_for_challenge(challenge_obj.pk)
    client = get_boto3_client("eks", challenge_aws_keys)
    cluster_meta = get_code_upload_setup_meta_for_challenge(challenge_obj.pk)
    # TODO: Move the hardcoded cluster configuration such as the
    # instance_type, subnets, AMI to challenge configuration later.
    try:
        response = client.create_nodegroup(
            clusterName=cluster_name,
            nodegroupName=nodegroup_name,
            scalingConfig={
                "minSize": challenge_obj.min_worker_instance,
                "maxSize": challenge_obj.max_worker_instance,
                "desiredSize": challenge_obj.desired_worker_instance,
            },
            diskSize=challenge_obj.worker_disk_size,
            subnets=[cluster_meta["SUBNET_1"], cluster_meta["SUBNET_2"]],
            instanceTypes=[challenge_obj.worker_instance_type],
            amiType=challenge_obj.worker_ami_type,
            nodeRole=cluster_meta["EKS_NODEGROUP_ROLE_ARN"],
        )
        logger.info("Nodegroup create: {}".format(response))
    except ClientError as e:
        logger.exception(e)
        return
    waiter = client.get_waiter("nodegroup_active")
    waiter.wait(clusterName=cluster_name, nodegroupName=nodegroup_name)
    construct_and_send_eks_cluster_creation_mail(challenge_obj)
    # starting the code-upload-worker
    client = get_boto3_client("ecs", aws_keys)
    client_token = client_token_generator(challenge_obj.pk)
    create_service_by_challenge_pk(client, challenge_obj, client_token)
```
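Since the nodegroup name is never persisted, it could in principle be re-derived from the creation-side formatting quoted above; a small sketch of that derivation (function name and sample values are illustrative):

```python
def build_nodegroup_name(challenge_title, challenge_pk, environment):
    # Mirrors the creation-side formatting quoted above: the title is
    # slug-ified and truncated to 20 characters, then suffixed with
    # "<pk>-<environment>-nodegroup".
    environment_suffix = "{}-{}".format(challenge_pk, environment)
    return "{}-{}-nodegroup".format(
        challenge_title.replace(" ", "-")[:20], environment_suffix
    )

name = build_nodegroup_name("My Very Long Challenge Title", 42, "staging")
print(name)  # My-Very-Long-Challen-42-staging-nodegroup
```

Re-deriving the name is fragile if the challenge title or environment ever changes, which is why listing nodegroups dynamically via `eks.list_nodegroups(clusterName=...)`, as the deletion task does, is the safer approach.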
apps/challenges/aws_utils.py
Outdated
```python
ec2 = get_boto3_client("ec2", challenge_aws_keys)
elb = get_boto3_client("elb", challenge_aws_keys)
elbv2 = get_boto3_client("elbv2", challenge_aws_keys)
```
Do we ever create this on our end?
Oh, I'm so sorry. I don't know why I added the load balancer. We do need ec2, but there is no code for the load balancer. I sincerely apologize for the mistake.
…ccording to Gujan new comments
…o use delete_eks_infra function
apps/challenges/aws_utils.py
Outdated
```python
@app.task
def detach_policies_and_delete_role(challenge):
    from .models import ChallengeEvaluationCluster
    from .utils import get_aws_credentials_for_challenge
    from .serializers import ChallengeEvaluationClusterSerializer

    for obj in serializers.deserialize("json", challenge):
        challenge_obj = obj.object
        challenge_aws_keys = get_aws_credentials_for_challenge(challenge_obj.pk)
        iam = get_boto3_client("iam", challenge_aws_keys)

        try:
            challenge_evaluation_cluster = ChallengeEvaluationCluster.objects.get(
                challenge=challenge_obj
            )
            eks_arn_role = challenge_evaluation_cluster.eks_arn_role
            node_group_arn_role = challenge_evaluation_cluster.node_group_arn_role
        except Exception as e:
            logger.exception(e)
            return

        for role_arn in [eks_arn_role, node_group_arn_role]:
            role_name = role_arn.split("/")[-1]
            try:
                attached_policies = iam.list_attached_role_policies(
                    RoleName=role_name
                )
                for policy in attached_policies["AttachedPolicies"]:
                    iam.detach_role_policy(
                        RoleName=role_name, PolicyArn=policy["PolicyArn"]
                    )
            except Exception as e:
                logger.exception(e)
                return

            try:
                iam.delete_role(RoleName=role_name)
            except Exception as e:
                logger.exception(e)
                return

        try:
            iam.delete_policy(
                PolicyArn=challenge_evaluation_cluster.ecr_all_access_policy_arn
            )
        except Exception as e:
            logger.exception(e)
            return

        try:
            serializer = ChallengeEvaluationClusterSerializer(
                challenge_evaluation_cluster,
                data={
                    "eks_arn_role": "",
                    "node_group_arn_role": "",
                    "ecr_all_access_policy_arn": "",
                },
                partial=True,
            )
            if serializer.is_valid():
                serializer.save()

            delete_efs_resources.delay(challenge)
        except Exception as e:
            logger.exception(e)
```
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Retrieving the 2 role ARNs from ChallengeEvaluationCluster, deleting them, and then updating the values in ChallengeEvaluationCluster.
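The IAM calls (`delete_role`, `list_attached_role_policies`) expect the role name, which is the last path segment of the stored ARN; that is what the `role_arn.split("/")[-1]` above extracts. A tiny sketch (the account ID and role name are illustrative):

```python
def role_name_from_arn(role_arn):
    # IAM role ARNs look like arn:aws:iam::<account-id>:role/<path>/<name>;
    # IAM APIs such as delete_role take only the final <name> segment.
    return role_arn.split("/")[-1]

print(role_name_from_arn(
    "arn:aws:iam::123456789012:role/evalai-code-upload-eks-role"
))  # evalai-code-upload-eks-role
```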
```python
@app.task
def delete_efs_resources(challenge):
    import time

    from .models import ChallengeEvaluationCluster
    from .utils import get_aws_credentials_for_challenge, get_boto3_client
    from .serializers import ChallengeEvaluationClusterSerializer

    for obj in serializers.deserialize("json", challenge):
        challenge_obj = obj.object
        challenge_aws_keys = get_aws_credentials_for_challenge(challenge_obj.pk)
        efs = get_boto3_client("efs", challenge_aws_keys)
        ec2 = get_boto3_client("ec2", challenge_aws_keys)

        try:
            challenge_evaluation_cluster = ChallengeEvaluationCluster.objects.get(
                challenge=challenge_obj
            )
            efs_id = challenge_evaluation_cluster.efs_id
            mount_target_ids = challenge_evaluation_cluster.efs_mount_target_ids

            for mount_target_id in mount_target_ids:
                efs.delete_mount_target(MountTargetId=mount_target_id)

            while True:
                existing_mounts = efs.describe_mount_targets(FileSystemId=efs_id)
                if not existing_mounts["MountTargets"]:
                    break
                else:
                    time.sleep(5)
                    logger.info(
                        f"Waiting for mount targets to be deleted for EFS {efs_id}"
                    )

            # Optionally delete the security group if no longer needed
            if challenge_evaluation_cluster.efs_security_group_id:
                ec2.delete_security_group(
                    GroupId=challenge_evaluation_cluster.efs_security_group_id
                )

            efs.delete_file_system(FileSystemId=efs_id)

        except Exception as e:
            logger.exception(e)
            return

        try:
            serializer = ChallengeEvaluationClusterSerializer(
                challenge_evaluation_cluster,
                data={
                    "efs_id": "",
                    "efs_security_group_id": "",
                    "efs_mount_target_ids": [],
                },
                partial=True,
            )
            if serializer.is_valid():
                serializer.save()

            delete_eks_resources.delay(challenge)

        except Exception as e:
            logger.exception(e)
```
Retrieving efs_id and the efs mount_target_ids from ChallengeEvaluationCluster, then deleting the mount targets before deleting the EFS.
```python
@app.task
def delete_eks_resources(challenge):
    from .models import ChallengeEvaluationCluster
    from .utils import get_aws_credentials_for_challenge
    from .serializers import ChallengeEvaluationClusterSerializer

    for obj in serializers.deserialize("json", challenge):
        challenge_obj = obj.object
        challenge_aws_keys = get_aws_credentials_for_challenge(challenge_obj.pk)
        eks = get_boto3_client("eks", challenge_aws_keys)

        try:
            challenge_evaluation_cluster = ChallengeEvaluationCluster.objects.get(
                challenge=challenge_obj
            )
            cluster_name = challenge_evaluation_cluster.name

            node_groups = eks.list_nodegroups(clusterName=cluster_name)[
                "nodegroups"
            ]
            for nodegroup in node_groups:
                eks.delete_nodegroup(
                    clusterName=cluster_name, nodegroupName=nodegroup
                )
                eks.get_waiter("nodegroup_deleted").wait(
                    clusterName=cluster_name, nodegroupName=nodegroup
                )
        except Exception as e:
            logger.exception(e)
            return

        try:
            eks.delete_cluster(name=cluster_name)
            eks.get_waiter("cluster_deleted").wait(name=cluster_name)
        except Exception as e:
            logger.exception(e)
            return

        try:
            serializer = ChallengeEvaluationClusterSerializer(
                challenge_evaluation_cluster,
                data={
                    "name": "",
                },
                partial=True,
            )
            if serializer.is_valid():
                serializer.save()

            delete_vpc_resources.delay(challenge)
        except Exception as e:
            logger.exception(e)
```
Retrieving the cluster name from ChallengeEvaluationCluster, listing and removing the nodegroups attached to the cluster dynamically, then deleting the cluster and clearing the cluster name in ChallengeEvaluationCluster.
```python
@app.task
def delete_vpc_resources(challenge):
    from .models import ChallengeEvaluationCluster
    from .utils import get_aws_credentials_for_challenge
    from .serializers import ChallengeEvaluationClusterSerializer

    try:
        for obj in serializers.deserialize("json", challenge):
            challenge_obj = obj.object

        challenge_evaluation_cluster = ChallengeEvaluationCluster.objects.get(
            challenge=challenge_obj
        )

        vpc_id = challenge_evaluation_cluster.vpc_id
        internet_gateway_id = challenge_evaluation_cluster.internet_gateway_id
        route_table_id = challenge_evaluation_cluster.route_table_id
        security_group_id = challenge_evaluation_cluster.security_group_id
        subnet_1_id = challenge_evaluation_cluster.subnet_1_id
        subnet_2_id = challenge_evaluation_cluster.subnet_2_id

    except Exception as e:
        logger.error(f"Challenge or Cluster not found: {e}")
        return

    challenge_aws_keys = get_aws_credentials_for_challenge(challenge_obj.pk)
    ec2 = get_boto3_client("ec2", challenge_aws_keys)

    try:
        addresses = ec2.describe_addresses(
            Filters=[{"Name": "vpc-id", "Values": [vpc_id]}]
        )
        for address in addresses["Addresses"]:
            if "AssociationId" in address:
                ec2.disassociate_address(
                    AssociationId=address["AssociationId"]
                )
            ec2.release_address(AllocationId=address["AllocationId"])

        if internet_gateway_id != "":
            ec2.detach_internet_gateway(
                InternetGatewayId=internet_gateway_id, VpcId=vpc_id
            )
            ec2.delete_internet_gateway(InternetGatewayId=internet_gateway_id)

        nat_gateways = ec2.describe_nat_gateways(
            Filters=[{"Name": "vpc-id", "Values": [vpc_id]}]
        )
        for nat_gateway in nat_gateways["NatGateways"]:
            ec2.delete_nat_gateway(NatGatewayId=nat_gateway["NatGatewayId"])

        subnets = [subnet_1_id, subnet_2_id]
        for subnet_id in subnets:
            instances = ec2.describe_instances(
                Filters=[{"Name": "subnet-id", "Values": [subnet_id]}]
            )
            for reservation in instances["Reservations"]:
                for instance in reservation["Instances"]:
                    ec2.terminate_instances(
                        InstanceIds=[instance["InstanceId"]]
                    )

        if security_group_id != "":
            ec2.delete_security_group(GroupId=security_group_id)

        for subnet_id in subnets:
            ec2.delete_subnet(SubnetId=subnet_id)

        if route_table_id != "":
            ec2.delete_route_table(RouteTableId=route_table_id)

        if vpc_id != "":
            ec2.delete_vpc(VpcId=vpc_id)

    except ClientError as e:
        logger.exception(f"Failed to delete AWS resources: {e}")
        return

    try:
        serializer = ChallengeEvaluationClusterSerializer(
            challenge_evaluation_cluster,
            data={
                "vpc_id": "",
                "internet_gateway_id": "",
                "route_table_id": "",
                "security_group_id": "",
                "subnet_1_id": "",
                "subnet_2_id": "",
            },
            partial=True,
        )

        if serializer.is_valid():
            serializer.save()

    except Exception as e:
        logger.exception(e)
```
Retrieving details about the VPC components from ChallengeEvaluationCluster: 2 subnets, security group, route table, internet gateway, and the VPC itself, removing them, then clearing their values in ChallengeEvaluationCluster.
This pull request removes the AWS infrastructure for the code upload challenge if the challenge is disapproved by the admin.