Commit

Merge pull request #208 from SouthernMethodistUniversity/add_cf_faq
Add cf faq
jrlagrone authored Aug 16, 2024
2 parents 6a23608 + 05cc0ac commit d946aad
Showing 2 changed files with 6 additions and 6 deletions.
docs/_toc.yml: 2 changes (1 addition & 1 deletion)
@@ -24,7 +24,7 @@ parts:
 - file: coldfront/add_class.md
 - file: coldfront/add_remove_users.md
 - file: coldfront/request_change_allocation.md
-- file: coldfront/faq/md
+- file: coldfront/faq.md
 - caption: Applications
 chapters:
 - file: examples/chemistry.md
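For context, `_toc.yml` appears to be a Jupyter Book table of contents, where each `file:` entry must name a real source file; with the old `coldfront/faq/md` entry the FAQ page had no matching file, so it could not be included in the built site. A minimal sketch of how the corrected section might look, assuming the standard `jb-book` layout (the `format:`, `root:`, and "ColdFront" caption lines are assumptions, not taken from the diff):

```yaml
# Hypothetical _toc.yml excerpt in Jupyter Book's jb-book format.
# The format, root, and "ColdFront" caption lines are assumptions; the file
# entries are taken from the diff context above.
format: jb-book
root: index
parts:
  - caption: ColdFront
    chapters:
      - file: coldfront/add_class.md
      - file: coldfront/add_remove_users.md
      - file: coldfront/request_change_allocation.md
      - file: coldfront/faq.md   # the old "coldfront/faq/md" does not name a real file
  - caption: Applications
    chapters:
      - file: examples/chemistry.md
```

Jupyter Book generally accepts `file:` entries with or without the `.md` extension, but the path itself has to resolve, which is why replacing the stray `/md` suffix with `.md` is enough to fix the entry.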
docs/coldfront/faq.md: 10 changes (5 additions & 5 deletions)
@@ -9,11 +9,11 @@ See the [instructor quick start](qs_instructor.md).
 2. Starting on November 1, 2024, user access to HPC compute resources will require being a member of an active project and allocation in ColdFront.
 See the [quick start](quick_start.md).
 
-3. Personal user and group directories in '''$WORK''' will be decommissioned before January 15, 2025 (tentative).
-All data stored in a personal user or group directories in $WORK must be moved into an appropriate new storage allocation specified in ColdFront project(s).
+3. Personal user and group directories in `$WORK` will be decommissioned before January 15, 2025 (tentative).
+All data stored in a personal user or group directories in `$WORK` must be moved into an appropriate new storage allocation specified in ColdFront project(s).
 OIT will assist this transition as needed, please submit a STABLE Help Desk ticket if you need assistance.
 All data must be moved before January 15, 2025, though quotas on existing spaces may be reduced prior to that date.
-HPC users will receive regular communications about the status $WORK quotas.
+HPC users will receive regular communications about the status `$WORK` quotas.
 
 4. During the transition period (tentatively between August 2024 and May 2025), all allocation requests will be approved with the following exceptions:
 * Requests that violate SMU policy (see, for example, [SMU's acceptable use policy](https://www.smu.edu/policy/8-information-technology/8-1-acceptable-use).
@@ -95,7 +95,7 @@ No. HPC resources are free for SMU researchers and sponsored affiliates.
 
 ### Is my $HOME directory part of an allocation?
 
-No. All users with an active HPC account will keep or be granted a home directory on M3 and the SuperPOD with 200GB of space on each system. This space is private and backed up with daily snapshots for 7 days. Sharing data in $HOME directories is not allowed.
+No. All users with an active HPC account will keep or be granted a home directory on M3 and the SuperPOD with 200GB of space on each system. This space is private and backed up with daily snapshots for 7 days. Sharing data in `$HOME` directories is not allowed.
 
 ### Can I add external collaborators?
 
@@ -139,7 +139,7 @@ Request more compute time on an existing allocation or request a new compute allocation.
 
 When a storage or compute allocation expires, user access to that allocation is revoked. For compute resources, jobs will no longer run if submitted with an expired SLURM account. For storage resources, write access will be revoked.
 
-During the ColdFront transition, SMU will not delete any user data except for the existing 60-day $SCRATCH purge policy or by request. More formal data retention policies for SMU HPC systems will be clarified in the future by the ODSRCI.
+During the ColdFront transition, SMU will not delete any user data except for the existing 60-day `$SCRATCH` purge policy or by request. More formal data retention policies for SMU HPC systems will be clarified in the future by the ODSRCI.
 
 OIT recommends that all HPC users request allocation renewals in a timely manner to avoid disruptions.
 
