Unable to write and view data in container directories #22
Comments
I have a follow-up question regarding how this Docker image saves data. On my local machine, the ~/sas-container-recipes-master/run folder now contains /home (since I did --volume ${PWD}/home:/home in launchsas.sh), and as stated above this maps onto /home in the Docker image, where I have for example /home/sasdemo/jupyter. I just deployed the same image to Azure, and there Jupyter saves its notebooks not under /home/sasdemo/jupyter but under /home/cas/jupyter, which is really unexpected. What is the reason for the switch? Is this a bug? Also, I exec'd into the image in Azure, installed R, and upon restart the R installation was gone. Clearly I did not persist /; would you recommend persisting the entire root directory in order to be able to install new packages? This makes me think that if I run !pip install in a Jupyter session, the pip packages may not be available the next time I start Jupyter. Am I right?
If you want to install R, then I would suggest one of two things.
If you do not build an image, then you are correct: any software installed in the running container is lost, and anything saved in a directory that does not have persistence associated with it is lost as well when the container is restarted.

The permission issue comes down to the user (or more specifically the userid) in the container creating things (root in this case) and how that userid maps to the outside environment. If you built the single image with addons/auth-demo, then the sasdemo user is created with a userid of 1004. Anything created by that user in a mapped volume/directory will have that userid associated with it, and unless you are that user you will not be able to look at the contents unless you become root. If you want the sasdemo user to match your userid, define the DEMO_USER_UID environment variable with your user id; then you may have better success. Look at the demo_pre_deploy.sh script to see the list of variables the system uses to create the user.

The case with the other directories could be an issue with us not setting the correct permissions on them when we created them. What is the error you get when writing to the /data directory in the container?
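The userid point above can be seen with a quick local experiment (the directory name here is made up for illustration): it is the numeric uid, not the user name, that crosses a bind mount.

```shell
# Files keep the numeric uid of whoever created them; ls only shows a
# user name when that uid has an entry in /etc/passwd on the side you
# are viewing from -- otherwise you see the bare number (e.g. 1004).
mkdir -p uid-demo
touch uid-demo/file
stat -c '%u:%g' uid-demo/file   # numeric owner:group of the new file
```

Inside the container, a file created by sasdemo would show uid 1004; passing DEMO_USER_UID=$(id -u) at docker run time makes the two sides agree, so no sudo is needed on the host.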
I am not sure why that would have happened. What is the value of CASENV_ADMIN_USER in the Azure instance? Looking at the ide-jupyter-python3/post_deploy.sh script, it looks like the default user for Jupyter will be cas if RUN_USER and CASENV_ADMIN_USER are not set. I will take a look and see if that is happening in the single-container entrypoint.
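One way to check this inside the running container (via docker exec and a shell): the `${VAR+set}` expansion distinguishes an unset variable from one that is set but empty, which matters for the defaulting logic in post_deploy.sh.

```shell
# ${VAR+set} expands to "set" only when VAR is defined (even if empty),
# so this tells unset apart from set-but-blank:
if [ "${CASENV_ADMIN_USER+set}" = "set" ]; then
  echo "CASENV_ADMIN_USER='${CASENV_ADMIN_USER}'"
else
  echo "CASENV_ADMIN_USER is unset"
fi
```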
Took a quick look and we are not unsetting either of those variables in the main entrypoint. Can you provide the list of addons you used in the build process?
@g8sman Thanks for the detailed response. I will play with the UID settings to see what works. One nice aspect of having a deployment in Azure is that the UID issue goes away; that is, I see my code in the storage account, and the root-only viewing of code in /home/sasdemo is a VM issue. I am close to showing that, with a few simple steps, the Docker image will allow persistent package installations in R and Python, both in the cloud and in the VM. This will pretty much be possible out of the box for Python thanks to how you set up the /home/sasdemo, cas, and .local folders.

I took a look at post_deploy.sh and at the cas <-> sasdemo swap, and it is indeed a swap. When I exec into the image, $CASENV_ADMIN_USER is empty. Is there a different way you want me to check? I could not find it under /cas-shared-default-http/ either, but I am no SAS Viya admin.

When you ask about addons, do you mean the recipe addons? I only used "auth-demo ide-jupyter-python3", with the usual ./build.sh command:

./build.sh --type single --zip ~/path/to/SAS_Viya_deployment_data.zip --addons "auth-demo"

This behaviour occurs before I even start playing with the Dockerfile (for example, to add R). Thanks
Describe the bug
I am trying to set up data persistence and sharing between the VM hosting my Docker image and the working directories of the Docker image. In the Docker image, I am unable to write to /data, /cas, and /sasinside. I am able to write to cas/permstore and cas/data, but not cas/cache. I am also unable to write from SAS Studio when running a very simple .sas script.
Where should we save SAS user programs when using SAS Viya docker images, and how do we make that available in the VM running the docker image? Maybe the UID and GID need to be changed in the Dockerfile to achieve this?
In addition, I want to be able to save and view my Jupyter Notebooks, which are saved in the Docker image under /home/sasdemo/jupyter. Although I can successfully set up the share between the Docker image's /home and the VM (using --volume ${PWD}/home:/home), to view the files in the VM I have to use sudo; otherwise I get permission denied.
To Reproduce
Use this launch script:
SAS_CONTAINER_NAME=sas-viya-single-programming-only
IMAGE=$SAS_CONTAINER_NAME:@REPLACE_ME_WITH_TAG@
SAS_HTTP_PORT=8080
SAS_HTTPS_PORT=8443
mkdir -p ${PWD}/sasinside
mkdir -p ${PWD}/sasdemo
mkdir -p ${PWD}/cas/data
mkdir -p ${PWD}/cas/cache
mkdir -p ${PWD}/cas/permstore
mkdir -p ${PWD}/home
run_args="
--name=$SAS_CONTAINER_NAME
--rm
--hostname $SAS_CONTAINER_NAME
--env RUN_MODE=developer
--env CASENV_ADMIN_USER=sasdemo
--env CASENV_CAS_VIRTUAL_HOST=$(hostname -f)
--env CASENV_CAS_VIRTUAL_PORT=${SAS_HTTPS_PORT}
--env CASENV_CASDATADIR=/cas/data
--env CASENV_CASPERMSTORE=/cas/permstore
--publish-all
--publish 5570:5570
--publish ${SAS_HTTP_PORT}:80
--publish ${SAS_HTTPS_PORT}:443
--volume ${PWD}/sasinside:/sasinside
--volume ${PWD}/sasdemo:/data
--volume ${PWD}/home:/home
--volume ${PWD}/cas/data:/cas/data
--volume ${PWD}/cas/cache:/cas/cache
--volume ${PWD}/cas/permstore:/cas/permstore"
docker run --detach ${run_args} $IMAGE "$@"
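Since Docker on Linux creates missing bind-mount source directories as root, a quick host-side sanity check before launch (using the same directory list as the mkdir calls above) can prevent a permissions surprise:

```shell
# Verify every bind-mount source exists before docker run; a source
# that Docker has to create itself ends up owned by root.
for d in sasinside sasdemo cas/data cas/cache cas/permstore home; do
  [ -d "$d" ] || echo "missing: $d"
done
```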
Expected behavior
I should be able to ls ~/sas-container-recipes/run/home/sasdemo and see
authinfo.txt - file
jupyter - folder
sasuser.viya - folder
I cannot do this as my user; I have to su, log in, and only then can I see the data and cd into sasdemo/jupyter. Otherwise I get permission denied.
Environment (please complete the applicable information):
[azablocki@localhost sas-container-recipes-master]$ docker version
Client:
Version: 18.09.6
API version: 1.39
Go version: go1.10.8
Git commit: 481bc77156
Built: Sat May 4 02:34:58 2019
OS/Arch: linux/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 18.09.6
API version: 1.39 (minimum version 1.12)
Go version: go1.10.8
Git commit: 481bc77
Built: Sat May 4 02:02:43 2019
OS/Arch: linux/amd64
Experimental: false
Additional context
If I do ls -ld in the run folder I get
drwxrwxr-x. 6 azablocki azablocki 161
in home I get
drwxrwxr-x. 3 azablocki azablocki 21
and if I do ls -ld sasdemo/ I get
drwx------. 9 1004 1001 238
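The drwx------ mode above is the direct cause of the permission denied: only the owning uid (1004, the in-container sasdemo) may list or enter the directory. A local reproduction of the effect (hypothetical directory name):

```shell
# Mode 700 grants rwx to the owner only; every other non-root uid is
# denied both ls (needs the read bit) and cd (needs the execute bit).
mkdir -p perm-demo
chmod 700 perm-demo
stat -c '%a %u:%g' perm-demo   # octal mode, then numeric owner:group
```

Because uid 1004 has no entry in the host's /etc/passwd, ls shows the raw numbers 1004 and 1001 instead of names.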
Thank you