Thank you for bringing this issue to our attention. Indeed, something seems to be wrong: the environment variable defined is SPARK_WORKER_MEMORY, but the one used in the entrypoint is SPARK_EXECUTOR_MEMORY, see:
```
$ ag 'SPARK_.*_MEMORY'
bitnami/spark/3.5/debian-12/rootfs/opt/bitnami/scripts/spark/entrypoint.sh
66:    "-Xms${SPARK_EXECUTOR_MEMORY}"
67:    "-Xmx${SPARK_EXECUTOR_MEMORY}"

docker-compose.yml
21:    - SPARK_WORKER_MEMORY=1G
```
Since you discovered the issue, if you're interested in contributing a solution, we welcome you to create a pull request. The Bitnami team is excited to review your submission and offer feedback. You can find the contributing guidelines here.
Your contribution will greatly benefit the community. Feel free to reach out if you have any questions or need assistance.
So, after some extra searching on the internet, I discovered that you can request extra executor memory from the client by changing the spark.executor.memory parameter. In Scala, that would look like so:
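A minimal sketch of that client-side override, assuming a standard SparkSession entry point (the app name and the 2g value are placeholders, not from the thread):

```scala
import org.apache.spark.sql.SparkSession

// Request 2 GB per executor from the client side, instead of relying on the
// container's SPARK_EXECUTOR_MEMORY. "2g" is an illustrative value.
val spark = SparkSession.builder()
  .appName("executor-memory-example") // hypothetical app name
  .config("spark.executor.memory", "2g")
  .getOrCreate()
```

The same setting can also be passed at submit time with `spark-submit --conf spark.executor.memory=2g`.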
Name and Version
bitnami/spark:3.5.2
What architecture are you using?
amd64
What steps will reproduce the bug?
Simply start with the defaults and set
SPARK_WORKER_MEMORY
to some value other than the default of 1G.
What is the expected behavior?
It is expected that the executor memory will also be updated to the value set in
SPARK_WORKER_MEMORY
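For reference, the reproduction can be sketched with a compose file like the following (the service name, SPARK_MODE value, and 2G figure are illustrative, based on the Bitnami Spark image conventions):

```yaml
services:
  spark-worker:
    image: bitnami/spark:3.5.2
    environment:
      - SPARK_MODE=worker
      # Changed from the default 1G; the executors still report 1G.
      - SPARK_WORKER_MEMORY=2G
```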
What do you see instead?
In the console, it can be seen that the executor memory is still the default of 1G.
Additional information
Passing
SPARK_EXECUTOR_MEMORY
in the docker compose doesn't do anything.