issues starting up CL #150
I have set up a cluster with 2 ARM devices with iSCSI storage and 1 AMD device as quorum.
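For context, a minimal sketch of how such a cluster is formed with pvecm (the cluster name and IP below are examples, not taken from this setup):

```
# on the first ARM node: create the cluster
pvecm create pi-cluster

# on the second ARM node and on the AMD quorum device: join it
pvecm add 192.168.1.10   # IP of the first node (example address)

# verify that the cluster is quorate
pvecm status
```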
Let me know your exact setup.
Are you running Proxmox 7 or Proxmox 8?
Are you using plain Debian, or RaspOS with a chroot kernel?
I also set up a similar config for somebody else, and it's running successfully on Proxmox 7.4.3 and 8.0.3 on the ARM architecture.
Feel free to contact me to make a fresh, clean, and running setup.
Hi, I'm running the Proxmox 7 version (7.2 to be exact). The OS running on the Pis is Debian Bullseye from the Raspberry Pi Imager application. I have not attached a storage device yet, so I'm still running on the 64 GB microSD cards.
Building a cluster without shared storage for the nodes isn't possible.
I can't use the local storage that is on one of my Pis? I have a Synology NAS where I tried to make shared storage, but that wasn't going as planned.
- Create an iSCSI LUN of the needed size for the cluster on the Synology NAS, provided there is at least a 1 Gbit network connection on your LAN.
- Use VLANs for the cluster communication if possible in your network (only a recommendation).
- Install open-iscsi on both nodes (if not installed automatically).
- Run iSCSI discovery on the nodes (keep in mind that you need a minimum of 3 nodes for quorum).
- Change the iSCSI startup setting from manual to automatic on all nodes.
- Use only static IP addresses (no SLAAC) for the iSCSI connection.
- Prefer static IPv4.
- To see the iSCSI LUNs on the Proxmox nodes, you can use the iSCSI option in the web interface, but I prefer CLI only on the nodes; a sketch of those commands follows this list.
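A minimal sketch of those CLI steps on one node, assuming the Synology answers on 192.168.1.50 and exports the example IQN below (both are placeholders; substitute your own values):

```
# install the initiator on the Proxmox node
apt install open-iscsi

# discover the targets offered by the NAS (IP is an example)
iscsiadm -m discovery -t sendtargets -p 192.168.1.50

# log in to the discovered target (IQN is a placeholder)
iscsiadm -m node -T iqn.2000-01.com.synology:nas.Target-1 -p 192.168.1.50 --login

# switch the node's startup setting from manual to automatic
iscsiadm -m node -T iqn.2000-01.com.synology:nas.Target-1 -p 192.168.1.50 \
  --op update -n node.startup -v automatic
```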
Good luck 👍🍀
Aah, thank you. I managed to make shared storage through iSCSI, but I still get this error when I want to boot up my first container: `lxc-console: 100: ../src/lxc/tools/lxc_console.c: main: 129 100 is not running`
Do you get that rootfs image from here?
https://us.lxd.images.canonical.com/images/debian/bookworm/arm64/default/
You need the ARM LXC default container templates.
The templates you see in the web interface are amd64 templates and not usable with ARM devices.
If yes, keep in mind not to set up DHCP or SLAAC before running `apt update` and installing the ifupdown package with `apt install ifupdown` inside the LXC container, once you have an IP from the `dhclient` command. So please start the container with a static/no-IP network setup first; see the sketch after the screenshots below.
![image](https://github.com/pimox/pimox7/assets/72735184/bb0a4920-b85b-4554-bad0-6214aff44a8e)
![image](https://github.com/pimox/pimox7/assets/72735184/00e43a4a-c7f9-478a-8831-9d96c2d92674)
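A sketch of that flow, assuming an arm64 rootfs.tar.xz downloaded from the link above into the node's template cache (the filename, VMID, and storage names are examples, not a confirmed recipe):

```
# create the container from the arm64 template, no network config yet
pct create 100 local:vztmpl/debian-bookworm-arm64-rootfs.tar.xz \
  --arch arm64 --hostname ct-test --rootfs local:8

pct start 100
pct enter 100

# inside the container: grab a temporary lease, then install ifupdown
dhclient eth0
apt update
apt install ifupdown
```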
Recommendations for the Proxmox node:
Please make a fresh installation from that repo:
https://github.com/jiangcuo/Proxmox-Port/blob/main/help/repo.md
And flash a plain Debian image, not RaspOS:
https://raspi.debian.net/tested-images/
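A sketch of the repository setup on the freshly flashed Debian; the mirror line and key URL below are assumptions written from memory of that repo.md, so check the linked page for the authoritative, current lines:

```
# add the Proxmox-Port repository (verify against the linked repo.md first)
echo "deb https://mirrors.apqa.cn/proxmox/debian/pve bookworm port" \
  > /etc/apt/sources.list.d/pveport.list
curl -fsSL https://mirrors.apqa.cn/proxmox/debian/pveport.gpg \
  -o /etc/apt/trusted.gpg.d/pveport.gpg

apt update
apt install proxmox-ve
```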
After the first boot, I converted the image to btrfs from a running arm64 Linux (with btrfs-progs installed first) to use the snapshot features, like btrfs on amd64.
That way the Raspberry runs very similarly to the amd64-supported distribution; a conversion sketch follows below.
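A minimal sketch of such an in-place conversion, assuming the SD card's root partition shows up as /dev/sdb2 on the helper machine (the device name is an example; the partition must be unmounted and clean):

```
apt install btrfs-progs

# the ext4 filesystem must pass a check before converting
fsck.ext4 -f /dev/sdb2

# convert in place; the old ext4 metadata is kept for rollback
btrfs-convert /dev/sdb2

# afterwards, point /etc/fstab and the kernel command line at the new
# filesystem type and UUID before booting the Pi from the card
```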
You can find me on Discord if you like.
Do you mean make a whole new Proxmox cluster with the new clients?
Correct. If you'd like to contact me on Discord, ask for Beatrice in the voice channels; the voice language is German.
Thank you, I added you on Discord. Now I'm trying to use Bookworm, but every time I make a new SD card it doesn't remember my user credentials. Is there a standard user account I know nothing about? XD
Hi,
I'm fairly new to Proxmox, but I already managed to build a cluster with 4 Pis.
Now I'm trying to build a CT (container), but it keeps crashing on boot.
In the console it gives me this error code:
`lxc-console: 101: ../src/lxc/tools/lxc_console.c: main: 129 101 is not running`