
Not able to create Storage domain using the domain name #1655

Closed
kushagra-e02562 opened this issue Jan 13, 2023 · 1 comment

@kushagra-e02562

Hi Team,

I am new to oVirt and installed a self-hosted engine by following this guide:
Install oVirt self-hosted engine

I have installed version 4.4.

I also have a Ceph cluster comprising three nodes.
I want to create a storage domain with a volume path that includes all three nodes, for example:
[abcd:abcd:abcd::23],[abcd:abcd:abcd::22],[abcd:abcd:abcd::21]:6789:/volumes/kush/threetest/cb8a69c4-9c9a-4a2d-9a4a-7b9a5ff14c74
However, I am not able to do so, and we have raised a ticket for it:
oVirt/vdsm#368

As a workaround, we set up a new DNS server to resolve the storage nodes.
On this DNS server, we configured a single hostname to resolve to the different IPs of the storage servers.
When we perform an nslookup for the hostname, it resolves to all three IPs:
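For reference, that DNS setup can be sketched as a zone fragment (hypothetical, assuming a BIND-style server; the names and addresses are the ones used elsewhere in this report):

```zone
; Hypothetical BIND-style zone fragment for storage.com:
; one hostname with three AAAA records, one per Ceph node.
storagenode    IN  AAAA  abcd:abcd:abcd::21
storagenode    IN  AAAA  abcd:abcd:abcd::22
storagenode    IN  AAAA  abcd:abcd:abcd::23
```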

[root@ovirt-host ~]# nslookup storagenode.storage.com
Server:         10.0.1.31
Address:        10.0.1.31#53

Name:   storagenode.storage.com
Address: abcd:abcd:abcd::23
Name:   storagenode.storage.com
Address: abcd:abcd:abcd::22
Name:   storagenode.storage.com
Address: abcd:abcd:abcd::21

[root@ovirt-host ~]#

When we perform a ping, the hostname resolves to any one of the IPs:

[root@ovirt-host ~]# ping -c 3 storagenode.storage.com
PING storagenode.storage.com(abcd:abcd:abcd::21 (abcd:abcd:abcd::21)) 56 data bytes
64 bytes from abcd:abcd:abcd::21 (abcd:abcd:abcd::21): icmp_seq=1 ttl=64 time=0.553 ms
64 bytes from abcd:abcd:abcd::21 (abcd:abcd:abcd::21): icmp_seq=2 ttl=64 time=0.481 ms
64 bytes from abcd:abcd:abcd::21 (abcd:abcd:abcd::21): icmp_seq=3 ttl=64 time=0.612 ms

--- storagenode.storage.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2015ms
rtt min/avg/max/mdev = 0.481/0.548/0.612/0.060 ms
[root@ovirt-host ~]#
[root@ovirt-host ~]# ping -c 3 storagenode.storage.com
PING storagenode.storage.com(storagenode2 (abcd:abcd:abcd::22)) 56 data bytes
64 bytes from storagenode2 (abcd:abcd:abcd::22): icmp_seq=1 ttl=64 time=0.431 ms
64 bytes from storagenode2 (abcd:abcd:abcd::22): icmp_seq=2 ttl=64 time=0.534 ms
64 bytes from storagenode2 (abcd:abcd:abcd::22): icmp_seq=3 ttl=64 time=0.579 ms

--- storagenode.storage.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2075ms
rtt min/avg/max/mdev = 0.431/0.514/0.579/0.067 ms

[root@ovirt-host ~]# ping -c 3 storagenode.storage.com
PING storagenode.storage.com(storagenode3 (abcd:abcd:abcd::23)) 56 data bytes
64 bytes from storagenode3 (abcd:abcd:abcd::23): icmp_seq=1 ttl=64 time=0.431 ms
64 bytes from storagenode3 (abcd:abcd:abcd::23): icmp_seq=2 ttl=64 time=0.534 ms
64 bytes from storagenode3 (abcd:abcd:abcd::23): icmp_seq=3 ttl=64 time=0.579 ms

--- storagenode.storage.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2075ms
rtt min/avg/max/mdev = 0.431/0.514/0.579/0.067 ms

We tried manually mounting the CephFS volume on the oVirt host using the hostname:

sudo mount -t ceph storagenode:6789:/volumes/kush/threetest/cb8a69c4-9c9a-4a2d-9a4a-7b9a5ff14c74 /rhev/data-center/mnt/storage_node/ -o name=admin,secret=AQBqXLZj1wyUIhAAsf0/pPmElWQv6HVYq+OOBA==

The mount succeeded, and the volume is mounted with all three Ceph nodes listed in the source:

[root@ovirt-host ~]# df -kh
Filesystem                                                                                                                                  Size  Used Avail Use% Mounted on
devtmpfs                                                                                                                                    3.8G     0  3.8G   0% /dev
tmpfs                                                                                                                                       3.8G  4.0K  3.8G   1% /dev/shm
tmpfs                                                                                                                                       3.8G  363M  3.4G  10% /run
tmpfs                                                                                                                                       3.8G     0  3.8G   0% /sys/fs/cgroup
/dev/mapper/cs-root                                                                                                                          70G  8.8G   62G  13% /
/dev/sda2                                                                                                                                  1014M  259M  756M  26% /boot
/dev/sda1                                                                                                                                   599M  7.3M  592M   2% /boot/efi
/dev/mapper/cs-home                                                                                                                         852G   36G  817G   5% /home
tmpfs                                                                                                                                       767M   48K  767M   1% /run/user/1000
tmpfs                                                                                                                                       767M     0  767M   0% /run/user/0
10.0.1.169:/home/storage_nfs                                                                                                                852G   36G  817G   5% /rhev/data-center/mnt/10.0.1.169:_home_storage__nfs
[abcd:abcd:abcd::21]:6789,[abcd:abcd:abcd::23]:6789,[abcd:abcd:abcd::22]:6789:/volumes/kush/threetest/cb8a69c4-9c9a-4a2d-9a4a-7b9a5ff14c74   19G     0   19G   0% /rhev/data-center/mnt/storage_node
[root@ovirt-host ~]#

However, when we try to create the storage domain through the oVirt GUI, it fails with the following error:

Error while executing action Add Storage Connections. General Exception.

In /var/log/vdsm/vdsm.log on the oVirt host, we see the following error:

2023-01-12 18:12:44,811+0530 INFO  (jsonrpc/7) [storage.Mount] mounting storagenode.storage.com:6789:/volumes/kush/threetest/cb8a69c4-9c9a-4a2d-9a4a-7b9a5ff14c74 at /rhev/data-center/mnt/storagenode.storage.com:6789:_volumes_kush_threetest_cb8a69c4-9c9a-4a2d-9a4a-7b9a5ff14c74 (mount:207)
2023-01-12 18:12:45,025+0530 ERROR (jsonrpc/7) [storage.HSM] Could not connect to storageServer (hsm:2374)
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/storage/hsm.py", line 2371, in connectStorageServer
    conObj.connect()
  File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line 184, in connect
    self.getMountObj().getRecord().fs_file)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/mount.py", line 256, in getRecord
    (self.fs_spec, self.fs_file))
FileNotFoundError: [Errno 2] Mount of `storagenode.storage.com:6789:/volumes/kush/threetest/cb8a69c4-9c9a-4a2d-9a4a-7b9a5ff14c74` at `/rhev/data-center/mnt/storagenode.storage.com:6789:_volumes_kush_threetest_cb8a69c4-9c9a-4a2d-9a4a-7b9a5ff14c74` does not exist
2023-01-12 18:12:45,025+0530 INFO  (jsonrpc/7) [storage.StorageDomainCache] Invalidating storage domain cache (sdc:74)
2023-01-12 18:12:45,025+0530 INFO  (jsonrpc/7) [vdsm.api] FINISH connectStorageServer return={'statuslist': [{'id': '00000000-0000-0000-0000-000000000000', 'status': 100}]} from=::ffff:10.0.1.167,40996, flow_id=6d10d509-f353-463f-b8bb-7be59cac00e3, task_id=f72c60b4-afab-432b-a7d3-2727512e2d9e (api:54)
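One possible reading of the traceback (an assumption on my part, not something the logs confirm): VDSM mounts using the hostname-based spec, but the Ceph client resolves the hostname into the monitor address list, so the fs_spec recorded in /proc/mounts no longer matches the spec that getRecord() searches for, and the lookup raises FileNotFoundError. A minimal sketch of that mismatch, using placeholder strings taken from the outputs above:

```shell
#!/bin/sh
# Sketch of the suspected spec mismatch (placeholder values, no real mounts):
# the spec VDSM asks to mount vs. the fs_spec the kernel records after the
# Ceph client expands the hostname into the monitor address list.
requested='storagenode.storage.com:6789:/volumes/kush/threetest/cb8a69c4-9c9a-4a2d-9a4a-7b9a5ff14c74'
recorded='[abcd:abcd:abcd::22]:6789,[abcd:abcd:abcd::23]:6789,[abcd:abcd:abcd::21]:6789:/volumes/kush/threetest/cb8a69c4-9c9a-4a2d-9a4a-7b9a5ff14c74'

if [ "$requested" = "$recorded" ]; then
    echo "record found"
else
    # This branch mirrors the FileNotFoundError raised in mount.py getRecord()
    echo "no /proc/mounts record matches the requested spec"
fi
```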

Since the nodes are reachable from the oVirt hosts, as shown above, we are trying to find out why we still get the "Could not connect to storageServer" error.

Also, on the oVirt host, the following mount point is created:

[root@ovirt-host ~]# df -kh
Filesystem                                                                                                                                  Size  Used Avail Use% Mounted on
...
[abcd:abcd:abcd::22]:6789,[abcd:abcd:abcd::23]:6789,[abcd:abcd:abcd::21]:6789:/volumes/kush/threetest/cb8a69c4-9c9a-4a2d-9a4a-7b9a5ff14c74   19G     0   19G   0% /rhev/data-center/mnt/storagenode.storage.com:6789:_volumes_kush_threetest_cb8a69c4-9c9a-4a2d-9a4a-7b9a5ff14c74

Could anyone please help me out? Is there something I am missing?
My aim is to create a storage domain using the domain name instead of the IPs.

Thanks and Regards
Kushagra Gupta

@sgratch
Member

sgratch commented Feb 6, 2023

Hi @kushagra-e02562

This seems like a support question related to engine storage. Please use the users@ovirt.org mailing list for support rather than GitHub issues (and in particular, this is not related to the ovirt-web-ui project, which tracks issues for the VM Portal only).

Thanks

@sgratch sgratch closed this as completed Feb 6, 2023