Additional params in methods - create_vd #2
Comments
Hi @ulmitov. I absolutely won't forget about these issues. Let me check if I can reproduce them and generate the corresponding tests & fixes.
Hi @ulmitov. Please check the develop branch and confirm this issue is fixed.
Posted a comment in the commit: add vd has many additional params.
First of all, notice that I've added this to the code:

```python
if raid == '1' or raid == '10':
    if PDperArray is None:
        PDperArray = 2
```

In other scenarios PDperArray is not required, AFAIK. I'll consider adding the other params you have suggested.
For RAID 1 I've rechecked: PDperArray isn't needed.
As far as I can tell, it is not required for any RAID level that does not have an underlying RAID. I think this code should work fine:

```python
if int(raid) >= 10 and PDperArray is None:
    # Try to count the number of drives in the array
    # The format of the drives argument is e:s|e:s-x|e:s-x,y;e:s-x,y,z
    # The number of drives is the number of commas plus the dashes intervals
    numDrives = 0
    drives2 = drives.split(':')
    drives2 = drives2[1]
    numDrives += drives2.count(',') + 1
    for interval in drives2.split(','):
        if '-' in interval:
            left = int(interval.split('-')[0])
            right = int(interval.split('-')[1])
            numDrives += right - left
    PDperArray = numDrives // 2

if raid == '00' and PDperArray is None:
    PDperArray = 1
```

I don't know what RAID "00" stands for; at least it is not documented in LSI's storcli doc: https://docs.broadcom.com/doc/12352476
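For reference, here is a standalone trace of that counting heuristic on a single-enclosure expression; the helper name and the sample value are made up for illustration, not taken from the repo:

```python
def _count_from_expression(drives: str) -> int:
    """Re-implementation of the heuristic above, for tracing only."""
    body = drives.split(':')[1]        # '0:0-2,4' -> '0-2,4'
    num = body.count(',') + 1          # one drive per comma-separated item
    for interval in body.split(','):
        if '-' in interval:
            left, right = (int(x) for x in interval.split('-'))
            num += right - left        # extra drives hidden in the range
    return num


# 4 drives in total, so the PDperArray default above would be 4 // 2 = 2
assert _count_from_expression('0:0-2,4') == 4
```

Note that splitting on ':' and keeping only the second field assumes a single enclosure; multi-enclosure expressions such as 0:0-1,1:0-1 are miscounted, which appears to be the counting issue addressed further down in the thread.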
For now I see it's required only for RAID 00, 10, and 60.
Do you think that fix would be enough, or do you propose something else?
It looks alright, but should PDperArray really be half the number of drives?
Unless I've been misunderstanding the meaning of PDperArray, it means how many disks should be on the "mirror" side of the array. Check this document for reference: https://www.broadcom.com/support/knowledgebase/1211161503234/how-to-create-a-raid-10-50-or-60

In the examples, PDperArray is half of the number of disks, but in the R50/R60 cases it is also the minimal number of disks for an R5/R6. Just in case, I've tested it out (notice each drive has 4 TB / 3.637 TB):

To be honest, I don't know exactly what's going on, but halving the number of disks sounds OK to me, at least as a "default valid value". Even if we want to discuss whether half is the best number (I don't know, probably not), it is at least the simplest value that should always work for any combination of #disks and RAID type. You can always override it as a param.
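As a concrete reading of that halving rule, a small sketch (the drive counts below are illustrative, not taken from the KB article):

```python
# Default discussed above: PDperArray = number of drives // 2,
# i.e. split the drives into two spans of equal size.
examples = {('raid10', 8): 4, ('raid50', 6): 3, ('raid60', 8): 4}
for (raid_level, num_drives), expected in examples.items():
    assert num_drives // 2 == expected
# For the R50/R60 cases the comment above notes that the halved value also
# matches the minimal number of disks for the underlying R5/R6 span.
```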
I think you're right. At this point I don't have additional info here; the fix is good for me.
One note: some vendors that support storcli for their products have some exotic RAID levels with characters (raid5t2). Oracle also has raid50e, but I don't know if they support storcli.
But those are corner cases, not critical at this point.
I've added a simple check.
Hit a minor issue in this code:
Added a couple of funcs to common that should fix this:

```python
def expand_drive_ids(drives: str) -> str:
    """Expand drive ids to range if needed

    Args:
        drives (str): storcli drives expression (e:s|e:s-x|e:s-x,y;e:s-x,y,z)

    Returns:
        (str): expanded drives expression (without dashes)
    """
    drive_list = drives.split(',')
    output = ""
    for i, drive in enumerate(drive_list):
        drive = drive.strip()
        encl, slot = drive.split(':')
        new_output = drive
        encl = encl.strip()
        slot = slot.strip()
        if '-' in slot:
            begin, end = slot.split('-')
            begin = begin.strip()
            end = end.strip()
            new_output = ','.join(['{0}:{1}'.format(encl, i)
                                   for i in range(int(begin), int(end) + 1)])
        if i > 0:
            output += ',' + new_output
        else:
            output += new_output
    return output


def count_drives(drives: str) -> int:
    """Count number of drives in drives expression

    Args:
        drives (str): storcli drives expression (e:s|e:s-x|e:s-x,y;e:s-x,y,z)

    Returns:
        (int): number of drives
    """
    expanded_drives = expand_drive_ids(drives)
    return len(expanded_drives.split(','))
```

Also some tests to ensure it works:

```python
def test_expand_drive_ids(self):
    assert common.expand_drive_ids(
        '0:0') == '0:0'
    assert common.expand_drive_ids(
        '0:0-1') == '0:0,0:1'
    assert common.expand_drive_ids(
        '0:0-2') == '0:0,0:1,0:2'
    assert common.expand_drive_ids(
        '0:0-1,0:3-4') == '0:0,0:1,0:3,0:4'
    assert common.expand_drive_ids(
        '0:0-1,0:3-4,1:0-1') == '0:0,0:1,0:3,0:4,1:0,1:1'
    assert common.expand_drive_ids(
        '0:0-1,0:3-4,1:0-1,5:3-4') == '0:0,0:1,0:3,0:4,1:0,1:1,5:3,5:4'
    assert common.expand_drive_ids(
        '0:0-1 ,0:3-4, 1:0-1 , 5 : 3 - 4 ') == '0:0,0:1,0:3,0:4,1:0,1:1,5:3,5:4'
    assert common.expand_drive_ids(
        '177:18,177:19,177:20,177:21,177:22,177:23') == '177:18,177:19,177:20,177:21,177:22,177:23'

def test_count_drives(self):
    assert common.count_drives(
        '0:0') == 1
    assert common.count_drives(
        '0:0-1') == 2
    assert common.count_drives(
        '0:0-2') == 3
    assert common.count_drives(
        '0:0-1,0:3-4') == 4
    assert common.count_drives(
        '0:0-1,0:3-4,1:0-1') == 6
    assert common.count_drives(
        '0:0-1,0:3-4,1:0-1,5:3-4') == 8
    assert common.count_drives(
        '0:0-1 ,0:3-4, 1:0-1 , 5 : 3 - 4 ') == 8
    assert common.count_drives(
        '177:18,177:19,177:20,177:21,177:22,177:23') == 6
```
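For illustration, here is a sketch of how count_drives could feed the PDperArray default inside create_vd; the wrapper function, its name, and the import path are assumptions for this example, not code from the repo:

```python
from pystorcli import common  # assumed import path for the helpers above


def _default_pd_per_array(raid: str, drives: str, pd_per_array=None):
    """Pick a PDperArray default when the caller did not pass one.

    Mirrors the rule discussed in this thread: spanned numeric RAID levels
    (10/50/60) default to half of the counted drives, RAID 00 defaults to 1,
    and non-numeric vendor levels (e.g. raid5t2) are left untouched.
    """
    if pd_per_array is not None:
        return pd_per_array
    if raid.isdigit() and int(raid) >= 10:
        return common.count_drives(drives) // 2
    if raid == '00':
        return 1
    return None


# e.g. a RAID 10 over 8 drives would default to 4 physical drives per span
assert _default_pd_per_array('10', '0:0-7') == 4
```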
For now it's on develop; tell me if you find any other counting issue. Thanks!
OK, I will check it in a couple of weeks.
You can add a check: if it is divisible by 3, then that's the PD.
Change pushed to develop...
I've pushed 0.6.1.dev6 to PyPI: https://pypi.org/project/PyStorCLI2/0.6.1.dev6/ It's a develop release, so you can only download it if you explicitly ask for dev versions. Tell me how it works, and if you don't need anything else I'll push a true release.
@ralequi fixes checked, works great |
A new note on this:
Up to 8 drives:
9 drives:
10 drives:
11 drives:
12 drives:
Let's have a look at this...
Issue confirmed.
I tried to set PDperArray = #disks but it also failed (in the examples above).
Sure, I've tested it too and it also fails. It seems that R0 has no major logical differences from R00; the "line" in R0 is just shared across all disks, whereas in R00 it is shared across PDperArray disks.
Hi,
Regarding controller.create_vd:
For RAID 1 and RAID 10 you must specify PDperArray=2,
otherwise the add vd command will fail with:
"operation not possible for current RAID level, Cannot create configuration with 1 span".
Can you please add a parameter to this method so it can take additional params to append to the add vd command?
The additional params will probably be needed in other methods as well.
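For what it's worth, a minimal sketch of the kind of pass-through being asked for; the extra_args parameter, the argument spelling, and the _run helper are hypothetical, not the project's actual API:

```python
def create_vd(self, name, raid, drives, strip='64', PDperArray=None,
              extra_args=None):
    """Hypothetical signature: forward arbitrary storcli tokens.

    extra_args (list[str] | None): extra tokens appended verbatim to the
    'add vd' command, e.g. ['pdperarray=2', 'wb', 'ra'].
    """
    args = [
        'add', 'vd',
        'type=raid{0}'.format(raid),
        'name={0}'.format(name),
        'drives={0}'.format(drives),
        'strip={0}'.format(strip),
    ]
    if PDperArray is not None:
        args.append('pdperarray={0}'.format(PDperArray))
    if extra_args:
        args.extend(extra_args)
    return self._run(args)  # hypothetical command runner


# Hypothetical call:
# controller.create_vd('vd0', '1', '0:0-1', extra_args=['pdperarray=2'])
```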