The overall compliance is calculated as the weighted average
of the compliance factors over all executed checks;
how this is done is described in Over all Checks below.
Each check's compliance factor is calculated in two separate parts,
which are then combined into one by multiplication.
Single Check Compliance
The first part indicates the overall state of the check-result:
- 100% for Perfect
- 80% for Ok
- 60% for Acceptable
- 0% for everything else
The second part tracks the issues.
It starts at 100% if there are no issues,
and for each issue a percentage is subtracted,
depending on its severity level:
- High: 30% for the first issue, then halving that consecutively (15% for the second, 7.5% for the third, ...)
- Middle: 15% for the first issue, then halving that consecutively (7.5% for the second, 3.75% for the third, ...)
- Low: 7.5% for the first issue, then halving that consecutively (3.75% for the second, 1.875% for the third, ...)
Example:
Check X has final status Acceptable,
with two High issues and one Low issue.
Its compliance factor is 0.6 × (100% − 30% − 15% − 7.5%) = 0.6 × 0.475 = 0.285.
The code for this can be found in procedure CheckRes.calcCompliance()
in src/checker.nim.
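Below is a minimal Nim sketch of this calculation. It only illustrates the rule described above; the procedure name, types, and signature are made up for the example and do not reflect the actual API in src/checker.nim.

```nim
type Severity = enum svLow, svMiddle, svHigh

proc singleCheckCompliance(statusFactor: float, issues: openArray[Severity]): float =
  ## statusFactor is 1.0 (Perfect), 0.8 (Ok), 0.6 (Acceptable) or 0.0 (everything else).
  var penalty = [svLow: 0.075, svMiddle: 0.15, svHigh: 0.3]
  var issueFactor = 1.0
  for sev in issues:
    issueFactor -= penalty[sev]
    penalty[sev] = penalty[sev] / 2.0  # each further issue of the same severity counts half
  result = statusFactor * issueFactor

# The example from above: Acceptable (0.6), two High issues and one Low issue.
echo singleCheckCompliance(0.6, [svHigh, svHigh, svLow])
# 0.6 * (1.0 - 0.3 - 0.15 - 0.075) = 0.6 * 0.475 = 0.285
```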
Over all Checks
For our example, we assume that there are only three checks,
with these names and sub-rating weights:
| Check Name | weight* | openness_ | hardware_ |
|------------|---------|-----------|-----------|
| CheckX     | 1.0     | 0.3       | 0.6       |
| CheckY     | 0.5     | 1.0       | 0.7       |
| CheckZ     | 0.3     | 0.0       | 1.0       |
These are fixed values, defined in the source code;
they do not depend on the project the checks are run on.
To get the final weights used later on,
we multiply each sub-rating factor
(marked with _ in the table above)
by the check's weight, and use the weight itself as the overall compliance weight;
this gives us the following table of weights:
| Check Name | compliance | openness | hardware |
|------------|------------|----------|----------|
| CheckX     | 1.0        | 0.3      | 0.6      |
| CheckY     | 0.5        | 0.5      | 0.35     |
| CheckZ     | 0.3        | 0.0      | 0.3      |
| Sum        | 1.8        | 0.8      | 1.25     |
The Sum row indicates the maximum sum achievable by any project,
so it represents our 100%.
We will use this table from now on.
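For instance, the CheckY row follows directly from the fixed values: compliance weight 0.5, openness 0.5 × 1.0 = 0.5, hardware 0.5 × 0.7 = 0.35. The whole table can be reproduced with the following standalone Nim sketch (hard-coded example values, not the actual checker code):

```nim
# Fixed per-check values from the first table above.
let checks = [
  (name: "CheckX", weight: 1.0, openness: 0.3, hardware: 0.6),
  (name: "CheckY", weight: 0.5, openness: 1.0, hardware: 0.7),
  (name: "CheckZ", weight: 0.3, openness: 0.0, hardware: 1.0)
]

var sumCompliance, sumOpenness, sumHardware = 0.0
for c in checks:
  # The weight itself is the compliance weight;
  # the sub-rating factors are multiplied by it to get the final weights.
  let compliance = c.weight
  let openness = c.weight * c.openness
  let hardware = c.weight * c.hardware
  echo c.name, ": ", compliance, " ", openness, " ", hardware
  sumCompliance += compliance
  sumOpenness += openness
  sumHardware += hardware

echo "Sum: ", sumCompliance, " ", sumOpenness, " ", sumHardware  # 1.8 0.8 1.25
```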
Now we run the checks on a specific ProjectA
and get these compliance factors
(arbitrary numbers, chosen only for this example),
calculated as described in Single Check Compliance:
| Check Name | Compliance factor |
|------------|-------------------|
| CheckX     | 0.38              |
| CheckY     | 0.64              |
| CheckZ     | 1.0               |
We multiply these values with the weight table we calculated before,
row by row, and get:
| Check Name | compliance | openness | hardware |
|------------|------------|----------|----------|
| CheckX     | 0.38       | 0.114    | 0.228    |
| CheckY     | 0.32       | 0.32     | 0.224    |
| CheckZ     | 0.3        | 0.0      | 0.3      |
| Sum        | 1.0        | 0.434    | 0.752    |
| Max-Sum    | 1.8        | 0.8      | 1.25     |
| Factor     | 0.5556     | 0.5425   | 0.6016   |
| Percentage | 55.56%     | 54.25%   | 60.16%   |
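The Sum, Factor and Percentage rows follow from the values above (e.g. the hardware sum is 0.228 + 0.224 + 0.3 = 0.752, and 0.752 / 1.25 = 0.6016). The same aggregation, as a standalone Nim sketch with hard-coded example values (not the project's actual code):

```nim
import std/strformat

# Rows of the weight table from above, plus ProjectA's compliance factor per check.
let rows = [
  (name: "CheckX", compliance: 1.0, openness: 0.3, hardware: 0.6,  factor: 0.38),
  (name: "CheckY", compliance: 0.5, openness: 0.5, hardware: 0.35, factor: 0.64),
  (name: "CheckZ", compliance: 0.3, openness: 0.0, hardware: 0.3,  factor: 1.0)
]

var sumC, sumO, sumH = 0.0  # achieved sums (weights scaled by the compliance factors)
var maxC, maxO, maxH = 0.0  # maximum achievable sums (the unscaled weights)
for r in rows:
  sumC += r.factor * r.compliance
  sumO += r.factor * r.openness
  sumH += r.factor * r.hardware
  maxC += r.compliance
  maxO += r.openness
  maxH += r.hardware

let
  factorCompliance = sumC / maxC  # 1.0   / 1.8  = 0.5556 -> 55.56%
  factorOpenness   = sumO / maxO  # 0.434 / 0.8  = 0.5425 -> 54.25%
  factorHardware   = sumH / maxH  # 0.752 / 1.25 = 0.6016 -> 60.16%
echo &"compliance: {factorCompliance:.4f}  openness: {factorOpenness:.4f}  hardware: {factorHardware:.4f}"
```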