
Commit 659ffad

Fix spelling typos
1 parent 89d3f95 commit 659ffad

110 files changed: +142 / -142 lines

apps/createsamples/utility.cpp

Lines changed: 2 additions & 2 deletions
@@ -1078,8 +1078,8 @@ void cvCreateTrainingSamples( const char* filename,
         icvPlaceDistortedSample( sample, inverse, maxintensitydev,
                                  maxxangle, maxyangle, maxzangle,
                                  0 /* nonzero means placing image without cut offs */,
-                                 0.0 /* nozero adds random shifting */,
-                                 0.0 /* nozero adds random scaling */,
+                                 0.0 /* nonzero adds random shifting */,
+                                 0.0 /* nonzero adds random scaling */,
                                  &data );

         if( showsamples )

apps/traincascade/HOGfeatures.h

Lines changed: 2 additions & 2 deletions
@@ -45,7 +45,7 @@ class CvHOGEvaluator : public CvFeatureEvaluator
     };
     std::vector<Feature> features;

-    cv::Mat normSum; //for nomalization calculation (L1 or L2)
+    cv::Mat normSum; //for normalization calculation (L1 or L2)
     std::vector<cv::Mat> hist;
 };

@@ -70,7 +70,7 @@ inline float CvHOGEvaluator::Feature::calc( const std::vector<cv::Mat>& _hists,

     const float *pnormSum = _normSum.ptr<float>((int)y);
     normFactor = (float)(pnormSum[fastRect[0].p0] - pnormSum[fastRect[1].p1] - pnormSum[fastRect[2].p2] + pnormSum[fastRect[3].p3]);
-    res = (res > 0.001f) ? ( res / (normFactor + 0.001f) ) : 0.f; //for cutting negative values, which apper due to floating precision
+    res = (res > 0.001f) ? ( res / (normFactor + 0.001f) ) : 0.f; //for cutting negative values, which appear due to floating precision

     return res;
 }
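The normFactor line in this hunk is the standard four-corner lookup on an integral image (summed-area table): the sum of any rectangle costs four reads. A minimal numpy sketch of the same idea, with hypothetical names, not the OpenCV implementation:

```python
import numpy as np

def rect_sum(integral, x, y, w, h):
    """Sum of the w*h rectangle at (x, y) via the four-corner lookup
    on an integral image, as normFactor is computed above."""
    return (integral[y, x] - integral[y, x + w]
            - integral[y + h, x] + integral[y + h, x + w])

img = np.arange(16, dtype=np.float64).reshape(4, 4)
# Integral image padded with a zero row/column, as cv::integral produces.
integral = np.zeros((5, 5))
integral[1:, 1:] = img.cumsum(0).cumsum(1)

assert rect_sum(integral, 1, 1, 2, 2) == img[1:3, 1:3].sum()
```

The four-read cost is why HOG-style evaluators keep a normSum table per image instead of re-summing pixel blocks.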

doc/js_tutorials/js_imgproc/js_contours/js_contours_hierarchy/js_contours_hierarchy.markdown

Lines changed: 1 addition & 1 deletion
@@ -145,7 +145,7 @@ no child, parent is contour-3. So array is [-1,-1,-1,3].
 And this is the final guy, Mr.Perfect. It retrieves all the contours and creates a full family
 hierarchy list. **It even tells, who is the grandpa, father, son, grandson and even beyond... :)**.

-For examle, I took above image, rewrite the code for cv.RETR_TREE, reorder the contours as per the
+For example, I took above image, rewrite the code for cv.RETR_TREE, reorder the contours as per the
 result given by OpenCV and analyze it. Again, red letters give the contour number and green letters
 give the hierarchy order.

doc/py_tutorials/py_feature2d/py_feature_homography/py_feature_homography.markdown

Lines changed: 2 additions & 2 deletions
@@ -17,7 +17,7 @@ In short, we found locations of some parts of an object in another cluttered ima
 is sufficient to find the object exactly on the trainImage.

 For that, we can use a function from calib3d module, ie **cv.findHomography()**. If we pass the set
-of points from both the images, it will find the perpective transformation of that object. Then we
+of points from both the images, it will find the perspective transformation of that object. Then we
 can use **cv.perspectiveTransform()** to find the object. It needs atleast four correct points to
 find the transformation.

@@ -68,7 +68,7 @@ Now we set a condition that atleast 10 matches (defined by MIN_MATCH_COUNT) are
 find the object. Otherwise simply show a message saying not enough matches are present.

 If enough matches are found, we extract the locations of matched keypoints in both the images. They
-are passed to find the perpective transformation. Once we get this 3x3 transformation matrix, we use
+are passed to find the perspective transformation. Once we get this 3x3 transformation matrix, we use
 it to transform the corners of queryImage to corresponding points in trainImage. Then we draw it.
 @code{.py}
 if len(good)>MIN_MATCH_COUNT:
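The perspective transform this tutorial text describes is a 3x3 matrix applied in homogeneous coordinates. A self-contained numpy sketch of what cv.perspectiveTransform does to a point set (the translation-only matrix is chosen for illustration, not taken from the tutorial):

```python
import numpy as np

def perspective_transform(points, H):
    """Apply a 3x3 homography H to an (N, 2) array of points,
    mirroring the math behind cv.perspectiveTransform."""
    pts_h = np.hstack([points, np.ones((len(points), 1))])  # to homogeneous
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]  # divide by w, back to Cartesian

# A pure translation by (10, 20) as the homography, for illustration.
H = np.array([[1.0, 0.0, 10.0],
              [0.0, 1.0, 20.0],
              [0.0, 0.0, 1.0]])
corners = np.array([[0.0, 0.0], [100.0, 0.0], [100.0, 50.0], [0.0, 50.0]])
print(perspective_transform(corners, H))
```

The final division by the homogeneous coordinate w is what distinguishes a perspective transform from an affine one.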

doc/py_tutorials/py_feature2d/py_shi_tomasi/py_shi_tomasi.markdown

Lines changed: 1 addition & 1 deletion
@@ -28,7 +28,7 @@ If it is a greater than a threshold value, it is considered as a corner. If we p
 ![image](images/shitomasi_space.png)

 From the figure, you can see that only when \f$\lambda_1\f$ and \f$\lambda_2\f$ are above a minimum value,
-\f$\lambda_{min}\f$, it is conidered as a corner(green region).
+\f$\lambda_{min}\f$, it is considered as a corner(green region).

 Code
 ----
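The Shi-Tomasi criterion in the changed line is simply min(λ1, λ2) of the 2x2 structure tensor compared against λ_min. A minimal numpy sketch; the tensor values are made up for illustration:

```python
import numpy as np

def shi_tomasi_score(M):
    """Shi-Tomasi corner score: the smaller eigenvalue of the 2x2
    structure tensor M. A point counts as a corner when the score
    exceeds lambda_min (the green region in the figure)."""
    return np.linalg.eigvalsh(M)[0]  # eigvalsh returns ascending order

# Illustrative tensors: strong gradients in both directions (corner-like)
# versus gradients in only one direction (edge-like).
corner_like = np.array([[10.0, 2.0], [2.0, 8.0]])
edge_like = np.array([[10.0, 0.0], [0.0, 0.1]])

lambda_min = 1.0
print(shi_tomasi_score(corner_like) > lambda_min)  # corner
print(shi_tomasi_score(edge_like) > lambda_min)    # edge, not a corner
```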

doc/py_tutorials/py_imgproc/py_contours/py_contour_features/py_contour_features.markdown

Lines changed: 1 addition & 1 deletion
@@ -144,7 +144,7 @@ cv.rectangle(img,(x,y),(x+w,y+h),(0,255,0),2)
 ### 7.b. Rotated Rectangle

 Here, bounding rectangle is drawn with minimum area, so it considers the rotation also. The function
-used is **cv.minAreaRect()**. It returns a Box2D structure which contains following detals - (
+used is **cv.minAreaRect()**. It returns a Box2D structure which contains following details - (
 center (x,y), (width, height), angle of rotation ). But to draw this rectangle, we need 4 corners of
 the rectangle. It is obtained by the function **cv.boxPoints()**
 @code{.py}
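The Box2D-to-corners step the tutorial text describes can be sketched in plain numpy: rotate the four half-extent offsets and shift by the center. This illustrates the geometry behind cv.boxPoints, though OpenCV's corner ordering may differ:

```python
import numpy as np

def box_points(center, size, angle_deg):
    """Recover the 4 corners of a rotated rectangle from its Box2D
    description (center (x, y), (width, height), angle of rotation)."""
    cx, cy = center
    w, h = size
    a = np.deg2rad(angle_deg)
    R = np.array([[np.cos(a), -np.sin(a)],
                  [np.sin(a),  np.cos(a)]])
    # Corner offsets from the center, before rotation.
    offsets = np.array([[-w/2, -h/2], [w/2, -h/2], [w/2, h/2], [-w/2, h/2]])
    return offsets @ R.T + [cx, cy]

print(box_points((50, 50), (20, 10), 0))
```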

doc/py_tutorials/py_imgproc/py_contours/py_contours_hierarchy/py_contours_hierarchy.markdown

Lines changed: 1 addition & 1 deletion
@@ -185,7 +185,7 @@ array([[[ 3, -1, 1, -1],
 And this is the final guy, Mr.Perfect. It retrieves all the contours and creates a full family
 hierarchy list. **It even tells, who is the grandpa, father, son, grandson and even beyond... :)**.

-For examle, I took above image, rewrite the code for cv.RETR_TREE, reorder the contours as per the
+For example, I took above image, rewrite the code for cv.RETR_TREE, reorder the contours as per the
 result given by OpenCV and analyze it. Again, red letters give the contour number and green letters
 give the hierarchy order.
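The four-element rows shown above ([next, previous, first_child, parent]) form a tree that can be walked with plain indexing. A sketch with a hypothetical hierarchy for three nested contours (not the tutorial's image):

```python
# Each row: [next, previous, first_child, parent], as in the hierarchy
# output of cv.findContours. Hypothetical values for contour 0
# containing contour 1, which contains contour 2.
hierarchy = [
    [-1, -1, 1, -1],  # contour 0: no siblings, first child 1, no parent
    [-1, -1, 2, 0],   # contour 1: first child 2, parent 0
    [-1, -1, -1, 1],  # contour 2: leaf, parent 1
]

def depth(i, h):
    """Nesting depth of contour i: follow parent links to the root."""
    d = 0
    while h[i][3] != -1:
        i = h[i][3]
        d += 1
    return d

print([depth(i, hierarchy) for i in range(3)])
```

Following the parent column like this is exactly the "grandpa, father, son" walk the tutorial jokes about.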

doc/tutorials/calib3d/real_time_pose/real_time_pose.markdown

Lines changed: 1 addition & 1 deletion
@@ -381,7 +381,7 @@ Here is explained in detail the code for the real time application:
 as not, there are false correspondences or also called *outliers*. The [Random Sample
 Consensus](http://en.wikipedia.org/wiki/RANSAC) or *Ransac* is a non-deterministic iterative
 method which estimate parameters of a mathematical model from observed data producing an
-approximate result as the number of iterations increase. After appyling *Ransac* all the *outliers*
+approximate result as the number of iterations increase. After applying *Ransac* all the *outliers*
 will be eliminated to then estimate the camera pose with a certain probability to obtain a good
 solution.
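The sample-score-keep-best loop the RANSAC paragraph describes can be shown on a toy problem. This is a line-fitting sketch of the general scheme, not the pose-estimation RANSAC inside OpenCV; all names and thresholds are illustrative:

```python
import random

def ransac_line(points, iters=200, thresh=0.5, seed=0):
    """Toy RANSAC: repeatedly sample 2 points, fit y = m*x + b,
    and keep the model with the most inliers within thresh."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # vertical sample, skip this model
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        inliers = [p for p in points if abs(p[1] - (m * p[0] + b)) < thresh]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers

# 10 points on y = 2x + 1 plus 2 gross outliers.
pts = [(x, 2 * x + 1) for x in range(10)] + [(3, 40), (7, -15)]
inliers = ransac_line(pts)
print(len(inliers))
```

As the paragraph says, more iterations raise the probability that at least one sampled pair is outlier-free, which is what makes the result only probabilistically good.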

doc/tutorials/gapi/anisotropic_segmentation/porting_anisotropic_segmentation.markdown

Lines changed: 2 additions & 2 deletions
@@ -153,7 +153,7 @@ file name before running the application, e.g.:

     $ GRAPH_DUMP_PATH=segm.dot ./bin/example_tutorial_porting_anisotropic_image_segmentation_gapi

-Now this file can be visalized with a `dot` command like this:
+Now this file can be visualized with a `dot` command like this:

     $ dot segm.dot -Tpng -o segm.png

@@ -368,7 +368,7 @@ visualization like this:

 ![Anisotropic image segmentation graph with OpenCV & Fluid kernels](pics/segm_fluid.gif)

-This graph doesn't differ structually from its previous version (in
+This graph doesn't differ structurally from its previous version (in
 terms of operations and data objects), though a changed layout (on the
 left side of the dump) is easily noticeable.

doc/tutorials/gapi/face_beautification/face_beautification.markdown

Lines changed: 1 addition & 1 deletion
@@ -427,7 +427,7 @@ the ROI, which will lead to accuracy improvement.
 Unfortunately, another problem occurs if we do that:
 if the rectangular ROI is near the border, a describing square will probably go
 out of the frame --- that leads to errors of the landmarks detector.
-To aviod such a mistake, we have to implement an algorithm that, firstly,
+To avoid such a mistake, we have to implement an algorithm that, firstly,
 describes every rectangle by a square, then counts the farthest coordinates
 turned up to be outside of the frame and, finally, pads the source image by
 borders (e.g. single-colored) with the size counted. It will be safe to take
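The square-then-pad scheme in this paragraph can be sketched directly: center a square on the rectangle, then measure how far it overshoots the frame on each side. A minimal sketch of the described algorithm, with hypothetical names, not the tutorial's own code:

```python
def square_and_padding(rect, frame_w, frame_h):
    """Describe a rectangular ROI (x, y, w, h) by a centered square and
    return the per-side border sizes the frame must be padded by so the
    square stays inside the image."""
    x, y, w, h = rect
    side = max(w, h)
    sx = x + (w - side) // 2   # center the square on the rectangle
    sy = y + (h - side) // 2
    pad_left = max(0, -sx)
    pad_top = max(0, -sy)
    pad_right = max(0, sx + side - frame_w)
    pad_bottom = max(0, sy + side - frame_h)
    return (sx, sy, side), (pad_left, pad_top, pad_right, pad_bottom)

# A tall ROI near the left edge of a 100x100 frame: the describing
# square sticks out on the left, so the frame needs left padding.
square, pads = square_and_padding((2, 10, 20, 60), 100, 100)
print(square, pads)
```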
