Commit 13fd272

jsoref authored and srowen committed
Spelling r common dev mlib external project streaming resource managers python
### What changes were proposed in this pull request?

This PR intends to fix typos in the sub-modules:

* `R`
* `common`
* `dev`
* `mlib`
* `external`
* `project`
* `streaming`
* `resource-managers`
* `python`

Split per srowen apache#30323 (comment)

NOTE: The misspellings have been reported at jsoref@706a726#commitcomment-44064356

### Why are the changes needed?

Misspelled words make it harder to read / understand content.

### Does this PR introduce _any_ user-facing change?

There are various fixes to documentation, etc.

### How was this patch tested?

No testing was performed.

Closes apache#30402 from jsoref/spelling-R_common_dev_mlib_external_project_streaming_resource-managers_python.

Authored-by: Josh Soref <[email protected]>
Signed-off-by: Sean Owen <[email protected]>
1 parent: 35ded12 · commit: 13fd272

File tree

101 files changed: +208 −208 lines


R/CRAN_RELEASE.md (1 addition, 1 deletion)

@@ -25,7 +25,7 @@ To release SparkR as a package to CRAN, we would use the `devtools` package. Ple

   First, check that the `Version:` field in the `pkg/DESCRIPTION` file is updated. Also, check for stale files not under source control.

 - Note that while `run-tests.sh` runs `check-cran.sh` (which runs `R CMD check`), it is doing so with `--no-manual --no-vignettes`, which skips a few vignettes or PDF checks - therefore it will be preferred to run `R CMD check` on the source package built manually before uploading a release. Also note that for CRAN checks for pdf vignettes to success, `qpdf` tool must be there (to install it, eg. `yum -q -y install qpdf`).
 + Note that while `run-tests.sh` runs `check-cran.sh` (which runs `R CMD check`), it is doing so with `--no-manual --no-vignettes`, which skips a few vignettes or PDF checks - therefore it will be preferred to run `R CMD check` on the source package built manually before uploading a release. Also note that for CRAN checks for pdf vignettes to success, `qpdf` tool must be there (to install it, e.g. `yum -q -y install qpdf`).

   To upload a release, we would need to update the `cran-comments.md`. This should generally contain the results from running the `check-cran.sh` script along with comments on status of all `WARNING` (should not be any) or `NOTE`. As a part of `check-cran.sh` and the release process, the vignettes is build - make sure `SPARK_HOME` is set and Spark jars are accessible.

R/install-dev.bat (1 addition, 1 deletion)

@@ -26,7 +26,7 @@ MKDIR %SPARK_HOME%\R\lib

   rem When you pass the package path directly as an argument to R CMD INSTALL,
   rem it takes the path as 'C:\projects\spark\R\..\R\pkg"' as an example at
 - rem R 4.0. To work around this, directly go to the directoy and install it.
 + rem R 4.0. To work around this, directly go to the directory and install it.
   rem See also SPARK-32074
   pushd %SPARK_HOME%\R\pkg\
   R.exe CMD INSTALL --library="%SPARK_HOME%\R\lib" .

R/pkg/R/DataFrame.R (3 additions, 3 deletions)

@@ -2772,7 +2772,7 @@ setMethod("merge",

   #' Creates a list of columns by replacing the intersected ones with aliases
   #'
   #' Creates a list of columns by replacing the intersected ones with aliases.
 - #' The name of the alias column is formed by concatanating the original column name and a suffix.
 + #' The name of the alias column is formed by concatenating the original column name and a suffix.
   #'
   #' @param x a SparkDataFrame
   #' @param intersectedColNames a list of intersected column names of the SparkDataFrame

@@ -3231,7 +3231,7 @@ setMethod("describe",

   #' \item stddev
   #' \item min
   #' \item max
 - #' \item arbitrary approximate percentiles specified as a percentage (eg, "75\%")
 + #' \item arbitrary approximate percentiles specified as a percentage (e.g., "75\%")
   #' }
   #' If no statistics are given, this function computes count, mean, stddev, min,
   #' approximate quartiles (percentiles at 25\%, 50\%, and 75\%), and max.

@@ -3743,7 +3743,7 @@ setMethod("histogram",

   #'
   #' @param x a SparkDataFrame.
   #' @param url JDBC database url of the form \code{jdbc:subprotocol:subname}.
 - #' @param tableName yhe name of the table in the external database.
 + #' @param tableName the name of the table in the external database.
   #' @param mode one of 'append', 'overwrite', 'error', 'errorifexists', 'ignore'
   #'        save mode (it is 'error' by default)
   #' @param ... additional JDBC database connection properties.

R/pkg/R/RDD.R (2 additions, 2 deletions)

@@ -970,7 +970,7 @@ setMethod("takeSample", signature(x = "RDD", withReplacement = "logical",

   MAXINT)))))
   # If the first sample didn't turn out large enough, keep trying to
   # take samples; this shouldn't happen often because we use a big
 - # multiplier for thei initial size
 + # multiplier for the initial size
   while (length(samples) < total)
     samples <- collectRDD(sampleRDD(x, withReplacement, fraction,
                                     as.integer(ceiling(stats::runif(1,

@@ -1512,7 +1512,7 @@ setMethod("glom",

   #'
   #' @param x An RDD.
   #' @param y An RDD.
 - #' @return a new RDD created by performing the simple union (witout removing
 + #' @return a new RDD created by performing the simple union (without removing
   #'         duplicates) of two input RDDs.
   #' @examples
   #'\dontrun{

R/pkg/R/SQLContext.R (1 addition, 1 deletion)

@@ -203,7 +203,7 @@ getSchema <- function(schema, firstRow = NULL, rdd = NULL) {

     })
   }

 - # SPAKR-SQL does not support '.' in column name, so replace it with '_'
 + # SPARK-SQL does not support '.' in column name, so replace it with '_'
   # TODO(davies): remove this once SPARK-2775 is fixed
   names <- lapply(names, function(n) {
     nn <- gsub(".", "_", n, fixed = TRUE)
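The comment fixed above sits next to SparkR's column-name sanitization: Spark SQL does not accept `.` in column names, so `gsub(".", "_", n, fixed = TRUE)` rewrites it to `_`. A minimal Python sketch of that same literal (non-regex) substitution, for illustration only:

```python
def sanitize_column_name(name: str) -> str:
    # Mirrors R's gsub(".", "_", n, fixed = TRUE): a literal replace,
    # not a regular-expression match, so only actual dots are rewritten.
    return name.replace(".", "_")

print(sanitize_column_name("Sepal.Length"))  # Sepal_Length
```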

R/pkg/R/WindowSpec.R (2 additions, 2 deletions)

@@ -54,7 +54,7 @@ setMethod("show", "WindowSpec",

   #' Defines the partitioning columns in a WindowSpec.
   #'
   #' @param x a WindowSpec.
 - #' @param col a column to partition on (desribed by the name or Column).
 + #' @param col a column to partition on (described by the name or Column).
   #' @param ... additional column(s) to partition on.
   #' @return A WindowSpec.
   #' @rdname partitionBy

@@ -231,7 +231,7 @@ setMethod("rangeBetween",

   #' @rdname over
   #' @name over
   #' @aliases over,Column,WindowSpec-method
 - #' @family colum_func
 + #' @family column_func
   #' @examples
   #' \dontrun{
   #' df <- createDataFrame(mtcars)

R/pkg/R/column.R (8 additions, 8 deletions)

@@ -135,7 +135,7 @@ createMethods()

   #' @rdname alias
   #' @name alias
   #' @aliases alias,Column-method
 - #' @family colum_func
 + #' @family column_func
   #' @examples
   #' \dontrun{
   #' df <- createDataFrame(iris)

@@ -161,7 +161,7 @@ setMethod("alias",

   #'
   #' @rdname substr
   #' @name substr
 - #' @family colum_func
 + #' @family column_func
   #' @aliases substr,Column-method
   #'
   #' @param x a Column.

@@ -187,7 +187,7 @@ setMethod("substr", signature(x = "Column"),

   #'
   #' @rdname startsWith
   #' @name startsWith
 - #' @family colum_func
 + #' @family column_func
   #' @aliases startsWith,Column-method
   #'
   #' @param x vector of character string whose "starts" are considered

@@ -206,7 +206,7 @@ setMethod("startsWith", signature(x = "Column"),

   #'
   #' @rdname endsWith
   #' @name endsWith
 - #' @family colum_func
 + #' @family column_func
   #' @aliases endsWith,Column-method
   #'
   #' @param x vector of character string whose "ends" are considered

@@ -224,7 +224,7 @@ setMethod("endsWith", signature(x = "Column"),

   #'
   #' @rdname between
   #' @name between
 - #' @family colum_func
 + #' @family column_func
   #' @aliases between,Column-method
   #'
   #' @param x a Column

@@ -251,7 +251,7 @@ setMethod("between", signature(x = "Column"),

   # nolint end
   #' @rdname cast
   #' @name cast
 - #' @family colum_func
 + #' @family column_func
   #' @aliases cast,Column-method
   #'
   #' @examples

@@ -300,7 +300,7 @@ setMethod("%in%",

   #'        Can be a single value or a Column.
   #' @rdname otherwise
   #' @name otherwise
 - #' @family colum_func
 + #' @family column_func
   #' @aliases otherwise,Column-method
   #' @note otherwise since 1.5.0
   setMethod("otherwise",

@@ -440,7 +440,7 @@ setMethod("withField",

   #' )
   #'
   #' # However, if you are going to add/replace multiple nested fields,
 - #' # it is preffered to extract out the nested struct before
 + #' # it is preferred to extract out the nested struct before
   #' # adding/replacing multiple fields e.g.
   #' head(
   #'   withColumn(

R/pkg/R/context.R (2 additions, 2 deletions)

@@ -86,7 +86,7 @@ makeSplits <- function(numSerializedSlices, length) {

   # For instance, for numSerializedSlices of 22, length of 50
   # [1]  0  0  2  2  4  4  6  6  6  9  9 11 11 13 13 15 15 15 18 18 20 20 22 22 22
   # [26] 25 25 27 27 29 29 31 31 31 34 34 36 36 38 38 40 40 40 43 43 45 45 47 47 47
 - # Notice the slice group with 3 slices (ie. 6, 15, 22) are roughly evenly spaced.
 + # Notice the slice group with 3 slices (i.e. 6, 15, 22) are roughly evenly spaced.
   # We are trying to reimplement the calculation in the positions method in ParallelCollectionRDD
   if (numSerializedSlices > 0) {
     unlist(lapply(0: (numSerializedSlices - 1), function(x) {

@@ -116,7 +116,7 @@ makeSplits <- function(numSerializedSlices, length) {

   #' This change affects both createDataFrame and spark.lapply.
   #' In the specific one case that it is used to convert R native object into SparkDataFrame, it has
   #' always been kept at the default of 1. In the case the object is large, we are explicitly setting
 - #' the parallism to numSlices (which is still 1).
 + #' the parallelism to numSlices (which is still 1).
   #'
   #' Specifically, we are changing to split positions to match the calculation in positions() of
   #' ParallelCollectionRDD in Spark.
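The comment fixed above documents the slice layout `makeSplits` computes to match `positions()` in Spark's `ParallelCollectionRDD`: slice `i` of a collection of the given length starts at `floor(i * length / numSlices)`, and each element is tagged with its slice's start index. A rough Python rendering of that calculation (an illustrative sketch, not the SparkR source) reproduces the example vector from the comment:

```python
from math import floor

def make_splits(num_slices: int, length: int) -> list[int]:
    # Slice i covers positions [floor(i*length/n), floor((i+1)*length/n));
    # emit each slice's start once per element it contains.
    out = []
    for i in range(num_slices):
        start = floor(i * length / num_slices)
        end = floor((i + 1) * length / num_slices)
        out.extend([start] * (end - start))
    return out

# Matches the vector in the comment above for numSerializedSlices=22, length=50:
# slices of size 3 start at 6, 15, 22, ... and are roughly evenly spaced.
print(make_splits(22, 50))
```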

R/pkg/R/deserialize.R (1 addition, 1 deletion)

@@ -250,7 +250,7 @@ readDeserializeWithKeysInArrow <- function(inputCon) {

   keys <- readMultipleObjects(inputCon)

 - # Read keys to map with each groupped batch later.
 + # Read keys to map with each grouped batch later.
   list(keys = keys, data = data)
 }

R/pkg/R/functions.R (2 additions, 2 deletions)

@@ -144,7 +144,7 @@ NULL

   #' @param y Column to compute on.
   #' @param pos In \itemize{
   #'            \item \code{locate}: a start position of search.
 - #'            \item \code{overlay}: a start postiton for replacement.
 + #'            \item \code{overlay}: a start position for replacement.
   #'            }
   #' @param len In \itemize{
   #'            \item \code{lpad} the maximum length of each output result.

@@ -2918,7 +2918,7 @@ setMethod("shiftRight", signature(y = "Column", x = "numeric"),

   })

   #' @details
 - #' \code{shiftRightUnsigned}: (Unigned) shifts the given value numBits right. If the given value is
 + #' \code{shiftRightUnsigned}: (Unsigned) shifts the given value numBits right. If the given value is
   #' a long value, it will return a long value else it will return an integer value.
   #'
   #' @rdname column_math_functions
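The second doc fix above concerns `shiftRightUnsigned`, which fills the vacated high bits with zeros instead of copying the sign bit. The distinction it documents can be sketched in Python (illustrative only, assuming Java-style fixed-width semantics: 32-bit for integer columns, 64-bit for long columns):

```python
def shift_right_unsigned(value: int, num_bits: int, width: int = 32) -> int:
    # Reinterpret `value` as an unsigned integer of the given bit width,
    # then shift right; high bits are filled with zeros (Java's `>>>`).
    return (value & ((1 << width) - 1)) >> num_bits

print(shift_right_unsigned(-1, 1))  # 2147483647 (zero-fill shift)
print(-1 >> 1)                      # -1 (signed shift keeps the sign bit)
```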
