Merge pull request #1857 from AdeelH/rel

Pre-release fixes and improvements

AdeelH authored Aug 23, 2023
2 parents ee3fbef + 5121058 commit b54ea6c
Showing 47 changed files with 990 additions and 437 deletions.
1 change: 1 addition & 0 deletions .coveragerc
@@ -24,3 +24,4 @@ exclude_lines =
     @(?:abc\.)?abstractmethod
     @(?:abc\.)?abstractproperty
     @overload
+    pass
2 changes: 1 addition & 1 deletion Dockerfile
@@ -1,5 +1,5 @@
-ARG UBUNTU_VERSION=22.04
 ARG CUDA_VERSION
+ARG UBUNTU_VERSION
 FROM nvidia/cuda:${CUDA_VERSION}-cudnn8-runtime-ubuntu${UBUNTU_VERSION}

 # wget: needed below to install conda
29 changes: 22 additions & 7 deletions docker/build
@@ -13,7 +13,7 @@ function usage() {
 Build Docker images.

 Options:
---arm64 will build image for arm64 architecture
+--arm64 Build image for arm64 architecture.
 "
 }

@@ -27,13 +27,28 @@ then

 PLATFORM="amd64"
 IMAGE_EXT=""
-if [ "${1:-}" = "--arm64" ]
-then
-    PLATFORM="arm64"
-    IMAGE_EXT="-arm64"
-fi
+CUDA_VERSION="12.1.1"
+UBUNTU_VERSION="22.04"
+
+while [[ $# -gt 0 ]]
+do
+    case "$1" in
+        --arm64)
+            PLATFORM="arm64"
+            IMAGE_EXT="-arm64"
+            shift
+            ;;
+        *)
+            echo "Unknown option: $1"
+            usage
+            exit 1
+            ;;
+    esac
+done

 DOCKER_BUILDKIT=1 docker build \
-    --platform linux/${PLATFORM} --build-arg CUDA_VERSION="11.7.1" \
+    --platform linux/${PLATFORM} \
+    --build-arg CUDA_VERSION="${CUDA_VERSION}" \
+    --build-arg UBUNTU_VERSION="${UBUNTU_VERSION}" \
     -t raster-vision-pytorch${IMAGE_EXT} -f Dockerfile .
 fi
2 changes: 1 addition & 1 deletion docker/run
@@ -107,7 +107,7 @@ do
             ;;
         --jupyter-lab)
             find_free_port_in_range 8888 9999
-            JUPYTER="-v ${RASTER_VISION_NOTEBOOK_DIR}:/opt/notebooks -v ${HOME}/.jupyter:/root/.jupyter:ro -p $FREE_PORT:$FREE_PORT"
+            JUPYTER="-v ${RASTER_VISION_NOTEBOOK_DIR}:/opt/notebooks -v ${HOME}/.jupyter:/root/.jupyter -p $FREE_PORT:$FREE_PORT"
             # run jupyter lab in the background
             CMD=(/bin/bash -c "jupyter lab --ip 0.0.0.0 --port $FREE_PORT --no-browser --allow-root --notebook-dir=/opt/notebooks & bash")
             echo "Starting Jupyter Lab server at 0.0.0.0:$FREE_PORT. This may take a few seconds."
52 changes: 32 additions & 20 deletions docs/release.rst
@@ -9,32 +9,44 @@ Minor or Major Version Release
 ------------------------------

 #. It's a good idea to update any major dependencies before the release.
-#. Update the docs if needed. See the `docs README <{{ repo }}/docs/README.md>`__ for instructions.
 #. Checkout the ``master`` branch, re-build the docker image (``docker/build``), and push it to ECR (``docker/ecr_publish``).
-#. Execute all `tutorial notebooks <{{ repo }}/docs/usage/tutorials/>`__ and make sure they work correctly. Do not commit output changes unless code behavior has changed.
-#. Run all :ref:`rv examples` and check that evaluation metrics are close to the scores from the last release. (For each example, there should be a link to a JSON file with the evaluation metrics from the last release.) This stage often uncovers bugs, and is the most time consuming part of the release process. There is a `script <{{ repo_examples }}/test.py>`__ to help run the examples and collect their outputs. See the associated `README <{{ repo_examples }}/README.md>`__ for details.
-#. Collect all model bundles, and check that they work with the ``predict`` command and sanity check output in QGIS.
-#. Update the :ref:`model zoo` by uploading model bundles and sample images to the right place on S3. If you use the ``collect`` command (`described here <{{ repo_examples }}/README.md>`__), you should be able to sync the ``collect_dir`` to ``s3://azavea-research-public-data/raster-vision/examples/model-zoo-<version>``.
-#. Update the notebooks that use models from the model zoo so that they use the latest version and re-run.
-#. Update `tiny_spacenet.py <{{ repo_examples }}/tiny_spacenet.py>`__ if needed and ensure the line numbers in every ``literalinclude`` of that file are correct. Tip: you can find all instances by searching the repo using the regex: ``\.\. literalinclude:: .+tiny_spacenet\.py$``.
-#. Test :ref:`setup` and :ref:`quickstart` instructions and make sure they work.
-#. Test examples from :ref:`pipelines plugins`.
-#. Test examples:
-
-   .. code-block:: console
-
-      rastervision run inprocess rastervision.pipeline_example_plugin1.config1 -a root_uri /opt/data/pipeline-example/1/ --splits 2
-      rastervision run inprocess rastervision.pipeline_example_plugin1.config2 -a root_uri /opt/data/pipeline-example/2/ --splits 2
-      rastervision run inprocess rastervision.pipeline_example_plugin2.config3 -a root_uri /opt/data/pipeline-example/3/ --splits 2
-
-#. Test examples from :ref:`bootstrap`.
-
-   .. code-block:: console
-
-      cookiecutter /opt/src/cookiecutter_template
+#. Follow the instructions in `this README <{{ repo_examples }}/README.md>`__ to do the following:
+
+   #. Run all :ref:`rv examples` and check that evaluation metrics are close to the scores from the last release. (For each example, there should be a link to a JSON file with the evaluation metrics from the last release.) This stage often uncovers bugs, and is the most time consuming part of the release process.
+   #. Collect all model bundles, and check that they work with the ``predict`` command and sanity check output in QGIS.
+   #. Update the :ref:`model zoo` by uploading model bundles and sample images to the right place on S3. If you use the ``collect`` command (`see <{{ repo_examples }}/README.md>`__), you should be able to sync the ``collect_dir`` to ``s3://azavea-research-public-data/raster-vision/examples/model-zoo-<version>``.
+   #. Screenshot the outputs of the ``compare`` command (for each example) and include them in the PR described below.
+
+#. Test notebooks:
+
+   #. Update the `tutorial notebooks <{{ repo }}/docs/usage/tutorials/>`__ that use models from the model zoo so that they use the latest version.
+   #. Execute all `tutorial notebooks <{{ repo }}/docs/usage/tutorials/>`__ and make sure they work correctly. Do not commit output changes unless code behavior has changed.
+
+#. Test/update docs:
+
+   #. Update the docs if needed. See the `docs README <{{ repo }}/docs/README.md>`__ for instructions.
+   #. Update `tiny_spacenet.py <{{ repo_examples }}/tiny_spacenet.py>`__ if needed and ensure the line numbers in every ``literalinclude`` of that file are correct. Tip: you can find all instances by searching the repo using the regex: ``\.\. literalinclude:: .+tiny_spacenet\.py$``.
+   #. Test :ref:`setup` and :ref:`quickstart` instructions and make sure they work.
+   #. Test examples from :ref:`pipelines plugins`.
+
+      .. code-block:: console
+
+         rastervision run inprocess rastervision.pipeline_example_plugin1.config1 -a root_uri /opt/data/pipeline-example/1/ --splits 2
+         rastervision run inprocess rastervision.pipeline_example_plugin1.config2 -a root_uri /opt/data/pipeline-example/2/ --splits 2
+         rastervision run inprocess rastervision.pipeline_example_plugin2.config3 -a root_uri /opt/data/pipeline-example/3/ --splits 2
+
+   #. Test examples from :ref:`bootstrap`.
+
+      .. code-block:: console
+
+         cookiecutter /opt/src/cookiecutter_template

 #. Update the `the changelog <{{ repo }}/docs/changelog.rst>`__, and point out API changes.
 #. Fix any broken badges on the GitHub repo readme.
-#. Update the version number. This occurs in several places, so it's best to do this with a find and replace over the entire repo.
+#. Update the version number. This occurs in several places, so it's best to do this with a find-and-replace over the entire repo.
 #. Make a PR to the ``master`` branch with the preceding updates. In the PR, there should be a link to preview the docs. Check that they are building and look correct.
 #. Make a git branch with the version as the name, and push to GitHub.
 #. Ensure that the docs are building correctly for the new version branch on `readthedocs <https://readthedocs.org/projects/raster-vision/>`_. You will need to have admin access on your RTD account. Once the branch is building successfully, Under *Versions -> Activate a Version*, you can activate the version to add it to the sidebar of the docs for the latest version. (This might require manually triggering a rebuild of the docs.) Then, under *Admin -> Advanced Settings*, change the default version to the new version.
2 changes: 1 addition & 1 deletion docs/usage/tutorials/pred_and_eval_ss.ipynb
@@ -43,7 +43,7 @@
   },
   "source": [
    "Load a :class:`Learner` with a trained model from bundle -- :meth:`.Learner.from_model_bundle`\n",
-   "---------------------------------------------------------------------------------------------"
+   "----------------------------------------------------------------------------------------------"
   ]
  },
  {
22 changes: 21 additions & 1 deletion rastervision_core/rastervision/core/box.py
@@ -1,4 +1,5 @@
-from typing import TYPE_CHECKING, Callable, Dict, Union, Tuple, Optional, List
+from typing import (TYPE_CHECKING, Callable, Dict, List, Optional, Sequence,
+                    Tuple, Union)
 from typing_extensions import Literal
 from pydantic import PositiveInt as PosInt, conint
 import math
@@ -484,3 +485,22 @@ def within_aoi(window: 'Box', aoi_polygons: List[Polygon]) -> bool:
             if w.within(polygon):
                 return True
         return False
+
+    def __contains__(self, query: Union['Box', Sequence]) -> bool:
+        """Check if box or point is contained within this box.
+
+        Args:
+            query: Box or single point (x, y).
+
+        Raises:
+            NotImplementedError: if query is not a Box or tuple/list.
+        """
+        if isinstance(query, Box):
+            ymin, xmin, ymax, xmax = query
+            return (ymin >= self.ymin and xmin >= self.xmin
+                    and ymax <= self.ymax and xmax <= self.xmax)
+        elif isinstance(query, (tuple, list)):
+            x, y = query
+            return self.xmin <= x <= self.xmax and self.ymin <= y <= self.ymax
+        else:
+            raise NotImplementedError()
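The new `__contains__` makes `window in extent` and `(x, y) in extent` checks work on `Box`. A runnable sketch of the same semantics, using a hypothetical `MiniBox` stand-in so the snippet doesn't require Raster Vision itself (note the asymmetry the docstring hints at: boxes are `(ymin, xmin, ymax, xmax)` tuples, points are `(x, y)`):

```python
from dataclasses import dataclass
from typing import Sequence, Union


@dataclass(frozen=True)
class MiniBox:
    """Hypothetical stand-in for rastervision's Box (ymin, xmin, ymax, xmax)."""
    ymin: int
    xmin: int
    ymax: int
    xmax: int

    def __iter__(self):
        # lets a MiniBox be unpacked like the real Box
        return iter((self.ymin, self.xmin, self.ymax, self.xmax))

    def __contains__(self, query: Union['MiniBox', Sequence]) -> bool:
        if isinstance(query, MiniBox):
            # a box is contained iff all four edges are inside
            ymin, xmin, ymax, xmax = query
            return (ymin >= self.ymin and xmin >= self.xmin
                    and ymax <= self.ymax and xmax <= self.xmax)
        elif isinstance(query, (tuple, list)):
            # a point is (x, y), not (y, x)
            x, y = query
            return self.xmin <= x <= self.xmax and self.ymin <= y <= self.ymax
        raise NotImplementedError()


extent = MiniBox(0, 0, 100, 100)
print(MiniBox(10, 10, 50, 50) in extent)   # True: fully inside
print(MiniBox(10, 10, 50, 120) in extent)  # False: xmax exceeds the extent
print((50, 99) in extent)                  # True: point given as (x, y)
```

Since `not in` also routes through `__contains__`, callers get edge-exclusion checks for free.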
@@ -118,7 +118,7 @@ def get_cell_scores(self, cell: Box) -> Optional[Sequence[float]]:
         """
         result = self.cell_to_label.get(cell)
         if result is not None:
-            return result.score
+            return result.scores
         else:
             return None
@@ -163,7 +163,7 @@ def get_class_ids(self) -> np.ndarray:
     def __len__(self) -> int:
         return self.boxlist.get().shape[0]

-    def __str__(self) -> str:
+    def __str__(self) -> str:  # pragma: no cover
         return str(self.boxlist.get())

     def to_boxlist(self) -> NpBoxList:
@@ -1,5 +1,4 @@
-from typing import (TYPE_CHECKING, Any, Iterable, List, Optional, Sequence,
-                    Union)
+from typing import (TYPE_CHECKING, Any, Iterable, List, Optional, Sequence)
 from abc import abstractmethod

 import numpy as np
@@ -23,8 +22,8 @@ def __init__(self, extent: Box, num_classes: int, dtype: np.dtype):
         """Constructor.

         Args:
-            extent (Box): The extent of the region to which
-                the labels belong, in global coordinates.
+            extent (Box): The extent of the region to which the labels belong,
+                in global coordinates.
             num_classes (int): Number of classes.
         """
         self.extent = extent
@@ -143,9 +142,8 @@ def transform_shape(x, y, z=None):
             del self[window]

     @classmethod
-    def make_empty(cls, extent: Box, num_classes: int, smooth: bool = False
-                   ) -> Union['SemanticSegmentationDiscreteLabels',
-                              'SemanticSegmentationSmoothLabels']:
+    def make_empty(cls, extent: Box, num_classes: int,
+                   smooth: bool = False) -> 'SemanticSegmentationLabels':
         """Instantiate an empty instance.

         Args:
@@ -157,8 +155,7 @@ def make_empty(cls, extent: Box, num_classes: int, smooth: bool = False
                 SemanticSegmentationDiscreteLabels object. Defaults to False.

         Returns:
-            Union[SemanticSegmentationDiscreteLabels,
-                  SemanticSegmentationSmoothLabels]: If smooth=True, returns a
+            SemanticSegmentationLabels: If smooth=True, returns a
                 SemanticSegmentationSmoothLabels. Otherwise, a
                 SemanticSegmentationDiscreteLabels.
@@ -174,15 +171,14 @@ def make_empty(cls, extent: Box, num_classes: int, smooth: bool = False
             extent=extent, num_classes=num_classes)

     @classmethod
-    def from_predictions(cls,
-                         windows: Iterable['Box'],
-                         predictions: Iterable[Any],
-                         extent: Box,
-                         num_classes: int,
-                         smooth: bool = False,
-                         crop_sz: Optional[int] = None
-                         ) -> Union['SemanticSegmentationDiscreteLabels',
-                                    'SemanticSegmentationSmoothLabels']:
+    def from_predictions(
+            cls,
+            windows: Iterable['Box'],
+            predictions: Iterable[Any],
+            extent: Box,
+            num_classes: int,
+            smooth: bool = False,
+            crop_sz: Optional[int] = None) -> 'SemanticSegmentationLabels':
         """Instantiate from windows and their corresponding predictions.

         Args:
windows. Defaults to None.
Returns:
Union[SemanticSegmentationDiscreteLabels,
SemanticSegmentationSmoothLabels]: If smooth=True, returns a
SemanticSegmentationLabels: If smooth=True, returns a
SemanticSegmentationSmoothLabels. Otherwise, a
SemanticSegmentationDiscreteLabels.
"""
@@ -349,8 +344,7 @@ def from_predictions(cls,
                          extent: Box,
                          num_classes: int,
                          crop_sz: Optional[int] = None
-                         ) -> Union['SemanticSegmentationDiscreteLabels',
-                                    'SemanticSegmentationSmoothLabels']:
+                         ) -> 'SemanticSegmentationDiscreteLabels':
         labels = cls.make_empty(extent, num_classes)
         labels.add_predictions(windows, predictions, crop_sz=crop_sz)
         return labels
@@ -522,8 +516,7 @@ def from_predictions(cls,
                          extent: Box,
                          num_classes: int,
                          crop_sz: Optional[int] = None
-                         ) -> Union['SemanticSegmentationDiscreteLabels',
-                                    'SemanticSegmentationSmoothLabels']:
+                         ) -> 'SemanticSegmentationSmoothLabels':
         labels = cls.make_empty(extent, num_classes)
         labels.add_predictions(windows, predictions, crop_sz=crop_sz)
         return labels
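The signature cleanups above preserve a factory pattern worth noting: the base class's `make_empty` dispatches to a concrete subclass based on the `smooth` flag, `from_predictions` builds on it, and the subclass overrides narrow the return type. A minimal sketch with hypothetical names (`LabelsBase`, `DiscreteLabels`, `SmoothLabels` stand in for the real `SemanticSegmentationLabels` hierarchy):

```python
from typing import Any, Iterable, List


class LabelsBase:
    """Base class whose factory classmethods pick the concrete subclass."""
    preds: List[Any]

    @classmethod
    def make_empty(cls, smooth: bool = False) -> 'LabelsBase':
        # the smooth flag decides which concrete representation to build
        return SmoothLabels() if smooth else DiscreteLabels()

    @classmethod
    def from_predictions(cls, predictions: Iterable[Any],
                         smooth: bool = False) -> 'LabelsBase':
        # build empty, then fill — mirrors make_empty + add_predictions
        labels = cls.make_empty(smooth=smooth)
        labels.preds = list(predictions)
        return labels


class DiscreteLabels(LabelsBase):
    pass


class SmoothLabels(LabelsBase):
    pass


print(type(LabelsBase.from_predictions([1, 2], smooth=True)).__name__)
# SmoothLabels
```

Typing the base-class factories as returning the base type (rather than a `Union` of subclasses) is exactly what the diff's annotation change does: callers program against the shared interface, and the subclasses' own overrides advertise the precise type.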
@@ -7,24 +7,12 @@
 from rastervision.core.data.label import SemanticSegmentationLabels
 from rastervision.core.data.label_source.label_source import LabelSource
 from rastervision.core.data.raster_source import RasterSource
+from rastervision.core.data.utils import pad_to_window_size

 if TYPE_CHECKING:
     from rastervision.core.data import CRSTransformer


-def fill_edge(label_arr: np.ndarray, window: Box, extent: Box,
-              fill_value: int) -> np.ndarray:
-    """If window goes over the edge of the extent, buffer with fill_value."""
-    if window.ymax <= extent.ymax and window.xmax <= extent.xmax:
-        return label_arr
-
-    x = np.full(window.size, fill_value)
-    ylim = extent.ymax - window.ymin
-    xlim = extent.xmax - window.xmin
-    x[0:ylim, 0:xlim] = label_arr[0:ylim, 0:xlim]
-    return x
-
-
 class SemanticSegmentationLabelSource(LabelSource):
     """A read-only label source for semantic segmentation."""
@@ -101,8 +89,9 @@ def get_label_arr(self, window: Optional[Box] = None) -> np.ndarray:
             of the null class as defined by the class_config.

         Args:
-            window (Optional[Box], optional): Window to get labels for. If
-                None, returns a label array covering the full extent of the scene.
+            window (Optional[Box], optional): Window (in pixel coords) to get
+                labels for. If None, returns a label array covering the full
+                extent of the scene.

         Returns:
             np.ndarray: Label array.
@@ -113,8 +102,10 @@ def get_label_arr(self, window: Optional[Box] = None) -> np.ndarray:
         label_arr = self.raster_source.get_chip(window)
         if label_arr.ndim == 3:
             label_arr = np.squeeze(label_arr, axis=2)
-        label_arr = fill_edge(label_arr, window, self.extent,
-                              self.class_config.null_class_id)
+        h, w = label_arr.shape
+        if h < window.height or w < window.width:
+            label_arr = pad_to_window_size(label_arr, window, self.extent,
+                                           self.class_config.null_class_id)
         return label_arr

     @property
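The change above replaces the local `fill_edge` helper with the shared `pad_to_window_size` utility, but the behavior is the same: a chip read at the raster edge comes back smaller than the requested window and is placed in the top-left of an array of the full window size, pre-filled with the null-class value. A sketch of that padding step (`pad_chip_to_window` is a hypothetical stand-in; the real utility also takes the window and extent objects rather than a bare shape):

```python
import numpy as np


def pad_chip_to_window(chip: np.ndarray, window_hw: tuple,
                       fill_value: int) -> np.ndarray:
    """Pad a partially-read 2D chip to the full window size.

    The chip occupies the top-left corner; everything past the raster
    edge is filled with fill_value (e.g. the null class id).
    """
    h, w = chip.shape
    win_h, win_w = window_hw
    out = np.full((win_h, win_w), fill_value, dtype=chip.dtype)
    out[:h, :w] = chip
    return out


# a 3x4 chip read at the raster edge, padded to a 5x5 window
chip = np.ones((3, 4), dtype=np.uint8)
padded = pad_chip_to_window(chip, (5, 5), fill_value=255)
print(padded.shape)  # (5, 5)
```

The `if h < window.height or w < window.width` guard in the diff just skips the allocation and copy in the common case where the chip already fills the window.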
@@ -17,6 +17,7 @@
 from rastervision.core.data.label_source import SemanticSegmentationLabelSource
 from rastervision.core.data.raster_transformer import RGBClassTransformer
 from rastervision.core.data.raster_source import RasterioSource
+from rastervision.core.data.utils import write_window

 if TYPE_CHECKING:
     from rastervision.core.data import (VectorOutputConfig,
@@ -266,7 +267,8 @@ def write_smooth_raster_output(
                 score_arr = self._scores_to_uint8(score_arr)
             else:
                 score_arr = score_arr.astype(dtype)
-            self._write_array(ds, window, score_arr)
+            score_arr = score_arr.transpose(1, 2, 0)
+            write_window(ds, score_arr, window)

         # save pixel hits too
         np.save(hits_path, labels.pixel_hits)
@@ -291,8 +293,7 @@ def write_discrete_raster_output(
                 if self.class_transformer is not None:
                     label_arr = self.class_transformer.class_to_rgb(
                         label_arr)
-                    label_arr = label_arr.transpose(2, 0, 1)
-                self._write_array(ds, window, label_arr)
+                write_window(ds, label_arr, window)

     def write_vector_outputs(self, labels: SemanticSegmentationLabels,
                              vector_output_dir: str) -> None:
@@ -314,18 +315,6 @@ def write_vector_outputs(self, labels: SemanticSegmentationLabels,
             out_uri = vo.get_uri(vector_output_dir, self.class_config)
             json_to_file(geojson, out_uri)

-    def _write_array(self, dataset: rio.DatasetReader, window: Box,
-                     arr: np.ndarray) -> None:
-        """Write array out to a rasterio dataset. Array must be of shape
-        (C, H, W).
-        """
-        rio_window = window.rasterio_format()
-        if len(arr.shape) == 2:
-            dataset.write_band(1, arr, window=rio_window)
-        else:
-            for i, band in enumerate(arr, start=1):
-                dataset.write_band(i, band, window=rio_window)
-
    def _clip_to_extent(self,
                        extent: Box,
                        window: Box,
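Replacing the band-looping `_write_array` (which took channels-first `(C, H, W)` arrays) with the shared `write_window` helper also standardizes the layout callers pass in: the smooth path now transposes to channels-last before writing, and the discrete path drops its `transpose(2, 0, 1)`. The layout relationship, as a quick self-contained check (numpy only, no rasterio required):

```python
import numpy as np

# rasterio stores bands channels-first, (C, H, W); image-style code often
# uses channels-last, (H, W, C). transpose(1, 2, 0) converts C,H,W -> H,W,C
# and transpose(2, 0, 1) converts back.
chw = np.arange(2 * 3 * 4).reshape(2, 3, 4)  # (C, H, W): 2 bands of 3x4
hwc = chw.transpose(1, 2, 0)                 # (H, W, C)

print(hwc.shape)  # (3, 4, 2)
# each band is the same 2D plane under either layout:
print(np.array_equal(hwc[..., 0], chw[0]))  # True
# the two transposes are inverses:
print(np.array_equal(hwc.transpose(2, 0, 1), chw))  # True
```

Keeping one convention at the call boundary means the single `write_window` utility can own the conversion to whatever the dataset writer expects, instead of every caller looping over bands itself.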