
Iterator api #23


Draft · wants to merge 6 commits into main

Conversation


@j-fu j-fu commented Jan 25, 2024

Use iterator api for 2D function plot as a first attempt to generalize to discontinuous and higher order plots.

Linked to WIAS-PDELib/GridVisualizeTools.jl#3


j-fu commented Jan 25, 2024

@chmerdon, this would be the API idea: write an iterator over local subtriangles.
This also would be the solution for #22.

@j-fu j-fu marked this pull request as draft November 25, 2024 12:53
@j-fu j-fu requested review from Copilot and pjaap June 17, 2025 12:22

@Copilot Copilot AI left a comment


Pull Request Overview

Generalizes 2D plotting using an iterator-based API for simplex grids, paving the way for discontinuous and higher-order visualizations.

  • Introduces a new LinearSimplices iterator type and integrates it into marching_triangles.
  • Adds dedicated tests in test/griditerators.jl to verify allocation limits and correctness across dimensions.
  • Exposes LinearSimplices in GridVisualize and updates module imports.
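
As a hedged illustration of the iterator idea described above (the names and fields here are hypothetical, not the PR's actual code), a minimal linear-simplex iterator implementing Julia's iteration protocol could look like this:

```julia
# Illustrative sketch only: iterate over the triangles of a simplex grid,
# yielding per-triangle coordinate and nodal value views.
struct TriangleIterator{Tc, Ti, Tv}
    coord::Matrix{Tc}      # 2 × nnodes node coordinates
    cellnodes::Matrix{Ti}  # 3 × ncells node indices per triangle
    values::Vector{Tv}     # one value per node (continuous data)
end

Base.length(it::TriangleIterator) = size(it.cellnodes, 2)

function Base.iterate(it::TriangleIterator, icell = 1)
    icell > length(it) && return nothing
    n = view(it.cellnodes, :, icell)
    tri = (coords = view(it.coord, :, n), values = view(it.values, n))
    return tri, icell + 1
end
```

Because consumers only rely on `iterate`, the same loop could later run over discontinuous or higher-order iterators that generate subtriangles on the fly.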

Reviewed Changes

Copilot reviewed 6 out of 6 changed files in this pull request and generated 5 comments.

| File | Description |
| --- | --- |
| test/runtests.jl | Includes the new griditerators.jl in the main test suite |
| test/griditerators.jl | Adds a test set for testloop over 1D–3D grids, checking both results and allocations |
| src/griditerator.jl | Defines the LinearSimplices struct and its iterate implementation |
| src/common.jl | Refactors marching_triangles to use LinearSimplices |
| src/GridVisualize.jl | Adds the ChunkSplitters import, includes griditerator.jl, and exports LinearSimplices |
Comments suppressed due to low confidence (2)

src/griditerator.jl:1

  • [nitpick] Add a docstring for LinearSimplices describing its purpose, type parameters, and keyword arguments (gridscale, nthreads) to improve discoverability and maintainability.

```julia
struct LinearSimplices{D, Tc, Ti, Tf} <: LinearSimplexIterator{D}
```

src/GridVisualize.jl:19

  • Since ChunkSplitters is a new dependency, ensure it is added to Project.toml so the package will install correctly.

```julia
using ChunkSplitters
```

```diff
@@ -4,6 +4,9 @@ import CairoMakie, PyPlot, PlutoVista

CairoMakie.activate!(; type = "svg", visible = false)


include("griditerators.jl")
```

Copilot AI Jun 17, 2025


[nitpick] Use an explicit path with @__DIR__ to ensure the file is found regardless of working directory, e.g., include(joinpath(@__DIR__, "griditerators.jl")).

Suggested change:

```diff
-include("griditerators.jl")
+include(joinpath(@__DIR__, "griditerators.jl"))
```


```julia
nalloc = @allocated sum_f = testloop(ls)

@test nalloc < 256 # allow for some allocations
```

Copilot AI Jun 17, 2025


[nitpick] The magic number 256 is unexplained; consider defining a named constant (e.g. MAX_ALLOC_BYTES) or adding a comment on how this threshold was chosen.

Suggested change:

```diff
-@test nalloc < 256 # allow for some allocations
+@test nalloc < MAX_ALLOC_BYTES # Threshold for memory allocation to ensure efficiency
```


Comment on lines +87 to +92

```julia
    ls = LinearSimplices(grid, func; gridscale)
    return vcat(marching_triangles(ls, levels)...)
end

function GridVisualizeTools.marching_triangles(grids::Vector{ExtendableGrid{Tv, Ti}}, funcs, lines, levels; gridscale = 1.0) where {Tv, Ti}
    coords = [grid[Coordinates] * gridscale for grid in grids]
    cellnodes = [grid[CellNodes] for grid in grids]
    return marching_triangles(coords, cellnodes, funcs, lines, levels)
    all_ls = [LinearSimplices(grids[i], funcs[i]; gridscale) for i in 1:length(grids)]
```

Copilot AI Jun 17, 2025


Here func (a user-provided function) is passed directly as the values vector to LinearSimplices. You likely need to evaluate func at each node coordinate to build a numeric vector before constructing the iterator.


```julia
    coords = [grid[Coordinates] * gridscale for grid in grids]
    cellnodes = [grid[CellNodes] for grid in grids]
    return marching_triangles(coords, cellnodes, funcs, lines, levels)
    all_ls = [LinearSimplices(grids[i], funcs[i]; gridscale) for i in 1:length(grids)]
```

Copilot AI Jun 17, 2025


Similarly, funcs[i] is passed in place of a values vector. You should apply each funcs[i] over the corresponding grid nodes to produce the values vector for LinearSimplices.

Suggested change:

```diff
-all_ls = [LinearSimplices(grids[i], funcs[i]; gridscale) for i in 1:length(grids)]
+all_ls = [LinearSimplices(grids[i], funcs[i](grids[i][Coordinates]); gridscale) for i in 1:length(grids)]
```
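
A hedged sketch of the evaluation step the reviewer describes: building a nodal values vector by applying the function to each node coordinate. The helper name `nodal_values` is illustrative, not part of the PR:

```julia
# Illustrative helper only: evaluate a function at every grid node
# (column of the coordinate matrix) to obtain the nodal values vector
# a continuous-data iterator would consume.
nodal_values(f, coord::AbstractMatrix) = [f(coord[:, i]...) for i in 1:size(coord, 2)]

# Usage: for a 2D coordinate matrix, f receives (x, y) per node.
# coord = [0.0 1.0 0.0; 0.0 0.0 1.0]
# nodal_values((x, y) -> x + y, coord)  # -> [0.0, 1.0, 1.0]
```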



j-fu commented Jun 17, 2025

The idea is that instead of passing a grid and a vector, users would implement a LinearSimplices-style constructor for an iterator over triangles/tets, possibly created on the fly.

At the moment, this PR just replaces the marching_triangles method called in the extensions by one which uses LinearSimplices.
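
For instance (a hedged sketch, not the PR's API), an on-the-fly iterator over subtriangles could split each coarse triangle at its edge midpoints and yield the pieces lazily:

```julia
# Illustrative only: split one triangle into four congruent subtriangles
# by its edge midpoints -- the "created on the fly" idea from the comment.
function subtriangles(p1, p2, p3)
    m12 = (p1 .+ p2) ./ 2
    m23 = (p2 .+ p3) ./ 2
    m31 = (p3 .+ p1) ./ 2
    return ((p1, m12, m31), (m12, p2, m23), (m31, m23, p3), (m12, m23, m31))
end

# Chaining over all cells keeps the whole traversal lazy:
# alltris = Iterators.flatten(subtriangles(tri...) for tri in coarse_triangles)
```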


pjaap commented Jun 18, 2025

I reviewed this approach and I like it in general.

I have two major complaints:

  1. The PR in GridVisualizeTools has diverged from the main branch since the marching_triangles method changed recently. It is unfortunately not straightforward to rebase. This should be done in a small pair-programming session.
  2. The value handling in LinearSimplices. Should this be the general class for the iterators, or just one concept of an iterator? This is the standard continuous-data approach, since it assigns values to nodes shared by multiple triangles. We should discuss how to pass values for each grid cell in a certain order.


j-fu commented Jun 18, 2025

> I reviewed this approach and I like it in general.
>
> I have two major complaints:
>
> 1. The PR in GridVisualizeTools has diverged from the main branch since the marching_triangles method changed recently. It is unfortunately not straightforward to rebase. This should be done in a small pair-programming session.

I think this is natural: we may need to keep two versions of it, one for iterators and one for grid data. Of course they should share as much common code as possible. Same for `marching_tetrahedra`.

> 2. The value handling in LinearSimplices. Should this be the general class for the iterators, or just one concept of an iterator? This is the standard continuous-data approach, since it assigns values to nodes shared by multiple triangles. We should discuss how to pass values for each grid cell in a certain order.

No, this is just an example for two things:

  • Replacement of the marching_* functions for grids with linear simplices in GridVisualize - we would gain multithreading from this.
  • Prototype for ConstantSimplices, DiscontinuousLinearSimplices and whatever users write for higher order elements and general element geometries.

As a result, a new "gridless" core API for GridVisualize could emerge.
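
To make the "gridless" idea concrete, here is a hedged sketch of a consumer that relies only on the iterator contract. Assuming each yielded item carries `coords` and `values` fields (an assumption for illustration, not the PR's specification), such a consumer never needs to see a grid:

```julia
# Illustrative only: any iterator of (coords, values) simplices can be
# consumed without touching a grid -- here, averaging values per simplex.
function mean_per_simplex(simplices)
    return [sum(t.values) / length(t.values) for t in simplices]
end
```

A core API along these lines would let marching_triangles, marching_tetrahedra, and user-defined higher-order iterators share one code path.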
