Improves Model Evaluation UX so the user better understands the model evaluation type for a run #5509

Open · wants to merge 1 commit into develop

Conversation

@manivoxel51 (Contributor) commented Feb 22, 2025

What changes are proposed in this pull request?

https://voxel51.atlassian.net/browse/FOEPD-467

test2.mov

How is this patch tested? If it is not, please explain why.

Manually

Release Notes

ME: As a user, I want to understand the model evaluation type for a run

The Model Evaluation panel currently supports a variety of different task types, which are implicitly defined by the type of ground truth/predicted labels that the user is evaluating (classification, detection, segmentation, regression). See the next section for details.

This PR introduces iconography for each task type that we can use in various places in the panel to reinforce what the task semantically represents (a rough sketch of such an icon component follows the list below).

  • Introduce evaluation types and their icons
  • Add type icon to the model evaluations run list
  • Add tooltip to the type icon
  • Add type icon to the model evaluation run report/analysis page
  • Add type icon to the “evaluation switcher” so users clearly understand the type and why only evaluations of the same type, rather than all types, are available for comparison.
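
As a rough illustration of the direction (a sketch, not the exact component shipped in this PR), an icon component keyed on evaluation type and method could look like the following. The EvaluationIcon name, the ConcreteEvaluationType union, and the icon choices mirror the files and review comments further down this page; the exact props, imports, and tooltip text are assumptions.

// EvaluationIcon.tsx (illustrative sketch, TSX)
import React from "react";
import { Tooltip } from "@mui/material";
import Layers from "@mui/icons-material/Layers";
import CallSplitOutlinedIcon from "@mui/icons-material/CallSplitOutlined";
import CrisisAlertOutlinedIcon from "@mui/icons-material/CrisisAlertOutlined";
import PieChartOutlinedIcon from "@mui/icons-material/PieChartOutlined";
import ShowChartOutlinedIcon from "@mui/icons-material/ShowChartOutlined";

type ConcreteEvaluationType =
  | "classification"
  | "detection"
  | "segmentation"
  | "regression";

export default function EvaluationIcon(props: {
  type: ConcreteEvaluationType;
  method?: string;
}) {
  const { type, method } = props;
  // Binary classification gets a dedicated icon; all other choices key off the type.
  const icon =
    type === "classification" && method === "binary" ? (
      <CallSplitOutlinedIcon fontSize="small" />
    ) : type === "detection" ? (
      <CrisisAlertOutlinedIcon fontSize="small" />
    ) : type === "segmentation" ? (
      <PieChartOutlinedIcon fontSize="small" />
    ) : type === "regression" ? (
      <ShowChartOutlinedIcon fontSize="small" />
    ) : (
      <Layers fontSize="small" />
    );
  // Tooltip spells out the semantic task, e.g. "classification (binary)".
  return <Tooltip title={method ? `${type} (${method})` : type}>{icon}</Tooltip>;
}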

Is this a user-facing change that should be mentioned in the release notes?

  • No. You can skip the rest of this section.
  • Yes. Give a description of this change to be included in the release
    notes for FiftyOne users.

(Details in 1-2 sentences. You can just refer to another PR with a description
if this PR is part of a larger change.)

What areas of FiftyOne does this PR affect?

  • App: FiftyOne application changes
  • Build: Build and test infrastructure changes
  • Core: Core fiftyone Python library changes
  • Documentation: FiftyOne documentation changes
  • Other

Summary by CodeRabbit

  • New Features

    • Introduced dynamic evaluation icons with descriptive tooltips, providing clear visual cues for various evaluation types such as classification, detection, segmentation, and regression.
  • Refactor

    • Enhanced evaluation data handling to include additional details like type and method, with streamlined layout adjustments for a more informative and consistent user interface.

coderabbitai bot commented Feb 22, 2025

Walkthrough

This pull request enhances the model evaluation functionality across both frontend and backend components. The evaluation data structure is updated to include detailed properties such as "type" and "method". The UI now integrates a new EvaluationIcon component for rendering icons based on these properties. Additionally, the codebase adopts new TypeScript type definitions for clearer evaluation data handling, and helper function imports have been streamlined. Backend evaluation data loading is updated to consistently include the enriched fields.
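
For orientation, the enriched record the walkthrough describes might look roughly like this; only key, id, type, and method are confirmed by the PR, and the comments are assumptions about typical values.

// Illustrative sketch of the enriched evaluation data (TypeScript)
type EvaluationRecord = {
  key: string; // evaluation key shown in the run list
  id: string; // unique identifier of the run
  type: string; // "classification" | "detection" | "segmentation" | "regression"
  method?: string; // e.g. "binary" for binary classification
};

// compareKeys moves from string[] to objects carrying the same metadata,
// so the "evaluation switcher" can render a type icon per option:
type CompareKey = Pick<EvaluationRecord, "key" | "type" | "method">;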

Changes

File(s) — Change Summary

  • app/.../NativeModelEvaluationView/Evaluation.tsx, Overview.tsx, index.tsx — Updated the evaluation data structure (changing compareKeys from string[] to objects with key, type, and method), modified UI rendering to include the new EvaluationIcon, adjusted the mapping logic, and replaced the lodash get usage with a direct import.
  • app/.../NativeModelEvaluationView/EvaluationIcon.tsx — Introduced a new React component that conditionally renders icons (e.g., for classification, detection, segmentation, regression) based on evaluation type and method, with tooltip support.
  • app/.../NativeModelEvaluationView/Types.tsx — New file providing TypeScript definitions for evaluation components, including OverviewProps, EvaluationType, PendingEvaluationType, EvaluationCardProps, and ConcreteEvaluationType (a sketch of the new definitions follows this list).
  • plugins/.../model_evaluation/__init__.py — Modified the backend methods (on_load and load_pending_evaluations) to append "type" and "method" fields to evaluation and pending evaluation objects by fetching the configuration from the dataset.
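
A sketch of what the Types.tsx definitions named above might look like; the type names come from the PR, but the field lists below are assumptions inferred from how they are used elsewhere on this page.

// Types.tsx (illustrative sketch, TypeScript)
export type ConcreteEvaluationType =
  | "classification"
  | "detection"
  | "segmentation"
  | "regression";

// A pending run may not have resolved its type/method yet, so both are optional here.
export type PendingEvaluationType = {
  eval_key: string;
  type?: ConcreteEvaluationType;
  method?: string;
};

// Props for a single card in the evaluations run list.
export type EvaluationCardProps = {
  eval_key: string;
  id: string;
  type: ConcreteEvaluationType;
  method?: string;
  onSelect: (key: string, id: string) => void;
};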

Sequence Diagram(s)

sequenceDiagram
    participant U as User
    participant EP as EvaluationPanel
    participant DS as Dataset
    participant EV as EvaluationComponent
    participant EI as EvaluationIcon

    U->>EP: Request evaluation data
    EP->>DS: Fetch evaluation config (type, method)
    DS-->>EP: Return evaluation info
    EP->>EV: Supply enriched evaluation objects
    EV->>EI: Render icons based on type and method

Suggested labels

enhancement

Suggested reviewers

  • imanjra
  • Br2850
  • lanzhenw

Poem

I'm a rabbit hopping through the code,
New icons and types lighten my load.
Evaluation fields now sparkle with "type" and "method",
In this garden of code, my joy's unbridled.
With every update, my heart does a hop—
Celebrating changes that make bugs stop!

@coderabbitai (bot) left a comment


Actionable comments posted: 1

🔭 Outside diff range comments (1)
app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/index.tsx (1)

12-14: 🛠️ Refactor suggestion

Add TypeScript props interface.

The component is missing TypeScript type definitions for its props.

+interface NativeModelEvaluationViewProps {
+  data?: {
+    evaluations?: Array<{
+      key: string;
+      id: string;
+      type: string;
+      method?: string;
+    }>;
+    view?: Record<string, any>;
+    statuses?: Record<string, string>;
+    notes?: Record<string, string>;
+    permissions?: Record<string, boolean>;
+    pending_evaluations?: Array<any>;
+  };
+  schema: {
+    view: {
+      on_change_view: any;
+      on_evaluate_model: any;
+      load_evaluation: any;
+      load_evaluation_view: any;
+      set_status: any;
+      set_note: any;
+      load_view: any;
+    };
+  };
+  onChange: (path: string, value: any) => void;
+  layout: any;
+}
+
-export default function NativeModelEvaluationView(props) {
+export default function NativeModelEvaluationView(props: NativeModelEvaluationViewProps) {
🧹 Nitpick comments (6)
app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Types.tsx (2)

1-9: Consider making props readonly for better type safety.

Since props should be immutable, consider making the properties readonly to prevent accidental mutations.

 export type OverviewProps = {
-  evaluations: EvaluationType[];
+  readonly evaluations: readonly EvaluationType[];
-  onSelect: (key: string, id: string) => void;
+  readonly onSelect: (key: string, id: string) => void;
-  onEvaluate: () => void;
+  readonly onEvaluate: () => void;
-  statuses?: Record<string, string>;
+  readonly statuses?: Readonly<Record<string, string>>;
-  notes?: Record<string, string>;
+  readonly notes?: Readonly<Record<string, string>>;
-  permissions?: Record<string, boolean>;
+  readonly permissions?: Readonly<Record<string, boolean>>;
-  pending_evaluations: PendingEvaluationType[];
+  readonly pending_evaluations: readonly PendingEvaluationType[];
 };

38-43: Make evaluation type more type-safe using an enum.

Using a string union type is good, but an enum would provide better type safety and autocompletion.

-export type ConcreteEvaluationType =
-  | "classification"
-  | "detection"
-  | "segmentation"
-  | "regression";
+export enum ConcreteEvaluationType {
+  Classification = "classification",
+  Detection = "detection",
+  Segmentation = "segmentation",
+  Regression = "regression",
+}
app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/EvaluationIcon.tsx (2)

23-34: Simplify icon selection using a lookup object.

The current if-else chain can be replaced with a more maintainable lookup object.

-  let evalIcon = <Layers />;
-  if (type === "classification" && method === "binary") {
-    evalIcon = <CallSplitOutlinedIcon />;
-  } else if (type === "classification" && method !== "binary") {
-    evalIcon = <Layers />;
-  } else if (type === "detection") {
-    evalIcon = <CrisisAlertOutlinedIcon />;
-  } else if (type === "segmentation") {
-    evalIcon = <PieChartOutlinedIcon />;
-  } else if (type === "regression") {
-    evalIcon = <ShowChartOutlinedIcon />;
-  }
+  const iconMap = {
+    classification: method === "binary" ? CallSplitOutlinedIcon : Layers,
+    detection: CrisisAlertOutlinedIcon,
+    segmentation: PieChartOutlinedIcon,
+    regression: ShowChartOutlinedIcon,
+  };
+  const IconComponent = iconMap[type] || Layers;
+  const evalIcon = <IconComponent />;

46-54: Use theme colors instead of hardcoded values.

Hardcoded colors make it difficult to maintain consistent theming across the application.

       <IconButton
         size="small"
         sx={{
-          color: "#FFC48B",
+          color: (theme) => theme.palette.warning.light,
           "&:hover": {
             backgroundColor: "transparent",
           },
         }}
       >
app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Overview.tsx (1)

79-85: Add ARIA label for better accessibility.

The clickable card should have an accessible name for screen readers.

       <Card
         sx={{ p: 2, cursor: "pointer" }}
+        aria-label={`View evaluation ${eval_key}`}
         onClick={() => {
           onSelect(eval_key, id);
         }}
       >
app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/index.tsx (1)

32-41: Optimize memoization dependencies.

The memoization dependency array could be more specific to prevent unnecessary recalculations.

   const computedEvaluations = useMemo(() => {
     return evaluations.map(({ key, id, type, method }) => ({
       key,
       id,
       type,
       method,
       description: "The description for evaluation " + key,
       status: "reviewed",
     }));
-  }, [evaluations]);
+  }, [evaluations?.map(e => `${e.key}${e.id}${e.type}${e.method}`).join()]);
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ad05c8d and 06723cd.

📒 Files selected for processing (6)
  • app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Evaluation.tsx (7 hunks)
  • app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/EvaluationIcon.tsx (1 hunks)
  • app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Overview.tsx (5 hunks)
  • app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Types.tsx (1 hunks)
  • app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/index.tsx (1 hunks)
  • plugins/panels/model_evaluation/__init__.py (2 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.{ts,tsx}: Review the Typescript and React code for conformity with best practices in React, Recoil, Graphql, and Typescript. Highlight any deviations.

  • app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/index.tsx
  • app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/EvaluationIcon.tsx
  • app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Types.tsx
  • app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Overview.tsx
  • app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Evaluation.tsx
⏰ Context from checks skipped due to timeout of 90000ms (6)
  • GitHub Check: test / test-python (ubuntu-latest-m, 3.10)
  • GitHub Check: test / test-app
  • GitHub Check: lint / eslint
  • GitHub Check: build / build
  • GitHub Check: e2e / test-e2e
  • GitHub Check: build
🔇 Additional comments (8)
plugins/panels/model_evaluation/__init__.py (2)

92-95: LGTM! The changes enhance evaluation metadata.

The addition of type and method fields to the evaluation dictionary improves the data structure by providing essential information about the evaluation type and method, which aligns with the PR's goal of clarifying evaluation types.


434-442: LGTM! Efficient implementation of pending evaluations.

The changes:

  1. Efficiently retrieve the config once and store it in conf
  2. Add type and method fields to pending evaluations for consistency
app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Evaluation.tsx (6)

50-52: LGTM! Clean import additions.

The new imports are well-organized:

  • EvaluationIcon for displaying evaluation type icons
  • ConcreteEvaluationType for type safety
  • Specific import of get from lodash follows best practices (illustrated in the sketch below)
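
The distinction behind that last bullet, sketched with hypothetical values (the exact import form used in the PR is not shown in this excerpt):

// Pulls the whole lodash package into the module graph:
//   import { get } from "lodash";
// Path-specific import, the form generally preferred for bundle size:
import get from "lodash/get";

// Typical usage: safe nested access with a default value.
const method: string = get({ config: { method: "binary" } }, "config.method", "unknown");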

133-139: LGTM! Enhanced type safety and data structure.

The compareKeys array now includes type and method information for each evaluation key, enabling better type safety and supporting the new icon functionality.
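
A minimal sketch of that shape change, with illustrative names (the real mapping lives in Evaluation.tsx):

type EvaluationEntry = { key: string; type: string; method?: string };
declare const evaluations: EvaluationEntry[]; // supplied by the panel state

// Before: compareKeys was just the list of keys.
const compareKeysBefore: string[] = evaluations.map((e) => e.key);

// After: each entry also carries the metadata needed to render an EvaluationIcon.
const compareKeysAfter = evaluations.map(({ key, type, method }) => ({ key, type, method }));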


528-539: LGTM! Improved header layout with evaluation type indicator.

The header now includes an evaluation type icon and maintains proper spacing, enhancing the visual hierarchy and user understanding.


607-610: LGTM! Enhanced Select component styling.

The added styles improve the alignment of items within the Select input, ensuring consistent vertical alignment.
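
One common way to get that alignment in MUI, shown here only as a hedged guess at the kind of styles the reviewer means (the actual selectors in the PR are not visible in this excerpt):

// Centers the icon + label rendered inside the Select's value area.
const selectSx = {
  "& .MuiSelect-select": {
    display: "flex",
    alignItems: "center",
    gap: 0.5,
  },
};
// usage: <Select sx={selectSx}>…</Select>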


637-647: LGTM! Enhanced MenuItem with evaluation type indicators.

The MenuItem now includes (sketched below):

  • Proper destructuring of key, type, and method
  • EvaluationIcon with correct type casting
  • Clear visual hierarchy with icon and text
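
A hedged sketch of that MenuItem rendering; the import paths and helper name are assumptions, and the compareKeys/EvaluationIcon shapes follow the earlier sketches on this page.

// CompareOptions.tsx (illustrative sketch, TSX)
import React from "react";
import { MenuItem, Stack, Typography } from "@mui/material";
import EvaluationIcon from "./EvaluationIcon"; // path assumed
import type { ConcreteEvaluationType } from "./Types"; // path assumed

type CompareKey = { key: string; type: string; method?: string };

export function renderCompareOptions(compareKeys: CompareKey[]) {
  return compareKeys.map(({ key, type, method }) => (
    <MenuItem key={key} value={key}>
      <Stack direction="row" spacing={1} alignItems="center">
        {/* The cast mirrors the pattern the reviewer describes; a type guard could remove it */}
        <EvaluationIcon type={type as ConcreteEvaluationType} method={method} />
        <Typography>{key}</Typography>
      </Stack>
    </MenuItem>
  ));
}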

817-824: LGTM! Consistent use of imported get function.

The changes maintain the same functionality while using the specifically imported get function, improving code consistency.

Comment on lines +92 to +95
<EvaluationIcon
  type={type as ConcreteEvaluationType}
  method={method}
/>

🛠️ Refactor suggestion

Remove type casting by ensuring proper type flow.

The type casting to ConcreteEvaluationType suggests a potential type mismatch that should be handled at the data level.

-            <EvaluationIcon
-              type={type as ConcreteEvaluationType}
-              method={method}
-            />
+            {isConcreteEvaluationType(type) && (
+              <EvaluationIcon
+                type={type}
+                method={method}
+              />
+            )}

Add this type guard function:

function isConcreteEvaluationType(type: string): type is ConcreteEvaluationType {
  return ["classification", "detection", "segmentation", "regression"].includes(type);
}
