Improves Model Evaluation UX so the user better understands the model evaluation type for a run #5509
base: develop
Conversation
Walkthrough

This pull request enhances the model evaluation functionality across both frontend and backend components. The evaluation data structure is updated to include detailed properties such as "type" and "method". The UI now integrates a new `EvaluationIcon` component that renders a distinct icon for each evaluation type.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant U as User
    participant EP as EvaluationPanel
    participant DS as Dataset
    participant EV as EvaluationComponent
    participant EI as EvaluationIcon
    U->>EP: Request evaluation data
    EP->>DS: Fetch evaluation config (type, method)
    DS-->>EP: Return evaluation info
    EP->>EV: Supply enriched evaluation objects
    EV->>EI: Render icons based on type and method
```
Actionable comments posted: 1
🔭 Outside diff range comments (1)
app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/index.tsx (1)

Lines 12-14: 🛠️ Refactor suggestion: Add a TypeScript props interface.

The component is missing TypeScript type definitions for its props.

```diff
+interface NativeModelEvaluationViewProps {
+  data?: {
+    evaluations?: Array<{
+      key: string;
+      id: string;
+      type: string;
+      method?: string;
+    }>;
+    view?: Record<string, any>;
+    statuses?: Record<string, string>;
+    notes?: Record<string, string>;
+    permissions?: Record<string, boolean>;
+    pending_evaluations?: Array<any>;
+  };
+  schema: {
+    view: {
+      on_change_view: any;
+      on_evaluate_model: any;
+      load_evaluation: any;
+      load_evaluation_view: any;
+      set_status: any;
+      set_note: any;
+      load_view: any;
+    };
+  };
+  onChange: (path: string, value: any) => void;
+  layout: any;
+}
+
-export default function NativeModelEvaluationView(props) {
+export default function NativeModelEvaluationView(props: NativeModelEvaluationViewProps) {
```
🧹 Nitpick comments (6)
app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Types.tsx (2)

Lines 1-9: Consider making props readonly for better type safety.

Since props should be immutable, consider making the properties readonly to prevent accidental mutations.

```diff
 export type OverviewProps = {
-  evaluations: EvaluationType[];
+  readonly evaluations: readonly EvaluationType[];
-  onSelect: (key: string, id: string) => void;
+  readonly onSelect: (key: string, id: string) => void;
-  onEvaluate: () => void;
+  readonly onEvaluate: () => void;
-  statuses?: Record<string, string>;
+  readonly statuses?: Readonly<Record<string, string>>;
-  notes?: Record<string, string>;
+  readonly notes?: Readonly<Record<string, string>>;
-  permissions?: Record<string, boolean>;
+  readonly permissions?: Readonly<Record<string, boolean>>;
-  pending_evaluations: PendingEvaluationType[];
+  readonly pending_evaluations: readonly PendingEvaluationType[];
 };
```
Lines 38-43: Make the evaluation type more type-safe using an enum.

Using a string union type is good, but an enum would provide better type safety and autocompletion.

```diff
-export type ConcreteEvaluationType =
-  | "classification"
-  | "detection"
-  | "segmentation"
-  | "regression";
+export enum ConcreteEvaluationType {
+  Classification = "classification",
+  Detection = "detection",
+  Segmentation = "segmentation",
+  Regression = "regression",
+}
```

app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/EvaluationIcon.tsx (2)
Lines 23-34: Simplify icon selection using a lookup object.

The current if-else chain can be replaced with a more maintainable lookup object.

```diff
-  let evalIcon = <Layers />;
-  if (type === "classification" && method === "binary") {
-    evalIcon = <CallSplitOutlinedIcon />;
-  } else if (type === "classification" && method !== "binary") {
-    evalIcon = <Layers />;
-  } else if (type === "detection") {
-    evalIcon = <CrisisAlertOutlinedIcon />;
-  } else if (type === "segmentation") {
-    evalIcon = <PieChartOutlinedIcon />;
-  } else if (type === "regression") {
-    evalIcon = <ShowChartOutlinedIcon />;
-  }
+  const iconMap = {
+    classification: method === "binary" ? CallSplitOutlinedIcon : Layers,
+    detection: CrisisAlertOutlinedIcon,
+    segmentation: PieChartOutlinedIcon,
+    regression: ShowChartOutlinedIcon,
+  };
+  const IconComponent = iconMap[type] || Layers;
+  const evalIcon = <IconComponent />;
```
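To illustrate, the lookup-object pattern from this suggestion can be sketched outside of React as a plain function. This is a minimal sketch, not the panel's implementation: the icon components are replaced with hypothetical string names, and `pickIcon` is an illustrative helper.

```typescript
// Sketch of the lookup-object pattern. The string values stand in for
// the MUI icon components used in EvaluationIcon.tsx.
type EvalType = "classification" | "detection" | "segmentation" | "regression";

function pickIcon(type: string, method?: string): string {
  const iconMap: Record<EvalType, string> = {
    classification: method === "binary" ? "CallSplitOutlined" : "Layers",
    detection: "CrisisAlertOutlined",
    segmentation: "PieChartOutlined",
    regression: "ShowChartOutlined",
  };
  // Unknown types fall back to the generic Layers icon, matching the
  // default branch of the original if-else chain.
  return iconMap[type as EvalType] ?? "Layers";
}

console.log(pickIcon("classification", "binary")); // CallSplitOutlined
console.log(pickIcon("detection")); // CrisisAlertOutlined
console.log(pickIcon("unknown")); // Layers
```

The lookup keeps the type-to-icon mapping in one place, so adding a new task type is a one-line change instead of another else-if branch.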
Lines 46-54: Use theme colors instead of hardcoded values.

Hardcoded colors make it difficult to maintain consistent theming across the application.

```diff
 <IconButton
   size="small"
   sx={{
-    color: "#FFC48B",
+    color: (theme) => theme.palette.warning.light,
     "&:hover": {
       backgroundColor: "transparent",
     },
   }}
 >
```

app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Overview.tsx (1)
Lines 79-85: Add an ARIA label for better accessibility.

The clickable card should have an accessible name for screen readers.

```diff
 <Card
   sx={{ p: 2, cursor: "pointer" }}
+  aria-label={`View evaluation ${eval_key}`}
   onClick={() => {
     onSelect(eval_key, id);
   }}
 >
```
app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/index.tsx (1)

Lines 32-41: Optimize memoization dependencies.

The memoization dependency array could be more specific to prevent unnecessary recalculations.

```diff
   const computedEvaluations = useMemo(() => {
     return evaluations.map(({ key, id, type, method }) => ({
       key,
       id,
       type,
       method,
       description: "The description for evaluation " + key,
       status: "reviewed",
     }));
-  }, [evaluations]);
+  }, [evaluations?.map(e => `${e.key}${e.id}${e.type}${e.method}`).join()]);
```
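The joined-string dependency works because the per-item strings collapse into a single primitive, which React compares by value rather than by reference. A self-contained sketch of that stability property (no React involved; `depKey` is a hypothetical helper for illustration):

```typescript
type Evaluation = { key: string; id: string; type: string; method?: string };

// Hypothetical helper: collapse the fields the memo should react to
// into one primitive string, as in the suggested dependency array.
function depKey(evaluations: Evaluation[]): string {
  return evaluations.map((e) => `${e.key}${e.id}${e.type}${e.method}`).join();
}

const a: Evaluation[] = [{ key: "eval1", id: "1", type: "detection" }];
const b: Evaluation[] = [{ key: "eval1", id: "1", type: "detection" }];

// Different array identities, but identical dep keys, so a memo keyed on
// depKey would not recompute when an equivalent array is recreated.
console.log(a === b); // false
console.log(depKey(a) === depKey(b)); // true
```

Note the trade-off: computing the key itself iterates the array on every render, so this mainly helps when the memoized computation is more expensive than the string build.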
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Evaluation.tsx (7 hunks)
- app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/EvaluationIcon.tsx (1 hunk)
- app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Overview.tsx (5 hunks)
- app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Types.tsx (1 hunk)
- app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/index.tsx (1 hunk)
- plugins/panels/model_evaluation/__init__.py (2 hunks)
🧰 Additional context used
📓 Path-based instructions (1)

`**/*.{ts,tsx}`: Review the TypeScript and React code for conformity with best practices in React, Recoil, GraphQL, and TypeScript. Highlight any deviations.

Applied to:
- app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/index.tsx
- app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/EvaluationIcon.tsx
- app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Types.tsx
- app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Overview.tsx
- app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Evaluation.tsx
⏰ Context from checks skipped due to timeout of 90000ms (6)
- GitHub Check: test / test-python (ubuntu-latest-m, 3.10)
- GitHub Check: test / test-app
- GitHub Check: lint / eslint
- GitHub Check: build / build
- GitHub Check: e2e / test-e2e
- GitHub Check: build
🔇 Additional comments (8)

plugins/panels/model_evaluation/__init__.py (2)

Lines 92-95: LGTM! The changes enhance evaluation metadata.

The addition of `type` and `method` fields to the evaluation dictionary improves the data structure by providing essential information about the evaluation type and method, which aligns with the PR's goal of clarifying evaluation types.

Lines 434-442: LGTM! Efficient implementation of pending evaluations.

The changes:
- Efficiently retrieve the config once and store it in `conf`
- Add `type` and `method` fields to pending evaluations for consistency

app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Evaluation.tsx (6)
Lines 50-52: LGTM! Clean import additions.

The new imports are well-organized:
- `EvaluationIcon` for displaying evaluation type icons
- `ConcreteEvaluationType` for type safety
- Specific import of `get` from lodash follows best practices

Lines 133-139: LGTM! Enhanced type safety and data structure.

The compareKeys array now includes `type` and `method` information for each evaluation key, enabling better type safety and supporting the new icon functionality.

Lines 528-539: LGTM! Improved header layout with evaluation type indicator.

The header now includes an evaluation type icon and maintains proper spacing, enhancing the visual hierarchy and user understanding.

Lines 607-610: LGTM! Enhanced Select component styling.

The added styles improve the alignment of items within the Select input, ensuring consistent vertical alignment.

Lines 637-647: LGTM! Enhanced MenuItem with evaluation type indicators.

The MenuItem now includes:
- Proper destructuring of key, type, and method
- EvaluationIcon with correct type casting
- Clear visual hierarchy with icon and text

Lines 817-824: LGTM! Consistent use of imported get function.

The changes maintain the same functionality while using the specifically imported `get` function, improving code consistency.
```tsx
<EvaluationIcon
  type={type as ConcreteEvaluationType}
  method={method}
/>
```
🛠️ Refactor suggestion

Remove the type casting by ensuring proper type flow.

The cast to `ConcreteEvaluationType` suggests a potential type mismatch that should be handled at the data level.

```diff
-  <EvaluationIcon
-    type={type as ConcreteEvaluationType}
-    method={method}
-  />
+  {isConcreteEvaluationType(type) && (
+    <EvaluationIcon
+      type={type}
+      method={method}
+    />
+  )}
```

Add this type guard function:

```typescript
function isConcreteEvaluationType(type: string): type is ConcreteEvaluationType {
  return ["classification", "detection", "segmentation", "regression"].includes(type);
}
```
What changes are proposed in this pull request?
https://voxel51.atlassian.net/browse/FOEPD-467
test2.mov
How is this patch tested? If it is not, please explain why.
Manually
Release Notes

ME: As a user, I want to understand the model evaluation type for a run.

The Model Evaluation panel currently supports a variety of different task types, which are implicitly defined by the type of the ground truth/predicted labels that the user is evaluating (classification, detection, segmentation, regression). See the next section for details.

This PR introduces iconography for each task type that can be used in various places in the panel to reinforce what the task semantically represents.
Is this a user-facing change that should be mentioned in the release notes?

Yes (see the Release Notes section above).
What areas of FiftyOne does this PR affect?

- `fiftyone`: Python library changes

Summary by CodeRabbit

- New Features: evaluation type and method icons in the Model Evaluation panel.
- Refactor: evaluation data structures enriched with `type` and `method` properties.