Dashboard improvements #892

Open · abrichr opened this issue Oct 25, 2024 · 3 comments
Labels: enhancement (New feature or request)

abrichr commented Oct 25, 2024

Feature request

[Screenshot: OpenAdapt dashboard]

Here are some UI/UX improvement suggestions for the OpenAdapt dashboard based on the screenshot:

Visual Hierarchy & Spacing:

- Increase the padding and margin around each action event card to avoid clutter.
- Use dividers or subtle shadows around each action event block to better separate them.
- Reduce the text size for less important details (e.g., timestamps, IDs) and increase it for key elements (e.g., event names).

Color and Contrast:

- Introduce a more sophisticated color palette, using muted backgrounds with higher contrast for text.
- Differentiate between 'press' and 'release' events by adding color codes (e.g., green for 'press', red for 'release'); see the sketch after this list.

Typography:

- Use consistent font sizes and styles across similar elements. For example, make timestamps smaller and slightly less bold.
- Opt for a modern, sans-serif font to improve readability and style.

Button Styling:

- Update the "Remove action event" button to a sleeker, icon-based button or a smaller, modern flat button.
- Use color to indicate action priority (e.g., red for delete, green for save).
- Consider adding hover effects to the buttons for better interactivity feedback.

Event Cards Layout:

- Display key information (like the event name and timestamp) more prominently, while collapsing or hiding less critical data (like canonical key vk) under expandable sections.
- Use collapsible/expandable cards to allow users to toggle details when they need more information.

Header & Sidebar:

- Add icons next to the sidebar items ("Recordings", "Settings", "Scrubbing") for better visual cues.
- Make the active sidebar section more prominent with a highlight or a different background color.
- Introduce a cleaner, more minimalistic header design, potentially with more informative icons and labels.

Overall Aesthetic:

- Switch to a card-based design with rounded corners for a more modern look.
- Soften the overall contrast by using lighter backgrounds for each card and a slightly darker background for the dashboard.
- Introduce animations (e.g., smooth transitions for expanding/collapsing sections or button clicks).

Icons & Tooltips:

- Use icons to represent key elements like keyboard events and mouse clicks for quicker visual identification.
- Add tooltips for elements that may not be immediately clear, especially where space constraints limit detail.

These updates would enhance the application's visual appeal and improve overall usability, making it look modern and polished.
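
To make the color-coding and collapsible-details suggestions concrete, here is a minimal React sketch (hypothetical component, prop shapes, and colors; the real dashboard components will differ):

```jsx
import React, { useState } from "react";

// Hypothetical colors for the press/release color-coding suggestion.
const EVENT_COLORS = {
  press: "#2e7d32",   // green for 'press'
  release: "#c62828", // red for 'release'
};

// Collapsible card for one action event. `event` is a hypothetical shape
// ({ name, timestamp, details }); adapt to the real ActionEvent fields.
const ActionEventCard = ({ event }) => {
  const [expanded, setExpanded] = useState(false);
  const accent = EVENT_COLORS[event.name] || "#999";

  return (
    <div
      style={{
        border: `1px solid ${accent}`,
        borderRadius: 8, // rounded corners for the card-based look
        padding: 16, // generous padding to avoid clutter
        margin: "8px 0",
        boxShadow: "0 1px 3px rgba(0,0,0,0.1)", // subtle separation between cards
      }}
    >
      <div style={{ display: "flex", justifyContent: "space-between" }}>
        {/* Key info (event name) is prominent and color-coded... */}
        <strong style={{ color: accent }}>{event.name}</strong>
        {/* ...while the timestamp is smaller and lighter. */}
        <span style={{ fontSize: "0.8em", color: "#888" }}>{event.timestamp}</span>
      </div>
      <button onClick={() => setExpanded((prev) => !prev)}>
        {expanded ? "Hide details" : "Show details"}
      </button>
      {/* Less critical data (e.g., canonical key vk) is collapsed by default. */}
      {expanded && <pre>{JSON.stringify(event.details, null, 2)}</pre>}
    </div>
  );
};

export default ActionEventCard;
```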

Motivation

UX

abrichr commented Oct 28, 2024

@Animesh404 I believe we want to move away from Next.js as a dependency so that we can support Electron (e.g., Vite + React).

Animesh404 commented

@abrichr but doesn't Next already support Electron?

abrichr commented Oct 30, 2024

> doesn't Next already support Electron?

Not as far as I know.

@Animesh404 I think we want to revisit #761. Essentially we want to paint each action.screenshot to the HTML Canvas while keeping track of action.timestamp, e.g.:

      // Visible canvas, plus an offscreen buffer canvas for double-buffering.
      this.canvas = document.querySelector('#canvas')
      this.bufCanvas = document.createElement('canvas') // must be a separate canvas, not '#canvas' again
      this.$doodle = $(this.doodle)
      this.$canvas = $(this.canvas)
      this.$doodleParent = this.$doodle.parent()
      this.ctx = this.canvas.getContext('2d')
      this.bufCtx = this.bufCanvas.getContext('2d')

Using https://stefanopini.github.io/vatic.js/:

        drawFrame: function(frameNumber) {
          return new Promise((resolve, _) => {
            that.annotatedObjectsTracker.getFrameWithObjects(frameNumber).then(
              (frameWithObjects) => {
                let img = frameWithObjects.img

                // Draw into the offscreen buffer, then blit to the visible canvas.
                that.bufCanvas.width = img.width
                that.bufCanvas.height = img.height
                that.bufCtx.putImageData(img, 0, 0);
                that.ctx.drawImage(that.bufCanvas, 0, 0);

                resolve();
              });
          });
        },
Or some more modern equivalent approach.

Edit: the above code is taken from an unrelated project. I don't think we want to use vatic.js, since it is designed for labelling objects in videos, which we are not doing here.

Some options:

Here's a ChatGPT-generated example using fabric.js:

import React, { useEffect, useRef } from "react";
import { fabric } from "fabric";

const FabricVideoPlayer = ({ base64Images, frameRate }) => {
  const canvasRef = useRef(null);
  const fabricCanvasRef = useRef(null);
  const frameIndexRef = useRef(0);

  useEffect(() => {
    // Initialize the Fabric.js canvas
    fabricCanvasRef.current = new fabric.Canvas(canvasRef.current);

    const fabricCanvas = fabricCanvasRef.current;

    const playFrames = () => {
      const frameIndex = frameIndexRef.current;
      const base64Image = base64Images[frameIndex];

      // Create a new fabric Image from base64 string
      fabric.Image.fromURL(base64Image, (img) => {
        fabricCanvas.clear(); // Clear previous frame
        fabricCanvas.add(img); // Add the new image frame
        fabricCanvas.renderAll(); // Render the canvas
      });

      // Increment the frame index
      frameIndexRef.current = (frameIndex + 1) % base64Images.length;
    };

    // Start the playback loop
    const interval = setInterval(playFrames, 1000 / frameRate);

    return () => {
      clearInterval(interval); // Cleanup on unmount
      fabricCanvas.dispose(); // Remove the canvas instance
    };
  }, [base64Images, frameRate]);

  return <canvas ref={canvasRef} width={800} height={600} />;
};

const App = () => {
  const base64Images = [
    "data:image/jpeg;base64,...", Replace with base64 strings from ActionEvent.Screenshot
    "data:image/jpeg;base64,...",
    "data:image/jpeg;base64,...",
  ];

  return <FabricVideoPlayer base64Images={base64Images} frameRate={24} />;
};

export default App;
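
Note: the callback form of fabric.Image.fromURL shown above matches Fabric.js v5; in Fabric.js v6, fromURL returns a Promise instead, so the drawing code would move into an awaited call or a .then() handler.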

Here's a ChatGPT-generated example using Konva.js for React:

import React, { useEffect, useRef, useState } from "react";
import { Stage, Layer, Image } from "react-konva";

const FramePlayer = ({ frames, frameRate }) => {
  const [currentFrameIndex, setCurrentFrameIndex] = useState(0);
  const imageRef = useRef(null);

  useEffect(() => {
    const interval = setInterval(() => {
      setCurrentFrameIndex((prevIndex) => (prevIndex + 1) % frames.length);
    }, 1000 / frameRate);

    return () => clearInterval(interval);
  }, [frames.length, frameRate]);

  useEffect(() => {
    if (imageRef.current) {
      const img = new window.Image();
      img.src = frames[currentFrameIndex];
      img.onload = () => {
        imageRef.current.image(img);
        imageRef.current.getLayer().batchDraw();
      };
    }
  }, [currentFrameIndex, frames]);

  return (
    <Stage width={800} height={600}>
      <Layer>
        <Image ref={imageRef} />
      </Layer>
    </Stage>
  );
};

const App = () => {
  const frameUrls = [
    "frame1.jpg", // Replace with base64 strings from ActionEvent.Screenshot
    "frame2.jpg",
    "frame3.jpg",
    "frame4.jpg",
  ];

  return <FramePlayer frames={frameUrls} frameRate={24} />;
};

export default App;

See `def base64(self) -> str:` for getting base64-encoded image strings.
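
Note that both generated examples advance frames at a fixed frameRate. Since the goal is to keep track of action.timestamp, here is a minimal sketch of timestamp-driven scheduling (the `frames` array shape is hypothetical, and timestamps are assumed to be in seconds):

```javascript
// Minimal sketch of timestamp-driven playback (not fabric/Konva specific).
// Assumes a hypothetical `frames` array of { src, timestamp } objects, where
// `src` is a base64 data URL and `timestamp` comes from ActionEvent.timestamp.
function playFrames(canvas, frames) {
  const ctx = canvas.getContext("2d");
  const startTimestamp = frames[0].timestamp;
  const startTime = performance.now();
  let index = 0;

  function tick(now) {
    const elapsed = (now - startTime) / 1000; // seconds since playback began

    // Find the latest frame whose recorded offset has already elapsed.
    let latest = -1;
    while (
      index < frames.length &&
      frames[index].timestamp - startTimestamp <= elapsed
    ) {
      latest = index++;
    }

    if (latest >= 0) {
      // For production use, pre-decode images instead of loading per frame.
      const img = new Image();
      img.src = frames[latest].src;
      img.onload = () => ctx.drawImage(img, 0, 0, canvas.width, canvas.height);
    }

    if (index < frames.length) {
      requestAnimationFrame(tick);
    }
  }

  requestAnimationFrame(tick);
}

// Usage:
// playFrames(document.querySelector('#canvas'), frames);
```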
