
Allow services to be omitted in usage_scenario #556

Open · wants to merge 4 commits into base: main
Conversation

davidkopp
Copy link
Contributor

Quote from an existing source code comment:

technically the usage_scenario needs no services and can also operate on an empty list
This use case is when you have running containers on your host and want to benchmark some code running in them

Actually, it was not possible to use a usage_scenario without services. This PR solves that.

This PR also includes a change to the README in the tests directory to make clear that some additional requirements have to be installed to run the unit tests. I hope that's ok.
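With services omitted, a usage_scenario can operate on containers that are already running on the host. A minimal sketch of such a file (the container name, command, and metadata are placeholders, and the field layout follows the usual usage_scenario structure; check the GMT documentation for the exact schema):

```yaml
---
name: Benchmark an already-running container
author: Jane Doe
description: No services block; the flow targets a container started outside of GMT

flow:
  - name: Run benchmark
    container: my-running-container  # assumed to already exist on the host
    commands:
      - type: console
        command: /bin/sh /tmp/run_benchmark.sh
```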

@ribalba
Copy link
Member

ribalba commented Nov 24, 2023

Hey, thank you for the PR. Looks great. We are currently at a conference, so we will probably take a look next week.

@ArneTR
Copy link
Member

ArneTR commented Nov 28, 2023

Thanks for spotting this. Actually this is a mix of stale info and a design decision that was still lingering.

Technically we do not need the services and can still run the GMT. However, it will fail if you have at least one of the cgroups reporters active, as they need a container ID to work (why else would you start them, right? :) )

So we had the initial idea of making the GMT work without services, but then opted to not implement it and apparently left the comments in the code.

We have internally been thinking if the GMT should not get something like a "monitor" mode. So you start it without a usage scenario, it will automatically detect all containers on the system and just track them until you hit CTRL+C (or a pre-defined time).

My question to you @davidkopp : What did you use the feature in this PR for? What is the use case you have?
And would a monitor mode not be more helpful in the end?

I see the danger that bringing this feature into the GMT can be tricky to understand: forcing a usage_scenario but then having it empty seems counterintuitive.

@davidkopp
Copy link
Contributor Author

Actually, I don't have a special use case for this PR or a monitor feature at the moment. While looking at the code to understand GMT, I noticed this comment and had a closer look, which led to this PR.

@ArneTR
Copy link
Member

ArneTR commented Nov 29, 2023

I would like to leave this PR open and actually see if we can internally derive a concept for a monitor mode / run mode / dev mode.

We have the issue that the GMT should also be usable in a monitor mode, but we also need the functionality to quickly burst through all the steps to see if a usage scenario is working, without the strong overhead of setting up and starting the measurement infrastructure.
And vice versa, we need a mode to just check whether the containers can get set up and queried, without having any metric reporters attached or the system validated.

So in summary:

  • A monitor mode that needs no usage scenario and will just "observe" the current system. Container auto discovery included
  • A "usage scenario testing" mode that starts no metrics reporters, no sleeps but just checks if the "flow" commands work
  • A "configuration testing" mode that executes no "flow" commands, but just checks if the metric providers and tool itself is configured correctly

@ribalba What do you think of these modes?

@davidkopp Do you have any remarks on these ideas? Do you think they would be useful for your work or any future work that you are planning?

@davidkopp
Copy link
Contributor Author

I like the ideas.

For me, a "usage scenario testing" mode would be quite helpful at the moment and would already have saved me some time during the creation and testing of new usage scenarios. Currently, I'm using the flags --skip-system-checks, --dev-repeat-run and --dry-run to make testing of usage scenarios faster. A "usage scenario testing" mode would probably combine these three flags and could leave out some more steps (I haven't checked).
A "usage scenario testing" mode could also help make the tests in test_usage_scenario.py run faster.

Some thoughts regarding the monitor mode that come into my mind:

  • Could be helpful in scenarios that need docker compose features that are currently not supported by GMT
  • How to stop it? With Ctrl+c? After a predefined amount of time?
  • Would it still be possible to generate a helpful report, without the information from a usage_scenario.yml?
  • Would it make sense to provide an option to define a subset of monitored containers? Or would it always monitor every container running on the system?

@ArneTR
Copy link
Member

ArneTR commented Nov 30, 2023

Thanks for the input! I have scheduled it internally for the winter break to make a draft on this.

@ribalba
Copy link
Member

ribalba commented Dec 4, 2023

So the problem I often encounter is that I am working on a usage scenario file and I want to see if everything is set up correctly and the run will work. I am not so much interested in the quality of the data. I have always thought of a "dev" mode and a "measurement" mode. But why not make this as flexible as possible.

@ArneTR
Copy link
Member

ArneTR commented Mar 4, 2024

Update for us internally: This is waiting for monitor mode to be finalized and then will probably be closed

ribalba added the blocked label Mar 4, 2024