Call for feedback #859

Closed
lachmanfrantisek opened this issue Nov 4, 2020 · 16 comments
Labels
pinned Ignored by stale-bot.

Comments

@lachmanfrantisek
Member

We have been running our GitHub application and providing you with a CLI to help with upstream-downstream integration for some time now. Some of you have just been onboarded, some have been with us for a long time, and some of you are still not onboarded.

TLDR:

  • We would like to get feedback from you.
  • You can track (and influence) our work on the upstream issues.

Currently, we are working hard on the contribution mechanism for CentOS Stream, but we still want to run and work on the GitHub app. Some work is shared, but not everything. To be transparent and to help us with the triaging (which I am responsible for), we set up a GitHub project board that shows the upstream issues that are:

  • 📆 on our short-term plan (the To do column)
  • 🏃 currently being worked on this sprint (the In progress column)

As you can see, we fix 2-3 issues per sprint. Sometimes even more.

Since there is a lot of work to do and only limited time, we want to work on the most relevant issues. And here is where you can help us:

  • Please give us some feedback.
    • Is our service useful to you? How (un)important is it to you? What are the most important features?
    • Add a comment here or in a separate issue.
  • Let us know whenever you have any problem with the service.
  • 💡 Is there any blocker, missing feature or UX suggestion?
    • Please create an issue and describe how essential it is to you.
    • The more people support an idea, the more likely it is to be implemented.
    • If it is a huge chunk of work that would be usable only for a single package, we will probably not work on it.
  • Be patient and don't expect miracles. Ask us for an update so we know you are still interested.

All of you should have some experience with the service. Sorry if I've tagged anyone who isn't interested.

@michalfabik @mgrabovsky @ernestask @mgrabovsky @xsuchy @MartinBasti @rcerven @athos-ribeiro @crobinso @t8m @sgallagher @ueno @cathay4t @ffmancera @fellipeh @bocekm @sturivny @Honny1 @matejak @jlebon @praiskup @psss @thrix @pvalena @nforro @FrNecas @martinpitt @jkonecny12 @larskarlitski @msehnout @MartinStyk @lukash @clebergnu @beraldoleal

If you are wondering where I got all of you, take a look at our dashboard, which was created as a GSoC project this year. The second project was about GitLab support, which you can see e.g. in this merge request.

@lachmanfrantisek lachmanfrantisek added the pinned Ignored by stale-bot. label Nov 4, 2020
@lachmanfrantisek lachmanfrantisek pinned this issue Nov 4, 2020
@ernestask

Since I have moved on from packaging anything, I can only say that Packit is the bee's knees when it comes to having an integrated, cohesive developer experience.

Fedora is still stuck in the late 90s/early 00s with its processes, but being able to at least close the distance between upstream and downstream was great. With GitLab support, maybe it would even be possible to get major upstreams (Haskell, fd.o, GNOME) involved.

@msehnout

We are not using Packit any more. Since our team owns both the upstream and the downstream, we were looking for a way to keep the two as close as possible. We keep the downstream tarballs as exact snapshots of our upstream repo, so the upstream-downstream synchronization features of Packit don't bring additional value to our project. Building and testing in CI is something we are still working on, but we deploy and develop our own infrastructure. I'm not sure the testing farm would help us.

@lachmanfrantisek
Member Author

@msehnout Thanks for the comment. Here are a few use-cases you might find useful (let us know if you want some help with the setup):

  • Creating downstream pull-requests with the new version to all branches. (e.g. like this).
    • So you don't need to do the changes manually.
  • Copr builds for PRs, branches or releases.
    • As a CI, or an easy way to provide anyone with installable artifacts for PRs,
    • but it's also possible to have a long-term Copr project for any upstream branch. (e.g. We have packit-master and packit-releases.)
  • Scratch builds in koji for upstream PRs/commits/releases.
  • If you don't have any tmt/fmf tests you want to run in testing-farm, we try to install your newly-built package in a fedora/centos-stream/centos image.

We are in a similar situation (i.e. owners of both upstream and downstream, with the same content) and we still find Packit very useful.
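
For illustration, the first two bullets above usually amount to a short `.packit.yaml`. This is only a sketch, not an official Packit example: the package name and spec file path are placeholders, and the job/key names follow the configuration format Packit documented around this time (the `metadata:` nesting and exact names may differ in newer versions).

```yaml
# Hypothetical project; values below are placeholders, not a real Packit setup.
specfile_path: my-package.spec
upstream_package_name: my-package
downstream_package_name: my-package

jobs:
  # Build every pull request in Copr so reviewers get installable artifacts.
  - job: copr_build
    trigger: pull_request
    metadata:
      targets:
        - fedora-rawhide
        - fedora-stable

  # On every upstream release, open dist-git pull requests with the new version.
  - job: propose_downstream
    trigger: release
    metadata:
      dist_git_branches:
        - fedora-all
```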

@msehnout

@lachmanfrantisek Thanks for the suggestions. I think the problem is that we stopped using Packit because of issues with its reliability, and since then we've developed our own infrastructure for almost everything you listed except the creation of downstream PRs. We are also quite greedy when it comes to resources; we are migrating most of our workload to AWS to actually meet our needs. Most of our projects need full VMs to actually execute the tests. Some tests even require nested virtualization. I must admit I haven't read the documentation for the testing farm thoroughly, but running our own infra seems like a good way forward for now.

Please don't take this to mean we don't appreciate the features of Packit, we do! It's just that running our own infrastructure seems like a natural decision given that we actually build OS images for the infrastructure :-)

@lachmanfrantisek
Member Author

@msehnout no problem, thanks for the reasoning. (@thrix is the right person if you want to know more about testing-farm capabilities.)

@thrix
Contributor

thrix commented Nov 12, 2020

@msehnout in terms of CI we also run on AWS, but Packit is in the process of being migrated right now. Nested virt does not work on AWS unless you use bare-metal instances, right?

@martinpitt

I tried the GitHub integration and still ran into several problems (obscure error messages, package: install, tests not running in the git checkout). @thrix was very helpful with guiding me through them, and mentioned that most of these will go away with the pending cluster migration.

I also tried the testing farm API directly, which does not suffer from most of the above problems, but has its own (e.g. test.fmf.ref does not work properly with a branch name, it needs to be a SHA -- again, @thrix said he'll fix that).

Ignoring these bugs, it's a fine, welcome, and easy way to cover Fedora packaging tests, mostly due to the automatic RPM build/Copr integration. I would call it less useful as a generic upstream CI tool; GitHub Actions or Travis are more useful for that, as in those cases you often don't want to go through a separate Copr step (or even an RPM build at all). But Packit isn't meant to replace them, so I think overall this is going in a great direction.

The main missing feature for the projects I work on (Cockpit, Anaconda, kickstart tests) is /dev/kvm support, and the specs of the instances (1 CPU, 2 GiB RAM) are too low to sensibly run browser or virt-install tests. I realize that this is a big ask, and I really don't expect this to change fast, as it's a question of hard $$$ and also difficult in principle (if e.g. AWS doesn't offer nested KVM, then we can't do much about it). I'm just bringing it up as you asked for blockers.

Thanks a lot for your great work here!

@msehnout

@msehnout in terms of CI we also run on AWS, but Packit is in the process of being migrated right now. Nested virt does not work on AWS unless you use bare-metal instances, right?

Correct, we use OpenStack for nested virt, but we are not exactly happy with it.

@crobinso

As a system for testing RPM build + install on Copr targets, it has worked well in my (minimal) experience with python-bugzilla. I plan to enable it for virt-manager for that purpose too.

The first time I attempted adding support to python-bugzilla, the action seemed to hang and not report success in the GitHub UI, and I didn't commit it because I was pressed for time. But then jpopelka added it later and it has worked well, minus one issue: a recently submitted PR from another user triggered Packit permission failures:

python-bugzilla/python-bugzilla#140

Maybe that's a config issue on the python-bugzilla side, though.

@thrix
Contributor

thrix commented Nov 12, 2020

@crobinso those permission errors are expected; I believe you can enable the build by commenting /packit build. This is there to mitigate the risk of somebody doing something nasty, I believe.

@lachmanfrantisek
Member Author

lachmanfrantisek commented Nov 12, 2020

@crobinso thanks for the feedback. Regarding your issue, you can comment /packit build to trigger the build.

The problem here is that we don't allow other contributors to run the (potentially dangerous) code.

edit: Here is an issue to make it more useful: #250

@crobinso

Thanks for the info. If I disable the Copr build step for PRs by default, can /packit build still be used to trigger a build? I'd prefer not to have CI report failure for every PR until I manually intervene.

@lachmanfrantisek
Member Author

@crobinso Unfortunately, that's not how it works now. Removing the job from the config removes it completely.

We've had some discussion about this here but agreed not to implement any manual triggering and to try to fix the main causes instead. We would like to use neutral states in such situations, but we can't use them yet -- we'll use them once they are ready in the library we are using (packit/ogr#461).

But if there is a higher demand for the manual trigger, we can work on it.

@crobinso

For my projects it's not too important that an RPM build is triggered on PRs; it's just a nice-to-have.

@jkonecny12

jkonecny12 commented Dec 4, 2020

Hello everyone,

I'm finally getting to this; I first wanted to check a few more configurations and settings.

We (the Anaconda team) are trying to integrate Packit and we are still at the start, I would say. We already have Copr builds on pull requests for ELN and Rawhide on all important projects. It was a pleasure to find out that enabling ELN builds is just one line in the current configuration, so 👍 for that! It was much more complicated because of the Copr chroots configuration, but that is solved now and is not related to Packit.

I have just migrated the daily builds of our projects, namely Anaconda, python-dasbus and python-simpleline. It was a pleasure to set everything up for persistent builds; it's nice how Packit guides you to the correct configuration of the Copr repository and the Packit configuration.
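
A commit-triggered ("daily build") job like the one described above is typically a short addition to .packit.yaml. The following is only a sketch, not the actual Anaconda configuration; the branch, owner, and project values are placeholders:

```yaml
jobs:
  # Rebuild in a long-term Copr project on every push to the given branch.
  - job: copr_build
    trigger: commit
    metadata:
      branch: master            # placeholder; use the project's default branch
      owner: my-copr-owner      # placeholder Copr owner (a user or @group)
      project: nightly-builds   # placeholder name of the persistent Copr project
      targets:
        - fedora-rawhide
        - fedora-eln
```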

However, I don't want to tell you that everything is golden; I also have plenty of issues. Most of them were solved pretty quickly, as you are really responsive on IRC and GitHub issues! So I'm fine from the Packit PoV here. Unfortunately, my strongest point for Packit still does not work, and that is running tmt tests. These tests could then be shared between Packit (PR testing) and gating. That would be a great benefit for us, provided the TFT cluster isn't broken. I know this is not something the Packit team is responsible for or could really fix; however, it's still my biggest pain point. Hopefully this will be fixed during January with the new TFT solution.
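
Once that works, a tmt-based tests job would presumably be just another entry in the jobs list; a minimal sketch (with the actual test plans living as fmf metadata in the repository, and the same placeholder caveats as above):

```yaml
jobs:
  # Run the repository's tmt/fmf test plans in Testing Farm against the PR build.
  - job: tests
    trigger: pull_request
    metadata:
      targets:
        - fedora-rawhide
```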

The last thing we don't have yet, but which is still pretty interesting, is automation of releases. For that, we first need to automate GitHub releases, which we are not doing right now and which would just be more work for us if not automated. We didn't have it already because of missing changelog support -- resolved now, 👏 👏 👏 for packit/packit#1004! It shouldn't be that hard, but I haven't had time for it yet.

I also want to use Packit for releasing one personal upstream project and doing daily builds. That situation is a bit special because I want to do this from a fork, since upstream is not inclined to have Packit support (totally understandable). However, I can't find the time to finish that.

From my PoV there were obstacles, but your team really tries to solve them when someone points them out. It would have been great to have Packit stable from the start, but I don't see a problem there as you are getting closer and closer. Wishing you good luck!

P.S.: Having RHEL support would be great!
P.P.S.: Also, solving the flakes would be nice, but they don't happen too often, so I'm able to live with them.

@lachmanfrantisek
Member Author

Thanks all of you for your feedback and comments!

We would like to give you a better place for reacting to the ideas being prepared and the dilemmas we have (and to avoid a mess with the notifications). Please take a look at the discussions in the packit repo: https://github.com/packit/packit/discussions

You can easily be notified only about the discussions and not about the overly technical issues in this repository. Just click Custom in the notification settings of the repo:
[Screenshot from 2021-07-26: the repository notification settings showing the Custom option]

Other communication channels are still in place; you can still open issues as you are used to. This is just an additional place for brainstorming and collecting feedback.

Here are the first two topics we have for now:

@packit packit locked and limited conversation to collaborators Mar 24, 2022
@mfocko mfocko converted this issue into a discussion Mar 24, 2022