brainstorm workarounds for RHEL 7's rpmbuild static buffer limits #169
Comments
(This is solved for RHEL 8 - we just need to fix it for RHEL 7.)
First, static buffers - lol :D Second, 900 patches?! Whoa, that's equivalent to a fork... Instead of invoking creative hacks in new software to support bugs of old software, I suggest a one-time straightforward hack of simply generating a tarball from the patches branch and using that for RHEL 7. I think the simple script to do this might even live in dist-git. If that's a problem for process reasons (although upstream tarball + 900 patches is the same thing as a different tarball), then your solution is the next option I'd consider.
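A minimal sketch of that one-off approach, assuming the patched tree lives in a branch like ceph-3.2-rhel-patches; the branch name, prefix, and output filename below are placeholders, not anything rdopkg or dist-git prescribes:

```
# Hypothetical one-off script: build a release tarball directly from the
# patches branch, so RHEL 7 never has to apply the patches at rpmbuild time.
git archive --format=tar.gz \
    --prefix=ceph-12.2.8/ \
    --output=ceph-12.2.8-patched.tar.gz \
    ceph-3.2-rhel-patches
```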
I can't resist the urge to mention that I can't quite imagine how a pre-singularity human could effectively manage 900 patches. I'd consider such a number a hint to reconsider the workflow... just sayin' ;)
How would you recommend reconsidering our workflow?
For context: we hit the rpmbuild buffer limit on Friday in our main ceph-3.2-rhel-patches branch, at ~530 downstream patches to ceph v12.2.8, so we need to come up with a solution this week in order to keep that branch going.

In the medium term, we will rebase to a new upstream version in order to drop some of these patches. Rebasing is not something I can force onto the rest of my QE team without a schedule impact, so that will not happen for another month or so. Even after rebasing to the latest upstream version, we will still carry at least 200 patches that we might or might not land upstream.

Long term, we need something that is going to scale to hundreds or even a thousand downstream changes. (Imagine the RHEL kernels.) I considered uploading full Ceph source tarballs on every change, but those are 74 MB each. We've had over a hundred builds for RH Ceph Storage 3.2, so 74 MB x 100 = 7.4 GB of space in the dist-git lookaside cache, and RCM never garbage-collects that system.

We've also encountered some issues with regard to .patch files for binary files. For example, Ceph's dashboard UI has a couple of images (like .png files) that we want to patch downstream for Red Hat, so we need a solution to do that in an automated way.

Today I'm looking at making rdopkg generate a "Source1" tarball that is an overlay of all the changes we need on top of the upstream tarball, so that I can just unpack both tarballs in `%prep`.
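A rough sketch of what the spec side of that overlay idea could look like; the Source1 filename and the exact `%prep` commands are assumptions for illustration, not necessarily what rdopkg would generate:

```
Source0:  ceph-12.2.8.tar.gz
# Hypothetical overlay tarball containing only the downstream-changed files
Source1:  ceph-12.2.8-downstream-overlay.tar.gz

%prep
%setup -q
# Unpack the overlay on top of the upstream source tree, replacing the
# hundreds of individual patch applications that overflow rpmbuild's
# static buffers on RHEL 7.
tar -xzf %{SOURCE1}
```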
Here's my tool that uses rdopkg's APIs to generate the tarball and inject it as Source1: https://github.com/ktdreyer/rdopkg-tar Maybe I can bring this into rdopkg somehow, or else make it a custom rdopkg action (#171)
Cool tool ;) You definitely can bring this into rdopkg. You could always do a custom action, but perhaps this would be worth full integration with the existing actions. So there is
Let me write the docs on how to do all this as part of #171 and then you can test the docs by attempting to integrate this :) I could obviously just tell you how to do it specifically, or do it myself, but allowing anyone to do it with proper docs sounds like a worthy pursuit.
Not doing development for RHEL 7, and there is a workaround.
In RHEL 7, we are reaching the limit of what rpmbuild can do within `%prep` with `%autosetup`. When I try `rdopkg patch` with about 900 patches for ceph, it fails: https://bugzilla.redhat.com/1643991

The RPM maintainer indicated this is a problem with RHEL 7's rpmbuild version using static buffers for things like the `%prep` script.

Maybe we could tar up all the .patch files, ship them as another "SourceX" tarball, and then apply them with a wildcard in `%prep`, instead of using `%autosetup`.

What do you think? Any other ideas to solve this problem?
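A minimal sketch of that wildcard idea, assuming a hypothetical patches tarball shipped as Source1; the filenames and patch strip level are placeholders:

```
Source0:  ceph-12.2.8.tar.gz
# Hypothetical tarball holding all ~900 .patch files
Source1:  ceph-patches.tar.gz

%prep
%setup -q
tar -xzf %{SOURCE1}
# Apply every patch from a small shell loop instead of %autosetup, so the
# generated %prep script stays well under rpmbuild's static buffer limits.
for p in *.patch; do
    patch -p1 --fuzz=0 < "$p"
done
```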