Replies: 6 comments 4 replies
-
This is a fair request that we didn't have reason to prioritize earlier. There were good reasons (cost, security, existing expertise within the team) for choosing the setup that we did. As I'm no longer working on SplinterDB for my day job, I don't have much to offer here in terms of a path forward.
-
Sketch of a proposed solution for this CI-infrastructure item:
The post above shows an example of sending out only a brief pass / fail mail, with minimal content in the body. What we need is a way to push out the output from test.sh so that some off-line triaging can happen. Here is what I'm thinking we can do to achieve that:
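For illustration only, here is a minimal sketch of the general idea of capturing test.sh output for offline triage: run the suite, mirror everything to a log, and split off a short tail suitable for a mail body while the full log becomes a CI artifact. The function name, paths, and the `./test.sh` entry point are assumptions, not the actual machinery being proposed.

```shell
#!/bin/bash
# Sketch: run a test command, capture its full output to a log file,
# and report pass/fail so a notification step can act on it.
set -u

run_and_capture() {
    # $1 = test command to run, $2 = log file to write
    "$1" >"$2" 2>&1
    local status=$?
    # A short tail is enough for a mail body; the full log is the artifact.
    tail -n 50 "$2" > "${2}.mail-body"
    if [ "$status" -eq 0 ]; then
        echo "PASS"
    else
        echo "FAIL (see $2 for diagnostics)"
    fi
    return "$status"
}

# In CI this might be invoked as:
#   run_and_capture ./test.sh /tmp/splinterdb-test.log
```

The full log file could then be published with an artifact-upload step, so that a failure mail only needs to say where to look.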
There may be some more CI-workflow and notification gotchas to work through, but machinery like the above seems like it should work. @rosenhouse -- can you please give this a run in your head and vet this approach? Thanks!
-
When we first set up CI for this project, GitHub Actions were not a viable option due to budget issues. Some of these factors may have changed since then, but I am not sure, and it would take some time to run down and re-evaluate the options. I don't have the bandwidth right now to drive this kind of effort. I'm sorry.
-
I would like to switch us to using GitHub's infrastructure for CI. I have forked splinterdb and written a GitHub workflow for that purpose. You can see it here:

To get it running, I had to fix a few bugs in splinter and also figure out how to cobble together enough disk space on the GitHub runner machines to hold our databases. I also cleaned up our testing script so it no longer leaves databases lying around, and I wrote the workflow to minimize the number of runner minutes we use.

With this workflow we have ample space for our tests (over 100GB of disk, whereas our tests currently use around 25GB), so I think this is a sustainable setup for the future as well. If you like it, then I propose the following:
We could just fork the repo and abandon the vmware/splinterdb repo, but I would prefer to keep the vmware branding. Thoughts?
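As background on the disk-space point above: a common way to reclaim space on GitHub-hosted Ubuntu runners is to delete large preinstalled toolchains that the job does not need. The paths below are typical of `ubuntu-latest` images and this sketch is not taken from the actual workflow, which may do something different; it defaults to a dry run so it is safe to try anywhere.

```shell
#!/bin/bash
# Sketch: free disk space on a CI runner by removing large preinstalled
# toolchains. DRY_RUN=1 by default; set DRY_RUN=0 in CI to actually delete.
set -u

DRY_RUN="${DRY_RUN:-1}"

for dir in /usr/share/dotnet /usr/local/lib/android /opt/ghc; do
    if [ -d "$dir" ]; then
        if [ "$DRY_RUN" -eq 1 ]; then
            echo "would remove $dir"
        else
            sudo rm -rf "$dir"
        fi
    fi
done

# Report what is now available for the test databases.
df -h / | tail -n 1
```

The other half of staying within the space budget is making the test script clean up after itself, e.g. with a shell `trap ... EXIT` that removes scratch databases even when a test fails.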
-
Alright, since the VMware CI infrastructure appears to be non-functional again, I went ahead and switched over to the new CI system. To try it out, do a dummy push to your PR (e.g. pull into your branch the changes that I just made to
-
Sorry for the delay in getting it done. Invite sent.
-
Hi @rosenhouse, @rtjohnso -- I am going through the motions of what it would take to be a non-VMware contributor working in this OSS repo.
As we discussed internally earlier, non-VMware contributors do not have access to the CI-job reports or to the Nimbus VMs used to run these jobs. (E.g., this link will fail to open from a personal non-VMware machine.)
I am trying to foresee the dev workflow in such a situation. If everything passes, that's good. If something fails, what are the plans / mechanisms for making those failures known to the code authors?
Right now there may not be much non-VMware work to contend with. Still, I'm asking whether something simple can be done to email these failure results to the contributor, so that they at least have some errors / diagnostics to look at when tracking down the source of the failures.
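To make the "email the failure results" idea concrete, one possible shape is a post-test step that bundles the tail of the log into a report when the run failed. This is a hedged sketch: the function, paths, and recipient are hypothetical, and the mail transport itself is assumed (CI hosts often have no MTA, in which case an artifact upload or a PR comment serves the same role).

```shell
#!/bin/bash
# Sketch: if the test run failed, bundle recent log output into a
# report that a later step could mail to the contributor.
set -u

notify_on_failure() {
    # $1 = exit status of the test run, $2 = log file, $3 = recipient
    if [ "$1" -eq 0 ]; then
        echo "tests passed; no mail needed"
        return 0
    fi
    {
        echo "SplinterDB CI failure; last 100 lines of the log:"
        tail -n 100 "$2"
    } > /tmp/failure-report.txt
    # A real 'mail -s ... $3 < /tmp/failure-report.txt' would go here
    # if an MTA is available (an assumption).
    echo "would mail /tmp/failure-report.txt to $3"
}
```

Even without working mail delivery, writing the report to a published CI artifact would already give non-VMware contributors something to triage from.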
[ Cc: @ajhconway -- have you tried to do any code work in this repo in your new work situation? What has been your experience, and what are your thoughts on improving this issue? ]