Should we add a 'base' build container so we have a technology agnostic foundation? #26
Comments
I like the idea of having smaller build containers that are more specific and clear in concept, but I wonder whether it's practically worth it. For example, hold up three different technologies as test cases: one based on Node, one based on Ruby (on Rails, most likely), and one based on PHP. I think the Node and Ruby ones stand the best chance of not needing other technology, but I know our PHP ones very frequently need Node at least, and maybe Ruby (though libsass may have removed that). So in my mind, we'd get the benefit of a set of smaller containers for those not working on a PHP-based project, while those working on a PHP-based one would wind up with a build image that already contains everything else they'd need for a more technology-specific project. It might still be valuable to split out a base container with all of the debugging utilities and such, just to keep from needing to rebuild those layers every single time.
I'm not suggesting as part of this that we force build containers to be focused on a single technology stack. Rather, I'm suggesting that the base case of a build container simply be agnostic. We can then have ruby, node, and php/node images based on it. If we have a baseline image already built, it will reduce the uncompressed download size of build images, after the first one is pulled, by hundreds of megabytes. CentOS, gcc, and so on tend to add a lot of stuff under the hood.
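As a rough illustration of that layering (the image name and the choice of Ruby from the distro repos are assumptions for the sake of example), a technology-specific image built on a shared agnostic base could be as small as:

```Dockerfile
# Hypothetical technology-specific build image layered on a shared,
# technology-agnostic base.
FROM outrigger/base

# Ruby from the CentOS base repos; a real image would likely pin a
# newer version via SCL or a version manager instead.
RUN yum -y install ruby ruby-devel \
    && yum clean all
```

Because the base layers are shared, each technology-specific image only adds its own stack on top of layers that are already cached, which is where the hundreds of megabytes of savings would come from.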
I think that the right "level" for a base should be not just "build" images, but other things like one-shot / custom images for db restores or other maintenance-related work. I consider "build" images to be focused around development, whereas I could see cases where we need various production-mode utility images too, and it would be nice to have a common toolset to build off of. I think your vision of a base image could be useful for both. Thoughts?
That seems about right. We call it the "build" container because we're making the point that it's where builds should run, as a sort of mantra marketing for devs. In practice, this is our approach to command-line tools that don't make sense as traditional, hyper-focused containers. The case for "making sense" is partly one of user mindset and partly about breaking down some of the complexity of tools interacting with each other. So given that, does the base image proposed above do too much? Too little?
I think it looks about right. I might add the YAML equivalent of jq and ensure that wget is there, but otherwise it looks like a good cut at it. Now the hard part: what the hell do we NAME it?
Might want to add the aws cli tool as well. |
yq (the jq of YAML) is a wrapper around jq that parses the YAML and converts it in-line to JSON before processing with jq. Both yq and aws-cli are Python-based, which would mean we'd need to add the Python stack to the image. I'm not sure we want to do that, as it will add several dozen megabytes (maybe triple digits?). yq is simple enough that it would make a good golang learning project. :) aws-cli isn't needed by most projects; maybe it should be kept standalone, or we should have an aws variant? As far as names go, busybox is taken. I was thinking of just calling it build-base, but if we want to get more general purpose, we could use outrigger/toolbox, outrigger/utility, or outrigger/tools to keep it simple.
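To make the size trade-off concrete, here is a hedged sketch of what pulling those tools in would look like on a CentOS 7 base (package names are assumptions; the point is the extra Python stack, not a recommendation):

```Dockerfile
# Illustrative only: installing yq and aws-cli via pip drags in the
# Python runtime, pip, and their dependencies, which is the size
# concern discussed above.
FROM centos:7

RUN yum -y install epel-release \
    && yum -y install python-pip \
    && pip install yq awscli \
    && yum clean all
```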
More names: Helm, Terminal, Command. I assume by the request for a name you want this to have its own repo?
As a base for development extension it's pretty close for me. As a base for one-shot utility instances I think it may be a little heavy. It depends on the stance the base image would take on being lean versus developer/user experience. If we're going lean (has that ship already sailed with our choice of image to derive from...?) for one-shot instances, I'd consider dropping make, pv, and git. I could even see dropping curl and the various zip utilities, though I'd probably add them all back as the first layer on top of the base to get to the base for derived build images. If we don't really have a desire to shoot for lean-ness, I'd leave them all in (and consider adding telnet and bind-utils, which I find myself using for debugging all the time). If we're going for more generalized names for the base and want to keep to the nautical theme, perhaps keel.
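A rough sketch of that layering, purely for illustration (the image name and package list are assumptions): the lean core stays minimal for one-shot instances, and the developer conveniences come back as the first derived layer.

```Dockerfile
# Hypothetical "build base" derived from a leaner core image, so that
# one-shot utility images can skip the heavier developer conveniences.
# Assumes the core image already enables EPEL (where pv lives).
FROM outrigger/core

RUN yum -y install make pv git curl wget unzip zip \
    && yum clean all
```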
Our philosophy of build containers is not really aimed at lean, single-purpose services, and I'm not arguing to change that here. The idea of this is more along the lines of a specialized remote server environment, with binary compatibility to production hosts (such that using Alpine could be done, but musl is just different enough that we're avoiding the complication). My thought is to try to keep the image as tidy as possible while providing all the utilities commonly assumed in a build/console environment, especially one that may be activated in a production cloud to perform operational tasks. From that standpoint, I'd say, yes, we should consider adding telnet, bind-utils, and perhaps strace.
One-shot utility instances should probably stick with looking for the right base image in the Docker ecosystem. I usually pick the alpine equivalent of whatever official package is available when Dockerizing a tool. We can always build the lean tools; the problem is those times when the next step is needing to support executing other tools with Docker, which is something we can do in specialized situations but don't want to require people to manage or think about routinely.
I like it.
outrigger-keel is now published, with outrigger/keel:1.0 pinned to a tag and outrigger/keel:dev tracking the latest master. I've been trying to refactor outrigger/build:php70 to lean on it, and I'm running into some yum issues.
I've been thinking about how we present Outrigger as a technology-agnostic solution, but our build container isn't all that agnostic. It's a big pile of stuff aimed at the tools and tech stacks Phase2 tends to work with around PHP & Drupal.
What if we centralized the truly tech-agnostic stuff we want in a build container for any application into a "base", then extended that into others? I've put together a bit of a demonstration in the expando-matic sections below.
Base Dockerfile
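A hedged sketch of what the base could contain, assuming the CentOS 7 foundation and the utility set discussed in this thread (the exact package list is an assumption, not the published file):

```Dockerfile
# Technology-agnostic base: shell and debugging utilities common to any
# build/console environment, with no language stacks included.
FROM centos:7

RUN yum -y install epel-release \
    && yum -y install \
        git make curl wget rsync \
        zip unzip bzip2 \
        jq pv \
        telnet bind-utils strace \
    && yum clean all
```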
Extending php7.0 Dockerfile
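And a hedged sketch of the extension pattern for PHP 7.0 (the repository choice, version pins, and package list are illustrative; the real build image carries considerably more tooling):

```Dockerfile
# PHP 7.0 build image extending the technology-agnostic base.
FROM outrigger/base

# PHP 7.0 from the Remi repository; repo and package choices are illustrative.
RUN yum -y install https://rpms.remirepo.net/enterprise/remi-release-7.rpm yum-utils \
    && yum-config-manager --enable remi-php70 \
    && yum -y install php-cli php-mbstring php-xml composer \
    && yum clean all

# Node.js, since our PHP projects frequently need it for front-end builds;
# the version is illustrative.
RUN curl -sL https://rpm.nodesource.com/setup_6.x | bash - \
    && yum -y install nodejs \
    && yum clean all
```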