
Ansible integration points for Subutai #1

Open
akarasulu opened this issue Jan 15, 2017 · 0 comments

I've been playing with ansible for a few hours to get an idea of how it works. It's pretty neat and I like all the concepts. The python side and speed are horrible though :(. Anyway, I did some research and experiments to see how I could get it to work transparently, whether outside of a Subutai environment or inside one. A playbook should basically work the same when run inside an environment container as when run outside, provided the p2p client is set up.

ActionModule

After some Googling, I discovered the extension points. Ansible is pretty easy to extend. I got the idea that we can create an ActionModule to conditionally modify SSH connections based on whether Ansible is executing inside Subutai or outside of it. Here's the presentation that triggered the idea:

http://www.llabs.io/ansible-action-plugins/#51

Connection Magic for SSH

from ansible import constants as C
from ansible.runner import connection

class ActionModule(object):
    def __init__(self, runner):
        self.runner = runner

        # __init__ gets evaluated twice, and the latter in a nested context,
        # so ignore both None and the empty string
        if C.ANSIBLE_SSH_ARGS is not None and len(C.ANSIBLE_SSH_ARGS):
            self.runner.connector = connection.Connector(self.runner)

        # stash the configured ssh args and blank them for the nested pass
        self.old_ssh_args = C.ANSIBLE_SSH_ARGS
        C.ANSIBLE_SSH_ARGS = ""

    def __del__(self):
        # restore the original ssh args on teardown
        C.ANSIBLE_SSH_ARGS = self.old_ssh_args

    def run(self, conn, tmp, module_name, module_args, inject, **kwargs):
        # ship your module
        return self.runner._execute_module(conn, tmp, module_name,
                                           module_args, inject=inject)
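
For Subutai the same kind of hook could rewrite the connection variables whenever we're outside the environment. Here's a rough sketch of just the rewrite, not working code: inside_subutai() and p2p_interface_ip() are hypothetical helpers (see "When to Trigger P2P" below), and the exact lifecycle point for the rewrite still has to be worked out:

# sketch only: hypothetical helpers, untested
def rewrite_connection(inject):
    if inside_subutai():                   # hypothetical detector, see below
        return inject                      # inside: inventory IPs work as-is
    # outside: redirect ssh through the p2p interface, port = last octet + 10000
    container_ip = inject['ansible_ssh_host']
    inject['ansible_ssh_port'] = 10000 + int(container_ip.rsplit('.', 1)[-1])
    inject['ansible_ssh_host'] = p2p_interface_ip()    # hypothetical
    return inject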

Connecting from the Outside

The tray already does this to allow easy SSH into environment containers. It hits a REST API on the Hub to grab environment information; obviously the user has to be logged in via the tray to get it. The REST call returns the following JSON:

[
  {
    "environment_ttl": 3600,
    "environment_key": "0a35d3419c604404bc45d7b6c85d4213",
    "environment_id": "d7839c45-42ec-4432-90f7-f1adcffc42df",
    "environment_status_desc": "Environment is ready",
    "environment_status": "HEALTHY",
    "environment_name": "foobar-tests",
    "environment_hash": "swarm-d7833c45-42ec-4732-90f7-f1adcf2242df",
    "environment_containers": [
      {
        "container_ip": "172.26.179.3",
        "container_name": "Container1",
        "container_id": "5627AA187449A16128473E366C5631776846594F"
      },
      {
        "container_ip": "172.26.179.4",
        "container_name": "g1",
        "container_id": "EE04E4236DC6FFA17044AF3D3E72746D3762F95C"
      },
      {
        "container_ip": "172.26.179.5",
        "container_name": "g2",
        "container_id": "C57A27CABB4463925DBC7CF2BFA9F357A8F13167"
      }
    ]
  }
]
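
Scripting this is straightforward. A minimal sketch, assuming the requests library; the endpoint URL is hypothetical and the session is presumed to be authenticated the same way the tray authenticates:

# sketch: pull swarm credentials from the Hub and join the swarm; the
# endpoint path here is an assumption, not the documented Hub API
import subprocess
import requests

HUB_ENVIRONMENTS_URL = 'https://hub.subut.ai/rest/v1/environments'  # hypothetical

def join_environment(name, session):
    for env in session.get(HUB_ENVIRONMENTS_URL).json():
        if env['environment_name'] == name:
            subprocess.check_call(['p2p', 'start', '-ip', 'dhcp',
                                   '-hash', env['environment_hash'],
                                   '-key', env['environment_key']])
            return env['environment_containers']
    raise LookupError('no environment named %s' % name)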

From this information we can extract the swarm hash and key, and get a local p2p daemon to connect to the environment. The p2p daemon will put us into the swarm, where we can access the environment via an interface it creates:

$ p2p start -ip dhcp -hash 'swarm-d7833c45-42ec-4732-90f7-f1adcf2242df' -key '0a35d3419c604404bc45d7b6c85d4213'
$ p2p status 
swarm-d7833c45-42ec-4732-90f7-f1adcf2242df | 10.29.223.245
33da5452-db28-11e6-87d4-022d0fae0f03|10.29.223.1|State:Connected|
$ p2p debug
Hash: swarm-d7839c45-42ec-4732-90f7-f1adcffc42df
ID: 81ed0437-db46-11e6-8a34-022d0fae0f03
Interface vptp1, HW Addr: 06:3c:3b:21:97:05, IP: 10.29.223.245
Peers:
	--- 33da5452-db28-11e6-87d4-022d0fae0f03 ---
		HWAddr: 06:08:29:92:b1:ab
		IP: 10.29.223.1
		Endpoint: 54.183.100.182:40622
		Peer Address: 71.125.0.191:36557
		Proxy ID: 5
	--- End of 33da5452-db28-11e6-87d4-022d0fae0f03 ---

As you can see, p2p has set up the interface, so we can now connect to the RH as if from the inside. All it takes is a proper ssh command to get into an environment container once connectivity to the swarm is established. The general formula is:

ssh -p (<last_octet>+10000) <user>@<my-p2p-ip>

Getting into container g1, for example:

ssh -p 10004 <user>@10.29.223.245
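
The formula is trivial to wrap in a helper. A sketch, using the p2p interface address from the p2p status output above (the default user below is just a placeholder):

# sketch: build the ssh command for a container reached through the swarm
def container_ssh_command(container_ip, p2p_ip, user='root'):
    # port = last octet of the container IP + 10000
    port = 10000 + int(container_ip.rsplit('.', 1)[-1])
    return 'ssh -p %d %s@%s' % (port, user, p2p_ip)

# g1 lives at 172.26.179.4 and our p2p interface is 10.29.223.245:
print(container_ssh_command('172.26.179.4', '10.29.223.245'))
# -> ssh -p 10004 root@10.29.223.245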

When to Trigger P2P

The Ansible inventory configuration should presume it is running inside the environment. The inventory IP addresses and/or hostnames Ansible uses to build its SSH commands will be intercepted and modified by our ActionModule, which is invoked before the SSH command is initiated.

A simple test that checks our IP address to see if we're within the environment is enough. We also know we're outside when there's no connectivity at all to the target inventory machines. This will work nicely once this feature is complete: subutai-io/p2p#289. Everything becomes transparent this way, with the playbooks being the same inside or outside of the Subutai environment. Sweet!
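
A sketch of that connectivity check (hypothetical helper; comparing our own interface addresses against the environment subnet would work just as well):

# sketch: if the target's ssh port is already reachable we're effectively
# inside the environment; otherwise we need to trigger p2p first
import socket

def inside_subutai(target_ip, port=22, timeout=2):
    try:
        socket.create_connection((target_ip, port), timeout=timeout).close()
        return True
    except (socket.timeout, socket.error):
        return False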

Quick and Dirty Test

Another option, to do this quick and dirty, is to copy the ssh connection driver to a subutai driver and make in-place changes to handle all the p2p aspects while connecting:

https://github.com/amenonsen/ansible/tree/9921bb9d2002e136c030ff337c14f8b7eab0fc72/lib/ansible/plugins/connections

Here's a nice post about the docker connection driver; it might be a helpful model for creating a connection driver for subutai: http://blog.oddbit.com/2015/10/13/ansible-20-the-docker-connection-driver/
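
As a starting point, something like the following could subclass the stock ssh plugin rather than copying it wholesale. This is an untested sketch against Ansible 2.0's plugin internals, and the p2p gateway discovery is an assumption:

# hypothetical plugins/connection/subutai.py -- untested sketch leaning on
# Ansible 2.0's stock ssh plugin instead of a full copy
import os
from ansible.plugins.connection.ssh import Connection as SSHConnection

class Connection(SSHConnection):
    ''' ssh into Subutai environment containers through the p2p swarm '''

    transport = 'subutai'

    def __init__(self, *args, **kwargs):
        super(Connection, self).__init__(*args, **kwargs)
        # assumption: the p2p interface IP is exported after `p2p start`
        p2p_ip = os.environ.get('SUBUTAI_P2P_IP', '10.29.223.245')
        # container IP like 172.26.179.4 -> port 10004 on the p2p address
        last_octet = int(self.host.rsplit('.', 1)[-1])
        self._play_context.port = 10000 + last_octet
        self.host = p2p_ip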
