
Asynchronous communication with the Gatekeeper (GTK)


The following page describes a simple Network Slice instantiation, focusing on the asynchronous communication between tng-slice-mngr and tng-gtk-sp that keeps the rest of SONATA from blocking. For more detailed information on how a Network Slice is instantiated, please check the Detailed NSI Instantiation Flow link.

How does the Network Slice Manager know which Network Service it has to update?

To keep track of the multiple services belonging to the different network slices that may exist, the Network Slice Manager uses the "request ID" as a reference.
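
As a minimal sketch of that lookup, assuming the slice manager keeps its Network Slice Instantiation (NSI) records as dictionaries shaped like the "nsr-list" examples further down this page (the helper name is illustrative, not the actual tng-slice-mngr code):

    # Hypothetical helper: find the slice and service a GTK update refers to.
    def find_service_by_request_id(nsi_records, request_id):
        for nsi in nsi_records:
            for nsr in nsi.get("nsr-list", []):
                if nsr.get("requestId") == request_id:
                    return nsi, nsr
        return None, None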

What is the procedure?

When a service instantiation is requested to the Gatekeeper, it generates a request object and answers back with its information, like the following JSON:

 {
  'id': '275cc82e-ca8d-4b6e-95cb-02bc9555b195',
  'params': [],
  'egresses': [],
  'callback': 'http://tng-slice-mngr:5998/api/nsilcm/v1/nsi/4421aa90-8144-4fb3-a49f-53931e398198/instantiation-change',
  'mapping': {
     'network_functions': [
       {
         'vim_id': '6551ac6c-31e6-4c49-8906-b0971122f3a3',
         'vnf_id': 'vnf-A'
       },
       {
         'vim_id': '6551ac6c-31e6-4c49-8906-b0971122f3a3',
         'vnf_id': 'vnf-B'
       }
      ],
     'virtual_links': []
   },
  'service_uuid': 'ec87a0f1-9c1e-4735-9a86-3c22aca9ab66',
  'ingresses': [],
  'name': 'slice_pilot-subnet-1',
  'sla_id': '5dc9aec0-1b97-41fe-8415-5e0bc6409037'
 }
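
As a hedged sketch, a request like the one above could be issued with the standard requests library; the GTK endpoint URL used here (http://tng-gtk-sp:5000/requests) and the exact payload fields are assumptions to adjust to your deployment:

    import requests

    # Assumed GTK service-instantiation endpoint; adjust to your deployment.
    GTK_REQUESTS_URL = "http://tng-gtk-sp:5000/requests"

    def request_service_instantiation(service_uuid, name, callback_url, sla_id=None):
        payload = {
            "service_uuid": service_uuid,
            "name": name,
            "callback": callback_url,  # where the GTK will POST request updates
            "sla_id": sla_id,
            "ingresses": [],
            "egresses": [],
        }
        resp = requests.post(GTK_REQUESTS_URL, json=payload)
        resp.raise_for_status()
        # The GTK answers with the request object shown above, including its "id".
        return resp.json()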

For each Network Service request, the slice-mngr takes the "id" value and keeps it inside the "requestId" field of each service:

  ...
  "nsr-list": [
   {
     "nsrName": "slice_pilot-subnet-1",
     "nsrId": "",
     "subnet-nsdId-ref": "4c7d854f-a0a1-451a-b31d-8447b4fd4fbc",
     "subnet-ref": "ns-pilot",
     "sla-name": "",
     "sla-ref": "",
     "working-status": "INSTANTIATING",
     "requestId": "275cc82e-ca8d-4b6e-95cb-02bc9555b195",
     "vimAccountId:"7ca78bbc-3e94-4f78-ac06-417587a6184c",
     "isshared": True,
     "isinstantiated": False,
     "vld": [
       {"vld-ref": "mgmt"},
       {"vld-ref": "input"},
       {"vld-ref": "output"}
     ]
   }
  ],
  ...
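
Putting the pieces together, storing the reference is a one-field update on the service record (in this sketch, nsr is one entry of the "nsr-list" above and callback_url is the slice-mngr endpoint seen in the first JSON):

    # Keep the GTK request "id" so later updates can be matched to this service.
    gtk_request = request_service_instantiation(
        nsr["subnet-nsdId-ref"], nsr["nsrName"], callback_url)
    nsr["requestId"] = gtk_request["id"]
    nsr["working-status"] = "INSTANTIATING"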

When the Gatekeeper has an update regarding one request, it calls the slice-mngr through an internal endpoint with the latest information of that request. The slice-mngr will look for the slice containing the service associated with the received request_id and update the corresponding service instantiation information. For example, if the received information indicates the request is done (its status is READY):

 {
   'callback': '',
   'created_at': '2018-09-25T12:56:26.754Z',
   'service': {
        'uuid': '4c7d854f-a0a1-451a-b31d-8447b4fd4fbc',
        'version': '0.2',
        'name': 'ns-pilot',
        'vendor': 'eu.5gtango'
    },
   'id': '275cc82e-ca8d-4b6e-95cb-02bc9555b195',
   'ingresses': '[]',
   'status': 'READY',
   'egresses': '[]',
   'request_type': 'CREATE_SERVICE',
   'name': 'slice_pilot-subnet-1',
   'updated_at': '2018-09-25T12:56:26.754Z',
   'customer_uuid': None,
   'error': None,
   'instance_uuid': '50cf498c-d132-4800-98de-b6c4d8604596',
   'blacklist': '[]',
   'sla_id': None
 }
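
A minimal sketch of the receiving side, assuming a Flask app that exposes the callback path seen in the first JSON (/api/nsilcm/v1/nsi/<nsi_id>/instantiation-change) and reuses the find_service_by_request_id helper sketched above; the in-memory storage is illustrative:

    from flask import Flask, jsonify, request

    app = Flask(__name__)
    nsi_records = []  # assumed in-memory copy of the NSI records (illustrative)

    @app.route("/api/nsilcm/v1/nsi/<nsi_id>/instantiation-change", methods=["POST"])
    def instantiation_change(nsi_id):
        update = request.get_json()
        # Match the incoming update to a service through its GTK request "id".
        nsi, nsr = find_service_by_request_id(nsi_records, update["id"])
        if nsr is None:
            return jsonify({"error": "unknown request id"}), 404
        if update["status"] == "READY":
            # Keep the instance reference and mark the service as deployed.
            nsr["nsrId"] = update["instance_uuid"]
            nsr["working-status"] = "INSTANTIATED"
            nsr["isinstantiated"] = True
        elif update["status"] == "ERROR":
            nsr["working-status"] = "ERROR"
        # The completion check sketched further below would run here on "nsi".
        return jsonify({}), 200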

The value in the instance_uuid field is the one the slice-mngr must keep as the reference to the instance (the VMs) created in the datacenter. The final service data model then looks like this:

  ...
  "nsr-list": [
   {
     "nsrName": "slice_pilot-subnet-1",
     "nsrId": "50cf498c-d132-4800-98de-b6c4d8604596",
     "subnet-nsdId-ref": "4c7d854f-a0a1-451a-b31d-8447b4fd4fbc",
     "subnet-ref": "ns-pilot",
     "sla-name": "",
     "sla-ref": "",
     "working-status": "INSTANTIATED",
     "requestId": "275cc82e-ca8d-4b6e-95cb-02bc9555b195",
     "vimAccountId:"7ca78bbc-3e94-4f78-ac06-417587a6184c",
     "isshared": True,
     "isinstantiated": True,
     "vld": [
       {"vld-ref": "mgmt"},
       {"vld-ref": "input"},
       {"vld-ref": "output"}
     ]
   }
  ],
  ...

When all the services of a network slice are in either INSTANTIATED or ERROR status, the Network Slice Manager calls back the Gatekeeper to inform it about the result, so the Gatekeeper can close the request regarding the network slice creation.
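
The completion check and the final callback might look like the sketch below, where the notification URL and payload are assumptions (the real endpoint and schema belong to tng-gtk-sp and may differ):

    import requests

    def check_slice_completion(nsi, gtk_callback_url):
        """Report the slice result back to the GTK once every service has
        reached a final status (INSTANTIATED or ERROR)."""
        statuses = [nsr["working-status"] for nsr in nsi["nsr-list"]]
        if not all(s in ("INSTANTIATED", "ERROR") for s in statuses):
            return  # some services are still being instantiated
        slice_status = "ERROR" if "ERROR" in statuses else "READY"
        requests.post(gtk_callback_url, json={"status": slice_status})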

Steps:

  1. GK calls the slice-mngr and passes a JSON object with the request update.
  2. Slice-mngr gets the correct NetSlice instantiation from the repositories and updates the specific service data.
  3. Checks if all services are READY/ERROR; if true, updates the NetSlice instantiation general status to READY/ERROR.
  4. Sends the updated NetSlice instantiation object to the repositories.
  5. Calls back the GTK to inform that the slice is done, so the GTK can close the request object.