From 06a220940fcd807fec79a601a27469532611adc3 Mon Sep 17 00:00:00 2001
From: Claudio Ardagna
Date: Thu, 9 Nov 2023 16:30:37 +0100
Subject: [PATCH 1/5] typo

---
 service_composition.tex | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/service_composition.tex b/service_composition.tex
index 962dcf9..8d0c2d7 100644
--- a/service_composition.tex
+++ b/service_composition.tex
@@ -121,7 +121,7 @@ \subsection{Data Protection Annotation \myLambda}\label{sec:nonfuncannotation}
 An access control policy $P$ annotated in a pipeline template $G^{\myLambda,\myGamma}$ is used to filter out those candidate services $s$ that do not match data protection requirements. Specifically, a policy $P_i$ is evaluated to verify whether a candidate service $s_j$ for vertex \vi{i} is compatible with data protection requirements in $P_i$ (\myLambda(\vi{i})). Policy evaluation matches the profile of candidate service $s_j$ with the policy conditions in $P_i$. If the credentials and attributes in the candidate service profile fails to meet the policy conditions, the service is discarded, otherwise it is added to the set of compatible service, which is used in Section~\ref{} to generate the pipeline instance $G^{\theta}$. No policy enforcement is done at this stage.
-\subsection{Functional Annotations}\label{sec:funcannotation}
+\subsection{Functional Annotations \myGamma}\label{sec:funcannotation}
 A proper data management approach must track functional data manipulations across the entire pipeline execution, defining the functional requirements of each service operating on data.
 To this aim, each vertex \vi{i}$\in$\V$_S$ is annotated with a label \myGamma(\vi{i}), corresponding to the functional description $F_i$ of the service $s_i$ represented by \vi{i}. $F_i$ describes the functional requirements on the corresponding service $s_i$, such as API, inputs, expected outputs.

From 03fc7dbcda6ab51fdfac60140c15920745b784a9 Mon Sep 17 00:00:00 2001
From: Claudio Ardagna
Date: Thu, 9 Nov 2023 16:31:16 +0100
Subject: [PATCH 2/5] typo

---
 service_composition.tex | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/service_composition.tex b/service_composition.tex
index 8d0c2d7..e0994f5 100644
--- a/service_composition.tex
+++ b/service_composition.tex
@@ -119,7 +119,7 @@ \subsection{Data Protection Annotation \myLambda}\label{sec:nonfuncannotation}
 \end{description}
 \end{definition}
-An access control policy $P$ annotated in a pipeline template $G^{\myLambda,\myGamma}$ is used to filter out those candidate services $s$ that do not match data protection requirements. Specifically, a policy $P_i$ is evaluated to verify whether a candidate service $s_j$ for vertex \vi{i} is compatible with data protection requirements in $P_i$ (\myLambda(\vi{i})). Policy evaluation matches the profile of candidate service $s_j$ with the policy conditions in $P_i$. If the credentials and attributes in the candidate service profile fails to meet the policy conditions, the service is discarded, otherwise it is added to the set of compatible service, which is used in Section~\ref{} to generate the pipeline instance $G^{\theta}$. No policy enforcement is done at this stage.
+An access control policy $P$ annotated in a pipeline template $G^{\myLambda,\myGamma}$ is used to filter out those candidate services $s$ that do not match data protection requirements. Specifically, a policy $P_i$ is evaluated to verify whether a candidate service $s_j$ for vertex \vi{i} is compatible with data protection requirements in $P_i$ (\myLambda(\vi{i})). Policy evaluation matches the profile of candidate service $s_j$ with the policy conditions in $P_i$. If the credentials and attributes in the candidate service profile fails to meet the policy conditions, the service is discarded, otherwise it is added to the set of compatible service, which is used in Section~\ref{} to generate the pipeline instance $G'$. No policy enforcement is done at this stage.
 \subsection{Functional Annotations \myGamma}\label{sec:funcannotation}
 A proper data management approach must track functional data manipulations across the entire pipeline execution, defining the functional requirements of each service operating on data.

From 8cfa98c0ae5a6463b3d5a7d061e83b2ea36e8576 Mon Sep 17 00:00:00 2001
From: Claudio Ardagna
Date: Thu, 9 Nov 2023 16:33:08 +0100
Subject: [PATCH 3/5] typo

---
 service_composition.tex | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/service_composition.tex b/service_composition.tex
index e0994f5..620d406 100644
--- a/service_composition.tex
+++ b/service_composition.tex
@@ -214,7 +214,7 @@ \subsection{Example}\label{sec:example}
 \section{Pipeline Instance}
 % \subsection{Instance}
 % \hl{ANCHE QUA COME PER IL TEMPLATE PROVEREI A ESSERE UN POCO PIU' FORMALE. GUARDA IL PAPER CHE TI HO PASSATO.}
- We define a \pipeline instantiation technique as a function that takes as input a \pipelineTemplate \tChartFunction and a set $S^c$ of compatible services, one for each vertex \vi{i}$\in$\V, and returns as output a \pipelineInstance \iChartFunction. We recall that compatible services $S^c_i$ are candidate services satisfying data protection annotations \myLambda(\vi{i}), for each \vi{i}$\in$$\V_S$.
+ We define a \pipeline instantiation technique as a function that takes as input a \pipelineTemplate \tChartFunction and a set $S'$ of compatible services, one for each vertex \vi{i}$\in$\V, and returns as output a \pipelineInstance \iChartFunction. We recall that compatible services $S'_i$ are candidate services satisfying data protection annotations \myLambda(\vi{i}), for each \vi{i}$\in$$\V_S$.
 In \iChartFunction, every invocations $\vi{i}$$\in$\V$_S$ contains a service instance, and every branching $v\in\Vplus\bigcup\Vtimes$ is maintained as it is. We formally define our \pipelineInstance as follows.
 \begin{definition}[Pipeline Instance]\label{def:instance}

From b6b741c539113b3f6ac5cd42c07035b96db45891 Mon Sep 17 00:00:00 2001
From: Claudio Ardagna
Date: Thu, 9 Nov 2023 16:39:33 +0100
Subject: [PATCH 4/5] typo

---
 service_composition.tex | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/service_composition.tex b/service_composition.tex
index 620d406..7c2f0a6 100644
--- a/service_composition.tex
+++ b/service_composition.tex
@@ -233,7 +233,7 @@ \subsection{Example}\label{sec:example}
 Condition 1 is needed to preserve the process functionality, as it simply states that each service $s'_i$ must satisfy the functional requirements $F_i$ of the corresponding vertex \vi{i} in the \pipelineTemplate.
 Condition 2 states that each service $s'_i$ must satisfy the policy requirements \P{i} of the corresponding vertex \vi{i} in the \pipelineTemplate.
- We assume that Condition 1 is satisfied for all candidate services and therefore concentrate on Condition 2 in the following.
+ We assume that Condition 1 is satisfied for all candidate services and we therefore concentrate on Condition 2 in the following.
 % Le considerazioni che seguono partono dall'assunto che T sia uguale a T_p U T_f senza lack of generality
@@ -246,12 +246,12 @@ \subsection{Example}\label{sec:example}
 Formally, let us consider a set $S^c$ of candidate services \si{j}, each one annotated with a profile.
 The filtering algorithm is executed for each \si{j}; it is successful if \si{j}'s profile satisfies \myLambda(\vi{i}) as the access control policy \P{i}; otherwise, \si{j} is discarded and not considered for selection.
 The filtering algorithm finally returns a subset $S'\subseteq S^c$ of compatible services, which represent the possible candidates for selection.
- \item \textit{Comparison Algorithm} - Upon retrieving a set $S'$ of compatible services \si{j}, it produces a ranking of these services according to some metrics that evaluates the quality loss introduced by each service when integrated in the pipeline instance. More details about the metrics are provided in Section \ref{sec:metrics}.
+ \item \textit{Comparison Algorithm} - Upon retrieving a set $S'$ of compatible services \si{j}, it produces a ranking of these services according to some metrics that evaluate the quality loss introduced by each service when integrated in the pipeline instance. More details about the metrics are provided in Section \ref{sec:metrics}.
 %Formally, compatible services \si{j}$\in$S' are ranked on the basis of a scoring function. The best service \si{j} is then selected and integrated in $\vii{i}\in \Vp$. There are many ways of choosing relevant metrics, we present those used in this article in Section \ref{sec:metrics}.
 \end{itemize}
- When all vertices $\vi \in V$ have been visited, G' contains a service instance $s'_i$ for each \vii{i}$\in$\Vp, and the \pipelineInstance is complete. We note that each vertex \vii{i} is annotated with a policy \P{i} according to \myLambda. When pipeline instance is triggered, before any services can be executed, policy \P{i} is evaluated and enforced. In case, policy evaluation is \emph{true}, data transformation \TP\ is applied, otherwise a default transformation that delete all data is applied.
+ When all vertices $\vi \in V$ have been visited, G' contains a service instance $s'_i$ for each \vii{i}$\in$\Vp, and the \pipelineInstance is complete. We note that each vertex \vii{i} is annotated with a policy \P{i} according to \myLambda. When pipeline instance is triggered, before any services can be executed, policy \P{i} is evaluated and enforced. In case policy evaluation returns \emph{true}, data transformation \TP$\in$\P{i} is applied, otherwise a default transformation that delete all data is applied.
@@ -259,7 +259,7 @@ \subsection{Example}\label{sec:example}
 \begin{example}\label{ex:instance}
 It includes three key stages in our reference scenario: data anonymization (\vi{1}), data enrichment (\vi{2}), and data aggregation (\vi{3}), each stage with its policy $p$.
 The first vertex (\vi{1}) responsible for data anonymization is associated with three candidate services that satisfy the functional requirements of the first vertex, namely $s_1$, $s_2$ and $s_3$.
- Services $s_1$ and $s_2$ are annotated with a profile that satisfies the data protection requirements in \P{1} and \P{2}, respectively.
- The third service $s_3$ is annotated with a profile that does not satisfy the data protection requirements in \P{3}.
+ Services $s_1$ and $s_2$ are annotated with a profile that satisfies the data protection requirements in \P{1}, respectively.
+ The third service $s_3$ is annotated with a profile that does not satisfy the data protection requirements in \P{1}.
 The filtering algorithm then returns the set $S'=\{s_1,s_2\}$.
 The comparison algorithm is fnally applied to $S'$ and returns a ranking of the services according to quality metrics, where $s_1$ is ranked first. $s_1$ is then selected and integrated in $\vii{1}\in \Vp$.

From 1f0c4884d9c84b443d47e03cce33fbbf9a33da13 Mon Sep 17 00:00:00 2001
From: Claudio Ardagna
Date: Thu, 9 Nov 2023 17:30:53 +0100
Subject: [PATCH 5/5] typo

---
 service_composition.tex | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/service_composition.tex b/service_composition.tex
index 7c2f0a6..f925c8b 100644
--- a/service_composition.tex
+++ b/service_composition.tex
@@ -262,7 +262,7 @@ \subsection{Example}\label{sec:example}
 Services $s_1$ and $s_2$ are annotated with a profile that satisfies the data protection requirements in \P{1}, respectively.
 The third service $s_3$ is annotated with a profile that does not satisfy the data protection requirements in \P{1}.
 The filtering algorithm then returns the set $S'=\{s_1,s_2\}$.
- The comparison algorithm is fnally applied to $S'$ and returns a ranking of the services according to quality metrics, where $s_1$ is ranked first. $s_1$ is then selected and integrated in $\vii{1}\in \Vp$.
+ The comparison algorithm is finally applied to $S'$ and returns a ranking of the services according to quality metrics, where $s_1$ is ranked first. $s_1$ is then selected and integrated in $\vii{1}\in \Vp$.
 The same logic is applied to the \vi{2} and \vi{3}.
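
The paragraphs touched by patches 1, 2 and 4 describe how the filtering algorithm matches the profile of each candidate service s_j (its credentials and attributes) against the conditions of the access control policy P_i annotated on vertex v_i, discarding services that do not match. The Python sketch below is only a minimal illustration of that matching step, not the paper's implementation; the profile layout and the names satisfies and filter_candidates are assumptions made for the example.

# Minimal sketch (assumed data layout): keep a candidate service only if its
# profile -- credentials and attributes -- meets every condition of the access
# control policy P_i annotated on vertex v_i. Discarded services are simply
# dropped: no policy enforcement happens at filtering time.

def satisfies(profile: dict, condition) -> bool:
    # A condition is modelled here as a predicate over the service profile,
    # e.g. lambda p: "anonymization" in p["attributes"].
    return condition(profile)

def filter_candidates(candidates: list[dict], policy_conditions: list) -> list[dict]:
    """Return the set S' of compatible services for one vertex."""
    return [s for s in candidates
            if all(satisfies(s["profile"], c) for c in policy_conditions)]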
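
Patches 3 and 4 describe the instantiation step: for each vertex of the template, the compatible services are ranked by the quality loss they would introduce, and the best-ranked one is selected for the pipeline instance G'. The sketch below builds on filter_candidates above; quality_loss is a placeholder for the metrics the text defers to sec:metrics, which the patches do not define.

# Sketch of instantiating one vertex v_i of the pipeline template:
# 1) filtering algorithm: keep only services compatible with the annotation on v_i;
# 2) comparison algorithm: rank the compatible services S' by quality loss;
# 3) select the top-ranked service s'_i for the pipeline instance G'.
# quality_loss is a stand-in for the metrics of sec:metrics (not defined here).

def instantiate_vertex(candidates, policy_conditions, quality_loss):
    compatible = filter_candidates(candidates, policy_conditions)  # S'
    if not compatible:
        raise ValueError("no candidate service satisfies the policy for this vertex")
    ranking = sorted(compatible, key=quality_loss)  # lower loss ranks first
    return ranking[0]                               # selected service s'_i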
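
Patch 4 also adds that, when the pipeline instance is triggered, each policy P_i is evaluated before its service runs: if the evaluation returns true, the data transformation T_P attached to the policy is applied, otherwise a default transformation that deletes all data is applied. A small sketch of that enforcement step follows, with purely illustrative names and toy data.

# Enforcement at execution time (illustrative names): apply the transformation
# T_P attached to policy P_i when the policy evaluates to true, otherwise fall
# back to the default transformation that deletes all data.

def enforce(policy_holds: bool, transform, data: list) -> list:
    if policy_holds:
        return transform(data)  # apply T_P from P_i
    return []                   # default transformation: delete all data

# Example with a toy anonymization transformation.
records = [{"name": "Alice", "zip": "20100"}, {"name": "Bob", "zip": "20121"}]
anonymize = lambda rows: [{**r, "name": "***"} for r in rows]
print(enforce(True, anonymize, records))   # names masked
print(enforce(False, anonymize, records))  # [] -- all data deleted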