Commit 73ca0a0: Chiara's latest changes
cardagna committed Jun 13, 2024
1 parent 3eaa14e commit 73ca0a0
Showing 6 changed files with 24 additions and 140 deletions.
83 changes: 4 additions & 79 deletions experiment.tex
@@ -1,7 +1,6 @@
\section{Experiments}\label{sec:experiment}
We experimentally evaluated the performance and quality of our methodology (heuristic algorithm in \cref{subsec:heuristics}), and compared it against the exhaustive approach in~\cref{sec:nphard}. In the following,
\cref{subsec:experiments_infrastructure} presents the simulator and experimental settings used in our experiments;
\cref{subsec:experiments_performance} analyses the performance of our solution in terms of execution time; \cref{subsec:experiments_quality} discusses the quality of the best pipeline instance generated by our solution according to the metrics $M_J$ and $M_{JSD}$ in \cref{subsec:metrics}.

\subsection{Testing Infrastructure and Experimental Settings}\label{subsec:experiments_infrastructure}
@@ -30,7 +29,6 @@ \subsection{Testing Infrastructure and Experimental Settings}\label{subsec:exper
\resizebox{0.7\columnwidth}{!}{%
\begin{tikzpicture}[framed]


\node[draw, circle, fill=gray!40,minimum width=0.7cm] (v1) at (1,5.2) {$\vi{1}$};
\node[draw, circle, fill=gray!40,minimum width=0.7cm] (v2) at (3,5.2) {$\vi{2}$};
\node[draw, circle, fill=gray!40,minimum width=0.7cm] (v3) at (5,5.2) {$\vi{3}$};
@@ -58,24 +56,11 @@ \subsection{Testing Infrastructure and Experimental Settings}\label{subsec:exper
\node[draw, rectangle] (s42) at (9,1.7) {$\sii{42}$};
\node[draw, rectangle] (s43) at (9,0) {$\sii{43}$};




\draw[->,line width= 1.2pt] (s2) -- (s11);
\draw[->,dashdotted] (s2) -- (s12);
\draw[->,dashdotted] (s2) -- (s13);

\draw[->,line width= 1pt] (s11) -- (s22);

\draw[->,dashdotted] (s11) -- (s21);
\draw[->,dashdotted] (s11) -- (s23);
@@ -88,7 +73,6 @@ \subsection{Testing Infrastructure and Experimental Settings}\label{subsec:exper
\draw[->,dashdotted] (s13) -- (s22);
\draw[->,dashdotted] (s13) -- (s23);


\draw[->,dashdotted] (s21) -- (s31);
\draw[->,dashdotted] (s21) -- (s32);
\draw[->,dashdotted] (s21) -- (s33);
@@ -97,7 +81,6 @@ \subsection{Testing Infrastructure and Experimental Settings}\label{subsec:exper
\draw[->,dashdotted] (s22) -- (s32);
\draw[->,dashdotted] (s22) -- (s33);


\draw[->,dashdotted] (s23) -- (s31);
\draw[->,dashdotted] (s23) -- (s32);
\draw[->,dashdotted] (s23) -- (s33);
@@ -107,23 +90,17 @@ \subsection{Testing Infrastructure and Experimental Settings}\label{subsec:exper
\draw[->] (v3) -- (v4);
\draw[->] (v4) -- (v5);



\begin{scope}[on background layer]
\draw[thick, dashed, fill=red!10, opacity=0.5]
([shift={(-0.5,0.5)}]s11.north west) rectangle ([shift={(0.5,-0.5)}]s33.south east);



\end{scope}
\begin{scope}[on background layer]
\draw[thick, dashed, fill=red!10, opacity=0.5]
([shift={(-0.5,0.5)}]v2.north west) rectangle ([shift={(0.5,-0.5)}]v4.south east);

\end{scope}


\end{tikzpicture}
}
\caption{Execution example of the sliding-window heuristic with $v$=5 vertices, $s$=3 services per vertex, and \windowsize=3, at step $i$=1.}
@@ -133,7 +110,7 @@ \subsection{Testing Infrastructure and Experimental Settings}\label{subsec:exper
\subsection{Performance}\label{subsec:experiments_performance}
We first measured the performance (execution time) of our exhaustive and heuristic solutions by varying the number of vertices in the pipeline template from 2 to 7 and the number of services per vertex from 2 to 7. \cref{fig:time_window_perce_average} presents our results for both the exhaustive and heuristic solutions.
The exhaustive approach is able to provide the optimal solution for all configurations, but its execution time grows exponentially with the number of vertices and services, making it impractical for large instances. For \windowsize\ from 1 to 3 (step 1), we observed a substantial reduction in execution time, with the heuristic always able to produce an instance in less than $\approx2.7\times10^5$\,ms. The worst heuristic performance (7 vertices, 7 services, \windowsize=6) is $\approx3.8\times10^7$\,ms, still one order of magnitude lower than the best exhaustive performance (7 vertices, 7 services, \windowsize=7), $\approx1.35\times10^8$\,ms.
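The exponential gap between the two strategies can be illustrated with a toy sketch. The service encoding, scoring function, and helper names below are hypothetical illustrations, not the paper's implementation: the exhaustive search scores all $s^l$ full combinations, while the sliding-window heuristic scores only the $s^{\windowsize}$ combinations inside each window before committing one vertex.

```python
from itertools import product

def exhaustive_instantiate(candidates, quality):
    """Score every full combination of services: O(s^l) evaluations."""
    return max(product(*candidates), key=lambda combo: quality(list(combo)))

def sliding_window_instantiate(candidates, quality, w):
    """Sliding-window heuristic sketch: at step i, score all service
    combinations for the w vertices in the window (O(s^w)), commit only
    vertex i's service, then slide: roughly l * s^w evaluations overall."""
    chosen = []
    for i in range(len(candidates)):
        window = candidates[i:i + w]
        best = max(product(*window),
                   key=lambda combo: quality(chosen + list(combo)))
        chosen.append(best[0])
    return chosen

# Toy run: services are numbers, quality is additive (an assumption
# made only for this sketch; the paper's metrics are not additive).
candidates = [[1, 2], [3, 1], [2, 5]]
best_exhaustive = list(exhaustive_instantiate(candidates, sum))
best_greedy = sliding_window_instantiate(candidates, sum, w=1)
```

With an additive toy quality even the greedy window (\windowsize=1) recovers the optimum; the quality degradation for small \windowsize\ reported below arises precisely because the real metrics are not decomposable per vertex.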
\begin{figure}[!t]
\centering
\begin{subfigure}{0.45\textwidth}
\includegraphics[width=\textwidth]{Images/graphs/window_time_performance_qualitative_n7_s7_50_80_n3}
@@ -165,57 +142,11 @@ \subsection{Testing Infrastructure and Experimental Settings}\label{subsec:exper
\end{subfigure}
\label{fig:time_window_perce_average}
\end{figure}

\subsection{Quality}\label{subsec:experiments_quality}
We finally evaluated the quality of our heuristic algorithm for different \windowsize, comparing its results, where possible, with the optimal solution retrieved by executing the exhaustive approach.
The quality $Q$ of the heuristic is normalized in $[0,1]$ by dividing it by the quality $Q^*$ retrieved by the exhaustive approach.


We ran our experiments varying: \emph{i)} the length $l$ of the pipeline template in $[3,7]$, that is, the depth of the pipeline template as the number of vertices composed in a sequence; \emph{ii)} the window size \windowsize\ in $[1,l]$; and \emph{iii)} the number of candidate services for each vertex in the pipeline template in $[2,7]$. Each vertex is associated with a (set of) policies applying a filtering transformation that removes a percentage of data in $[0.5,0.8]$ (\average) or in $[0.2,1]$ (\wide).
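As a sanity check on the size of this experimental space, the grid can be enumerated directly; the helper names and the normalization helper below are our illustration, not code from the paper.

```python
def experimental_grid():
    """Enumerate (l, w, s) configurations: l in [3,7], w in [1,l], s in [2,7]."""
    return [(l, w, s)
            for l in range(3, 8)           # template length l
            for w in range(1, l + 1)       # window size, up to l
            for s in range(2, 8)]          # candidate services per vertex

def normalized_quality(q, q_star):
    """Normalize heuristic quality Q by the exhaustive optimum Q*."""
    return q / q_star

grid = experimental_grid()
```

The grid yields 150 (length, window, services) configurations; each run's quality is then reported as $Q/Q^*$ as described above.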

\cref{fig:quality_window_perce} presents our quality results using metric $M_J$ in \cref{subsec:metrics} for settings \wide\ and \average.
@@ -224,21 +155,15 @@ \subsection{Testing Infrastructure and Experimental Settings}\label{subsec:exper
When considering setting \wide, the greedy approach (\windowsize=1) provides good results on average (from 0.71 to 0.90), while showing substantial quality oscillations in specific runs: between 0.882 and 0.970 for 3 vertices, 0.810 and 0.942 for 4 vertices, 0.580 and 0.853 for 5 vertices, 0.682 and 0.943 for 6 vertices, 0.596 and 0.821 for 7 vertices. This same trend emerges when the window size is $<l/2$, while it starts approaching the optimum when the window size is $\geq l/2$. For instance, when \windowsize=$l$-1, the quality varies between 0.957 and 1.0 for 3 vertices, 0.982 and 1.0 for 4 vertices, 0.986 and 0.998 for 5 vertices, 0.977 and 1.0 for 6 vertices, 0.996 and 1.0 for 7 vertices.

When considering setting \average, the heuristic algorithm still provides good results, limiting the quality oscillations observed for setting \wide\ and approaching the quality of the exhaustive approach even for lower window sizes. The greedy approach (\windowsize=1) provides good results on average (from 0.842 to 0.944), as well as in specific runs: between 0.927 and 0.978 for 3 vertices, 0.903 and 0.962 for 4 vertices, 0.840 and 0.915 for 5 vertices, 0.815 and 0.934 for 6 vertices, 0.721 and 0.935 for 7 vertices.
When \windowsize=$l$-1, the quality varies between 0.980 and 1.0 for 3 vertices, 0.978 and 1.0 for 4 vertices, 0.954 and 1.0 for 5 vertices, 0.987 and 1.0 for 6 vertices, 0.990 and 1.0 for 7 vertices.


\cref{fig:quality_window_qualitative} presents our quality results using metric $M_{JSD}$ in \cref{subsec:metrics} for settings \wide\ and \average.

When considering setting \wide, the greedy approach (\windowsize=1) provides good results on average (from 0.92 to 0.97), limiting the oscillations observed with metric $M_J$; for instance, the quality varies between 0.951 and 0.989 for 3 vertices, 0.941 and 0.988 for 4 vertices, 0.919 and 0.974 for 5 vertices, 0.911 and 0.971 for 6 vertices, 0.877 and 0.924 for 7 vertices.
The worst quality results are obtained with window size equal to 1, while the oscillations become negligible when the window size is $>$2. For instance, when \windowsize=$l$-2, the quality varies between 0.982 and 0.996 for 4 vertices, 0.981 and 0.998 for 5 vertices, 0.988 and 1.0 for 6 vertices, 0.976 and 0.999 for 7 vertices. When \windowsize=$l$-1, the quality varies between 0.987 and 0.998 for 3 vertices, 0.993 and 1.0 for 4 vertices, 0.985 and 0.999 for 5 vertices, 0.997 and 1.0 for 6 vertices, 0.995 and 1.0 for 7 vertices.

When considering setting \average, the greedy approach (\windowsize=1) provides results similar to setting \wide. On average, quality varies from 0.920 to 0.969, limiting oscillations; for instance, the quality varies between 0.951 and 0.989 for 3 vertices, 0.942 and 0.988 for 4 vertices, 0.919 and 0.975 for 5 vertices, 0.912 and 0.972 for 6 vertices, 0.878 and 0.925 for 7 vertices. The \average configuration provides even tighter quality oscillations than the \wide configuration. Notably, the poorest quality outcomes are observed when the window size is set to 1. Conversely, these oscillations become negligible when the window size exceeds 1 in configurations with three and four vertices, and when it exceeds 2 in configurations involving five, six, and seven vertices. For instance, when \windowsize=3, the quality varies between 0.993 and 1.0 for 4 vertices, 0.981 and 0.998 for 5 vertices, 0.982 and 0.997 for 6 vertices, 0.960 and 0.991 for 7 vertices.




Our results suggest that the proposed heuristic well approximates the results obtained by the exhaustive approach. While larger window sizes generally lead to better performance, there exists a breakpoint where the balance between window size and performance is optimized. Beyond this point, the incremental gains in metric values may not justify the additional computational burden or complexity introduced by larger windows. It is worth noting that lower window sizes are more unstable, especially with setting \wide, meaning that the quality varies significantly among different configurations. This effect stabilizes with higher window sizes (e.g., \windowsize$\geq$$l$/2).
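One way to operationalize the breakpoint mentioned above is to stop enlarging the window once the marginal quality gain drops below a threshold. This selection rule and the sample values below are our illustration, shaped like the observed trend (large gains up to roughly $l/2$, then a plateau); they are not a procedure or data from the paper.

```python
def breakpoint_window(quality_by_w, eps=0.01):
    """Smallest window size after which growing the window adds less
    than eps normalized quality (illustrative stopping criterion)."""
    ws = sorted(quality_by_w)
    for prev, cur in zip(ws, ws[1:]):
        if quality_by_w[cur] - quality_by_w[prev] < eps:
            return prev
    return ws[-1]

# Hypothetical normalized qualities per window size for an l=4 template.
quality_by_w = {1: 0.85, 2: 0.95, 3: 0.99, 4: 0.995}
chosen_w = breakpoint_window(quality_by_w)
```

Here the gain from \windowsize=3 to \windowsize=4 is 0.005, below the 0.01 threshold, so the rule settles on \windowsize=3 and avoids paying the extra exponential cost for a negligible quality gain.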
\begin{figure}[H]
\centering