}

@Article{johnson1975,
  author   = {Johnson, Donald B.},
  title    = {Finding All the Elementary Circuits of a Directed Graph},
  journal  = {SIAM Journal on Computing},
  year     = {1975},
  volume   = {4},
  number   = {1},
  pages    = {77-84},
  doi      = {10.1137/0204007},
  url      = {https://doi.org/10.1137/0204007},
  abstract = {An algorithm is presented which finds all the elementary circuits of a directed graph in time bounded by $O((n + e)(c + 1))$ and space bounded by $O(n + e)$, where there are n vertices, e edges and c elementary circuits in the graph. The algorithm resembles algorithms by Tiernan and Tarjan, but is faster because it considers each edge at most twice between any one circuit and the next in the output sequence.},
}
Detecting all loops (cycles) in a graph can easily be done by \gls{dfs} on the graph. Alternatively, the problem can be formulated as a hitting set problem.
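For illustration, a \gls{dfs}-based enumeration of all elementary cycles, in the spirit of the algorithms by Tiernan and Johnson, might be sketched as follows. This is only a hedged sketch: the adjacency-dict representation and all names are our own illustration, not the implementation used by any particular tool, and vertices are assumed to be orderable (e.g.\ integers).

```python
def simple_cycles(graph):
    """Enumerate all elementary cycles of a directed graph.

    `graph` maps each vertex to a list of its successors. Every cycle
    is reported exactly once, rooted at its smallest vertex: the DFS
    started at vertex s only visits vertices strictly greater than s.
    """
    cycles = []

    def dfs(start, v, path, on_path):
        for w in graph.get(v, []):
            if w == start:
                # closing edge back to the root: `path` is a cycle
                cycles.append(path[:])
            elif w > start and w not in on_path:
                on_path.add(w)
                path.append(w)
                dfs(start, w, path, on_path)
                path.pop()
                on_path.remove(w)

    for s in sorted(graph):
        dfs(s, s, [s], {s})
    return cycles

# Example: 1 -> 2 -> 1 and 1 -> 2 -> 3 -> 1 are the two elementary cycles.
g = {1: [2], 2: [1, 3], 3: [1]}
assert simple_cycles(g) == [[1, 2], [1, 2, 3]]
```

Unlike Johnson's algorithm, this naive sketch does no blocking and may revisit dead branches, so its worst case is exponential even relative to the number of cycles; it merely illustrates the DFS idea.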
one symbol is replaced by another is denoted with $[\cdot/\cdot]$. For example, $(x<y)[x/1] = (1 < y)$. For a finite set $A = \{x_1,\dots,x_n\}$, the $n$ consecutive substitutions $[x_i/y(i)]$, where the replacement expression $y$ varies only in some index $i$, are denoted using a shortened notation: $[x_i/y(i)]_{x_i \in A} = [x_1/y(1)] \dots [x_n/y(n)]$. For example, $[x_i/i]_{x_i\in A} = [x_1/1]\dots[x_n/n]$ is the substitution where every variable is replaced by its index in $A = \{x_1, \dots, x_n\}$. The restriction of a function $f: A \rightarrow B$ to a smaller domain $A' \subseteq A$ is denoted by $f|A' : A' \rightarrow B$.
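The substitution operator can be made concrete on a toy expression representation. The following sketch uses nested tuples `(op, lhs, rhs)` with variables as strings; this encoding is our own illustration, not the thesis' formalism. It applies the mapping simultaneously, which coincides with the consecutive substitution $[x_1/y(1)]\dots[x_n/y(n)]$ whenever the replacement expressions contain no variables of $A$.

```python
def subst(expr, mapping):
    """Apply the substitution [x_i / y_i] for every pair x_i -> y_i
    in `mapping` to `expr`, where expressions are either variables
    (strings), constants, or tuples (op, lhs, rhs)."""
    if isinstance(expr, str):          # a variable: replace if mapped
        return mapping.get(expr, expr)
    if isinstance(expr, tuple):        # an operator: substitute recursively
        op, lhs, rhs = expr
        return (op, subst(lhs, mapping), subst(rhs, mapping))
    return expr                        # a constant: unchanged

# (x < y)[x/1] = (1 < y)
assert subst(('<', 'x', 'y'), {'x': 1}) == ('<', 1, 'y')
```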
This section will define various types of constraints that are special cases of formulas of \gls{fo} logic. In the following, $\V$ denotes a finite set of variables. The set of quantifier-free formulas of first-order logic over a variable set $A \subseteq \V$ is denoted by $\Phi_A$, or simply $\Phi$ when $A=\V$. For the foundations of \gls{fo} logic we refer to the literature \cite{barwise1977}.
Using the $\models$ and $\equiv$ relations from \gls{fo} logic, the notion of satisfiability is defined \cite{barwise1977}.
\begin{definition}[Satisfiability \cite{barwise1977}]\label{def:satisfiability}
  For a domain $A$ and a structure $\mathfrak{A} = (A, +, \cdot, 0, 1, \leq, <, =)$ we say a variable assignment $\beta: \V \rightarrow A$ \textit{satisfies} the formula $\phi \in \Phi$ when $\mathfrak{A}, \beta \models \phi$.
  $\phi$ is \textit{satisfiable} when an assignment $\beta: \V \rightarrow A$ exists such that $\mathfrak{A}, \beta \models \phi$. When every assignment satisfies $\phi$, one writes $\mathfrak{A} \models \phi$.
\end{definition}
Analogously, the set of \textit{strict linear inequality} constraints is defined as the set
\begin{align*}
  \Lin^{<} = \set{a_0 + \sum_{i=1}^{n} a_i v_i < 0}{a_i \in \R}.
\end{align*}
The union of strict and non-strict linear constraints is called the set of \textit{mixed linear inequality} constraints, denoted $\Lin = \Lin^{\leq} \union \Lin^{<}$.
whereas the set of \textit{strict polynomial inequality} constraints is defined as the set
\begin{equation*}
  \C^< = \set{e < 0}{e \in \Z[\V]},
\end{equation*}
and the set of \textit{mixed polynomial inequality} constraints is the set $\C = \C^{\leq} \union \C^{<}$.
Strict and non-strict linear constraints are subsets of strict and non-strict polynomial constraints, respectively. For the most part we will simply refer to \enquote{constraints} when the distinction between strict and non-strict, or linear and polynomial, constraints is unimportant.
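To make the constraint classes concrete, one could represent a polynomial constraint $e \leq 0$ or $e < 0$ with $e \in \Z[\V]$ by a polynomial together with a strictness flag; a linear constraint is then just the special case $e = a_0 + \sum_{i=1}^{n} a_i v_i$. The following sketch is purely illustrative (names and representation are our own, not KoAT's): it evaluates the polynomial as a Python function of the assignment.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Constraint:
    """A polynomial constraint `e <= 0` (strict=False) or `e < 0`
    (strict=True), with e represented as a function of the assignment."""
    poly: Callable[[Dict[str, int]], int]
    strict: bool

    def holds(self, beta: Dict[str, int]) -> bool:
        value = self.poly(beta)
        return value < 0 if self.strict else value <= 0

# The linear non-strict constraint  -3 + x + 2y <= 0:
c = Constraint(lambda b: -3 + b['x'] + 2 * b['y'], strict=False)
assert c.holds({'x': 1, 'y': 1})        # -3 + 1 + 2 = 0 <= 0
assert not c.holds({'x': 2, 'y': 1})    # -3 + 2 + 2 = 1 > 0
```

Note how the strict variant of the same polynomial would reject the boundary assignment, reflecting the difference between $\Lin^{\leq}$ and $\Lin^{<}$.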
Constraints are a subset of formulas in \gls{fo} logic; thus, their semantics shall be defined accordingly.
In this thesis we will only consider integer arithmetic. Let $\Zs := (\Z, +, \cdot, 0, 1, \leq, <, =)$ be the structure of standard integer arithmetic. For an assignment $\beta : \V \rightarrow \Z$ and a formula $\varphi \in \Phi$ we write $\beta \models \varphi$ instead of $\Zs, \beta \models \varphi$.
A finite set of constraints $C \subseteq \C$ is satisfiable when the conjunction of its elements is satisfiable, and we write $\beta \models C$ instead of $\beta \models \LAnd_{\psi_i \in C} \psi_i$.
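Since only integer arithmetic is considered, satisfiability of a finite constraint set can at least be approximated by searching a bounded box of assignments. The sketch below is purely illustrative and incomplete (a solution may lie outside the box; real analyses use SMT solvers); all names are ours.

```python
from itertools import product

def satisfiable(constraints, variables, bound=5):
    """Search for an integer assignment beta with |beta(v)| <= bound
    that satisfies every constraint, i.e. beta |= /\\ psi_i.
    Constraints are given as predicates on the assignment dict.
    Returns a satisfying assignment, or None if none exists in the box."""
    for values in product(range(-bound, bound + 1), repeat=len(variables)):
        beta = dict(zip(variables, values))
        if all(psi(beta) for psi in constraints):
            return beta
    return None

# C = { x - y < 0, y - 2 <= 0 } is satisfiable:
C = [lambda b: b['x'] - b['y'] < 0, lambda b: b['y'] - 2 <= 0]
assert satisfiable(C, ['x', 'y']) is not None
# { x < 0, -x < 0 } has no solution:
assert satisfiable([lambda b: b['x'] < 0, lambda b: -b['x'] < 0], ['x']) is None
```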
Two formulas, constraints, or sets of constraints $\varphi_1$ and $\varphi_2$ are considered \textit{equivalent}, denoted $\varphi_1 \equiv \varphi_2$, when $\llbracket \varphi_1 \rrbracket = \llbracket \varphi_2 \rrbracket$.
(\varphi[x'/u(x)(s)]_{x\in A}) \Land (\LAnd_{x\in A} x'[x'/u(x)(s)]_{x\in A} = [x'/u(x)(s)]_{x\in
The set of all Markovian schedulers for a program $\Prog$ is denoted by $\MDS^\Prog$, where \enquote{MD} stands for \enquote{Markovian
computer science known to be undecidable. The so-called Halting Problem, which is strongly related to the search for upper and lower runtime-complexity bounds, is equally undecidable. Nevertheless, the answers to those questions are very important in practice. For example, a compiler might want to warn the programmer about faulty code sections that were not explicitly marked to run indefinitely, or a developer could be required to prove the efficiency of their program in critical scenarios. In an ever more complex world, with ever more complex algorithms, the need for automatic tools arose. Even though the question of runtime complexity cannot be answered in the general case, many tools have been developed to automatically analyze the complexity of various programming paradigms as tightly and quickly as possible.
\caption{Program with runtime-complexity bounds that are hard to find using only \gls{mprf} \cite{giesl2022arxiv}.}
\caption{Partial evaluation of a classic integer program whose runtime-complexity bounds are hard to find using only \gls{mprf} \cite{giesl2022arxiv}.}
program variables $x, y$ and one temporary variable $u$, which is used to non-deterministically sample $x$ in the first transition $t_0$. The program clearly terminates: the first loop increments $x$ up to 3 if $x$ is positive to begin with, and the second loop decrements $y$ while $y$ is positive. However, the classical analysis using only \gls{mprf} fails to find a finite runtime-complexity bound for this program \cite{giesl2022arxiv}. With the help of \gls{cfr} on sub-\gls{scc} level, the program can be transformed into an equivalent program (see \fref{fig:classic_pe}) where the analysis with \gls{mprf} succeeds at finding a finite runtime-complexity bound.
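The described behaviour can be rendered directly in Python to see why termination is obvious. The exact guards below are assumptions reconstructed from the prose (first loop increments $x$ up to 3 when $x$ starts positive, second loop decrements $y$ while positive), not the literal transitions $t_0,\dots$ of the figure.

```python
def run(x, y):
    """Count the loop iterations of the described program for a
    concrete initial assignment (guards assumed from the text)."""
    steps = 0
    while 0 < x < 3:   # first loop: increments x up to 3
        x += 1
        steps += 1
    while y > 0:       # second loop: decrements y while positive
        y -= 1
        steps += 1
    return steps       # always finite: every run terminates

assert run(1, 2) == 4  # two increments of x, then two decrements of y
```

Both loops make strict progress towards their exit condition on every iteration, which is exactly what a ranking-function-based termination argument captures.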
\fref{fig:prob}. The variables $x$ and $y$ are now only incremented or decremented with 50\% probability in every iteration. The program will probably always terminate,
\acrshort{koat} is an automatic complexity analysis tool for integer programs, which was recently extended to the analysis of probabilistic integer programs. Currently, \acrshort{koat} applies so-called control-flow-refinement via partial evaluation on classic programs by calling the tool iRankFinder externally.
In this thesis, we extend partial evaluation to probabilistic programs. A partially evaluated probabilistic integer program has the same expected runtime-complexity as the original program; however, the transformed program might be easier to analyse.
The new algorithm is implemented natively in \acrshort{koat} and is integrated into the existing analysis algorithm in a modular and on-demand fashion. Our evaluation shows that the native implementation is faster on classical programs. Furthermore, the new implementation infers tighter runtime-complexity bounds for some probabilistic programs as well.