Damas Gruska. Probabilistic Opacity
Abstract: We define a security concept, "probabilistic opacity", in a probabilistic timed process algebra framework. A process is opaque with respect to a given property over its traces (represented by a predicate $\Phi$) if an observer (represented by an observational function) cannot decide whether the property has been satisfied. In general, opacity is undecidable even for finite-state systems, so we express both the security predicate and the observation function by means of finite-state processes. The resulting security properties can be used for specification and verification of concurrent systems.

Joost-Pieter Katoen, Mani Swaminathan and Martin Fränzle. Symbolic Robustness Analysis of Probabilistic Timed Systems
Abstract: We propose a symbolic algorithm that attempts to address \emph{robustness} w.r.t.\ the interplay of non-deterministic and probabilistic behaviour in the quantitative analysis of real-time systems, modelled as Probabilistic Timed Automata (PTA). We consider enhanced non-deterministic behaviour by accounting for model imprecisions such as drifting clocks, unlike the existing analysis techniques for PTA, which assume perfect clocks.

Erika Abraham, Immo Grabe, Andreas Gruener and Martin Steffen. Abstract interface behavior of an object-oriented language with futures and promises
Abstract: This paper describes the observable interface behavior of a simple object-oriented language with futures, promises, and active objects, which captures the core of \Creol's concurrency. The operational semantics is written in such a way that it highlights the commonalities and the (few, but crucial) differences to the more familiar multi-threaded concurrency model of languages like Java and C-sharp. In addition, we extend the language and the result by \emph{promises}.
\textbf{Keywords:} oo languages, formal semantics, concurrency, futures and promises, open systems, observable behavior

Eva Suci and Magne Haveraaen. A Hardware Independent Parallel Programming Model
Abstract: Programming a highly parallel machine or chip can be formulated as finding an embedding of the computation's data dependency into the underlying hardware architecture. With the data dependency pattern of a computation extracted as a separate entity in a programming language, one has a powerful tool to code parallel programs.

Pavel Krcal, Leonid Mokrushin and Wang Yi. Compositional Analysis of Timed Systems by Abstraction
Abstract: We present a new technique for compositional timing and performance analysis of real-time systems modeled using timed automata and the real-time calculus \cite{tcgmg-emsoft01}. It is an over-approximation technique in which a component of a system, modeled as a timed automaton, is abstracted as a transducer of event streams described by arrival curves from the real-time calculus. This allows us to characterize the semantics of a system as a set of equations over streams. Many interesting properties such as schedulability and buffer boundedness can be checked by solving the equations. Based on UPPAAL and the real-time calculus, a prototype tool has been implemented that can be used to check the schedulability of a system and to estimate the best- and worst-case response times of its computation tasks.

Kai Trojahner and Clemens Grelck. Dependently Typed Array Programs Don't Go Wrong
Abstract: The array programming paradigm adopts arrays as the fundamental data structures of computation. Array operations work on entire arrays instead of just single elements. This makes array programs highly expressive and introduces data-parallelism in a natural way. In this paper, we present a special type system for static verification of array programs.
The system uses families of array types that not only reflect the type of an array's elements but also contain an expression describing its shape. For specific arrays such as shape vectors, singleton types are employed to capture a vector's value in its type. The type system is based on a new variant of dependent types: vectors of integers are used to index into the family of array types.

Jens Schoenborn, Heiko Schmidt and Harald Fecher. UML state machines: Fairness Conditions specify the Event Pool
Abstract: The communication between different instances of UML state machines is handled by underlying event pools, but the UML standard leaves their behavior completely unspecified. Thus, in general, liveness properties cannot be verified. We give a semantics of Streett fairness constraints for the event pool and present an algorithm that turns a set of fairness constraints into an abstract event pool. Since common fairness suffers from the drawback that the time until something good happens may be unbounded, the presented algorithm allows the modeler to specify such a bound. The resulting abstract event pool can be further refined, as justified by an example where a priority scheme on events is introduced.

Gyrd Brændeland and Ketil Stølen. A probabilistic semantic paradigm for component-based security risk analysis
Abstract: We propose a probabilistic semantic paradigm for component-based security risk analysis. By security risk, we mean behaviour that constitutes a risk with regard to ICT security aspects, such as confidentiality, integrity and availability. The purpose of this work is to investigate the nature of security risk in the setting of component-based system development. A better understanding of security risk at the level of components facilitates the prediction of risks related to introducing a new component into a system. The semantic paradigm provides a first step towards integrating security risk analysis into component-based system development.
Fritz Henglein. What is a sorting function?
Abstract: We exhibit intrinsic definitions of inequality tests, sorting functions and comparators as isomorphic presentations of total preorders (orders):
• Inequality tests are functions of type T × T → Bool that are reflexive, transitive and total.
• Sorting functions are functions of type T* → T* (mapping finite sequences of elements of type T) that permute their input and do so consistently for each pair of elements.
• Comparators are functions of type T × T → T × T that are permutative, transitive and idempotent.
The definitions are intrinsic in the sense that they do not refer to predefined orders or to each other. Furthermore, the isomorphisms can be defined parametrically polymorphically: the mapping from inequality tests to sorting functions is the class of stable comparator-based sorting algorithms; from comparators to sorting functions it is sorting networks (comparator-based sorting algorithms); the other mappings are trivial. So defining any one concrete presentation of an order automatically provides all three presentations. Whilst mathematically elementary and pleasingly simple, we believe our definitions of sorting functions and comparators as isomorphic characterizations of orders to be novel and of interest in themselves. We investigate a number of variations on the definition of sorting functions to illustrate that some care must be exercised in formulating their definition.

Adrian Rutle, Yngve Lamo and Uwe Wolter. Generalized Sketches and Model Driven Architecture
Abstract: Software developers increasingly recognize the need for portable, scalable and distributed applications which are reliable, secure and run at high performance. These applications are also required to be produced both faster and at a lower cost. An approach to achieving this is the Object Management Group's Model Driven Architecture, which involves automatic development processes and model management in addition to executable models.
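The mapping from inequality tests to sorting functions described in the Henglein abstract, a stable comparator-based sort driven solely by the test, can be sketched as follows. This is a minimal illustration, not code from the paper; the names `leq` and `sort_by` are ours.

```python
from functools import cmp_to_key

# A sample inequality test for a total preorder on integers:
# reflexive, transitive and total (for all x, y: leq(x, y) or leq(y, x)).
def leq(x, y):
    return x <= y

# The mapping from inequality tests to sorting functions: a stable
# comparator-based sort driven by the test alone. Python's sorted()
# is stable (Timsort), as the isomorphism requires.
def sort_by(leq, xs):
    def cmp(x, y):
        if leq(x, y) and leq(y, x):
            return 0                      # equivalent under the preorder
        return -1 if leq(x, y) else 1
    return sorted(xs, key=cmp_to_key(cmp))

print(sort_by(leq, [3, 1, 2]))  # [1, 2, 3]
```

For a genuine preorder, e.g. one that compares pairs by their first component only, stability is what makes the resulting sorting function treat each pair of equivalent elements consistently.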
This underlines the importance of formal modeling languages, since the automation of processes requires formal models. However, current modeling languages are either semi-formal, ambiguous, or both; therefore, a generic framework for formal, diagrammatic software specification is indispensable. In this presentation, Generalized Sketches will be introduced as a generic framework for the specification of modeling languages and transformations between them. Then an example will be presented to show how Generalized Sketches may be used in Model Driven Architecture.

Frank de Boer and Immo Grabe. Finite-State Call-Chain Abstractions for Deadlock Detection in Multithreaded Object-Oriented Languages
Abstract: We present an algorithm to model check deadlock freedom for an automaton model of a Java program. For the sake of this paper, we introduce method automata and thread automata. Classes and objects are abstracted by method automata. Based on these automata, thread automata are constructed to represent the calling behaviour of the program. Thread automata have no notion of history. This provides us with a finite-state model for model checking. The imprecision introduced by this abstraction is addressed by a refinement leading to a more precise model with respect to the calling behaviour of the program.

David Frutos Escrig and Carlos Gregorio-Rodríguez. Algebraic and Coinductive Characterizations of Semantics Provide General Results for Free
Abstract: There have been quite a number of proposals for behavioural equivalences for concurrent processes, and many of them are presented in van Glabbeek's linear time-branching time spectrum, where in particular an axiomatization for each of these semantics is provided. Recently we have also been able to present a coinductive characterization of these semantics by means of \emph{bisimulations up-to}, which gives us the possibility to apply coinductive techniques to their study.
The most important property of these two approaches is their generality: once we have captured the essence of many different semantics in the same way, we are able to obtain quite general results, which can be proved once and for all, covering not only the semantics considered so far, but any other semantics definable in these frameworks that satisfies the few general properties used when proving our results. We illustrate these claims by means of a new and quite interesting result by Aceto, Fokkink and Ingolfsdottir, who show how to obtain for free an axiomatization of the equivalence induced by a behaviour preorder coarser than ready simulation from that of the given preorder. We have obtained much simpler and shorter proofs of the main results in their paper, which moreover are valid not only for the semantics in the spectrum, which they need to consider one by one, but for any other semantics fulfilling the quite general and simple properties that we need in our general proofs.

Aslan Askarov and Andrei Sabelfeld. Gradual Release: Unifying Declassification, Encryption and Key Release Policies (Extended Abstract)
Abstract: Information security has a challenge to address: enabling information-flow controls with expressive information release (or declassification) policies. Existing approaches tend to address some aspects of information release, exposing the other aspects to possible attacks. It is striking that these approaches fall into two mostly separate categories: revelation-based (as in information purchase, aggregate computation, moves in a game, etc.) and encryption-based declassification (as in sending encrypted secrets over an untrusted network, storing passwords, etc.). This paper introduces gradual release, a policy that unifies declassification, encryption, and key release policies. We model an attacker's knowledge by the sets of possible secret inputs as functions of publicly observable outputs.
The essence of gradual release is that this knowledge must remain constant between releases. Gradual release turns out to be a powerful foundation for release policies, which we demonstrate by formally connecting revelation-based and encryption-based declassification. Furthermore, we show that gradual release can be provably enforced by security types and effects.

Alejandro Russo. Controlling Timing Channels in Multithreaded Programs
Abstract: The problem of controlling information flow in multithreaded programs remains an important open challenge. A major difficulty for tracking information flow in concurrent programs is the internal timing covert channel. Information is leaked via this channel when secrets affect the timing behavior of a thread, which, via the scheduler, affects the interleaving of public events. Volpano and Smith propose a special primitive called protect. By definition, protect(c) takes one atomic step in the semantics with the effect of executing c to the end. Internal timing leaks are removed if every computation that branches on secrets is wrapped by protect() commands. However, implementing protect poses a major challenge. This work introduces a novel treatment of the interaction between threads and the scheduler. As a result, a permissive security specification and a compositional security type system are obtained. The type system guarantees security for a wide class of schedulers and provides a flexible treatment of dynamic thread creation. While this approach allows the implementation of a generalized version of protect, it relies on modifying the scheduler in the run-time environment. In some scenarios, modifying the run-time environment might not be an acceptable requirement. For such scenarios, we also present two transformations that eliminate the need for protect or interactions with the scheduler while avoiding internal timing leaks.
The first transformation is given for programs running under cooperative schedulers. It states that threads must not yield control inside computations that branch on secrets. The second transformation closes the internal timing channel when the scheduler is preemptive and behaves as round-robin. It spawns dedicated threads whenever a computation may affect secrets, and carefully synchronizes them. To evaluate some of the ideas described above, we implement a library in Haskell that provides information-flow security for multithreaded code. The implementation includes an online-shopping case study. The case study reveals that exploiting concurrency to leak secrets is feasible and dangerous in practice, and shows how the library can help avoid internal timing leaks. Up to the publication date, this is the first tool that guarantees information-flow security in multithreaded programs and the first implementation of a case study that involves concurrency and information-flow policies.

Härmel Nestra. Transfinite Semantics in the form of Greatest Fixpoint
Abstract: In some applications, like program slicing, it is useful to imagine program runs as if they were able to overcome non-termination. According to this view, a computation that falls into an infinite loop or infinitely deep recursion continues after completing the infinite subcomputation. Program semantics that follow this view may be called transfinite, but the word "transfinite" actually fails to characterize the whole variety of computation processes. In the case where only iterative loops can be non-terminating, the sequence of execution steps is really transfinite in the sense that the steps can be enumerated by ordinals so that their execution order corresponds to the natural order of ordinals. But in the case of infinitely deep recursion, the sequence is more like a fractal structure. Defining such semantics even for rather simple programs with infinitely deep recursion has not succeeded so far.
We show some possible definitions via greatest fixpoints of monotone operators. In order to achieve this, usual traces are replaced either by fractional traces, where steps are enumerated by rational numbers from some fixed interval, or by trees.

Gift Samuel, Yoshinao Isobe and Markus Roggenbach. Reasoning on Responsiveness -- Extending CSP-Prover by the model R
Abstract: We report on the instantiation of CSP-Prover with the model R. CSP-Prover is an interactive theorem prover for CSP based on Isabelle/HOL. The model R, the latest development in the realm of CSP models, is dedicated to the study of responsiveness.

Joseph Morris and Malcolm Tyrrell. Higher-Order Data Type Refinement
Abstract: We show that data refinement and nondeterminacy are intimately related, and that a strong theory of data refinement can be elegantly constructed on a sufficiently rich theory of nondeterminacy.

Bjørnar Solhaug and Ketil Stølen. Refinement, Compliance and Adherence of Policies in the Setting of UML Interactions
Abstract: The UML is the de facto standard for information system specification, but offers little support for policy specification. We introduce extensions to the sequence diagram notation suitable for expressing policy rules. A formal semantics is defined which gives a precise meaning to the specifications and allows the formalization of the notions of refinement and compliance of policies. The relation between a policy and the systems to which the policy applies is formalized, defining what it means for a system to adhere to the policy.

Mohammad Mahdi Jaghoori, Frank de Boer and Marjan Sirjani. Task Scheduling in Rebeca
Abstract: In this paper, we add real-time constraints to Rebeca and present a compositional approach based on task automata for schedulability analysis of timed Rebeca models.
In this approach, instead of just best-case and worst-case execution times, the behavior of each task is given (in terms of timed automata) and used in the schedulability analysis. These timed automata may in turn generate new tasks. Task automata, as introduced in \cite{FersmanYi07acc}, cannot model tasks generated {\em during} the execution of another task.

Marta Plaska, Marina Waldén and Colin Snook. Visualising program transformations in a stepwise manner
Abstract: Formal methods are the most beneficial approach for designing and developing complex, correct and reliable systems. However, a formal methodology can be difficult for industry practitioners due to its mathematical notation. Hence, it needs to be supported by a more approachable platform that gives guidance to both industry and research practitioners. The Unified Modelling Language (UML), with its semi-formal notation, gives an intuitive image of the system and is commonly used within industry. In the stepwise development of our system we combine UML with the formal methods approach and refinement patterns. For a formal top-down approach we use the Event-B formalism and its associated proof tool to develop the system and prove its correctness. Event-B is based on Action Systems as well as the B Method. It is related to B Action Systems for developing distributed systems in B. With the Event-B formalism we have tool support for proving the correctness of the development. In order to translate UML models into Event-B, the UML-B tool is used. UML-B is a specialisation of UML that defines a formal modelling notation combining UML and B. The first phase of the design approach is to state the goals that the system needs to satisfy. Afterwards, the functional requirements of the system are specified using natural language and illustrated by various UML diagrams, such as statechart diagrams that depict the behaviour of the system.
The system is built up gradually in small steps using superposition refinement. We rely strongly on patterns in the refinement process, since these are the cornerstones for creating reusable and robust software. UML diagrams and the corresponding Event-B code are developed simultaneously for each step. To get a better overview of the design process, we benefit from progress diagrams, which illustrate only the parts of the system affected by refinement. These diagrams are based on the well-known and widely used UML statechart diagrams. In our previous research we introduced progress diagrams, illustrated by a case study. Here we focus on exploring refinement patterns in view of progress diagrams, as their combination supports constructing large software systems in an incremental and layered fashion. Moreover, this combination facilitates mastering the complexity of the project and reasoning about the properties of the system.

Aske Brekling, Michael R. Hansen and Jan Madsen. Hardware Modelling Language and Verification of Design Properties
Abstract: We give a semantic domain that can be used for hardware design languages like Gezel. With this semantics, we believe that a new Gezel-like language could be defined whose syntax reflects the semantics in a direct manner. We also show how the semantics can be used in connection with verification by relating the semantic domain to timed automata. We have experimented with the verification of some examples, e.g. the Simplified Data Encryption Standard Algorithm, using the UPPAAL system.

Fredrik Degerlund, Mats Neovius and Kaisa Sere. A Framework for Formal Reasoning about Distributed Webs of Trust
Abstract: Information technology is rapidly developing towards pervasive and ubiquitous applications assisting us in making our everyday decisions. As such, they need to be reliable. Technical reliability can already be achieved by traditional formal methods, but modelling human interaction demands new perspectives.
We are developing a coordination language for use within the Action Systems formalism in order to mimic human perception of trust. This language allows for componentization by separating trust, as well as for reasoning about trust in a fully distributed manner. We believe that this will dramatically extend the applicable areas for mathematically verified applications, and that it will make us comfortable relying on microprocessors making critical decisions in our favour.

Juri Vain, Kullo Raiend, Andres Kull and Juhan Ernits. Test Purpose Directed Reactive Planning Tester for Nondeterministic Systems
Abstract: We describe a model-based construction of an online tester for black-box testing of the implementation under test (IUT). The external behaviour of the IUT is modelled as an output-observable nondeterministic EFSM with the assumption that all transition paths are feasible. A test purpose is attributed to the IUT model by a set of Boolean variables called traps that are used to measure the progress of the test run. These variables are associated with the transitions of the IUT model. The situation where all traps have been reached means that the test purpose has been achieved. We present a way to construct a tester that at runtime selects a suboptimal test path from trap to trap by finding the shortest path to the next unvisited trap. The principles of reactive planning are implemented in the form of decision rules for selecting the shortest paths at runtime. The decision rules are constructed in advance from the IUT model and the test purpose. Preliminary experimental results confirm that this method clearly outperforms random choice and is better than the anti-ant algorithm in terms of the resulting test sequence length needed to achieve the test purpose.

Steffen van Bakel and Maria Grazia Vigliotti.
Note on a simple type system for non-interference
Abstract: We consider CCS with value passing and elaborate a notion of noninterference for process calculi that closely matches that of programming languages. The idea is to view channels as information carriers rather than as "events", so that emitting a secret on an output channel can be considered safe, while inputting a secret may lead to some kind of leakage. This is in contrast with the standard notion of noninterference for process calculi, where any causal dependency of a low-level action on a high-level action is forbidden.

Stefan Bygde. Analysis of Arithmetical Congruences on Low-Level Code
Abstract: Abstract Interpretation is a well-known formal framework for abstracting programming language semantics. It provides a systematic way of building static analyses which can be used for optimisation and debugging purposes. Different semantic properties can be captured by so-called abstract domains, which can then easily be combined in various ways to yield more precise analyses. The best-known abstract domain is probably that of intervals. An analysis using the interval domain yields a binding of each integer-valued program variable to an interval at each program point: the smallest interval that contains the set of integers that the variable may assume at that program point during execution. Abstract interpretation can be used in many contexts, such as debugging, program transformation, correctness proving, Worst-Case Execution Time analysis, etc. In 1989 Philippe Granger introduced a static analysis of arithmetical congruences. The analysis is formulated as an abstract interpretation computing the smallest (w.r.t. inclusion) congruence (residue) class that includes the set of possible values that the variable may assume during execution. The result of the analysis is a binding of each integer-valued variable at each program point to a congruence class.
Applications for this analysis include automatic vectorisation, pointer analysis (for determining pointer strides) and loop-bound analysis (for detecting loops with non-unit strides). However, in its original presentation, the analysis is not well suited for use on realistic low-level code. By low-level code we mean either compiled and linked object code, where high-level constructions have been replaced with target-specific assembly code, or code in a higher-level language written in a fashion close to the hardware. A good example of low-level code is code written for embedded systems, which often exploits features of the target hardware and/or uses many bit-level operations. Code for embedded systems is an increasingly important target for analysis, since it is often safety-critical. The reason that the congruence domain in its original presentation is not suitable for low-level code lies mainly in the following three properties of low-level code:
A) Bit-level operations are commonly used in low-level code. Programs that contain bit operations are not supported in the original presentation. For any expression containing operations that have not been defined in the analysis, the analysis has to assume that nothing is known about the result and assigns it the largest congruence class (equal to Z). This can potentially lead to very imprecise analysis results.
B) The interpretation of the values of integer-valued variables is not obvious (e.g. they can be signed or unsigned), whereas the original presentation assumes that values have unambiguous representations.
C) The value domain is limited by its representation (integers are often represented by a fixed number of bits). In Granger's presentation, integer-valued variables are assumed to take values in the infinite set of integers.
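The precision at stake in property (A) can be made concrete with a minimal sketch of our own (not code from the paper): a congruence-style domain restricted to modulus 2, i.e. parity, on which an abstract XOR can be defined exactly, since the low bit of a ^ b equals (a + b) mod 2. With such a definition, parity information survives an XOR-swap instead of collapsing to the class Z.

```python
# Hypothetical parity domain: 0 = even, 1 = odd, None = unknown (the class Z).
def axor(p, q):
    """Abstract XOR on parities: the low bit of a ^ b is (a + b) mod 2."""
    if p is None or q is None:
        return None
    return (p + q) % 2

# Abstractly execute an XOR-swap of x (known even) and y (known odd):
x, y = 0, 1
x = axor(x, y)  # x ^= y
y = axor(x, y)  # y ^= x
x = axor(x, y)  # x ^= y
print(x, y)     # 1 0 -- the parities are swapped, not lost
```

An analysis with no rule for XOR would instead have to map all three assignments to None, losing the parity of both variables.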
Our contribution is to extend the theory of the analysis of arithmetical congruences to handle low-level or assembly code, still in the framework of abstract interpretation. This paper provides accurate definitions of the abstract bit operations AND, NOT, XOR, left and right shifts, and truncation for the congruence domain, in order to make the domain support these operations. We provide definitions of the operations together with proofs of their correctness. In these definitions, care has been taken to account for the finite, fixed representation of integers as well as their sometimes ambiguous interpretation as signed or unsigned. With these definitions, congruence analysis can efficiently be performed on low-level code. The paper illustrates the usefulness of the new analysis with an example showing that variables keep important parity information after executing an XOR-swap.

Sebastian Hunt and David Sands. Forget about it
Abstract: Sometimes programs are supposed to forget information. A typical example is an online credit card transaction. Credit card data is required to complete a financial transaction. While in use, the credit card information should only flow to the intended processing agent -- not the merchant. But once the transaction is complete, the credit card information should be erased -- or at least only be available to the user. This information flow problem was recently described by Chong and Myers [CSFW'05], who studied a semantics of erasure in combination with declassification. This combination gives rise to a complex semantic model, and no enforcement (verification) mechanism is described. In this work we aim to give a simple semantics to \emph{pure} information erasure policies. By relating erasure to flow-sensitive information flow properties (Hunt and Sands [POPL'06]), we show that erasure policies can be enforced by recasting them as a standard noninterference problem.
This enables erasure to be verified statically using any flow-sensitive noninterference tool.

Tobias Gedell and Daniel Hedin. A Method for Parameterizing Type Systems over Relational Information
Abstract: We present a method for parameterizing type systems over relational information and demonstrate its applicability via two examples.

Ando Saabas and Tarmo Uustalu. Relational Soundness and Optimality Proofs for Simple Partial Redundancy Elimination
Abstract: Type systems are a useful and compact way of describing dataflow analyses. They explain analyses and optimizations well, and they can be used for the transformation of functional correctness proofs, as soundness of the optimization follows straightforwardly by structural induction on type derivations. In this work in progress, we investigate the type-systematic approach further on the example of partial redundancy elimination and describe how this approach can also be used to show certain optimality results.

Frank de Boer, Marcello Bonsangue, Andreas Gruener and Martin Steffen. Test Driver Generation from Object-Oriented Interaction Traces
Abstract: We formalize a specification language for object-oriented class-based concurrent components which specifies a component's behavior based on its interface trace and which, at the same time, has a similar look-and-feel to the target language. Moreover, we propose a transformation algorithm which synthesizes a tester program from a given specification, such that the tester and the component under test form an executable closed program.

Wang Yi and Simon Tschirner. Validating QoS Properties in Biomedical Sensor Networks
Abstract: In this paper, we present a formal model of a Biomedical Sensor Network whose sensor nodes are constructed based on the IEEE 802.15.4 ZigBee standard for wireless communication.
We have used the UPPAAL tool to tune and validate the temporal configuration parameters of the network in order to guarantee the desired QoS properties for a medical application scenario. The case study shows that even though the main feature of UPPAAL is model checking, it is also a promising and competitive tool for efficient simulation.

Tarmo Uustalu and Varmo Vene. Guarded and Mendler-Style Structured (Co)Recursion in Circular Proofs
Abstract: Inspired by circular sequent calculi for classical predicate and modal logics with inductive and coinductive definitions, we study two well-founded reformulations of a similar intuitionistic propositional sequent calculus as total functional programming languages. We arrive at interesting novel typed lambda calculi with guarded and Mendler-style (co)recursion.
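As a rough intuition for the guarded discipline in the last abstract (our illustration only; the paper's calculi are typed lambda calculi, not Python), a corecursive stream definition is productive when every corecursive call is guarded by the production of an element:

```python
from itertools import islice

# Guarded corecursion sketched with a generator: the recursive call is
# guarded by a yield, so observing any finite prefix always terminates,
# even though the stream itself is infinite.
def nats(n=0):
    yield n                 # guard: produce one element first...
    yield from nats(n + 1)  # ...only then corecurse

print(list(islice(nats(), 5)))  # [0, 1, 2, 3, 4]
```

A definition without the guard, e.g. one that corecursed before yielding, would diverge on the very first observation; guardedness is what the type systems in the abstract enforce statically.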