OPUS 4 Latest Documents RSS Feed
http://publikationen.stub.uni-frankfurt.de/index/index/
Sat, 22 Nov 2014 11:18:25 +0100

1D-3D hybrid modeling - from multi-compartment models to full resolution models in space and time
http://publikationen.stub.uni-frankfurt.de/frontdoor/index/index/docId/35554
Investigation of cellular and network dynamics in the brain by means of modeling and simulation has evolved into a highly interdisciplinary field that uses sophisticated modeling and simulation approaches to understand distinct areas of brain function. Depending on the underlying complexity, these models vary in their level of detail in order to cope with the attached computational cost. Hence, for large network simulations, single neurons are typically reduced to time-dependent signal processors, dismissing the spatial aspect of each cell. For single cells or networks with relatively small numbers of neurons, general-purpose simulators allow for space- and time-dependent simulations of electrical signal processing, based on cable equation theory. An emerging field in Computational Neuroscience encompasses a new level of detail by incorporating the full three-dimensional morphology of cells and organelles into three-dimensional, space- and time-dependent simulations. While every approach has its advantages and limitations, such as computational cost, integrated and method-spanning simulation approaches, chosen depending on the network size, could establish new ways to investigate the brain. In this paper we present a hybrid simulation approach that makes use of reduced 1D models (using, e.g., the NEURON simulator) coupled to fully resolved models for simulating cellular and sub-cellular dynamics, including the detailed three-dimensional morphology of neurons and organelles. In order to couple 1D- and 3D-simulations, we present a geometry-, membrane potential- and intracellular concentration mapping framework, with which graph-based morphologies, e.g., in the swc- or hoc-format, are mapped to full surface and volume representations of the neuron, and computational data from 1D-simulations can be used as boundary conditions for full 3D simulations and vice versa.
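The swc format mentioned above stores a neuron morphology as a graph: one line per sample point with id, type, coordinates, radius, and parent id. A minimal reading sketch (independent of the authors' mapping framework, which additionally produces surface and volume meshes):

```python
import math

def parse_swc(text):
    """Parse SWC lines 'id type x y z radius parent' into a dict id -> node."""
    nodes = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        i, t, x, y, z, r, parent = line.split()
        nodes[int(i)] = {"type": int(t), "pos": (float(x), float(y), float(z)),
                         "radius": float(r), "parent": int(parent)}
    return nodes

def total_cable_length(nodes):
    """Sum of Euclidean distances from each node to its parent (root has parent -1)."""
    length = 0.0
    for node in nodes.values():
        p = node["parent"]
        if p != -1:
            length += math.dist(node["pos"], nodes[p]["pos"])
    return length

# Three collinear sample points, 10 units apart: total cable length 20.
example = """
1 1  0 0 0 1.0 -1
2 3 10 0 0 0.5  1
3 3 20 0 0 0.5  2
"""
morphology = parse_swc(example)
print(total_cable_length(morphology))  # 20.0
```

Such a graph is what a 1D cable-equation simulator operates on; the hybrid approach additionally inflates it into a full 3D geometry.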
Thus, established models and data, based on general-purpose 1D simulators, can be directly coupled to the emerging field of fully resolved, highly detailed 3D modeling approaches. We present the developed general framework for 1D/3D hybrid modeling and apply it to investigate electrically active neurons and their intracellular spatio-temporal calcium dynamics.
Stephan Grein; Martin Stepniewski; Sebastian Reiter; Markus M. Knodel; Gillian Queisser (article)
Sat, 22 Nov 2014 11:18:25 +0100

An approach to semantic self-management of distributed applications in the private home environment
http://publikationen.stub.uni-frankfurt.de/frontdoor/index/index/docId/35152
The enrichment of the private home environment with intelligent technical assistance systems will increase strongly over the coming decades. As part of this development, the use of external and in-home IT services will grow, which also increases the complexity of the resulting overall systems. Approaches are needed to make these systems productively usable and manageable even for technically inexperienced users, in order to avoid overwhelming them. In the field of home service platforms, which take on a central role in such systems, the semantic modeling of services has gained considerable importance in recent years. It serves, on the one hand, the formal representation of associated context information arising from interaction with sensors and actuators, and, on the other hand, the improvement of interoperability between systems from different vendors. Previous approaches, however, are limited to a single central computing node executing the service platform and use semantics, if at all, only for processing context information. Technical management of the overall system usually does not take place.
Against this background, the goal of this work is the development of an approach for semantic self-management of distributed service-based applications, specifically in the context of home service platforms.
The thesis first defines formal ontologies for services, service quality management, self-management, and associated management rules, which are integrated at runtime with concrete services and their measured performance indicators. A model-driven architecture approach (Model Driven Architecture, MDA) enables technology-independent management at an abstract level, allowing management rules to be reused in other scenarios.
This approach is then transferred into an architecture for a highly available autonomic manager that can monitor and control services and their service platforms, based on the MAPE-K control loop (Monitor, Analyze, Plan, Execute, Knowledge) known from Autonomic Computing.
The thesis concludes with a qualitative and quantitative evaluation (by means of an OSGi-based prototype implementation) of the achieved results, which suggests deployment beyond the boundaries of the private home environment.
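The MAPE-K control loop mentioned above can be sketched in a few lines; the sensor, threshold, and restart action below are hypothetical placeholders for illustration, not part of the thesis:

```python
def mape_k_step(monitor, knowledge, execute):
    """One pass of a MAPE-K loop: Monitor, Analyze, Plan, Execute over shared Knowledge."""
    symptoms = monitor()                                        # Monitor: collect metrics
    violations = [name for name, value in symptoms.items()      # Analyze: compare against
                  if value > knowledge["thresholds"][name]]     # thresholds in the knowledge base
    plan = [knowledge["actions"][name] for name in violations]  # Plan: pick an action per rule
    for action in plan:                                         # Execute: apply the actions
        execute(action)
    return plan

# Hypothetical rule: restart a service whose response time exceeds its threshold.
knowledge = {"thresholds": {"response_ms": 200},
             "actions": {"response_ms": "restart-service"}}
log = []
plan = mape_k_step(lambda: {"response_ms": 350}, knowledge, log.append)
print(plan)  # ['restart-service']
```

In the thesis, the knowledge component is an ontology rather than a plain dictionary, which is what makes the management rules technology-independent and reusable.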
Jan Schäfer (doctoral thesis)
Wed, 15 Oct 2014 14:31:41 +0200

Flow in fractured porous media
http://publikationen.stub.uni-frankfurt.de/frontdoor/index/index/docId/35100
The simulation of flow in fractured porous media is of crucial importance for many hydrogeological applications, such as preventing groundwater contamination near a landfill or a repository for radioactive waste, the extraction of fossil fuels, or the underground storage of carbon dioxide. Owing to their nature, and in particular the high permeability within the fractures, fractures constitute preferential transport paths and can decisively influence the flow profile. However, the anisotropic geometry of the fractures, combined with the enormous jumps in parameters such as permeability over very small distances, places great demands on the numerical methods.
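The anisotropy described above, a very small aperture against a very large extent, is what motivates treating thin fractures as lower-dimensional objects. A hypothetical validity check of that shape (the threshold and the exact form are invented for illustration and are not the criterion derived in the thesis):

```python
def lowdim_acceptable(aperture, extent, max_ratio=1e-3):
    """Hypothetical check: model a fracture as a lower-dimensional object
    only if its aperture is very small relative to its extent."""
    return aperture / extent <= max_ratio

# A 1 mm fracture stretching 100 m qualifies; a 1 m wide one does not.
print(lowdim_acceptable(0.001, 100.0))  # True
print(lowdim_acceptable(1.0, 100.0))    # False
```

The criterion in the thesis additionally takes flow parameters into account and is evaluated at runtime to trigger the dimension-adaptive switch.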
For this reason, two approaches to modeling the fractures are pursued in this thesis: a lower-dimensional approach, motivated by the anisotropic geometry with very small aperture and very large extent of the fractures, and a full-dimensional approach that resolves all processes within the fracture. The results of these approaches are examined for benchmark problems, with the outcome that only for very thin fractures does the numerically cheaper lower-dimensional approach deliver satisfactory results. Furthermore, a criterion is introduced that indicates at runtime, based on properties of the fracture and flow parameters, whether the lower-dimensional approach is sufficiently valid. A dimension-adaptive approach is presented that switches to the full-dimensional model according to this criterion. The results show that considerably more accurate results can be achieved in this way, without full resolution being required in every case and over the entire simulation period.
Sabine Stichel (doctoral thesis)
Thu, 25 Sep 2014 10:02:08 +0200

Trusted Computing security for web browsers
http://publikationen.stub.uni-frankfurt.de/frontdoor/index/index/docId/34003
Gökhan Bal (diploma thesis)
Wed, 30 Jul 2014 15:10:21 +0200

Mathematical modeling of Arabidopsis thaliana with focus on network decomposition and reduction
http://publikationen.stub.uni-frankfurt.de/frontdoor/index/index/docId/34522
Systems biology has become an important research field during the last decade. It focuses on understanding the systems that produce the measured data. An important part of this research field is network analysis, the investigation of biological networks. An essential point of the inspection of these network models is their validation, i.e., the successful comparison of predicted properties to measured data. Here, Petri nets in particular have proven useful as a modeling technique, offering sound analysis methods and an intuitive representation of biological network data.
A very important tool for network validation is the analysis of the transition invariants (TI), which represent possible steady-state pathways, and the investigation of the liveness property. The computational complexity of determining both the TI and the liveness property often hampers their investigation.
To investigate this issue, a metabolic network model is created that describes the core metabolism of Arabidopsis thaliana and is based solely on data from the literature. The model is too complex for the TI and the liveness property to be determined directly.
Several strategies are followed to enable an analysis and validation of the network. A network decomposition is utilized in two different ways: manually, motivated by the idea to preserve the integrity of biological pathways, and automatically, motivated by the idea to minimize the number of crossing edges. As a decomposition may not preserve important properties such as coveredness, a network reduction approach is suggested, which is mathematically proven to conserve these important properties. To deal with the large amount of data coming from the TI analysis, new organizational structures are proposed. The liveness property is investigated by reducing the complexity of the calculation method and adapting it to biological networks.
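A transition invariant of a Petri net with places-by-transitions incidence matrix C is a nonzero, nonnegative integer vector x with C·x = 0: a firing count vector that reproduces the marking. A brute-force sketch for a toy two-transition cycle (real tools use dedicated algorithms, since the problem does not scale to enumeration, which is exactly the complexity issue the thesis addresses):

```python
from itertools import product

def t_invariants(C, bound=2):
    """Enumerate nonzero vectors x with 0 <= x_i <= bound and C @ x == 0,
    where C is a places x transitions incidence matrix (list of rows)."""
    n = len(C[0])
    invariants = []
    for x in product(range(bound + 1), repeat=n):
        if any(x) and all(sum(row[j] * x[j] for j in range(n)) == 0 for row in C):
            invariants.append(x)
    return invariants

# Toy net: t1 moves a token P1 -> P2, t2 moves it back P2 -> P1.
C = [[-1,  1],   # place P1: consumed by t1, produced by t2
     [ 1, -1]]   # place P2: produced by t1, consumed by t2
print(t_invariants(C))  # [(1, 1), (2, 2)]
```

Firing t1 once and t2 once (the invariant (1, 1)) returns the net to its initial marking, which is why TI correspond to steady-state pathways.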
The results obtained by these approaches suggest a valid network model. In conclusion, the proposed approaches and strategies can be used in combination to allow the validation and analysis of highly complex biological networks.
Joachim Nöthen (doctoral thesis)
Fri, 11 Jul 2014 15:05:07 +0200

Correctness of an STM Haskell implementation
http://publikationen.stub.uni-frankfurt.de/frontdoor/index/index/docId/34440
A concurrent implementation of software transactional memory in Concurrent Haskell using a call-by-need functional language with processes and futures is given. The description of the small-step operational semantics is precise and explicit, and employs an early abort of conflicting transactions. A proof of correctness of the implementation is given for a contextual semantics with may- and should-convergence. This implies that our implementation is a correct evaluator for an abstract specification equipped with a big-step semantics.
Manfred Schmidt-Schauß; David Sabel (working paper)
Tue, 08 Jul 2014 15:04:26 +0200

Towards correctness of program transformations through unification and critical pair computation
http://publikationen.stub.uni-frankfurt.de/frontdoor/index/index/docId/34439
Correctness of program transformations in extended lambda calculi with a contextual semantics is usually based on reasoning about the operational semantics which is a rewrite semantics. A successful approach to proving correctness is the combination of a context lemma with the computation of overlaps between program transformations and the reduction rules, and then of so-called complete sets of diagrams. The method is similar to the computation of critical pairs for the completion of term rewriting systems. We explore cases where the computation of these overlaps can be done in a first-order way by variants of critical pair computation that use unification algorithms. As a case study we apply the method to a lambda calculus with recursive let-expressions and describe an effective unification algorithm to determine all overlaps of a set of transformations with all reduction rules. The unification algorithm employs many-sorted terms, the equational theory of left-commutativity modelling multi-sets, context variables of different kinds and a mechanism for compactly representing binding chains in recursive let-expressions.
Conrad Rau; Manfred Schmidt-Schauß (working paper)
Tue, 08 Jul 2014 14:56:08 +0200

Reconstructing a logic for inductive proofs of properties of functional programs
http://publikationen.stub.uni-frankfurt.de/frontdoor/index/index/docId/34438
David Sabel; Manfred Schmidt-Schauß (working paper)
Tue, 08 Jul 2014 14:44:41 +0200

Reconstruction of a logic for inductive proofs of properties of functional programs
http://publikationen.stub.uni-frankfurt.de/frontdoor/index/index/docId/34437
The interactive verification system VeriFun is based on a polymorphic call-by-value functional language and on a first-order logic with initial model semantics w.r.t. constructors. It is designed to perform automatic induction proofs and can also deal with partial functions. This paper provides a reconstruction of the corresponding logic and semantics using the standard treatment of undefinedness, which adapts and improves the VeriFun logic by allowing reasoning on nonterminating expressions and functions. Equality of expressions is defined as contextual equivalence based on observing termination in all closing contexts. The reconstruction shows that several restrictions of the VeriFun framework can easily be removed by natural generalizations: mutual recursive functions, abstractions in the data values, and formulas with arbitrary quantifier prefix can be formulated. The main results of this paper are: an extended set of deduction rules usable in VeriFun under the adapted semantics is proved to be correct, i.e., they respect the observational equivalence in all extensions of a program. We also show that certain classes of theorems, like universally quantified equations, are conservative under extensions. Other special classes of theorems are analyzed for conservativity as well.
David Sabel; Manfred Schmidt-Schauß (working paper)
Tue, 08 Jul 2014 14:35:53 +0200

Reconstruction of a logic for inductive proofs of properties of functional programs
http://publikationen.stub.uni-frankfurt.de/frontdoor/index/index/docId/34436
The interactive verification system VeriFun is based on a polymorphic call-by-value functional language and on a first-order logic with initial model semantics w.r.t. constructors. This paper provides a reconstruction of the corresponding logic when partial functions are permitted. Typing is polymorphic for the definition of functions but monomorphic for terms in formulas. Equality of terms is defined as contextual equivalence based on observing termination in all contexts. The reconstruction also allows several generalizations of the functional language like mutual recursive functions and abstractions in the data values. The main results are: correctness of several program transformations for all extensions of a program, which have a potential usage in a deduction system. We also proved that universally quantified equations are conservative, i.e., if a universally quantified equation is valid w.r.t. a program P, then it remains valid if the program is extended by new functions and/or new data types.
David Sabel; Manfred Schmidt-Schauß (working paper)
Tue, 08 Jul 2014 14:25:24 +0200

On correctness of buffer implementations in a concurrent lambda calculus with futures
http://publikationen.stub.uni-frankfurt.de/frontdoor/index/index/docId/34435
Motivated by the question of correctness of a specific implementation of concurrent buffers in the lambda calculus with futures underlying Alice ML, we prove that concurrent buffers and handled futures can correctly encode each other. Correctness means that our encodings preserve and reflect the observations of may- and must-convergence, and as a consequence also yields soundness of the encodings with respect to a contextually defined notion of program equivalence. While these translations encode blocking into queuing and waiting, we also describe an adequate encoding of buffers in a calculus without handles, which is more low-level and uses busy-waiting instead of blocking. Furthermore we demonstrate that our correctness concept applies to the whole compilation process from high-level to low-level concurrent languages, by translating the calculus with buffers, handled futures and data constructors into a small core language without those constructs.
Jan Schwinghammer; David Sabel; Joachim Niehren; Manfred Schmidt-Schauß (working paper)
Tue, 08 Jul 2014 14:12:40 +0200

Adequacy of compositional translations for observational semantics
http://publikationen.stub.uni-frankfurt.de/frontdoor/index/index/docId/34433
We investigate methods and tools for analyzing translations between programming languages with respect to observational semantics. The behavior of programs is observed in terms of may- and must-convergence in arbitrary contexts, and adequacy of translations, i.e., the reflection of program equivalence, is taken to be the fundamental correctness condition. For compositional translations we propose a notion of convergence equivalence as a means for proving adequacy. This technique avoids explicit reasoning about contexts, and is able to deal with the subtle role of typing in implementations of language extensions.
Manfred Schmidt-Schauß; Joachim Niehren; Jan Schwinghammer; David Sabel (working paper)
Tue, 08 Jul 2014 14:02:58 +0200

Adequacy of compositional translations for observational semantics
http://publikationen.stub.uni-frankfurt.de/frontdoor/index/index/docId/34432
We investigate methods and tools for analyzing translations between programming languages with respect to observational semantics. The behavior of programs is observed in terms of may- and must-convergence in arbitrary contexts, and adequacy of translations, i.e., the reflection of program equivalence, is taken to be the fundamental correctness condition. For compositional translations we propose a notion of convergence equivalence as a means for proving adequacy. This technique avoids explicit reasoning about contexts, and is able to deal with the subtle role of typing in implementations of language extensions.
Manfred Schmidt-Schauß; Joachim Niehren; Jan Schwinghammer; David Sabel (working paper)
Tue, 08 Jul 2014 13:55:58 +0200

Adequacy of compositional translations for observational semantics
http://publikationen.stub.uni-frankfurt.de/frontdoor/index/index/docId/34431
We investigate methods and tools for analysing translations between programming languages with respect to observational semantics. The behaviour of programs is observed in terms of may- and must-convergence in arbitrary contexts, and adequacy of translations, i.e., the reflection of program equivalence, is taken to be the fundamental correctness condition. For compositional translations we propose a notion of convergence equivalence as a means for proving adequacy. This technique avoids explicit reasoning about contexts, and is able to deal with the subtle role of typing in implementations of language extensions.
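A compositional translation is defined by structural recursion, which is what lets it commute with contexts. A minimal sketch, using a toy translation of let into beta application and checking convergence by evaluation (the calculi in the paper are much richer, and one test is of course not an adequacy proof):

```python
def translate(e):
    """Compositional translation: ('let', x, e1, e2) becomes ((lambda x. e2) e1)."""
    tag = e[0]
    if tag in ("num", "var"):
        return e
    if tag == "lam":
        return ("lam", e[1], translate(e[2]))
    if tag == "app":
        return ("app", translate(e[1]), translate(e[2]))
    if tag == "let":  # let x = e1 in e2  ~>  (\x. e2) e1
        return ("app", ("lam", e[1], translate(e[3])), translate(e[2]))

def evaluate(e, env=None):
    """Call-by-value evaluator; values are numbers and closures."""
    env = env or {}
    tag = e[0]
    if tag == "num":
        return e[1]
    if tag == "var":
        return env[e[1]]
    if tag == "lam":
        return ("closure", e[1], e[2], env)
    if tag == "let":
        return evaluate(e[3], {**env, e[1]: evaluate(e[2], env)})
    if tag == "app":
        _, x, body, cenv = evaluate(e[1], env)
        return evaluate(body, {**cenv, x: evaluate(e[2], env)})

# let x = 7 in (\y. y) x  -- the source program and its translation converge
# to the same value.
prog = ("let", "x", ("num", 7), ("app", ("lam", "y", ("var", "y")), ("var", "x")))
print(evaluate(prog), evaluate(translate(prog)))  # 7 7
```

Convergence equivalence, as proposed in the paper, lifts exactly this kind of observation to all closed programs, avoiding explicit quantification over contexts.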
Manfred Schmidt-Schauß; Joachim Niehren; Jan Schwinghammer; David Sabel (working paper)
Tue, 08 Jul 2014 13:48:09 +0200

Adequacy of compositional translations for observational semantics
http://publikationen.stub.uni-frankfurt.de/frontdoor/index/index/docId/34430
We investigate methods and tools for analysing translations between programming languages with respect to observational semantics. The behaviour of programs is observed in terms of may- and must-convergence in arbitrary contexts, and adequacy of translations, i.e., the reflection of program equivalence, is taken to be the fundamental correctness condition. For compositional translations we propose a notion of convergence equivalence as a means for proving adequacy. This technique avoids explicit reasoning about contexts, and is able to deal with the subtle role of typing in implementations of language extensions.
Manfred Schmidt-Schauß; Joachim Niehren; Jan Schwinghammer; David Sabel (working paper)
Tue, 08 Jul 2014 13:41:34 +0200

A finite simulation method in a non-deterministic call-by-need calculus with letrec, constructors and case
http://publikationen.stub.uni-frankfurt.de/frontdoor/index/index/docId/34429
The paper proposes a variation of simulation for checking and proving contextual equivalence in a non-deterministic call-by-need lambda-calculus with constructors, case, seq, and a letrec with cyclic dependencies. It also proposes a novel method to prove its correctness. The calculus' semantics is based on a small-step rewrite semantics and on may-convergence. The cyclic nature of letrec bindings, as well as non-determinism, makes known approaches to prove that simulation implies contextual equivalence, such as Howe's proof technique, inapplicable in this setting. The basic technique for the simulation as well as the correctness proof is called pre-evaluation, which computes a set of answers for every closed expression. If simulation succeeds in finite computation depth, then it is guaranteed to show contextual preorder of expressions.
Manfred Schmidt-Schauß; Elena Machkasova (working paper)
Tue, 08 Jul 2014 13:30:21 +0200

On generic context lemmas for lambda calculi with sharing
http://publikationen.stub.uni-frankfurt.de/frontdoor/index/index/docId/34428
This paper proves several generic variants of context lemmas and thus contributes to improving the tools for observational semantics of deterministic and non-deterministic higher-order calculi that use a small-step reduction semantics. The generic (sharing) context lemmas are provided for may- as well as two variants of must-convergence, which hold in a broad class of extended process- and extended lambda calculi, if the calculi satisfy certain natural conditions. As a guideline, the proofs of the context lemmas are valid in call-by-need calculi, in call-by-value calculi if substitution is restricted to variable-by-variable, and in process calculi like variants of the π-calculus. For calculi employing beta-reduction using a call-by-name or call-by-value strategy or similar reduction rules, some iu-variants of ciu-theorems are obtained from our context lemmas. Our results reestablish several context lemmas already proved in the literature, and also provide some new context lemmas as well as some new variants of the ciu-theorem. To make the results widely applicable, we use a higher-order abstract syntax that allows untyped calculi as well as certain simple typing schemes. The approach may lead to a unifying view of higher-order calculi, reduction, and observational equality.
Manfred Schmidt-Schauß; David Sabel (working paper)
Tue, 08 Jul 2014 13:09:22 +0200

On generic context lemmas for lambda calculi with sharing
http://publikationen.stub.uni-frankfurt.de/frontdoor/index/index/docId/34427
This paper proves several generic variants of context lemmas and thus contributes to improving the tools to develop observational semantics that is based on a reduction semantics for a language. The context lemmas are provided for may- as well as two variants of must-convergence and a wide class of extended lambda calculi, which satisfy certain abstract conditions. The calculi must have a form of node sharing, e.g. plain beta reduction is not permitted. There are two variants: weakly sharing calculi, where the beta-reduction is only permitted for arguments that are variables, and strongly sharing calculi, which roughly correspond to call-by-need calculi, where beta-reduction is completely replaced by a sharing variant. The calculi must obey three abstract assumptions, which are in general easily recognizable given the syntax and the reduction rules. The generic context lemmas have as instances several context lemmas already proved in the literature for specific lambda calculi with sharing. The scope of the generic context lemmas comprises not only call-by-need calculi, but also call-by-value calculi with a form of built-in sharing. Investigations in other, new variants of extended lambda-calculi with sharing, where the language or the reduction rules and/or strategy varies, will be simplified by our result, since specific context lemmas are immediately derivable from the generic context lemma, provided our abstract conditions are met.
Manfred Schmidt-Schauß; David Sabel (working paper)
Tue, 08 Jul 2014 13:02:37 +0200

A call-by-need lambda-calculus with locally bottom-avoiding choice: context lemma and correctness of transformations
http://publikationen.stub.uni-frankfurt.de/frontdoor/index/index/docId/34426
We present a higher-order call-by-need lambda calculus enriched with constructors, case-expressions, recursive letrec-expressions, a seq-operator for sequential evaluation and a non-deterministic operator amb that is locally bottom-avoiding. We use a small-step operational semantics in form of a single-step rewriting system that defines a (non-deterministic) normal order reduction. This strategy can be made fair by adding resources for bookkeeping. As equational theory we use contextual equivalence, i.e., terms are equal if, plugged into any program context, their termination behaviour is the same, where we use a combination of may- as well as must-convergence, which is appropriate for non-deterministic computations. We show that we can drop the fairness condition for equational reasoning, since the valid equations w.r.t. normal order reduction are the same as for fair normal order reduction. We develop different proof tools for proving correctness of program transformations; in particular, a context lemma for may- as well as must-convergence is proved, which restricts the number of contexts that need to be examined for proving contextual equivalence. In combination with so-called complete sets of commuting and forking diagrams we show that all the deterministic reduction rules and also some additional transformations preserve contextual equivalence. We also prove a standardisation theorem for fair normal order reduction. The structure of the ordering ≤c is also analysed: Ω is not a least element, and ≤c already implies contextual equivalence w.r.t. may-convergence.
David Sabel; Manfred Schmidt-Schauß (working paper)
Tue, 08 Jul 2014 12:52:05 +0200

A call-by-need lambda-calculus with locally bottom-avoiding choice: context lemma and correctness of transformations
http://publikationen.stub.uni-frankfurt.de/frontdoor/index/index/docId/34425
We present a higher-order call-by-need lambda calculus enriched with constructors, case-expressions, recursive letrec-expressions, a seq-operator for sequential evaluation and a non-deterministic operator amb that is locally bottom-avoiding. We use a small-step operational semantics in form of a single-step rewriting system that defines a (non-deterministic) normal order reduction. This strategy can be made fair by adding resources for bookkeeping. As equational theory we use contextual equivalence, i.e., terms are equal if, plugged into any program context, their termination behaviour is the same, where we use a combination of may- as well as must-convergence, which is appropriate for non-deterministic computations. We show that we can drop the fairness condition for equational reasoning, since the valid equations w.r.t. normal order reduction are the same as for fair normal order reduction. We develop different proof tools for proving correctness of program transformations; in particular, a context lemma for may- as well as must-convergence is proved, which restricts the number of contexts that need to be examined for proving contextual equivalence. In combination with so-called complete sets of commuting and forking diagrams we show that all the deterministic reduction rules and also some additional transformations preserve contextual equivalence. We also prove a standardisation theorem for fair normal order reduction. The structure of the ordering ≤c is also analysed: Ω is not a least element, and ≤c already implies contextual equivalence w.r.t. may-convergence.
David Sabel; Manfred Schmidt-Schauß (working paper)
Tue, 08 Jul 2014 12:38:20 +0200

Equivalence of call-by-name and call-by-need for lambda-calculi with letrec
http://publikationen.stub.uni-frankfurt.de/frontdoor/index/index/docId/34424
We develop a proof method to show that in a (deterministic) lambda calculus with letrec and equipped with contextual equivalence the call-by-name and the call-by-need evaluation are equivalent, and also that the unrestricted copy-operation is correct. Given a let-binding x = t, the copy-operation replaces an occurrence of the variable x by the expression t, regardless of the form of t. This gives an answer to unresolved problems in several papers, it adds a strong method to the tool set for reasoning about contextual equivalence in higher-order calculi with letrec, and it enables a class of transformations that can be used as optimizations. The method can be used in different kinds of lambda calculi with cyclic sharing. It can probably also be used in non-deterministic lambda calculi if the variable x is "deterministic", i.e., has no interference with non-deterministic executions. The main technical idea is to use a restricted variant of the infinitary lambda-calculus, whose objects are the expressions unrolled w.r.t. let, to define the infinite developments as a reduction calculus on the infinite trees, and to show a standardization theorem.
Manfred Schmidt-Schauß (working paper)
Tue, 08 Jul 2014 12:31:08 +0200

On equivalences and standardization in a non-deterministic call-by-need lambda calculus
http://publikationen.stub.uni-frankfurt.de/frontdoor/index/index/docId/34451
The goal of this report is to prove correctness of a considerable subset of transformations w.r.t. contextual equivalence in an extended lambda-calculus LS with case, constructors, seq, let, and choice, with a simple set of reduction rules; and to argue that an approximation calculus LA is equivalent to LS w.r.t. the contextual preorder, which enables the proof tool of simulation. Unfortunately, a direct proof appears to be impossible.
The correctness proof is by defining another calculus L comprising the complex variants of copy, case-reduction and seq-reductions that use variable-binding chains. This complex calculus has well-behaved diagrams and allows a proof of correctness of transformations, and that the simple calculus LS, the calculus L, and the calculus LA all have an equivalent contextual preorder.
Manfred Schmidt-Schauß; Matthias Mann (working paper)
Tue, 08 Jul 2014 12:16:49 +0200

On equivalences and standardization in a non-deterministic call-by-need lambda calculus
http://publikationen.stub.uni-frankfurt.de/frontdoor/index/index/docId/34450
The goal of this report is to prove correctness of a considerable subset of transformations w.r.t. contextual equivalence in an extended lambda-calculus LS with case, constructors, seq, let, and choice, with a simple set of reduction rules; and to argue that an approximation calculus LA is equivalent to LS w.r.t. the contextual preorder, which enables the proof tool of simulation. Unfortunately, a direct proof appears to be impossible. The correctness proof proceeds by defining another calculus L comprising the complex variants of copy, case-, and seq-reductions that use variable-binding chains. This complex calculus has well-behaved diagrams and permits a proof that the transformations are correct and that the simple calculus LS, the calculus L, and the calculus LA all induce the same contextual preorder.
Manfred Schmidt-Schauß; Matthias Mann · workingpaper · http://publikationen.stub.uni-frankfurt.de/frontdoor/index/index/docId/34450 · Tue, 08 Jul 2014 11:24:31 +0200

On conservativity of concurrent Haskell
http://publikationen.stub.uni-frankfurt.de/frontdoor/index/index/docId/34312
The calculus CHF models Concurrent Haskell extended by concurrent, implicit futures. It is a process calculus with concurrent threads and monadic concurrent evaluation, and it includes a pure functional lambda calculus comprising data constructors, case-expressions, letrec-expressions, and Haskell’s seq. Futures can be implemented in Concurrent Haskell using the primitive unsafeInterleaveIO, which is available in most implementations of Haskell. Our main result is conservativity of CHF: all equivalences of pure functional expressions remain valid in CHF. This implies that compiler optimizations and transformations from pure Haskell remain valid in Concurrent Haskell even if it is extended by futures. We also show that this no longer holds if Concurrent Haskell is extended by arbitrary use of unsafeInterleaveIO.
David Sabel; Manfred Schmidt-Schauß · workingpaper · http://publikationen.stub.uni-frankfurt.de/frontdoor/index/index/docId/34312 · Thu, 03 Jul 2014 14:49:03 +0200

Simulation in the call-by-need lambda-calculus with letrec, case, constructors, and seq
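The futures of CHF are concurrent computations whose results are consumed on demand. As a loose analogue only (Python rather than Haskell, explicit rather than implicit futures, and without CHF's formal semantics), the spawn-then-force pattern looks like this:

```python
# Loose Python analogue of a future: the computation runs in a
# separate thread as soon as it is submitted, and the result is
# demanded later by forcing it, roughly like demand on a CHF future.
from concurrent.futures import ThreadPoolExecutor


def expensive(x):
    # Stands in for an arbitrary concurrent computation.
    return x * x


with ThreadPoolExecutor(max_workers=1) as pool:
    fut = pool.submit(expensive, 7)   # spawn concurrently
    # ... other work could proceed here ...
    value = fut.result()              # force the future on demand
```

The conservativity result above concerns the calculus, not this analogue: it says that pure-level equivalences survive the addition of such futures.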
http://publikationen.stub.uni-frankfurt.de/frontdoor/index/index/docId/34234
This paper shows equivalence of applicative similarity and contextual approximation, and hence also of bisimilarity and contextual equivalence, in LR, the deterministic call-by-need lambda calculus with letrec extended by data constructors, case-expressions, and Haskell's seq operator. LR models an untyped version of the core language of Haskell. Bisimilarity simplifies equivalence proofs in the calculus and opens a way to more convenient correctness proofs for program transformations.
The proof is by a fully abstract and surjective transfer of the contextual approximation into a call-by-name calculus, which is an extension of Abramsky's lazy lambda calculus. In the latter calculus equivalence of similarity and contextual approximation can be shown by Howe's method. Using an equivalent but inductive definition of behavioral preorder we then transfer similarity back to the calculus LR.
The translation from the call-by-need letrec calculus into the extended call-by-name lambda calculus is the composition of two translations. The first translation replaces the call-by-need strategy by a call-by-name strategy; its correctness is shown by exploiting the infinite trees that emerge by unfolding the letrec expressions. The second translation encodes letrec-expressions using multi-fixpoint combinators; its correctness is shown syntactically by comparing reductions of both calculi. A further result of this paper is an isomorphism between the mentioned calculi, and also with a call-by-need letrec calculus with a less complex definition of reduction than LR.
Manfred Schmidt-Schauß; David Sabel; Elena Machkasova · workingpaper · http://publikationen.stub.uni-frankfurt.de/frontdoor/index/index/docId/34234 · Thu, 03 Jul 2014 14:36:22 +0200
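The idea behind the second translation, replacing a mutually recursive letrec by multi-fixpoint combinators, can be sketched in Python. This is a hypothetical illustration, not the paper's encoding: a simultaneous fixpoint of two functionals plays the role of a two-binding letrec.

```python
# Sketch of a multi-fixpoint combinator for two mutually recursive
# bindings. Python closures stand in for the combinator; the point is
# that the recursion is tied externally, not by a letrec construct.

def fix2(f, g):
    """Simultaneous fixpoint of two functionals, each of which
    receives both recursive components."""
    def a(*args):
        return f(a, b)(*args)

    def b(*args):
        return g(a, b)(*args)

    return a, b


# letrec even = \n. n == 0 or odd (n - 1);
#        odd  = \n. n /= 0 and even (n - 1)
even, odd = fix2(
    lambda ev, od: lambda n: n == 0 or od(n - 1),
    lambda ev, od: lambda n: n != 0 and ev(n - 1),
)
```

Each functional abstracts over both recursive names, so the letrec binding structure is recovered by tying the knot once in `fix2`, mirroring how the translation eliminates letrec in favor of combinators.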