    Why functional programming really matters – episode three

    January 19th, 2015

    Why functional programming really matters:
    providing more options for incremental development.

    In episode two we saw how designing with functions naturally leads to incremental development. We will now extend our running example with higher-order functions and data type evolution.

    Episode Three: higher-order functions and data type evolution

    In episode one we made the case for incremental development and the importance of staying non-intrusive. What we will do now is change code from the last increment.

    a stack of increments

    Changing code is what we declared to be intrusive, but changing the last increment is allowed.

    The problem with changing code is that any code that depends on it may be compromised. That is the intrusion. Since no code can possibly depend on the last increment, changing it is safe.

    With that consideration out of the way, let’s look at our last increment that finds the minimax value for a given board:
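That increment can be sketched as follows in Haskell notation (Frege is nearly identical). Board, moves, and static are toy stand-ins for the real game definitions from the earlier episodes, and treeMap is an assumed helper name, so the real listing differs in details:

```haskell
data Tree a = Node a [Tree a]

-- toy stand-ins: the real definitions come from the game code
type Board = Int

moves :: Board -> [Board]
moves b = if b < 3 then [b + 1, b + 2] else []

static :: Board -> Double
static = fromIntegral

gameTree :: Board -> Tree Board
gameTree b = Node b (map gameTree (moves b))

prune :: Int -> Tree a -> Tree a
prune 0 (Node a _)        = Node a []
prune n (Node a children) = Node a (map (prune (n - 1)) children)

maximize, minimize :: Ord a => Tree a -> a
maximize (Node a [])       = a
maximize (Node _ children) = maximum (map minimize children)
minimize (Node a [])       = a
minimize (Node _ children) = minimum (map maximize children)

-- apply the valuation to every board in the tree
treeMap :: (a -> b) -> Tree a -> Tree b
treeMap f (Node a children) = Node (f a) (map (treeMap f) children)

-- the last increment: minimax value for a given board
evaluate :: Board -> Double
evaluate = maximize . treeMap static . prune 5 . gameTree
```

The point to note is the treeMap static step: the valuation is hard-wired into the definition.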

    It works fine, but we would like to make it even more idiomatic in the functional sense.

    We have already seen that functional programmers like to introduce abstractions very early and there is a hidden abstraction in the code above. We map a tree of boards to a tree of doubles. The structure of the tree remains untouched. This is the nature of an abstraction called Functor. Functors support a general function fmap that keeps the current structure but applies a function f to all elements. It turns out that our Tree type is such a Functor.
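In Haskell notation (Frege is nearly identical) such an instance might look like this; the deriving clause is added here only so the sketch can be tested:

```haskell
data Tree a = Node a [Tree a] deriving (Eq, Show)

-- fmap keeps the tree structure and applies f to every payload
instance Functor Tree where
  fmap f (Node a children) = Node (f a) (map (fmap f) children)
```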

    To use the Functor nature of trees, we use fmap f instead of static.
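The refactored increment might then read as follows. Everything except evaluateBy is repeated toy scaffolding (Board, moves, static are stand-ins) so that the sketch stands on its own:

```haskell
data Tree a = Node a [Tree a]

instance Functor Tree where
  fmap f (Node a children) = Node (f a) (map (fmap f) children)

-- toy stand-ins for the real game definitions
type Board = Int
moves :: Board -> [Board]
moves b = if b < 3 then [b + 1, b + 2] else []
static :: Board -> Double
static = fromIntegral

gameTree :: Board -> Tree Board
gameTree b = Node b (map gameTree (moves b))

prune :: Int -> Tree a -> Tree a
prune 0 (Node a _)        = Node a []
prune n (Node a children) = Node a (map (prune (n - 1)) children)

maximize, minimize :: Ord a => Tree a -> a
maximize (Node a [])       = a
maximize (Node _ children) = maximum (map minimize children)
minimize (Node a [])       = a
minimize (Node _ children) = minimum (map maximize children)

-- the valuation is now a parameter: fmap f instead of treeMap static
evaluateBy :: Ord b => (Board -> b) -> Board -> b
evaluateBy f = maximize . fmap f . prune 5 . gameTree
```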

    Note that evaluateBy now takes a parameter f, which can be any function that takes a Board and returns an Ord as required by maximize.

    A function like evaluateBy that takes another function as a parameter is called a higher-order function. They are heavily used in functional programming and in fact we have used them before without giving it any attention. The current case, though, is different. We use this very common functional feature to set the stage for non-intrusive increments.

    Tic Tac Toe with forecast

    Computer knows he will win

    A new requirement comes up: predict the future

    It appears that playing the game is fun but sometimes it is difficult to understand why the computer does a certain move. It would be interesting if we had a bit of insight and could show what the computer is considering. The idea is that we display a forecast: the final board that the computer thinks the game will develop into.

    To this end, we need to capture the board that led to the minimax value. It turns out that this is easier than expected. We can use evaluateBy but we do not pass the static function any more. Instead, we pass a function that maps a board not only to its static value but to a pair of the static value and the board itself.
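The change might be as small as this (a sketch using the names from above; as the text explains next, this version does not compile yet):

```haskell
-- pair every leaf board with its static value; the Double is meant to
-- drive the ordering, while the Board rides along as the forecast
endValue = evaluateBy (\board -> (static board, board))
```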

    This is nice. First and foremost it is a fully non-intrusive increment! We did not change any existing code at all.

    Second, it falls in place very naturally when following the functional style with higher-order functions – without any anticipation of new requirements.

    Think what would have happened in a typical object-oriented style. We can of course achieve similar effects in OO, but this requires some forethought to prepare for new use cases: template methods, the strategy pattern, and the like. Without such anticipation, OO design would lead to intrusive changes. One way or the other: incremental development is harder.

    The downside is: it doesn’t compile.

    Our mapping function must return an Ord, but a pair (any tuple, actually) is only of type Ord if all its elements are of type Ord as well. The Double value is, but the Board type is not.

    Hm, what next? Going back to the definition of Board and changing it? That would be an intrusive change.

    Here we see a big benefit of Frege typeclasses (as opposed to let’s say Java interfaces): we can evolve the data type! Board becomes an instance of the Ord typeclass non-intrusively in a new increment with full static type safety!
    There is not much logic to add since we make all boards equal. For the purpose of ordering our pairs we only need the static Double values of the pair.
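A runnable sketch of the whole forecast increment, again with a toy Board in place of the real game type; the two instance declarations are the entire new logic:

```haskell
data Tree a = Node a [Tree a]

instance Functor Tree where
  fmap f (Node a children) = Node (f a) (map (fmap f) children)

-- toy Board stand-in; the real Board comes from the game code
newtype Board = Board Int

-- evolving the data type: all boards compare equal, so the ordering
-- of (Double, Board) pairs is decided by the Double alone
instance Eq Board where
  _ == _ = True
instance Ord Board where
  compare _ _ = EQ

moves :: Board -> [Board]
moves (Board n) = if n < 3 then [Board (n + 1), Board (n + 2)] else []

static :: Board -> Double
static (Board n) = fromIntegral n

gameTree :: Board -> Tree Board
gameTree b = Node b (map gameTree (moves b))

prune :: Int -> Tree a -> Tree a
prune 0 (Node a _)        = Node a []
prune n (Node a children) = Node a (map (prune (n - 1)) children)

maximize, minimize :: Ord a => Tree a -> a
maximize (Node a [])       = a
maximize (Node _ children) = maximum (map minimize children)
minimize (Node a [])       = a
minimize (Node _ children) = minimum (map maximize children)

evaluateBy :: Ord b => (Board -> b) -> Board -> b
evaluateBy f = maximize . fmap f . prune 5 . gameTree

-- the forecast: the minimax value plus the board it stems from
forecast :: Board -> (Double, Board)
forecast = evaluateBy (\b -> (static b, b))
```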

    This code looks so simple but the implications are huge. By making Board an instance of Ord, we have given the Board type a new capability!

    Note that this even works when the source code of Board is not available. It can come from a third-party library that we only have in binary form.
    We can even evolve foreign data types.

    Now we have everything compiling and working. You can play the game and see the forecast working.

    In the next episode, we will look back and draw some general conclusions from what we have seen.

    Dierk König



    Episode 1
    Episode 2
    Episode 3
    The paper Why functional programming matters

    The full source code. Play the game here.











    Why functional programming really matters – episode two

    January 17th, 2015

     Why functional programming really matters:
    providing more options for incremental development.

    In episode one we saw how lazy evaluation allows us to separate the generation of potentially infinite data structures from the functions that work on them. We will now follow up on the same example from a different angle: how to design with functions for incremental development.

    Episode Two: Designing with functions and composition

    So far we have developed means for building a tree of game boards where child nodes represent the boards that result from the players mutually placing their moves.

    In order to find the next best move we have to give some value to a board. Let’s go with a Double value where 1 represents that the computer has won and -1 for the opponent. A function that tells us the static value of a board has this type:
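The type in question can be written as follows (Board being the game's board type):

```haskell
static :: Board -> Double
```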

    For the purpose of our discussion, we don’t need to know how it is implemented for our special game. If you are curious, you can find a simple valuation for the tic tac toe game here.

    Let’s pause for a minute and think what we would do in object design now. We may be tempted to make “static” a method of the board, right? That would be beneficial because we can encapsulate the method with the data structure that it works upon, especially when it uses internals of the board or makes assumptions about them. The downside is that when we change our mind on how to use the board, this becomes an intrusive change.

    So when we consider our possible moves, we would pick the one that leads to the board with the highest value – at least if we only look one move ahead. As soon as we look a little further, we also have to consider what our opponent does. It is reasonable to assume that he will also do his best move, which for him is the one with the lowest value. And he in turn will assume that we do our best, and so it goes on.

    That is to say that the static value of a board is only interesting for the leaves in our tree. All other values are derived from the maximum/minimum value one level below. Here is a figure that shows the calculation of the next move for a given board. Note that static valuations are shown in circles; minimax values have no circle around them.

    minimax algorithm

    Here is the natural way of putting this into a functional design:
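A sketch of that design in Haskell notation (the real Frege listing may differ in details). The mutual recursion encodes the alternating turns: my best move assumes the opponent's best answer:

```haskell
data Tree a = Node a [Tree a]

-- at a leaf, take the payload itself;
-- otherwise the players alternate between maximum and minimum
maximize :: Ord a => Tree a -> a
maximize (Node a [])       = a
maximize (Node _ children) = maximum (map minimize children)

minimize :: Ord a => Tree a -> a
minimize (Node a [])       = a
minimize (Node _ children) = minimum (map maximize children)
```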

    Is this really “natural”? Yes. The functional way assumes only the least that is needed (the “weakest precondition”, as in Hoare proofs). In order to minimax a tree, we only need to assume that the payload can be ordered in some way (in Java terms, it is “comparable”). In the type annotation Tree a -> a, the “a” is a generic type parameter. It tells us that our function maps from a tree of “a” payloads to a value of exactly that type “a”. The only thing that we assume about the “a” type is the Ord a => constraint, i.e. the “a” type must be an instance of the Ord typeclass such that we can call minimum and maximum on a list thereof.

    There is a second indication that such an early abstraction is a normal thing to do: we can delete the type declarations entirely and let Frege infer the types. Guess what. It will infer exactly the types that we have given! What can be more natural than that?

    Thanks to this early abstraction, we have a totally generic solution for the minimaxing of trees, which is fully independent of our game trees. Again, we have made a non-intrusive increment.

    But does it work? In our game tree, “a” is of type Board and boards are not (yet) comparable. This is where function composition appears again. We have already seen it when we composed the pruning function with the gameTree to produce a prunedTree from (prune 5 . gameTree). By the same means we can make a tree of boards into a tree of the static values of those boards.

    Remember that the tree is not materialized! Only on demand, when we access the children, will they be built. The same holds for the “static” function: only when we access the payload will (static board) be evaluated.

    The function that calculates the maximum value for a board is again a composition, which we can write either point-free or in expanded form.
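Both forms can be sketched as follows (runnable, with toy stand-ins for Board, moves, and static; a treeMap helper stands in for whatever mapping function the original uses):

```haskell
data Tree a = Node a [Tree a]

-- toy stand-ins for the real game definitions
type Board = Int
moves :: Board -> [Board]
moves b = if b < 3 then [b + 1, b + 2] else []
static :: Board -> Double
static = fromIntegral

gameTree :: Board -> Tree Board
gameTree b = Node b (map gameTree (moves b))

prune :: Int -> Tree a -> Tree a
prune 0 (Node a _)        = Node a []
prune n (Node a children) = Node a (map (prune (n - 1)) children)

treeMap :: (a -> b) -> Tree a -> Tree b
treeMap f (Node a children) = Node (f a) (map (treeMap f) children)

maximize, minimize :: Ord a => Tree a -> a
maximize (Node a [])       = a
maximize (Node _ children) = maximum (map minimize children)
minimize (Node a [])       = a
minimize (Node _ children) = minimum (map maximize children)

-- point-free composition ...
maxValue :: Board -> Double
maxValue = maximize . treeMap static . prune 5 . gameTree

-- ... and the same definition in expanded form
maxValue' :: Board -> Double
maxValue' board = maximize (treeMap static (prune 5 (gameTree board)))
```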

     You may ask where the board is in this definition. After all, maxValue takes a board and returns a Double. Where is the board parameter? Well, we could bring it in, but the definition above nicely shows that at this point we care not about function application but about function definition: the maxValue function is composed from the functions on the right.
    Note that the “=” sign denotes a definition, not an assignment. We never use assignments in pure functional programming.

    We now have a set of functions that we can compose to our liking. The compiler will infer all the types and make sure that they properly line up. For example, we cannot accidentally maximize a tree of values that don’t define an order. But we can have such trees – actually, gameTree is of that nature.
    We can prune before or after calling static, while the type system catches errors such as putting prune in a place where it cannot work.

    This is not only a nice toolbox to play with. It is compositional in more than one meaning of the word.
    We have come to this point in a fully non-intrusive, incremental manner, never going back to change previously defined functions or data types.

    In the next episode we will do incremental development with the help of higher-order functions and typeclasses. The goal will be to show a forecast of how we think the game will develop.

    Dierk König



    Episode 1
    Episode 2
    Episode 3
    The paper Why functional programming matters 

    The full source code. Play the game here.







    Why functional programming really matters – episode one

    January 12th, 2015

     This is why functional programming really matters: 
    it provides new options for incremental development.

    How do we develop software? One line at a time.

    All software development is incremental. We have learned to embrace this fact and use it to our advantage. We start small, seek feedback, learn fast, and change direction as we go.

    This requires means to develop incrementally, which at first sight looks like a simple task. After all, software is soft and can be changed at any time. But whenever we touch existing code we may introduce subtle new errors, and we have therefore devised a whole series of methods and tools to mitigate that risk. Among them are:

    • object encapsulation to limit the error scope,
    • test-driven development,
    • static code analysis, refactoring, and other tools.

    When we have to change and recompile existing code in order to proceed, we call this approach intrusive.

    Of course, the best would be if we could apply increments non-intrusively: without touching any existing code at all! This is sometimes possible.

    • Modular architectures allow us to simply add new components or services.
    • In object-oriented systems we can use subtyping to add new implementations for existing abstractions.
    • With dynamic languages, we can add new capabilities to existing classes.

    But all of these options have their limitations. They either work only for anticipated extensions or they come at the expense of increased complexity or weakened type safety, which in the future makes it harder to extend the extensions again.

    We would certainly profit if we had more ways of extending software safely.

    Functional programming comes with such new options, as John Hughes has shown in his seminal paper “Why functional programming matters“. He puts it in terms of modularity and new “glue” for combining modules: higher-order functions and lazy evaluation.

    tic tac toe

    The paper is now 25 years old but still offers a lot of fresh insights. The examples are presented in the Miranda language, and to fully appreciate them, I translated them to Frege (a Haskell for the JVM). His line of argumentation culminates in the last example: a generic solution for playing board games. A running implementation plus the full source code for a simple tic tac toe game is available here.

    Following the paper while implementing the game led to some unexpected experiences that we will talk about in this series of blog posts.

    Episode One: Be infinitely lazy and defer all work

    A board game is a series of mutual moves, each resulting in a new board position. Since there are many possible moves and a number of possible reactions by the opponent, the possible unfoldings of the game form a tree of board positions.

    Depending on the game rules, that tree can be infinitely large.

    The natural way of modeling this in a lazy functional language like Frege or Haskell is to not care about the size of the tree at all but only about how to construct it.
    We stay focused on the generation task and defer the limitation work until later.

    Here is the construction for an arbitrary payload “a”. Children of each node recursively arise from making Nodes for all the payloads that “a” can unfold to.
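A sketch of that construction in Haskell notation (Frege is nearly identical; the original names may differ):

```haskell
data Tree a = Node a [Tree a]

-- children of each node arise recursively from everything the
-- payload can unfold to; the resulting tree may be infinite
buildTree :: (a -> [a]) -> a -> Tree a
buildTree unfold a = Node a (map (buildTree unfold) (unfold a))
```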

    We do not care about a base case to stop the recursion or to limit the size of the tree, but note that calling buildTree will exhaust neither memory nor computation time.
    Laziness in the language ensures that children are constructed only when needed.

    This is now something that we can specialize for a given data type Board and a function moves that unfolds a board into all boards that result from applying a valid move.
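The specialization might look like this; Board and moves are toy stand-ins (here boards unfold forever, which is fine thanks to laziness):

```haskell
data Tree a = Node a [Tree a]

buildTree :: (a -> [a]) -> a -> Tree a
buildTree unfold a = Node a (map (buildTree unfold) (unfold a))

-- toy stand-ins: the real Board and moves come from the game code
type Board = Int
moves :: Board -> [Board]
moves b = [b + 1, b + 2]

-- the specialization: no subtyping needed, fully type-safe
gameTree :: Board -> Tree Board
gameTree = buildTree moves
```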

    The specialization needs no subtyping and is yet fully type-safe.
    We have extended the system non-intrusively, and we will do so again for limiting the size of the game tree.

    When an evaluation of the tree shall be limited to a depth “n”, we simply prune the tree at that depth. That code is independent of our special game use case and can therefore work on nodes of arbitrary trees. Here is the recursive definition.
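A sketch of that definition in Haskell notation (the Frege original may differ in details):

```haskell
data Tree a = Node a [Tree a]

-- cut the tree off at depth n; below that, nodes become leaves
prune :: Int -> Tree a -> Tree a
prune 0 (Node a _)        = Node a []
prune n (Node a children) = Node a (map (prune (n - 1)) children)
```

Thanks to laziness this even works on an infinite tree: only the part above the cut is ever constructed.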

    To make a pruned game tree we use function composition (.) for the generation:
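The composition might be sketched like this (toy stand-ins for Board and moves; the unfolding is deliberately infinite):

```haskell
data Tree a = Node a [Tree a]

buildTree :: (a -> [a]) -> a -> Tree a
buildTree unfold a = Node a (map (buildTree unfold) (unfold a))

-- toy stand-in: an infinitely unfolding game
type Board = Int
moves :: Board -> [Board]
moves b = [b + 1, b + 2]

gameTree :: Board -> Tree Board
gameTree = buildTree moves

prune :: Int -> Tree a -> Tree a
prune 0 (Node a _)        = Node a []
prune n (Node a children) = Node a (map (prune (n - 1)) children)

-- compose generation and limitation: nothing below level 5
-- is ever constructed
prunedTree :: Board -> Tree Board
prunedTree = prune 5 . gameTree
```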

    Note that we still haven’t materialized the tree! A user of the pruned tree can never look below level 5. No children below that level are ever evaluated, and thus – thanks to laziness – never get constructed.

    Laziness allows us to work incrementally not only from generic tree generation to specialized logic and data types. We even apply pruning conditions non-intrusively “from the outside”. We didn’t need to anticipate the need for pruning.

    We never had to go back to previous code to change it. We did not even re-compile!

    This is it for episode one. Stay tuned for more episodes.

    Dierk König




    Episode 1
    Episode 2
    Episode 3
    The paper Why functional programming matters 

    The full source code. Play the game here.








    OpenDolphin 0.9 released with push for many devices

    March 13th, 2014

    The OpenDolphin project has released version 0.9.

    This new release includes a new “push” capability for instant, bi-directional updates and a full-featured OpenDolphin web client for use with HTML5 and JavaScript/TypeScript.

    The inclusion of the web client OpenDolphin.js makes it possible to share the same OpenDolphin application on desktop and web, including the mobile web.

    OpenDolphin on many devices

    The push capability has been moved from the demos into the OpenDolphin core and is now an official feature with a simple API, standard behavior, and – as always – the option to replace the default implementation with your preferred technology.

    Both OpenDolphin.js and the push capability, in combination with the existing OpenDolphin features, allow for interesting new use cases, many of which are showcased in this release:

    • multi-device applications where the same application runs on many devices
    • re-connect to an application exactly where you left off even after a longer offline period
    • follow-me applications where you start your work on one device – let’s say on the road with a smartphone or a tablet – and complete your work in the office with a desktop application for longer text entries
    • team-applications where a whole team works collaboratively and everybody immediately sees all contributions from all other team members, avoiding double-work and conflicts, and mutually completing, correcting, and helping with each other’s work
    • point-of-sale solutions where personnel shares a limited set of workstations, freely moving between them and always having their latest context available
    • application integration where you combine the strength of different view technologies, say JavaFX for the layout and web technology for charting, into an integrated OpenDolphin application.

    Furthermore, a number of performance enhancements and consistency checks have been included.

    Please find the full list of changes in the release notes and check out the latest demos.

    From Java to JavaScript/TypeScript

    January 14th, 2014

    Experiences when porting the OpenDolphin client to the web

    In spring of 2013 we started with a prototype of a JavaScript port of the OpenDolphin client, which at that time was only available for the Java platform and mainly used with Swing and JavaFX.
    We did a few demos with that prototype and found it so valuable and promising that we decided to bring it to production quality.

    This blog post describes the experiences that we made along the way.

    Non-functional requirements and technology choices

    For the sake of getting up to speed quickly, the prototype had used a zoo of technologies:
    grunt, bower, require, jquery, angular, node, and a few more. For a production version, though, we
    wanted to have the fewest possible dependencies, such that a user of OpenDolphin.js
    would not run into compatibility issues with his own choice of libraries and their
    respective versions.
    We found namespacing and module management to be critical and decided to stay with
    require.js (with the option to drop that later) but drop all other runtime dependencies.

    Since OpenDolphin.js is a library (as opposed to an application) we expect that we
    have to maintain it over quite some time – a longer period of time than most
    current development tools will be available.
    That is: we can only use tools that produce tool-independent artifacts.
    This consideration ruled out tools like Dart and GWT for development since their
    generated JavaScript is not well-suited for development once the tool is gone.
    We tried TypeScript for a week before we decided that it looked pretty much like what we
    needed (our state of information at the time of decision):
    – namespacing
    – modularization
    – abstraction over types (classes and interfaces)
    – type safety
    – code navigation (“where used”)
    – simple test facility
    – source-code debugging
    – very readable JavaScript source code generation
    In fact, we found the generated JavaScript to be more consistent and readable
    than what I had written manually – mainly because it wouldn’t have any of the
    dirty tricks that one can do in JavaScript.

    Experiences with TypeScript

    In hindsight, we should have experimented more before starting the real
    development and should have tested the distribution story. As it turned out
    I misunderstood TypeScript modules as being mainly a tool to provide
    namespaces – like packages in Java – but it is just as well a distribution
    mechanism. You see that when you try to “package” a module that spans
    over multiple files, which is not supported.
    A helpful voice pointed that out to me on stackoverflow and correctly
    mentioned that the documentation also says so (even though not very
    prominently).
    I was finally able to combine all my generated JavaScript in a single file
    with some Gradle/Groovy trickery, but overall it cost me a day of
    investigation, trial, error, and fixing.

    We wasted two more days hunting down two TypeScript errors that we
    simply didn’t expect to be possible: we ran into a situation where
    TypeScript inheritance silently didn’t work and on a second occasion
    we got two different instances of the same static reference.

    Lesson learned: avoid inheritance and statics even more than usual.

    This lesson actually turned out to be beneficial because it led to a simpler
    design, e.g. around the HttpClientConnector that we refactored from a
    subclass into a Transmitter interface that the ClientConnector points to
    with NoTransmitter and HttpTransmitter as implementations.
    We plan to refactor the Java version accordingly.

    While it first felt a bit weird that TypeScript cannot compile all
    our *.ts files at once, we learned to value the fast incremental compile
    especially when combined with the Intellij IDEA file watcher setup.

    In the beginning TypeScript feels as if it made your JavaScript more object-oriented –
    or better say, type-oriented. But the real benefit is that it
    very much improves the “functional programming” story, which is already
    pretty strong in JavaScript but TypeScript adds more structure to it
    when declaring, implementing, or calling functions.
    This is kind of an under-marketed feature of TypeScript.

    We struggled a bit with how to organize *.ts files and references
    between them in a way that works consistently in all scenarios
    – when delivered from a web server or node.js
    – when loaded from a static html file
    – when testing
    – when debugging
    and I’m afraid our solution is not optimal since it led to
    quite a lot of duplication in the references (relative file paths).
    It would be helpful if the TypeScript documentation gave better
    advice on the topic or proposed best practices.

    The TypeScript documentation was generally a double-edged experience.
    While the language part is well documented, the API is not.
    When searching for a feature I often landed at places where the
    shape of that feature was discussed between the developers
    but could not find any kind of “user guide”.
    But, well, it is version 0.9 and it is free, so we should not complain
    especially since the response time for questions on stackoverflow
    was awesome.

    IDE support inside Intellij IDEA is pretty strong and has helped us a lot.
    Basic things like highlighting, navigation, code completion, and so on
    work reasonably well and certainly better than for plain JavaScript.
    One should not expect the same sophistication as for Java or Groovy,
    though, when it comes to refactoring across multiple files.

    Would I decide for TypeScript again? Definitely yes!
    But I would allow more time for finding the best setup.

    OpenDolphin for the web

    OpenDolphin is use-case driven. We only include a feature if we have
    a use case for it, and with any use case comes a demo. This nicely
    leads to the comfortable situation that every feature is used
    in at least one demo.

    The additional feature of OpenDolphin.js over the Java version is:
    “it also works in the browser”, be it desktop or mobile.
    Consequently, we created some demos.

    Creating the demos immediately revealed how well the OpenDolphin architecture
    (structure, concepts, abstractions) fits for web development. Having
    – views in HTML5 that are totally separate,
    – a binding technology of your choice
    (JavaScript, TypeScript, GSP, vanilla.js, angular.js, GWT, …), and
    – server-side control over the presentation models with Java
    leads to a new level of enterprise applications.

    These new enterprise applications are automatically consistent,
    they update their content immediately, they share information with other
    users, and facilitate user collaboration at a totally new level.
    One can see these features boiled down to the bare minimum in the chat demo.

    Best of all: desktop, web, mobile, and even embedded applications all share
    the same information at runtime and the same programming model!
    They all connect to the exact same server-side application logic.

    The web for application development

    For anyone with experience in desktop UI toolkits like JavaFX, Swing, or
    equivalents, web development feels casual and coincidental.
    Obviously, one can “get things done” but it is hard to say whether they
    only work by chance, what the actual constraints are, and on which
    guarantees one can build (pun intended).

    Take a slider as in the master-detail demo.
    There is no such control in HTML5 – as there are no
    real “controls” in the browser anyway. Ok, you can use the “input” element
    with type “range” in some browsers. But which events are fired when
    you drag the thumb? And are they fired while you drag or only after
    stop? Happy testing in all your supported browsers!
    Now try to enlarge the thumb for touch support with CSS – and
    consider that some browsers use circles and others use squares…

    I wouldn’t even go into more advanced controls like tables. HTML5 has no control
    that even remotely compares to what you have in the most basic JavaFX table view.

    But sliders and tables are only the tip of the iceberg. There are no controls.
    There are no layout managers. There is no usable logging. Basic datatypes are missing.
    Performance characteristics of existing datatypes are unclear. Object lifecycle
    is unclear. Scoping is counter-intuitive.
    To make matters worse, you can add ever-changing libraries that try to fill these gaps.
    Welcome to dependency hell!

    In short: the browser is a problematic execution environment.

    Now, if developing applications for the web is so immature, why do we even
    bother creating OpenDolphin.js? Because the web is everywhere and easy to access
    and there is an undeniable request for enterprise applications to have selected parts
    of their functionality visible and accessible on the web.

    The web makes an extremely good distribution channel for information.
    While creating the OpenDolphin web demos, I started to make extensive use of
    the browser features for developers (I know, I’m late to the party).
    These are extremely helpful while developing and occasionally scary when they
    reveal what many of the websites I visit are doing on my machine.
    One should always keep in mind that there is no privacy in the browser.

    As it stands today, we can recommend this mix for enterprise applications:
    • desktop (preferably in JavaFX) for the office worker to safely edit the core of the system
    • web and mobile “extensions” for special purposes
    (dashboards, directions, check lists, mobile data capture, location, local sensors, field work)
    all conveniently integrated in the OpenDolphin architecture.

    Lessons learned

    When creating the JavaScript implementation of the OpenDolphin architecture, we of
    course profited much from having the Java implementation as a blueprint.
    But it actually turned out to be beneficial both ways: the Java version also
    profited, since it was implicitly reviewed along the way, and we could improve it in two places
    where even our 100% test coverage was not good enough to uncover a bug.

    Furthermore, the architecture was scrutinized a second time and I feel
    “implement a second time in a different technology” is a great validation
    technique for any architecture. This also led to a better distinction between
    rules and concepts of the architecture versus special design decisions for
    each implementation.

    JavaScript/TypeScript has fewer features than Java – especially around
    concurrency and inheritance – which forced us into simpler albeit less powerful designs
    that we may partially bring back to the Java (“compact-0 profile”) implementation in the future.

    It was a pleasure to see that with OpenDolphin.js we can reap the same
    architectural benefits for web applications that we experienced for desktop apps
    while adding the web as a new option for exchangeable visualization technologies.

    That was what we were hoping for but then to see it working out so well
    has been a huge reward for the effort.

    Dierk König

    P.S. OpenDolphin.js will be part of the imminent 0.9 release of OpenDolphin.