  • Review Frege Day 2015

    September 18th, 2015

    On a sunny day in September 2015 a group of functional programming enthusiasts assembled in Basel, Switzerland, for the first Frege Day.

    Frege is a Haskell for the JVM and brings the purely functional programming paradigm to the Java platform.

    It was so nice to finally meet in person all those whom we otherwise only know from mail, Twitter, and other internet channels. There is nothing like the energy that comes from dedicated people coming together.

    But the gathering was not limited to the Canoo office in Basel. It also virtually included Frege friends in New Zealand, Australia, and the US, who extended their usual working hours late into the evening or started in the very early hours of the morning. This made it a truly international conference!

    It was a special honor to also have the Haskell legend Simon Peyton Jones joining via video. He encouraged us to further pursue our path of becoming “the” Haskell for the JVM. You can read the transcript here: https://github.com/Frege/frege/wiki/Simon-Peyton-Jones-transcript,-Frege-Day-2015

    This has become the major topic of the day: how to evolve the language, the libraries, the tools, and the surrounding material to make pure functional programming on the JVM a fruitful and enjoyable experience. You can read more about the details here: https://github.com/Frege/frege/wiki/Protocol-for-Frege-Day-2015

    The day was a major boost for the emerging community and there is lots of potential for even higher popularity.

    Stay tuned for Frege Day 2016!

    Meet us at the JavaOne 2015

    July 31st, 2015

    Only a few weeks until the start of JavaOne 2015, the leading developer conference for Java. The event takes place from October 25th to 29th in San Francisco.

    Don’t miss the chance to attend the talks by our experts and to get in touch with us. Please visit the official Content Catalogue for more details.

    Would you like to have a personal meeting? No problem. Please use our meeting request form to tell us your preferred date.

    See you in San Francisco!

    WhyFpReallyMatters: going safely parallel

    February 11th, 2015

    After having written episodes 1, 2, and 3, I planned to close the series with some parting thoughts about what we have achieved and where I struggled along the way. But in the process of writing down my findings, it occurred to me that we have achieved an unusual state that we can leverage to our advantage:

    All our code is purely functional

    1) There is not a single assignment in the code, no state that needs to get updated, and therefore also no shared mutable state that could impose any issues when going parallel.

    When recognizing a = sign in the code, the casual reader may misinterpret it as an assignment, which it is not. It is a one-time definition that never changes. All is safe.

    2) There are no side effects in the code.

    3) You cannot imagine my surprise when I recognized that we don’t even have to check 1) and 2) across the whole of our code, because the Frege type system validates our assumptions about purity automatically!

    One often advertised advantage of functional programming is its improved support for parallel execution when compared to imperative solutions. And that claim is true – to some degree. The point is that “a little bit of functional style” helps very little when going parallel. As long as you cannot guarantee that all the code you are going to execute in parallel is MT-safe, the ability to call parallel versions of higher-order functions like map, filter, reduce, etc. is only of superficial syntactic help. With that approach, it remains the obligation of the programmer to inspect the whole call chain – from every function that is called inside a concurrent operation (like map) down to its very last primitive statement – to validate it for MT safety. And even if all the code is safe at the time the concurrent use is introduced, what about later changes, where the maintainer is not aware that he has to preserve that characteristic?

    Tic Tac Toe in parallel

    This is where the big difference between
    “functional style” and
    “pure functional” programming
    becomes not only undeniably apparent but also of utmost practical relevance.

    A new requirement comes up: go parallel

    Let’s try to evaluate our Tic Tac Toe moves in parallel. There is one line where we map all boards that can arise from making a valid move to their evaluation, where probe is a function that is used both for finding the best board via the minimax algorithm and for returning the forecast:

    In order to make the mapping parallel, we can use the mapP function from the frege.lib.ForkJoin module:
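    The code lines themselves did not survive in this copy of the post; a sketch of the two variants (the names probe, lookahead, and possibleMoves are taken from the surrounding text, and their exact definitions are assumed):

    ```haskell
    import frege.lib.ForkJoin (mapP)

    -- sequential: evaluate every board reachable by a valid move
    evaluations  = map  (probe lookahead) possibleMoves

    -- parallel: mapP instead of map – one additional character
    evaluations' = mapP (probe lookahead) possibleMoves
    ```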

    The superficial syntactic advantage is the opportunity to make the code parallel with one (!) additional character in the code.

    While this is kind of cool in itself, the really impressive effect is that this change is guaranteed to be safe! This is fundamental and it works because of the way that purity is handled through the type system.

    And it comes about like this: the type signature of map is

    map :: ( α -> β ) -> [α] -> [β]

    Let’s start with the first argument ( α -> β ).

    It means that map's first argument is a function from any unconstrained type α to any unconstrained type β. Let’s call that function f. The map function applies its argument function f to all elements of a list of αs to produce a list of βs.
    In our case f is (probe lookahead).
    If (probe lookahead) used any kind of state or any kind of side effect, then type α or type β (or both) would need to carry a type constraint like State α or IO β.
    In such a case, map (probe lookahead) would simply fail to compile.
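    To illustrate the point with hypothetical function names: an effectful valuation would have its result wrapped in IO, and the surrounding code that expects plain Doubles would no longer type-check:

    ```haskell
    probePure :: Board -> Double      -- pure: fits (α -> β)
    probeIO   :: Board -> IO Double   -- effectful: β would be IO Double

    values :: [Double]
    values = map probePure boards     -- compiles

    -- values = map probeIO boards    -- type error: [IO Double] is not [Double]
    ```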

    In other words: the signature of map requires purity of f and the Frege type system enforces this down the whole call chain! We do not have to go back and check ourselves nor can any later intrusive change by a maintainer undermine our foundation.

    Now, what happens when we go from map to mapP ? Here is the type signature of mapP :

    mapP :: ( α -> β ) -> [α] -> [β]

    That is exactly the same! The whole line of argumentation from above persists!

    Let that sink in for a minute.

    With Frege, we can again apply a non-intrusive increment: from sequential to parallel execution in an absolutely safe manner. Purity and its handling in the type system make that possible.
    In all other popular JVM languages this would have been an immensely intrusive change (even if old code was not edited at all – changing its mode of operation is intrusive) with a huge potential to introduce hard-to-detect errors.
    And again, we were able to apply this non-intrusive increment without having planned for it. We just did what is usual in Frege and it turned out to just fit.

    We can almost always safely go from map to mapP.

    There are two considerations that one should weigh before blindly making all calls to map parallel:
    1) Granularity
    We applied parallel execution on the top-most layer of the evaluation only, not down to all splits of possible boards. That is a usual performance concern for data-parallel concurrency where we weigh the relative overhead of thread scheduling against the size of the unit of work.
    2) Infinite data structures
    It usually makes no sense to perform parallel execution lazily, since new threads should start working early enough to have their results ready when requested. But that approach would never finish when applied to infinite data structures at the top-most level of expressions. Therefore, mapP enforces the weak head normal form (WHNF) of the given list. That still allows laziness at lower levels.

    But those considerations are fully inside our mental context when applying our increment. We are in the position to decide about granularity and we know that possibleMoves is a finite list. It is all on one line and we do not have to go further.

    My personal experience was very strange when I applied concurrency to the Tic Tac Toe game. I went from map to mapP, let the code run, and enjoyed the speedup. Then I fell back into my old habit of paranoid concurrency thinking: what could possibly go wrong?

    And for the first time ever in such a situation, I couldn’t think of any possible issue.

    I still have trouble believing it.

    In the next episode, we may finally come to the parting thoughts.

    Dierk König


    Episode 1
    Episode 2
    Episode 3
    Episode 4
    The paper Why functional programming matters

    The full source code. Play the game here.

    Why functional programming really matters – episode three

    January 19th, 2015

    Why functional programming really matters:
    providing more options for incremental development.

    In episode two we have seen how designing with functions naturally leads to incremental development. We will now extend our running example with higher-order functions and data type evolution.

    Episode Three: higher-order functions and data type evolution

    In episode one we made the case for incremental development and the importance of staying non-intrusive. What we will do now is change code from the last increment.

    a stack of increments

    Changing code is what we declared as being intrusive, but changing the last increment is allowed.

    The problem with changing code is that any code that depends on it may be compromised. That is the intrusion. Since no code can possibly depend on the last increment, changing it is safe.

    With that consideration out of the way, let’s look at our last increment that finds the minimax value for a given board:
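    That increment is missing from this archived copy; reconstructed from episode two's description, with a hypothetical helper mapTree that applies a function to every payload of the tree:

    ```haskell
    -- map every board in the pruned game tree to its static value,
    -- then run minimax over the resulting tree of Doubles
    evaluate = maximize . mapTree static . prune 5 . gameTree
    ```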

    It works fine but we would like to make it even more idiomatic in the functional sense.

    We have already seen that functional programmers like to introduce abstractions very early and there is a hidden abstraction in the code above. We map a tree of boards to a tree of doubles. The structure of the tree remains untouched. This is the nature of an abstraction called Functor. Functors support a general function fmap that keeps the current structure but applies a function f to all elements. It turns out that our Tree type is such a Functor.
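    For the Tree type used in this series, such a Functor instance might look like this (the Tree definition is assumed from the earlier episodes):

    ```haskell
    data Tree a = Node a [Tree a]

    instance Functor Tree where
        -- keep the structure, apply f to every payload
        fmap f (Node a children) = Node (f a) (map (fmap f) children)
    ```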

    To use the Functor nature of trees, we use fmap f instead of static.

    Note that evaluateBy now takes a parameter f, which can be any function that takes a Board and returns an Ord as required by maximize.

    A function like evaluateBy that takes another function as a parameter is called a higher-order function. Such functions are heavily used in functional programming, and in fact we have used them before without giving it any attention. The current case, though, is different. We use this very common functional feature to set the stage for non-intrusive increments.
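    The original listing of evaluateBy is not preserved in this copy; a sketch under the assumption that maximize, prune, and gameTree are the functions from the earlier episodes:

    ```haskell
    -- f can be any function from Board to some instance of Ord
    evaluateBy f = maximize . fmap f . prune 5 . gameTree
    ```

    Calling evaluateBy static then reproduces the previous behavior.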

    Tic Tac Toe with forecast

    A new requirement comes up: predict the future

    It appears that playing the game is fun but sometimes it is difficult to understand why the computer does a certain move. It would be interesting if we had a bit of insight and could show what the computer is considering. The idea is that we display a forecast: the final board that the computer thinks the game will develop into.

    To this end, we need to capture the board that led to the minimax value. It turns out that this is easier than expected. We can use evaluateBy but we do not pass the static function any more. Instead, we pass a function that maps a board not only to its static value but to a pair of the static value and the board itself.
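    A sketch of that call (the name of the binding in the original post is not preserved; the lambda follows the description above):

    ```haskell
    -- pair the static value with the board it belongs to;
    -- the forecast is the Board component of the result
    forecast = evaluateBy (\board -> (static board, board))
    ```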

    This is nice. First and foremost it is a fully non-intrusive increment! We did not change any existing code at all.

    Second, it falls in place very naturally when following the functional style with higher-order functions – without any anticipation of new requirements.

    Think what would have happened in a typical object-oriented style. We can of course achieve similar effects in OO, but this requires some forethought to prepare for new use cases: template methods, the strategy pattern, and the like. Without such anticipation, OO design would lead to intrusive changes. One way or the other: incremental development is harder.

    The downside is: it doesn’t compile.

    Our mapping function must return an Ord, but a pair (any tuple, actually) is only of type Ord if all its elements are of type Ord as well. The Double value is, but the Board type is not.

    Hm, what next? Going back to the definition of Board and changing its definition? That would be an intrusive change.

    Here we see a big benefit of Frege typeclasses (as opposed to, say, Java interfaces): we can evolve the data type! Board becomes an instance of the Ord typeclass non-intrusively, in a new increment, with full static type safety!
    There is not much logic to add, since we make all boards equal. For the purpose of ordering our pairs we only need the static Double values in the pair.
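    A sketch of such an instance in Haskell notation (Frege spells the comparison operator <=>, so the details may differ slightly):

    ```haskell
    -- all boards compare as equal, so the ordering of
    -- (Double, Board) pairs is decided by the Double alone
    instance Eq Board where
        _ == _ = True

    instance Ord Board where
        compare _ _ = EQ
    ```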

    This code looks so simple but the implications are huge. By making Board an instance of Ord, we have given the Board type a new capability!

    Note that this even works when the source code of Board is not available. It can come from a third-party library that we only have in binary form.
    We can even evolve foreign data types.

    Now we have everything compiling and working. You can play the game and see the forecast working.

    In the next episode, there is a big surprise waiting with regard to parallel execution.

    Dierk König



    Episode 1
    Episode 2
    Episode 3
    Episode 4
    The paper Why functional programming matters

    The full source code. Play the game here.

    Why functional programming really matters – episode two

    January 17th, 2015

    Why functional programming really matters:
    providing more options for incremental development.

    In episode one we have seen how lazy evaluation allows us to separate the generation of potentially infinite data structures from the functions that work on them. We will now follow up on the same example from a different angle: how to design with functions for incremental development.

    Episode Two: Designing with functions and composition

    So far we have developed means for building a tree of game boards where child nodes represent the boards that result from the players mutually placing their moves.

    In order to find the next best move we have to give some value to a board. Let’s go with a Double value, where 1 represents a win for the computer and -1 a win for the opponent. A function that tells us the static value of a board has this type:
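    The type declaration itself was lost in this copy; from the description it reads:

    ```haskell
    static :: Board -> Double
    ```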

    For the purpose of our discussion, we don’t need to know how it is implemented for our particular game. If you are curious, you can find a simple valuation for the tic tac toe game here.

    Let’s pause for a minute and think what we would do in object design now. We may be tempted to make “static” a method of the board, right? That would be beneficial because we can encapsulate the method with the data structure that it works upon, especially when it uses internals of the board or makes assumptions about them. The downside is that when we change our mind on how to use the board, this becomes an intrusive change.

    So when we consider our possible moves, we would pick the one that leads to the board with the highest value – at least if we only look one move ahead. As soon as we look a little further, we also have to consider what our opponent does. It is reasonable to assume that he will also do his best move, which for him is the one with the lowest value. And he in turn will assume that we do our best, and so it goes on.

    That is to say that the static value of a board is only interesting for the leaves in our tree. All other values are derived from the maximum/minimum value one level below. Here is a figure that shows the calculation of the next move for a given board. Note that the static valuations are in circles; the minimax values have no circle around them.

    minimax algorithm

    Here is the natural way of putting this into a functional design:

    Is this really “natural”? Yes. The functional way assumes the least that is needed (the “weakest precondition”, as in Hoare proofs). In order to minimax a tree, we only need to assume that the payload can be ordered in some way (in Java terms, it is “comparable”). In the type annotation Tree a -> a, the “a” is a generic type parameter. It tells us that our function maps from a tree of “a” payloads to a value of exactly that type “a”. The only thing that we assume about the “a” type is the Ord a => constraint, i.e. the “a” type must be an instance of the Ord typeclass such that we can call minimum and maximum on a list thereof.

    There is a second indication that such an early abstraction is a normal thing to do: we can totally delete the type declarations and let Frege infer the types. Guess what. It will exactly infer the types that we have given! What can be more natural than that?
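    The listing did not survive in this copy; a reconstruction of the mutually recursive pair from the description (the Tree constructor Node is an assumption):

    ```haskell
    maximize :: Ord a => Tree a -> a
    maximize (Node a [])       = a    -- leaf: use the payload itself
    maximize (Node _ children) = maximum (map minimize children)

    minimize :: Ord a => Tree a -> a
    minimize (Node a [])       = a
    minimize (Node _ children) = minimum (map maximize children)
    ```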

    Thanks to this early abstraction, we have a totally generic solution for the minimaxing of trees, which is fully independent of our game trees. Again, we have made a non-intrusive increment.

    But does it work? In our game tree, “a” is of type Board and boards are not (yet) comparable. This is where function composition appears again. We have already seen it when we composed the pruning function with the gameTree to produce a prunedTree from (prune 5 . gameTree). By the same means we can make a tree of boards into a tree of the static values of those boards.
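    A sketch of that composition (mapTree is a hypothetical helper that applies a function to every payload of a tree):

    ```haskell
    -- a tree of boards becomes a tree of their static values
    staticTree = mapTree static . prune 5 . gameTree
    ```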

    Remember that the tree is not materialized! Only on demand, when we access the children, will they be built. And so it is as well for the static function: only when we access the payload will (static board) be evaluated.

    The function that calculates the maximum value for a board is again a composition

    or in its expanded form
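    Neither form survived in this copy; a reconstruction (again with the hypothetical helper mapTree):

    ```haskell
    -- point-free: maxValue is composed from the functions on the right
    maxValue = maximize . mapTree static . prune 5 . gameTree

    -- expanded form, with the board parameter made explicit
    maxValue' board = maximize (mapTree static (prune 5 (gameTree board)))
    ```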

    You may ask where the board is in this definition. After all, maxValue takes a board and returns a Double. Where is the board parameter? Well, we could bring it in, but the definition above nicely shows that we do not care about function application at this point but about function definition: the maxValue function is composed from the functions on the right.
    Note that the “=” sign denotes a definition, not an assignment. We never use assignments in pure functional programming.

    We now have a set of functions that we can compose to our liking. The compiler will infer all the types and make sure that they properly line up. For example, we cannot accidentally maximize a tree of values that do not define an order. But we can have such trees – actually, gameTree is of that nature.
    We can prune before or after calling static while the type system catches errors such as putting prune in a place where it cannot work.

    This is not only a nice toolbox to play with. It is compositional in more than one meaning of the word.
    We have come to this point in a fully non-intrusive, incremental manner, never going back to change previously defined functions or data types.

    In the next episode we will do incremental development with the help of higher-order functions and typeclasses. The goal will be to show a forecast of how we think the game will develop.

    Dierk König


    Episode 1
    Episode 2
    Episode 3

    Episode 4

    The paper Why functional programming matters

    The full source code. Play the game here.