    Java Performance Workshop | June 23-26, 2015 | Zurich

    May 13th, 2015

    Java Champion Kirk Pepperdine will give a Java Performance Workshop on 23-26 June 2015 in Zurich. He will provide you with techniques that have been proven to improve your ability to find and fix performance bottlenecks. During the four-day workshop you will learn how to:

    • Quickly identify the root causes of poor performance in your applications
    • Eliminate conditions that will prevent you from finding performance bottlenecks
    • Find critical supportive evidence before deciding on a potentially expensive course of action
    • Find performance issues before they make their escape into your production system
    Please find the content of the training days as well as price information on our website. JUG members will get a 10% discount.

    Registration:
    Please use the registration form on our website or send us an email to services@canoo.com.


    WhyFpReallyMatters: going safely parallel

    February 11th, 2015

    After having written episodes 1, 2, and 3, I planned to close the series with some parting thoughts about what we have achieved and where I struggled along the way. But in the process of writing down my findings, it occurred to me that we have achieved an unusual state that we can leverage to our advantage:

    All our code is purely functional

    1) There is not a single assignment in the code, no state that needs to get updated, and therefore also no shared mutable state that could impose any issues when going parallel.

    When recognizing an = sign in the code, the casual reader may misinterpret it as an assignment, which it is not. It is a one-time definition that never changes. All is safe.

    2) There are no side effects in the code.

    3) You cannot imagine my surprise when I recognized that we don’t even have to check 1) and 2) ourselves for the whole of our code, because the Frege type system validates our assumptions about purity automatically!

    One often advertised advantage of functional programming is its improved support for parallel execution when compared to imperative solutions. And that claim is true – to some degree. The point is that “a little bit of functional style” helps very little when going parallel. As long as you cannot guarantee that all the code you are going to execute in parallel is MT safe, the ability to call parallel versions of higher-order functions like map, filter, reduce, etc. is only of superficial syntactical help. With that approach, it remains the obligation of the programmer to inspect the whole call chain of all methods that are called from inside a concurrent operation (like map) down to their very last primitive statement to validate them for MT safety. And even if all the code is safe at the time of introducing the concurrent use, what about later changes, where the maintainer is not aware that he has to preserve that characteristic?

    Tic Tac Toe in parallel

    This is where the big difference between “functional style” and “pure functional” programming becomes not only undeniably apparent but also of utmost practical relevance.

    A new requirement comes up: go parallel

    Let’s try to evaluate our Tic Tac Toe moves in parallel. There is one line where we map all boards that can arise from making a valid move to their evaluation, where probe is a function that is used both for finding the best board via the minimax algorithm and for returning the forecast:
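The snippet is an image in the original post, but from the surrounding prose (which names probe, lookahead, and possibleMoves) the line presumably looked roughly like this sketch:

```haskell
-- map each board reachable by a valid move to its evaluation;
-- (probe lookahead) drives the minimax search and yields the forecast
-- (names taken from the prose; details of the real code may differ)
evaluatedMoves = map (probe lookahead) possibleMoves
```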

    In order to make the mapping parallel, we can use the mapP function from the frege.lib.ForkJoin module:
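Again reconstructed as a sketch, the parallel version would swap map for mapP from the module the text names:

```haskell
import frege.lib.ForkJoin  -- provides mapP, the parallel map

-- identical to the sequential line except for the one extra character
evaluatedMoves = mapP (probe lookahead) possibleMoves
```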

    The superficial syntactical advantage is the opportunity to make the code parallel with one (!) additional character in the code.

    While this is kind of cool in itself, the really impressive effect is that this change is guaranteed to be safe! This is fundamental and it works because of the way that purity is handled through the type system.

    Here is why: the type signature of map is

    map :: ( α -> β ) -> [α] -> [β]

    Let’s start with the first argument ( α -> β ).

    It means that map’s first argument is a function from any unconstrained type α to any unconstrained type β. Let’s call that function f. The map function applies its argument function f to all elements of a list of αs to produce a list of βs.
    In our case f is (probe lookahead).
    If (probe lookahead) used any kind of state or caused any kind of side effect, then type α or type β (or both types) would need a type constraint like State α or IO β.
    In such a case, map (probe lookahead) would simply fail to compile.

    In other words: the signature of map requires purity of f and the Frege type system enforces this down the whole call chain! We do not have to go back and check ourselves nor can any later intrusive change by a maintainer undermine our foundation.

    Now, what happens when we go from map to mapP? Here is the type signature of mapP:

    mapP :: ( α -> β ) -> [α] -> [β]

    That is exactly the same! The whole line of argumentation from above persists!


    Let that sink in for a minute.


    With Frege, we can again apply a non-intrusive increment: from sequential to parallel execution in an absolutely safe manner. Purity and its handling in the type system make that possible.
    In all other popular JVM languages this would have been an immensely intrusive change (even if old code was not edited at all – changing its mode of operation is intrusive) with a huge potential to introduce hard-to-detect errors.
    And again, we were able to apply this non-intrusive increment without having planned for it. We just did what is usual in Frege and it turned out to just fit.

    We can almost always safely go from map to mapP.

    There are two considerations that one should weigh before blindly making all calls to map parallel:
    1) Granularity
    We applied parallel execution on the top-most layer of the evaluation only, not down to all splits of possible boards. That is a usual performance concern for data-parallel concurrency where we weigh the relative overhead of thread scheduling against the size of the unit of work.
    2) Infinite data structures
    Parallel execution usually makes no sense to do lazily, since new threads should start working early enough to have their results ready when requested. But that approach would never finish when applied to infinite data structures at the top-most level of expressions. Therefore, mapP forces the given list into weak head normal form (WHNF). That still allows laziness at lower levels.

    But those considerations are fully inside our mental context when applying our increment. We are in the position to decide about granularity and we know that possibleMoves is a finite list. It is all on one line and we do not have to go further.

    My personal experience was very strange when I applied concurrency to the tic tac toe game. I went from map to mapP, let the code run, and enjoyed the speedup. Then I fell back into my old habit of paranoid concurrency thinking: what could possibly go wrong?

    And for the first time ever in such a situation, I couldn’t think of any possible issue.

    I still have trouble believing it.

    In the next episode, we may finally come to the parting thoughts.

    Dierk König
    @mittie


    Links:

    Episode 1
    Episode 2
    Episode 3
    Episode 4
    The paper Why functional programming matters

    The full source code. Play the game here.


    Why functional programming really matters – episode three

    January 19th, 2015

    Why functional programming really matters:
    providing more options for incremental development.

    In episode two we have seen how designing with functions naturally leads to incremental development. We will now extend our running example with higher-order functions and data type evolution.

    Episode Three: higher-order functions and data type evolution

    In episode one we made the case for incremental development and the importance of staying non-intrusive. What we will do now is change code from the last increment.

    a stack of increments

    Changing code is what we declared as being intrusive, but changing the last increment is allowed.

    The problem with changing code is that any code that depends on it may be compromised. That is the intrusion. Since no code can possibly depend on the last increment, changing it is safe.

    With that consideration out of the way, let’s look at our last increment that finds the minimax value for a given board:
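The original snippet is an image; based on episode two, it presumably looked similar to this sketch, where mapTree is an assumed helper that applies a function to every payload of a tree while keeping its structure:

```haskell
-- reconstructed sketch of the last increment from episode two
evaluate :: Board -> Double
evaluate = maximize . mapTree static . prune 5 . gameTree
```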

    It works fine but we would like to make it even more idiomatic in the functional sense.

    We have already seen that functional programmers like to introduce abstractions very early and there is a hidden abstraction in the code above. We map a tree of boards to a tree of doubles. The structure of the tree remains untouched. This is the nature of an abstraction called Functor. Functors support a general function fmap that keeps the current structure but applies a function f to all elements. It turns out that our Tree type is such a Functor.

    To use the Functor nature of trees, we use fmap f instead of static.
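In Haskell-like syntax (Frege is nearly identical), the increment could look as follows; the Functor instance and the generalized evaluateBy are a sketch reconstructed from the prose, not the verbatim original:

```haskell
-- a tree is a Functor: mapping keeps the structure, transforms payloads
instance Functor Tree where
    fmap f (Node a children) = Node (f a) (map (fmap f) children)

-- f can be any function from Board to some type with an Ord instance
evaluateBy f = maximize . fmap f . prune 5 . gameTree
```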

    Note that evaluateBy now takes a parameter f, which can be any function that takes a Board and returns an Ord as required by maximize.

    A function like evaluateBy that takes another function as a parameter is called a higher-order function. They are heavily used in functional programming and in fact we have used them before without giving it any attention. The current case, though, is different. We use this very common functional feature to set the stage for non-intrusive increments.

    Tic Tac Toe with forecast

    Computer knows he will win

    A new requirement comes up: predict the future

    It appears that playing the game is fun but sometimes it is difficult to understand why the computer does a certain move. It would be interesting if we had a bit of insight and could show what the computer is considering. The idea is that we display a forecast: the final board that the computer thinks the game will develop into.

    To this end, we need to capture the board that led to the minimax value. It turns out that this is easier than expected. We can use evaluateBy but we do not pass the static function any more. Instead, we pass a function that maps a board not only to its static value but to a pair of the static value and the board itself.
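A sketch of that function (the names endValue and forecast are hypothetical; only the idea of pairing the static value with the board comes from the prose):

```haskell
-- pair the static value with the board that produced it
endValue :: Board -> (Double, Board)
endValue board = (static board, board)

-- the forecast: minimax over pairs instead of plain Doubles
forecast :: Board -> (Double, Board)
forecast = evaluateBy endValue
```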

    This is nice. First and foremost it is a fully non-intrusive increment! We did not change any existing code at all.

    Second, it falls in place very naturally when following the functional style with higher-order functions – without any anticipation of new requirements.

    Think what would have happened in a typical object-oriented style. We can of course achieve similar effects in OO, but this requires some forethought to prepare for new use cases: template methods, the strategy pattern, and the like. Without such anticipation, OO design would lead to intrusive changes. One way or the other: incremental development is harder.

    The downside is: it doesn’t compile.

    Our mapping function must return an Ord, but a pair (any tuple, actually) is only of type Ord if all its elements are of type Ord as well. The Double value is; the Board type is not.

    Hm, what next? Go back and change the definition of Board? That would be an intrusive change.

    Here we see a big benefit of Frege typeclasses (as opposed to, say, Java interfaces): we can evolve the data type! Board becomes an instance of the Ord typeclass non-intrusively, in a new increment, with full static type safety!
    There is not much logic to add, since we make all boards equal. For the purpose of ordering our pairs we only need the static Double values of the pair.
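In Haskell spelling, the instances could look like the sketch below (Frege differs slightly, e.g. its Ord typeclass uses the <=> operator):

```haskell
-- all boards compare as equal, so pairs of (Double, Board) are
-- effectively ordered by their Double component alone
instance Eq Board where
    _ == _ = True

instance Ord Board where
    compare _ _ = EQ
```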

    This code looks so simple but the implications are huge. By making Board an instance of Ord, we have given the Board type a new capability!

    Note that this even works when the source code of Board is not available. It can come from a third-party library that we only have in binary form.
    We can even evolve foreign data types.

    Now we have everything compiling and working. You can play the game and see the forecast working.

    In the next episode, there is a big surprise waiting with regard to parallel execution.

    Dierk König
    @mittie

     


    Links:

    Episode 1
    Episode 2
    Episode 3
    Episode 4
    The paper Why functional programming matters

    The full source code. Play the game here.



    Why functional programming really matters – episode two

    January 17th, 2015

    Why functional programming really matters:
    providing more options for incremental development.

    In episode one we have seen how lazy evaluation allows us to separate the generation of potentially infinite data structures from the functions that work on them. We will now follow up on the same example from a different angle: how to design with functions for incremental development.

    Episode Two: Designing with functions and composition

    So far we have developed means for building a tree of game boards where child nodes represent the boards that result from the players mutually placing their moves.

    In order to find the next best move we have to give some value to a board. Let’s go with a Double value, where 1 means that the computer has won and -1 that the opponent has won. A function that tells us the static value of a board has this type:
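The signature (shown as an image in the original post) follows directly from the description:

```haskell
static :: Board -> Double
```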

    For the purpose of our discussion, we don’t need to know how it is implemented for our particular game. If you are curious, you can find a simple valuation for the tic tac toe game here.

    Let’s pause for a minute and think what we would do in object design now. We may be tempted to make “static” a method of the board, right? That would be beneficial because we can encapsulate the method with the data structure that it works upon, especially when it uses internals of the board or makes assumptions about them. The downside is that when we change our mind on how to use the board, this becomes an intrusive change.

    So when we consider our possible moves, we would pick the one that leads to the board with the highest value – at least if we only look one move ahead. As soon as we look a little further, we also have to consider what our opponent does. It is reasonable to assume that he will also do his best move, which for him is the one with the lowest value. And he in turn will assume that we do our best, and so it goes on.

    That is to say that the static value of a board is only interesting for the leaves in our tree. All other values are derived from the maximum/minimum value one level below. Here is a figure that shows the calculation of the next move for a given board. Note that static valuations are shown in circles, while minimax values have no circle around them.

    minimax algorithm

    Here is the natural way of putting this into a functional design:
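The code is an image in the original post; following the mutually recursive pair from Hughes’ paper, it presumably looked like this sketch (assuming the Tree a = Node a [Tree a] type from episode one):

```haskell
-- at a leaf, take the static value; otherwise the opponent minimizes
maximize :: Ord a => Tree a -> a
maximize (Node a [])       = a
maximize (Node _ children) = maximum (map minimize children)

-- at a leaf, take the static value; otherwise we maximize
minimize :: Ord a => Tree a -> a
minimize (Node a [])       = a
minimize (Node _ children) = minimum (map maximize children)
```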

    Is this really “natural”? Yes. The functional way assumes the least that is needed (the “weakest precondition”, as in Hoare proofs). In order to minimax a tree, we only need to assume that the payload can be ordered in some way (in Java terms, it is “comparable”). In the type annotation Tree a -> a, the “a” is a generic type parameter. It tells us that our function maps from a tree of “a” payloads to a value of exactly that type “a”. The only thing that we assume about the “a” type is the Ord a => constraint, i.e. the “a” type must be an instance of the Ord typeclass such that we can call minimum and maximum on a list thereof.

    There is a second indication that such an early abstraction is a normal thing to do: we can delete the type declarations entirely and let Frege infer the types. Guess what. It will infer exactly the types that we have given! What can be more natural than that?

    Thanks to this early abstraction, we have a totally generic solution for the minimaxing of trees, which is fully independent of our game trees. Again, we have made a non-intrusive increment.

    But does it work? In our game tree, “a” is of type Board and boards are not (yet) comparable. This is where function composition appears again. We have already seen it when we composed the pruning function with the gameTree to produce a prunedTree from (prune 5 . gameTree). By the same means we can make a tree of boards into a tree of the static values of those boards.
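A sketch of what that composition might look like (mapTree and staticTree are assumed names; the idea of mapping over a tree while keeping its structure comes from the prose):

```haskell
-- keep the tree structure, replace every board by its static value
mapTree :: (a -> b) -> Tree a -> Tree b
mapTree f (Node a children) = Node (f a) (map (mapTree f) children)

staticTree :: Board -> Tree Double
staticTree = mapTree static . prunedTree
```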

    Remember that the tree is not materialized! Only on demand, when we access the children, will they be built. And so it is as well for the “static” function: only when we access the payload will (static board) be evaluated.

    The function that calculates the maximum value for a board is again a composition, which we can also write in an expanded form:
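Both forms, reconstructed as a sketch (mapTree being an assumed helper that applies a function to every payload of a tree while keeping its structure):

```haskell
maxValue :: Board -> Double
maxValue = maximize . mapTree static . prunedTree
-- or, in its expanded form:
-- maxValue = maximize . mapTree static . prune 5 . gameTree
```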

    You may ask where the board is in this definition. After all, maxValue takes a board and returns a Double. Where is the board parameter? Well, we could bring it in, but the definition above nicely shows that we don’t care about function application at this point but about function definition: the maxValue function is composed from the functions on the right.
    Note that the “=” sign denotes a definition, not an assignment. We never use assignments in pure functional programming.

    We now have a set of functions that we can compose to our liking. The compiler will infer all the types and make sure that they properly line up. For example, we cannot accidentally maximize a tree of values that don’t define an order. But we can have such trees – actually, gameTree is of that nature.
    We can prune before or after calling static while the type system catches errors such as putting prune in a place where it cannot work.

    This is not only a nice toolbox to play with. It is compositional in more than one meaning of the word.
    We have come to this point in a fully non-intrusive, incremental manner, never going back to change previously defined functions or data types.

    In the next episode we will do incremental development with the help of higher-order functions and typeclasses. The goal will be to show a forecast of how we think the game will develop.

    Dierk König
    @mittie


    Links:

    Episode 1
    Episode 2
    Episode 3

    Episode 4

    The paper Why functional programming matters

    The full source code. Play the game here.


    Why functional programming really matters – episode one

    January 12th, 2015

    This is why functional programming really matters:
    it provides new options for incremental development.

    How do we develop software? One line at a time.

    All software development is incremental. We have learned to embrace this fact and use it to our advantage. We start small, seek feedback, learn fast, and change direction as we go.

    This requires means to develop incrementally, which at first sight looks like a simple task. After all, software is soft and can be changed any time. But whenever we touch existing code we may introduce subtle new errors, and we have therefore devised a whole series of methods and tools to mitigate that risk. Among them are:

    • object encapsulation to limit the error scope,
    • test-driven development,
    • static code analysis, refactoring, and other tools.

    When we have to change and recompile existing code in order to proceed, we call this approach intrusive.

    Of course, the best would be if we could apply increments non-intrusively: without touching any existing code at all! This is sometimes possible.

    • Modular architectures allow us to just add new components or services.
    • In object-oriented systems we can use subtyping to add new implementations for existing abstractions.
    • With dynamic languages, we can add new capabilities to existing classes.

    But all of these options have their limitations. They either work only for anticipated extensions or they come at the expense of increased complexity or weakened type safety, which in the future makes it harder to extend the extensions again.

    We would certainly profit if we had more ways of extending software safely.

    Functional programming comes with such new options, as John Hughes has shown in his seminal paper “Why functional programming matters“. He puts it in terms of modularity and new “glue” for combining modules: higher-order functions and lazy evaluation.

    tic tac toe

    The paper is now 25 years old but still offers a lot of fresh insights. The examples are presented in the Miranda language, and to fully appreciate them, I translated them to Frege (a Haskell for the JVM). His line of argumentation culminates in the last example: a generic solution for playing board games. A running implementation plus the full source code for a simple tic tac toe game is available here.

    Following the paper while implementing the game led to some unexpected experiences that we will talk about in this series of blog posts.

    Episode One: Be infinitely lazy and defer all work

    A board game is a series of mutual moves, each resulting in a new board position. Since there are many possible moves and a number of possible reactions by the opponent, the possible unfoldings of the game form a tree of board positions.

    Depending on the game rules, that tree can be infinitely large.

    The natural way of modeling this in a lazy functional language like Frege or Haskell is to not care about the size of the tree at all but only about how to construct it.
    We stay focused on the generation task and defer the limitation work until later.

    Here is the construction for an arbitrary payload “a”. Children of each node recursively arise from making Nodes for all the payloads that “a” can unfold to.
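The snippet is an image in the original post; following Hughes’ paper, the construction presumably looks like this sketch:

```haskell
-- a rose tree: a payload plus the list of subtrees it unfolds to
data Tree a = Node a [Tree a]

-- build the (possibly infinite) tree by recursively unfolding payloads
buildTree :: (a -> [a]) -> a -> Tree a
buildTree unfold a = Node a (map (buildTree unfold) (unfold a))
```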

    We do not care about a base case to stop the recursion or limit the size of the tree, but note that calling buildTree will neither exhaust memory nor computation time.
    Laziness in the language takes care of constructing children only when needed.

    This is now something that we can specialize for a given data type Board and a function moves that unfolds to all boards that result from applying a valid move.
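The specialization is one line; a sketch, assuming the Board type and the moves function described in the text:

```haskell
-- 'moves' yields all boards that result from applying one valid move
gameTree :: Board -> Tree Board
gameTree board = buildTree moves board
```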

    The specialization needs no subtyping and is yet fully type-safe.
    We have extended the system non-intrusively and we will do so again for limiting the size of the game tree.

    When evaluation of the tree is to be limited to a depth “n”, we simply prune the tree at that depth. That code is independent of our special game use case and can therefore work on nodes of arbitrary trees. Here is the recursive definition.
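Reconstructed as a sketch from the description (the original snippet is an image):

```haskell
-- cut the tree off at depth n; below that, no child is ever constructed
prune :: Int -> Tree a -> Tree a
prune 0 (Node a _)        = Node a []
prune n (Node a children) = Node a (map (prune (n - 1)) children)
```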

    To make a pruned game tree we use function composition (.) for the generation:
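The composition mentioned here appears later in the text as (prune 5 . gameTree), so the definition was presumably:

```haskell
prunedTree = prune 5 . gameTree
```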

    Note that we still haven’t materialized the tree! A user of the pruned tree can never look below level 5. No children below that level are ever evaluated, and thus – thanks to laziness – never get constructed.

    Laziness allows us to work incrementally not only from generic tree generation to specialized logic and data types. We even apply pruning conditions non-intrusively “from the outside”. We didn’t need to anticipate the need for pruning.

    We never had to go back to previous code to change it. We did not even re-compile!

    This is it for episode one. Stay tuned for more episodes.

    Dierk König
    @mittie


    Links:

    Episode 1
    Episode 2
    Episode 3
    Episode 4
    The paper Why functional programming matters

    The full source code. Play the game here.

