Wednesday, February 11, 2015

Functional Patterns in Domain Modeling - Composing a domain workflow with statically checked invariants

I have been doing quite a bit of domain modeling using functional programming, mostly in Scala. As happens when you work on something for a long period of time, you tend to identify more and more patterns that come up repeatedly within your implementations. You may ignore them as patterns the first time, get a feeling of mere coincidence the next time, but the third time really gives you that aha! moment and you feel like documenting it as a design pattern. In the course of my learning I have started blogging about some of these patterns - you can find the earlier ones in the series here:

  • Functional Patterns in Domain Modeling - The Specification Pattern

  • Functional Patterns in Domain Modeling - Immutable Aggregates and Functional Updates

  • Functional Patterns in Domain Modeling - Anemic Models and Compositional Domain Behaviors


In this continuing series of functional patterns in domain modeling, I will go through yet another idiom which has been a quite common occurrence in my explorations across various domain models. You will find many of these patterns explained in detail in my upcoming book Functional and Reactive Domain Modeling, the early access edition of which has already been published by Manning.

One of the things that I strive to achieve in implementing domain models is to use the type system to encode as much domain logic as possible. If you can use the type system effectively, you get the benefits of parametricity, which not only makes your code generic, concise and polymorphic, but also makes it self-testing. But that's another story which we can discuss in another post. In this post I will talk about a pattern that helps you design domain workflows compositionally, and also enables implementing domain invariants within the workflow - all done statically, with a little help from the type system.

As an example let's consider a loan processing system (simplified for illustration purposes) of the kind typically followed by banks issuing loans to customers. A typical simplified workflow looks like the following: the customer applies for a loan, the bank approves the application, and the approved application is then enriched with further details such as the EMI.

    The Domain Model


The details of each process are not important - we will focus on how we compose the sequence and ensure that the API verifies statically that the correct sequence is followed. Let's start with a domain model for the loan application - we will keep enriching it as we traverse the workflow.

    case class LoanApplication private[Loans](
      // date of application
      date: Date,
      // name of applicant
      name: String,
      // purpose of loan
      purpose: String,
      // intended period of repayment in years
      repayIn: Int,
      // actually sanctioned repayment period in years
      actualRepaymentYears: Option[Int] = None,
      // actual start date of loan repayment
      startDate: Option[Date] = None,
      // loan application number
      loanNo: Option[String] = None,
      // emi
      emi: Option[BigDecimal] = None
    )

Note we have a bunch of attributes that are defined as optional and will be filled out later as the loan application traverses the sequence of workflow stages. Also note that the constructor is declared private to the Loans package - we will use a smart constructor to create instances of the class.

    Wiring the workflow with Kleisli


    Here are the various domain behaviors modeling the stages of the workflow .. I will be using the scalaz library for the Kleisli implementation.

    def applyLoan(name: String, purpose: String, repayIn: Int, 
      date: Date = today) =
      LoanApplication(date, name, purpose, repayIn)
    
    def approve = Kleisli[Option, LoanApplication, LoanApplication] { l => 
      // .. some logic to approve
      l.copy(
        loanNo = scala.util.Random.nextString(10).some,
        actualRepaymentYears = 15.some,
        startDate = today.some
      ).some
    }
    
    def enrich = Kleisli[Option, LoanApplication, LoanApplication] { l => 
      //.. may be some logic here
      val x = for {
        y <- l.actualRepaymentYears
        s <- l.startDate
      } yield (y, s)
    
      l.copy(emi = x.map { case (y, s) => calculateEMI(y, s) }).some
    }

    applyLoan is the smart constructor that creates the initial instance of LoanApplication. The other 2 functions approve and enrich perform the approval and enrichment steps of the workflow. Note both of them return an enriched version of the LoanApplication within a Kleisli, so that we can use the power of Kleisli composition and wire them together to model the workflow ..

    val l = applyLoan("john", "house building", 10)
    val op = approve andThen enrich
    op run l

When you have a sequence to model that takes an initial object and then applies a chain of functions, you can use plain function composition like h(g(f(x))), or the point-free notation (h compose g compose f), or the more readable order (f andThen g andThen h). But in the above case we need effects along with the composition - we are returning an Option from each stage of the workflow. So instead of plain composition we need effectful composition of functions, and that's exactly what Kleisli offers. The andThen combinator in the above code snippet is actually Kleisli composition, aka function composition with effects.
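To make the contrast concrete, here is a minimal sketch (using scalaz, with toy functions unrelated to the loan model) of plain composition versus Kleisli composition over Option:

import scalaz._
import Scalaz._

// plain function composition - no effects involved
val f: Int => Int = _ + 1
val g: Int => Int = _ * 2
val plain: Int => Int = f andThen g          // plain(3) == 8

// effectful composition - every stage may fail with None
val fk = Kleisli[Option, Int, Int] { n => (n + 1).some }
val gk = Kleisli[Option, Int, Int] { n => if (n == 0) none[Int] else (100 / n).some }
val effectful = fk andThen gk                // Kleisli[Option, Int, Int]

effectful run 4     // Some(20)
effectful run -1    // None - the failure in gk short-circuits the chain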

So we have everything the workflow needs, and clients use our API to construct workflows for processing loan applications. But one of the qualities of good API design is to make it difficult for the client to use the API in the wrong way. Consider what happens with the above design if we invoke the sequence as enrich andThen approve. This violates the domain invariant that enrichment happens after approval - approval of the application generates information that the enrichment process needs to use. But because the types align, the compiler will be perfectly happy to let this semantically invalid composition pass through, and the error will only be reported at run time.

    Remembering that we have a static type system at our disposal, can we do better ?

    Phantom Types in the Mix


    Let's throw in some more types and see if we can tag in some more information for the compiler to help us. Let's tag each state of the workflow with a separate type ..

    trait Applied
    trait Approved
    trait Enriched

    Finally make the main model LoanApplication parameterized on a type that indicates which state it is in. And we have some helpful type aliases ..

    case class LoanApplication[Status] private[Loans]( //..
    
    type LoanApplied  = LoanApplication[Applied]
    type LoanApproved = LoanApplication[Approved]
    type LoanEnriched = LoanApplication[Enriched]

    These types will have no role in modeling domain behaviors - they will just be used to dispatch to the correct state of the sequence that the domain invariants mandate. The workflow functions need to be modified slightly to take care of this ..

    def applyLoan(name: String, purpose: String, repayIn: Int, 
      date: Date = today) =
      LoanApplication[Applied](date, name, purpose, repayIn)
    
    def approve = Kleisli[Option, LoanApplied, LoanApproved] { l => 
      l.copy(
        loanNo = scala.util.Random.nextString(10).some,
        actualRepaymentYears = 15.some,
        startDate = today.some
      ).some.map(identity[LoanApproved])
    }
    
    def enrich = Kleisli[Option, LoanApproved, LoanEnriched] { l => 
      val x = for {
        y <- l.actualRepaymentYears
        s <- l.startDate
      } yield (y, s)
    
      l.copy(emi = x.map { case (y, s) => calculateEMI(y, s) }).some.map(identity[LoanEnriched])
    }

    Note how we use the phantom types within the Kleisli and ensure statically that the sequence can flow only in one direction - that which is mandated by the domain invariant. So now an invocation of enrich andThen approve will result in a compilation error because the types don't match. So once again yay! for having the correct encoding of domain logic with proper types.
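To see the guarantee in action, here is a small hypothetical client snippet - the valid wiring still composes, while the wrong ordering is rejected at compile time:

val application: LoanApplied = applyLoan("john", "house building", 10)

val workflow = approve andThen enrich      // Kleisli[Option, LoanApplied, LoanEnriched]
workflow run application                   // proceeds through the mandated sequence

// val invalid = enrich andThen approve    // does not compile: enrich needs a LoanApproved,
                                           // which approve has not produced yet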

    Thursday, January 01, 2015

    Probabilistic techniques, data streams and online learning - Looking forward to a bigger 2015

I look forward to 2015 as the year when randomized algorithms, probabilistic techniques and data structures become more pervasive and mainstream. The primary driving factors for this will be the growing prevalence of big data and the necessity to process it in near real time using minimal (often constant) memory. You are given data streams where you will possibly see each data element only once in your lifetime, and you need to churn out analytics from them in real time. You cannot afford to store all of them in a database on disk, since that would incur an unrealistic performance penalty for serving queries in real time. And you cannot afford to store all the information in memory even if you add RAM at will. You need to find clever ways to optimize your storage and employ algorithms and data structures that use sublinear space and yet deliver information in real time.

Many such data structures - the HyperLogLog and the count-min sketch among them - are already being used quite heavily for specialized processing of data streams.


These data structures are becoming more and more useful as we prepare to embrace and process larger data sets with fairly strict online requirements. And they have started making a difference. Take for example Impala, the open source analytic database from Cloudera that works on top of Hadoop. Impala's NDV aggregate function (number of distinct values) uses the HyperLogLog algorithm to estimate this number, in parallel, in a fixed amount of space. This blog post has the details of the performance improvement it offers in comparison to the standard distinct count. The immensely popular NoSQL store Redis also offers a HyperLogLog implementation that you can use to get an approximation of the cardinality of a set using randomization. Salvatore has the details here on the implementation of the HyperLogLog algorithm in Redis.
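As a rough illustration of how cheap this is to use from application code, here is a hedged sketch against Redis' HyperLogLog commands, assuming the Jedis client on the classpath (any Redis client exposing PFADD/PFCOUNT would look much the same):

import redis.clients.jedis.Jedis

val redis = new Jedis("localhost", 6379)

// feed user-ids from the stream into the HLL - constant space, regardless of volume
Seq("@alice", "@bob", "@alice", "@carol").foreach { userId =>
  redis.pfadd("active-users", userId)
}

// approximate number of distinct user-ids seen so far
val distinctUsers: Long = redis.pfcount("active-users")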

The most important reason these algorithms and data structures are becoming popular is the increased focus on our "online" requirements. We are not only processing bigger and bigger data sets, we need results faster too. We just cannot afford to push all analytics to batch mode and expect results to come out of an overnight batch run. Various architectural paradigms like the lambda architecture also aim to address this area. But before investing in such complex architectures, some neat data structures that use probabilistic techniques and randomization may offer the much lighter-weight solution you are looking for.

Consider processing the Twitter stream and generating analytics (of whatever form) online. This means that immediately after seeing one item in the feed you must be able to predict something and update your model at the same time. Which means you need to absorb the data you see in the feed and apply it to update your model, yet you cannot store the entire hose that you have seen so far. This is online learning, and techniques like stochastic gradient descent help you do exactly this - the model is capable of making up-to-date predictions after every data point it sees. John Myles White has an excellent presentation on this topic.
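To make the "update the model after every data point" idea concrete, here is a from-scratch sketch of online learning for a simple linear model with squared loss - not tied to any library, and the feature/label encoding is purely illustrative:

// weights are immutable; every observation produces a new, updated model
case class OnlineLinearModel(weights: Vector[Double], learningRate: Double) {

  def predict(features: Vector[Double]): Double =
    (weights, features).zipped.map(_ * _).sum

  // one stochastic gradient step on a single (features, label) pair:
  // w := w - learningRate * (prediction - label) * x
  def learn(features: Vector[Double], label: Double): OnlineLinearModel = {
    val error = predict(features) - label
    copy(weights = (weights, features).zipped.map((w, x) => w - learningRate * error * x))
  }
}

// fold the stream into the model - at any point it can serve up-to-date predictions
val observations = List(
  (Vector(1.0, 2.0), 5.0),
  (Vector(2.0, 1.0), 4.0),
  (Vector(1.0, 1.0), 3.0)
)
val model = observations.foldLeft(OnlineLinearModel(Vector(0.0, 0.0), 0.01)) {
  case (m, (x, y)) => m.learn(x, y)
}
model.predict(Vector(1.0, 2.0))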

Consider this other problem of detecting similarities between documents. When you are doing this at Web scale you will have to deal with millions of documents to find the similar sets. There are techniques like minhash which enable you to compress documents into signature matrices. But even then the scale becomes too big to be processed and reported to the user in a meaningful amount of time. As an example (from Mining Massive Datasets), if you process 1 million documents using signatures of length 250, you still have to use 1000 bytes per document - the total comes to 1 gigabyte, which very well fits into the memory of a standard laptop. But when you check for similar pairs, you need to process (1,000,000 choose 2), or half a trillion, pairs of documents, which will take almost 6 days to compute on a laptop. Enter probabilistic techniques - the locality sensitive hashing (LSH) algorithm fits this problem like a charm. Detecting similarity is a problem that arises in recommender systems with collaborative filtering, and LSH can be used there as well. The basic idea of LSH as applied to similarity detection is to hash multiple times and identify candidate pairs that qualify for similarity checking. The idea is to reduce the search space using probabilistic techniques so that we can eliminate a class of candidates which have very low chance of being similar.
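A toy minhash sketch makes the idea tangible - this assumes documents have already been shingled into sets of Int ids, and the hash family here is deliberately simple, not what you would use at web scale:

import scala.util.Random

val prime = 2147483647L                  // a large prime for the (a*x + b) mod p hash family
val rng = new Random(42)

// 100 random hash functions; each signature position keeps the minimum hash of the set
val hashFns: Vector[Int => Long] = Vector.fill(100) {
  val (a, b) = (rng.nextInt(Int.MaxValue).toLong, rng.nextInt(Int.MaxValue).toLong)
  (shingle: Int) => (a * shingle + b) % prime
}

def signature(shingles: Set[Int]): Vector[Long] =
  hashFns.map(h => shingles.map(h).min)

// fraction of positions on which two signatures agree ~ Jaccard similarity of the sets
def estimatedSimilarity(s1: Vector[Long], s2: Vector[Long]): Double =
  s1.zip(s2).count { case (a, b) => a == b }.toDouble / s1.length

val docA = signature(Set(1, 2, 3, 4, 5))
val docB = signature(Set(3, 4, 5, 6, 7))
estimatedSimilarity(docA, docB)          // estimates the true Jaccard similarity of 3/7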

Here I have only scratched the surface of the areas where we apply randomization and probabilistic techniques to solve problems that are very real today. There are plenty of other areas in data mining, graph clustering, machine learning and big data processing where similar techniques are employed to reduce the curse of dimensionality and provide practical solutions at scale. 2014 has already seen a big surge in terms of popularizing these techniques. I expect 2015 to be bigger and more mainstream in terms of their usage.

    Personally I have been exploring data stream algorithms a lot and have prepared a collection of some useful references. Feel free to share in case you find it useful. I hope to do something more meaningful with stream processing data structures and online learning in 2015. Have a very happy and joyous new year ..

    Monday, November 03, 2014

    Functional and Reactive Domain Modeling

    Manning has launched the MEAP of my upcoming book on Domain Modeling.



The first time I was formally introduced to the topic was way back when I played around with Eric Evans' awesome text on the subject of Domain Driven Design. In the book he discusses various object lifecycle patterns like the Factory, Aggregate or Repository that help separate concerns when you are implementing the various interactions between the elements of the domain model. Entities are artifacts with identities, value objects are pure values, while services model the coarse-grained use cases of the model components.

Traditionally we followed Eric's recommendations in our real world implementations and used the object oriented paradigm for modeling all interactions. We started talking about rich domain models and anemic domain models. The rich model espoused a richer agglomeration of state and behavior within the model, while the anemic model preferred to keep them decoupled. Martin Fowler sums this up in his post on Anemic Domain Models ..
    "The basic symptom of an Anemic Domain Model is that at first blush it looks like the real thing. There are objects, many named after the nouns in the domain space, and these objects are connected with the rich relationships and structure that true domain models have. The catch comes when you look at the behavior, and you realize that there is hardly any behavior on these objects, making them little more than bags of getters and setters. Indeed often these models come with design rules that say that you are not to put any domain logic in the the domain objects. Instead there are a set of service objects which capture all the domain logic. These services live on top of the domain model and use the domain model for data."

    Go Functional

In Functional and Reactive Domain Modeling I look at the problem with a different lens. The primary focus of the book is to encourage building domain models using the principles of functional programming. It's a completely orthogonal approach to OO, focusing on verbs first (as opposed to nouns first in OO), algebra first (as opposed to objects in OO), function composition first (as opposed to object composition in OO), and lightweight objects as ADTs (instead of rich class models).

The book starts with the basics of functional programming principles and discusses the virtues of purity and the advantages of keeping side-effects decoupled from the core business logic. The book uses Scala as the programming language and discusses extensively why the OO and functional features of Scala are a perfect fit for modelling complex domains. Chapter 3 starts the core subject of functional domain modeling with real world examples illustrating how we can make good use of patterns like smart constructors, monads and monoids in implementing a domain model. The main virtue that these patterns bring to your model is genericity - they help you extract generic algebra from domain specific logic into parametric functions which are far more reusable and less error prone. Chapter 4 focuses on advanced usages like typeclass based design and patterns like monad transformers, Kleislis and other compositional idioms of functional programming. One of the primary focuses of the book is algebraic API design and developing an appreciation of the ability to reason about your model.

    Go Reactive

    The term "reactive" has recently become quite popular in describing systems that are responsive, scalable and adaptive. In implementing complex domain models, we find many areas which can be made more responsive by implementing them as non blocking operations instead of the standard blocking function calls. Using higher level concurrency primitives like actors, futures and data flow based computing we can compose asynchronous operations and increase the net throughput of your model. The second part of the book discusses how you can combine the principles of functional programming with the reactive way of implementing behaviors. The paradigm of application design using the principles of functional programming together with an asynchronous non-blocking mode of communication between the participating entities promises to be a potent combination towards developing performant systems that are relatively easy to manage, maintain and evolve. But designing and implementing such models need a different way of thinking. The behaviors that you implement have to be composable using pure functions and they can form building blocks of bigger abstractions that communicate between them using non-blocking asynchronous message passing.

    Functional meets Reactive

Functional and Reactive Domain Modeling takes you through the steps of learning how to think of the domain model in terms of pure functions and how to compose them to build larger abstractions. You will start with the basics of functional programming principles and gradually progress to the advanced concepts and patterns that you need to know to implement complex domain models. The book demonstrates how advanced FP patterns like algebraic data types, typeclass based design and isolation of side-effects can make your model compose for readability and verifiability. On the subject of reactive modeling, the book focuses on higher order concurrency patterns like actors and futures. It uses the Akka framework as the reference implementation and demonstrates how advanced architectural patterns like event sourcing and command-query-responsibility-segregation can be put to great use while implementing scalable models. You will learn techniques that are radically different from standard RDBMS based applications built around mutation of records. You'll also pick up important patterns like asynchronous messaging for interaction based on non blocking concurrency, and model persistence that delivers the speed of in-memory processing along with suitable guarantees of reliability.

    Looking forward to an exciting journey with the book. I am sure you will also find interest in the topics that I discuss there. And feel free to jump on to AuthorOnline and fire questions that we all can discuss. I am sure this will also lead to an overall improvement in the quality of the book.

    Monday, May 12, 2014

    Functional Patterns in Domain Modeling - Anemic Models and Compositional Domain Behaviors

I was looking at the presentation that Dean Wampler made recently regarding domain driven design, anemic domain models and how using functional programming principles helps ameliorate some of the problems there. There are some statements he made which, I am sure, made many OO practitioners chuckle. They contradict popular beliefs that encourage OOP as the primary way of modeling using DDD principles.

One statement that resonates a lot with my thought is "DDD encourages understanding of the domain, but don't implement the models". DDD does a great job in encouraging developers to understand the underlying domain model and ensuring a uniform vocabulary throughout the lifecycle of design and implementation. This is what design patterns also do, by giving you a vocabulary that you can freely exchange with your fellow developers without touching any implementation details of the underlying pattern.

On the flip side, trying to implement DDD concepts using standard OO techniques of joined state and behavior often gives you a muddled mutable model. The model may be rich in the sense that you will find all concepts related to the particular domain abstraction baked into the class you are modeling. But it makes the class fragile as well, since the abstraction becomes more locally focused, losing the global perspective of reusability and composability. As a result, when you try to compose multiple abstractions within the domain service layer, it becomes heavily polluted with glue code that resolves the impedance mismatch between class boundaries.

    So when Dean claims "Models should be anemic", I think he means to avoid this bundling of state and behavior within the domain object that gives you the false sense of security of richness of the model. He encourages the practice that builds domain objects to have the state only while you model behaviors using standalone functions.



One other strawman argument that I come across very frequently is that bundling state and behavior, by modeling the latter as methods of the class, increases encapsulation. If you are still a believer of this school of thought, have a look at Scott Meyers' excellent article which he wrote as early as 2000. He eschews the view that a class is the right level of modularization and encourages more powerful module systems as better containers of your domain behaviors.

    As continuation of my series on functional domain modeling, we continue with the example of the earlier posts and explore the theme that Dean discusses ..

    Here's the anemic domain model of the Order abstraction ..

    case class Order(orderNo: String, orderDate: Date, customer: Customer, 
      lineItems: Vector[LineItem], shipTo: ShipTo, 
      netOrderValue: Option[BigDecimal] = None, status: OrderStatus = Placed)

    In the earlier posts we discussed how to implement the Specification and Aggregate Patterns of DDD using functional programming principles. We also discussed how to do functional updates of aggregates using data structures like Lens. In this post we will use these as the building blocks, use more functional patterns and build larger behaviors that model the ubiquitous language of the domain. After all, one of the basic principles behind DDD is to lift the domain model vocabulary into your implementation so that the functionality becomes apparent to the developer maintaining your model.

    The core idea is to validate the assumption that building domain behaviors as standalone functions leads to an effective realization of the domain model according to the principles of DDD. The base classes of the model contain only the states that can be mutated functionally. All domain behaviors are modeled through functions that reside within the module that represents the aggregate.

Functions compose, and that's precisely how we will chain sequences of domain behaviors to build bigger abstractions out of smaller ones. Here's a small function that values an Order. Note it returns a Kleisli, which essentially gives us composition over monadic functions. So instead of composing a -> b and b -> c, which we do with normal function composition, we can do the same over a -> m b and b -> m c, where m is a monad. Composition with effects, if you will.

    def valueOrder = Kleisli[ProcessingStatus, Order, Order] {order =>
      val o = orderLineItems.set(
        order,
        setLineItemValues(order.lineItems)
      )
      o.lineItems.map(_.value).sequenceU match {
        case Some(_) => right(o)
        case _ => left("Missing value for items")
      }
    }

But what does that buy us ? What exactly do we gain from these functional patterns ? It's the power to abstract over families of similar abstractions like applicatives and monads. Well, that may sound a bit rhetorical, and it needs a separate post to justify their use. Stated simply, they encapsulate the effects and side-effects of your computation so that you can focus on the domain behavior itself. Have a look at the process function below - it's actually a composition of monadic functions in action. But all the machinery that processes the effects and side-effects is abstracted within the Kleisli itself, so that the user level implementation stays simple and concise.

With Kleisli it's the power to compose over monadic functions. Every domain behavior has a chance of failure, which we model using the Either monad - here ProcessingStatus is just a type alias for this: type ProcessingStatus[S] = \/[String, S]. Using the Kleisli, we don't have to write any code for handling failures. As you will see below, the composition reads just like composing normal functions - the pattern takes care of the alternate flows.
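A tiny illustration of this short-circuiting, with toy stages instead of the Order behaviors - the moment a stage produces a left, the rest of the chain never runs:

import scalaz._
import Scalaz._

type PStatus[S] = \/[String, S]

val stage1 = Kleisli[PStatus, Int, Int] { n => \/-(n + 1) }
val stage2 = Kleisli[PStatus, Int, Int] { n => if (n > 10) -\/("too big") else \/-(n * 2) }
val stage3 = Kleisli[PStatus, Int, Int] { n => \/-(n - 1) }

val pipeline = stage1 andThen stage2 andThen stage3

pipeline run 3      // \/-(7)         - all stages ran
pipeline run 100    // -\/("too big") - stage3 was never invoked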

    Once the Order is valued, we need to apply discounts to qualifying items. It's another behavior that follows the same pattern of implementation as valueOrder.

def applyDiscounts = Kleisli[ProcessingStatus, Order, Order] {order =>
  val o = orderLineItems.set(
    order,
    // assuming a helper analogous to setLineItemValues that fills in the discounts
    setLineItemDiscounts(order.lineItems)
  )
  o.lineItems.map(_.discount).sequenceU match {
    case Some(_) => right(o)
    case _ => left("Missing discount for items")
  }
}

    Finally we check out the Order ..

    def checkOut = Kleisli[ProcessingStatus, Order, Order] {order =>
      val netOrderValue = order.lineItems.foldLeft(BigDecimal(0).some) {(s, i) => 
        s |+| (i.value |+| i.discount.map(d => Tags.Multiplication(BigDecimal(-1)) |+| Tags.Multiplication(d)))
      }
      right(orderNetValue.set(order, netOrderValue))
    }

    And here's the service method that composes all of the above domain behaviors into the big abstraction. We don't have any object to instantiate. Just plain function composition that results in an expression modeling the entire flow of events. And it's the cleanliness of abstraction that makes the code readable and succinct.

    def process(order: Order) = {
      (valueOrder andThen 
         applyDiscounts andThen checkOut) =<< right(orderStatus.set(order, Validated))
    }

In case you are interested in the full source code of this small example, feel free to take a peek at my github repo.

    Sunday, April 06, 2014

    Functional Patterns in Domain Modeling - Immutable Aggregates and Functional Updates

    In the last post I looked at a pattern that enforces constraints to ensure domain objects honor the domain rules. But what exactly is a domain object ? What should be the granularity of an object that my solution model should expose so that it makes sense to a domain user ? After all, the domain model should speak the language of the domain. We may have a cluster of entities modeling various concepts of the domain. But only some of them can be published as abstractions to the user of the model. The others can be treated as implementation artifacts and are best hidden under the covers of the published ones.

    An aggregate in domain driven design is a published abstraction that provides a single point of interaction to a specific domain concept. Considering the classes I introduced in the last post, an Order is an aggregate. It encapsulates the details that an Order is composed of in the real world (well, only barely in this example, which is only for illustration purposes :-)).

    Note an aggregate can consist of other aggregates - e.g. we have a Customer instance within an Order. Eric Evans in his book on Domain Driven Design provides an excellent discussion of what constitutes an Aggregate.

    Functional Updates of Aggregates with Lens


This is not a post about Aggregates and how they fit into the realm of domain driven design. In this post I will talk about how to use some patterns to build immutable aggregates. Immutable data structures offer a lot of advantages, so build your aggregates from the ground up as immutable objects. The Algebraic Data Type (ADT) is one of the patterns for building immutable aggregate objects. Coming primarily from the domain of functional programming, ADTs offer powerful pattern matching techniques that help you match values against patterns and bind variables to successful matches. In Scala we use case classes as ADTs that give us immutable objects out of the box ..

    case class Order(orderNo: String, orderDate: Date, customer: Customer, 
      lineItems: Vector[LineItem], shipTo: ShipTo, netOrderValue: Option[BigDecimal] = None, 
      status: OrderStatus = Placed)
    

    Like all good aggregates, we need to provide a single point of interaction to users. Of course we can access all properties using accessors of case classes. But what about updates ? We can update the orderNo of an order like this ..

    val o = Order( .. )
    o.copy(orderNo = newOrderNo)

which gives us a copy of the original order with the new order number. We don't mutate the original order. But anybody with some knowledge of Scala will realize that this becomes pretty clunky when we have to deal with nested object updates. E.g. in the above case, ShipTo is defined as follows ..

    case class Address(number: String, street: String, city: String, zip: String)
    case class ShipTo(name: String, address: Address)

    So, here you go in order to update the zip code of a ShipTo ..

    val s = ShipTo("abc", Address("123", "Monroe Street", "Denver", "80233"))
    s.copy(address = s.address.copy(zip = "80231"))

Not really pleasing, and it can quickly get out of hand in comprehensibility as the nesting gets deeper.

In our domain model we use an abstraction called a Lens for updating Aggregates. In layman's terms, a lens is an encapsulated get and set combination. The get extracts a small part from a larger whole, while the set transforms the larger abstraction using a smaller part taken as a parameter.

    case class Lens[A, B](get: A => B, set: (A, B) => A)

This is a naive definition of a Lens in Scala. Sophisticated lens designs go a long way to ensure proper abstraction and composition. scalaz provides one such implementation out of the box that exploits the similarity in structure between the get and the set to generalize the lens definition in terms of another abstraction named Store. As happens so often in functional programming, Store abstracts yet another pattern called the Comonad. You can think of a Comonad as the inverse of a Monad. But in case you are more curious, and have wondered how lenses form "the Coalgebras for the Store Comonad", have a look at the 2 papers here and here.
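Even the naive definition above composes - here is a small sketch wiring up a hand-rolled composeLens helper for it, reusing the ShipTo and Address classes from earlier (the helper is purely illustrative; scalaz gives us this for free, as we see next):

// lenses built from the naive case class above
val shipToAddressLens = Lens[ShipTo, Address](_.address, (sh, a) => sh.copy(address = a))
val addressZipLens    = Lens[Address, String](_.zip, (a, z) => a.copy(zip = z))

// composing two naive lenses by hand: get drills in, set rebuilds outside-in
def composeLens[A, B, C](outer: Lens[A, B], inner: Lens[B, C]): Lens[A, C] =
  Lens[A, C](
    a => inner.get(outer.get(a)),
    (a, c) => outer.set(a, inner.set(outer.get(a), c))
  )

val shipToZip = composeLens(shipToAddressLens, addressZipLens)

val s = ShipTo("abc", Address("123", "Monroe Street", "Denver", "80233"))
shipToZip.set(s, "80231")   // same update as before, without the nested copy calls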

    Anyway for us mere domain modelers, we will use the Lens implementation as in scalaz .. here's a lens that helps us update the OrderStatus within an Order ..

    val orderStatus = Lens.lensu[Order, OrderStatus] (
      (o, value) => o.copy(status = value),
      _.status
    )

    and use it as follows ..

    val o = Order( .. )
    orderStatus.set(o, Placed)

    will change the status field of the Order to Placed. Let's have a look at some of the compositional properties of a lens which help us write readable code for functionally updating nested structures.

    Composition of Lenses

    First let's define some individual lenses ..

    // lens for updating a ShipTo of an Order
    val orderShipTo = Lens.lensu[Order, ShipTo] (
      (o, sh) => o.copy(shipTo = sh),
      _.shipTo
    )
    
    // lens for updating an address of a ShipTo
    val shipToAddress = Lens.lensu[ShipTo, Address] (
      (sh, add) => sh.copy(address = add),
      _.address
    )
    
    // lens for updating a city of an address
    val addressToCity = Lens.lensu[Address, String] (
      (add, c) => add.copy(city = c),
      _.city
    )

    And now we compose them to define a lens that directly updates the city of a ShipTo belonging to an Order ..

    // compositionality FTW
    def orderShipToCity = orderShipTo andThen shipToAddress andThen addressToCity

    Now updating a city of a ShipTo in an Order is as simple and expressive as ..

    val o = Order( .. )
    orderShipToCity.set(o, "London")

    The best part of using such compositional data structures is that it makes your domain model implementation readable and expressive to the users of your API. And yet your aggregate remains immutable.

    Let's look at another use case when the nested object is a collection. scalaz offers partial lenses that you can use for such composition. Here's an example where we build a lens that updates the value member within a LineItem of an Order. A LineItem is defined as ..

    case class LineItem(item: Item, quantity: BigDecimal, value: Option[BigDecimal] = None, 
      discount: Option[BigDecimal] = None)
    

    and an Order has a collection of LineItems. Let's define a lens that updates the value within a LineItem ..

    val lineItemValue = Lens.lensu[LineItem, Option[BigDecimal]] (
      (l, v) => l.copy(value = v),
      _.value
    )

    and then compose it with a partial lens that helps us update a specific item within a vector. Note how we convert our lineItemValue lens to a partial lens using the unary operator ~ ..

    // a lens that updates the value in a specific LineItem within an Order 
    def lineItemValues(i: Int) = ~lineItemValue compose vectorNthPLens(i)

    Now we can use this composite lens to functionally update the value field of each of the items in a Vector of LineItems using some specific business rules ..

    (0 to lis.length - 1).foldLeft(lis) {(s, i) => 
      val li = lis(i)
      lineItemValues(i).set(s, unitPrice(li.item).map(_ * li.quantity)).getOrElse(s)
    }

In this post we saw how we can handle aggregates functionally and without any in-place mutation. This keeps the model pure and helps us implement domain models that have sane behavior even in concurrent settings, without any explicit use of locks and semaphores. In the next post we will take a look at how we can use such compositional structures to make the domain model speak the ubiquitous language of the domain - another pattern recommended by Eric Evans in domain driven design.

    Monday, March 31, 2014

    Functional Patterns in Domain Modeling - The Specification Pattern

When you model a domain, you model its entities and behaviors. As Eric Evans mentions in his book Domain Driven Design, the focus is on the domain itself. The model that you design and implement must speak the ubiquitous language, so that the essence of the domain is not lost in the myriad incidental complexities that your implementation introduces. While being expressive, the model needs to be extensible too. And when we talk about extensibility, one related attribute is compositionality.

Functions compose more naturally than objects, and in this post I will use functional programming idioms to implement one of the patterns that form the core of domain driven design - the Specification pattern, whose most common use case is to implement domain validation. Eric's book on DDD says this about the Specification pattern ..
    It has multiple uses, but one that conveys the most basic concept is that a SPECIFICATION can test any object to see if it satisfies the specified criteria.
A specification is defined as a predicate, whereby business rules can be combined by chaining them together using boolean logic. So there's a concept of composition, and we can talk about a Composite Specification when we talk about this pattern. Much of the DDD literature implements this using the Composite design pattern, commonly realized through class hierarchies and object composition. In this post we will use function composition instead.
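Just to make the "predicate combined with boolean logic" idea concrete, here is a bare-bones illustrative sketch (hypothetical names, not the encoding used in the rest of this post, which takes a more principled route through ReaderT and Kleisli):

// a composite specification as a wrapped predicate
case class Spec[A](isSatisfiedBy: A => Boolean) {
  def and(other: Spec[A]): Spec[A] = Spec(a => isSatisfiedBy(a) && other.isSatisfiedBy(a))
  def or(other: Spec[A]): Spec[A]  = Spec(a => isSatisfiedBy(a) || other.isSatisfiedBy(a))
  def negate: Spec[A]              = Spec(a => !isSatisfiedBy(a))
}

// toy specs over a String order number, just to show the combinators
val nonEmpty     = Spec[String](_.nonEmpty)
val allDigits    = Spec[String](_.forall(_.isDigit))
val validOrderNo = nonEmpty and allDigits

validOrderNo.isSatisfiedBy("12345")   // true
validOrderNo.isSatisfiedBy("12a45")   // false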

    Specification - Where ?

    One of the very common confusions that we have when we design a model is where to keep the validation code of an aggregate root or any entity, for that matter.
    • Should we have the validation as part of the entity ? No, it makes the entity bloated. Also validations may vary based on some context, while the core of the entity remains the same.
• Should we have validations as part of the interface ? Maybe we consume JSON and build entities out of it. Indeed some validations can belong to the interface - don't hesitate to put them there.
    • But the most interesting validations are those that belong to the domain layer. They are business validations (or specifications), which Eric Evans defines as something that "states a constraint on the state of another object". They are business rules which the entity needs to honor in order to proceed to the next stage of processing.
    We consider the following simple example. We take an Order entity and the model identifies the following domain "specifications" that a new Order must satisfy before being thrown into the processing pipeline:

1. it must be a valid order obeying the constraints that the domain requires, e.g. a valid date, a valid number of line items, etc.
    2. it must be approved by a valid approving authority - only then it proceeds to the next stage of the pipeline
    3. customer status check must be passed to ensure that the customer is not black-listed
    4. the line items from the order must be checked against inventory to see if the order can be fulfilled
These are the separate steps that are to be done in sequence by the order processing pipeline as pre-order checks before the actual order is ready for fulfilment. A failure in any of them takes the order out of the pipeline and the process stops there. So the model that we design needs to honor the sequence as well as perform all the checks that are part of every step.

    An important point to note here is that none of the above steps mutate the order - so every specification gets a copy of the original Order object as input, on which it checks some domain rules and determines if it's suitable to be passed to the next step of the pipeline.

    Jumping on to the implementation ..

    Let's take down some implementation notes from what we learnt above ..

    • The Order can be an immutable entity at least for this sequence of operations
• Every specification needs an Order - can we pull some trick out of our hat that avoids cluttering the API by having to pass an Order instance to every specification in the sequence ?
    • Since we plan to use functional programming principles, how can we model the above sequence as an expression so that our final result still remains composable with the next process of order fulfilment (which we will discuss in a future post) ?
• All these functions look like they have similar signatures - we need to make them compose with each other
Before I present any more explanation or theory, here are the basic building blocks which implement the notes that we took after talking to the domain experts ..

    type ValidationStatus[S] = \/[String, S]
    
    type ReaderTStatus[A, S] = ReaderT[ValidationStatus, A, S]
    
    object ReaderTStatus extends KleisliInstances with KleisliFunctions {
      def apply[A, S](f: A => ValidationStatus[S]): ReaderTStatus[A, S] = kleisli(f)
    }

    ValidationStatus defines the type that we will return from each of the functions. It's either some status S or an error string that explains what went wrong. It's actually an Either type (right biased) as implemented in scalaz.

One of the things we thought would be cool is to avoid repeating the Order parameter for every method when we invoke the sequence. And one of the idiomatic ways of doing that is to use the Reader monad. But here we already have a monad - \/ is a monad. So we need to stack them together using a monad transformer. ReaderT does this job, and ReaderTStatus defines the type that makes our life easier by combining the two of them.

    The next step is an implementation of ReaderTStatus, which we do in terms of another abstraction called Kleisli. We will use the scalaz library for this, which implements ReaderT in terms of Kleisli. I will not go into the details of this implementation - in case you are curious, refer to this excellent piece by Eugene.

So, what does a sample specification look like ?

    Before going into that, here are some basic abstractions, grossly simplified only for illustration purposes ..

    // the base abstraction
    sealed trait Item {
      def itemCode: String
    }
    
    // sample implementations
    case class ItemA(itemCode: String, desc: Option[String], 
      minPurchaseUnit: Int) extends Item
    case class ItemB(itemCode: String, desc: Option[String], 
      nutritionInfo: String) extends Item
    
    case class LineItem(item: Item, quantity: Int)
    
    case class Customer(custId: String, name: String, category: Int)
    
    // a skeleton order
    case class Order(orderNo: String, orderDate: Date, customer: Customer, 
      lineItems: List[LineItem])

    And here's a specification that checks some of the constraints on the Order object ..

    // a basic validation
    private def validate = ReaderTStatus[Order, Boolean] {order =>
      if (order.lineItems isEmpty) left(s"Validation failed for order $order") 
      else right(true)
    }

It's just for illustration and does not contain many domain rules. The important part is how we use the above defined types to implement the function. Order is not an explicit argument to the function - it's curried. The function returns a ReaderTStatus, which itself is a monad and hence allows us to sequence with other specifications in the pipeline. So we get the requirement of sequencing without breaking out of the expression oriented programming style.

    Here are a few other specifications based on the domain knowledge that we have gathered ..

    private def approve = ReaderTStatus[Order, Boolean] {order =>
      right(true)
    }
    
    private def checkCustomerStatus(customer: Customer) = ReaderTStatus[Order, Boolean] {order =>
      right(true)
    }
    
    private def checkInventory = ReaderTStatus[Order, Boolean] {order =>
      right(true)
    }

    Wiring them together

But how do we wire these pieces together so that we have the sequence of operations that the domain mandates, and yet retain all the goodness of compositionality in our model ? It's actually quite easy, since we have already done the hard work of defining the appropriate types that compose ..

Here's the isReadyForFulfilment method that defines the composite specification and invokes all the individual specifications in sequence using a for-comprehension, which, as you all know, does the monadic bind in Scala and gives us the final expression that needs to be evaluated for the Order supplied.

    def isReadyForFulfilment(order: Order) = {
      val s = for {
        _ <- validate
        _ <- approve
        _ <- checkCustomerStatus(order.customer)
        c <- checkInventory
      } yield c
      s(order)
    }

So we have the monadic bind implementing the sequencing without breaking the compositionality of the abstractions. In the next instalment we will see how this can be composed with the downstream processing of the order, which will not only read from the entity but also mutate it - of course in a functional way.

    Thursday, January 23, 2014

    A Sketch as the Query Model of an EventSourced System

In my last post I discussed the count-min sketch data structure that can be used to process data streams using sub-linear space. In this post I will continue with some of my thoughts on how count-min sketches can be used in a typical event sourced application architecture. An event sourcing system typically has a query model which provides a read-only view of the system, obtained by folding all the events into a coherent representation. I have seen applications where the query model is rendered from a relational database, and the queries can take a lot of time to be processed and displayed to the user if the data volume is huge. And when we are talking about Big Data, this is not an uncommon use case.

Instead of rendering the query from the RDBMS, quite a few types of them can be rendered from a count-min sketch using sub-linear space. Consider the use case where you need to report the most frequently occurring user-ids in a Twitter stream. The stream is continuous, huge and unending, and you get to see each item only once. For each item you parse out the user-id occurring in it and update the sketch. Each entry of the sketch then holds a count of the user-ids that hash to that slot, and we take the minimum over all the slots to which a user-id hashes in order to estimate the frequency of that user-id. The details of how this works can be found in my last post.
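Here is a from-scratch sketch of that data structure, just to mirror the description above - the hash family is deliberately simple and not what a production implementation (such as Algebird's) would use:

import scala.util.Random

// depth rows of width counters; one hash per row; estimate = min over rows
class CountMinSketch(width: Int, depth: Int, seed: Int = 42) {
  private val counts = Array.ofDim[Long](depth, width)
  private val rng    = new Random(seed)
  private val prime  = 2147483647L
  private val hashes = Array.fill(depth)((rng.nextInt(Int.MaxValue).toLong, rng.nextInt(Int.MaxValue).toLong))

  private def slot(row: Int, item: String): Int = {
    val (a, b) = hashes(row)
    (((a * item.hashCode + b) % prime).abs % width).toInt
  }

  def add(item: String): Unit =
    (0 until depth).foreach { row => counts(row)(slot(row, item)) += 1 }

  // never an underestimate; collisions can only inflate the count
  def estimate(item: String): Long =
    (0 until depth).map(row => counts(row)(slot(row, item))).min
}

val sketch = new CountMinSketch(width = 1000, depth = 5)
Seq("@alice", "@bob", "@alice").foreach(sketch.add)
sketch.estimate("@alice")    // 2 (possibly more if hashes collide)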

Consider the case where we need to find the heavy hitters - those user-ids whose frequency exceeds a pre-determined threshold. For that, in addition to the sketch we can also maintain a data structure like a heap or a tree where we keep the top-k heavy hitters. When a user-id appears, we update the sketch, get its estimated frequency from the sketch, and if it exceeds the threshold, also record it in that data structure. So at any point in time we can probe this accessory data structure to get the current heavy hitters. The Spark examples contain a sample implementation of this heavy-hitters query over a Twitter stream using the CountMinSketchMonoid of Algebird.
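And a hedged sketch of that heavy-hitters bookkeeping on top of the CountMinSketch class from the previous snippet - a mutable set stands in for the heap or tree mentioned above, which is enough to show the flow:

import scala.collection.mutable

val cms        = new CountMinSketch(width = 1000, depth = 5)
val threshold  = 3L
val heavyUsers = mutable.Set.empty[String]

// on every item: update the sketch, read back the estimate, record it if it crosses the threshold
def observe(userId: String): Unit = {
  cms.add(userId)
  if (cms.estimate(userId) >= threshold) heavyUsers += userId
}

Seq("@alice", "@bob", "@alice", "@carol", "@alice").foreach(observe)
heavyUsers    // Set("@alice") - ids may be over-reported due to collisions, but never missed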

Can this be a viable approach to implementing the query model in an event sourced system, if the use case fits the approximation query approach ? It can be faster, relatively cheap in space, and responsive enough to be displayed in dashboards in the form of charts or graphs.