
Sunday, October 18, 2009

Are ORMs really a thing of the past ?

Stephan Schmidt has blogged on ORMs being a thing of the past. While he emphasizes ORMs' performance concerns and dismisses them as leaky abstractions that throw LazyInitializationException, he does not present any concrete alternative. In his concluding section on alternatives he mentions ..

"What about less boiler plate code due to ORMs? Good DAOs with standard CRUD implementations help there. Just use Spring JDBC for databases. Or use Scala with closures instead of templates. A generic base dao will provide create, read, update and delete operations. With much less magic than the ORM does."

Unfortunately, all these things work on small projects with a small number of tables. Throw in a large project with a complex domain model, requirements for relational persistence and the usual stack of requirements that today's enterprise applications carry, and you will soon discover that your home-made, less-boilerplate stuff goes for a toss. In most cases you will end up either rolling out your own ORM or building a concoction of domain models invaded with indelible concerns of persistence. In the former case, your ORM will obviously not be as performant or efficient as the likes of Hibernate. And in the latter case, either you will end up building an ActiveRecord model with the domain object mirroring your relational table, or you may be more unfortunate with a bigger unmanageable bloat.

It's very true that none of the ORMs in the market today are without their pains. You need to know their internals in order to make them generate efficient queries, you need to understand all the nuances to make use of their caching behaviors and above all you need to manage all the reams of jars that they come with.

Yet, in the Java stack, Hibernate and JPA are still the best of options when we talk about big persistent domain models. Here are my points in support of this claim ..

  • If you are not designing an ActiveRecord based model, it's of paramount importance that you keep your domain model decoupled from the persistence model. And ORMs offer the most pragmatic way towards this approach. I know people will say that it's difficult to achieve this in the real world and that in typical situations compromises need to be made. Yet, I think if you need to make a compromise for performance or other reasons, it's only an exception. Ultimately you will find that the majority of your domain model is decoupled enough for a clean evolution.

  • ORMs save you from writing tons of SQL code. One of the compelling advantages that I have found with an ORM is that my Java code is not littered with SQL that's impossible to refactor when my schema changes. Again, there will be situations when your ORM may not churn out the best optimized SQL and you will have to do that manually. But, as I said before, that's an exception and decisions cannot be made based on exceptions only.

  • ORMs help you virtualize your data layer. And this can have huge gains for scalability. Have a look at how grids like Terracotta can use distributed caches like EhCache to scale out your data layer seamlessly - a small sketch follows this list. Without the virtualization of the ORM, you may still achieve scalability using vendor specific data grids. But this comes at the price of lots of $$ and vendor lock-in.
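
As a small illustration of this last point, here is a minimal sketch assuming Hibernate as the JPA provider with EhCache plugged in as its second-level cache provider (optionally clustered through Terracotta). The entity only declares that it is cacheable - the cache topology stays entirely in configuration.


import javax.persistence.Entity;

import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

// Hibernate specific annotation - marks the entity as cacheable in the
// second-level cache; with EhCache configured as the cache provider,
// cached entities are served from the cache layer instead of the database
@Entity
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
public class Account {
    //.. attributes and behavior unchanged
}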


Stephan also feels that the future of ORMs is jeopardized by the advent of polyglot persistence and nosql data stores. The fact is that the use cases that nosql datastores address are largely orthogonal to those served by relational databases. Key/value lookups with semi-structured data, eventual consistency, and efficient processing of web scale networked data backed by the power of map/reduce paradigms are not something that your online transactional enterprise application with strict ACID requirements will comply with. For too long we have been trying to shoehorn every form of data processing into the single hammer of relational databases. It's indeed very refreshing to see the onset of the nosql paradigm and to see it already in use in production systems. But ORMs will still have their role to play in the complementary set of use cases.

Tuesday, December 18, 2007

Domain Modeling - What exactly is a Rich Model ?

It is possibly an understatement to emphasize the usefulness of making your domain model rich. It has been reiterated many times that a rich domain model is the cornerstone of building a scalable application. It places business rules in their proper perspective, wherever they belong, instead of piling stuff into a fat service layer. But domain services are also part of the domain model - hence when we talk about the richness of a domain model, we need to ensure that the richness is distributed in a balanced way among all the artifacts of the domain model: entities, value objects, services and factories.

In the course of designing a rich domain model, enough care should be taken to avoid the much dreaded bloat in your class design. Richer is not necessarily fatter, and a class should only have the responsibilities that encapsulate its interaction in the domain with other related entities. By related, I mean related by the Law of Demeter. I have seen instances of developers trying to overdo the richness thing and ultimately ending up with bloated class structures in the domain entities. People tend to think of a domain entity as the sole embodiment of all the richness and often end up with an entity design that locks itself up in the context of its execution at the expense of reusability and unit testability.

One very important perspective of architecting reusable domain models is to appreciate the philosophical difference in design forces among the various types of domain artifacts. Designing an entity is different from designing a domain service - you need to focus on reusability and a clean POJO based model while designing a domain entity. OTOH a domain service has to interact a lot with the context of execution - hence it is very likely that a domain service needs to have wiring with infrastructure services and other third party functionalities. A value object has different lifecycle considerations than entities, and we need not worry about its identity. Hence when we talk of richness, it should always be dealt with from the perspective of the application. This post discusses some of the common pitfalls in entity design that developers face while trying to achieve rich domain models.

Entities are the most reusable artifacts of a domain model. Hence an entity should be extremely minimalistic in design and should encapsulate only the state that is required to support the persistence model in the Aggregate to which it belongs. Regarding the abstraction of the entity's behavior, it should contain only business logic and rules that model its own behavior and interaction with its collaborating entities.

Have a look at this simple domain entity ..


class Account {
    private String accountNo;
    private Customer customer;
    private Set<Address> addresses;
    private BigDecimal currentBalance;
    private Date date;

    //.. other attributes

    //.. constructors etc.

    public Account addAddress(final Address address) {
        addresses.add(address);
        return this;
    }

    public Collection<Address> getAddresses() {
        return Collections.unmodifiableSet(addresses);
    }

    public void debit(final BigDecimal amount) {
        //..
    }

    public void credit(final BigDecimal amount) {
        //..
    }

    //.. other methods
}



Looks ok ? It has minimalistic behavior and encapsulates the business functionality that it owns in the domain.

Question : Suppose I want to do a transfer of funds from one account to another. Will transfer() be a behavior of Account ? Let's find out ..


class Account {
    //..
    //.. as above

    // transfer from this account to another
    public void transfer(Account to, BigDecimal amount) {
        this.debit(amount);
        to.credit(amount);
    }
    //.. other methods
}



Looks cool for now. We have supposedly made the domain entity richer by adding more behavior. But at the same time we need to worry about the transactional semantics of the transfer() use case. Do we implant transactional behavior within the entity model as well ? Hold on to that thought for a moment, while we get some fresh requirements from the domain expert.

In the meantime the domain expert tells us that every transfer needs an authorization and logging process through the corporate authorization service. This is part of the statutory regulations and needs to be enforced as part of the business rules. How does that impact our model ? Let us continue adding to the richness of the entity in the same spirit as above ..


class Account {
    //.. as above

    // dependency injected
    private AuthorizationService auth;
    //..

    public void transfer(Account to, BigDecimal amount) {
        auth.authorize(this, to, amount);
        this.debit(amount);
        to.credit(amount);
    }
    //.. other methods
}



Aha! .. so now we start loading up our entity with services that need to be injected from outside. If we use third party dependency injection for this, we can make use of Spring's @Configurable and have DI in entities which are not instantiated by Spring.


import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Configurable;

@Configurable
class Account {
    //.. as above

    // dependency injected by Spring through AspectJ weaving
    @Autowired
    private AuthorizationService auth;

    //..
}



How rich is my entity now ?

Is the above Account model still a POJO ? There has already been a lot of flamebait over this, and I am not going into that debate. But certain issues immediately crop up with the above injection :

  • The class Account becomes compile-time dependent on a third party jar. Those Spring imports lying out there are a red flag.

  • The class loses some degree of unit-testability. Of course, you can inject mocks through Spring DI and do unit testing without wiring up an actual authorization service. But still, the moment you make your class depend on a third party framework, both reusability and unit testability get compromised.

  • Using @Configurable makes you introduce aspect weaving - load time or compile time. The former has performance implications, the latter is messy.



Does this really make my domain model richer ?

The first question you should ask yourself is whether you followed the minimalistic principle of class design. A class should contain *only* what it requires to encapsulate its own behavior and nothing else. Often it is said that making an abstraction design better depends on how much code you can remove from it, rather than how much code you add to it.

In the above case, transfer() is not an innate behavior of the Account entity per se - it is a use case which involves multiple accounts and possibly the use of external services like authorization and logging, along with operational semantics like transactional behavior. transfer() should not be part of the entity Account. It should be designed as a domain service that uses the relationship with the entity Account.


class AccountTransferService {
    // dependency injected
    private AuthorizationService auth;

    void transfer(Account from, Account to, BigDecimal amount) {
        auth.authorize(from, to, amount);
        from.debit(amount);
        to.credit(amount);
    }
    //..
}



Another important benefit that you get out of making transfer() a service is that you have much cleaner transactional semantics. Now you can make the service method transactional by adding an annotation to it, as sketched below. There are enough reasons to justify that transactions should always be handled at the service layer, and not at the entity layer.
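
For instance, here is a minimal sketch assuming Spring's annotation driven transaction management (any declarative transaction mechanism would look similar) ..


import org.springframework.transaction.annotation.Transactional;

class AccountTransferService {
    // dependency injected
    private AuthorizationService auth;

    // the transaction now demarcates the whole use case - the debit and
    // the credit either both happen or neither does
    @Transactional
    public void transfer(Account from, Account to, BigDecimal amount) {
        auth.authorize(from, to, amount);
        from.debit(amount);
        to.credit(amount);
    }
    //..
}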

So, this takes some meat out of your entity Account but once again gives it back the POJO semantics. Taking transfer() out of Account also decouples Account from third party services and dependency injection issues.

What about Account.debit() and Account.credit() ?

If debit() and credit() need to be designed as independent use cases under separate transaction cover, it definitely makes sense to have service wrappers over these methods. Here they are ..


class AccountManagementService {
    // dependency injected
    private AuthorizationService auth;

    @Transactional
    public void debit(Account from, BigDecimal amount) {
        from.debit(amount);
    }

    @Transactional
    public void credit(Account to, BigDecimal amount) {
        to.credit(amount);
    }

    @Transactional
    public void transfer(Account from, Account to, BigDecimal amount) {
        //..
    }
    //..
}



Now the Account entity is minimalistic and just rich enough ..

Injection into Entities - Is it a good idea ?

I don't think there is a definite yes/no answer, just like there is no definite good or bad about a particular design. A design is a compromise of all the constraints in the best possible manner, and the goodness of a design depends very much on the context in which it is used. However, from my experience of JPA based modeling of rich domain models, I prefer to consider injection into entities as a last resort. I try to approach modeling an entity with a clean POJO based approach, because this gives me the holy grail of complete unit-testability, which I consider one of the most important trademarks of good design. In most of the cases where I initially considered using @Configurable, I could come up with alternate designs to make the entity decoupled from the gymnastics of third party weaving and wiring. In your specific design there may be cases where you need to use @Configurable to make rich POJOs, but make a judgement call by considering other options as well before jumping to a conclusion. Some of the other options to consider are :

  • using Hibernate interceptors, which do not compromise the POJO model

  • instead of injection, pass the service as an argument to the entity method, as sketched below. This way you keep the entity a pure POJO, yet open up an option to pass in mocks during unit testing of the entity
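
As a sketch of the second option, the entity method simply takes the collaborating service as a parameter. The AuditService and its record() method below are purely hypothetical - they only illustrate the shape of such an API.


class Account {
    //.. attributes and behavior as before

    // the caller (typically a domain service) supplies the service;
    // the entity stays a pure POJO and a mock can be passed in unit tests
    public void debit(final BigDecimal amount, final AuditService audit) {
        //.. debit logic
        audit.record(this, amount);
    }
}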


Another point to consider is that @Configurable involves constructor interception, which means that the construction of every instance of that particular entity will be intercepted for injection. I do not have any figures, but that can be a performance overhead for entities which are created in huge numbers. A useful compromise in such cases may be to use getter injection for the service, which means that the service will be injected only when it is accessed within the entity.

Having said all this, @Configurable has some advantages over Hibernate interceptors regarding the handling of serialization and the automatic reconstruction of the service object during de-serialization.

For more on domain modeling using JPA, have a look at the mini series which I wrote some time back. And don't miss the comments - there is some interesting feedback and there are some good suggestions ..

Wednesday, December 12, 2007

How do you model a Domain Entity ?

Steve Freeman recommends using an interface for every domain entity in order to have a clean layering in architecture and no dependency between the domain and the persistence model of implementation. He does not mind if there is a single implementation for every interface and recommends his paradigm for expressing the needs of the domain code more clearly by limiting its dependency to an interface that defines just the services it needs from other parts of the system.

I am not sure I agree with his principles. While I am not a fanboy of the interfaces-even-for-single-implementations club, I do not think using concrete classes for domain entities incurs any dependency between the domain layer and the persistence services. Standards like JPA, backed by ORM implementations like Hibernate, provide transparent persistence services today which can be plugged non-intrusively into your domain model. I have indicated the same in the comments to his post, but just thought of having a separate post to make my point more clear.

Regarding data access using JPA, Repositories provide a great abstraction to encapsulate it. While repositories belong to the domain services layer, they use domain entities and value objects to transport data across the layers of your model. Repositories also abstract away specific query languages like EJB QL or Hibernate HQL behind intention-revealing interfaces, keeping your domain entities free of any such dependencies. I had blogged about generic repository implementations to abstract away transparent data access code from domain models using the Bridge design pattern. All configuration parameters, including EntityManagerFactories, can be injected into your repository implementations through DI containers like Spring, keeping the domain model clean of these dependencies.

And JPA provides a nice standardized set of contracts to map your relational data model to your object-oriented domain entity classes. All the annotations are from JPA, so you do not have to import any non-standard stuff into your codebase - all imports are from javax.persistence.*. And if you think annotations couple your code to the data model, go ahead and use XML for a completely transparent and decoupled model mapping. I talked about the virtues of JPA based domain modeling and repository abstraction some time back.

Using transparent data persistence to back up your domain model, I tend to follow the policy of having one concrete class for every domain entity. I use JPA annotations for mapping the domain model to the relational data model, as in the sketch below. This way the implementation adheres to the standards, and I have one clean artifact as my domain entity abstraction.
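
As a minimal illustration (the entity and column names below are made up for this post), this is all it takes to have a single concrete class per domain entity, with imports only from the standard javax.persistence package ..


import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name = "customer")
public class Customer {
  @Id
  @Column(name = "customer_no")
  private String customerNo;

  @Column(name = "name")
  private String name;

  //.. behavior, associations and other attributes
}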

Monday, October 15, 2007

Domain Modeling with JPA - The Gotchas - Part 4 - JPA is a nice way to abstract Repository implementation

When we model a domain, we love to work at a higher level of abstraction through Intention Revealing Interfaces, where the focus is always on the domain artifacts. Whenever at the domain layer we start coding in terms of SQL and pulling resultsets out of the database through a JDBC connection, we lose the domain focus. When we start dealing with the persistent hierarchy directly instead of the domain hierarchy, we make the domain model more vulnerable by exposing it to the lower layers of abstraction. By decoupling the domain model from the underlying persistent relational model, JPA provides us an ideal framework for building higher levels of abstraction towards data access. The domain model can now access data in terms of collections of domain entities and not in terms of the table structures into which these entities are deconstructed. And the artifact that provides us a unified view of the underlying collection of entities is the Repository.

Why Repository ?

In the days of EJB 2.0, we had the DAO pattern. Data Access Objects also provided an abstraction of the underlying data model by defining queries and updates for every table of the model. However, the difference between DAOs and repositories is more semantic than technical. Repositories provide a higher level of abstraction and are a more natural inhabitant of the rich domain model. Repositories offer more controlled access to the domain model, only through Aggregate roots, implementing a set of APIs which follow the Ubiquitous Language of the domain. Programming at the JDBC level with the DAO pattern, we used to think in terms of individual tables and operations on them. However, in reality, the domain model is much more than that - it contains complex associations between entities and strong business rules that govern the integrity of the associations. Instead of having the domain model directly deal with individual DAOs, we had always felt the need for a higher level of abstraction that would shield the domain layer from the granularities of joins and projections. ORM frameworks like Hibernate gave us this ability and specifications like JPA standardized them. Build your repository to get this higher level of abstraction on top of DAOs.

You build a repository at the level of the Aggregate Root, and provide access to all entities underneath the root through the unified interface. Here is an example ..


@Entity
class Order {
  private String orderNo;
  private Date orderDate;
  private Collection<LineItem> lineItems;
  //..

  //.. methods
}

@Entity
class LineItem {
  private Product product;
  private long quantity;
  //..

  //.. methods
}



The above snippet shows the example of an Aggregate, with Order as the root. For an Order Management System, all that the user needs is to manipulate Orders through intention revealing interfaces. He should not be given access to manipulate individual line items of an Order. This may lead to inconsistency of the domain model if the user does an operation on a LineItem which invalidates the invariant of the Order aggregate. While the Order entity encapsulates all domain logic related to manipulation of an Order aggregate, the OrderRepository is responsible for giving the user a single point of interface with the underlying persistent store.


interface OrderRepository {
  //.. queries
  List<Order> getAll();
  List<Order> getByOrderNo(String orderNo);
  List<Order> getByOrderDate(Date orderDate);
  List<Order> getByProduct(Product product);

  //..
  void write(final Order order);
  //..
}



Now the domain services can use this repository to access orders from the underlying database. This is what Eric Evans calls reconstitution, as opposed to construction (which is typically the responsibility of the factory).

JPA to implement Repository

The nice thing about programming to a specification is the abstraction that you can enforce on your model. Repositories can be implemented using JPA and can be nicely abstracted away from the actual domain services. A Repository acts like a collection and gives the user the illusion of using memory based data structures, without exposing the internals of the interactions with the persistent store. Let us see a sample implementation of a method of the above repository ..


class OrderRepositoryImpl implements OrderRepository {
  // injected by the container
  @PersistenceContext
  private EntityManager em;
  //..

  public List<Order> getByProduct(Product product) {
    String query = "select o from Order o, IN (o.lineItems) li where li.product.id = ?1";
    Query qry = em.createQuery(query);
    qry.setParameter(1, product.getId());

    List<Order> res = qry.getResultList();
    return res;
  }
  //..
  //..
}



The good part is that we have used JPA to implement the Repository, but the actual domain services will not contain a single line of JPA code. All of the JPA dependencies are localized within the Repository implementations. Have a look at the following OrderManagementService API ..


class OrderManagementService {
  //..
  // to be dependency injected
  private OrderRepository orderRepository;

  // mark down the price of all orders for a product; the discount
  // argument is a multiplying factor (e.g. 0.9 for a 10% markdown)
  public List<Order> markDown(final Product product, float discount) {
    List<Order> orders = orderRepository.getByProduct(product);
    for(Order order : orders) {
      order.setPrice(order.getPrice() * discount);
    }
    return orders;
  }
  //..
  //..
}



Note that the repository is injected through a DI container like Spring or Guice, so that the domain service remains completely independent of the repository implementation, as the wiring sketch below shows.
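
Here is an illustrative wiring sketch using Guice (Spring XML or any other DI container achieves the same decoupling). The binding lives entirely outside the domain layer - the service receives the repository through a constructor or an @Inject annotated field.


import com.google.inject.AbstractModule;

// binds the repository contract to its JPA backed implementation so that
// OrderManagementService never references OrderRepositoryImpl directly
public class OrderModule extends AbstractModule {
  @Override
  protected void configure() {
    bind(OrderRepository.class).to(OrderRepositoryImpl.class);
  }
}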

But OrderRepository is also a domain artifact !

Right .. and with proper encapsulation we can abstract away the JPA dependencies from OrderRepositoryImpl as well. I had blogged before on how to implement a generic repository service and make all domain repositories independent of the implementation.

Monday, October 08, 2007

Domain Modeling with JPA - The Gotchas - Part 3 - A Tip on Abstracting Relationships

JPA is all about POJOs, and all relationships are managed as associations between POJOs. All JPA implementations are based on best practices that complement an ideal POJO based domain model. In the first part of this series, I talked about immutable entities, which prevent clients from inadvertent mutation of the domain model. Mutation mainly affects associations between entities, thereby making the aggregate inconsistent. In this post, I will discuss some of the gotchas and best practices for abstracting associations between entities away from client code, making your domain model much more robust.

Maintaining relationships between entities is done by the underlying JPA implementation. But it is the responsibility of the respective entities to set them up based on the cardinalities specified as part of the mapping. This setting up of relationships has to be explicit, through appropriate message passing between the respective POJOs. Let us consider the following relationship :


@Entity
public class Employee implements Serializable {
  //..
  //.. properties

  @ManyToMany
  @JoinTable(
    name="ref_employee_skill",
    joinColumns=@JoinColumn(name="employee_pk", referencedColumnName="employee_pk"),
    inverseJoinColumns=@JoinColumn(name="skill_pk", referencedColumnName="skill_pk")
  )
  private Set<Skill> skills;
  //..
  //.. other properties
  //.. methods
}



There is a many-to-many relationship between Employee and Skill entities, which is set up appropriately using proper annotations. Here are the respective accessors and mutators that help us manage this relationship on the Employee side :


@Entity
public class Employee implements Serializable {
  //..
  //.. properties

  public Set<Skill> getSkills() {
    return skills;
  }

  public void setSkills(Set<Skill> skills) {
    this.skills = skills;
  }
  //..
}



Similarly on the Skill entity we will have the corresponding annotations and the respective accessors and mutators for the employees collection ..


@Entity
public class Skill implements Serializable {
  //..
  //.. properties

  @ManyToMany(
    mappedBy="skills"
  )
  private Set<Employee> employees;

  public Set<Employee> getEmployees() {
    return employees;
  }

  public void setEmployees(Set<Employee> employees) {
    this.employees = employees;
  }
  //..
}



Can you see the problem in the above model ?

The problem lies in the fact that the model is vulnerable to inadvertent mutation. Public setters are evil - they expose the model to be changed inconsistently by the client. How ? Let us look at the following snippet of client code trying to set up the domain relationship between an Employee and a collection of Skills ..


Employee emp = .. ; // managed
Set<Skill> skills = .. ; //managed
emp.setSkills(skills);



The public setter sets the skillset of the employee to the set skills. But what about the back reference ? Every skill should also point to the Employee emp. And this needs to be done explicitly by the client code.


for(Skill skill : skills) {
  skill.getEmployees().add(emp);
}



This completes the relationship management code on part of the client. But is this the best level of abstraction that we can offer from the domain model ?

Try this !

If the setter can make your model inconsistent, do not make it public. Hibernate does not mandate public setters for its own working. Replace public setters with domain friendly APIs, which make more sense to your client. How about addSkill(..) ?


@Entity
public class Employee implements Serializable {
  //..
  //.. properties

  public Employee addSkill(final Skill skill) {
    skills.add(skill);
    skill.addEmployee(this);
    return this;
  }
  //..
}



addSkill() adds a skill to an employee. Internally it updates the collection of skills and, best of all, transparently manages both sides of the relationship. And it returns the current Employee instance to make it a FluentInterface. Now your client can use your API as ..


Employee emp = .. ; // managed
emp.addSkill(skill_1)
   .addSkill(skill_2)
   .addSkill(skill_3);



Nice!
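
For completeness - the skill.addEmployee(this) call above assumes a corresponding helper on the Skill entity, which the snippets here do not show. A minimal sketch could be a package scoped method that only updates its own side of the association, so that the two methods never recurse into each other.


@Entity
public class Skill implements Serializable {
  //..
  //.. properties

  // package scope - invoked by Employee.addSkill() to keep the inverse
  // side of the many-to-many association in sync
  void addEmployee(final Employee employee) {
    employees.add(employee);
  }
  //..
}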

For clients holding a collection of skills (managed), add another helper method ..


@Entity
public class Employee implements Serializable {
  //..
  //.. properties

  public Employee addSkill(final Skill skill) {
    //.. as above
  }

  public Employee addSkills(final Set<Skill> skills) {
    // note: this.skills - the parameter shadows the field
    this.skills.addAll(skills);

    for(Skill skill : skills) {
      skill.addEmployee(this);
    }
    return this;
  }
  //..
}



Both the above methods abstract away the mechanics of relationship management from the client and present fluent interfaces which are much friendlier to the domain. Remove the public setter, and don't forget to make the getter return an immutable collection ..


@Entity
public class Employee implements Serializable {
  //..
  //.. properties

  public Set<Skill> getSkills() {
    return Collections.unmodifiableSet(skills);
  }
  //..
}



The above approach makes your model more robust and domain friendly by abstracting away the mechanics of relationship management from your client code. Now, as an astute reader, you must be wondering how you would use this domain entity as the command object of your controller layer in MVC frameworks - the setters are no longer there as public methods ! Well, that is the subject of another post, another day.

Friday, September 28, 2007

Domain Modeling with JPA - The Gotchas - Part 2 - The Invaluable Value Objects

In the first post of this gotcha series, I discussed some of the issues around making entities publicly immutable, by not exposing direct setters to the layers above. This approach has its own set of advantages and offers a big safety net to the domain model. The domain model can then be manipulated only through the methods published by the domain contracts. While still on the subject of immutability of domain models, I thought I would discuss the other cousin of immutable entities that plays a very big role in making your domain driven design more supple.

Enter Value Objects.

While an object-oriented domain model focuses on the behavior of entities, the relational persistence model manages object identities. And a successful marriage of the two paradigms is the job of a good ORM framework. But not all entities need to maintain their identities - their behaviors depend only upon the values they carry. Eric Evans calls them Value Objects.

Value objects are an integral part of any object oriented model, while they are somewhat obscure in the relational persistence model. It is a real challenge to have a successful representation of value objects as reusable abstractions in the OO domain model, while transparently storing them in the relational model with minimum invasiveness on the part of the programmer. Value objects increase the reusability of the domain model, and JPA offers a flexible programming model to make their persistence transparent to the developer. The big advantage with value objects is that you need not manage their identities or their lifetimes - both are the same as those of the entities which own them.

Modeling a Value Object with JPA

Consider a sample model snippet where an Employee has-an Address - both of them are designed as separate domain objects in the model. After a careful analysis of the domain, we find that addresses are never shared, i.e. each employee will have a unique address. Hence the relational model becomes the following monolithic table structure :

create table employee (
  -- ..employee specific columns
  -- ..
  -- ..address specific columns
)


In the relational model, we need not have any identity for an address - hence it can be seamlessly glued into the employee record. In the OO model, however, we need to have a fine grained abstraction for Address, since the purpose of the OO model is to have the most faithful representation of how the domain behaves. The Address class will have its own behavior, e.g. the format in which an Address gets printed depends upon the country of residence, and it makes no sense to club these behaviors within the Employee domain entity. Hence we model the class Address as a separate POJO.


// immutable
class Address {
  private String houseNumber;
  private String street;
  private String city;
  private String zip;
  private String country;

  //.. getters
  //.. no setter
  //.. behaviors
}



and an Employee has-an Address ..


class Employee {
  private Address homeAddress;
  //.. other attributes
}



JPA makes it really easy to have a successful combination of the two models in the above relationship. Just annotate the Address class with @Embeddable and add an @Embedded annotation to the Address property in the Employee class. This will do all the magic of mapping the individual address attributes as separate columns of the Employee table. And of course we can use annotations like @AttributeOverride to change column names between the class and the table.


@Entity
class Employee {
  @Embedded
  @AttributeOverrides({
    @AttributeOverride(name = "street",
        column = @Column(name = "home_street")),
    @AttributeOverride(name = "city",
        column = @Column(name = "home_city")),
    @AttributeOverride(name = "zip",
        column = @Column(name = "home_zip"))})
  private Address homeAddress;
  //.. other attributes
}



Modeling with JPA allows independent evolution of the OO domain model and the relational persistence model. Don't ever try to force the relational paradigm onto your domain - you are likely to end up in the swamps of ActiveRecord modeling.

Collection of Value Objects

In the above example, the entity Employee has a one-to-one association with Address - hence it was easy to embed the address attributes as columns within the Employee table. How do we handle a one-to-many association between an entity and a value object ? Let us have a look at this scenario ..

A Project is an entity which abstracts an active project in a company. And the company raises Bills periodically to its clients for all the projects that it executes. The Bill object is a value object. We just have to raise bills and keep a record of all bills raised till date. A Bill does not have an identity - it's only the bill date and amount that matter. But we need to associate every bill with the project for which it is raised. This clearly warrants a 1..n association in the relational model as well. And the lifecycle of all bills is coupled to the lifecycle of the owning project. Sharing of bills is not allowed, and we do not need to manage the identity of every bill.

Using Hibernate specific annotations, here's how we can manage a set of value objects owned by an entity.


@Entity
class Project {
  //.. attributes

  @CollectionOfElements
  @JoinTable(name="project_bill",
    joinColumns = @JoinColumn(name="project_pk")
  )
  @AttributeOverrides( {
    @AttributeOverride(name = "billNo",
        column = @Column(name = "project_bill_no")),
    @AttributeOverride(name = "billDate",
      column = @Column(name = "project_bill_date")),
    @AttributeOverride(name = "raisedOn",
        column = @Column(name = "raised_on")),
    @AttributeOverride(name = "amount",
      column = @Column(name = "project_bill_amount"))}
  )
  @CollectionId(
    columns = @Column(name = "project_bill_pk"),
    type = @Type(type = "long"),
    generator = "sequence"
  )
  private Set<Bill> bills = new HashSet<Bill>();

  //..
  //..
}



Bill is not an entity - it is a simple POJO, which can be reused along with other owning entities as well. And if we want an inverse association as well, we can maintain a reference to the owning project within the Bill class.


@Embeddable
public class Bill {
  @Parent
  private Project project;
  //..
  //..
}



The database contains a table project_bill, which keeps all bills associated with a project indexed by project_pk. In case we need a sequencing of all bills, we can have a sequence generated in the project_bill table itself through the @org.hibernate.annotations.CollectionId annotation.

Value objects are an immensely useful abstraction. Analyse your domain model and find out as many value objects as you can. And use the power of JPA and your ORM implementation to map them into your persistent model. The more value objects you can dig out, the less effort you will spend managing identities and controlling lifetimes for each of them.

Decoupled Value Object Instantiation Models

There are some situations where value objects tend to be numerous. Here is an example :

Every employee has-a designation. Designation is a value object in our domain model, and in a typical organization we have a limited number of designations. We make a separate abstraction for Designation, since a designation has other behaviors associated with it, e.g. perks, salary bracket etc. Here we go ..


@Embeddable
class Designation {
  //.. attributes
  //.. behavior
  //.. immutable
}



and the Employee entity ..


@Entity
class Employee {
  //.. attributes
  private Designation designation;
  //.. other attributes
  //..
}



What about the relational model ? We can employ a nice little trick here ..

Clearly many employees share a designation - hence, theoretically speaking, Designation is an entity (and not a value object) in the relational model, having a 1..n association with the Employee table. But, as Eric Evans has suggested in his discussion on Tuning a Database with Value Objects, there may be situations when it is better to apply denormalization techniques for the sake of storing collocated data. Making Designation an entity and wiring a relationship with Employee through its identity will store the Designation table in a far away physical location, leading to extra page fetches and additional access time. As an alternative, if access time is more critical than physical storage, we can store copies of Designation information with the Employee table itself. And, doing so, Designation turns into a Value Object for the relational model as well! In real world use cases, I have found this technique to be an extremely helpful one - hence thought of sharing the tip with all the readers of this blog.

However, we are not done yet - in fact, the subject of this paragraph is decoupled instantiation models for value objects, and we haven't yet started the tango. We first had to set the stage to make Designation a value object at both the levels - domain and persistence models. Now let us find out how we can optimize our object creation at the domain layer while leaving the persistence level to our JPA implementation.

In a typical use case of the application, we may have bulk creation of employees, which may lead to a bulk creation of value objects. One of the cool features of using JPA is that we can adopt a completely different instantiation strategy for our OO domain model and the relational persistent model. While persisting the value object Designation, we are embedding it within the Employee entity - hence there is always a copy of the value object associated with the persistent Employee model. And this is completely managed by the JPA implementation of the ORM. However, for the domain model, we can control the number of distinct instances of the value object created using the Flyweight design pattern. Have a look ..


@Embeddable
class Designation {
  //.. persistent attributes

  @Transient
  private static Map<String, Designation> designations
    = new HashMap<String, Designation>();

  // package scope
  Designation() {
    //.. need this for Hibernate
  }

  // factory method
  public static Designation create(..) {
    Designation d = null;
    if ((d = designations.get(..)) != null) {
      return d;
    }
    // create new designation
    // put it in the map
    // and return
  }
  //..
  //..equals(), hashCode() .. etc.
}



We have a flyweight that manages a local cache of distinct designations created and controls the number of objects instantiated. And since value objects are immutable, they can be freely shared across entities in the domain model. This is an example where, using JPA, we can decouple the instantiation strategy of the domain objects from the persistence layer. Although we store value objects by value in the database, we need not have distinct in-memory instances in our domain model. And, if you are using Hibernate, you need not have a public constructor either. For generation of proxies, Hibernate recommends at least package visibility, which works fine with our strategy of controlling instantiation at the domain layer using flyweights.

Value objects are invaluable in making designs more manageable and flexible. And JPA provides great support for transparent handling of the instantiation and persistence of value objects along with their owning entities. With a rich domain model, backed up by a great ORM like Hibernate that implements JPA, we can get the best of both worlds - powerful OO abstractions as well as transparent handling of their persistence in the relational database. I had earlier blogged about injecting ORM backed repositories for transparent data access in a domain model. In future installments of this series, I plan to cover more on this subject, describing real life use cases of applying domain driven design techniques using JPA and Hibernate.