
12 JPA Integration

This chapter is still under active development. The contents will change.
The Java Persistence API [J]  [J] http://java.sun.com/javaee/overview/faq/persistence.jsp, or JPA for short, is the evolution of a number of Java frameworks that provide a simple database access layer for plain Java objects (and, transitively, Scala objects). JPA was developed as part of the Enterprise JavaBeans 3 (EJB3) specification, with the goal of simplifying the persistence model. Prior versions had used the Container Managed Persistence (CMP) framework, which required many boilerplate artifacts in the form of interfaces and XML descriptors. As part of the overarching theme of EJB3 to simplify and use convention over configuration, JPA uses sensible defaults and annotations heavily, while allowing for targeted overrides of behavior via XML descriptors. JPA also does away with many of the interfaces used in CMP and provides a single javax.persistence.EntityManager class for all persistence operations. An additional benefit is that JPA was designed so that it can be used both inside and outside of an Enterprise container, and several projects (Hibernate, TopLink, JPOX, etc.) provide standalone implementations of EntityManager.
As we’ve seen in chapter 8↑, Lift already comes with a very capable database abstraction layer, so why would we want to use something else? There are a number of reasons:
  1. JPA is easily accessible from both Java and Scala. If you are using Lift to complement part of a project that also contains Java components, JPA allows you to use a common database layer for both and avoid duplication of effort. It also means that if you have an existing project based on JPA, you can easily integrate it into Lift.
  2. JPA gives you more flexibility with complex and/or large schemas. While Lift’s Mapper provides most of the functionality you would need, JPA provides additional lifecycle methods and mapping controls when you have complex needs. Additionally, JPA has better support for joins and relationships between entities.
  3. JPA can provide additional performance improvements via second-level object caching. It’s possible to roll your own in Lift, but JPA allows you to cache frequently-accessed objects in memory so that you avoid hitting the database entirely.

12.1 Introducing JPA

In order to provide a concrete example to build on while learning how to integrate JPA, we’ll be building a small Lift app to manage a library of books. The completed example is available under the Lift Git repository in the sites directory, and is called “JPADemo”. Basic coverage of the JPA operations is in section 12.5 on page 1↓; if you want more detail on JPA, particularly with advanced topics like locking and hinting, there are several very good tutorials to be found online [K]  [K] http://java.sun.com/developer/technicalArticles/J2EE/jpa/, http://www.jpox.org/docs/1_2/tutorials/jpa_tutorial.html. Our first step is to set up a master project for Maven. This project will have two modules under it, one for the JPA library and one for the Lift application. In a working directory of your choosing, issue the following command:
mvn archetype:generate \
  -DarchetypeRepository=http://scala-tools.org/repo-snapshots \
  -DarchetypeGroupId=net.liftweb \
  -DarchetypeArtifactId=lift-archetype-jpa-basic \
  -DarchetypeVersion=1.1-SNAPSHOT \
  -DgroupId=com.foo.jpaweb \
  -DartifactId=JPADemo \
  -Dversion=1.0-SNAPSHOT
This will use the JPA archetype to create a new project for you with modules for the persistence and web portions of the project.
Note: The reason we have split the module out into two projects is that it aids deployment on Java EE servers to have the Persistence module be an independent JAR file. If you don’t need that, you can simply merge the contents of the two modules into a single project and it will work standalone. Note that you’ll need to merge the pom.xml file’s dependencies and plugin configurations from all three POMs. Lift comes with an archetype that handles this already, albeit without the demo code we show here. Simply use the lift-archetype-jpa-blank-single archetype and you’ll get a blank project (with minimal files for JPA and Lift) that you can use for your app. There’s also a blank archetype that uses two modules if you want that, called lift-archetype-jpa-blank.
You will get a prompt asking you to confirm the settings we’ve chosen; just hit <enter>. As of this writing we have to use the snapshot version of the archetype because it didn’t make the Lift 1.0 deadline, but otherwise it’s a stable archetype. You will also see some Velocity warnings about invalid references; these can be safely ignored and will hopefully be fixed by 1.1. After the archetype is generated, you should have the following tree structure:
JPADemo
|-- README
|-- pom.xml
|-- spa
|   |-- pom.xml
|   ‘-- src ...
‘-- web
    |-- pom.xml
    ‘-- src ...
If you look at the source directories, you’ll see that our code is already in place! If you’re making your own application you can either use the previously mentioned blank archetypes to start from scratch, or use the basic archetype and modify the POMs, Scala code and templates to match your needs. For now, let’s go over the contents of the project.

12.1.1 Using Entity Classes in Scala

The main components of a JPA library are the entity classes that comprise your data model. For our example application we need two primary entities: Author and Book. Let’s take a look at the Author class first, shown in listing G.1.1 on page 1↓. The listing shows our import of the entire javax.persistence package as well as several annotations on a basic class. For those of you coming to JPA from the Java world, the annotations should look very familiar. The major difference between Java and Scala annotations is that each parameter in a Scala annotation is considered a val, which explains the presence of the val keyword in lines 12, 15 and 17-18. In line 17 you may also note that we must specify the target entity class; although Scala uses generics, the generic types aren’t visible from Java, so the Java JPA libraries can’t deduce the correct type. You may also notice that on line 18 we need to use the Java collections classes for Set, List, etc. With a little bit of implicit conversion magic (to be shown later), this has very little impact on our code. One final item to note is that the Scala compiler currently does not support nested annotations  [L]  [L] https://lampsvn.epfl.ch/trac/scala/ticket/294, so where we would normally use them (join tables, named queries, etc), we will have to use the orm.xml descriptor, which we cover next.
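To make this concrete, the following condensed sketch shows the general shape of such an entity. This is only an illustration: the field names, defaults and annotation parameters are our own simplification rather than the exact appendix listing, and the line numbers mentioned above refer to the full listing, not to this sketch.
Sketch of a JPA entity in Scala
package com.foo.jpaweb.model

import javax.persistence._

@Entity
class Author {
  @Id
  @GeneratedValue{val strategy = GenerationType.AUTO}
  var id : Long = _

  @Column{val unique = true, val nullable = false}
  var name : String = ""

  // Scala generics aren't visible to the Java JPA provider,
  // so the target entity class must be named explicitly
  @OneToMany{val mappedBy = "author", val targetEntity = classOf[Book],
             val cascade = Array(CascadeType.REMOVE)}
  var books : java.util.Set[Book] = new java.util.HashSet[Book]()
}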

12.1.2 Using the orm.xml descriptor

As we stated in the last section, there are some instances where the Scala compiler doesn’t fully cover the JPA annotations (nested annotations in particular). Some would also argue that queries and other ancillary data (table names, column names, etc.) should be kept separate from the code. Because of that, JPA allows you to specify an external mapping descriptor to define and/or override the mappings for your entity classes. The basic orm.xml file starts with the standard XML preamble and schema declaration, as shown in listing G.1.2 on page 1↓. Following the preamble, we can define a package element that applies to all subsequent entries so that we don’t need to use the fully-qualified name for each class. In our example, we would like to define some named queries for each class. Putting them in the orm.xml allows us to modify them without requiring a recompile. The complete XML Schema Definition can be found at http://java.sun.com/xml/ns/persistence/orm_1_0.xsd.
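As a rough sketch of the overall shape (the exact header is in listing G.1.2; the entity entry here just reuses the findAllAuthors query from the next listing), the descriptor declares the ORM schema, an optional package element, and then per-entity elements:
orm.xml skeleton
<?xml version="1.0" encoding="UTF-8"?>
<entity-mappings xmlns="http://java.sun.com/xml/ns/persistence/orm"
                 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                 xsi:schemaLocation="http://java.sun.com/xml/ns/persistence/orm
                                     http://java.sun.com/xml/ns/persistence/orm_1_0.xsd"
                 version="1.0">
  <package>com.foo.jpaweb.model</package>
  <entity class="Author">
    <named-query name="findAllAuthors">
      <query><![CDATA[from Author a order by a.name]]></query>
    </named-query>
  </entity>
  <!-- additional entities and named queries go here -->
</entity-mappings>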
In this case we have used the orm.xml file to augment our entity classes. If, however, we would like to override the annotation-based configuration, we can do that as well on a case-by-case basis. Suppose we wished to change the column name for the Author’s name property. We can add (per the XSD) a section to the Author entity element as shown in listing 12.1.2↓. The attribute-override element lets us change anything that we would normally specify on the @Column annotation. This gives us an extremely powerful method for controlling our schema mapping outside of the source code. We can also add named queries in the orm.xml so that we have a central location for defining or altering the queries.
Author override
<entity class="Author">
  <named-query name="findAllAuthors">
    <query><![CDATA[from Author a order by a.name]]></query>
  </named-query>
  <attribute-override name="name">
    <column name="author_name" length="30" />
  </attribute-override>
</entity>

12.1.3 Working with Attached and Detached Objects

JPA operates with entities in one of two modes: attached and detached. An attached object is one that is under the direct control of a live JPA session. That means that the JPA provider monitors the state of the object and writes it to the database at the appropriate time. Objects can be attached either explicitly via the persist and merge methods (section 12.5.1↓), or implicitly via query results, the getReference method or the find method.
As soon as the session ends, any formerly attached objects are now considered detached. You can still operate on them as normal objects but any changes are not directly applied to the database. If you have a detached object, you can re-attach it to your current session with the merge method; any changes since the object was detached, as well as any subsequent changes to the attached object, will be applied to the database at the appropriate time. The concept of object attachment is particularly useful in Lift because it allows us to generate or query for an object in one request cycle and then make modifications and merge in a different cycle.
As an example, our library application provides a summary listing of authors on one page (src/main/webapp/authors/list.html) and allows editing of those entities on another (src/main/webapp/authors/add.html). We can use the SHtml.link generator on our list page, combined with a RequestVar, to pass the instance (detached once we return from the list snippet) to our edit snippet. Listing 12.1.3↓ shows excerpts from our library application snippets demonstrating how we hand off the instance and do a merge within our edit snippet’s submission processing function (doAdd).
Passing Detached Instances Around an Application
// in src/main/scala/net/liftweb/jpademo/snippets/Author.scala
...package and imports ... 
class AuthorOps {
  def list (xhtml : NodeSeq) : NodeSeq = {
    val authors = ...
    authors.flatMap(author => bind("author", xhtml, ...
        // use the link closure to capture the current
        // instance for edit insertion
        "edit" -> SHtml.link("add.html",
           () => authorVar(author), Text(?("Edit")))))
  }
  ...
  // Set up a requestVar to track the author object for edits and adds
  object authorVar extends RequestVar(new Author())
  // helper def
  def author = authorVar.is
  def add (xhtml : NodeSeq) : NodeSeq = {
    def doAdd () = {
      ...
      // merge and save the detached instance
      Model.mergeAndFlush(author)
      ...
    }
    // Hold a val here so that the closure grabs it instead of the def
    val current = author
    // Use a hidden element to reinsert the instance on form submission
    bind("author", xhtml,
      "id" -> SHtml.hidden(() => authorVar(current)), ...,
      "submit" -> SHtml.submit(?("Save"), doAdd))
  }
}

12.2 Obtaining a Per-Session EntityManager

Ideally, we would like our JPA access to be as seamless as possible, particularly when it comes to object lifecycle. In JPA, objects can be attached to a current persistence session, or they can be detached from a JPA session. This gives us a lot of flexibility (which we’ll use later) in dealing with the objects themselves, but it also means that we need to be careful when we’re accessing object properties. JPA can use lazy retrieval for instance properties; in particular, this is the default behavior for collection-based properties. What this means is that if we’re working on a detached object and we attempt to access a collection contained in the instance, we’re going to get an exception because the session that the object was loaded in is no longer live. What we’d really like is some hook into Lift’s request cycle that allows us to set up a session when the request starts and properly close it down when the request ends. We still have to be careful with objects that have been passed into our request (from form callbacks, for instance), but in general this guarantees that once we’ve loaded an object in our snippet code we have full access to all of its properties at any point within our snippets.
Fortunately for us, Lift provides just such a mechanism. In fact, Lift supports several related mechanisms for lifecycle management [M]  [M] Notably, S.addAround with the LoanWrapper, but for now we’re going to focus on just one: the RequestVar. A RequestVar represents a variable associated with the lifetime of the request. This is in contrast to SessionVar, which defines a variable for the lifetime of the user’s session. RequestVar gives us several niceties over handling request parameters ourselves, including type safety and a default value. We go into more detail on RequestVars and SessionVars in section 3.11 on page 1↑. In addition to the Lift facilities, we also use the ScalaJPA project [N]  [N] http://scala-tools.org/mvnsites-snapshots/scalajpa/, source code available at http://github.com/dchenbecker/scalajpa/tree to handle some of the boilerplate of utilizing JPA. ScalaJPA provides some nice traits that “Scalafy” the JPA EntityManager and Query interfaces, as well as accessors that make retrieving an EM simple. To use ScalaJPA we simply add the following dependency to our POM.
<dependency>
  <groupId>org.scala-tools</groupId>
  <artifactId>scalajpa</artifactId>
  <version>1.0-SNAPSHOT</version>
</dependency>
Note that at the time of writing the library is at 1.0-SNAPSHOT, but should be promoted to 1.0 soon.
We leverage ScalaJPA’s LocalEMF and RequestVarEM traits to provide a simple RequestVar interface to obtain the EM via local lookup (i.e. via the javax.persistence.Persistence class), as shown in listing 12.2 on page 1↓. It’s trivial to use JNDI instead by substituting the JndiEMF trait for the LocalEMF trait, but the details of setting up the JNDI persistence module are beyond the scope of this book.
Setting up an EntityManager via RequestVar
import _root_.org.scala_libs.jpa._
object Model extends LocalEMF("jpaweb") with RequestVarEM
Once we have this object set up, we can access all of the ScalaEntityManager methods directly on Model.

12.3 Handling Transactions

We’re not going to go into too much detail here; there are better documents available [O]  [O] http://java.sun.com/developer/EJTechTips/2005/tt0125.html if you want to go into depth on how the Java Transaction API (JTA) or general transactions work. Essentially, a transaction is a set of operations that are performed atomically; that is, they either all complete successfully or none of them do. The classic example is transferring funds between two bank accounts: you subtract the amount from one account and add it to the other. If the addition fails and you’re not operating in the context of a transaction, the client has lost money!
In JPA, transactions are required. If you don’t perform your operations within the scope of a transaction, you will either get an exception (if you’re using JTA) or you will spend many hours trying to figure out why nothing is being saved to the database. There are two ways of handling transactions under JPA: resource-local and JTA. Resource-local transactions are what you use if you are managing the EM factory yourself (corresponding to the LocalEMF trait). Similarly, JTA is what you use when you obtain your EM via JNDI. Technically it’s also possible to use JTA with a locally managed EM, but that configuration is beyond the scope of this book.
Generally, we would recommend using JTA where it’s free (i.e., when deploying to a Java EE container) and using resource-local when you’re using a servlet container such as Jetty or Tomcat. If you will be accessing multiple databases or involving resources like EJBs, it is much safer to use JTA so that you can utilize distributed transactions. Choosing between the two is as simple as setting a property in your persistence.xml file (and changing the code to open and close the EM). Listing 12.3↓ shows examples of setting the transaction-type attribute to RESOURCE_LOCAL and to JTA. If you want to use JTA, you can also omit the transaction-type attribute since JTA is the default.
Setting the transaction type
<persistence-unit name="jpaweb" transaction-type="RESOURCE_LOCAL">
  <non-jta-data-source>myDS</non-jta-data-source>

<persistence-unit name="jpaweb" transaction-type="JTA">
  <jta-data-source>myDS</jta-data-source>
You must make sure that your EM setup code matches what you have in your persistence.xml. Additionally, the database connection must match; with JTA, you must use a jta-data-source (obtained via JNDI) for your database connection. For resource-local, you can either use a non-jta-data-source element or you can set the provider properties, as shown in listing 12.3 on page 1↓. In this particular example we’re setting the properties for Hibernate, but similar properties exist for TopLink [P]  [P] http://www.oracle.com/technology/products/ias/toplink/JPA/essentials/toplink-jpa-extensions.html, JPOX [Q]  [Q] http://www.jpox.org/docs/1_2/persistence_unit.html, and others.
If you’ll be deploying into a JEE container, such as JBoss or GlassFish, then you get JTA support almost for free since JTA is part of the JEE spec. If you want to deploy your application on a lightweight container like Jetty or Tomcat, we would recommend that you look into using an external JTA coordinator such as JOTM, Atomikos, or JBoss Transaction Manager, since embedding a JTA provider in your container is a nontrivial task.
Setting resource-local properties for Hibernate
<persistence>
   <persistence-unit name="jpaweb" transaction-type="RESOURCE_LOCAL">
      <properties>
         <property name="hibernate.dialect" value="org.hibernate.dialect.PostgreSQLDialect"/>
         <property name="hibernate.connection.driver_class" value="org.postgresql.Driver"/>
         <property name="hibernate.connection.username" value="somUser"/>
         <property name="hibernate.connection.password" value="somePass"/>
         <property name="hibernate.connection.url" value="jdbc:postgresql:jpaweb"/>
      </properties>
   </persistence-unit>
</persistence>
One final note in regard to transactions is how they’re affected by exceptions. Per the spec, any exception thrown during the scope of a transaction, other than javax.persistence.NoResultException or javax.persistence.NonUniqueResultException, will cause the transaction to be marked for rollback.

12.4 ScalaEntityManager and ScalaQuery

Now that we’ve gone through setting up our EntityManager, let’s look at how we actually use it in an application. As a convenience, ScalaJPA defines two thin wrappers on the existing EntityManager [R]  [R] http://java.sun.com/javaee/5/docs/api/javax/persistence/EntityManager.html and Query [S]  [S] http://java.sun.com/javaee/5/docs/api/javax/persistence/Query.html interfaces to provide more Scala-friendly methods. This means that we get Scala’s collection types (i.e. List instead of java.util.List) and generic signatures so that we can avoid explicit casting. The ScalaEntityManager trait provides a wrapper on the EntityManager class, and is included as part of the RequestVarEM trait that we’ve mixed into our Model object. The API for ScalaEntityManager can be found at http://scala-tools.org/mvnsites/scalajpa/scaladocs/org/scala_libs/jpa/ScalaEntityManager.html.
Next, we have the ScalaQuery trait, with API docs at http://scala-tools.org/mvnsites/scalajpa/scaladocs/org/scala_libs/jpa/ScalaQuery.html. Like ScalaEntityManager, this is a thin wrapper on the Query interface. In particular, methods that return entities are typed against the ScalaQuery itself, so that you don’t need to do any explicit casting in your client code. We also have some utility methods to simplify setting a parameter list as well as obtaining the result(s) of the query.

12.5 Operating on Entities

In this section we’ll demonstrate how to work with entities and cover some important tips on using JPA effectively.

12.5.1 Persisting, Merging and Removing Entities

The first step to working with any persistent entities is to actually persist them. If you have a brand new object, you can do this with the persist method:
val myNewAuthor = new Author; myNewAuthor.name = "Wilma"
Model.persist(myNewAuthor)
This attaches the myNewAuthor object to the current persistence session. Once the object is attached it should be visible in any subsequent queries, although it may not be written to the database just yet (see section 12.5.6↓). Note that the persist method is only intended for brand new objects. If you have a detached object and you try to use persist you will most likely get an EntityExistsException, since the instance you’re persisting technically conflicts with itself. Instead, you want to use the merge method to re-attach detached objects:
val author = Model.merge(myOldAuthor)
An important thing to note is that the merge method doesn’t actually attach the object passed to it; instead, it makes an attached copy of the passed object and returns the copy. If you mistakenly merge without using the returned value:
Model.merge(myOldAuthor)
myOldAuthor.name = "Fred"
you’ll find that subsequent changes to the object won’t be written to the database. One nice aspect of the merge method is that it intelligently detects whether the entity you’re merging is a new object or a detached object. That means that you can use merge everywhere and let it sort out the semantics. For example, in our library application, using merge allows us to combine the adding and editing functionality into a single snippet; if we want to edit an existing Author we pass it into the method. Otherwise, we pass a brand new Author instance into the method and the merge takes care of either case appropriately.
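To make that concrete, the safe pattern is simply to keep working with the copy that merge returns (a small illustrative snippet):
val attached = Model.merge(myOldAuthor)
attached.name = "Fred" // changes made via the attached copy will be persisted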
Removing an object is achieved by calling the remove method:
Model.remove(myAuthor)
The passed entity is detached from the session immediately and will be removed from the database at the appropriate time. If the entity has any associations on it (to collections or other entities), they will be cascaded as indicated by the entity mapping. An example of a cascade is shown in the Author listing on page 1↓. The books collection has the cascade set to REMOVE, which means that if an author is deleted, all of the books by that author will be removed as well. The default is to not cascade anything, so it’s important that you properly set the cascade on collections to avoid constraint violations when you remove entities. It’s also useful to point out that you don’t actually need to have an entity loaded to remove it. You can use the getReference method to obtain a proxy that will cause the corresponding database entry to be removed:
Model.remove(Model.getReference(classOf[Author], someId))

12.5.2 Loading an Entity

There are actually three ways to load an entity object in your client code: using find, getReference or a query. The simplest is to use the find method:
val myBook = Model.find(classOf[Book], someId)
The find method takes two parameters: the class that you’re trying to load and the value of the ID field of the entity. In our example, the Book class uses the Long type for its ID, so we would put a Long value here. It returns a Full Box (section C.2 on page 1↓) if the entity is found in the database; otherwise, it returns Empty. With find, the entity is loaded immediately from the database and can be used in both attached and detached states.
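Since the result comes back as a Box, a typical pattern in snippet code is to match on it. This is an illustrative sketch (Full and Empty come from Lift’s Box hierarchy, and S.error is Lift’s standard error notice):
Model.find(classOf[Book], someId) match {
  case Full(book) => // the entity was found and is attached to the session
  case _ => S.error("No book found") // Empty: no row with that id
}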
The next method you can use is the getReference method:
val myBook = Model.getReference(classOf[Book], someId)
This is very similar to the find method with a few key differences. First, the object that is returned is a lazy proxy for the entity. That means that no database load is required when you execute the method, although providers may at least check that the ID exists. Because this is a lazy proxy, you usually don’t want to use the returned object in a detached state unless you’ve accessed its fields while the session was open. The normal use of getReference is when you want to set up a relationship between two (or more) entities, since you don’t need to query all of the fields just to set a foreign key. For example:
myBook.author = Model.getReference(classOf[Author], authorId)
When myBook is flushed to the database the EM will correctly set up the relationship. The final difference is in how unknown entities are handled. Recall that the find method returns Empty if the entity cannot be found; with getReference, however, we don’t query the database until the reference is used. Because of this, the javax.persistence.EntityNotFoundException is thrown when you try to access an undefined entity for the first time (this also marks the transaction for rollback).
The third way to load an entity is to use a query (named or otherwise) to fetch it. As an example, here’s a query equivalent of the find method:
val myBook = 
  Model.createQuery[Book]("from Book bk where bk.id = :id")
       .setParams("id" -> someId).findOne
The advantage here is that we have more control over what is selected by using the query language to specify other properties. One caveat is that when you use the findOne method you need to ensure that the query will actually result in a unique entity; otherwise, the EM will throw a NonUniqueResultException.

12.5.3 Loading Many Entities

Corresponding to the findOne method is the findAll method, which returns all entities based on a query. There are two ways to use findAll; the first is to use the convenience findAll method defined in the ScalaEntityManager class:
val myBooks = Model.findAll("booksByYear", "year" -> myYear)
This requires the use of a named query for the first argument, and subsequent arguments are of the form ("paramName" -> value). Named queries can be defined in your orm.xml, as shown in section 12.1.2 on page 1↑. Named queries are highly recommended over ad-hoc queries since they allow you to keep the queries in one location instead of being scattered all over your code. Named queries can also be pre-compiled by the JPA provider, which will catch errors at startup (or in your unit tests, hint hint) instead of when the query is run inside your code.
The second method is to create a ScalaQuery instance directly and then set parameters and execute it. In reality this is exactly what the Model.findAll method is doing. The advantage here is that with the ScalaQuery instance you can do things like set hinting, paging, and so on. For instance, if you wanted to do paging on the books query, you could do
val myBooks = Model.createNamedQuery("booksByYear")
                   .setParams("year" -> myYear)
                   .setMaxResults(20)
                   .setFirstResult(pageOffset).findAll

12.5.4 Using Queries Wisely

In general we recommend that you use named queries throughout your code. In our experience, the extra effort involved in adding a named query is more than offset by the time it saves you if you ever need to modify the query. Additionally, we recommend that you use named parameters in your queries. Named parameters are just that: parameters that are inserted into your query by name, in contrast to positional parameters. As an example, here is the same query using named and positional parameters:
Named parameters
select user from User where (user.name like :searchString or user.email like :searchString) and user.widgets > :widgetCount
Positional parameters
select user from User where (user.name like ? or user.email like ?) and user.widgets > ?
This example shows several advantages of named parameters over positional parameters:
  1. You can reuse the same parameter within the same query and only set it once. In the example above we would have to set the same parameter value twice using positional params.
  2. The parameters can have meaningful names.
  3. With positional params you may have to edit your code if you need to alter your query to add or remove parameters.
In any case, you should generally use the parameterized query types as opposed to hand constructing your queries; using things like string concatenation opens up your site to SQL injection attacks unless you’re very careful. For more information on queries there’s an excellent reference for the EJBQL on the Hibernate website at http://www.hibernate.org/hib_docs/entitymanager/reference/en/html/queryhql.html.
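For instance, a hypothetical search snippet might be tempted to build its query by concatenation; using a bound named parameter keeps the user input out of the query string (the User entity and searchString here are just the example from above):
// Vulnerable: user input is spliced directly into the query string
val risky = Model.createQuery[User](
  "from User u where u.name like '" + searchString + "'").findAll

// Safer: let the provider bind the named parameter
val safe = Model.createQuery[User]("from User u where u.name like :searchString")
                .setParams("searchString" -> searchString).findAll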

12.5.5 Converting Collection Properties

The ScalaEntityManager and ScalaQuery methods are already defined so that they return Scala-friendly collections such as scala.collection.jcl.BufferWrapper or SetWrapper. We have to use Java Collections [T]  [T] http://java.sun.com/docs/books/tutorial/collections/index.html “under the hood” and then wrap them because JPA doesn’t understand Scala collections. For the same reason, collections in your entity classes must also use the Java Collections classes. Fortunately, Scala has a very nice framework for wrapping Java collections. In particular, the scala.collection.jcl.Conversions class contains a number of implicit conversions; all you have to do is import them at the top of your source file like so:
import scala.collection.jcl.Conversions._
Once you’ve done that, the conversions are automatically in scope and you can use the collections in your entities as if they were native Scala collections. For example, we may want to see if our Author has written any mysteries:
val suspenseful = author.books.exists(_.genre == Genre.Mystery)

12.5.6 The importance of flush() and Exceptions

It’s important to understand that in JPA the provider isn’t required to write to the database until the session closes or is flushed. That means that constraint violations aren’t necessarily checked at the time that you persist, merge or remove an object. Using the flush method forces the provider to write any pending changes to the database and immediately throw any exceptions resulting from violations. As a convenience, we’ve written the mergeAndFlush, persistAndFlush, and removeAndFlush methods to do the merge, persist or remove with a subsequent flush, as shown in listing 12.5.6↓, taken from the Author snippet code. You can also see that because we flush at this point, we can catch any JPA-related exceptions and deal with them here. If we didn’t flush at this point, the exception would be thrown when the transaction commits, which is often very far (in code) from where you would want to handle it.
Auto-flush methods
def doAdd () = {
  if (author.name.length == 0) {
    error("emptyAuthor", "The author’s name cannot be blank")
  } else {
    try {
      Model.mergeAndFlush(author)
      redirectTo("list.html")
    } catch {
      case ee : EntityExistsException => error("Author already exists")
      case pe : PersistenceException => 
        error("Error adding author"); Log.error("Error adding author", pe)
    }
  }
}
Although the combo methods simplify things, we recommend that if you’re doing multiple operations in one session cycle, you use a single flush at the end:
Multiple JPA ops
val container = Model.find(classOf[Container], containerId)
Model.remove(container.widget)
container.widget = new Widget("Foo!")
// next line only required if container.widget doesn’t cascade PERSIST
Model.persist(container.widget)
Model.flush()

12.5.7 Validating Entities

Since we’ve already covered the Mapper framework and all of the extra functionality that it provides beyond being a simple ORM, we felt that we should discuss one of the more important aspects of data handling as it pertains to JPA: validation of data.
JPA itself doesn’t come with a built-in validation framework, although the upcoming JPA 2.0 may use the JSR 303 (Bean Validation) framework as its default. Currently, Hibernate Validator is one of the more popular libraries for validating JPA entities, and can be used with any JPA provider. More information is available at the project home page: http://www.hibernate.org/412.html.
The validation of entities with Hibernate Validator is achieved, like the JPA mappings, with annotations. Listing 12.5.7↓ shows a modified Author class with validations for the name. In this case we have added a NotNull validation as well as a Length check to ensure we are within limits.
Note: Unfortunately, due to the way that the validator framework extracts entity properties, we have to rework our entity to use a getter/setter for any properties that we want to validate; even the scala.reflect.BeanProperty annotation won’t work.
Validation can be performed automatically via the org.hibernate.validator.event.JPAValidateListener EntityListener, or programmatically via the org.hibernate.validator.ClassValidator utility class. In the listing we use ClassValidator and match on the array returned from getInvalidValues for processing. Further usage and configuration is beyond the scope of this book.
The Author class with Hibernate Validations
...
class Author {
  ...
  var name : String = ""
  @Column{val unique = true, val nullable = false}
  @NotNull
  @Length{val min = 3, val max = 100}
  def getName() = name
  def setName(nm : String) { name = nm }
  ...
} 
// In the snippet class
class AuthorOps {
  ...
  val authorValidator = new ClassValidator(classOf[Author])
  def add (xhtml : NodeSeq) : NodeSeq = {
    def doAdd () = {
      authorValidator.getInvalidValues(author) match {
        case Array() =>
          try {
            Model.mergeAndFlush(author)
            ...
          } catch {
            ...
          }     
        case errors => {
          errors.foreach(err => S.error(err.toString)) 
        }      
      }
    ...
  }
}

12.6 Supporting User Types

JPA can handle any Java primitive type, their corresponding Object versions (java.lang.Long, java.lang.Integer, etc), and any entity classes composed of these types  [U]  [U] It can technically handle more; see the JPA spec, section 2.1.1 for details. Occasionally, though, you may have a requirement for a type that doesn’t fit directly with those specifications. One example in particular would be Scala’s enumerations. Unfortunately, the JPA spec currently doesn’t have a means to handle this directly, although the various JPA providers such as TopLink and Hibernate provide mechanisms for resolving custom user types. JPA does provide direct support for Java enumerations, but that doesn’t help us here since Scala enumerations aren’t an extension of Java enumerations. In this example, we’ll be using Hibernate’s UserType to support an enumeration for the Genre of a Book.
We begin by implementing a few helper classes besides the Genre enumeration itself. First, we define an Enumv trait, shown in listing G.1.3 on page 1↓. Its main purpose is to provide a valueOf method that we can use to resolve the enumeration’s database value to the actual enumeration value. We also add some extra methods so that we can encapsulate a description along with the database value. Scala enumerations can use either Ints or Strings for the identity of the enumeration value (unique to each val), and in this case we’ve chosen Strings. By adding a map for the description (since Scala enumeration values must extend the Enumeration#Value class and therefore can’t carry the additional string) we allow for the additional info. We could extend this concept to make the Map carry additional data, but for our purposes this is sufficient.
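Since the full trait lives in the appendix, here is a minimal sketch of the idea. The member names and signatures (the two-argument Value factory, valueOf returning an Option, and the description lookup) are our own simplification, so the real listing will differ in its details:
A minimal Enumv-style trait
trait Enumv extends Enumeration {
  // database value -> enumeration value, built up as members are declared
  private var valueMap = Map[String, Value]()
  // database value -> human-readable description
  private var descriptionMap = Map[String, String]()

  // declare a member with both its database (String) value and a description
  def Value(name : String, desc : String) : Value = {
    val v = Value(name) // delegate to Enumeration's standard factory
    valueMap += (name -> v)
    descriptionMap += (name -> desc)
    v
  }

  // resolve a database string back to the enumeration value, if it exists
  def valueOf(name : String) : Option[Value] = valueMap.get(name)

  // look up the description stored alongside the database value
  def describe(v : Value) : String = descriptionMap.getOrElse(v.toString, v.toString)
}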
In order to actually convert the Enumeration class into the proper database type (String, Int, etc), we need to implement the Hibernate UserType interface, shown in listing G.1.4 on page 1↓. We can see on line 18 that we will be using a varchar column for the enumeration value. Since this is based on the Scala Enumeration’s Value method, we could technically use either Integer or character types here. We override the sqlTypes and returnedClass methods to match our preferred type, and set the equals and hashCode methods accordingly. Note that in Scala, the “==” operator on objects delegates to the equals method, so we’re not testing reference equality here. The actual resolution of database column value to Enumeration is done in the nullSafeGet method; if we decided, for instance, that the null value should be returned as unknown, we could do this here with some minor modifications to the Enumv class (defining the unknown value, for one). The rest of the methods are set appropriately for an immutable object (Enumeration). The great thing about the EnumvType class is that it can easily be used for a variety of types thanks to the “et” constructor argument; as long as we mix the Enumv trait into our Enumeration objects, we get persistence essentially for free. If we decided instead to use Integer enumeration IDs, we would only need to make minor modifications to EnumvType so that the arguments match.
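The following condensed sketch shows the shape of such a UserType for a String-backed enumeration. It assumes the Enumv trait exposes a valueOf that returns an Option (as in the sketch above), so treat it as illustrative rather than the exact appendix listing:
Sketch of an EnumvType-style Hibernate UserType
package com.foo.jpaweb.model

import java.io.Serializable
import java.sql.{PreparedStatement, ResultSet, Types}
import org.hibernate.usertype.UserType

class EnumvType(val et : Enumeration with Enumv) extends UserType {
  // the enumeration's String value is stored in a varchar column
  def sqlTypes() = Array(Types.VARCHAR)
  def returnedClass() = classOf[Enumeration#Value]

  // "==" on references delegates to equals, so this is value equality
  def equals(x : Object, y : Object) : Boolean = x == y
  def hashCode(x : Object) : Int = if (x == null) 0 else x.hashCode

  // resolve the database column value back to the enumeration value
  def nullSafeGet(rs : ResultSet, names : Array[String], owner : Object) : Object = {
    val value = rs.getString(names(0))
    if (rs.wasNull || value == null) null
    else et.valueOf(value).getOrElse(null)
  }

  def nullSafeSet(st : PreparedStatement, value : Object, index : Int) {
    if (value == null) st.setNull(index, Types.VARCHAR)
    else st.setString(index, value.toString)
  }

  // Enumeration values are immutable, so copying and caching are trivial
  def deepCopy(value : Object) : Object = value
  def isMutable() = false
  def disassemble(value : Object) = value.asInstanceOf[Serializable]
  def assemble(cached : Serializable, owner : Object) : Object = cached
  def replace(original : Object, target : Object, owner : Object) = original
}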
Genre and GenreType
package com.foo.jpaweb.model
​
object Genre extends Enumeration with Enumv {
  val Mystery = Value("Mystery", "Mystery")
  val Science = Value("Science", "Science")
  val Theater = Value("Theater", "Drama literature")
  // more values here...
}
​
class GenreType extends EnumvType(Genre) {}
Finally, the Genre object and the associated GenreType are shown in listing 12.6↑. You can see that we create a singleton Genre object with specific member values for each enumeration value. The GenreType class is trivial now that we have the EnumvType class defined. To use the Genre type in our entity classes, we simply need to add the proper var and annotate it with the @Type annotation, as shown in listing 12.6↓. We need to specify the type of the var due to the fact that the actual enumeration values are of the type Enumeration.Val, which doesn’t match our valueOf method in the Enumv trait. We also want to make sure we set the enumeration to some reasonable default; in our example we have an unknown value to cover that case.
Using the @Type annotation
@Type{val `type` = "com.foo.jpaweb.model.GenreType"}
  var genre : Genre.Value = Genre.unknown

12.7 Running the Application

Now that we’ve gone over everything, it’s time to run the application. Because we’ve split up the app into separate SPA and WEB modules, we need to first run
mvn install
from the SPA module directory to get the persistence module added to your local Maven repository. Once that is done, you can go to the WEB module directory and run
mvn jetty:run
to get it started.

12.8 Summing Up

As we’ve shown in this chapter, the Java Persistence API provides a robust, flexible framework for persisting data to your database, and does so in a manner that integrates fairly well with Lift. We’ve demonstrated how you can easily write entities using a combination of annotations and the orm.xml descriptor, how to define your own custom user types to handle enumerations, how to work with transactions in various contexts, and how to leverage the ScalaJPA framework to simplify your persistence setup.

(C) 2012 Lift 2.0 Edition. Written by Derek Chen-Becker, Marius Danciu and Tyler Weir