8 The Mapper and Record Frameworks
In our experience, most webapps end up needing to store user data somewhere. Once you start working with user data, though, you start dealing with issues like coding up input forms, validation, persistence, etc. to handle the data. That’s where the Mapper and Record frameworks come in. These frameworks provide a scaffolding for all of your data manipulation needs. Mapper is the original Lift persistence framework, and it is closely tied to JDBC for its storage. Record is a newer refactoring of Mapper that is backing-store agnostic at its core, so it doesn’t matter whether you want to save your data to JDBC, JPA, or even something such as XML. With Record, selecting the proper driver will be as simple as hooking the proper traits into your class.
The Record framework is relatively new to Lift. The plan is to move to Record as the primary ORM framework for Lift sometime post-1.0. Because Record is still under active design and development, and because of its current “moving target” status, this chapter is mostly going to focus on Mapper. We will, however, provide a few comparative examples of Record functionality to give you a general feel for the flavor of the changes. In any case, Mapper will not go away even when Record comes out, so you can feel secure that any code using Mapper will be viable for quite a while.
8.1 Introduction to Mapper and MetaMapper
Let’s start by discussing the relationship between the Mapper and MetaMapper traits (and the corresponding Record and MetaRecord). Mapper provides the per-instance functionality for your class, while MetaMapper handles the global operations for your class and provides a common location to define per-class static specializations of things like field order, form generation, and HTML representation. In fact, many of the Mapper methods actually delegate to methods on MetaMapper. In addition to Mapper and MetaMapper, there is a third trait, MappedField, that provides the per-field functionality for your class. In Record, the trait is simply called “Field”. The MappedField trait lets you define the individual validators as well as filters to transform the data and the field name. Under Record, Field adds some functionality such as tab order and default error messages for form input handling.
8.1.1 Adding Mapper to Your Project
Since Mapper is a separate module, you need to add the following dependency to your pom.xml to access it:
Mapper POM Dependency
<project ...>
  ...
  <dependencies>
    ...
    <dependency>
      <groupId>net.liftweb</groupId>
      <artifactId>lift-mapper</artifactId>
      <version>1.0</version> <!-- or 1.1-SNAPSHOT, etc -->
    </dependency>
  </dependencies>
  ...
</project>
You’ll also need the following import in any Scala code that uses Mapper:
Mapper Imports
import _root_.net.liftweb.mapper._
8.1.2 Setting Up the Database Connection
The first thing you need to do is to define the database connection. We do this by defining an object called DBVendor (but you can call it whatever you want). This object extends the net.liftweb.mapper.ConnectionManager trait and must implement two methods: newConnection and releaseConnection. You can make this as sophisticated as you want, with pooling, caching, etc., but for now, Listing 8.1.2 shows a basic implementation to set up a PostgreSQL driver.
... standard Lift imports ...
import _root_.net.liftweb.mapper._
import _root_.java.sql._

object DBVendor extends ConnectionManager {
  // Force load the driver
  Class.forName("org.postgresql.Driver")

  // define methods
  def newConnection(name : ConnectionIdentifier) = {
    try {
      Full(DriverManager.getConnection(
        "jdbc:postgresql://localhost/mydatabase",
        "root", "secret"))
    } catch {
      case e : Exception => e.printStackTrace; Empty
    }
  }
  def releaseConnection (conn : Connection) { conn.close }
}

class Boot {
  def boot {
    ...
    DB.defineConnectionManager(DefaultConnectionIdentifier, DBVendor)
  }
}
A few items to note:
-
The name parameter for newConnection can be used if you need to have connections to multiple distinct databases. One specialized case of this is when you’re doing DB sharding (horizontal scaling). Multiple database usage is covered in more depth in Section 8.3.1.
-
The newConnection method needs to return a Box[java.sql.Connection]. Returning Empty indicates failure.
-
The releaseConnection method exists so that you have complete control over the lifecycle of the connection. For instance, if you were doing connection pooling yourself you would return the connection to the available pool rather than closing it.
-
The DB.defineConnectionManager call is what binds our manager into Mapper. Without it your manager will never get called.
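Since releaseConnection gives you full lifecycle control, a simple pool can be layered on top of the same trait. The sketch below is illustrative only: the pool size and the synchronization strategy are our own assumptions, not part of Lift, and a real application would more likely delegate to a pooling library such as c3p0 or DBCP.

```scala
import _root_.net.liftweb.mapper._
import _root_.net.liftweb.util._
import _root_.java.sql._

// A minimal, hand-rolled pooling ConnectionManager sketch.
object PooledVendor extends ConnectionManager {
  Class.forName("org.postgresql.Driver")

  private val maxPoolSize = 5                 // assumed pool size
  private var pool : List[Connection] = Nil   // idle connections

  def newConnection(name : ConnectionIdentifier) : Box[Connection] = synchronized {
    pool match {
      // Reuse an idle connection if one is available
      case conn :: rest => pool = rest; Full(conn)
      // Otherwise open a fresh one
      case Nil =>
        try {
          Full(DriverManager.getConnection(
            "jdbc:postgresql://localhost/mydatabase",
            "root", "secret"))
        } catch {
          case e : Exception => e.printStackTrace; Empty
        }
    }
  }

  // Return the connection to the pool instead of closing it,
  // unless the pool is already full
  def releaseConnection(conn : Connection) : Unit = synchronized {
    if (pool.length < maxPoolSize) pool = conn :: pool
    else conn.close
  }
}
```

The binding in Boot is unchanged: DB.defineConnectionManager(DefaultConnectionIdentifier, PooledVendor).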
8.1.3 Constructing a Mapper-enabled Class
Now that we’ve covered some basic background, we can start constructing some Mapper classes to get more familiar with the framework. We’ll start with a simple example of a class for an expense transaction from our PocketChange application with the following fields:
-
Date
-
Description: a string with a max length of 100 chars
-
Amount: a decimal value with a precision of 16 digits and two decimal places
-
A reference to the Account that owns the transaction
Given these requirements we can declare our Expense class as shown in Listing 8.1.3.
import _root_.java.math.MathContext

class Expense extends LongKeyedMapper[Expense] with IdPK {
  def getSingleton = Expense

  object dateOf extends MappedDateTime(this)
  object description extends MappedString(this,100)
  object amount extends MappedDecimal(this, MathContext.DECIMAL64, 2)
  object account extends MappedLongForeignKey(this, Account)
}
For comparison, the Record version is shown in Listing 8.1.3. This example already shows some functionality that hasn’t been ported over to Record from Mapper; among other things, the IdPK trait and foreign key fields (many-to-one mappings) are missing. The other minor differences are that the getSingleton method has been renamed to meta, and the Field traits use different names under the Record framework (i.e. DateTimeField vs MappedDateTime).
import _root_.java.math.MathContext
import _root_.net.liftweb.record._

class Expense extends KeyedRecord[Expense,Long] {
  def meta = Expense

  def primaryKey = id

  object id extends LongField(this) with KeyField[Long,Expense]
  object dateOf extends DateTimeField(this)
  object description extends StringField(this, 100)
  object amount extends DecimalField(this, MathContext.DECIMAL64, 2)
  object account extends LongField(this)
}
As you can see, we’ve set Expense to extend the LongKeyedMapper and IdPK traits and we’ve added the fields required by our class. We would like to provide a primary key for our entity; while not strictly necessary, having a synthetic primary key often helps with CRUD operations. The LongKeyedMapper trait accomplishes two objectives: it tells Lift that we want a primary key defined and that the key should be a Long. This is basically a shortcut for using the KeyedMapper[Long,Expense] trait. When you use the KeyedMapper trait you need to provide an implementation for the primaryKeyField def, which must match the type of the KeyedMapper trait and be a subtype of IndexedField. The IdPK trait handles the implementation, but note that IdPK currently only supports Long keys. Mapper supports both indexed Longs and Strings, so if you want Strings you’ll need to explicitly use KeyedMapper[String,...] and provide the field definition yourself. It’s possible to use some other type for your primary key, but you’ll need to roll your own (Section 8.2.7). Technically Int indexes are supported as well, but there is no corresponding trait for an Int foreign key. That means that if you use an Int for the primary key, you may not be able to add a relationship to another object (Section 8.1.4), unless you write your own. Record is a little more flexible in primary key selection because it uses, in effect, a marker trait (KeyField) to indicate that a particular field is a key field. One thing to note is that in the Mapper framework, the table name for your entity defaults to the name of the class (Expense, in our case). If you want to change this, then you just need to override the dbTableName def in your MetaMapper object.
Looking at these examples, you’ve probably noticed that the fields are defined as objects rather than instance members (vars). The basic reason for this is that the MetaMapper needs access to fields for its validation and form functionality; it would be more difficult to cleanly define these properties in the MetaMapper if it had to access member vars on each instance, since a MetaMapper instance is itself an object. Also note that MappedDecimal is a custom field type, which we’ll cover in Section 8.2.7.
In order to tie all of this together, we need to define a matching LongKeyedMetaMapper object as the singleton for our entity, as shown in Listing 8.1.3. The Meta object (whether MetaMapper or MetaRecord) is where you define most behavior that is common across all of your instances. In our examples, we’ve decided to name the meta object and instance class the same. We don’t feel that this is unclear because the two together are what really define the ORM behavior for a “type.”
object Expense extends Expense with LongKeyedMetaMapper[Expense] {
  override def fieldOrder = List(dateOf, description, amount)
}
In this instance, we’re simply defining the order of fields as they’ll be displayed in XHTML and forms by overriding the fieldOrder method. The default behavior is an empty list, which means that no fields are involved in display or form generation; since that’s rarely what you want, you will generally want to override fieldOrder. If you don’t want a particular field to show up in forms or XHTML output, simply omit it from the fieldOrder list.
Because fields aren’t actually instance members, operations on them are slightly different than with a regular var. The biggest difference is how we set fields: we use the apply method. In addition, field access can be chained so that you can set multiple field values in one statement, as shown in Listing 8.1.3:
myEntry.dateOf(new Date).description("A sample entry")
myEntry.amount(BigDecimal("127.20"))
The underlying value of a given field can be retrieved with the is method (the value method in Record) as shown in Listing 8.1.3.
Accessing Field Values in Record
// mapper
val tenthOfAmount = myEntry.amount.is / 10
val formatted = String.format("%s : %s",
myEntry.description.is,
myEntry.amount.is.toString)
// record
if (myEntry.description.value == "Doughnuts") {
println("Diet ruined!")
}
8.1.4 Object Relationships
Often it’s appropriate to have relationships between different entities. The archetypical example of this is the parent-child relationship. In SQL, a relationship can be defined with a foreign key that associates one table to another based on the primary key of the associated table. As we showed in Listing 8.1.3, there is a corresponding MappedForeignKey trait, with concrete implementations for Long and String foreign keys. Once we have this defined, accessing the object via the relationship is achieved by using the obj method on the foreign key field. Note that the obj method returns a Box, so you need to do some further processing with it before you can use it. With the foreign key functionality you can easily do one-to-many and many-to-one relationships (depending on where you put the foreign key). One-to-many relationships can be achieved using helper methods on the “one” side that delegate to queries. We’ll cover queries in a moment, but Listing 8.1.4 shows examples of two sides of the same relationship.
Accessing Foreign Objects
class Expense extends LongKeyedMapper[Expense] with IdPK {
  ...
  object account extends MappedLongForeignKey(this, Account)

  def accountName =
    Text("My account is " + (account.obj.map(_.name.is) openOr "Unknown"))
}

class Account ... {
  ...
  def entries = Expense.findAll(By(Expense.account, this.id))
}
If you want to do many-to-many mappings you’ll need to provide your own “join” class with foreign keys to both of your mapped entities. An example would be if we wanted to have tags (categories) for our ledger entries and wanted to be able to have a given entry have multiple tags (e.g., you purchase a book for your mother’s birthday, so it has the tags Gift, Mom, and Books). First we define the Tag entity, as shown in Listing 8.1.4.
class Tag extends LongKeyedMapper[Tag] with IdPK {
  def getSingleton = Tag
  object name extends MappedString(this,100)
}

object Tag extends Tag with LongKeyedMetaMapper[Tag] {
  override def fieldOrder = List(name)
}
Next, we define our join entity, as shown in Listing 8.1.4. It’s a LongKeyedMapper just like the rest of the entities, but it only contains foreign key fields to the other entities.
class ExpenseTag extends LongKeyedMapper[ExpenseTag] with IdPK {
  def getSingleton = ExpenseTag
  object tag extends MappedLongForeignKey(this,Tag)
  object expense extends MappedLongForeignKey(this,Expense)
}

object ExpenseTag extends ExpenseTag with LongKeyedMetaMapper[ExpenseTag] {
  def join (tag : Tag, tx : Expense) =
    this.create.tag(tag).expense(tx).save
}
To use the join entity, you’ll need to create a new instance and set the appropriate foreign keys to point to the associated instances. As you can see, we’ve defined a convenience method on our ExpenseTag meta object to do just that. To make the many-to-many accessible as a field on our entities, we can use the HasManyThrough trait, as shown in Listing 8.1.4.
HasManyThrough for Many-to-Many Relationships
class Expense ... {
  object tags extends HasManyThrough(this, Tag,
    ExpenseTag, ExpenseTag.tag, ExpenseTag.expense)
}
A similar field could be set up on the Tag entity to point to entries. It’s important to note a few items:
-
The only way to add new entries is to directly construct the ExpenseTag instances and save them (either directly or via a helper method). You can’t make any modifications via the HasManyThrough trait.
-
Although the field is defined as a query, the field is actually lazy and only runs once. That means that if you query it and then add some new ExpenseTag instances, they won’t show up in the field contents.
If you want a way to retrieve the joined results such that it pulls fresh from the database each time, you can instead define a helper join method as shown in Section 8.1.11.
8.1.5 Indexing
It’s often helpful to add indexes to a database to improve performance. Mapper makes it easy to do most simple indexing simply by overriding the dbIndexed_? def on the field. Listing 8.1.5 shows how we would add an index to our Expense.account field.
class Expense ... {
  object account extends ... {
    override def dbIndexed_? = true
  }
}
Mapper provides for more complex indexing via the MetaMapper.dbIndexes def combined with the Index, IndexField and BoundedIndexField case classes. Listing 8.1.5 shows some examples of how we might create more complex indices.
object Expense extends ... {
  // equivalent to the previous listing
  override def dbIndexes = Index(IndexField(account)) :: Nil

  // or, equivalent to "create index ... on Expense (account, description(10))"
  // (note: define only one dbIndexes override in a real object)
  override def dbIndexes = Index(IndexField(account),
    BoundedIndexField(description,10)) :: Nil
}
8.1.6 Schema Mapping
The Mapper framework makes it easy not only to define domain objects, but also to create the database schema to go along with those objects. The Schemifier object is what does all of the work for you: you simply pass in the MetaMapper objects that you want the schema created for and it does the rest. Listing 8.1.6 shows how we could use Schemifier to set up the database for our example objects. The first argument controls whether an actual write will be performed on the database. If false, Schemifier will log all of the DDL statements that it would like to apply, but no changes will be made to the database. The second argument is a logging function (logging is covered in Appendix E). The remaining arguments are the MetaMapper objects that you would like to have schemified. You need to be careful to remember to include all of the objects, otherwise the tables won’t be created.
Schemifier.schemify(true, Log.infoF _, User, Expense, Account, Tag, ExpenseTag)
As we mentioned in Section 8.1.3, you can override the default table name for a given Mapper class via the dbTableName def in the corresponding MetaMapper. The default table name is the name of the Mapper class, except when the class name is also an SQL reserved word; in this case, a “_t” is appended to the table name. You can also override individual column names on a per-field basis by overriding the dbColumnName def in the field itself. Like tables, the default column name for a field will be the same as the field name as long as it’s not an SQL reserved word; in this case a “_c” is appended to the column name. Listing 8.1.6 shows how we could make our ExpenseTag.expense field map to “expense_id”.
Setting a Custom Column Name
class ExpenseTag ... {
  object expense extends ... {
    override def dbColumnName = "expense_id"
  }
}
8.1.7 Persistence Operations on an Entity
Now that we’ve defined our entity we probably want to use it in the real world to load and store data. There are several operations on MetaMapper that we can use:
create Creates a new instance of the entity
save Saves an instance to the database
delete Deletes the given entity instance
count Returns the number of instances of the given entity. An optional query criteria list can be used to narrow the entities being counted
countByInsecureSQL Similar to count, except a raw SQL string can be used to perform the count. The count value is expected to be in the first column and row of the returned result set. An example would be
Expense.countByInsecureSQL("select count(amount) " +
  "from Expense where amount > 20", ...)
We’ll cover the IHaveValidatedThisSQL parameter in a moment.
There are also quite a few methods available for retrieving instances from the database. Each of these methods comes in two varieties: one that uses the default database connection, and one that allows you to specify the connection to use (Section 8.3.1). The latter typically has “DB” appended to the method name. The query methods on MetaMapper are:
findAll Retrieves a list of instances from the database. The method is overloaded to take an optional set of query criteria parameters; these will be covered in detail in their own section, 8.1.8.
findAllByInsecureSQL Retrieves a list of instances based on a raw SQL query. The query needs to return columns for all mapped fields. Usually you can use the BySQL QueryParameter to cover most of the same functionality.
findAllByPreparedStatement Similar to findAllByInsecureSQL except that prepared statements are used, which usually means that the driver will handle properly escaping arguments in the query string.
findAllFields This allows you to do a normal query returning only certain fields from your Mapper instance. For example, if you only wanted the amount from the transaction table you would use this method. Note that any fields that aren’t specified in the query will return their default value. Generally, this method is only useful for read access to data because saving any retrieved instances could overwrite real data.
findMap* These methods provide the same functionality as the non-Map methods, but take an extra function argument that transforms an entity into a Box[T], where T is an arbitrary type. An example would be getting a list of the descriptions of our expenses.
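A quick sketch of such a description query follows; the exact argument grouping of findMap is from our reading of the Mapper API, so treat it as illustrative rather than definitive.

```scala
// Collect just the description strings for one account's expenses.
// Entities for which the function returns Empty are dropped from the result.
val descriptions : List[String] =
  Expense.findMap(By(Expense.account, myAccount.id)) {
    expense => Full(expense.description.is)
  }
```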
The KeyedMetaMapper trait adds the find method, which can be used to locate a single entity based on its primary key. In general these operations will be supported in both Record and Mapper. However, because Record isn’t coupled tightly to a JDBC backend, some of the find methods may not be supported directly and there may be additional methods not available in Mapper for persistence. For this reason, this section will deal specifically with Mapper’s persistence operations.
Creating an Instance
Once we have a MetaMapper object defined we can use it to create objects using the create method. You generally don’t want to use the “new” operator because the framework has to set up internal data for the instance such as field owner, etc. This is important to remember, since nothing will prevent you from creating an instance manually: you may just get errors when you go to use the instance. The join method in Listing 8.1.4 shows an example of create usage.
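As a standalone sketch (the field values here are arbitrary), creating and populating an instance looks like this:

```scala
import _root_.java.util.Date

// Obtain a fresh, properly initialized instance from the meta object,
// then set its fields with the chained apply syntax.
val lunch = Expense.create
  .dateOf(new Date)
  .description("Team lunch")
  .amount(BigDecimal("42.50"))

// save returns false if the persist fails
if (!lunch.save) S.error("Couldn't save the expense!")
```

Because each field's apply method returns the owning instance, the calls chain naturally, exactly as in the earlier field-setting listing.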
Saving an Instance
Saving an instance is as easy as calling the save method on the instance you want to save. Optionally, you can call the save method on the Meta object, passing in the instance you want to save. The save method uses the saved_? and clean_? flags to determine whether an insert or update is required to persist the current state to the database, and returns a boolean to indicate whether the save was successful or not. The join method in Listing 8.1.4 shows an example of save usage.
Deleting an Instance
There are several ways to delete instances. The simplest way is to call the delete_! method on the instance you’d like to remove. An alternative is to call the delete_! method on the Meta object, passing in the instance to delete. In either case, the delete_! method returns a boolean indicating whether the delete was successful or not. The following listing shows an example of deleting instances.
if (! myExpense.delete_!) S.error("Couldn't delete the expense!")
//or
if (! (Expense delete_! myExpense)) S.error(...)
Another approach to deleting entities is to use the bulkDelete_!! method on MetaMapper. This method allows you to specify query parameters to control which entities are deleted. We will cover query parameters in Section 8.1.8 (an example is in Listing 8.1.9).
8.1.8 Querying for Entities
There are a variety of methods on MetaMapper for querying for instances of a given entity. The simplest method is findAll called with no parameters. The “bare” findAll returns a List of all of the instances of a given entity loaded from the database. Note that each findAll... method has a corresponding method that takes a database connection for sharding or multiple database usage (see Section 8.3.1). Of course, for all but the smallest datasets, pulling the entire model to get one entity from the database is inefficient and slow. Instead, the MetaMapper provides “flag” objects to control the query.
The ability to use fine-grained queries to select data is a fundamental feature of relational databases, and Mapper provides first-class support for constructing queries in a manner that is not only easy to use, but type-safe. This means that you can catch query errors at compile time instead of runtime. The basis for this functionality is the QueryParam trait, which has several concrete implementations that are used to construct the actual query. The QueryParam implementations can be broken up into two main groups:
-
Comparison - These are typically items that would go in the where clause of an SQL query. They are used to refine the set of instances that will be returned
-
Control - These are items that control things like sort order and pagination of the results
Although Mapper provides a large amount of the functionality in SQL, some features are not covered directly or at all. In some cases we can define helper methods to make querying easier, particularly for joins (Section 8.1.11).
8.1.9 Comparison QueryParams
The simplest QueryParam to refine your query is the By object and its related objects. By is used for a direct value comparison of a given field: essentially an “=” in SQL. For instance, Listing 8.1.9 shows how we can get all of the expenses for a given account.
val myEntries = Expense.findAll(By(Expense.account, myAccount.id))
Note that our By criterion is comparing the Expense.account field to the primary key (id field) of our account instead of to the account instance itself. This is because the Expense.account field is a MappedForeignKey field, which uses the type of the key instead of the type of the entity as its underlying value. In this instance, that means that any queries using Expense.account need to use a Long to match the underlying type. Besides By, the other basic clauses are:
-
NotBy - Selects entities whose queried field is not equal to the given value
-
By_> - Selects entities whose queried field is larger than the given value
-
By_< - Selects entities whose queried field is less than the given value
-
ByList - Selects entities whose queried field is equal to one of the values in the given List. This corresponds to the “field IN (x,y,z)” syntax in SQL
-
NullRef - Selects entities whose queried field is NULL
-
NotNullRef - Selects entities whose queried field is not NULL
-
Like - Selects entities whose queried field is like the given string. As in SQL, the percent sign is used as a wildcard
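As a quick illustration of combining several of these clauses in one query (the literal amounts, pattern, and account ids are arbitrary):

```scala
// Expenses over 100 whose description starts with "Gift",
// limited to two specific accounts.
val bigGifts = Expense.findAll(
  By_>(Expense.amount, BigDecimal("100.00")),
  Like(Expense.description, "Gift%"),
  ByList(Expense.account, List(1L, 2L)))
```

Multiple comparison params passed to findAll are combined as an SQL AND, as the pagination examples later in this section also show.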
In addition to the basic clauses there are some slightly more complex ways to control the query. The first of these is ByRef, which selects entities whose queried field is equal to the value of another query field on the same entity. A contrived example would be if we define a tree structure in our table and root nodes are marked as having themselves as parents:
// select all root nodes from the forest
TreeNode.findAll(ByRef(TreeNode.parent,TreeNode.id))
The related NotByRef tests for inequality between two query fields.
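Continuing the contrived tree example, NotByRef would select everything except the roots:

```scala
// select every node that is not its own parent, i.e. all non-root nodes
val children = TreeNode.findAll(NotByRef(TreeNode.parent, TreeNode.id))
```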
Getting slightly more complex, we come to the In QueryParam, which is used just like an “IN” clause with a subselect in an SQL statement. For example, let’s say we wanted to get all of the entries that belong to tags that start with the letter “c”. Listing 8.1.9 shows the full breakdown.
val cExpenses =
  ExpenseTag.findAll(
    In(ExpenseTag.tag,
       Tag.id,
       Like(Tag.name, "c%"))).map(_.expense.obj.open_!).removeDuplicates
Note that we use the List.removeDuplicates method to make sure that the List contains unique entities. This requires overriding the equals and hashCode methods on the Expense class, which we show in Listing 8.1.9. In our example we’re using the primary key (id field) to define object “identity”.
Overriding equals and hashCode on the Expense entity
class Expense ... {
  ...
  override def equals (other : Any) = other match {
    case e : Expense if e.id.is == this.id.is => true
    case _ => false
  }
  override def hashCode = this.id.is.hashCode
  ...
}
We use the ByRef params to do the join between the many-to-many entity on the query. Related to In is InRaw, which allows you to specify your own SQL subquery for the “IN” portion of the where clause. Listing 8.1.9 shows an example of how we could use InRaw to find Tags for expense entries made in the last 30 days.
def recentTags = {
  val joins = ExpenseTag.findAll(
    InRaw(ExpenseTag.expense,
      "select id from Expense where dateOf > (CURRENT_DATE - interval '30 days')",
      IHaveValidatedThisSQL("dchenbecker", "2008-12-03")))
  joins.map(_.expense.obj.open_!).removeDuplicates
}
Here things are starting to get a little hairy. The InRaw only allows us to specify the subquery for the IN clause, so we have to do some postprocessing to get unique results. If you want to do this in the query itself you’ll have to use the findAllByInsecureSql or findAllByPreparedStatement methods, which are covered later in this section. The final parameter for InRaw, IHaveValidatedThisSQL, acts as a code audit mechanism that says that someone has checked the SQL to make sure it’s safe to use. The query fragment is added to the master query as-is: no escaping or other filtering is performed on the string. That means that if you take user input, then you need to be very careful about it or you run the risk of an SQL injection attack on your site.
The next QueryParam we’ll cover is BySql, which lets you use a complete SQL fragment that gets put into the where clause. An example of this would be if we want to find all expense entries within the last 30 days, as shown in Listing 8.1.9. Again, the IHaveValidatedThisSQL case class is required as a code audit mechanism to make sure someone has verified that the SQL used is safe.
val recentEntries = Expense.findAll(
  BySql("dateOf > (CURRENT_DATE - interval '30 days')",
    IHaveValidatedThisSQL("dchenbecker","2008-12-03")))
The tradeoff with using BySql is that you need to be careful with what you allow into the query string. BySql supports parameterized queries, as shown in Listing 8.1.9, so use those if you need to have dynamic queries. Whatever you do, don’t use string concatenation unless you really know what you’re doing.
val amountRange = Expense.findAll(
BySql("amount between ? and ?", lowVal, highVal))
As we mentioned earlier, we can use the query parameters to do bulk deletes in addition to querying for instances. Simply use the QueryParam classes to constrain what you want to delete. Obviously, the control params that we’ll cover next make no sense in this context, but the compiler won’t complain. Listing 8.1.9 shows an example of deleting all entries older than a certain date.
def deleteBefore (date : Date) =
Expense.bulkDelete_!!(By_<(Expense.dateOf, date))
8.1.10 Control QueryParams
Now that we’ve covered the selection and comparison QueryParams, we can start to look at the control params. The first one that we’ll look at is OrderBy. This operates exactly like the order by clause in SQL, and allows you to sort on a given field in either ascending or descending order. Listing 8.1.10 shows an example of ordering our Expense entries by amount. The Ascending and Descending case objects are in the net.liftweb.mapper package. The OrderBySql case class operates similarly, except that you provide your own SQL fragment for the ordering, as shown in the example. Again, you need to validate this SQL.
val cheapestFirst =
  Expense.findAll(OrderBy(Expense.amount, Ascending))
// or
val cheapestFirst =
  Expense.findAll(OrderBySql("amount asc",
    IHaveValidatedThisSQL("dchenbecker", "2008-12-03")))
Pagination of results is another feature that people often want to use, and Mapper provides a simple means for controlling it with two more QueryParam classes: StartAt and MaxRows, as shown in Listing 8.1.10. In this example, we take the offset from a parameter passed to our snippet, with a default of zero.
val offset = S.param("offset").map(_.toLong) openOr 0
Expense.findAll(StartAt(offset), MaxRows(20))
An important feature of the methods that take QueryParams is that they can take multiple params, as shown in this example. A more complex example is shown in Listing 8.1.10. In this example, we’re querying with a Like clause, sorting on the date of the entries, and paginating the results, all in one statement!
Expense.findAll(Like(Expense.description, "Gift for%"),
OrderBy(Expense.dateOf,Descending),
StartAt(offset),
MaxRows(pageSize))
Another useful QueryParam is the Distinct case class, which acts exactly the same way as the DISTINCT keyword in SQL. One caveat is that Mapper doesn’t support explicit joins, so this restricts the situations in which you can use Distinct. The final “control” QueryParam that we’ll cover is PreCache. It’s used when you have a mapped foreign key field on an entity. Normally, when Mapper loads your main entity it leaves the foreign key field in a lazy state, so that the query to get the foreign object isn’t executed until you access the field. This can obviously be inefficient when you have many entities loaded that you need to access, so the PreCache parameter forces Mapper to preload the foreign objects as part of the query. Listing 8.1.10 shows how we can use PreCache to fetch an Expense entry as well as the account for the entry.
def loadExpensePlusAccount (id : Long) =
  Expense.findAll(By(Expense.id, id),
    PreCache(Expense.account))
8.1.11 Making Joins a Little Friendlier
If you prefer to keep your queries type-safe, but you want a little more convenience in your joins between entities, you can define helper methods on your entities. One example is finding all of the tags for a given
Expense, as shown in Listing
1↓. Using this method in our example has an advantage over using
HasManyThrough:
hasManyThrough is a lazy value that will only retrieve data from the database once per request. Using a
findAll will retrieve data from the database every time. This may be important if you add data to the database during a request, or if you expect things to change between queries.
def tags =
ExpenseTag.findAll(By(ExpenseTag.expense, this.id)).map(_.tag.obj.open_!)
8.2 Utility Functionality
In addition to the first-class persistence support in Mapper and Record, the frameworks provide additional functionality to make writing data-driven applications much simpler. This includes things such as automatic XHTML representation of objects and support for generating everything from simple forms for an individual entity to a full-fledged CRUD implementation for your entities.
8.2.1 Display Generation
If you want to display a Mapper instance as XHTML, simply call the
asHtml method (
toXHtml in Record) on your instance. The default implementation turns each field’s value into a
Text node via the
toString method and concatenates the results separated by newlines. If you want to change this behavior, override the
asHtml on your field definitions. For example, if we wanted to control formatting on our
dateOf field, we could modify the field as shown in Listing
8.2.1↓.
import _root_.java.text.DateFormat
...
object dateOf extends MappedDateTime(this) {
final val dateFormat =
DateFormat.getDateInstance(DateFormat.SHORT)
override def asHtml = Text(dateFormat.format(is))
}
Note that in Record, dateOf contains a java.util.Calendar instance and not a
java.util.Date, so we would need to use the getTime method on the value. Two similar methods, asJSON and asJs, will return the JSON and JavaScript object representation of the instance, respectively.
8.2.2 Form Generation ↓
One of the biggest pieces of functionality in the Mapper framework is the ability to generate entry forms for a given record. The
toForm↓ method on Mapper is overloaded so that you can control how your form is created. All three
toForm methods on Mapper take a
Box[String] as their first parameter to control the submit button; if the Box is Empty, no submit button is generated; otherwise, the String contents of the Box are used as the button label. If you opt to skip the submit button you’ll need to provide it yourself via binding or some other mechanism, or you can rely on implicit form submission (when the user hits enter in a text field, for instance). The first
toForm method simply takes a function to process the submitted form and returns the XHTML as shown in Listing
8.2.2↓:
myEntry.toForm(Full("Save"), { _.save })
As you can see, this makes it very easy to generate a form for editing an entity. The second
toForm method allows you to provide a URL which the Mapper will redirect to if validation succeeds on form submission (this is not provided in Record). This can be used for something like a login form, as shown in Listing
8.2.2↓:
myEntry.toForm (Full("Login"), "/member/profile")
The third form of the toForm method is similar to the first form, with the addition of a “redo” snippet parameter. This allows you to keep the current state of the snippet when validation fails so that the user doesn’t have to re-enter all of the data in the form.
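As a sketch (assuming the overload's parameter order of button label, redo function, and success callback; check the Mapper source for your Lift version for the exact signature), it might be used like this:

```scala
import scala.xml.NodeSeq
import net.liftweb.common.Full

// Hypothetical snippet method. The "redo" function receives the
// re-rendered form so the user's input survives a validation failure.
def addEntry(xhtml : NodeSeq) : NodeSeq =
  myEntry.toForm(Full("Save"),
                 (form : NodeSeq) => form,  // redo: redisplay in place
                 (e : Expense) => e.save)
```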
The Record framework allows for a little more flexibility in controlling form output. The MetaRecord object allows you to change the default template
↓ that the form uses by setting the formTemplate
↓ var. The template may contain any XHTML you want, but the
toForm method will provide special handling for the following tags:
<lift:field_label name="..." /> The label for the field with the given name will be rendered here.
<lift:field name="..." /> The field itself (specified by the given name) will be rendered here. Typically this will be an input field, although it can be anything type-appropriate. For example, a BooleanField would render a checkbox.
<lift:field_msg name="..." /> Any messages, such as from validation, for the field with the given name will be rendered here.
As an example, if we wanted to use tables to lay out the form for our ledger entry, the row for the description field might look like that in Listing
8.2.2↓:
<!-- Example description field row for Record’s toForm method -->
<tr>
<th><lift:field_label name="description" /></th>
<td><lift:field name="description" />
<lift:field_msg name="description" /></td>
</tr>
Technically, the
field_msg binding looks up Lift messages (Chapter
B↓) based on the field’s
uniqueId, so you can set your own messages outside of validation using the S.{error, notice, warning} methods as shown in Listing
8.2.2↓:
S.warning(myEntry.amount.uniqueFieldId,
"You have entered a negative amount!")
S.warning("amount_id", "This is brittle")
For most purposes, though, using the validation mechanism discussed in the next section is the appropriate way to handle error checking and reporting.
8.2.3 Validation
Validation is the process of checking a field during form processing to make sure that the submitted value meets requirements. This can be something as simple as ensuring that a value was submitted, or as complex as comparing multiple field values together. Validation is achieved via a
List of functions on a field that take the field value as input and return a
List[FieldError] (
Box[Node] in Record). To indicate that validation succeeded, simply return an empty List, otherwise the list of
FieldErrors you return are used as the failure messages to be presented to the user. A
FieldError is simply a case class that associates an error message with a particular field. As an example, let’s say we don’t want someone to be able to add an
Expense entry for a date in the future. First, we need to define a function for our
dateOf field that takes a
Date as an input (For Record,
java.util.Calendar↓, not
Date, is the actual value type of
DateTimeField) and returns the proper
List. We show a simple function in Listing
8.2.3↓. In the method, we simply check to see if the millisecond count is greater than “now” and return an error message if so.
import _root_.java.util.Date
class Expense extends LongKeyedMapper[Expense] with IdPK {
...
object dateOf extends MappedDateTime(this) {
def noFutureDates (time : Date) = {
if (time.getTime > System.currentTimeMillis) {
List(FieldError(this, "You cannot make future expense entries"))
} else {
List[FieldError]()
}
}
}
...
}
The first argument for the FieldError is the field itself, so you could use the alternate definition shown in Listing
8.2.3↓ if you would prefer to define your validation functions elsewhere (if they’re common to more than one entity, for example).
Alternate Date Validation
import _root_.java.util.Date
import _root_.net.liftweb.http.FieldIdentifier
object ValidationMethods {
def noFutureDates (field : FieldIdentifier)(time : Date) = {
if (time.getTime > System.currentTimeMillis) {
List(FieldError(field, "You cannot make future expense entries"))
} else {
List[FieldError]()
}
}
...
}
The next step is to tie the validation into the field itself. We do this by slightly modifying our field definition for
date to set our list of validators as shown in Listing
8.2.3↓:
object dateOf extends MappedDateTime(this) {
def noFutureDates (time : Date) = { ... }
override def validations = noFutureDates _ :: Nil
}
// Using the alternate definition:
object dateOf extends MappedDateTime(this) {
override def validations = ValidationMethods.noFutureDates(dateOf) _ :: Nil
}
Note that we need to add the underscore for each validation function so that it is partially applied to the submitted value. When our form is submitted, all of the validators for each field are run; if every validator returns an empty List (an Empty Box in Record), validation succeeds. Otherwise, the returned FieldErrors (the contents of a Full Box in Record) are displayed as error messages to the user.
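The trailing underscore is ordinary Scala partial application (eta-expansion), not Mapper magic. A standalone illustration of the same pattern, with no Lift dependencies:

```scala
// A validator-style method: takes a value, returns a list of error messages
def noNegatives(amount : Int) : List[String] =
  if (amount < 0) List("Amount cannot be negative") else Nil

// The underscore converts the method into a function value, so it can be
// stored in a list and applied later, just as Mapper does with validations
val validations : List[Int => List[String]] = noNegatives _ :: Nil

val errors = validations.flatMap(v => v(-5))
// errors == List("Amount cannot be negative")
```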
8.2.4 CRUD Support
Adding CRUD support to your Mapper classes is very simple. We just mix in the
net.liftweb.mapper.CRUDify trait to our meta object and it provides a full set of add, edit, list, delete and view pages automatically. Listing
8.2.4↓ shows our
Expense meta object with
CRUDify mixed in.
object Expense extends Expense with LongKeyedMetaMapper[Expense]
with CRUDify[Long,Expense] {
... normal def here ...
// disable delete functionality
override def deleteMenuLoc = Empty
}
The CRUDify behavior is very flexible, and you can control the templates for pages or whether pages are shown at all (as we do in our example) by overriding defs that are provided on the CRUDify trait. In our example Listing
8.2.4↑, we disable the delete menu by overriding the
deleteMenuLoc method to return
Empty. As an added bonus, CRUDify automatically creates a set of menus for SiteMap (Chapter
7↑) that we can use by appending them onto the rest of our menus as shown in Listing
8.2.4↓.
class Boot {
def boot {
...
val menus = ... Menu(Loc(...)) :: Expense.menus
LiftRules.setSiteMap(SiteMap(menus : _*))
}
}
8.2.5 Lifecycle Callbacks
Mapper and Record provide for a set of callbacks that allow you to perform actions at various points during the lifecycle of a given instance. If you want to define your own handling for one of the lifecycle events, all you need to do is override and define the callback because
MetaMapper already extends the
LifecycleCallbacks trait. Note that there is a separate
LifecycleCallbacks trait in each of the record and mapper packages, so make sure that you import the correct one. For example, if we want to notify a Comet actor whenever a new
Expense entry is saved, we can change our
Expense class as shown in Listing
8.2.5↓:
object Expense extends Expense with LongKeyedMetaMapper[Expense] with LifecycleCallbacks {
...
override def afterSave { myCometActor ! this }
}
The lifecycle hooks are executed at the main operations in an instance lifecycle:
Create When a fresh instance is first saved (corresponding to a table insert).
Delete When an instance is deleted.
Save When a new or existing instance is inserted or updated. beforeSave is always called before beforeCreate or beforeUpdate. Similarly, afterSave is always called after afterCreate or afterUpdate.
Update When an instance that already exists in the database is updated (corresponding to a table update).
Validation When form validation occurs.
For each of these points you can execute your code before or after the operation is run.
8.2.6 Base Field Types
The Record and Mapper frameworks define several basic field types. The following table shows the corresponding types between Mapper and Record, as well as a brief description of each type.
Mapper type / Record type: Description

MappedBinary / BinaryField: Represents a byte array. You must provide your own overrides for toForm and asXHtml/asHtml for input and display.
MappedBirthYear / N/A: Holds an Int that represents a birth year. The constructor takes a minAge parameter that is used for validation.
MappedBoolean / BooleanField: Represents a Boolean value. The default form representation is a checkbox.
MappedCountry / CountryField: Represents a choice from an enumeration of country phone codes as provided by the net.liftweb.mapper.Countries.I18NCountry class. The default form representation is a select.
MappedDateTime / DateTimeField: Represents a timestamp (java.util.Calendar for Record, java.util.Date for Mapper). The default form representation is a text input.
MappedDouble / DoubleField: Represents a Double value.
MappedEmail / EmailField: Represents an email address with a maximum length.
MappedEnum / EnumField: Represents a choice from a given scala.Enumeration. The default form representation is a select.
MappedEnumList / N/A: Represents a choice of multiple Enumerations. The default form representation is a set of checkboxes, one for each enum value.
MappedFakeClob / N/A: Fakes a CLOB value (really stores String bytes to a BINARY column).
MappedGender / N/A: Represents a Gender enumeration. Display values are localized via the I18NGenders object. Internationalization is covered in appendix D↓.
MappedInt / IntField: Represents an Int value.
MappedIntIndex / N/A: Represents an indexed Int field (typically a primary key). In Record this is achieved with the KeyField trait.
MappedLocale / LocaleField: Represents a locale as selected from the java.util.Locale.getAvailableLocales method. The default form representation is a select.
MappedLong / LongField: Represents a Long value.
MappedLongForeignKey / N/A: Represents a mapping to another entity via the other entity’s Long primary key. This functionality is not yet supported in Record.
MappedLongIndex / N/A: Represents an indexed Long field (typically a primary key). In Record this is achieved with the KeyField trait.
MappedPassword / PasswordField: Represents a password string. The default form representation is a password input (obscured text).
MappedPoliteString / N/A: Just like MappedString, but the default value is an empty string and the input is automatically truncated to fit the database column size.
MappedPostalCode / PostalCodeField: Represents a validated postal code string. The field takes a reference to a MappedCountry (CountryField in Record) at definition and validates the input string against the selected country’s postal code format.
MappedString / StringField: Represents a string value with a maximum length and optional default value.
MappedStringForeignKey / N/A: Represents a mapping to another entity via the other entity’s String primary key. This functionality is not yet supported in Record.
MappedStringIndex / N/A: Represents an indexed String field (typically a primary key). In Record this is achieved with the KeyField trait.
MappedText / N/A: Represents a String field that stores to a CLOB column in the database. This can be used for large volumes of text.
MappedTextarea / TextAreaField: Represents a String field that will use an HTML textarea element for its form display. When you define the field you can override the textareaCols and textareaRows defs to control the dimensions of the textarea.
MappedTimeZone / TimeZoneField: Represents a time zone selected from java.util.TimeZone.getAvailableIDs. The default form representation is a select.
MappedUniqueId / N/A: Represents a unique string of a specified length that is randomly generated. The implementation doesn’t allow the user to write new values to the field. This can be thought of as a GUID.
8.2.7 Defining Custom Field Types in Mapper
The basic
MappedField types cover a wide range of needs, but sometimes you may find yourself wanting to use a specific type. In our example, we would like a decimal value for our expense amount and account balance. Using a double would be inappropriate due to imprecision and rounding errors, so instead we base it on
scala.BigDecimal. We’re going to provide an abridged version of the code that will end up in the Lift library. Feel free to examine the source to see the constructors and methods that we’ve omitted. Our first task is to specify the class signature and constructors, as shown in Listing
8.2.7↓. Note that the
BigDecimal we’re using here is
scala.BigDecimal, not
java.math.BigDecimal. We’ll cover how we make this work with JDBC (which doesn’t support scala.BigDecimal) in a moment.
MappedDecimal Constructors
import _root_.java.math.{MathContext, RoundingMode}
class MappedDecimal[T <: Mapper[T]] (val fieldOwner : T,
val context : MathContext,
val scale : Int) extends MappedField[BigDecimal,T] {
// ... constructor taking initial value ...
def this(fieldOwner : T, value : BigDecimal, context: MathContext) = {
this(fieldOwner, context, value.scale)
setAll(value) // we’ll cover this later in this section
}
def this(fieldOwner : T, value : BigDecimal) = {
this(fieldOwner, MathContext.UNLIMITED, value.scale)
setAll(value)
}
The first part of the class definition is the type signature; the bound [T <: Mapper[T]] indicates that whatever type “owns” this field must be a Mapper subclass (<: specifies an upper type bound). With our primary constructor we specify the owner mapper as well as the MathContext (this controls rounding and precision, i.e. the total number of digits) and scale of the decimal value. The scale in BigDecimal essentially represents the number of digits to the right of the decimal point. In addition, we specify ancillary constructors to take an initial value with or without an explicit MathContext.
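Once the class is complete, the field can be declared like any other MappedField. A usage sketch (the DECIMAL64 context and the scale of 2 are arbitrary illustrative choices, not values mandated by the book):

```scala
import _root_.java.math.MathContext

class Expense extends LongKeyedMapper[Expense] with IdPK {
  // Monetary amount: two decimal places, DECIMAL64 precision/rounding,
  // using the primary (owner, context, scale) constructor defined above
  object amount extends MappedDecimal(this, MathContext.DECIMAL64, 2)
}
```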
Now that we have the constructors in place, there are several abstract methods on
MappedField that we need to define. The first of these is a method to provide a default value. The default value is used for uninitialized fields or if validation fails. We also need to specify the class for our value type by implementing the
dbFieldClass method. Listing
8.2.7↓ shows both of these methods. In our case, we default to a zero value, with the scale set as specified in the contructor. Note that
BigDecimal instances are generally immutable, so the
setScale method returns a new instance. We also provide the vars and methods that handle the before and after values of the field. These values are used to handle persistence state. If you change the value of the field, then the original value is held until the instance is saved to the database. The
st method is used internally to set the value of the field when instances are “rehydrated” from the database.
private val zero = BigDecimal("0")
def defaultValue = zero.setScale(scale)
def dbFieldClass = classOf[BigDecimal]
// The data and orgData variables are used so that
// we know when the field has been modified by the user
private var data : BigDecimal = defaultValue
private var orgData : BigDecimal = defaultValue
private def st (in : BigDecimal) {
data = in
orgData = in
}
// The i_is_! and i_was_! methods are used internally to
// keep track of when the field value is changed. In our
// instance they delegate directly to the data and orgData
// variables
protected def i_is_! = data
protected def i_was_! = orgData
override def doneWithSave() {
orgData = data
}
The next set of methods we need to provide deal with when and how we can access the data. Listing
8.2.7↓ shows the overrides that set the read and write permissions to true (default to false for both) as well as the
i_obscure_! and
real_i_set_! methods. The
i_obscure_! method returns the value that is used when the user doesn’t have read permissions. The
real_i_set_! method is what actually stores the internal value and sets the dirty flag when the field is updated.
override def readPermission_? = true
override def writePermission_? = true
protected def i_obscure_!(in : BigDecimal) = defaultValue
protected def real_i_set_!(value : BigDecimal): BigDecimal = {
if (value != data) {
data = value
dirty_?(true)
}
data
}
The next two methods that we need to provide deal with actually setting the value of the field. The first is
setFromAny, which takes an
Any parameter and must convert it into a
BigDecimal. The second,
setFromString is a subset of
setFromAny in that it takes a
String parameter and must return a
BigDecimal. Our implementation of these two methods is shown in Listing
8.2.7↓. We’ve also added a
setAll and
coerce method so that we have a common place to properly set scale and rounding modes on the value of the field.
def setFromAny (in : Any) : BigDecimal =
in match {
case bd : BigDecimal => setAll(bd)
case n :: _ => setFromString(n.toString)
case Some(n) => setFromString(n.toString)
case Full(n) => setFromString(n.toString)
case None | Empty | Failure(_, _, _) | null => setFromString("0")
case n => setFromString(n.toString)
}
def setFromString (in : String) : BigDecimal = {
this.setAll(BigDecimal(in))
}
protected def setAll (in : BigDecimal) = set(coerce(in))
// Make a separate method for properly adjusting scale and rounding.
// We’ll use this method later in the class as well.
protected def coerce (in : BigDecimal) =
new BigDecimal(in.bigDecimal.setScale(scale, context.getRoundingMode))
Our implementations are relatively straightforward. The only special handling we need for setFromAny is to properly deal with Lists, Boxes, Options and the null value. The BigDecimal constructor takes Strings, so the setFromString method is easy. The only addition we make over the BigDecimal constructor is to properly set the scale and rounding on the returned value.
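The coercion ultimately delegates to java.math.BigDecimal.setScale with the context’s RoundingMode, behavior you can verify with plain Scala and no Mapper at all:

```scala
import java.math.{MathContext, RoundingMode}

val context = new MathContext(10, RoundingMode.HALF_UP)
val scale = 2

// Same shape as the coerce method above: fix the scale, round half-up
def coerce(in : BigDecimal) : BigDecimal =
  new BigDecimal(in.bigDecimal.setScale(scale, context.getRoundingMode))

coerce(BigDecimal("19.995"))  // rounds half-up to 20.00
coerce(BigDecimal("3.1"))     // padded to scale 2: 3.10
```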
Our final step is to define the database-specific methods for our field, as shown in Listing
8.2.7↓. The first method we implement is
targetSQLType. This method tells Mapper what the corresponding SQL type is for our database column. The
jdbcFriendly method returns a value that can be used in a JDBC statement. Here’s where we need to use the
bigDecimal val on our
scala.BigDecimal to obtain the real
java.math.BigDecimal instance. Similarly, the
real_convertToJDBCFriendly method needs to return a java
BigDecimal for a given
scala.BigDecimal input. The
buildSet... methods return functions that can be used to set the value of our field based on different input types. These are essentially conversion functions that are used by Lift to convert data retrieved in a
ResultSet into actual field values. Finally, the
fieldCreatorString specifies what we would need in a CREATE TABLE statement to define this column. In this instance, we need to take into account the precision and scale. We use default precision if we’re set to unlimited, but it’s important to understand that actual precision for the default DECIMAL type varies between RDBMS vendors.
Database-Specific Methods
def targetSQLType = Types.DECIMAL
def jdbcFriendly(field : String) = i_is_!.bigDecimal
def real_convertToJDBCFriendly(value: BigDecimal): Object = value.bigDecimal
// The following methods are used internally by Lift to
// process values retrieved from the database.
// We don’t convert from Boolean values to a BigDecimal, so this returns null
def buildSetBooleanValue(accessor : Method, columnName : String) :
(T, Boolean, Boolean) => Unit = null
// Convert from a Date to a BigDecimal. Our assumption here is that we can take
// The milliseconds value of the Date.
def buildSetDateValue(accessor : Method, columnName : String) :
(T, Date) => Unit =
(inst, v) =>
doField(inst, accessor,{
case f: MappedDecimal[T] =>
f.st(if (v == null) defaultValue else coerce(BigDecimal(v.getTime)))
})
// Convert from a String to a BigDecimal. Since the BigDecimal object can
// directly convert a String, we just pass the String directly.
def buildSetStringValue(accessor: Method, columnName: String) :
(T, String) => Unit =
(inst, v) =>
doField(inst, accessor,{
case f: MappedDecimal[T] =>
f.st(coerce(BigDecimal(v)))
})
// Convert from a Long to a BigDecimal. This is slightly more complex than
// for a String, since we need to check for null values.
def buildSetLongValue(accessor: Method, columnName : String) :
(T, Long, Boolean) => Unit =
(inst, v, isNull) =>
doField(inst, accessor, {
case f: MappedDecimal[T] =>
f.st(if (isNull) defaultValue else coerce(BigDecimal(v)))
})
// Convert from an AnyRef (Object). We simply use the String value
// of the input here.
def buildSetActualValue(accessor: Method, data: AnyRef, columnName: String) :
(T, AnyRef) => Unit =
(inst, v) =>
doField(inst, accessor, {
case f: MappedDecimal[T] => f.st(coerce(BigDecimal(v.toString)))
})
def fieldCreatorString(dbType: DriverType, colName: String): String = {
val suffix = if (context.getPrecision == 0) "" else {
"(" + context.getPrecision + "," + scale + ")"
}
colName + " DECIMAL" + suffix
}
8.2.8 ProtoUser and MegaProtoUser
In addition to all of the database-related features, Mapper contains an extra goody to help you quickly set up small sites.
ProtoUser and
MegaProtoUser are two built-in traits that define a simple user account. The
ProtoUser trait defines some basic fields for a user:
email,
firstName,
lastName,
password and
superUser (a boolean to provide basic permissions). There are also a number of defs used to format the fields for display or to provide form labels. Listing
8.2.8↓ shows an example of a ProtoUser-based Mapper class that overrides some of the formatting defs.
class User extends ProtoUser[User] {
override def shortName = firstName.is
override def lastNameDisplayName = "surname"
}
The MegaProtoUser trait, as its name implies, extends the ProtoUser trait with a whole suite of functionality. The main thrust of MegaProtoUser (and its associated meta object,
MetaMegaProtoUser) is to automatically handle all of the scaffolding for a complete user management system, with:
-
A user registration page with configurable validation via email
-
A login page that automatically handles authentication
-
A lost password page that does reset via email
-
A change password page
-
A user edit page
-
A simple method to generate SiteMap menus for all of these pages
Of course, you can customize any of these by overriding the associated methods on the MetaMegaProtoUser object. Listing
2.1 on page 1↑ shows an example of sprucing up the signup and login pages by overriding the
loginXHtml and
signupXHtml methods. Listing
8.2.8↓ shows how easy it is to then hook the MetaMegaProtoUser menus into SiteMap.
Hooking MetaMegaProtoUser into Boot
// in Boot.scala
LiftRules.setSiteMap(SiteMap((... :: User.sitemap) :_*))
8.3 Advanced Features
In this section we’ll cover some of the advanced features of Mapper.
8.3.1 Using Multiple Databases ↓
It’s common for an application to need to access data in more than one database. Lift supports this feature through the use of overrides on your MetaMapper classes. First, we need to define the identifiers for the various databases using the
ConnectionIdentifier trait and overriding the
jndiName def. Lift comes with one pre-made:
DefaultConnectionIdentifier. Its jndiName is set to “lift”, so it’s recommended that you use something else. Let’s say we have two databases: sales and employees. Listing
8.3.1↓ shows how we would define the
ConnectionIdentifier objects for these.
Defining Connection Identifiers
object SalesDB extends ConnectionIdentifier {
def jndiName = "sales"
}
object EmployeeDB extends ConnectionIdentifier {
def jndiName = "employees"
}
Simple enough. Now, we need to create connection managers for each one, or we can combine the functionality into a single manager. To keep things clean we’ll use a single manager, as shown in Listing
8.3.1↓. Scala’s match operator allows us to easily return the correct connection.
Multi-database Connection Manager
object DBVendor extends ConnectionManager {
Class.forName("org.postgresql.Driver")
def newConnection(name : ConnectionIdentifier) = {
  try {
    name match {
      case SalesDB =>
        Full(DriverManager.getConnection(
          "jdbc:postgresql://localhost/sales",
          "root", "secret"))
      case EmployeeDB =>
        Full(DriverManager.getConnection(
          "jdbc:postgresql://server/employees",
          "root", "hidden"))
    }
  } catch {
    case e : Exception => e.printStackTrace; Empty
  }
}
def releaseConnection (conn : Connection) { conn.close }
}
Now that we’ve defined our connection identifiers, we need to be able to use them in our
Mapper instances. There are several ways to do this. The first (simplest) way is to override the
dbDefaultConnectionIdentifier method on your
MetaMapper object, as shown in Listing
8.3.1↓. In this example we’re setting the
MetaMapper to always use the
EmployeeDB connection for all persistence operations.
Defining the Default Connection Identifier
object EmployeeMeta extends Employee with LongKeyedMetaMapper[Employee] {
...
override def dbDefaultConnectionIdentifier = EmployeeDB
...
}
The second way to utilize more than one DB is to use the “DB” version of the persistence methods, as we mentioned in Section
8.1.7↑. Listing
8.3.1↓ shows how we can perform a
findAll with a specific connection.
Using a Connection Identifier Directly
val employees = EmployeeMeta.findAllDb(EmployeeDB)
8.3.2 Database Sharding
A special case of using multiple databases is sharding. Sharding is a means to scale your database capacity by associating entities with one database instance out of a federation of servers based on some property of the entity. For instance, we could distribute user entities across 3 database servers by using the first character of the last name: A-H goes to server 1, I-P goes to server 2, and Q-Z goes to server 3. As simple as this sounds, there are some important factors to remember:
-
Sharding increases the complexity of your code.
-
To get the most benefit out of sharding, you need to carefully choose and tune your “selector.” If you’re not careful, you can get an uneven distribution where some servers handle significantly more load than others, defeating the purpose of sharding. The example we’ve given here of using the last name is, in practice, a very poor choice. We recommend reading http://startuplessonslearned.blogspot.com/2009/01/sharding-for-startups.html for a good overview of the pros and cons of various selector strategies.
-
When you use sharding, you can’t just use normal joins anymore because the data isn’t all within one instance. This means more work on your part to properly retrieve and associate data.
Mapper provides a handy feature for sharding that allows you to choose which database connection you want to use for a specific entity. There are two methods we can use to control the behavior:
dbSelectDBConnectionForFind and
dbCalculateConnectionIdentifier. dbSelect... is used to find an instance by primary key, and takes a partial function (typically a match clause) to determine which connection to use. dbCalculate... is used when a new instance is created to decide where to store the new instance. As an example, say we’ve defined two database connections, SalesA and SalesB. We want to place new instances in SalesA if the amount is > $100 and SalesB otherwise. Listing
8.3.2↓ shows our method in action.
class Expense extends LongKeyedMapper[Expense] {
... fields, etc ...
override def dbCalculateConnectionIdentifier = {
case n if n.amount.is > 100 => SalesA
case _ => SalesB
}
}
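The lookup-side hook works similarly. Since the amount isn’t known until the row is loaded, a find by primary key has to route on the key itself; the even/odd split below is purely illustrative (in practice, the find rule must agree with wherever instances were actually stored):

```scala
class Expense extends LongKeyedMapper[Expense] {
  ... fields, etc ...
  // Route primary-key lookups to a shard based on the key value
  override def dbSelectDBConnectionForFind = {
    case id if id % 2 == 0 => SalesA
    case _ => SalesB
  }
}
```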
8.3.3 SQL-based Queries
If, despite all that Mapper covers, you find yourself still wanting more control over the query, there are two more options available to you:
findAllByPreparedStatement and
findAllByInsecureSql. The
findAllByPreparedStatement method allows you to, in essence, construct your query completely by hand. The added benefit of using a
PreparedStatement means that you can easily include user-defined data in your queries. The
findAllByPreparedStatement method takes a single function parameter. This function takes a
SuperConnection and returns a
PreparedStatement instance. Listing
8.3.3↓ shows our previous example in which we looked up all
Tags for recent
Expense entries, but here using
findAllByPreparedStatement instead. The query that you provide must at least return the fields that are mapped by your entity, but you can return other columns as well (they’ll just be ignored), so you may choose to do a “select *” if you prefer.
Using findAllByPreparedStatement
def recentTags = Tag.findAllByPreparedStatement({ superconn =>
superconn.connection.prepareStatement(
"select distinct Expense.id, Tag.name " +
"from Tag " +
"join ExpenseTag et on Tag.id = et.tag " +
"join Expense ex on ex.id = et.expense " +
"where ex.dateOf > (CURRENT_DATE - interval '30 days')")
})
The findAllByInsecureSql method goes even further, executing the string you submit directly as a statement without any checks. The same general rules apply as for
findAllByPreparedStatement, although you need to add the IHaveValidatedThisSQL parameter as a code audit check. In either case, the ability to use full SQL queries can allow you to do some very powerful things, but it comes at the cost of losing type safety and possibly making your app non-portable.
As a last resort, Mapper provides support for non-entity SQL queries through a few methods on the DB object. The first method we'll look at is DB.runQuery. This method allows you to provide a full SQL query string, and is overloaded to take a parameterized query. It returns a Pair[List[String], List[List[String]]], with the first List[String] containing all of the column names and the second List corresponding to the rows of the result set. For example, let's say we wanted to compute the sum of amounts for each tag in a given account. Listing 8.3.3 below shows how we could accomplish this using a parameterized query against the database.
DB.runQuery("select Tag.name, sum(amount) from Expense ex " +
            "join ExpenseTag et on et.expense = ex.id " +
            "join Tag on et.tag = Tag.id " +
            "join Account on Account.id = ex.account " +
            "where Account.id = ? group by Tag.name order by Tag.name",
            myAccount.id)
// might return:
(List("tag", "sum"),
 List(List("food", "42.00"),
      List("home", "75.49"),
      List("work", "2.00")))
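The (column names, rows) pair that DB.runQuery returns is easy to reshape into something more convenient, such as one Map per row. A small sketch in plain Scala (the sample data below simply mirrors the hypothetical result shown above; no Lift imports are required):

```scala
// Reshape a (columnNames, rows) query result into a List of
// Map[columnName -> value], one Map per result row.
def rowsToMaps(result: (List[String], List[List[String]])): List[Map[String, String]] = {
  val (columns, rows) = result
  rows.map(row => columns.zip(row).toMap)
}

// Sample data shaped like the DB.runQuery return value above
val sample = (List("tag", "sum"),
              List(List("food", "42.00"),
                   List("home", "75.49"),
                   List("work", "2.00")))

val maps = rowsToMaps(sample)
// maps.head("tag") is "food"; maps.head("sum") is "42.00"
```

Pairing each row with the column-name list this way avoids fragile positional indexing when you later consume the results.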
If you need full control over the query and full access to the result set, DB provides some low-level utility methods. The most basic is DB.use, which takes a connection identifier as well as a function that accepts a SuperConnection (a thin wrapper around JDBC's Connection). This forms a loan pattern that lets Mapper deal with all of the connection open and release details. The DB.exec method takes a provided connection, executes an arbitrary SQL statement on it, and then applies a provided function to the result set. Similarly, the DB.prepareStatement method allows you to create a prepared statement and then apply a function to it. You can combine these methods to run arbitrary SQL, as shown in Listing 8.3.3 below.
// recompute an account balance from all of the transactions
DB.use(DefaultConnectionIdentifier) { conn =>
  val balance =
    // Should use a prepared statement here. This is for example only
    DB.exec(conn,
            "select sum(ex.amount) from Expense ex where ex.account = "
              + myAccount.id) { rs =>
      if (!rs.next) BigDecimal(0)
      else new BigDecimal(rs.getBigDecimal(1))
    }

  DB.prepareStatement("update Account set balance = ? where Account.id = ?",
                      conn) { stmt =>
    stmt.setBigDecimal(1, balance.bigDecimal)
    stmt.setLong(2, myAccount.id)
    stmt.executeUpdate()
  }
}
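The loan pattern that DB.use implements is worth seeing in isolation: acquire a resource, lend it to a caller-supplied function, and guarantee release no matter how that function exits. A generic sketch in plain Scala (the names `using` and `FakeConnection` are our own for illustration, not part of Lift):

```scala
// Loan pattern: open a resource, pass it to `body`, and always close it,
// even if `body` throws. DB.use does the same with a SuperConnection.
def using[A <: AutoCloseable, B](resource: A)(body: A => B): B =
  try body(resource)
  finally resource.close()

// A toy resource that records whether it was released
class FakeConnection extends AutoCloseable {
  var closed = false
  def query(sql: String): String = s"result of: $sql"
  override def close(): Unit = closed = true
}

val conn = new FakeConnection
val result = using(conn) { c => c.query("select 1") }
// conn.closed is now true; the caller never touches close() directly
```

Because release lives in one place, callers of DB.use cannot leak a connection by forgetting cleanup, which is exactly why Mapper routes all raw-SQL access through it.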
8.4 Logging
8.5 Summary
In this chapter, we discussed the two major ORM frameworks included in Lift: Mapper and Record. We showed how you can define entities using the Mapper field types and how to coordinate between the entity and its Meta-object. We showed how you can customize the display and schema behavior of your entities with custom form controls, CRUD support, and indexing. We also showed how to query for entities using Mapper's type-safe query support. Finally, we showed how you can customize Mapper in depth by writing your own field types, using multiple databases, and using raw SQL queries.
(C) 2012, Lift 2.0 Edition. Written by Derek Chen-Becker, Marius Danciu and Tyler Weir