Exception With Butterfly Wings, a blog by forketyfork
<h1>Offline data access and synchronization in a mobile application with Couchbase Lite (2015-05-07)</h1>
This is the translation of my article that first appeared on <a href="http://habrahabr.ru/company/cit/blog/257393/">Habrahabr</a>.<br /><br />
<h1>Couchbase and Couchbase Lite</h1>
When developing data-driven mobile applications, we often encounter the customer's wish to have full access to all of the app's features, including changing the data, while the device is offline. The changes made to the data then have to sync with the backend when the device goes back online. The backend is concurrently accessed by desktop and web frontend applications, which may also modify the data.<br /><br />
Public cloud synchronization is not always viable, especially when security concerns are in play and the customer wishes to keep all of their data on their private servers. In this article I'll describe my experience of solving this task by using the Couchbase database on the server and the Couchbase Lite database in the mobile application, with two-way replication between them.<br /><br />
The <a href="http://www.couchbase.com/">Couchbase</a> database is a document-oriented distributed NoSQL database that ensures high performance by writing data into memory first and eventually persisting it to disk. Couchbase ensures strong consistency between the nodes in a clustered environment by making them independent and equal, with each document bound to a particular node. Couchbase is queried with indexed views that implement the <a href="https://en.wikipedia.org/wiki/MapReduce">MapReduce</a> pattern.<br /><br />
<a href="http://developer.couchbase.com/mobile/get-started/couchbase-lite-overview/index.html">Couchbase Lite</a> is a lightweight version of Couchbase that is intended for desktop and mobile applications and is able to replicate with a Couchbase server. Couchbase Lite is implemented for the iOS, Android, Java and .NET platforms, so it can be used not only in mobile but also in desktop applications. It's worth mentioning that the iOS version of Couchbase Lite currently has several advantages over the other platforms: for instance, full-text search and automatic mapping of documents to Objective C and Swift objects.<br /><br />
For synchronization between Couchbase and Couchbase Lite, an almost-CouchDB-compatible replication protocol is used. Almost, because the authors don't guarantee complete compatibility due to the obscure documentation of the CouchDB protocol, which they even had to partly reverse-engineer. This protocol is implemented in <a href="http://developer.couchbase.com/mobile/get-started/what-is-sync-gateway/index.html">Sync Gateway</a>, a REST-based replication service. All clients that wish to sync data should connect to the central database through this service.<br /><br />
<h1>Couchbase Server installation and setup</h1>
<h2>Couchbase installation</h2>
The installation process of Couchbase differs between platforms and is described <a href="http://docs.couchbase.com/admin/admin/install-intro.html">in the documentation</a>. Let's assume the database is already installed on localhost. The default location of the admin console is <a href="http://localhost:8091/">http://localhost:8091/</a>. Let's go there and create a bucket named "demo" which we'll use for storing our documents. To do that, open the Data Buckets tab and click the Create New Data Bucket button.<br /><br />
<a href="http://habrastorage.org/files/019/653/093/0196530932264927a69ab8cd64b23949.png"><img width="580px" src="http://habrastorage.org/files/019/653/093/0196530932264927a69ab8cd64b23949.png" /></a>
Enter the bucket name "demo" and limit its memory quota to 100 MB.<br /><br />
<a href="http://habrastorage.org/files/025/5f1/146/0255f11463814bbda8e49f55be2d6e67.png"><img width="580px" src="http://habrastorage.org/files/025/5f1/146/0255f11463814bbda8e49f55be2d6e67.png"/></a>
When all is done, a new bucket named demo will appear in the list of buckets, with a green circle beside it that indicates its normal activity.<br /><br />
<a href="http://habrastorage.org/files/bcb/d9c/866/bcbd9c866e714d5c973fb21955f10e84.png"><img width="580px" src="http://habrastorage.org/files/bcb/d9c/866/bcbd9c866e714d5c973fb21955f10e84.png"/></a>
Click the Documents button and observe that the newly created bucket is empty.<br /><br />
<a href="http://habrastorage.org/files/ac7/8f2/0fc/ac78f20fcc2a497bacfa972415c6010d.png"><img width="580px" src="http://habrastorage.org/files/ac7/8f2/0fc/ac78f20fcc2a497bacfa972415c6010d.png"/></a>
<h2>Sync Gateway setup</h2>
<a href="http://developer.couchbase.com/mobile/develop/guides/sync-gateway/getting-started-with-sync-gateway/index.html">Sync Gateway installation and setup</a> are described in the documentation. Here I’ll provide a sync-gateway-config.json file that will allow you to run the sample application that we’ll develop in this article:<br /><br />
<pre class="brush: javascript">
{
  "interface": ":4984",
  "adminInterface": "0.0.0.0:4985",
  "log": ["CRUD+", "REST+", "Changes+", "Attach+"],
  "databases": {
    "demo": {
      "bucket": "demo",
      "server": "http://localhost:8091",
      "users": {
        "GUEST": {"disabled": false, "admin_channels": ["*"]}
      },
      "sync": `function(doc) {channel(doc.channels);}`
    }
  }
}
</pre><br /><br />
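The sync function in the config above is ordinary JavaScript that Sync Gateway evaluates for every incoming document revision; the channel() call is provided by the Sync Gateway runtime. As a rough sketch of what it does, here is the same routing logic with channel() stubbed out for local experimentation (the stub and the "public" fallback are illustrative additions, not part of the actual config):

```javascript
// The sync function Sync Gateway evaluates for each incoming revision.
// Route the document into the channels it lists, defaulting to "public".
function syncFunction(doc) {
  channel(doc.channels || "public");
}

// Stub of Sync Gateway's channel() call, for local experimentation only;
// in Sync Gateway itself this function is injected by the runtime.
var assigned = [];
function channel(nameOrNames) {
  assigned = assigned.concat(nameOrNames);
}

syncFunction({ channels: ["images", "private"] });
syncFunction({});
console.log(assigned); // channels the two documents were routed to
```

This makes it easy to reason about channel routing before deploying a more elaborate sync function to the gateway.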
After running Sync Gateway with this config file, you should observe the following log showing that the demo bucket is ready to act as our central data synchronization storage:<br /><br />
<pre class="brush: bash">
23:27:02.411961 Enabling logging: [CRUD+ REST+ Changes+ Attach+]
23:27:02.412547 ==== Couchbase Sync Gateway/1.0.3(81;fa9a6e7) ====
23:27:02.412559 Configured Go to use all 8 CPUs; setenv GOMAXPROCS to override this
23:27:02.412604 Opening db /demo as bucket "demo", pool "default", server <http://localhost:8091>
23:27:02.413160 Opening Couchbase database demo on <http://localhost:8091>
23:27:02.601456 Reset guest user to config
23:27:02.601467 Starting admin server on 0.0.0.0:4985
23:27:02.603461 Changes+: Notifying that "demo" changed (keys="{_sync:user:}") count=2
23:27:02.604248 Starting server on :4984 ...
</pre><br /><br />
Refresh the page with the bucket document list, and you should see some internal Sync Gateway documents there whose IDs start with _sync:<br /><br />
<a href="http://habrastorage.org/files/bfa/ab9/cf7/bfaab9cf768b4f48af132c1baaa00d0e.png"><img width="580px" src="http://habrastorage.org/files/bfa/ab9/cf7/bfaab9cf768b4f48af132c1baaa00d0e.png"/></a>
<h1>Console application</h1>
The code of the console application <a href="https://github.com/forketyfork/couchbase-sync-demo/tree/master">is available on GitHub</a> together with the mobile application. It is mainly intended for demonstrating and testing the interaction of the mobile and desktop databases and consists of a simple Java application that connects to an embedded Couchbase Lite database, which is also implemented in Java. The application is able to create a local document with an image attachment and a timestamp_added attribute. It also initiates replication of local changes to Couchbase Server.<br /><br />
<h1>Mobile application</h1>
The mobile application will show thumbnails of pictures that were added in the console application, persisted to the local database and replicated to the mobile database via the server database. The process of creating this mobile application is described here in full. I chose the iOS platform for the mobile application as it has better support for the Couchbase Lite API. The language used here is Swift.<br /><br />
<h2>Creating a project and adding dependencies</h2>
First let’s create a simple Single View Application:<br /><br />
<a href="http://habrastorage.org/files/68c/619/044/68c619044d7f4b758e22868e18e76611.png"><img width="580px" src="http://habrastorage.org/files/68c/619/044/68c619044d7f4b758e22868e18e76611.png"/></a>
To attach the couchbase-lite-ios library to the project, let's use the CocoaPods dependency manager. The CocoaPods installation is described <a href="https://guides.cocoapods.org/using/getting-started.html#getting-started">in its documentation</a>. Let’s initialize CocoaPods in the project directory:<br /><br />
<pre class="brush: bash">
pod init
</pre><br /><br />
Add the couchbase-lite-ios dependency to Podfile:<br /><br />
<pre class="brush: bash">
target 'CouchbaseSyncDemo' do
pod 'couchbase-lite-ios', '~> 1.0'
end
</pre><br /><br />
Install the specified library into the project:<br /><br />
<pre class="brush: bash">
pod install
</pre><br /><br />
Now reopen the project as a workspace (CouchbaseSyncDemo.xcworkspace). Then add a bridging header file so you can use the CocoaPods-installed Objective C libraries in your Swift classes. To do that, add the following header file to the project, naming it CouchbaseSyncDemo-Bridging-Header.h:<br /><br />
<pre class="brush: java">
#ifndef CouchbaseSyncDemo_CouchbaseSyncDemo_Bridging_Header_h
#define CouchbaseSyncDemo_CouchbaseSyncDemo_Bridging_Header_h
#import "CouchbaseLite/CouchbaseLite.h"
#endif
</pre><br /><br />
Specify this file in your Build Settings:<br /><br />
<a href="http://habrastorage.org/files/463/027/b81/463027b817bd415b8492741ff5529c94.png"><img width="580px" src="http://habrastorage.org/files/463/027/b81/463027b817bd415b8492741ff5529c94.png"/></a>
<h2>UI stub</h2>
Make the automatically generated ViewController class inherit from UICollectionViewController:<br /><br />
<pre class="brush: java">
class ViewController: UICollectionViewController {
</pre><br /><br />
Open Main.storyboard and switch the default ViewController to a Collection View Controller by dragging it from the Object Library and redirecting the Storyboard Entry Point to it. In the Custom Class section of the Identity Inspector, specify the generated ViewController. Also select the Collection View Cell and, in its Attributes Inspector, specify "cell" as its Reuse Identifier. The result is shown in the following screenshot:<br /><br />
<a href="http://habrastorage.org/files/5a6/c6e/994/5a6c6e994c144ea5b5b24a980b1202f9.png"><img width="580px" src="http://habrastorage.org/files/5a6/c6e/994/5a6c6e994c144ea5b5b24a980b1202f9.png"/></a>
<h2>Initializing and starting the replication</h2>
Create a class CouchbaseService that will encapsulate the database-related functionality and implement it as a singleton:<br /><br />
<pre class="brush: java">
private let CouchbaseServiceInstance = CouchbaseService()

class CouchbaseService {
    class var instance: CouchbaseService {
        return CouchbaseServiceInstance
    }
}
</pre><br /><br />
Now open the demo database in the constructor of this class and start continuous pull replication. If the application is run in the simulator, and Couchbase Server is running on the same machine, then we can use localhost as the replication address. The continuous flag ensures that the replication runs continuously via a long-polling mechanism. You should also create the "images" view for extracting the list of all images:<br /><br />
<pre class="brush: java">
private let pull: CBLReplication
private let database: CBLDatabase

private init() {
    // create or open the database
    database = CBLManager.sharedInstance().databaseNamed("demo", error: nil)

    // initiate pull replication
    let syncGatewayUrl = NSURL(string: "http://localhost:4984/demo/")
    pull = database.createPullReplication(syncGatewayUrl)
    pull.continuous = true
    pull.start()

    // create a view of all documents in the database
    database.viewNamed("images").setMapBlock({ (doc: [NSObject : AnyObject]!, emit: CBLMapEmitBlock!) -> Void in
        emit(doc["timestamp_added"], nil)
    }, version: "1")
}
</pre><br /><br />
<h2>Couchbase Lite views</h2>
A Couchbase view is an indexed and automatically refreshed result of executing a pair of functions, map and (optionally) reduce, over all of the documents in the bucket. Here the view is specified only by its map function, which returns each document's creation timestamp as the key. The key is also used to sort the view's results, so the images will always be sorted by the time they were added. The version parameter specifies the view's version and has to be changed every time we change the view's code; the version change is a signal for Couchbase to rebuild the view using the new code.<br /><br />
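Since Couchbase Server views are themselves defined by JavaScript map functions, the idea is easy to sketch outside the database: indexing amounts to running the map function over every document and keeping the emitted rows sorted by key. The toy indexer below is only an illustration of that behavior, not Couchbase's actual implementation:

```javascript
// Documents as they might look in the demo bucket.
var docs = [
  { _id: "b", timestamp_added: "2015-04-15T23:41:15" },
  { _id: "a", timestamp_added: "2015-04-15T23:40:02" },
  { _id: "c", timestamp_added: "2015-04-15T23:42:30" },
];

// The map function: emit the creation timestamp as the key.
function map(doc, emit) {
  emit(doc.timestamp_added, null);
}

// A toy indexer: run map over all documents and keep rows sorted by key,
// which is why view results come back ordered by the time added.
function buildView(docs, map) {
  var rows = [];
  docs.forEach(function (doc) {
    map(doc, function (key, value) {
      rows.push({ key: key, value: value, id: doc._id });
    });
  });
  rows.sort(function (a, b) { return a.key < b.key ? -1 : 1; });
  return rows;
}

var rows = buildView(docs, map);
console.log(rows.map(function (r) { return r.id; })); // ids in timestamp order
```

The same mental model carries over to the Couchbase Lite view defined in Swift above: the emitted key controls the ordering of the query results.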
Views in Couchbase can be queried. A specific type of queries is a live query, which results in an automatically updated array of documents. Thanks to Objective C and Swift’s <a href="https://developer.apple.com/library/ios/documentation/Cocoa/Conceptual/KeyValueObserving/KeyValueObserving.html">KVO</a> feature, we can observe this array’s changes and update the interface of our application when new data arrives via replication.<br /><br />
Admittedly, this way of tracking changes only signals that the query results changed, without identifying the specific added or deleted records. Such information would let us minimize interface updates, and gladly, Couchbase Lite provides it via the <a href="http://developer.couchbase.com/mobile/develop/references/couchbase-lite/couchbase-lite/database/database/index.html#string-change">kCBLDatabaseChangeNotification</a> event, which reports every new revision added to the database. In this example, however, I decided to use the simpler live query mechanism.<br /><br />
<h2>Dealing with the data</h2>
Let’s add to CouchbaseService class a function for executing live query to our images view:<br /><br />
<pre class="brush: java">
func getImagesLiveQuery() -> CBLLiveQuery {
    return database.viewNamed("images").createQuery().asLiveQuery()
}
</pre><br /><br />
The iOS implementation of Couchbase Lite stands out from the other platforms by its automatic bi-directional mapping of documents to object models. This mapping leverages the dynamic features of Objective C. A Swift implementation of this mapping is as follows:<br /><br />
<pre class="brush: java">
@objc
class ImageModel: CBLModel {

    @NSManaged var timestamp_added: NSString

    var imageInternal: UIImage?

    var image: UIImage? {
        if imageInternal == nil {
            imageInternal = UIImage(data: self.attachmentNamed("image").content)
        }
        return imageInternal
    }
}
</pre><br /><br />
The timestamp_added attribute is dynamically linked to the corresponding field in the document, and the attachmentNamed: function allows us to receive binary data attached to the document. To convert the document to its object model, we can use the ImageModel constructor.<br /><br />
<h2>Binding interface and data</h2>
All that's left to do is to subscribe the ViewController to live query updates and handle each update by reloading the collection view. The images attribute keeps the list of documents converted to object models.<br /><br />
<pre class="brush: java">
private var images: [ImageModel] = []
private var query: CBLLiveQuery?

override func viewDidAppear(animated: Bool) {
    query = CouchbaseService.instance.getImagesLiveQuery()
    query!.addObserver(self, forKeyPath: "rows", options: nil, context: nil)
}

override func observeValueForKeyPath(keyPath: String, ofObject object: AnyObject, change: [NSObject : AnyObject], context: UnsafeMutablePointer<Void>) {
    if object as? NSObject == query {
        images.removeAll()
        var rows = query!.rows
        while let row = rows.nextRow() {
            images.append(ImageModel(forDocument: row.document))
        }
        collectionView?.reloadData()
    }
}
</pre><br /><br />
The UICollectionViewDataSource protocol methods are quite typical and self-explanatory, except that we use the "cell" reuse identifier that we specified for the collection view cell in the storyboard earlier.<br /><br />
<pre class="brush: java">
override func collectionView(collectionView: UICollectionView, numberOfItemsInSection section: Int) -> Int {
    return images.count
}

override func collectionView(collectionView: UICollectionView, cellForItemAtIndexPath indexPath: NSIndexPath) -> UICollectionViewCell {
    let cell = collectionView.dequeueReusableCellWithReuseIdentifier("cell", forIndexPath: indexPath) as! UICollectionViewCell
    cell.backgroundView = UIImageView(image: images[indexPath.item].image)
    return cell
}
</pre><br /><br />
<h1>Running the application</h1>
Now let's see what we've achieved. Run the console application: the start command starts the replication, and the attach command creates documents with images.<br /><br />
<pre class="brush: bash">
start
CBL started
Apr 15, 2015 11:41:14 PM com.github.oxo42.stateless4j.StateMachine publicFire
INFO: Firing START
push event: PUSH replication event. Source: com.couchbase.lite.replicator.Replication@144c1e50 Transition: INITIAL -> RUNNING Total changes: 0 Completed changes: 0
Apr 15, 2015 11:41:15 PM com.github.oxo42.stateless4j.StateMachine publicFire
push event: PUSH replication event. Source: com.couchbase.lite.replicator.Replication@144c1e50 Transition: RUNNING -> IDLE Total changes: 0 Completed changes: 0
INFO: Firing WAITING_FOR_CHANGES
attach http://upload.wikimedia.org/wikipedia/commons/4/41/Harry_Whittier_Frees_-_What%27s_Delaying_My_Dinner.jpg
Saved image with id = 8e357b3c-1c7f-4432-b91d-321dc1c9fd9d
push event: PUSH replication event. Source: com.couchbase.lite.replicator.Replication@144c1e50 Total changes: 1 Completed changes: 0
push event: PUSH replication event. Source: com.couchbase.lite.replicator.Replication@144c1e50 Total changes: 1 Completed changes: 1
</pre><br /><br />
The data is replicated to the mobile device and gets displayed right away:<br /><br />
<a href="http://habrastorage.org/files/1dc/dfb/b9d/1dcdfbb9d4024be0865e3fa77ace0f30.png"><img width="580px" src="http://habrastorage.org/files/1dc/dfb/b9d/1dcdfbb9d4024be0865e3fa77ace0f30.png"/></a>
<h1>Summary</h1>
In this article I demonstrated data synchronization between the server side and a mobile application by means of Couchbase and Couchbase Lite. This makes it possible to create a mobile application that remains fully functional while the device is offline. In future articles I'll explore document revisions and the replication protocol of Couchbase Lite more closely and test it against bad connectivity, sudden backgrounding of the application and other perils of mobile app development.<br /><br />
<h1>Links</h1>
<a href="https://github.com/forketyfork/couchbase-sync-demo/tree/master">Sources of sample applications on GitHub</a><br />
<a href="http://www.couchbase.com/">Couchbase</a><br />
<a href="http://developer.couchbase.com/mobile/get-started/couchbase-lite-overview/index.html">Couchbase Lite</a><br />
<a href="http://developer.couchbase.com/mobile/get-started/what-is-sync-gateway/index.html">Sync Gateway</a><br />
<a href="https://en.wikipedia.org/wiki/MapReduce">MapReduce computation model</a><br />
<a href="http://docs.couchbase.com/admin/admin/install-intro.html">Couchbase installation</a><br />
<a href="http://developer.couchbase.com/mobile/develop/guides/sync-gateway/getting-started-with-sync-gateway/index.html">Installing and running Sync Gateway</a><br />
<a href="https://guides.cocoapods.org/using/getting-started.html#getting-started">Installing CocoaPods</a><br />
<h1>Solving issues with installing the "Real World OCaml" prerequisite libraries under Mac OS X (2015-02-06)</h1>
I ran into some trouble installing the "Real World OCaml" prerequisites under Mac OS X Yosemite. The installation process is described on the book's wiki <a href="https://github.com/realworldocaml/book/wiki/Installation-Instructions">here</a>. Googling the error messages gave me little help, which possibly means they are very specific to my installation, so I've decided to share my experience in case someone runs into similar issues.<br />
At one point, the instructions advise you to install some libraries that are used throughout the book by issuing the following command: <br />
<pre class="brush: bash">
opam install \
async yojson core_extended core_bench \
cohttp async_graphics cryptokit menhir
</pre>
<br />
At first, I failed to install the cohttp and async_graphics packages. Possibly that was because I already had objective-caml installed, and my installation process deviated a bit from the prescribed one.<br />
The cohttp package depends on the ctypes package, which was failing with the following error:<br />
<pre class="brush: bash">
# fatal error: 'ffi.h' file not found
# #include <ffi.h>
</pre>
<br />
To solve this, install libffi and add it to the LDFLAGS environment variable for the duration of the build:<br />
<pre class="brush: bash">
brew install libffi
export LDFLAGS=-L/usr/local/opt/libffi/lib
opam install cohttp
</pre>
<br />
If you receive the following error during the installation of the async_graphics module:<br />
<pre class="brush: bash">
# Error: Unbound module Graphics
</pre>
<br />
That probably means you've installed objective-caml without the Graphics module. You can verify this by listing:<br />
<pre class="brush: bash">
ls /usr/local/opt/objective-caml/lib/ocaml/graphics*
</pre>
<br />
If the listing is empty, then you should reinstall objective-caml with graphics:<br />
<pre class="brush: bash">
brew uninstall objective-caml
brew install objective-caml --with-x11
</pre>
<br />
Now the process of installing the async_graphics package should work just fine:<br />
<pre class="brush: bash">
opam install async_graphics
</pre>
<br />
<h1>Installing the Ocsigen web framework under Mac OS X and CentOS and creating a simple web application (2015-02-06)</h1>
I recently wanted to play around with OCaml and create a web application. It appears that the <a href="http://ocsigen.org">Ocsigen</a> framework is the only (or the most popular) choice for building web applications with OCaml, so here's how to install it on Mac OS X and CentOS 6 and create a simple web application.<br />
<h1>Installing on Mac OS X</h1><br />
This process was tested on Yosemite. First, install <a href="http://brew.sh">brew</a>, if you haven't got it already:<br />
<pre class="brush: bash">
ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
</pre>
<br />
Now install OCaml, the opam package manager and all the prerequisites for Eliom (the web application framework) and Macaque (the database framework). Macaque is not strictly needed to run a simple example, but you're going to need it soon enough if you're about to develop a database-backed web application.<br />
<pre class="brush: bash">
brew install ocaml opam libev gdbm pcre openssl pkg-config sqlite3
</pre>
<br />
Create a symbolic link for pkgconfig to the sqlite package, substituting your currently installed version of sqlite. This is not a very clean solution, as it will break whenever brew updates the sqlite package; if you know how to reference the latest sqlite version, please let me know.<br />
<pre class="brush: bash">
ln -s /usr/local/Cellar/sqlite/3.8.8.2/lib/pkgconfig/sqlite3.pc /usr/local/lib/pkgconfig/sqlite3.pc
</pre>
<br />
Now initialize the opam package manager. It will create the ~/.opam directory, where it keeps all of its data, including installed packages.<br />
<pre class="brush: bash">
opam init
</pre>
<br />
Now edit the ~/.profile file and add this line:<br />
<pre class="brush: bash">
eval `opam config env`
</pre>
<br />
Restart the terminal shell to pick up the environment variables, and then check that the scaffolding tool is available:<br />
<pre class="brush: bash">
eliom-distillery
</pre>
<br />
<h1>Installing on CentOS 6</h1><br />
First add the OCaml repository to yum:<br />
<pre class="brush: bash">
cd /etc/yum.repos.d/
wget http://download.opensuse.org/repositories/home:ocaml/CentOS_6/home:ocaml.repo
</pre>
<br />
Now install OCaml, opam and all the prerequisites:<br />
<pre class="brush: bash">
yum install ocaml opam ocaml-camlp4 ocaml-camlp4-devel ocaml-ocamldoc openssl-devel pcre-devel sqlite-devel
</pre>
<br />
Initialize the opam repository and install the eliom web framework and macaque database framework:<br />
<pre class="brush: bash">
opam init
opam install eliom macaque
</pre>
<br />
If for some reason you encounter an error during installation:<br />
<pre class="brush: bash">
# ocamlfind: Package `camlp4' not found
</pre>
<br />
Then try to reinstall the ocamlfind package and run the installation again:<br />
<pre class="brush: bash">
opam reinstall ocamlfind
opam install eliom macaque
</pre>
<br />
<h1>Creating and running your first Ocsigen web application</h1><br />
Create a barebones application using the generator:<br />
<pre class="brush: bash">
eliom-distillery -name mysite -template basic -target-directory mysite
</pre>
<br />
Run it:<br />
<pre class="brush: bash">
cd mysite
make test.byte
</pre>
<br />
Open <a href="http://localhost:8080/">http://localhost:8080/</a> in your browser. You should see the "Welcome from Eliom's distillery!" greeting message.<br />
<h1>Creating the simplest HTTP server with basic authentication using node.js (2014-01-31)</h1>
In this article I will show you how to create the simplest possible HTTP server with basic authentication in node.js. I have to warn you, though: I needed a quick and dirty solution for testing purposes, so this is definitely not for production use. You should at least keep hashes of user passwords rather than the plaintext passwords themselves, and use digest authentication as a more secure method.<br />
First, install the htpasswd module globally:
<br />
<pre class="brush: bash">npm install -g htpasswd
</pre>
Create a directory for your project and install http-auth module locally:
<br />
<pre class="brush: bash">
npm install http-auth
</pre>
Create a file auth-server.js with your editor of choice. Put the following lines into it:
<br />
<pre class="brush: javascript">var http = require("http");
var auth = require("http-auth");

var basic = auth.basic({
    file: __dirname + '/htpasswd'
});

http.createServer(basic, function(req, res) {
    console.log('Received request: ' + req.url);
    res.end('User successfully authenticated: ' + req.user);
}).listen(8080);
</pre>
Now create a file htpasswd in the same directory and populate it with a user name and a password separated by a colon:
<br />
<pre class="brush: bash">forketyfork:mypassword
</pre>
Now run the node server:
<br />
<pre class="brush: bash">node auth-server.js
</pre>
Go to the <a href="http://localhost:8080/">http://localhost:8080</a> URL in your browser. It will greet you with a standard basic-auth panel to enter your username and password. After successful authentication, you will see the message from the server.<br />
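Under the hood, basic authentication is nothing more than an Authorization header carrying user:password in Base64, which is exactly why plain basic auth over unencrypted HTTP is suitable only for testing. Here is a sketch of building such a header manually, e.g. for a scripted test client; the credentials match the example htpasswd entry above:

```javascript
// Build the basic-auth Authorization header the browser sends after login.
// The scheme is simply "Basic" followed by base64("user:password").
var user = "forketyfork";
var password = "mypassword";
var header = "Basic " + Buffer.from(user + ":" + password).toString("base64");
console.log(header);
```

You could pass this header to any HTTP client (e.g. via the headers option of http.request) to authenticate against the server above without a browser.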
For more info on how to use the http-auth package for basic and digest authentication, see its page on GitHub: <a href="https://github.com/gevorg/http-auth">https://github.com/gevorg/http-auth</a>. For more info on the htpasswd module, including using different types of hashes instead of plain-text passwords, see <a href="https://github.com/gevorg/htpasswd">https://github.com/gevorg/htpasswd</a>.
<h1>T-SQL: Unicode-escaping characters in a string (2014-01-29)</h1>
I am in no way a T-SQL pro, but today I needed to escape a varchar field value to create a valid JSON string while being limited to Microsoft T-SQL features only.<br />
The <a href="http://www.ietf.org/rfc/rfc4627.txt">JSON RFC</a> states that:<br />
<blockquote class="tr_bq">
All Unicode characters may be placed within the quotation marks except for the characters that must be escaped: quotation mark, reverse solidus, and the control characters (U+0000 through U+001F).</blockquote>
As it turns out, this is how you iterate through a string in T-SQL:<br />
<pre class="brush: sql">set @wcount = 0
set @index = 1
set @len = len(@string)
while @index <= @len
begin
set @char = substring(@string, @index, 1)
/* do something with @char */
set @index += 1
end
</pre>
To escape a quote or a backslash, we just prefix it with a backslash. The control characters are a bit trickier, as we need to convert them to the \u-notation used in JSON. We can use the built-in <strong>unicode</strong> function to get the ordinal value of a character and determine whether it needs to be escaped.
<br />
<pre class="brush: sql">when unicode(@char) < 32
</pre>
Then we take advantage of the <strong>fn_varbintohexstr</strong> system function to convert a char value, through the varbinary type, to a hex string.
<br />
<pre class="brush: sql">sys.fn_varbintohexstr(cast(@char as varbinary))
</pre>
Finally, after some string chopping and concatenating, we get what we want:
<br />
<pre class="brush: sql">'\u00' + right(sys.fn_varbintohexstr(cast(@char as varbinary)), 2)
</pre>
Here's the code of the function json_escape in its entirety.
<br />
<pre class="brush: sql">if object_id(N'dbo.json_escape', N'FN') is not null
drop function dbo.json_escape
go
create function dbo.json_escape (@string varchar(max)) returns varchar(max)
as
begin
declare @wcount int, @index int, @len int, @char char, @escaped_string varchar(max)
set @escaped_string = ''
set @wcount = 0
set @index = 1
set @len = len(@string)
while @index <= @len
begin
set @char = substring(@string, @index, 1)
set @escaped_string +=
case
when @char = '"' then '\"'
when @char = '\' then '\\'
when unicode(@char) < 32 then '\u00' + right(sys.fn_varbintohexstr(cast(@char as varbinary)), 2)
else @char
end
set @index += 1
end
return(@escaped_string)
end
go
</pre>
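For comparison, here is a sketch of the same escaping rules in JavaScript: quote, backslash, and control characters below U+0020 are handled just as in the T-SQL function above (this is an illustration for readers more at home outside T-SQL, not part of the T-SQL solution):

```javascript
// JSON-escape a string following the same rules as the T-SQL json_escape above:
// escape the quote, the backslash, and control characters U+0000..U+001F.
function jsonEscape(s) {
  var out = "";
  for (var i = 0; i < s.length; i++) {
    var ch = s[i];
    var code = s.charCodeAt(i);
    if (ch === '"') {
      out += '\\"';
    } else if (ch === "\\") {
      out += "\\\\";
    } else if (code < 32) {
      // \u-notation: pad the hex code to four digits, e.g. \u000a for a newline
      out += "\\u" + ("0000" + code.toString(16)).slice(-4);
    } else {
      out += ch;
    }
  }
  return out;
}

console.log(jsonEscape('line1\nline2 "quoted" back\\slash'));
```

Note that the T-SQL version can hardcode the '\u00' prefix because unicode(@char) < 32 guarantees a code point below 0x20; the generic four-digit padding above covers the same range.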
<h1>Date formatting in Velocity templates (2013-11-27)</h1>
Here's how to format a date inside a Velocity template. Add the velocity-tools library to your dependencies:
<pre class='brush: xml'>
<dependency>
<groupId>org.apache.velocity</groupId>
<artifactId>velocity-tools</artifactId>
<version>2.0</version>
</dependency>
</pre>
Import the <a href="http://velocity.apache.org/tools/devel/javadoc/org/apache/velocity/tools/generic/DateTool.html">DateTool</a> class:
<pre class='brush: java'>
import org.apache.velocity.tools.generic.DateTool;
</pre>
Add an instance of this class to the VelocityContext:
<pre class='brush: java'>
VelocityContext context = new VelocityContext();
context.put("date", new DateTool());
</pre>
Add your date object to the context:
<pre class='brush: java'>
context.put("some_date", new Date());
</pre>
Use the DateTool parameter in the template to format date:
<pre class='brush: java'>
$date.format('dd.MM.yyyy', $some_date)
</pre>
<h1>Sending large attachments via SOAP and MTOM in Java (2013-07-04)</h1>
Sometimes you need to pass a large chunk of unstructured (possibly even binary) data via the SOAP protocol, for instance when attaching a file to a message. The default way to do this is to pass the data in an XML element of the <a href="http://www.w3.org/TR/2004/PER-xmlschema-2-20040318/#base64Binary">base64Binary</a> type. What this effectively means is that your data will be <a href="http://en.wikipedia.org/wiki/Base64">Base64</a>-encoded and passed inside the message body. Not only does this enlarge the data by about 30%, but any client or server that sends or receives such a message also has to parse it entirely, which may be time- and memory-consuming on large volumes of data.<br /><br />
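The roughly 30% figure follows directly from how Base64 works: every 3 input bytes become 4 output characters, so the encoded form is about 4/3 the size of the original (plus padding for lengths not divisible by 3). A quick sanity check, assuming Node's Buffer API:

```javascript
// Base64 maps every 3 bytes of input to 4 output characters,
// so encoding inflates binary data by roughly one third.
var payload = Buffer.alloc(3 * 1000 * 1000); // a 3 MB binary attachment
var encoded = payload.toString("base64");
console.log(encoded.length / payload.length); // ≈ 4/3
```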
To solve this problem, the <a href="http://www.w3.org/TR/soap12-mtom/">MTOM</a> standard was defined. Basically, it allows you to pass the content of a base64Binary block outside of the SOAP message, leaving a simple reference element in its place. As for the corresponding HTTP binding, the message is transferred as <a href="http://www.w3.org/TR/soap12-af/">SOAP with attachments</a> with a <a href="http://www.ietf.org/rfc/rfc2387.txt">multipart/related</a> content type. I won't go into the details here; you can learn it all straight from the standards and RFCs mentioned above.<br /><br />
The tricky part is that although we've disposed of the Base64 size overhead by passing the data outside of the message, the standards themselves don't specify how client and server implementations should process the messages — whether a message and all its attachments should be read completely into memory during sending and receiving, or offloaded to external storage. By default, implementations (including Java's <a href="http://docs.oracle.com/javaee/5/tutorial/doc/bnbhg.html">SAAJ</a>) usually read the attachments entirely into memory, which opens up the possibility of running out of memory on large files or heavily loaded systems. In Java, this usually manifests as a "java.lang.OutOfMemoryError: Java heap space" error.<br /><br />
In this post I will demonstrate a simple client-server application that can transfer SOAP attachments of arbitrary size with disk offloading, using <a href="http://cxf.apache.org">Apache CXF</a> on the client and Oracle's SAAJ implementation (part of the JDK since version 6) on the server. This requires some tuning of the mentioned frameworks. The complete code of the application is <a href="https://github.com/forketyfork/mtom-soap">available on GitHub</a>.<br /><br />
First, we will place the common files (XSD and WSDL) in a separate project, as they will be used by both client and server. The WSDL definition of the service is relatively straightforward: we have a port with a single operation that consists of a SampleRequest request and a SampleResponse response from the server. The file is transferred to the server in the request. The XSD schema of the request and response is as follows:<br /><br />
<pre class='brush: xml'>
<?xml version="1.0" encoding="UTF-8"?>
<s:schema elementFormDefault="qualified"
targetNamespace="http://forketyfork.ru/mtomsoap/schema"
xmlns:s="http://www.w3.org/2001/XMLSchema"
xmlns:xmime="http://www.w3.org/2005/05/xmlmime">
<s:element name="SampleRequest">
<s:annotation>
<s:documentation>Service request</s:documentation>
</s:annotation>
<s:complexType>
<s:sequence>
<s:element name="text" type="s:string" />
<s:element name="file" type="s:base64Binary" xmime:expectedContentTypes="*/*" />
</s:sequence>
</s:complexType>
</s:element>
<s:element name="SampleResponse">
<s:annotation>
<s:documentation>Service response</s:documentation>
</s:annotation>
<s:complexType>
<s:attribute name="text" type="s:string" />
</s:complexType>
</s:element>
</s:schema>
</pre>
Take note of the imported xmime namespace and the use of the <b>xmime:expectedContentTypes="*/*"</b> attribute on the binary data element. It enables us to generate correct JAXB code from this schema: by default, a base64Binary element corresponds to a byte[] array field in the JAXB-mapped class, but as we'll see, the expectedContentTypes attribute alters the generated class:<br /><br />
<pre class='brush: java'>
@XmlAccessorType(XmlAccessType.FIELD)
@XmlType(name = "", propOrder = {
"text",
"file"
})
@XmlRootElement(name = "SampleRequest")
public class SampleRequest {
@XmlElement(required = true)
protected String text;
@XmlElement(required = true)
@XmlMimeType("*/*")
protected DataHandler file;
...
</pre>
Note that the file field is of type DataHandler, which allows for streaming processing of the data.<br /><br />
We generate the JAXB classes for both client and server, and a service class for the client, using the Apache CXF cxf-codegen-plugin for Maven at build time. The configuration is as follows:<br /><br />
<pre class='brush: xml'>
<plugin>
<groupId>org.apache.cxf</groupId>
<artifactId>cxf-codegen-plugin</artifactId>
<version>${cxf.version}</version>
<executions>
<execution>
<id>generate-sources</id>
<phase>generate-sources</phase>
<configuration>
<sourceRoot>${project.build.directory}/generated-sources/cxf</sourceRoot>
<wsdlOptions>
<wsdlOption>
<wsdl>${basedir}/src/main/resources/service.wsdl</wsdl>
<wsdlLocation>classpath:service.wsdl</wsdlLocation>
</wsdlOption>
</wsdlOptions>
</configuration>
<goals>
<goal>wsdl2java</goal>
</goals>
</execution>
</executions>
</plugin>
</pre>
In this Maven plugin configuration we explicitly specify the wsdlLocation property that will be included in the generated service class. Without it, the generated path to the WSDL file would be a local path on the developer's machine, which we obviously don't want.<br /><br />
The client (module mtom-soap-client) is quite simple, as it is based on Apache CXF and the generated SampleService class. We only enable MTOM on the underlying SOAP binding and specify infinite timeouts, as transferring large files may take a while:<br /><br />
<pre class='brush: java'>
// Creating a CXF-generated service
Sample sampleClient = new SampleService().getSampleSoap12();
// Setting infinite HTTP timeouts
HTTPClientPolicy httpClientPolicy = new HTTPClientPolicy();
httpClientPolicy.setConnectionTimeout(0);
httpClientPolicy.setReceiveTimeout(0);
HTTPConduit httpConduit = (HTTPConduit) ClientProxy.getClient(sampleClient).getConduit();
httpConduit.setClient(httpClientPolicy);
// Enabling MTOM for the SOAP binding provider
BindingProvider bindingProvider = (BindingProvider) sampleClient;
SOAPBinding binding = (SOAPBinding) bindingProvider.getBinding();
binding.setMTOMEnabled(true);
// Creating request object
SampleRequest request = new SampleRequest();
request.setText("Hello");
request.setFile(new DataHandler(new FileDataSource(args[0])));
// Sending request
SampleResponse response = sampleClient.sample(request);
System.out.println(String.format("Server responded: \"%s\"", response.getText()));
</pre>
The server is based on the <a href="http://www.springsource.org/spring-ws">Spring WS</a> framework. We won't use the typical default <annotation-config /> configuration here; instead, we specify a custom DefaultMethodEndpointAdapter configuration, because we need Spring WS to use our custom-configured jaxb2Marshaller bean:<br /><br />
<pre class='brush: xml'>
<!-- The service bean -->
<bean class="ru.forketyfork.mtomsoap.server.SampleServiceEndpoint" p:uploadPath="/tmp"/>
<!-- SAAJ message factory configured for SOAP v1.2 -->
<bean id="messageFactory" class="org.springframework.ws.soap.saaj.SaajSoapMessageFactory"
p:soapVersion="#{T(org.springframework.ws.soap.SoapVersion).SOAP_12}"/>
<!-- JAXB2 Marshaller configured for MTOM -->
<bean id="jaxb2Marshaller" class="org.springframework.oxm.jaxb.Jaxb2Marshaller"
p:contextPath="ru.forketyfork.mtomsoap.schema"
p:mtomEnabled="true"/>
<!-- Endpoint mapping for the @PayloadRoot annotation -->
<bean class="org.springframework.ws.server.endpoint.mapping.PayloadRootAnnotationMethodEndpointMapping" />
<!-- Endpoint adapter to marshal endpoint method arguments and return values as JAXB2 objects -->
<bean class="org.springframework.ws.server.endpoint.adapter.DefaultMethodEndpointAdapter">
<property name="methodArgumentResolvers">
<list>
<ref bean="marshallingPayloadMethodProcessor" />
</list>
</property>
<property name="methodReturnValueHandlers">
<list>
<ref bean="marshallingPayloadMethodProcessor" />
</list>
</property>
</bean>
<!-- JAXB2 Marshaller/Unmarshaller for method arguments and return values -->
<bean id="marshallingPayloadMethodProcessor" class="org.springframework.ws.server.endpoint.adapter.method.MarshallingPayloadMethodProcessor">
<constructor-arg ref="jaxb2Marshaller" />
</bean>
</pre>
The important thing to notice here is the <b>mtomEnabled</b> property of the jaxb2Marshaller bean; the rest of the configuration is quite typical.<br /><br />
The SampleServiceEndpoint class is a service that is bound via the @PayloadRoot annotation to process our SampleRequest requests:<br /><br />
<pre class='brush: java'>
@PayloadRoot(namespace = "http://forketyfork.ru/mtomsoap/schema", localPart = "SampleRequest")
@ResponsePayload
public SampleResponse serve(@RequestPayload SampleRequest request) throws IOException {
// randomly generating file name as a UUID
String fileName = UUID.randomUUID().toString();
File file = new File(uploadPath + File.separator + fileName);
// writing attachment to file
try(FileOutputStream fos = new FileOutputStream(file)) {
request.getFile().writeTo(fos);
}
// constructing the response
SampleResponse response = new SampleResponse();
response.setText(String.format("Hi, just received a %d byte file from ya, saved with id = %s",
file.length(), fileName));
return response;
}
</pre>
Notice how we work with the request.getFile() field of the request. Remember, the type of this field is DataHandler. What actually happens is that request.getFile() wraps an InputStream pointing to the attachment that SAAJ offloaded to disk when the request was received. So we can copy this file to another location or process it in any way without loading it entirely into memory.<br /><br />
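To make the constant-memory point concrete, here's a plain-JDK sketch of the kind of buffered copy that DataHandler.writeTo() performs for us — not the actual SAAJ/activation code, just an illustration that the attachment is pumped through a small fixed-size buffer, so heap usage does not depend on the attachment size:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class StreamingCopy {
    // Copies the input to the output through a fixed-size buffer,
    // so memory usage stays constant regardless of the data size
    static long copy(InputStream in, OutputStream out) throws IOException {
        byte[] buffer = new byte[8192];
        long total = 0;
        int read;
        while ((read = in.read(buffer)) != -1) {
            out.write(buffer, 0, read);
            total += read;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        byte[] data = new byte[100_000];
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        System.out.println(copy(new ByteArrayInputStream(data), out)); // 100000
    }
}
```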
A final trick is to enable attachment offloading in Oracle's SAAJ implementation, which is bundled with Oracle's JDK starting from version 6. To do that, we must run the server with the <b>-Dsaaj.use.mimepull=true</b> JVM argument.<br /><br />
Once again, the complete code for the article is <a href="https://github.com/forketyfork/mtom-soap">available on GitHub</a>.
<h1>How to return a file, a stream or a classpath resource from a Spring MVC controller (2013-06-21)</h1>
You can use <a href="http://static.springsource.org/spring/docs/current/javadoc-api/org/springframework/core/io/AbstractResource.html">AbstractResource</a> subclasses as return values from controller methods, combining them with the <a href="http://static.springsource.org/spring/docs/current/javadoc-api/org/springframework/web/bind/annotation/ResponseBody.html">@ResponseBody</a> method annotation.<br /><br />
So, as long as you know the filesystem path of the file or have its URI, returning a file from a Spring MVC controller is as easy as:<br />
<pre class='brush: java'>
@RequestMapping(value = "/file", method = RequestMethod.GET,
produces = MediaType.IMAGE_JPEG_VALUE)
@ResponseBody
public Resource getFile() throws FileNotFoundException {
return new FileSystemResource("/Users/forketyfork/cat.jpg");
}
</pre>
The code to return a classpath resource is quite similar:<br />
<pre class='brush: java'>
@RequestMapping(value = "/classpath", method = RequestMethod.GET,
produces = MediaType.IMAGE_JPEG_VALUE)
@ResponseBody
public Resource getFromClasspath() {
return new ClassPathResource("cat.jpg");
}
</pre>
But what about outputting data from a stream? A common piece of advice is to inject HttpServletResponse as a method parameter and write directly to the output stream of the response. But this badly breaks the abstraction, not to mention testability. Technically, we can write to a Writer injected as a method parameter, like this:<br />
<pre class='brush: java'>
@RequestMapping(value = "/writer", method = RequestMethod.GET,
produces = MediaType.TEXT_PLAIN_VALUE)
@ResponseBody
public void getStream(Writer writer) throws IOException {
writer.write("Hello World!");
}
</pre>
A seemingly simple one-liner. But if you consider serving a large chunk of binary data, this approach turns out to be slow, memory-consuming and inconvenient, since the Writer deals in chars. Moreover, Spring MVC cannot set the Content-Length header until the output is finished. Here's a slightly more verbose solution that does not break the abstraction and is both fast and testable.<br />
<pre class='brush: java'>
@RequestMapping(value = "/stream", method = RequestMethod.GET,
produces = MediaType.TEXT_PLAIN_VALUE)
@ResponseBody
public Resource getStream() {
String string = "Hello World!";
// encoding the content once, so that the length is counted in bytes
byte[] bytes = string.getBytes(StandardCharsets.UTF_8);
// acquiring the stream
InputStream stream = new ByteArrayInputStream(bytes);
// the Content-Length header is measured in bytes, not chars
final long contentLength = bytes.length;
return new InputStreamResource(stream){
@Override
public long contentLength() throws IOException {
return contentLength;
}
};
}
</pre>
First, we acquire the stream. Then we count the length of the content we need to output; this may be done in some optimized fashion so as not to process the whole content. Spring MVC first calls the contentLength() method of the InputStreamResource, sets the Content-Length header, and then pipes the stream to the client.<br /><br />
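One subtlety worth spelling out: the Content-Length header counts bytes, while String.length() counts chars, and for non-ASCII content the two differ. A quick JDK check (the Cyrillic string is written with unicode escapes to keep the source encoding-independent):

```java
import java.nio.charset.StandardCharsets;

public class ContentLengthPitfall {
    public static void main(String[] args) {
        String ascii = "Hello World!";
        // "Привет, мир!" — each Cyrillic letter takes 2 bytes in UTF-8
        String nonAscii = "\u041f\u0440\u0438\u0432\u0435\u0442, \u043c\u0438\u0440!";
        // for ASCII the char count and the UTF-8 byte count coincide...
        System.out.println(ascii.length() == ascii.getBytes(StandardCharsets.UTF_8).length); // true
        // ...but for non-ASCII text they diverge, so Content-Length must be
        // computed from the encoded bytes, not from String.length()
        System.out.println(nonAscii.length());                                // 12
        System.out.println(nonAscii.getBytes(StandardCharsets.UTF_8).length); // 21
    }
}
```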
Here we touch on a bit of inconsistency in the Spring API. The class <a href="http://static.springsource.org/spring/docs/current/javadoc-api/org/springframework/core/io/InputStreamResource.html">InputStreamResource</a> extends <a href="http://static.springsource.org/spring/docs/current/javadoc-api/org/springframework/core/io/AbstractResource.html">AbstractResource</a>, which implements the contentLength() method by reading the whole encapsulated stream to count its length. InputStreamResource does not override contentLength(), but it does override getInputStream(), forbidding calling it more than once, which effectively rules out using this class directly as a controller method return value. In the example above, we override the contentLength() method to provide the correct behavior.

<h1>"stack shape inconsistent" error during Spring/Jackson application initialization (2013-05-28)</h1>
I have a JSON-service client implemented with Spring and Jackson and deployed on WebSphere Application Server. The client worked properly, but on one machine I encountered a strange classloading issue during Spring initialization:
<pre>
java.lang.VerifyError: JVMVRFY012 stack shape inconsistent; class=org/codehaus/jackson/map/ObjectMapper
</pre>
The reason was two incompatible dependencies in the effective POM of the project:
<pre class='brush: xml'>
<dependency>
<groupId>org.codehaus.jackson</groupId>
<artifactId>jackson-mapper-asl</artifactId>
<version>1.4.2</version>
</dependency>
<dependency>
<groupId>org.codehaus.jackson</groupId>
<artifactId>jackson-mapper-lgpl</artifactId>
<version>1.9.12</version>
</dependency>
</pre>
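One way to resolve such a conflict (assuming the older artifact arrives as a transitive dependency; the coordinates of the dependency that drags it in are placeholders here) is an explicit exclusion in the pom:

```xml
<!-- hypothetical dependency that transitively pulls in the old jackson-mapper-asl;
     excluding it leaves a single ObjectMapper on the classpath -->
<dependency>
  <groupId>some.group</groupId>
  <artifactId>some-library</artifactId>
  <exclusions>
    <exclusion>
      <groupId>org.codehaus.jackson</groupId>
      <artifactId>jackson-mapper-asl</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```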
Both of those jars define the ObjectMapper class, and both ended up in the WEB-INF/lib directory. The error was intermittent, because on some machines the correct (latest) version of the library took precedence during classloading.

<h1>Of Domain Modeling, Separation of Concerns, and How the JPA Annotations Fail at Both (2013-02-01)</h1>
Not all of the JPA annotations are actually about mapping entities to the database, or even about persistence at all. Some of them are intended to instrument the Java language for better modeling of the domain. Here's a simple example of two domain classes that are somehow associated with each other (accessor methods are omitted for clarity):<br />
<pre class='brush: java'>
public class User {
private String name;
private Set<Role> roles;
}
public class Role {
private String name;
}
</pre>
Observing these classes, can we unambiguously determine the relationship between them? A User has a set of Roles, that much we can say for sure. But for all we know, this association may be either one-to-many or many-to-many from the domain viewpoint. There's no reverse association from Role to User, so we don't actually know whether the same Role can be assigned to several different Users. Well, suppose we wanted to model a many-to-many relationship. Let's add the reverse link to the Role class and see if it helps, though we may have no use for the reverse connection in the context of our model at all.<br />
<pre class='brush: java'>
public class User {
private String name;
private Set<Role> roles;
}
public class Role {
private String name;
private Set<User> users;
}
</pre>
Does it feel better now? Surely we have a many-to-many association between those two classes now... or do we? Actually, we've made quite an assumption: that the User.roles set and the Role.users set point at each other, i.e., that they model the same association, but that may well not be the case. For example, the User.roles set may be the set of Roles that a User has as a user of the system, while the Role.users set may be a completely unrelated set in the context of the domain model.<br /><br />
For example, a Role may have a set of Users that have the authority to grant that role. Surely in this case we could do a better job of naming those two fields differently, so that no one would mistake them for the same association. But now we have two different associations, and we still have no clue as to what their actual relationship is. We're still missing the point of modeling the domain with the Java programming language, which is considered to be an Object-Oriented language — seemingly the right choice for the job!<br /><br />
That's where the association annotations come in. Here's the annotated version of the first case — unidirectional association:<br />
<pre class='brush: java'>
public class User {
private String name;
@ManyToMany
private Set<Role> roles;
}
public class Role {
private String name;
}
</pre>
Now we see clearly that the association between those entities is many-to-many, and there's no ambiguity in it. As for the second case:<br />
<pre class='brush: java'>
public class User {
private String name;
@ManyToMany
private Set<Role> roles;
}
public class Role {
private String name;
@ManyToMany(mappedBy = "roles")
private Set<User> users;
}
</pre>
The mappedBy attribute of the @ManyToMany annotation in the Role class is what makes those two sets "click" together. The "roles" string is the name of the User class field (not a database field!). OMG, is that a String pointer to a Java class field? Yeah, yeah, we should probably have a more obvious and compile-checked pointer to the field from the other side, but, alas, the Java programming language is so poor that it does not leave us any options. Some IDEs may help you by highlighting the value of this attribute if you misspell it, or even navigating you to the connected field with a something+click on it, but still, I'd argue that referring to a Java bean field by its name in a String is quite a poor (yet inevitable) way of "binding" the bidirectional association together.<br /><br />
But wait! "mappedBy"?! Seems like we have another fallacy here. The word "mapping" is surely from another story. What mapping is this all about? We haven't said a word yet about mapping the entities to a relational data source; all we did was model the domain. But let's blame this poor choice of attribute name (and the broken separation of concerns) on the developers of the JPA standard.<br /><br />
Another weirdness here is the "fetch" attribute that every association annotation has. "Fetch" is actually a data source query optimization concept that allows us to lazily load some heavily packed associations that may not always be needed. For instance, if we only want to show the User's name, why should the data source fetch the collection of roles for us? That's where the "fetch" attribute comes in:<br />
<pre class='brush: java'>
public class User {
private String name;
@ManyToMany(fetch = FetchType.LAZY)
private Set<Role> roles;
}
</pre>
But wait, here we find ourselves even deeper in data source and mapping concerns. Why do those "fetch" types even matter if all we want for now is to simply model the domain?<br /><br />
To conclude, I believe that these four annotations — @OneToOne, @OneToMany, @ManyToOne and @ManyToMany — are intended for the developer to model the domain, and they would surely be better off in another package, or even in another API that has nothing to do with "persistence". Maybe even somewhere in Java SE. But we have them only in the Enterprise Edition — as if domain modeling had to be done only in enterprise applications, and only in connection with underlying relational data sources. But that's not always the case. Those annotations would be useful in single-user desktop applications as well, or even in applications that have no persistence whatsoever but still need a domain model. And those annotations are no good place to specify fetching attributes, either.

<h1>Displaying Maven Project Version In Your Web Application (2012-09-18)</h1>
If you need to display the Maven POM project version of your web application on its web pages, the easiest way to do this is to use the standard Maven resource filtering functionality. Say we have the following main index.html page, and we'd like to place the POM version in the title of the page:
<pre class='brush: html'>
<html>
<head>
<title>My Application — version ${project.version}</title>
</head>
<body>
...
</body>
</html>
</pre>
We'll use a tweak of the maven-war-plugin configuration to substitute the placeholder. This plugin runs during the package phase anyway; we only configure it to filter certain resources, with the following code in the build/plugins section of the pom:
<pre class='brush: xml'>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-war-plugin</artifactId>
<version>2.2</version>
<configuration>
<webResources>
<resource>
<directory>src/main/webapp</directory>
<filtering>false</filtering>
<excludes>
<exclude>**/*.html</exclude>
</excludes>
</resource>
<resource>
<directory>src/main/webapp</directory>
<filtering>true</filtering>
<includes>
<include>**/*.html</include>
</includes>
</resource>
</webResources>
</configuration>
</plugin>
</plugins>
</build>
</pre>
With the maven-war-plugin configured as shown above, all HTML files in the webapp directory will be filtered, i.e., ${project.version} and any other placeholders will be replaced with their actual values. All other resources will be copied unaltered. If your project uses JSP, XHTML or some sort of templating, configure the include/exclude file extensions accordingly.

<h1>Logging of messages sent with Spring WS client (2012-08-02)</h1>
When sending messages with the Spring WS client, you often need to observe the actual messages sent and received. To enable this, set the logging level by adding the following lines to log4j.properties:<br />
<br />
<code>
log4j.logger.org.springframework.ws.client.MessageTracing.sent=TRACE<br />
log4j.logger.org.springframework.ws.client.MessageTracing.received=TRACE<br />
</code>
<br />
Here's what you might observe in the logs:<br />
<br />
<code>
2012-08-02 12:31:49,615 TRACE [org.springframework.ws.client.MessageTracing.sent] - Sent request [<SOAP-ENV:Envelope...<br />
2012-08-02 12:31:50,581 TRACE [org.springframework.ws.client.MessageTracing.received] - Received response [<env:Envelope<br />
</code>
<br />
For more info about message logging, see <a href="http://static.springsource.org/spring-ws/site/reference/html/common.html#logging">this chapter</a> of the excellent Spring WS documentation.

<h1>Ext JS 4: How to wrap text in grid headers or cells (2012-04-10)</h1>
Here's a CSS hack for Ext JS 4 to wrap the text in grid column headers for all grids in the application:
<pre class='brush: css'>
.x-column-header-inner .x-column-header-text {
white-space: normal;
}
.x-column-header-inner {
line-height: normal;
padding-top: 3px !important;
padding-bottom: 3px !important;
text-align: center;
top: 20%;
}
</pre>
And here's the trick to make it work for a single grid. You should configure your grid with an id:
<pre class='brush: javascript'>
{
xtype : 'grid',
id : 'somegrid',
...
}
</pre>
And add this id to the CSS path:
<pre class='brush: css'>
#somegrid .x-column-header-inner .x-column-header-text {
white-space: normal;
}
#somegrid .x-column-header-inner {
line-height: normal;
padding-top: 3px !important;
padding-bottom: 3px !important;
text-align: center;
top: 20%;
}
</pre>
Here's a hack to wrap text in grid cells:
<pre class='brush: css'>
.x-grid-cell-inner {
white-space: normal
}
</pre>
And that's the same hack for a single grid:
<pre class='brush: css'>
#somegrid .x-grid-cell-inner {
white-space: normal
}
</pre>
<h1>Integrating DBUnit with Spring TestContext Framework (2012-03-26)</h1>
The Spring Framework 2.5 came with a new <a href="http://static.springsource.org/spring/docs/3.1.x/spring-framework-reference/html/testing.html#testcontext-framework">TestContext</a> framework which made integration testing of database code a lot easier. It provides annotations to declaratively specify the application context in which your test runs, annotations to mark test methods as transactional, and base test classes for <a href="http://www.junit.org/">JUnit</a> and <a href="http://testng.org">TestNG</a>. In this article I will describe an approach to integrating the TestContext framework with the <a href="http://www.dbunit.org/">DBUnit</a> framework, which allows you to initialize the test database before a test and verify its state against an expected dataset after the test completes.<br /><br />
Here’s a simple example. We are about to test the correctness of persisting a domain object.
<pre class='brush: java'>
@Entity
public class Person {
@Id @GeneratedValue(strategy = GenerationType.AUTO)
private Long id;
private String name;
...
</pre>
Here’s the DAO that does the saving:
<pre class='brush: java'>
public class JpaPersonDao implements PersonDao {
@PersistenceContext
private EntityManager em;
public void save(Person person) {
em.persist(person);
}
}
</pre>
It’s worth mentioning that integration testing of a DAO implies testing the DAO itself together with the domain object mappings and the underlying persistence provider. In our test case we use Hibernate. Let’s create a Spring application context named testContext.xml with the following content (headers and beans tag omitted):
<pre class='brush: xml'>
<!-- For declarative transactions via @Transactional annotation -->
<tx:annotation-driven/>
<!-- we’ll use embedded HSQLDB database -->
<jdbc:embedded-database id="dataSource" />
<!-- persistence unit configuration -->
<bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
<property name="persistenceProviderClass" value="org.hibernate.ejb.HibernatePersistence"/>
<property name="dataSource" ref="dataSource"/>
<property name="packagesToScan" value="ru.kacit.commons.test.dbunit"/> <!-- package with domain classes -->
<property name="jpaPropertyMap">
<map>
<entry key="hibernate.show_sql" value="true"/>
<entry key="hibernate.format_sql" value="true"/>
<entry key="hibernate.hbm2ddl.auto" value="create"/>
</map>
</property>
</bean>
<!-- generic transaction manager -->
<bean class="org.springframework.orm.jpa.JpaTransactionManager" id="transactionManager">
<property name="entityManagerFactory" ref="entityManagerFactory"/>
</bean>
<!-- DAO under test -->
<bean class="ru.kacit.commons.test.dbunit.JpaPersonDao" />
</pre>
Now let’s create a test class by subclassing the standard Spring TestContext Framework class for transactional JUnit tests. The @ContextConfiguration annotation specifies the context file (in our case, located on the classpath) in which the current test will run. This allows us to inject the DAO under test using the @Autowired annotation.
<pre class='brush: java'>
@ContextConfiguration("classpath:testContext.xml")
public class JunitDbunitTest extends AbstractTransactionalJUnit4SpringContextTests {
@Autowired
public PersonDao personDao;
@Test
public void test1() {
personDao.save(new Person("Chip"));
personDao.save(new Person("Dale"));
personDao.save(new Person("Gadget"));
}
}
</pre>
The base class AbstractTransactionalJUnit4SpringContextTests is configured to run each test method in a transaction that is rolled back after the test is over.<br /><br />
Now we have to verify that the data actually got persisted into the database. We could simply inject EntityManager in our test class and use it right after persisting the data to make the asserts we need. In most cases of testing a DAO that would be enough to verify the correctness of mappings and DAO logic. The transaction will rollback after the test, and all side effects of the test will be successfully eliminated.<br /><br />
There are cases, though, when we have to commit the transaction after the test, to make sure it completes correctly and to check the data that actually got saved in the database, down to specific table columns. It’s worth mentioning that such verifications tightly couple the test to the physical structure of the database, making it more fragile. Moreover, the test may fail when executed against another persistence provider due to differences in the exported database structure or table naming.<br /><br />
The base test classes provided by Spring only allow you to execute SQL statements against the test database. Let’s see how DBUnit may help us here, and how to integrate it with Spring TestContext Framework.<br /><br />
DBUnit allows you to describe the state of the database as a generic XML dataset, with no binding to the underlying physical data types. Here’s the initial dataset for our test. It is empty: there’s a single ‘person’ table with two columns, corresponding to the domain class we defined earlier.<br /><br />
<pre class='brush: xml'>
<!DOCTYPE dataset SYSTEM "dataset.dtd">
<dataset>
<table name="person">
<column>id</column>
<column>name</column>
</table>
</dataset>
</pre>
Here’s the expected dataset. The ‘person’ table contains three records.
<pre class='brush: xml'>
<!DOCTYPE dataset SYSTEM "dataset.dtd">
<dataset>
<table name="person">
<column>id</column>
<column>name</column>
<row>
<value>1</value>
<value>Chip</value>
</row>
<row>
<value>2</value>
<value>Dale</value>
</row>
<row>
<value>3</value>
<value>Gadget</value>
</row>
</table>
</dataset>
</pre>
There’s also a shortened notation in DBUnit in which tags correspond to table names and attributes to columns. But the full format often turns out to be handier.<br /><br />
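For reference, the expected dataset above would look like this in the shortened notation (a FlatXmlDataSet in DBUnit terms):

```xml
<!-- the same expected data in DBUnit's flat notation -->
<dataset>
  <person id="1" name="Chip"/>
  <person id="2" name="Dale"/>
  <person id="3" name="Gadget"/>
</dataset>
```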
Let’s create an annotation for a test method that specifies the dataset to load before the test starts (the ‘before’ attribute) and the dataset to verify against after the test completes (the ‘after’ attribute):<br /><br />
<pre class='brush: java'>
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
public @interface DbunitDataSets {
    String before();
    String after();
}
</pre>
To process this annotation, we shall extend the base test class AbstractTransactionalJUnit4SpringContextTests.
<pre class='brush: java'>
@TestExecutionListeners(
        AbstractDbunitTransactionalJUnit4SpringContextTests.DbunitTestExecutionListener.class
)
public abstract class AbstractDbunitTransactionalJUnit4SpringContextTests
        extends AbstractTransactionalJUnit4SpringContextTests {

    /** DBUnit tester */
    private IDatabaseTester databaseTester;

    /** expected dataset file name */
    private String afterDatasetFileName;

    /** method to execute after the test transaction is completed (verification) */
    @AfterTransaction
    public void assertAfterTransaction() throws Exception {
        if (databaseTester == null || afterDatasetFileName == null) {
            return;
        }
        IDataSet databaseDataSet = databaseTester.getConnection().createDataSet();
        IDataSet expectedDataSet =
                new XmlDataSet(ClassLoader.getSystemResourceAsStream(afterDatasetFileName));
        Assertion.assertEquals(expectedDataSet, databaseDataSet);
        databaseTester.onTearDown();
    }

    private static class DbunitTestExecutionListener extends AbstractTestExecutionListener {

        /** method to execute before the test method (initialization) */
        @Override
        public void beforeTestMethod(TestContext testContext) throws Exception {
            AbstractDbunitTransactionalJUnit4SpringContextTests testInstance =
                    (AbstractDbunitTransactionalJUnit4SpringContextTests) testContext.getTestInstance();
            Method method = testContext.getTestMethod();
            DbunitDataSets annotation = method.getAnnotation(DbunitDataSets.class);
            if (annotation == null) {
                return;
            }
            DataSource dataSource = testContext.getApplicationContext().getBean(DataSource.class);
            IDatabaseTester databaseTester = new DataSourceDatabaseTester(dataSource);
            databaseTester.setDataSet(
                    new XmlDataSet(ClassLoader.getSystemResourceAsStream(annotation.before())));
            databaseTester.onSetup();
            testInstance.databaseTester = databaseTester;
            testInstance.afterDatasetFileName = annotation.after();
        }
    }
}
</pre>
The static nested class DbunitTestExecutionListener extends AbstractTestExecutionListener, a part of the TestContext Framework. It is bound to the test lifecycle using the @TestExecutionListeners annotation on the test class.<br /><br />
Our base test class hooks into the test lifecycle at two points. The first is DbunitTestExecutionListener#beforeTestMethod, which gets executed before each test method. It checks for the @DbunitDataSets annotation on the current test method. If the annotation is present, a DBUnit database tester is initialized, and the ‘before’ dataset gets loaded into the database. The ‘after’ attribute value and the database tester are saved into fields of the test class for later use.<br /><br />
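The annotation lookup that the listener performs can be demonstrated in isolation with plain JDK reflection. This is a standalone sketch, not part of the article’s code; the SampleTest class and the datasets() helper are made up for illustration:<br /><br />
<pre class='brush: java'>
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

public class AnnotationLookupDemo {

    @Target(ElementType.METHOD)
    @Retention(RetentionPolicy.RUNTIME)
    @interface DbunitDataSets {
        String before();
        String after();
    }

    static class SampleTest {
        @DbunitDataSets(before = "initialDataset.xml", after = "expectedDataset.xml")
        public void test1() { }

        public void plainTest() { }
    }

    // Looks up @DbunitDataSets on the given method, just like the listener does
    public static String datasets(Class<?> cls, String methodName) throws Exception {
        Method method = cls.getDeclaredMethod(methodName);
        DbunitDataSets annotation = method.getAnnotation(DbunitDataSets.class);
        return annotation == null
                ? "no datasets"
                : annotation.before() + " / " + annotation.after();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(datasets(SampleTest.class, "test1"));
        System.out.println(datasets(SampleTest.class, "plainTest"));
    }
}
</pre>
In the real listener, the Method instance comes from testContext.getTestMethod() instead of a reflective lookup by name.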
The second hook is the assertAfterTransaction() method, marked with the @AfterTransaction annotation, which is also a part of the TestContext Framework. This annotation ensures the annotated method is executed after the transaction of the @Transactional-marked test method has completed. Here we use the previously saved databaseTester and afterDatasetFileName to compare the database state to the expected dataset using DBUnit.<br /><br />
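As a side note, when a full-dataset comparison is too strict, DBUnit can compare only selected columns, which somewhat reduces the coupling to the physical schema mentioned earlier. A minimal sketch (assuming the databaseTester and expectedDataSet variables from the listing above and a live database connection):<br /><br />
<pre class='brush: java'>
// Compare only the columns present in the expected dataset,
// ignoring any extra physical columns in the actual table
ITable expected = expectedDataSet.getTable("person");
ITable actual = databaseTester.getConnection().createTable("person");
ITable filteredActual = DefaultColumnFilter.includedColumnsTable(
        actual, expected.getTableMetaData().getColumns());
Assertion.assertEquals(expected, filteredActual);
</pre>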
Now let’s see what our test looks like.
<pre class='brush: java'>
@ContextConfiguration("classpath:testContext.xml")
public class JunitDbunitTest extends AbstractDbunitTransactionalJUnit4SpringContextTests {

    @Autowired
    private PersonDao personDao;

    @Test
    @Rollback(false)
    @DbunitDataSets(before = "initialDataset.xml", after = "expectedDataset.xml")
    @DirtiesContext
    public void test1() {
        personDao.save(new Person("Chip"));
        personDao.save(new Person("Dale"));
        personDao.save(new Person("Gadget"));
    }
}
</pre>
The @Rollback(false) annotation ensures that the transaction is committed after the test. The @DirtiesContext annotation states that the Spring application context has to be recreated before starting the next test in the class. Our custom @DbunitDataSets annotation defines the names of the files containing the initial and expected DBUnit datasets.<br /><br />
The drawback of this approach is the need to recreate the heavyweight Spring application context before each test method. After a test has finished, the database is polluted not only with business data (which we can easily purge with DBUnit, or with the AbstractTransactionalJUnit4SpringContextTests#deleteFromTables method) but also with the auxiliary tables and sequences of the persistence provider. Thus, each test method has to be marked with the @DirtiesContext annotation, which ensures recreating the Spring context and re-exporting the database schema before each subsequent test.<br /><br />
To skip the context refresh, we could re-export the database schema in a @Before method, eliminating the need for the @DirtiesContext annotation. But I decided against doing that in the base class, mostly to keep it decoupled from Hibernate. Besides, even such aggressive database cleanup would leave me unsure whether I had really eliminated all of the side effects of the test, such as Hibernate caching.<br /><br />
An abstract TestNG base test class is identical to the one presented here, except that it extends AbstractTransactionalTestNGSpringContextTests. For my own purposes, I’ve extracted DbunitTestExecutionListener into a separate class and implemented two base classes, one for each of these test frameworks.<br /><br />
The source code for the article is available on GitHub: <a href="https://github.com/forketyfork/spring-dao-test-demo">https://github.com/forketyfork/spring-dao-test-demo</a>