Couchbase Java SDK 2.0.0 Developer Preview 3
Originally written by Michael Nitschinger
On behalf of the whole SDK team I'm happy to announce the third developer preview of the Java/JVM SDK release train nicknamed Armstrong. It contains both the JVM core package "core-io" 0.3 as well as the Java SDK 2.0 developer preview 3.
This preview provides many more APIs which bring us closer to API completion (and a beta release to follow), as well as changes to configuration management and dependencies.
Here is how you can get it:
<dependencies>
  <dependency>
    <groupId>com.couchbase.client</groupId>
    <artifactId>couchbase-client</artifactId>
    <version>2.0.0-dp3</version>
  </dependency>
</dependencies>

<repositories>
  <repository>
    <id>couchbase</id>
    <name>couchbase repo</name>
    <url>http://files.couchbase.com/maven2</url>
    <snapshots><enabled>false</enabled></snapshots>
  </repository>
</repositories>
Dependency Changes
Every dependency increases the possibility of version conflicts with user libraries. In an effort to reduce the likelihood, we made the following changes:
- Getting rid of non-essential dependencies.
- Packaging internal-only dependencies into a "fat jar" and relocating their packages to avoid clashes.
We've removed the hard dependencies on SLF4J (while adding optional support for more logging frameworks) and on the Typesafe Config library. While removing the Config library potentially leaves us with fewer features when it comes to configuration management, we made sure to implement a nice builder DSL of our own, which can also be configured through system properties.
The "core-io" package now shadows internal dependencies like the LMAX Disruptor, Netty and Jackson. This not only makes it less likely that the SDK conflicts with other versions of those dependencies, but also allows us to freely upgrade those packages as we see fit, or even replace them entirely, without users noticing or needing to change anything in their application stack.
From this developer preview on forward, the SDK only has two explicit dependencies: the core-io package and RxJava. RxJava cannot be shadowed because it is the primary interface of our asynchronous APIs and a vital part of building applications with the SDK.
New Configuration Builder API & Connection String
One thing on the "todo" list became very important with the removal of the config library: a builder DSL for a cluster configuration with sensible defaults and the ability to configure them through system properties as well.
The ordering of settings is now as follows (highest precedence first):
- System Properties
- Builder settings
- Default values
This allows you to override default settings through the builder for all of your deployments, but in different environments override specific settings through system properties on the fly when you start up the JVM/container.
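The interplay of the three layers can be sketched as follows, using the environment builder and property names shown later in this post (exact names may still change before the beta):

```java
// Builder setting: the baseline for all deployments.
ClusterEnvironment environment = DefaultClusterEnvironment.create(
    DynamicClusterProperties.builder()
        .queryEnabled(false)
        .build()
);

// A system property set at JVM startup (e.g. -Dcom.couchbase.queryEnabled=true)
// takes precedence over the builder value above, which in turn overrides
// the built-in default.
CouchbaseCluster cluster = CouchbaseCluster.create(environment);
```

This way, a deployment-specific flag can flip a setting without touching the application code.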
Initial support for the "connection string" has also been added, which provides similar bootstrap semantics across all of our official SDKs. For that purpose, the construction of the "CouchbaseCluster" object has been changed to factory methods, which provide greater flexibility. If you don't care about custom settings, you can use one of these:
// Default settings & localhost
CouchbaseCluster.create();

// Default settings & vararg list of nodes
CouchbaseCluster.create("192.168.56.101", "192.168.56.102");

// Default settings & list of nodes
List<String> nodes = Arrays.asList("192.168.56.101", "192.168.56.102");
CouchbaseCluster.create(nodes);
If you want to use the connection string API (which is not covered here in detail) you can use the `fromConnectionString` method instead:
CouchbaseCluster.fromConnectionString("couchbase://192.168.56.101,192.168.56.102");
All of the shown methods have additional overloads that allow you to pass in a "ClusterEnvironment". The environment is a stateful component which also contains properties (the settings that you can change).
So, enabling N1QL querying for example works like this (and we still want to connect to localhost):
ClusterEnvironment environment = DefaultClusterEnvironment.create(
    DynamicClusterProperties.builder()
        .queryEnabled(true)
        .build()
);
CouchbaseCluster.create(environment);
Note that we might further streamline this API for the beta release, so expect slight changes in that area.
The environment contains, apart from the configurable properties, important bits and pieces of the SDK runtime (including various thread pools for proper functioning), and it is designed to be re-used across CouchbaseCluster objects. This means that it doesn't matter if you connect to multiple buckets in one cluster or to multiple clusters: resources can be shared and are optimally allocated.
If you don't want to use the builder, you can instead set system properties: the "dynamic properties" implementation, which is used by default, listens for specific properties as well. This is another way to enable querying (or even to override a builder setting):
System.setProperty("com.couchbase.queryEnabled", "true");
CouchbaseCluster cluster = CouchbaseCluster.create();
The Legacy Transcoder
To be compatible with older applications, we've implemented a LegacyTranscoder/LegacyDocument which sets and understands flags, compression and object serialization like the one of the 1.4.* series (and older) does. Since the APIs are not bound to any specific transcoder, it can be used through the same API as the JsonTranscoder (and the corresponding JsonDocument).
// Will be stored as a string on the server, can also be a JSON string
bucket.upsert(LegacyDocument.create("legacydoc-string", "mystring"));

// Will be stored as a number
bucket.upsert(LegacyDocument.create("legacydoc-int", 25));

// A POJO that will be serialized if serializable
bucket.upsert(LegacyDocument.create("legacydoc-pojo", user));
The data can be loaded as well:
// Loads a legacy document based on the ID.
bucket.get("legacydoc", LegacyDocument.class);

// Also loads the legacy document, inferring the document type from the input.
bucket.get(LegacyDocument.create("legacydoc"));
Changes to View & Query API
We made some slight changes to the View and N1QL query APIs to give you more information during error and debugging scenarios. Previously, all query methods returned the rows directly, but there is associated information available (query statistics, error messages, ...) which could not be exposed through such an interface.
Now the query methods return results, which themselves contain an observable to get at the actual rows or other information. Here is how you can query a view and also log debug information:
bucket
    .query(ViewQuery.from("design", "view"))
    .doOnNext(result -> {
        if (!result.success()) {
            System.err.println(result.error());
        }
    })
    .flatMap(ViewResult::rows)
    .subscribe(row -> System.out.println(row.id()));
You can use "doOnNext" to add a side effect and, if the result is not successful, log the error. If you don't care about anything other than the actual rows, you can use flatMap directly to get the rows out as before. We've also added the possibility to load the full document contents if "withDocuments" is set:
bucket
    .query(ViewQuery.from("design", "view").withDocuments())
    .flatMap(ViewResult::rows)
    .flatMap(ViewRow::document)
    .subscribe(System.out::println);
Since every ViewRow provides a way to load the full document, you can use flatMap to get the full document (which internally uses get calls) and then print it instead of just the ID.
The same API is used for N1QL queries, but "info" output is itself an Observable because internally it is streamed after the returned rows.
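As a rough sketch of what that looks like for N1QL: the query types and method names were still in flux in this preview, so treat the raw-statement overload, the `info()` accessor, and the row accessors below as assumptions rather than the final API.

```java
bucket
    .query("SELECT * FROM default LIMIT 5")
    .doOnNext(result -> {
        // "info" is itself an Observable because it is streamed
        // after the rows; subscribing early is fine with Rx.
        result.info().subscribe(info -> System.out.println("Info: " + info));
    })
    .flatMap(result -> result.rows())
    .subscribe(row -> System.out.println(row.value()));
```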
Persistence and Replication Constraints
The mutation APIs have been extended to support the Durability Requirements feature which gives you replication and persistence constraints. They are provided as simple overloads to keep the same semantics. Some examples:
bucket.insert(doc, PersistTo.ONE);
bucket.upsert(doc, PersistTo.MASTER, ReplicateTo.ONE);
bucket.replace(doc, ReplicateTo.THREE);
For users familiar with the current SDK, this will look very similar. You can provide PersistTo and/or ReplicateTo enums to specify which constraints you want to enforce. By default both are set to NONE, which means the observable completes successfully once the document is acknowledged by the server in its managed cache. With stricter arguments, the client internally blocks/checks in an efficient way until the constraints are satisfied or a failure occurs.
More APIs
In addition to making existing APIs more powerful and expressive, additional APIs have been added:
Get from replica
Additional read capabilities have been added which allow you to get potentially outdated (stale) data from the replica nodes. The API is identical to a regular get, but in addition you need to provide a ReplicaMode. There are two types of modes: either all possible documents are fetched (from the master and all the replicas), or the document from one specific replica is loaded. Here are some examples:
// will return 0 to N documents
bucket.getFromReplica("id", ReplicaMode.ALL);

// will return 0 or 1 document
bucket.getFromReplica("id", ReplicaMode.SECOND);
If you need a fallback where a regular get fails and you can live with stale data, you can chain it like this:
bucket
    .get("mydoc0")
    .onErrorResumeNext(bucket.getFromReplica("mydoc0", ReplicaMode.ALL))
    .first()
    .subscribe();
Note that since the "all" mode will potentially return more than one document, you can use the "first" method to pick only the first one that arrives.
Touch
Touching a document has the effect of resetting its expiry time. Both a "touch" and a "getAndTouch" method are exposed; they have the same effect, but the latter also retrieves the document.
// touch and set to 5 seconds expiry
Observable<Boolean> result = bucket.touch("id", 5);

// touch, set the expiry time and return the document
Observable<JsonDocument> result = bucket.getAndTouch("id", 5);
Lock & Unlock
Similar methods have been added to allow locking and unlocking of write-locked documents. Write-locking is done through the "getAndLock" method. A document can be unlocked either explicitly through the "unlock" command or implicitly by updating it with a matching CAS value. Also note that on the server side the document is unlocked automatically after 30 seconds.
// get and write-lock for 10 seconds
Observable<JsonDocument> doc = bucket.getAndLock("id", 10);

// unlock with the matching CAS
long cas = doc.toBlocking().single().cas();
Observable<Boolean> done = bucket.unlock("id", cas);
Counter
If you need atomic increment and decrement operations on a numeric document value, the counter method is here to help. You can supply the delta, an initial value, and an optional expiration time:
// increment by 5 and set the initial value to 5 too
Observable<LongDocument> result = bucket.counter("id", 5);

// increment by 5 and set the initial value to 0
Observable<LongDocument> result = bucket.counter("id", 5, 0);

// decrement by 3, set the initial value to 10 and the expiry to 5
Observable<LongDocument> result = bucket.counter("id", -3, 10, 5);
Note that a LongDocument is returned for convenience, which contains the atomically incremented/decremented result.
Append & Prepend
Finally, append and prepend document operations have been added. One note upfront: this does not work with JSON documents and should only be used with arbitrary byte structures or strings. We're considering some future enhancements here, so please let us know what you'd be interested to see. Also note that the document must already exist, so it needs to be created first if it does not. Here is an example that tries to append something to a document (using the legacy document) and, if it does not exist, creates it freshly:
LegacyDocument doc = LegacyDocument.create("doc", "1,");

bucket
    .append(doc)
    .onErrorResumeNext(throwable -> {
        if (throwable instanceof DocumentDoesNotExistException) {
            return bucket.insert(doc);
        } else {
            // propagate all other errors downstream
            return Observable.error(throwable);
        }
    })
    .subscribe();
The same API can be used for prepend.
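For completeness, here is a sketch of the prepend variant, mirroring the append example above (the document setup is illustrative):

```java
LegacyDocument doc = LegacyDocument.create("doc", "0,");

bucket
    .prepend(doc)
    .onErrorResumeNext(throwable -> {
        if (throwable instanceof DocumentDoesNotExistException) {
            // prepend fails if the document is missing, so create it
            return bucket.insert(doc);
        }
        return Observable.error(throwable);
    })
    .subscribe();
```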
The Road to Beta
The only missing API piece is cluster-level management APIs (bucket creation, deletion,...) which will be provided at Beta release. The focus now shifts towards documentation and stability. Before we lock down the API for the 2.0 release, we need your feedback and input to make it as easy and useful as possible. Also, let us know on which bits and pieces you'd like to see extensive documentation and guidance.