Chief architect at JBoss
London, SI
Joined Mar 2007
Manik Surtani is a core R&D engineer at JBoss and project lead on JBoss Cache. He has a background in artificial intelligence and neural networks, a field he left behind when he moved from academic circles to the commercial world. Since then, he has been working with Java-related technologies, first for a startup focusing on knowledge management and information exchange. He later worked for a large London-based consultancy as a tech lead building e-commerce applications on large-scale J2EE and peer-to-peer technology. Manik is a strong proponent of open source development methodologies, ethos, and collaborative processes, and often speaks at Java User Groups around the world.
Stats
Reputation: 629
Pageviews: 232.4K
Articles: 1
Comments: 15
Getting Started with Infinispan
Comments
Jun 17, 2011 · Wesley Hales
Sep 13, 2010 · Manik Surtani
Sep 13, 2010 · Manik Surtani
Sep 05, 2010 · Manik Surtani
Apr 01, 2010 · Mr B Loid
1) We support both READ_COMMITTED and REPEATABLE_READ, with READ_COMMITTED being the default. See this config reference link for details on how to configure this. This wiki page has more info as well.
2) Write skews only really occur when using REPEATABLE_READ. READ_COMMITTED just assumes the latest committed version and overwrites anyway; REPEATABLE_READ is where you have issues when, for example, updating a shared counter. Write skews are detected easily enough; how they are handled depends on how you configure Infinispan: either ignore the write skew and overwrite, or abort the transaction.
3) Yes, if using transactions.
4) MVCC is not necessarily timestamp-based - it just needs to be versioned, with a notion of the current committed version, older versions, etc. Rather than timestamps, we use object references and compare the reference of the known, committed version against an incoming update. This is how write skews are detected. In the distributed case, locks are acquired before updates take place to prevent a concurrent object reference swap. These locks are pretty short-lived, though (the duration of the reference swap, cluster-wide).
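The reference-comparison idea in point 4 can be illustrated with a small, self-contained sketch. This is a hypothetical class, not Infinispan's actual implementation: it models "compare the committed object reference against the one a transaction read" using an atomic compare-and-set.

```java
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical sketch, NOT Infinispan's real classes: illustrates detecting a
// write skew by comparing object references rather than timestamps.
public class RefVersionedEntry {
    // Reference to the currently committed value; swapped atomically on commit.
    private final AtomicReference<Object> committed =
            new AtomicReference<>(new Object());

    // A transaction remembers the reference it saw when it read the entry.
    public Object read() {
        return committed.get();
    }

    // Commit succeeds only if the committed reference is still the one we read.
    // If another transaction swapped it in the meantime, that is a write skew,
    // and the caller can overwrite or abort, depending on policy.
    public boolean tryCommit(Object seen, Object newValue) {
        return committed.compareAndSet(seen, newValue);
    }
}
```

A second committer still holding the stale reference fails the compare-and-set; whether to then overwrite or abort is the configuration choice mentioned in point 2.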
HTH
Manik
Aug 26, 2009 · Manik Surtani
Aug 26, 2009 · Manik Surtani
Aug 24, 2009 · Lowell Heddings
An Infinispan-based implementation for Hibernate is in the pipeline. Already implemented in Hibernate trunk, even. :-)
See https://jira.jboss.org/jira/browse/ISPN-6 and http://opensource.atlassian.com/projects/hibernate/browse/HHH-4103
- Manik
Aug 24, 2009 · Carol McDonald
An Infinispan-based implementation for Hibernate is in the pipeline. Already implemented in Hibernate trunk, even. :-)
See https://jira.jboss.org/jira/browse/ISPN-6 and http://opensource.atlassian.com/projects/hibernate/browse/HHH-4103
- Manik
Jan 06, 2009 · admin
Hi David
We do have some design documentation on the JBoss Cache wiki (including the MVCC locking designs), and while some components are out of date or not as well documented, you should still have a look. We do plan to bring the wiki up to date soon, but at the end of the day the source code (along with its comments) will always be your best documentation.
Cheers
Manik
Sep 22, 2008 · Lebon Bon Lebon
In your scenario it would be all about future-proofing. Allocating that memory to the DB would probably be better/faster, but allocating it to the app tier as a cache means you have more flexibility in the future to move the DB to a different machine, add more app servers, etc.
Sep 19, 2008 · Lebon Bon Lebon
I agree with your comment regarding the increased memory requirements - or, alternatively, the architectural changes to introduce dedicated cache servers and the corresponding complexity in testing that comes with them. But keep in mind that both memory and commodity hardware are cheap.
Think about the alternative: buy a bigger database server, which means throwing out your old one and possibly paying more in license fees if the new server has more CPUs - and this still doesn't make the system capable of scaling horizontally; it just pushes up your limits. Once these increased limits are hit, you have to repeat the process, incurring the costs and effort all over again. And then consider where you end up: what do you do when your DB is already running on the fastest piece of commodity hardware around and you need more performance? Run your DB on a mainframe? :-)
So I don't buy the idea that the increased cost/complexity of introducing caching is a barrier, provided caches are warranted in the first place (see the last paragraph in the article). As you say - and I agree with you - "... the strategy to cache must not be wholistic". Only use it where you know it is needed.
Cheers
Manik
Mar 21, 2007 · HP Thirteen