Infinispan 7.0 User Guide
Table of Contents
Welcome to the official Infinispan user guide. This comprehensive document will guide you through every last detail of Infinispan; however, it can be a poor starting point if you are new to Infinispan.
For newbies, starting with the Getting Started Guide or one of the quickstarts is probably a better bet. The FAQ and Glossary are also useful documents to have alongside this user guide.
1. Configuring cache
Infinispan offers both declarative and programmatic configuration.
Declarative configuration comes in the form of an XML document that adheres to a provided Infinispan configuration XML schema.
Every aspect of Infinispan that can be configured declaratively can also be configured programmatically. In fact, declarative configuration, behind the scenes, invokes the programmatic configuration API as the XML configuration file is being processed. One can even use a combination of these approaches. For example, you can read static XML configuration files and at runtime programmatically tune that same configuration. Or you can use a certain static configuration defined in XML as a starting point or template for defining additional configurations at runtime.
There are two main configuration abstractions in Infinispan: global and default sections.
Global configuration Global cache configuration defines global settings shared among all cache instances created by a single CacheManager. Shared resources like thread pools, serialization/marshalling settings, transport and network settings, and JMX domains are all part of the global configuration.
Default configuration Default cache configuration is more specific to actual caching domain itself. It specifies eviction, locking, transaction, clustering, cache store settings etc. The default cache can be retrieved via the CacheManager.getCache() API.
Named caches However, the real power of default cache mechanism comes to light when used in conjunction with named caches. Named caches have the same XML schema as the default cache. Whenever they are specified, named caches inherit settings from the default cache while additional behavior can be specified or overridden. Named caches are retrieved via the CacheManager.getCache(String name) API. Therefore, note that the name attribute of named cache is both mandatory and unique for every named cache specified.
Do not forget to refer to the Infinispan configuration reference for more details.
1.1. Configuring Cache declaratively
One of the major goals of Infinispan is to aim for zero configuration. A simple XML configuration file containing nothing more than a single infinispan element is enough to get you started. The configuration file listed below provides sensible defaults and is perfectly valid.
infinispan.xml
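A minimal sketch of such a file (nothing more than the single infinispan root element, as the text states; any schema declaration is omitted here for brevity) could be:

```xml
<infinispan/>
```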
However, that would only give you the most basic, local mode, non-clustered cache. Non-basic configurations are very likely to use customized global and default cache elements.
Declarative configuration is the most common approach to configuring Infinispan cache instances. In order to read XML configuration files, one would typically construct an instance of DefaultCacheManager by pointing to an XML file containing the Infinispan configuration. Once the configuration file is read, you can obtain a reference to the default cache instance.
EmbeddedCacheManager manager = new DefaultCacheManager("my-config-file.xml");
Cache defaultCache = manager.getCache();
or any other named instance specified in my-config-file.xml.
Cache someNamedCache = manager.getCache("someNamedCache");
The name of the default cache is defined in the <cache-container> element of the XML configuration file, and additional caches can be configured using the <local-cache>, <distributed-cache>, <invalidation-cache> or <replicated-cache> elements.
Refer to the Infinispan configuration reference for more details. If you are using XML editing tools for configuration writing, you can use the provided Infinispan schema to assist you.
1.2. Configuring cache programmatically
Programmatic Infinispan configuration is centered around the CacheManager and ConfigurationBuilder API. Although every single aspect of Infinispan configuration could be set programmatically, the most usual approach is to create a starting point in the form of an XML configuration file and then, if needed, programmatically tune a specific configuration at runtime to suit the use case best.
EmbeddedCacheManager manager = new DefaultCacheManager("my-config-file.xml");
Cache defaultCache = manager.getCache();
Let's assume that a new synchronously replicated cache is to be configured programmatically. First, a fresh Configuration object is created using the ConfigurationBuilder helper, and the cache mode is set to synchronous replication. Finally, the configuration is defined/registered with a manager.
Configuration c = new ConfigurationBuilder().clustering().cacheMode(CacheMode.REPL_SYNC).build();
String newCacheName = "repl";
manager.defineConfiguration(newCacheName, c);
Cache<String, String> cache = manager.getCache(newCacheName);
The default cache configuration (or any other cache configuration) can be used as a starting point for the creation of a new cache. For example, let's say that infinispan-config-file.xml specifies a replicated cache as a default, and that a distributed cache is desired with a specific L1 lifespan, while at the same time retaining all other aspects of the default cache. Therefore, the starting point would be to read an instance of the default Configuration object and use ConfigurationBuilder to construct and modify cache mode and L1 lifespan on a new Configuration object. As a final step, the configuration is defined/registered with a manager.
EmbeddedCacheManager manager = new DefaultCacheManager("infinispan-config-file.xml");
Configuration dcc = manager.getDefaultCacheConfiguration();
Configuration c = new ConfigurationBuilder().read(dcc).clustering().cacheMode(CacheMode.DIST_SYNC).l1().lifespan(60000L).build();
String newCacheName = "distributedWithL1";
manager.defineConfiguration(newCacheName, c);
Cache<String, String> cache = manager.getCache(newCacheName);
As long as the base configuration is the default named cache, the previous code works perfectly fine. However, at other times the base configuration might be another named cache. So, how can new configurations be defined based on other defined caches? Take the previous example and imagine that instead of taking the default cache as the base, a named cache called "replicatedCache" is used as the base. The code would look something like this:
EmbeddedCacheManager manager = new DefaultCacheManager("infinispan-config-file.xml");
Configuration rc = manager.getCacheConfiguration("replicatedCache");
Configuration c = new ConfigurationBuilder().read(rc).clustering().cacheMode(CacheMode.DIST_SYNC).l1().lifespan(60000L).build();
String newCacheName = "distributedWithL1";
manager.defineConfiguration(newCacheName, c);
Cache<String, String> cache = manager.getCache(newCacheName);
Refer to the CacheManager, ConfigurationBuilder and Configuration javadocs for more details.
1.2.1. ConfigurationBuilder Programmatic Configuration API
However, users do not have to first read an XML-based configuration and then modify it programmatically; they can start from scratch using only the programmatic API. This is where the powerful ConfigurationBuilder API comes to shine. The aim of this API is to make it easier to chain coding of configuration options, in order to speed up the coding itself and make the configuration more readable. This new configuration can be used for both the global and the cache level configuration. GlobalConfiguration objects are constructed using GlobalConfigurationBuilder, while Configuration objects are built using ConfigurationBuilder. Let's look at some examples of configuring both global and cache level options with this new API:
One of the most commonly configured global options is the transport layer, where you indicate how an Infinispan node will discover the others:
GlobalConfiguration globalConfig = new GlobalConfigurationBuilder().transport()
      .defaultTransport()
      .clusterName("qa-cluster")
      .addProperty("configurationFile", "jgroups-tcp.xml")
      .machineId("qa-machine").rackId("qa-rack")
      .build();
Sometimes you might also want to enable collection of global JMX statistics at cache manager level, or get information about the transport. To enable global JMX statistics simply do:
GlobalConfiguration globalConfig = new GlobalConfigurationBuilder()
      .globalJmxStatistics()
      .enable()
      .build();
Please note that by not enabling (or by explicitly disabling) global JMX statistics you are simply turning off statistics collection. The corresponding MBean is still registered and can be used to manage the cache manager in general, but the statistics attributes do not return meaningful values.
Further options at the global JMX statistics level allow you to configure the cache manager name, which comes in handy when you have multiple cache managers running on the same system, or how to locate the JMX MBean Server:
GlobalConfiguration globalConfig = new GlobalConfigurationBuilder()
      .globalJmxStatistics()
      .cacheManagerName("SalesCacheManager")
      .mBeanServerLookup(new JBossMBeanServerLookup())
      .build();
Some Infinispan features are powered by a group of thread pool executors, which can also be tweaked at this global level. For example:
GlobalConfiguration globalConfig = new GlobalConfigurationBuilder()
      .replicationQueueThreadPool()
      .threadPoolFactory(ScheduledThreadPoolExecutorFactory.create())
      .build();
You can not only configure global, cache manager level, options, but you can also configure cache level options such as the cluster mode:
Configuration config = new ConfigurationBuilder()
      .clustering()
      .cacheMode(CacheMode.DIST_SYNC)
      .l1().lifespan(25000L)
      .hash().numOwners(3)
      .build();
Or you can configure eviction and expiration settings:
Configuration config = new ConfigurationBuilder()
      .eviction()
      .maxEntries(20000).strategy(EvictionStrategy.LIRS).expiration()
      .wakeUpInterval(5000L)
      .maxIdle(120000L)
      .build();
An application might also want to interact with an Infinispan cache within the boundaries of JTA, and to do that you need to configure the transaction layer and optionally tweak the locking settings. When interacting with transactional caches, you might want to enable recovery to deal with transactions that finished with a heuristic outcome, and if you do that, you will often want to enable JMX management and statistics gathering too:
Configuration config = new ConfigurationBuilder()
      .locking()
      .concurrencyLevel(10000).isolationLevel(IsolationLevel.REPEATABLE_READ)
      .lockAcquisitionTimeout(12000L).useLockStriping(false).writeSkewCheck(true)
      .versioning().enable().scheme(VersioningScheme.SIMPLE)
      .transaction()
      .transactionManagerLookup(new GenericTransactionManagerLookup())
      .recovery()
      .jmxStatistics()
      .build();
Configuring Infinispan with chained cache stores is simple too:
Configuration config = new ConfigurationBuilder()
      .loaders()
      .shared(false).passivation(false).preload(false)
      .addFileCacheStore().location("/tmp").streamBufferSize(1800).async().enable().threadPoolSize(20)
      .build();
1.2.2. Advanced programmatic configuration
The fluent configuration can also be used to configure more advanced or exotic options, such as advanced externalizers:
GlobalConfiguration globalConfig = new GlobalConfigurationBuilder()
      .serialization()
      .addAdvancedExternalizer(998, new PersonExternalizer())
      .addAdvancedExternalizer(999, new AddressExternalizer())
      .build();
Or, add custom interceptors:
Configuration config = new ConfigurationBuilder()
      .customInterceptors().addInterceptor()
      .interceptor(new FirstInterceptor()).position(InterceptorConfiguration.Position.FIRST)
      .interceptor(new LastInterceptor()).position(InterceptorConfiguration.Position.LAST)
      .interceptor(new FixPositionInterceptor()).index(8)
      .interceptor(new AfterInterceptor()).after(NonTransactionalLockingInterceptor.class)
      .interceptor(new BeforeInterceptor()).before(CallInterceptor.class)
      .build();
For information on the individual configuration options, please check the configuration reference.
1.3. Configuration Migration Tools
Infinispan has a number of scripts for importing configurations from other cache and data grid products. Currently we have scripts to import configurations from:
JBoss Cache 3.x
EHCache 1.x
Oracle Coherence 3.x
JBoss Cache 3.x itself supports configuration
from previous (2.x) versions, so JBoss Cache 2.x configurations can be migrated indirectly.
If you wish to help write conversion tools for other caching systems, please contact the infinispan-dev mailing list (https://lists.jboss.org/mailman/listinfo/infinispan-dev).
There is a single script for importing configurations, ${INFINISPAN_HOME}/bin/importConfig.sh, and an equivalent .BAT script for Windows. Just run it and you should get a help message to assist you with the import:
C:\infinispan\bin> importConfig.bat
Missing 'source', cannot proceed
importConfig [-source <the file to be transformed>]
             [-destination <where to store resulting XML>]
             [-type <the type of the source, possible values being: [JBossCache3x, Ehcache1x, Coherence35x] >]
C:\infinispan\bin>
Here is how a JBoss Cache 3.x configuration file is imported:
C:\infinispan\bin>importConfig.bat -source in\jbosscache_all.xml -destination out.xml -type JBossCache3x
WARNING! Preload elements cannot be automatically transformed, please do it manually!
WARNING! Please configure cache loader props manually!
WARNING! Singleton store was changed and needs to be configured manually!
IMPORTANT: Please take a look at the generated file for (possible) TODOs about the elements that couldn't be converted automatically!
New configuration file [out.xml] successfully created.
C:\infinispan\bin>
Please read all warning messages carefully and inspect the generated XML for potential TODO statements that indicate the need for manual intervention. In the case of JBoss Cache 3.x this would usually have to do with custom extensions, such as custom CacheLoaders that cannot be automatically migrated.
For EHCache and Coherence these may also contain suggestions and warnings for configuration options that may not have direct equivalents in Infinispan.
1.4. Clustered Configuration
Infinispan uses JGroups for network communications when in clustered mode. Infinispan ships with pre-configured JGroups stacks that make it easy for you to jump-start a clustered configuration.
1.4.1. Using an external JGroups file
If you are configuring your cache programmatically, all you need to do is:
GlobalConfiguration gc = new GlobalConfigurationBuilder()
      .transport().defaultTransport()
      .addProperty("configurationFile", "jgroups.xml")
      .build();
and if you happen to use an XML file to configure Infinispan, just use:
<jgroups>
   <stack-file name="external-file" path="jgroups.xml"/>
</jgroups>
<cache-container default-cache="replicatedCache">
   <transport stack="external-file"/>
   <replicated-cache name="replicatedCache"/>
</cache-container>
In both cases above, Infinispan looks for jgroups.xml first in your classpath, and then for an absolute path name if not found in the classpath.
1.4.2. Use one of the pre-configured JGroups files
Infinispan ships with a few different JGroups files (packaged in infinispan-core.jar) which means they will already be on your classpath by default. All you need to do is specify the file name, e.g., instead of jgroups.xml above, specify /default-configs/default-jgroups-tcp.xml.
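Declaratively, a sketch of this (reusing the stack-file element shown earlier; the stack name "tcp" and the bundled file path are illustrative, taken from the list below) could be:

```xml
<jgroups>
   <stack-file name="tcp" path="default-configs/default-jgroups-tcp.xml"/>
</jgroups>
```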
The configurations available are:
default-jgroups-udp.xml - Uses UDP as a transport, and UDP multicast for discovery. Usually suitable for larger (over 100 nodes) clusters or if you are using replication or invalidation. Minimises opening too many sockets.
default-jgroups-tcp.xml - Uses TCP as a transport and UDP multicast for discovery. Better for smaller clusters (under 100 nodes), and only if you are using distribution, as TCP is more efficient as a point-to-point protocol.
default-jgroups-ec2.xml - Uses TCP as a transport and S3_PING for discovery. Suitable on Amazon EC2 nodes where UDP multicast isn’t available.
Tuning JGroups settings
The settings above can be further tuned without editing the XML files themselves. Passing in certain system properties to your JVM at startup can affect the behaviour of some of these settings. The table below shows you which settings can be configured in this way. E.g.,
$ java -cp ... -Djgroups.tcp.port=1234 -Djgroups.tcp.address=10.11.12.13
Table 1. default-jgroups-udp.xml
System Property Description Default Required?
jgroups.udp.mcast_addr IP address to use for multicast (both for communications and discovery). Must be a valid class D IP address, suitable for IP multicast. 228.6.7.8 No
jgroups.udp.mcast_port Port to use for multicast socket 46655 No
jgroups.udp.ip_ttl Specifies the time-to-live (TTL) for IP multicast packets. The value here refers to the number of network hops a packet is allowed to make before it is dropped 2 No
Table 2. default-jgroups-tcp.xml
System Property Description Default Required?
jgroups.tcp.address IP address to use for the TCP transport. 127.0.0.1 No
jgroups.tcp.port Port to use for TCP socket 7800 No
jgroups.udp.mcast_addr IP address to use for multicast (for discovery). Must be a valid class D IP address, suitable for IP multicast. 228.6.7.8 No
jgroups.udp.mcast_port Port to use for multicast socket 46655 No
jgroups.udp.ip_ttl Specifies the time-to-live (TTL) for IP multicast packets. The value here refers to the number of network hops a packet is allowed to make before it is dropped 2 No
Table 3. default-jgroups-ec2.xml
System Property Description Default Required?
jgroups.tcp.address IP address to use for the TCP transport. 127.0.0.1 No
jgroups.tcp.port Port to use for TCP socket 7800 No
jgroups.s3.access_key The Amazon S3 access key used to access an S3 bucket
jgroups.s3.secret_access_key The Amazon S3 secret key used to access an S3 bucket
jgroups.s3.bucket Name of the Amazon S3 bucket to use. Must be unique and must already exist
1.4.3. Further reading
JGroups also supports more system property overrides, details of which can be found in the JGroups documentation.
In addition, the JGroups configuration files shipped with Infinispan are intended as a jumping-off point to getting something up and running, and working. More often than not, though, you will want to fine-tune your JGroups stack further to extract every ounce of performance from your network equipment. For this, your next stop should be the JGroups manual, which has a detailed section on configuring each of the protocols you see in a JGroups configuration file.
1.5. Dynamically Start and Stop Clustered Cache
1.5.1. Library Mode
Starting and stopping a cache in non-clustered mode is simple. You can use EmbeddedCacheManager.defineConfiguration(cacheName, configuration) to define a cache, and then call EmbeddedCacheManager.getCache(cacheName).
If you don’t define a specific configuration for the cache and directly call EmbeddedCacheManager.getCache(…​), then a new cache is created with the default configuration.
To stop a cache, call EmbeddedCacheManager.removeCache(cacheName).
To start a clustered cache, you’ll need to do the above on every clustered node, while making sure the cache mode is clustered, of course.
You can start the cache by calling EmbeddedCacheManager.getCache(…​). To do this on every single node, though, you could write your own service to do that, use JMX, or use the DistributedExecutorService.
For example, write a StartCacheCallable class:
StartCacheCallable.java
public class StartCacheCallable<K, V> implements DistributedCallable<K, V, Void>, Serializable {
   private static final long serialVersionUID = 2636780L;
   private final String cacheName;
   private transient Cache<K, V> cache;

   public StartCacheCallable(String cacheName) {
      this.cacheName = cacheName;
   }

   @Override
   public Void call() throws Exception {
      // Starts the named cache on the local node
      cache.getCacheManager().getCache(cacheName);
      return null;
   }

   @Override
   public void setEnvironment(Cache<K, V> cache, Set<K> inputKeys) {
      this.cache = cache;
   }
}
Then submit the task to all nodes:
DistributedExecutorService des = new DefaultExecutorService(existingCacheSuchAsDefaultCache);
List<Future<Void>> list = des.submitEverywhere(new StartCacheCallable<K, V>(cacheName));
for (Future<Void> future : list) {
   try {
      future.get(); // wait for the cache to be started on each node
   } catch (InterruptedException e) {
      e.printStackTrace();
   } catch (ExecutionException e) {
      e.printStackTrace();
   }
}
1.5.2. Server Mode
The Hot Rod client does not support dynamically starting/stopping caches.
2. The Cache APIs
2.1. The Cache interface
Infinispan exposes a simple Cache interface.
The Cache interface exposes simple methods for adding, retrieving and removing entries, including atomic mechanisms exposed by the JDK’s ConcurrentMap interface. Based on the cache mode used, invoking these methods will trigger a number of things to happen, potentially even including replicating an entry to a remote node or looking up an entry from a remote node, or potentially a cache store.
For simple usage, using the Cache API should be no different from using the JDK Map API, and hence migrating from simple in-memory caches based on a Map to Infinispan’s Cache should be trivial.
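As a plain-Java sketch of that claim (using a ConcurrentHashMap as a stand-in, since Cache follows the same ConcurrentMap contract; the demo method and its names are purely illustrative, not part of the Infinispan API):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class MapCacheDemo {
    // Runs the basic Map-style operations a migration would involve and
    // returns the value read back. A real application would obtain the
    // map from manager.getCache() instead of constructing it here.
    public static String demo(ConcurrentMap<String, String> cache) {
        cache.put("hello", "world");                 // basic add
        cache.putIfAbsent("hello", "ignored");       // atomic add-if-missing (no-op here)
        String value = cache.get("hello");           // retrieve
        cache.replace("hello", "world", "universe"); // atomic compare-and-set
        cache.remove("hello");                       // remove
        return value;
    }

    public static void main(String[] args) {
        System.out.println(demo(new ConcurrentHashMap<>())); // prints "world"
    }
}
```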
2.1.1. Performance Concerns of Certain Map Methods
Certain methods exposed in Map have certain performance consequences when used with Infinispan, such as size(), values(), keySet() and entrySet(). Specific methods on the keySet, values and entrySet are fine for use; please see their Javadoc for further details.
Attempting to perform these operations globally would have a large performance impact as well as become a scalability bottleneck. As such, these methods should only be used for informational or debugging purposes.
It should be noted that using certain flags with the size() method can mitigate some of these concerns; please check each method’s documentation for more details.
2.1.2. Mortal and Immortal Data
Further to simply storing entries, Infinispan’s cache API allows you to attach mortality information to data. For example, simply using put(key, value) would create an immortal entry, i.e., an entry that lives in the cache forever, until it is removed (or evicted from memory to prevent running out of memory). If, however, you put data in the cache using put(key, value, lifespan, timeunit), this creates a mortal entry, i.e., an entry that has a fixed lifespan and expires after that lifespan.
In addition to lifespan , Infinispan also supports maxIdle as an additional metric with which to determine expiration. Any combination of lifespans or maxIdles can be used.
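The difference between the two metrics can be sketched in plain Java (this is an illustration of the semantics only, not Infinispan's internal implementation; the isExpired helper and its signature are hypothetical): lifespan is measured from an entry's creation time, maxIdle from its last access, and whichever limit is hit first expires the entry.

```java
public class ExpiryDemo {
    // Millisecond timestamps; -1 means "not set".
    public static boolean isExpired(long now, long created, long lastUsed,
                                    long lifespan, long maxIdle) {
        boolean lifespanExpired = lifespan > -1 && now > created + lifespan;
        boolean idleExpired = maxIdle > -1 && now > lastUsed + maxIdle;
        return lifespanExpired || idleExpired;
    }

    public static void main(String[] args) {
        // Entry created at t=0, last touched at t=80, lifespan=100, maxIdle=50
        System.out.println(isExpired(90, 0, 80, 100, 50));  // false: within both limits
        System.out.println(isExpired(150, 0, 80, 100, 50)); // true: lifespan exceeded
        System.out.println(isExpired(140, 0, 80, -1, 50));  // true: idle limit exceeded
    }
}
```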
2.1.3. Example of Using Expiry and Mortal Data
See these examples of using mortal data with Infinispan.
2.1.4. putForExternalRead operation
Infinispan’s Cache interface contains a different 'put' operation called putForExternalRead. This operation is particularly useful when Infinispan is used as a temporary cache for data that is persisted elsewhere. Under heavy read scenarios, contention in the cache should not delay the real transactions at hand, since caching should just be an optimization and not something that gets in the way.
To achieve this, putForExternalRead acts as a put call that only operates if the key is not present in the cache, and fails fast and silently if another thread is trying to store the same key at the same time. In this particular scenario, caching data is a way to optimise the system, and it’s not desirable that a failure in caching affects the on-going transaction, which is why failure is handled differently. putForExternalRead is considered to be a fast operation because, regardless of whether it’s successful or not, it doesn’t wait for any locks, and so returns to the caller promptly.
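The closest plain-JDK analogue to this contract is ConcurrentMap.putIfAbsent, which also writes only when the key is absent and leaves an existing value untouched. The sketch below is a stand-in using ConcurrentHashMap, not Infinispan's implementation (which additionally avoids waiting on locks and suppresses failures); the helper method and names are hypothetical:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ExternalReadDemo {
    // Mimics the "store only if absent, never disturb existing data" contract.
    public static boolean putForExternalRead(ConcurrentMap<String, String> cache,
                                             String key, String value) {
        // putIfAbsent returns null only when our value was actually stored
        return cache.putIfAbsent(key, value) == null;
    }

    public static void main(String[] args) {
        ConcurrentMap<String, String> cache = new ConcurrentHashMap<>();
        System.out.println(putForExternalRead(cache, "id1", "personA")); // true: stored
        System.out.println(putForExternalRead(cache, "id1", "personB")); // false: key taken
        System.out.println(cache.get("id1"));                            // personA
    }
}
```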
To understand how to use this operation, let’s look at a basic example. Imagine a cache of Person instances, each keyed by a PersonId, whose data originates in a separate data store. The following code shows the most common pattern of using putForExternalRead within the context of this example:
PersonId id = ...;
Cache<PersonId, Person> cache = ...;

// Check the cache first
Person cachedPerson = cache.get(id);

if (cachedPerson == null) {
   // The person is not cached yet, so query the data store with the id
   Person person = dataStore.lookup(id);

   // Cache it for future requests; fails fast and silently under contention
   cache.putForExternalRead(id, person);
} else {
   // Cache hit; use the cached instance
   return cachedPerson;
}
Please note that putForExternalRead should never be used as a mechanism to update the cache with a new Person instance originating from application execution (i.e. from a transaction that modifies a Person’s address). When updating cached values, please use the standard put operation, otherwise the possibility of caching corrupt data is likely.
2.2. The AdvancedCache interface
In addition to the simple Cache interface, Infinispan offers an AdvancedCache interface, geared towards extension authors. The AdvancedCache offers the ability to inject custom interceptors, access certain internal components and apply flags to alter the default behavior of certain cache methods. The following code snippet depicts how an AdvancedCache can be obtained:
AdvancedCache advancedCache = cache.getAdvancedCache();
2.2.1. Flags
Flags are applied to regular cache methods to alter the behavior of certain methods. For a list of all available flags, and their effects, see the Flag enumeration. Flags are applied using AdvancedCache.withFlags(). This builder method can be used to apply any number of flags to a cache invocation, for example:
advancedCache.withFlags(Flag.CACHE_MODE_LOCAL, Flag.SKIP_LOCKING)
.withFlags(Flag.FORCE_SYNCHRONOUS)
   .put("hello", "world");
2.2.2. Entry Retrieval
It is possible to retrieve all of the entries stored in a given cache, irrespective of its clustering configuration. These entries are only retrieved through an iterator where each value is only returned one at a time. This is done because of the possible memory constraints of pulling all the values into a single node at the same time.
An EntryIterable is available by invoking the filterEntries method on AdvancedCache. Note that you are required to provide a KeyValueFilter; this is to make sure users realize this can be an expensive operation, and filtering entries on the remote side allows the operation to perform faster and more efficiently. This allows you to iterate over the contents of the cache in a memory-sensitive way, so out-of-memory errors should not occur. The amount of content held in memory by the iterator while being processed is currently limited by the state transfer chunk size configuration value.
Once the EntryIterable is retrieved, invocation of the iterator method will immediately start retrieving entries. Note each invocation of the iterator method will start a brand new entry request thus allowing the Iterable to be reused.
EntryIterable also implements AutoCloseable so it should be done in a try with resource block to ensure that all resources can be closed properly after invocation.
KeyValueFilter<Object, Object> filter = ...
try (EntryIterable<Object, Object> iterable = advancedCache.filterEntries(filter)) {
   for (CacheEntry<Object, Object> entry : iterable) {
      // process the entry
   }
}
Transactional Aware
The iterators produced will obey the current transaction if there is one when they are generated. Note the transaction they use is the one that is found in the thread when filterEntries is first invoked. Thus you should only access this iterator from the same thread, or else undefined behavior may occur.
Since we cannot put all entries into the local transaction any entries retrieved using the iterator are not added to the transaction context. This means that the iterator will behave always in a way similar to a Read Committed isolation level even if Repeatable Read is enabled for example. If an entry was previously put in the context it will use this value, including if it was removed, in which case it will not be returned from the iterator.
Iterator remove
We also support the remove operation on the iterator. In a non-transactional cache this will immediately remove the key from the cache. In a transactional cache this will instead be added to the existing transaction context if there is one; otherwise an implicit transaction will be generated for it.
Value conversion
While the provided filter can be used to efficiently reduce what entries are returned to the local node, there is also the possibility of providing an optional Converter which will convert the provided value to another object or even type; this is done on the remote side. This is useful to reduce payload size when you may want only a partial view of the object, or even an object that is created from it.
In this case we have a converter that can be used to convert a Car instance to instead return one of its wheels as determined by the value passed to the converter when created.
public class CarWheelConverter implements Converter<String, Car, Wheel> {
   private final int wheelPosition;

   public CarWheelConverter(int wheelPosition) {
      this.wheelPosition = wheelPosition;
   }

   @Override
   public Wheel convert(String key, Car value, Metadata metadata) {
      return value.getWheels().wheel(wheelPosition);
   }
}

try (CloseableIterable<CacheEntry<String, Wheel>> iterable =
        advancedCache.filterEntries(teslaCarFilter).converter(new CarWheelConverter(3))) {
   for (CacheEntry<String, Wheel> entry : iterable) {
      // process the wheel entries
   }
}
Remember that both the KeyValueFilter and the Converter must either implement Serializable or have a provided Infinispan Externalizer.
2.2.3. Custom Interceptors
The AdvancedCache interface also offers advanced developers a mechanism with which to attach custom interceptors. Custom interceptors allow developers to alter the behavior of the cache API methods, and the AdvancedCache interface allows developers to attach these interceptors programmatically, at run-time. See the AdvancedCache Javadocs for more details.
For more information on writing custom interceptors, see the Custom Interceptors chapter.
2.3. Listeners and Notifications
Infinispan offers a listener API, where clients can register for and get notified when events take place. This annotation-driven API applies to 2 different levels: cache level events and cache manager level events.
Events trigger a notification which is dispatched to listeners. Listeners are simple POJOs annotated with @Listener and registered using the methods defined in the Listenable interface.
Both Cache and CacheManager implement Listenable, which means you can attach listeners to either a cache or a cache manager, to receive either cache-level or cache manager-level notifications.
For example, the following class defines a listener to print out some information every time a new entry is added to the cache:
@Listener
public class PrintWhenAdded {

   @CacheEntryCreated
   public void print(CacheEntryCreatedEvent event) {
      System.out.println("New entry " + event.getKey() + " created in the cache");
   }
}
For more comprehensive examples, please see the Javadocs on @Listener.
2.3.1. Cache-level notifications
Cache-level events occur on a per-cache basis, and by default are only raised on nodes where the events occur. Note in a distributed cache these events are only raised on the owners of data being affected. Examples of cache-level events are entries being added, removed, modified, etc. These events trigger notifications to listeners registered to a specific cache.
Please see the Javadocs on the org.infinispan.notifications.cachelistener.annotation package for a comprehensive list of all cache-level notifications available in Infinispan, and their respective method-level annotations.
Cluster Listeners
The cluster listeners should be used when it is desirable to listen to the cache events on a single node.
To do so, all that is required is to annotate your listener as being clustered.
@Listener (clustered = true)
public class MyClusterListener { .... }
There are some limitations to cluster listeners compared to non-clustered listeners:
A cluster listener can only listen to @CacheEntryModified, @CacheEntryCreated and @CacheEntryRemoved events. Note this means any other type of event will not be listened to for this listener.
Only the post event is sent to a cluster listener; the pre event is ignored.
Event filtering and conversion
All applicable events on the node where the listener is installed will be raised to the listener. It is possible to dynamically filter what events are raised by using a KeyFilter (which only allows filtering on keys) or a CacheEventFilter (used to filter for keys, old value, old metadata, new value, new metadata, whether the command was retried, whether the event is before the event (i.e. isPre), and also the command type).
The example here shows a simple KeyFilter that will only allow events to be raised when an event modified the entry for the key "Only Me".
public class SpecificKeyFilter implements KeyFilter<String> {
   private final String keyToAccept;

   public SpecificKeyFilter(String keyToAccept) {
      if (keyToAccept == null) {
         throw new NullPointerException();
      }
      this.keyToAccept = keyToAccept;
   }

   public boolean accept(String key) {
      return keyToAccept.equals(key);
   }
}

cache.addListener(listener, new SpecificKeyFilter("Only Me"));
This can be useful when you want to limit what events you receive in a more efficient manner.
There is also a
that can be supplied that allows for converting a value to another before raising the event. This can be nice to modularize any code that does value conversions.
The mentioned filters and converters are especially beneficial when used in conjunction with a Cluster Listener. This is because the filtering and conversion is done on the node where the event originated and not on the node where event is listened to. This can provide benefits of not having to replicate events across the cluster (filter) or even have reduced payloads (converter).
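The filter-then-convert flow can be sketched in plain Java (using java.util.function types rather than the Infinispan filter API; raise, onlyMe and toLength are invented names for illustration):

```java
import java.util.Optional;
import java.util.function.Function;
import java.util.function.Predicate;

// Sketch of why filtering and converting at the event source pays off: events
// that fail the filter are dropped before any payload is built, and only the
// (possibly smaller) converted value is shipped to the listener.
public class FilterConvertSketch {
    public static <K, V, C> Optional<C> raise(K key, V value,
                                              Predicate<K> filter, Function<V, C> converter) {
        if (!filter.test(key)) return Optional.empty(); // filtered out at the source
        return Optional.of(converter.apply(value));     // reduced payload travels onward
    }

    public static void main(String[] args) {
        Predicate<String> onlyMe = "Only Me"::equals;        // mirrors SpecificKeyFilter
        Function<String, Integer> toLength = String::length; // payload-reducing converter
        System.out.println(raise("Only Me", "some value", onlyMe, toLength)); // Optional[10]
        System.out.println(raise("Other", "some value", onlyMe, toLength));   // Optional.empty
    }
}
```

With a cluster listener the same shape applies, except the filter and converter run on the node where the event originated, so dropped and shrunk events never cross the network.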
Initial State Events
When a listener is installed it will only be notified of events after it is fully installed.
It may be desirable to get the current state of the cache contents upon first registration of listener by having an event generated of type @CacheEntryCreated for each element in the cache. Any additionally generated events during this initial phase will be queued until appropriate events have been raised.
This only works for clustered listeners at this time.
covers adding this for non clustered listeners.
Duplicate Events
It is possible in a non transactional cache to receive duplicate events. This is possible when the primary owner of a key goes down while trying to perform a write operation such as a put.
Infinispan internally will rectify the put operation by sending it to the new primary owner for the given key automatically, however there are no guarantees as to whether the write was first replicated to backups. Thus more than one of the following write events (CacheEntryCreatedEvent, CacheEntryModifiedEvent & CacheEntryRemovedEvent) may be sent for a single operation.
If more than one event is generated, Infinispan will mark the event as generated by a retried command, to help the user know when this occurs without having to pay attention to view changes.
@Listener
public class MyRetryListener {
   @CacheEntryModified
   public void entryModified(CacheEntryModifiedEvent event) {
      if (event.isCommandRetried()) {
         // A retried command generated this event; a duplicate may have been raised
      }
   }
}
Also when using a CacheEventFilter or CacheEventConverter the
contains a method isRetry to tell if the event was generated due to retry.
2.3.2. Cache manager-level notifications
Cache manager-level events occur on a cache manager. These too are global and cluster-wide, but involve events that affect all caches created by a single cache manager. Examples of cache manager-level events are nodes joining or leaving a cluster, or caches starting or stopping.
Please see the
for a comprehensive list of all cache manager-level notifications, and their respective method-level annotations.
2.3.3. Synchronicity of events
By default, all notifications are dispatched in the same thread that generates the event. This means that you must write your listener such that it does not block or do anything that takes too long, as it would prevent the thread from progressing. Alternatively, you could annotate your listener as asynchronous , in which case a separate thread pool will be used to dispatch the notification and prevent blocking the event originating thread. To do this, simply annotate your listener such:
@Listener (sync = false)
public class MyAsyncListener { .... }
Asynchronous thread pool
To tune the thread pool used to dispatch such asynchronous notifications, use the
XML element in your configuration file.
2.4. Asynchronous API
In addition to synchronous API methods like
, etc., Infinispan also has an asynchronous, non-blocking API where you can achieve the same results in a non-blocking fashion.
These methods are named in a similar fashion to their blocking counterparts, with "Async" appended.
These asynchronous counterparts return a
containing the actual result of the operation.
For example, in a cache parameterized as Cache<String, String>, Cache.put(String key, String value) returns a String, while Cache.putAsync(String key, String value) returns a Future<String>.
2.4.1. Why use such an API?
Non-blocking APIs are powerful in that they provide all of the guarantees of synchronous communications - with the ability to handle communication failures and exceptions - with the ease of not having to block until a call completes.
This allows you to better harness parallelism in your system.
For example:
Set<Future<?>> futures = new HashSet<Future<?>>();
futures.add(cache.putAsync(key1, value1));
futures.add(cache.putAsync(key2, value2));
futures.add(cache.putAsync(key3, value3));
for (Future<?> f : futures) f.get();
2.4.2. Which processes actually happen asynchronously?
There are 4 things in Infinispan that can be considered to be on the critical path of a typical write operation. These are, in order of cost:
network calls
marshalling
writing to a cache store (optional)
locking
As of Infinispan 4.0, using the async methods will take the network calls and marshalling off the critical path.
For various technical reasons, writing to a cache store and acquiring locks, however, still happens in the caller’s thread.
In future, we plan to take these offline as well.
about this topic.
2.4.3. Notifying futures
Strictly, these methods do not return JDK Futures, but rather a sub-interface known as a
The main difference is that you can attach a listener to a NotifyingFuture such that you could be notified when the future completes.
Here is an example of making use of a notifying future:
FutureListener futureListener = new FutureListener() {
   public void futureDone(Future future) {
      try {
         future.get();
      } catch (Exception e) {
         // Future did not complete successfully
         System.out.println("Help!");
      }
   }
};
cache.putAsync("key", "value").attachListener(futureListener);
2.4.4. Further reading
The Javadocs on the
interface has some examples on using the asynchronous API, as does
by Manik Surtani introducing the API.
2.5. Invocation Flags
An important aspect of getting the most of Infinispan is the use of per-invocation flags in order to provide specific behaviour to each particular cache call. By doing this, some important optimizations can be implemented potentially saving precious time and network resources. One of the most popular usages of flags can be found right in Cache API, underneath the
method which is used to load an Infinispan cache with data read from an external resource. In order to make this call efficient, Infinispan basically calls a normal put operation passing the following flags:
What Infinispan is doing here is effectively saying that when putting data read from an external source, it will use an almost-zero lock acquisition time and that, if the locks cannot be acquired, it will fail silently without throwing any exception related to lock acquisition. It also specifies that regardless of the cache mode, if the cache is clustered, it will replicate asynchronously and so won’t wait for responses from other nodes. The combination of all these flags makes this kind of operation very efficient, and the efficiency comes from the fact that this type of putForExternalRead call is used with the knowledge that the client can always head back to a persistent store of some sort to retrieve the data that should be stored in memory. So, any attempt to store the data is just a best effort, and if not possible, the client should try again if there’s a cache miss.
2.5.1. DecoratedCache
Another approach would be to use the
wrapper. This allows you to reuse flags. For example:
AdvancedCache cache = ...
DecoratedCache strictlyLocal = new DecoratedCache(cache, Flag.CACHE_MODE_LOCAL, Flag.SKIP_CACHE_STORE);
strictlyLocal.put("local_1", "only");
strictlyLocal.put("local_2", "only");
strictlyLocal.put("local_3", "only");
This approach makes your code more readable.
2.5.2. Examples
If you want to use these or any other flags available, which by the way are described in detail in the
, you simply need to get hold of the advanced cache and add the flags you need via the
method call. For example:
Cache cache = ...
cache.getAdvancedCache()
.withFlags(Flag.SKIP_CACHE_STORE, Flag.CACHE_MODE_LOCAL)
.put("local", "only");
It’s worth noting that these flags are only active for the duration of the cache operation. If the same flags need to be used in several invocations, even if they’re in the same transaction,
needs to be called repeatedly. Clearly, if the cache operation is to be replicated in another node, the flags are carried over to the remote nodes as well.
Suppressing return values from a put() or remove()
Another very important use case is when you want a write operation such as put() to not return the previous value. To do that, you need to use two flags to make sure that in a distributed environment, no remote lookup is done to potentially get previous value, and if the cache is configured with a cache loader, to avoid loading the previous value from the cache store. You can see these two flags in action in the following example:
Cache cache = ...
cache.getAdvancedCache()
.withFlags(Flag.SKIP_REMOTE_LOOKUP, Flag.SKIP_CACHE_LOAD)
.put("local", "only");
For more information, please check the
2.6. Tree API Module
offers clients the possibility of storing data using a tree-structure like API. This API is similar to the one provided by JBoss Cache; hence the tree module is perfect for those users wanting to migrate their applications from JBoss Cache to Infinispan who want to limit changes to their codebase as part of the migration. Besides, it’s important to understand that Infinispan provides this tree API much more efficiently than JBoss Cache did, so if you’re a user of the tree API in JBoss Cache, you should consider migrating to Infinispan.
2.6.1. What is Tree API about?
The aim of this API is to store information in a hierarchical way. The hierarchy is defined using paths represented as
, for example: /this/is/a/fqn/path or /another/path . In the hierarchy, there’s a special path called root which represents the starting point of all paths and it’s represented as: /
Each FQN path is represented as a node where users can store data using a key/value pair style API (i.e. a Map). For example, in /persons/john , you could store information belonging to John, for example: surname=Smith, birthdate=05/02/…, etc.
Please remember that users should not use root as a place to store data. Instead, users should define their own paths and store data there. The following sections will delve into the practical aspects of this API.
2.6.2. Using the Tree API
Dependencies
For your application to use the tree API, you need to import infinispan-tree.jar which can be located in the Infinispan binary distributions, or you can simply add a dependency to this module in your pom.xml:
<dependency>
   <groupId>org.infinispan</groupId>
   <artifactId>infinispan-tree</artifactId>
   <version>$put-infinispan-version-here</version>
</dependency>
2.6.3. Creating a Tree Cache
The first step to use the tree API is to actually create a tree cache. To do so, you need to
, create an instance of
. A very important note to remember here is that the Cache instance passed to the factory must be configured with invocation batching. For example:
import org.infinispan.config.Configuration;
import org.infinispan.tree.TreeCacheFactory;
import org.infinispan.tree.TreeCache;
Configuration config = new Configuration();
config.setInvocationBatchingEnabled(true);
Cache cache = new DefaultCacheManager(config).getCache();
TreeCache treeCache = TreeCacheFactory.createTreeCache(cache);
2.6.4. Manipulating data in a Tree Cache
The Tree API effectively provides two ways to interact with the data:
convenience methods: These methods are located within the TreeCache interface and enable users to
…​etc data with a single call that takes the
, in String or Fqn format, and the data involved in the call. For example:
treeCache.put("/persons/john", "surname", "Smith");
import org.infinispan.tree.Fqn;
Fqn johnFqn = Fqn.fromString("persons/john");
Calendar calendar = Calendar.getInstance();
calendar.set(1980, 5, 2);
treeCache.put(johnFqn, "birthdate", calendar.getTime());
API: It allows finer control over the individual nodes that form the FQN, allowing manipulation of nodes relative to a particular node. For example:
import org.infinispan.tree.Node;
TreeCache treeCache = ...
Fqn johnFqn = Fqn.fromElements("persons", "john");
Node<String, Object> john = treeCache.getRoot().addChild(johnFqn);
john.put("surname", "Smith");

Node persons = treeCache.getRoot().addChild(Fqn.fromString("persons"));
Node<String, Object> john = persons.addChild(Fqn.fromString("john"));
john.put("surname", "Smith");

Fqn personsFqn = Fqn.fromString("persons");
Fqn johnFqn = Fqn.fromRelative(personsFqn, Fqn.fromString("john"));
Node<String, Object> john = treeCache.getRoot().addChild(johnFqn);
john.put("surname", "Smith");
A node also provides the ability to access its
. For example:
Node<String, Object> john = ...
Node persons = john.getParent();
Set<Node<String, Object>> personsChildren = persons.getChildren();
2.6.5. Common Operations
In the previous section, some of the most used operations, such as addition and retrieval, have been shown. However, there are other important operations that are worth mentioning, such as remove:
You can for example remove an entire node, i.e. /persons/john , using:
treeCache.removeNode("/persons/john");
Or remove a child node, i.e. persons, which is a child of root, via:
treeCache.getRoot().removeChild(Fqn.fromString("persons"));
You can also remove a particular key/value pair in a node:
Node john = treeCache.getRoot().getChild(Fqn.fromElements("persons", "john"));
john.remove("surname");
Or you can remove all data in a node with:
Node john = treeCache.getRoot().getChild(Fqn.fromElements("persons", "john"));
john.clearData();
Another important operation supported by the Tree API is the ability to move nodes around in the tree. Imagine we have a node called "john" located under the root node. The following example shows how we can move the "john" node to be under the "persons" node:
Current tree structure:
/persons
/john
Moving trees from one FQN to another:
Node john = treeCache.getRoot().addChild(Fqn.fromString("john"));
Node persons = treeCache.getRoot().getChild(Fqn.fromString("persons"));
treeCache.move(john.getFqn(), persons.getFqn());
Final tree structure:
/persons/john
2.6.6. Locking in the Tree API
Understanding when and how locks are acquired when manipulating the tree structure is important in order to maximise the performance of any client application interacting against the tree, while at the same time maintaining consistency.
Locking on the tree API happens on a per-node basis. So, if you’re putting or updating a key/value pair under a particular node, a write lock is acquired for that node. In such a case, no write locks are acquired for the parent node of the node being modified, and no locks are acquired for child nodes.
If you’re adding or removing a node, the parent is not locked for writing. In JBoss Cache, this behaviour was configurable with the default being that parent was not locked for insertion or removal.
Finally, when a node is moved, locks are acquired not only on the node being moved and any of its children, but also on the target node and the new locations of the moved node and its children. To understand this better, let’s look at an example:
Imagine you have a hierarchy like this and we want to move c/ to be underneath b/:
/a/b
/c/e
The end result would be something like this:
/a/b/c/e
To make this move, locks would have been acquired on:
/a/b - because it’s the parent underneath which the data will be put
/c and /c/e - because they’re the nodes that are being moved
/a/b/c and /a/b/c/e - because those are the new target locations for the nodes being moved
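The lock list above can be sketched with plain path strings (locksForMove is a hypothetical helper, not the Infinispan Fqn API; it assumes the moved subtree is rooted at the top level, as in this example):

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Sketch of the write-lock set acquired by a tree move: the new parent,
// the moved subtree, and the subtree's new locations under that parent.
public class MoveLockSketch {
    public static Set<String> locksForMove(String newParent, List<String> movedSubtree) {
        Set<String> locks = new LinkedHashSet<>();
        locks.add(newParent);           // e.g. /a/b - parent under which data is put
        locks.addAll(movedSubtree);     // e.g. /c and /c/e - nodes being moved
        for (String fqn : movedSubtree) {
            locks.add(newParent + fqn); // e.g. /a/b/c and /a/b/c/e - new locations
        }
        return locks;
    }

    public static void main(String[] args) {
        System.out.println(locksForMove("/a/b", List.of("/c", "/c/e")));
        // [/a/b, /c, /c/e, /a/b/c, /a/b/c/e]
    }
}
```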
2.6.7. Listeners for tree cache events
The current Infinispan listeners have been designed with key/value store notifications in mind, and hence they do not map to tree cache events correctly. Tree cache specific listeners that map directly to tree cache events (i.e. adding a child…​etc) are desirable but these are not yet available. If you’re interested in this type of listeners, please follow
to find out about any progress in this area.
3. Eviction
Infinispan supports eviction of entries, such that you do not run out of memory. Eviction is typically used in conjunction with a cache store, so that entries are not permanently lost when evicted, since eviction only removes entries from memory and not from cache stores or the rest of the cluster.
Passivation is also a popular option when using eviction, so that only a single copy of an entry is maintained - either in memory or in a cache store, but not both. The main benefit of using passivation over a regular cache store is that updates to entries which exist in memory are cheaper since the update doesn’t need to be made to the cache store as well.
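The either-in-memory-or-in-store invariant of passivation can be sketched as a toy model (not Infinispan’s implementation; the class and its two maps are invented for illustration):

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of passivation: an entry lives either in memory or in the store,
// never both. Eviction writes it to the store (passivation); a later read
// pulls it back and removes it from the store (activation).
public class PassivationSketch {
    final Map<String, String> memory = new HashMap<>();
    final Map<String, String> store = new HashMap<>();

    void put(String key, String value) { memory.put(key, value); } // store untouched: cheap update
    void evict(String key) { store.put(key, memory.remove(key)); } // passivate on eviction

    String get(String key) {
        if (memory.containsKey(key)) return memory.get(key);
        String value = store.remove(key);          // activate: remove from store...
        if (value != null) memory.put(key, value); // ...and restore to memory
        return value;
    }

    public static void main(String[] args) {
        PassivationSketch cache = new PassivationSketch();
        cache.put("k", "v");
        cache.evict("k"); // now only in the store
        cache.get("k");   // now only in memory again
        System.out.println(cache.memory.containsKey("k") + " " + cache.store.containsKey("k")); // true false
    }
}
```

Because updates go to memory only until an entry is actually evicted, a put on an in-memory entry never touches the store, which is the cost saving described above.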
Note that eviction occurs on a local basis, and is not cluster-wide. Each node runs an eviction thread to analyse the contents of its in-memory container and decide what to evict. Eviction does not take the amount of free memory in the JVM into account as a threshold to start evicting entries. You have to set the maxEntries attribute of the eviction element to a value greater than zero in order for eviction to be turned on. If maxEntries is too large you can run out of memory. The maxEntries attribute will probably take some tuning in each use case.
3.1. Enabling Eviction
Eviction is configured by adding the
element to your <*-cache /> configuration sections or using
API programmatic approach.
All cache entries are evicted by piggybacking on user threads that are hitting the cache. Periodic pruning of expired cache entries from the cache is done on a dedicated thread, which is turned on by enabling the reaper in the expiration configuration element/API.
3.1.1. Eviction strategies
LIRS is the default eviction algorithm in Infinispan 5.2 onwards. LRU was the default prior to that.
NONE This eviction strategy effectively disables the eviction thread.
UNORDERED UNORDERED is a legacy eviction strategy that has been deprecated. If the UNORDERED strategy is specified, the LRU eviction algorithm will be used.
LRU If LRU eviction is used, cache entries are selected for eviction using the well-known least-recently-used pattern.
LIRS The LRU eviction algorithm, although simple and easy to understand, underperforms in cases of weak access locality (one-time-access entries are not replaced in time, entries to be accessed soonest are unfortunately replaced, and so on). Recently, a new eviction algorithm, LIRS, has gathered a lot of attention because it addresses the weak access locality shortcomings of LRU yet retains LRU’s simplicity. Eviction in the LIRS algorithm relies on history information about cache entry accesses using the so-called Inter-Reference Recency (a.k.a. IRR) and Recency. The IRR of a cache entry A refers to the number of other distinct entries accessed between the last two consecutive accesses to cache entry A, while recency refers to the number of other entries accessed from the last reference to A up to the current time point. If we relied only on recency we would essentially have LRU functionality. However, in addition to recency, LIRS tracks entries with low IRR and high IRR, aptly named LIR and HIR cache entry blocks respectively. The LIRS eviction algorithm essentially keeps entries with a low IRR in the cache as much as possible while evicting high IRR entries if eviction is required. If the recency of a LIR cache entry increases to a certain point and an entry in HIR gets accessed at a smaller recency than that of the LIR entry, the LIR/HIR statuses of the two blocks are switched. Entries in HIR may be evicted regardless of recency, even if the element was recently accessed.
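As a concrete illustration of the least-recently-used pattern (a conceptual sketch, not Infinispan’s internal data container), a bounded LRU map can be built on java.util.LinkedHashMap in access order:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// LinkedHashMap with accessOrder=true keeps entries ordered by most recent
// access; overriding removeEldestEntry evicts the least-recently-used entry
// once the map grows past maxEntries.
public class LruSketch {
    public static <K, V> Map<K, V> boundedLruCache(int maxEntries) {
        return new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxEntries;
            }
        };
    }

    public static void main(String[] args) {
        Map<String, String> cache = boundedLruCache(2);
        cache.put("a", "1");
        cache.put("b", "2");
        cache.get("a");      // touch "a"; "b" is now least recently used
        cache.put("c", "3"); // exceeds maxEntries, so "b" is evicted
        System.out.println(cache.keySet()); // [a, c]
    }
}
```

The weak-access-locality problem LIRS addresses is visible here: a single one-off read of an entry is enough to promote it over entries that will be needed again soon.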
3.1.2. More defaults
By default when no <eviction /> element is specified, no eviction takes place.
In case there is an eviction element, this table describes the behaviour of eviction based on the information provided in the XML configuration ("-" in the Supplied maxEntries or Supplied strategy column means that the attribute wasn’t supplied):

Supplied maxEntries | Supplied strategy | Example | Eviction behaviour
- | - | <eviction /> | no eviction
> 0 | - | <eviction max-entries="100" /> | the strategy defaults to LIRS and eviction takes place
> 0 | NONE | <eviction max-entries="100" strategy="NONE" /> | the strategy defaults to LIRS and eviction takes place
> 0 | != NONE | <eviction max-entries="100" strategy="LRU" /> | eviction takes place with the defined strategy
0 | - | <eviction max-entries="0" /> | no eviction
0 | NONE | <eviction max-entries="0" strategy="NONE" /> | no eviction
0 | != NONE | <eviction max-entries="0" strategy="LRU" /> | ConfigurationException
< 0 | - | <eviction max-entries="-1" /> | no eviction
< 0 | NONE | <eviction max-entries="-1" strategy="NONE" /> | no eviction
< 0 | != NONE | <eviction max-entries="-1" strategy="LRU" /> | ConfigurationException
3.1.3. Advanced Eviction Internals
Implementing eviction in a scalable, low-lock-contention approach while at the same time doing meaningful selection of entries for eviction is not an easy feat. The data container needs to be locked until the appropriate eviction entries are selected. Having such a lock-protected data container in turn causes high lock contention, offsetting any eviction precision gained by sophisticated eviction algorithms. In order to get superior throughput while retaining high eviction precision, both a low-lock-contention data container and a high-precision eviction algorithm implementation are needed. Infinispan evicts entries from the cache on a segment level (segments similar to ConcurrentHashMap); once a segment is full, entries are evicted according to the eviction algorithm. However, there are two drawbacks with this approach. Entries might get evicted from the cache even though maxEntries has not been reached yet, and maxEntries is a theoretical limit for the cache size, but in practical terms it will be slightly less than maxEntries. For more details refer to .
3.2. Expiration
Similar to, but distinct from, eviction is expiration. Expiration allows you to attach a lifespan and/or maximum idle time to entries. Entries that exceed these times are treated as invalid and are removed. When removed, expired entries are not passivated like evicted entries are (if passivation is turned on).
Unlike eviction, expired entries are removed globally - from memory, cache stores, and cluster-wide.
By default entries created are immortal and do not have a lifespan or maximum idle time. Using the cache API, mortal entries can be created with lifespans and/or maximum idle times. Further, default lifespans and/or maximum idle times can be configured by adding the
element to your <*-cache /> configuration sections.
3.2.1. Difference between Eviction and Expiration
Both Eviction and Expiration are means of cleaning the cache of unused entries and thus guarding the heap against OutOfMemory exceptions, so here is a brief explanation of the difference.
With eviction you set the maximal number of entries you want to keep in the cache, and if this limit is exceeded, some candidates are found to be removed according to the chosen eviction strategy (LRU, LIRS, etc…​). Eviction can be set up to work with passivation (evicting to a cache store).
With expiration you set time criteria for entries, i.e. how long you want to keep them in the cache. Either you set the maximum lifespan of the entry, the time it is allowed to stay in the cache, or the maximum idle time, the time it is allowed to go untouched (no operation performed with the given key).
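The two time criteria can be sketched as a single predicate (a conceptual model only; isExpired and its millisecond parameters are invented for illustration, with -1 meaning no limit, matching the documented defaults):

```java
// An entry is expired once either its lifespan since creation or its max-idle
// since last access has elapsed; a negative setting disables that criterion.
public class ExpirationSketch {
    static boolean isExpired(long now, long created, long lastUsed,
                             long lifespanMillis, long maxIdleMillis) {
        boolean lifespanElapsed = lifespanMillis >= 0 && now - created >= lifespanMillis;
        boolean idleElapsed = maxIdleMillis >= 0 && now - lastUsed >= maxIdleMillis;
        return lifespanElapsed || idleElapsed;
    }

    public static void main(String[] args) {
        // Entry created at t=0, last touched at t=800, lifespan 1000 ms, max idle 500 ms.
        System.out.println(isExpired(900, 0, 800, 1000, 500));  // false: neither elapsed yet
        System.out.println(isExpired(1400, 0, 800, 1000, 500)); // true: lifespan elapsed
        System.out.println(isExpired(950, 0, 400, 1000, 500));  // true: idle time elapsed
    }
}
```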
3.3. Eviction Examples
Expiration is a top-level construct, represented in the configuration as well as in the cache API.
While eviction is local to each cache instance , expiration is cluster-wide . Expiration lifespans and maxIdle values are replicated along with the cache entry.
Expiration lifespan and maxIdle are also persisted in CacheStores, so this information survives eviction/passivation.
Four eviction strategies are shipped, as described in the Eviction strategies section above.
3.3.1. Configuration
Eviction may be configured using the Configuration bean or the XML file. Eviction configuration is on a per-cache basis. Valid eviction-related configuration elements are:
<eviction strategy="LRU" max-entries="2000" />
<expiration lifespan="1000" max-idle="500" interval="1000" />
Programmatically, the same would be defined using:
Configuration c = new ConfigurationBuilder()
   .eviction().strategy(EvictionStrategy.LRU).maxEntries(2000)
   .expiration().wakeUpInterval(5000L).lifespan(1000L).maxIdle(500L)
   .build();
3.3.2. Default values
Eviction is disabled by default. If enabled (using an empty &eviction /& element), certain default values are used:
strategy: EvictionStrategy.NONE is assumed, if a strategy is not specified.
wakeupInterval: 5000 is used if not specified.
If you wish to disable the eviction thread, set wakeupInterval to -1.
maxEntries: -1 is used if not specified, which means unlimited entries.
0 means no entries, and the eviction thread will strive to keep the cache empty.
Expiration lifespan and maxIdle both default to -1.
3.3.3. Using expiration
Expiration allows you to set either a lifespan or a maximum idle time on each key/value pair stored in the cache. This can either be set cache-wide using the configuration, as described above, or it can be defined per-key/value pair using the Cache interface. Any values defined per key/value pair overrides the cache-wide default for the specific entry in question.
For example, assume the following configuration:
<expiration lifespan="1000" />
// inherits the configured default lifespan (1000 ms), no max idle
cache.put("pinot noir", pinotNoirPrice);
// lifespan of 2 seconds, no max idle
cache.put("chardonnay", chardonnayPrice, 2, TimeUnit.SECONDS);
// unlimited lifespan (-1), max idle of 1 second
cache.put("pinot grigio", pinotGrigioPrice, -1, TimeUnit.SECONDS, 1, TimeUnit.SECONDS);
// lifespan of 5 seconds, max idle of 1 second
cache.put("riesling", rieslingPrice, 5, TimeUnit.SECONDS, 1, TimeUnit.SECONDS);
3.4. Eviction designs
Central to eviction is an EvictionManager - which is only available if eviction or expiration is configured.
The purpose of the EvictionManager is to drive the eviction/expiration thread which periodically purges items from the DataContainer. If the eviction thread is disabled (wakeupInterval set to -1) eviction can be kicked off manually using EvictionManager.processEviction(), for example from another maintenance thread that may run periodically in your application.
The eviction manager processes evictions in the following manner:
Causes the data container to purge expired entries
Causes cache stores (if any) to purge expired entries
Prunes the data container to a specific size, determined by maxElements
4. Persistence
Persistence allows configuring external (persistent) storage engines complementary to the default in memory storage offered by Infinispan. An external persistent storage might be useful for several reasons:
Increased Durability. Memory is volatile, so a cache store could increase the life-span of the information stored in the cache.
Write-through. Interpose Infinispan as a caching layer between an application and a (custom) external storage engine.
Overflow Data. By using eviction and passivation, one can store only the "hot" data in memory and overflow the data that is less frequently used to disk.
The integration with the persistent store is done through the following SPI: CacheLoader, CacheWriter, AdvancedCacheLoader and AdvancedCacheWriter (discussed in the following sections).
This SPI was refactored in Infinispan 6. It brings the following improvements over the previous (up to 5.x) persistence integration
Alignment with JSR-107. The CacheLoader and CacheWriter
interfaces are similar to the loader and writer in JSR 107. This should considerably help writing portable stores across JCache-compliant vendors.
Simplified Transaction Integration. All necessary locking is handled by Infinispan automatically and implementations don’t have to be concerned with coordinating concurrent access to the store. Even though concurrent writes on the same key are not going to happen (depending on the locking mode in use), implementors should expect operations on the store to happen from multiple/different threads and code the implementation accordingly.
Parallel Iteration. It is now possible to iterate over entries in the store with multiple threads in parallel. Map/Reduce tasks immediately benefit from this, as the map/reduce tasks now run in parallel over both the nodes in the cluster and within the same node (multiple threads).
Reduced Serialization. This translates in less CPU usage. The new API exposes the stored entries in serialized format. If an entry is fetched from persistent storage for the sole purpose of being sent remotely, we no longer need to deserialize it (when reading from the store) and serialize it back (when writing to the wire). Now we can write to the wire the serialized format as read from the storage directly.
4.1. Data Migration
The format in which data is persisted has changed in Infinispan 6.0, which means that if you stored data using Infinispan 4.x or Infinispan 5.x, Infinispan 6.0 won’t be able to read it. The best way to upgrade persisted data from Infinispan 4.x/5.x to Infinispan 6.0 is to use the mechanisms explained in the . In other words, by starting a rolling upgrade, data stored in Infinispan 4.x/5.x can be migrated to an Infinispan 6.0 installation where persistence is configured with a different location for the data. The location configuration varies according to the specific details of each cache store.
The following sections describe the SPI and also discuss the SPI implementations that Infinispan ships out of the box.
The following class diagram presents the main SPI interfaces of the persistence API:
Some notes about the classes:
- abstracts the serialized form of an object
- abstracts the information held within a persistent store corresponding to a key-value pair added to the cache. Provides methods for reading this information both in serialized () and deserialized (Object) format. Normally data read from the store is kept in serialized format and lazily deserialized on demand, within the
implementation
provide basic methods for reading and writing to a store
provide operations to manipulate the underlying storage in bulk: parallel iteration and purging of expired entries, clear and size.
A provider might choose to only implement a subset of these interfaces:
Not implementing the
makes the given writer not usable for purging expired entries or clear
Not implementing the
makes the information stored in the given loader not used for preloading, nor for the map/reduce iteration
If you’re looking at migrating your existing store to the new API or to write a new store implementation, the
might be a good starting point/example.
4.3. Configuration
Stores (readers and/or writers) can be configured in a chain. A cache read operation looks at all of the specified CacheLoaders, in the order they are configured, until it finds a valid and non-null element of data. When performing writes, all CacheWriters are written to, except if the ignoreModifications element has been set to true for a specific cache writer.
Implementing both a CacheWriter and a CacheLoader: it is possible and recommended for a store provider to implement both the CacheWriter and the CacheLoader interface. Stores that do this are considered both for reading and writing (assuming read-only=false) data.
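The first-valid-non-null read semantics of such a chain can be sketched in plain Java (a conceptual model; read and the Function-based loaders are illustrative stand-ins, not the CacheLoader SPI):

```java
import java.util.List;
import java.util.Map;
import java.util.function.Function;

// Loaders are consulted in configuration order; the first non-null value wins
// and later loaders in the chain are never asked.
public class LoaderChainSketch {
    public static <K, V> V read(K key, List<Function<K, V>> loaders) {
        for (Function<K, V> loader : loaders) {
            V value = loader.apply(key);
            if (value != null) return value; // first valid, non-null element wins
        }
        return null; // no loader in the chain holds the key
    }

    public static void main(String[] args) {
        Map<String, String> fast = Map.of("a", "fromFast");
        Map<String, String> slow = Map.of("a", "fromSlow", "b", "fromSlow");
        List<Function<String, String>> chain = List.of(fast::get, slow::get);
        System.out.println(read("a", chain)); // fromFast: first loader answers
        System.out.println(read("b", chain)); // fromSlow: falls through to the second
        System.out.println(read("c", chain)); // null
    }
}
```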
This is the configuration of a custom(not shipped with infinispan) store:
name=&myCustomStore&
passivation=&false&
class=&org.acme.CustomStore&
fetch-state=&false& preload=&true& shared=&false&
purge=&true& read-only=&false& singleton=&false&
flush-lock-timeout=&12321& modification-queue-size=&123& shutdown-timeout=&321& thread-pool-size=&23&
name=&myProp&${system.property}
Explanation of the configuration options:
passivation (false by default) has a significant impact on how Infinispan interacts with the loaders, and is discussed in the
class defines the class of the store and must implement CacheLoader, CacheWriter or both
fetch-state (false by default) determines whether or not to fetch the persistent state of a cache when joining a cluster. The aim here is to take the persistent state of a cache and apply it to the local cache store of the joining node. Fetch persistent state is ignored if a cache store is configured to be shared, since they access the same data. Only one configured cache loader may set this property to true; if more than one cache loader does so, a configuration exception will be thrown when starting your cache service.
preload (false by default) if true, when the cache starts, data stored in the cache loader will be pre-loaded into memory. This is particularly useful when data in the cache loader is needed immediately after startup and you want to avoid cache operations being delayed as a result of loading this data lazily. Can be used to provide a 'warm-cache' on startup, however there is a performance penalty as startup time is affected by this process. Note that preloading is done in a local fashion, so any data loaded is only stored locally in the node. No replication or distribution of the preloaded data happens. Also, Infinispan only preloads up to the maximum configured number of entries in .
shared (false by default) indicates that the cache loader is shared among different cache instances, for example where all instances in a cluster use the same JDBC settings to talk to the same remote, shared database. Setting this to true prevents repeated and unnecessary writes of the same data to the cache loader by different cache instances.
purge (false by default) empties the specified cache loader (if read-only is false).