Monday 22 February 2016

Difference between Heap and Stack Memory

Java Heap Memory

Heap memory is used by the Java runtime to allocate memory to objects. Whenever you create an object, it is always created in the heap space. When objects no longer have any reference, garbage collection runs on the heap memory to free the memory they used.

Java Stack Memory

Stack memory is much smaller than heap memory. It is used for the execution of a thread. Stack memory is always referenced in last-in-first-out (LIFO) order. Whenever a method is invoked, a new block is created in the stack memory for the method to hold local primitive values and references to other objects used in the method. When the method ends, the block becomes unused and is available for the next method.

Difference between Heap and Stack Memory

  1. Heap memory is used by all the parts of the application, whereas stack memory is used only by one thread of execution.
  2. Stack memory is much smaller than heap memory.
  3. Stack memory is very fast compared to heap memory.
  4. When stack memory is full, the Java runtime throws java.lang.StackOverflowError, whereas if heap memory is full, it throws java.lang.OutOfMemoryError: Java heap space.
  5. Objects are always stored in the heap space, and stack memory contains the references to them. Stack memory only contains local primitive variables and reference variables to objects in heap space.
  6. We can use -Xss to define the stack memory size, and the -Xms and -Xmx JVM options to define the startup size and maximum size of heap memory.
  7. Objects stored in the heap are globally accessible, whereas stack memory can't be accessed by other threads.
  8. Stack memory is short-lived, whereas heap memory lives from the start till the end of application execution.
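
To make points 5 and 6 concrete, here is a minimal sketch (the class and variable names are mine, purely for illustration) of what lands on the stack versus the heap:

public class MemoryExample {

    public static void main(String[] args) {
        int count = 10;                               // primitive: stored in the main() stack frame
        String name = "heap-object";                  // reference on the stack, String object on the heap
        MemoryExample example = new MemoryExample();  // object allocated on the heap
        example.printLength(name);                    // invoking a method pushes a new stack frame
    }                                                 // frame popped here; unreferenced objects become eligible for GC

    private void printLength(String value) {          // 'value' is a reference held in this method's own frame
        int length = value.length();                  // local primitive in this frame
        System.out.println(length);
    }
}

The JVM options from point 6 are passed on the command line, for example: java -Xms256m -Xmx1024m -Xss512k MemoryExample.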

Friday 19 February 2016

Redis Installation on Linux

Redis Installation on Linux

Redis is an open source, in-memory data structure store, used as a database, cache, and message broker. The Redis server can be installed and configured inside the main box (the server machine) or outside the box as a separate machine. Here is the installation/setup process for the Redis server.

Download Redis

Redis 3.0 introduces Redis Cluster, a distributed implementation of Redis with automatic data sharding and fault tolerance, important speed improvements under certain workloads, improved AOF rewriting, and more.
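
A typical way to fetch and unpack the 3.0.7 release used throughout this post (the URL follows the redis.io release naming; adjust the version as needed):

Download:
wget http://download.redis.io/releases/redis-3.0.7.tar.gz
tar xzf redis-3.0.7.tar.gz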

Installation

The suggested way of installing Redis is compiling it from sources, as Redis has no dependencies other than a working GCC compiler and libc. Installing it using the package manager of your Linux distribution is somewhat discouraged, as the available version is usually not the latest.
In order to compile Redis, follow these simple steps:
Extract the distribution into the folder where you want to do the installation.

Install Development tools and Jemalloc

Redis depends on development tools such as gcc and gcc-c++, which have to be installed in the operating system first. Use the commands below for your operating system.

--------------------Development Tools installation --------------
CentOS (group install):
yum -y update
yum groupinstall 'Development Tools'

Red Hat/CentOS (individual packages):
yum -y update
yum -y install gcc gcc-c++ make

Ubuntu:
apt-get install build-essential

------------------------------End------------------------------------

-----------------Jemalloc Installation-------------------------------
Jemalloc:
cd redis-home/deps
make hiredis jemalloc linenoise lua
cd ..


Install Ruby and Rubygems

The cluster helper script (redis-trib.rb) is written in Ruby, so Ruby and RubyGems are required:

cd redis-home/redis-3.0.7/utils
yum install ruby
yum install rubygems


Install Redis

Install:
cd redis-home/redis-3.0.7
make install
gem install redis

Configure create-cluster:
cd redis-home/redis-3.0.7/utils/create-cluster/
vi create-cluster

Change the host name to the IP address your application is going to use, and change the base port to 6379.

# Settings
PORT=6379
TIMEOUT=2000
NODES=6
REPLICAS=1
# ... (rest of the stock script unchanged) ...
if [ "$1" == "create" ]
then
    HOSTS=""
    while [ $((PORT < ENDPORT)) != "0" ]; do
        PORT=$((PORT+1))
        HOSTS="$HOSTS 192.168.1.10:$PORT"
    done
    ../../src/redis-trib.rb create --replicas $REPLICAS $HOSTS
    exit 0
fi

# Save the file
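
Note: the omitted part of the stock create-cluster script derives ENDPORT from the settings above (roughly ENDPORT=$((PORT+NODES))), which is why the while loop over PORT terminates after NODES iterations, and why the first node gets port 6380 (the port is incremented before each node is added).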

Redis start and stop


Start the nodes:
./create-cluster start

Create the cluster (first time only):
./create-cluster create

Stop the nodes:
./create-cluster stop

Reference

http://redis.io/topics/cluster-tutorial
http://redis.io/topics/quickstart

Check If Redis is working

External programs talk to Redis using a TCP socket and a Redis-specific protocol. This protocol is implemented in the Redis client libraries for the different programming languages. However, to make hacking with Redis simpler, Redis provides a command line utility that can be used to send commands to Redis. This program is called redis-cli.
The first thing to do in order to check if Redis is working properly is sending a PING command using redis-cli.
The example below shows how to connect to the node listening on port 6380 (with the settings above, the first cluster node gets port 6380); the -c flag enables cluster mode, so redis-cli follows cluster redirections:

command:
cd redis-home/redis-3.0.7/utils
redis-cli -c -p 6380
  
Set a sample value:
redis 127.0.0.1:6379> set mykey somevalue
OK

Get the value:
redis 127.0.0.1:6379> get mykey
"somevalue"


Wednesday 10 February 2016

Apache Camel Websocket

Apache Camel Websocket


This example provides sample Camel routes for a websocket producer and consumer.

Apache Camel websocket producer example:

from("direct:Producer1")
    // we will use this connectionKey for uniquely identifying each connection from the client.
    .setHeader(WebsocketConstants.CONNECTION_KEY, header("connectionKey"))
    .to("websocket://{host}:{port}/camel-websocket?sendToAll=false")
    .end();

Apache Camel websocket consumer example:

from("direct:Consumer1")
    .process(new Processor() {
        public void process(Exchange exchange) throws Exception {
            Map<String, Object> headers = exchange.getIn().getHeaders();
            // you can get a unique connection key from the exchange header; it is unique for each client.
            // store this key somewhere, to send messages to a particular client later.
            String uniqueConnectionKey = headers.get("websocket.connectionKey").toString();
            // you can get the message from the client like below.
            String dataFromClient = exchange.getIn().getBody().toString();
        }
    }).end();

To send a message to the websocket producer, use a ProducerTemplate to send it to the Camel websocket endpoint:

CamelContext camelContext=new DefaultCamelContext();
ProducerTemplate template=camelContext.createProducerTemplate();
template.sendBodyAndHeader("direct:Producer1", {message}, "connectionKey", {connectionkey});

direct:Producer1 : the producer endpoint name.
connectionkey : a unique connection key, which you get from the exchange header in the websocket consumer.
message : the message for the websocket endpoint.
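
As a side note, the consumer route above reads from a direct: endpoint; a route that consumes messages sent by websocket clients reads from the websocket endpoint itself. A minimal sketch (host and port are placeholders, as above):

from("websocket://{host}:{port}/camel-websocket")
    .log("received from client: ${body}");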

I hope this helps.
Cheers!

Tuesday 9 February 2016

Apache Karaf Monitoring

Monitoring and Management using hawtio


There are different monitoring solutions that can run in Apache Karaf; Apache Karaf Decanter and hawtio are two of them.

hawtio is a lightweight and modular HTML5 web console with many plugins for managing your Java container.

hawtio has many plugins, such as: Dashboard and Wiki, logs, health, JMX, OSGi, Apache OpenEJB, Apache ActiveMQ, Apache Camel, Apache Tomcat, Jetty, JBoss and Fuse Fabric. One of the key features is that you can dynamically extend hawtio with your own plugins or automatically discover plugins inside the JVM.

The only server-side dependency (other than the static HTML/CSS/JS/images) is the excellent Jolokia library, which has a small footprint (around 300KB) and is available as a JVM agent, comes embedded as a servlet inside the hawtio-default.war, or can be deployed as an OSGi bundle.

If you are using Apache Karaf 3.x or newer, you can use 'feature:repo-add', which is simpler, and then install hawtio-core:

feature:repo-add hawtio 1.4.60
feature:install hawtio-core

NOTE: Karaf 3.x/4.x has an issue with hawtio-terminal, which does not yet work; therefore you need to install hawtio-core instead of hawtio.
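
Once installed, the hawtio web console is typically reachable on the Karaf HTTP port, e.g. http://localhost:8181/hawtio (assuming Karaf's default HTTP port 8181).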

Apache Karaf 3.x hawtio Reference

http://hawt.io/



Monday 8 February 2016

Sort an ArrayList with another using Java


Sort an ArrayList with another ArrayList


This Java sample shows how to sort an ArrayList using the order defined by another ArrayList.

Step 1:
Create the class SampleSort as in the program below.

package com.sample;

import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 *
 * @author Vimal Sanker
 *
 */
public class SampleSort {

    public static void main(String[] args) {
        List<String> listA = java.util.Arrays.asList("Monday", "Thursday", "Sunday", "Tuesday", "Saturday", "Wednesday", "Friday");
        List<String> listB = java.util.Arrays.asList("Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday");
        // map each day name to its position in the reference list
        Map<String, Integer> day = new HashMap<String, Integer>();
        for (int i = 0; i < listB.size(); i++) {
            day.put(listB.get(i), i);
        }
        Collections.sort(listA, new CustomFieldComparator(day));
        for (String s : listA) {
            System.out.println(s);
        }
    }
}

The above program has two lists, listA and listB. Here we sort listA according to the order defined by listB.


Step 2:
Create the class CustomFieldComparator as in the program below.

package com.sample;

import java.util.Comparator;
import java.util.Map;

/**
 * 
 * @author Vimal Sanker
 *
 */
public class CustomFieldComparator implements Comparator<String> {

    private Map<String, Integer> sortOrder;

    public CustomFieldComparator(Map<String, Integer> sortOrder) {
        this.sortOrder = sortOrder;
    }

    public int compare(String s1, String s2) {
        Integer weekdayPos1 = sortOrder.get(s1.trim());
        if (weekdayPos1 == null) {
            throw new IllegalArgumentException("Bad name encountered: " + s1);
        }
        Integer weekdayPos2 = sortOrder.get(s2.trim());
        if (weekdayPos2 == null) {
            throw new IllegalArgumentException("Bad name encountered: " + s2);
        }
        return weekdayPos1.compareTo(weekdayPos2);
    }
}

This is the Comparator that does the sorting: it ranks each string by its position in the reference list and rejects any string that is not present in the map.
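
As a side note, on Java 8 the same ordering can be expressed without a custom comparator class, using the position map directly (a sketch, assuming the same day map built in SampleSort; note that unlike CustomFieldComparator it throws a NullPointerException for unknown names):

Collections.sort(listA, java.util.Comparator.comparingInt(s -> day.get(s.trim())));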

Step 3:
Execute class SampleSort.

Output:

Sunday
Monday
Tuesday
Wednesday
Thursday
Friday
Saturday

............................................................................................................................................
Enjoy.....

Sunday 7 February 2016

Apache Karaf Cellar 3.x

Apache Karaf Cellar 3.x

Architecture Guide

1. Architecture Overview

The core concept behind Karaf Cellar is that each node can be part of one or more groups that provide the node with distributed memory for keeping data (e.g. configuration, features information, etc.) and a topic which is used to exchange events with the rest of the group's nodes.
Each group comes with a configuration, which defines which events are to be broadcast and which are not. Whenever a local change occurs on a node, the node reads the setup information of all the groups that it belongs to and broadcasts the event to the groups that are whitelisted for that specific event.
The broadcast operation happens via a distributed topic provided by the group. For the groups that the broadcast reaches, the distributed configuration data will be updated so that nodes that join in the future can pick up the change.
1. Introduction

1.1. Karaf Cellar use cases

The first purpose of Cellar is to synchronize the state of several Karaf instances (named nodes).
Cellar provides dedicated shell commands and MBeans to administrate the cluster, and manipulate the resources on the cluster.
It’s also possible to enable local resources listeners: these listeners broadcast local resource changes as cluster events. Please note that this behavior is disabled by default as it can have side effects (especially when a node is stopped). Enabling listeners is at your own risk.
The nodes list can be discovered (using unicast or multicast), or "statically" defined (using a list of host name, or IP address, and port couples).
Cellar is able to synchronize:
  • bundles (remote, local, or from an OBR)
  • config
  • features
  • event-admin
Optionally, Cellar also supports synchronization of OSGi Event Admin and OBR (URLs and bundles).
The second purpose is to provide a distributed OSGi runtime. It means that, using Cellar, you are able to call an OSGi service located on a remote instance. See the [Transport and DOSGi] section of the user guide.
Finally, Cellar also provides "runtime clustering" via dedicated features like:
  • HTTP load balancing
  • HTTP sessions replication
  • log centralization
Please see the sections dedicated to those features.

1.2. Cross topology

This is the default Cellar topology. Cellar is installed on every node, and each node has the same function.
It means that you can perform actions on any node, and they will be broadcast to all other nodes.

1.3. Star topology

In this topology, even though Cellar is installed on all nodes, you perform actions only on one specific node (the "manager").
To do that, the "manager" is a standard Cellar node, and event producing is disabled on all other nodes (cluster:producer-stop on all "managed" nodes).
This way, only the "manager" sends events to the nodes (which are still able to consume and handle them), but no events can be produced on the managed nodes.
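
For instance, on each managed node you would run something like this (the node name in the prompt is a placeholder):

karaf@managed1()> cluster:producer-stop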
2. Installation
This chapter describes how to install Apache Karaf Cellar into your existing Karaf based installation.

2.1. Pre-Installation Requirements

Cellar is installed on running Karaf instances.
Cellar is provided as a Karaf features descriptor. The easiest way to install it is simply to have an internet connection from the running Karaf instance.
3. Deploy Cellar
This chapter describes how to deploy and start Cellar into a running Apache Karaf instance. This chapter assumes that you already know Apache Karaf basics, especially the notion of features and shell usage.

3.1. Registering Cellar features

Karaf Cellar is provided as a Karaf features XML descriptor.
Simply register the Cellar feature URL in your Karaf instance:
karaf@root()> feature:repo-add mvn:org.apache.karaf.cellar/apache-karaf-cellar/3.0.3/xml/features
Adding feature url mvn:org.apache.karaf.cellar/apache-karaf-cellar/3.0.3/xml/features
Now you have Cellar features available in your Karaf instance:
karaf@root()> feature:list |grep -i cellar
cellar-core                   | 3.0.3   |           | karaf-cellar-3.0.3 | Karaf clustering core
hazelcast                     | 3.4.2   |           | karaf-cellar-3.0.3 | In memory data grid
cellar-hazelcast              | 3.0.3   |           | karaf-cellar-3.0.3 | Cellar implementation based on Hazelcast
cellar-config                 | 3.0.3   |           | karaf-cellar-3.0.3 | ConfigAdmin cluster support
cellar-features               | 3.0.3   |           | karaf-cellar-3.0.3 | Karaf features cluster support
cellar-bundle                 | 3.0.3   |           | karaf-cellar-3.0.3 | Bundle cluster support
cellar-shell                  | 3.0.3   |           | karaf-cellar-3.0.3 | Cellar shell support
cellar                        | 3.0.3   |           | karaf-cellar-3.0.3 | Karaf clustering
cellar-dosgi                  | 3.0.3   |           | karaf-cellar-3.0.3 | DOSGi support
cellar-obr                    | 3.0.3   |           | karaf-cellar-3.0.3 | OBR cluster support
cellar-eventadmin             | 3.0.3   |           | karaf-cellar-3.0.3 | OSGi events broadcasting in clusters
cellar-cloud                  | 3.0.3   |           | karaf-cellar-3.0.3 | Cloud blobstore support in clusters
cellar-webconsole             | 3.0.3   |           | karaf-cellar-3.0.3 | Cellar plugin for Karaf WebConsole

3.2. Starting Cellar

To start Cellar in your Karaf instance, you only need to install the Cellar feature:
karaf@root()> feature:install cellar
You can now see the Cellar components (bundles) installed:
karaf@root()> la|grep -i cellar
80 | Active   |  30 | 3.0.3        | Apache Karaf :: Cellar :: Core
81 | Active   |  31 | 3.0.3        | Apache Karaf :: Cellar :: Utils
82 | Active   |  33 | 3.0.3        | Apache Karaf :: Cellar :: Hazelcast
83 | Active   |  40 | 3.0.3        | Apache Karaf :: Cellar :: Shell
84 | Active   |  40 | 3.0.3        | Apache Karaf :: Cellar :: Config
85 | Active   |  40 | 3.0.3        | Apache Karaf :: Cellar :: Bundle
86 | Active   |  40 | 3.0.3        | Apache Karaf :: Cellar :: Features
And Cellar cluster commands are now available:
karaf@root()> cluster:<TAB>

3.3. Optional features

Optionally, you can install additional features.
The cellar-event feature adds support of OSGi Event Admin on the cluster:
karaf@root()> feature:install cellar-event
The cellar-obr feature adds support of OBR sync on the cluster:
karaf@root()> feature:install cellar-obr
The cellar-dosgi feature adds support of DOSGi (Distributed OSGi):
karaf@root()> feature:install cellar-dosgi
The cellar-cloud feature adds support of cloud blobstore, allowing you to use instances located on a cloud provider:
karaf@root()> feature:install cellar-cloud
4. Core runtime and Hazelcast
Cellar uses Hazelcast as its cluster engine.
When you install the cellar feature, a hazelcast feature is automatically installed, providing the etc/hazelcast.xml configuration file.
The etc/hazelcast.xml configuration file contains all the core configuration, especially:
  • the Hazelcast cluster identifiers (group name and password)
  • network discovery and security configuration

4.1. Hazelcast cluster identification

The <group/> element in the etc/hazelcast.xml defines the identification of the Hazelcast cluster:
    <group>
        <name>cellar</name>
        <password>pass</password>
    </group>
All Cellar nodes have to use the same name and password (to be part of the same Hazelcast cluster).

4.2. Network

The <network/> element in the etc/hazelcast.xml contains all the network configuration.
First, it defines the port numbers used by Hazelcast:
        <port auto-increment="true" port-count="100">5701</port>
        <outbound-ports>
            <!--
                Allowed port range when connecting to other nodes.
                0 or * means use system provided port.
            -->
            <ports>0</ports>
        </outbound-ports>
Second, it defines the mechanism used to discover the Cellar nodes: it’s the <join/> element.
By default, Hazelcast uses unicast.
You can also use multicast (enabled by default in Cellar):
            <multicast enabled="true">
                <multicast-group>224.2.2.3</multicast-group>
                <multicast-port>54327</multicast-port>
            </multicast>
            <tcp-ip enabled="false"/>
            <aws enabled="false"/>
Instead of using multicast, you can also explicitly define the host names (or IP addresses) of the different Cellar nodes:
            <multicast enabled="false"/>
            <tcp-ip enabled="true"/>
            <aws enabled="false"/>
By default, it will bind to all interfaces on the node machine. It's possible to specify an interface:
            <multicast enabled="false"/>
            <tcp-ip enabled="true">
                <interface>127.0.0.1</interface>
            </tcp-ip>
            <aws enabled="false"/>
NB: in previous Hazelcast versions (especially the one used by Cellar 2.3.x), it was possible to have multicast and tcp-ip enabled at the same time. In Hazelcast 3.3.x (the version currently used by Cellar 3.0.x), only one discovery mechanism can be enabled at a time. Cellar uses multicast by default (tcp-ip is disabled). If your network or network interface doesn't support multicast, you have to enable tcp-ip and disable multicast.
You can also discover nodes located on Amazon instances:
            <multicast enabled="false"/>
            <tcp-ip enabled="false"/>
            <aws enabled="true">
                <access-key>my-access-key</access-key>
                <secret-key>my-secret-key</secret-key>
                <!--optional, default is us-east-1 -->
                <region>us-west-1</region>
                <!--optional, default is ec2.amazonaws.com. If set, region shouldn't be set as it will override this property -->
                <host-header>ec2.amazonaws.com</host-header>
                <!-- optional, only instances belonging to this group will be discovered, default will try all running instances -->
                <security-group-name>hazelcast-sg</security-group-name>
                <tag-key>type</tag-key>
                <tag-value>hz-nodes</tag-value>
            </aws>
Third, you can specify on which network interface the cluster is running (whatever discovery mechanism is used). By default, Hazelcast listens on all interfaces (0.0.0.0). But you can specify an interface:
        <interfaces enabled="true">
            <interface>10.10.1.*</interface>
        </interfaces>
Finally, you can also enable security transport on the cluster. Two modes are supported:
  • SSL:
        <ssl enabled="true"/>
  • Symmetric Encryption:
        <symmetric-encryption enabled="true">
            <!--
               encryption algorithm such as
               DES/ECB/PKCS5Padding,
               PBEWithMD5AndDES,
               AES/CBC/PKCS5Padding,
               Blowfish,
               DESede
            -->
            <algorithm>PBEWithMD5AndDES</algorithm>
            <!-- salt value to use when generating the secret key -->
            <salt>thesalt</salt>
            <!-- pass phrase to use when generating the secret key -->
            <password>thepass</password>
            <!-- iteration count to use when generating the secret key -->
            <iteration-count>19</iteration-count>
        </symmetric-encryption>
Cellar provides additional discovery mechanisms; see the Discovery Services section (jclouds and Kubernetes) for details.
5. Cellar nodes
This chapter describes the Cellar nodes manipulation commands.

5.1. Nodes identification

When you install the Cellar feature, your Karaf instance automatically becomes a Cellar cluster node, and hence tries to discover the other Cellar nodes.
You can list the known Cellar nodes using the cluster:node-list command:
karaf@root()> cluster:node-list
  | Id         | Host Name | Port
-------------------------------------
x | node2:5702 | node2     | 5702
  | node1:5701 | node1     | 5701
The leading x indicates the Karaf instance on which you are logged on (the local node).
NB: if you don't see the other nodes there (whereas they should be there), it's probably due to a network issue. By default, Cellar uses multicast to discover the nodes. If your network or network interface doesn't support multicast, you have to switch to tcp-ip instead of multicast. See [Core Configuration|hazelcast] for details.
NB: in Cellar 2.3.x, Cellar used both multicast and tcp-ip by default. Due to a change in Hazelcast, it's no longer possible to have both. Now, in Cellar 3.0.x, the default configuration is multicast enabled, tcp-ip disabled. See [Core Configuration|hazelcast] for details.

5.2. Testing nodes

You can ping a node to test it:
karaf@root()> cluster:node-ping node1:5701
PING node1:5701
from 1: req=node1:5701 time=11 ms
from 2: req=node1:5701 time=12 ms
from 3: req=node1:5701 time=13 ms
from 4: req=node1:5701 time=7 ms
from 5: req=node1:5701 time=12 ms

5.3. Node Components: listener, producer, handler, consumer, and synchronizer

A Cellar node is actually a set of components, each dedicated to a special purpose.
The etc/org.apache.karaf.cellar.node.cfg configuration file is dedicated to the configuration of the local node. It’s where you can control the status of the different components.

5.4. Synchronizers and sync policy

A synchronizer is invoked when:
  • Cellar starts
  • a node joins a cluster group (see [groups] for details about cluster groups)
  • you explicitly call the cluster:sync command
There is a synchronizer per resource: feature, bundle, config, and obr (optional).
Cellar supports three sync policies:
  • cluster (default): if the node is the first one in the cluster, it pushes its local state to the cluster; otherwise, the node updates its local state from the cluster (meaning that the cluster is the master).
  • node: in this case, the node is the master; the cluster state will be overwritten by the node state.
  • disabled: in this case, the synchronizer is not used at all, meaning that neither the node nor the cluster is updated (at sync time).
You can configure the sync policy (for each resource, and each cluster group) in the etc/org.apache.karaf.cellar.groups.cfg configuration file:
default.bundle.sync = cluster
default.config.sync = cluster
default.feature.sync = cluster
default.obr.urls.sync = cluster
The cluster:sync command allows you to "force" the sync:
karaf@node1()> cluster:sync
Synchronizing cluster group default
        bundle: done
        config: done
        feature: done
        obr.urls: No synchronizer found for obr.urls
It’s also possible to sync only a resource using:
  • -b (--bundle) for bundle
  • -f (--feature) for feature
  • -c (--config) for configuration
  • -o (--obr) for OBR URLs
or a given cluster group using the -g (--group) option.
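For example, to force a sync of only the features of the default cluster group (a sketch; the actual output may differ):

karaf@node1()> cluster:sync -f -g default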

6. Cellar groups

You can define groups in Cellar. A group allows you to define specific nodes and resources that are to work together. This means that nodes outside the group do not need to be synchronized with changes made on a node within the group.
By default, the Cellar nodes go into the default group:
karaf@root()> cluster:group-list
  | Group   | Members
-----------------------------------------------
x | default | node2:5702 node1:5701(x)
The x indicates a local group. A local group is a group containing the local node (where we are connected).

6.1. New group

You can create a new group using the group-create command:
karaf@root()> cluster:group-create test
For now, the test group doesn't have any nodes:
karaf@node1()> cluster:group-list
  | Group   | Members
-----------------------------------------------
x | default | node2:5702 node1:5701(x)
  | test    |
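
Nodes can then join the new group with the cluster:group-join command (a sketch; the node id is a placeholder):

karaf@node1()> cluster:group-join test node2:5702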

7. DOSGi and Transport
DOSGi (Distributed OSGi) enables the distribution of OSGi services across the Cellar nodes.
The purpose of the Cellar DOSGi is to leverage the Cellar resources (Hazelcast instances, distributed map, etc), and to be very easy to use.
DOSGi is provided by installing the optional feature cellar-dosgi.
To be available and visible to the other nodes, the OSGi service only has to have the service.exported.interfaces property:
<service ref="MyService" interface="my.interface">
  <service-properties>
    <entry key="service.exported.interfaces" value="*"/>
  </service-properties>
</service>
You can see all OSGi services "flagged" as distributed (available for the nodes) using the cluster:service-list command:
karaf@root()> cluster:service-list
A "client" bundle can then use this service. If the service is not available locally, Cellar will "route" the service call to the remote node containing the service.

8. Discovery Services

The Discovery Services allow you to use third-party libraries to discover the nodes that are members of the Cellar cluster.

8.1. jClouds

Cellar relies on Hazelcast (http://www.hazelcast.com) in order to discover cluster nodes. This can happen either by using unicast or multicast, or by specifying the IP address of each node. See the Core Configuration section for details.
Unfortunately, multicast is not allowed by most IaaS providers, and the alternative of specifying all IP addresses creates maintenance difficulties, especially since in most cases the addresses are not known in advance.
Cellar solves this problem using a cloud discovery service powered by jclouds (http://jclouds.apache.org).

8.1.1. Cloud discovery service

Most cloud providers provide cloud storage among other services. Cellar uses the cloud storage via jclouds in order to determine the IP addresses of each node, so that Hazelcast can find them.
This approach is also called a blackboard and refers to the process where each node registers itself in a common storage area so that other nodes know of its existence.

8.1.2. Installing Cellar cloud discovery service

To install the cloud discovery service, simply install the appropriate jclouds provider and then install the cellar-cloud feature. Amazon S3 is used here for this example, but the below applies to any provider supported by jclouds.
karaf@root()> feature:install jclouds-aws-s3
karaf@root()> feature:install cellar-cloud
Once the feature is installed, you're required to create a configuration that contains the credentials and the type of the cloud storage (aka blobstore). To do that, add a configuration file under the etc folder with the name org.apache.karaf.cellar.cloud-<provider>.cfg and place the following information there:
provider=aws-s3 (this varies according to the blobstore provider)
identity=<the identity of the blobstore account>
credential=<the credential/password of the blobstore account>
container=<the name of the bucket>
validity=<the amount of time an entry is considered valid; after that time the entry is removed>
For instance, you can create etc/org.apache.karaf.cellar.cloud-mycloud.cfg containing:
provider=aws-s3
identity=username
credential=password
container=cellar
validity=360000
NB: you can find the cloud providers supported by jclouds here: http://repo1.maven.org/maven2/org/apache/jclouds/provider/. You have to install the corresponding jclouds feature for the provider.
After creating the file, the service will check for new nodes. If new nodes are found, the Hazelcast instance configuration will be updated and the instance restarted.

8.1.3. Kubernetes & docker.io

Kubernetes is an open source orchestration system for docker.io containers. It handles scheduling onto nodes in a compute cluster and actively manages workloads to ensure that their state matches the users' declared intentions. Using the concepts of "labels", "pods", "replicationControllers" and "services", it groups the containers which make up an application into logical units for easy management and discovery. Following the aforementioned concepts will most likely change how you package and provision your Karaf based applications. For instance, you will eventually have to provide a Docker image with a pre-configured Karaf, KAR files in the deployment folder, etc., so that your Kubernetes container may bootstrap everything on boot.
The Cellar Kubernetes discovery service is a great complement to the Karaf docker.io feature (allowing you to easily create and manage docker.io images in and for Karaf).

8.1.4. Kubernetes discovery service

In order to determine the IP address of each node, so that Hazelcast can connect to them, the Kubernetes discovery service queries the Kubernetes API for containers labeled with the pod.label.key and pod.label.value specified in etc/org.apache.karaf.cellar.kubernetes-name.cfg. The name in etc/org.apache.karaf.cellar.kubernetes-name.cfg is a name of your choice; it allows you to create multiple Kubernetes discovery services. Thanks to that, the Cellar nodes can be discovered on different Kubernetes instances.
So, you must be sure to label your containers (pods) accordingly.
After a Cellar node starts up, the Kubernetes discovery service will configure Hazelcast with the currently running Cellar nodes. Since Hazelcast follows a peer-to-peer all-shared topology, whenever nodes come up and down, the cluster will remain up-to-date.

8.1.5. Installing Kubernetes discovery service

To install the Kubernetes discovery service, simply install the cellar-kubernetes feature:
karaf@root()> feature:install cellar-kubernetes
Once the cellar-kubernetes feature is installed, you have to create the Kubernetes provider configuration file. If you have multiple Kubernetes instances, you create one configuration file per instance.
For instance, you can create etc/org.apache.karaf.cellar.kubernetes-myfirstcluster.cfg containing:
host=localhost
port=8080
pod.label.key=name
pod.label.value=cellar
and another one etc/org.apache.karaf.cellar.kubernetes-mysecondcluster.cfg containing:
host=192.168.134.2
port=8080
pod.label.key=name
pod.label.value=cellar
If you change the file, the discovery service will check again for new nodes. If new nodes are found, the Hazelcast configuration will be updated and the instance restarted.

Reference
http://karaf.apache.org/