Friday 25 March 2016

How to install JBoss Fuse Tooling into Eclipse

How to install JBoss Fuse Tooling into Eclipse

JBoss Fuse is an open source, lightweight and modular integration platform in the style of an Enterprise Service Bus (ESB) that supports integration beyond the data center. JBoss Fuse combines several technologies such as Apache Camel, Apache CXF, Apache ActiveMQ, Apache Karaf and Fabric8 in a single integrated distribution.

http://www.jboss.org/products/fuse/get-started/

For this guide I am using STS (Spring Tool Suite) Version: 3.6.2.RELEASE, Build Id: 201410091308, Platform: Eclipse Luna SR1 (4.4.1). Choose your workspace and then you should find yourself on the Eclipse welcome screen (only if you start it for the first time).

You first have to install JBoss Tools from the update site http://download.jboss.org/jbosstools/updates/development/luna/. After that, restart STS and you will get a home page like the one below.

Home page


Now let's click on the "Fuse Integration Project" option on the home page.
There are a few features available from that update site:


JBoss Fuse Tooling Apache Camel Editor:
This feature gives you the route editor for Apache Camel routes, a new project wizard to set up new integration modules, and the option to launch your new routes locally for testing purposes.

JBoss Fuse Server Extension Feature:
When installing this feature you get server adapters for Apache ServiceMix, Apache Karaf, Fabric8 and JBoss Fuse runtimes. They allow you to start/stop those servers and to connect to their shell. Deployment options are also available.




You can review your selection and make changes to it by clicking the Back button if needed. If all is fine you can click the Next button instead. This will lead you to the license agreement screen. You need to agree to the licenses in order to install the software.




Once that is done you can click the Finish button again and the installation will start by downloading all needed plugins and features. Once the download is done, the new features will be installed into your STS folder. Before that happens Eclipse will warn you that you are going to install unsigned content. This happens because the plugins are not signed, but there is nothing to worry about. Just click OK to continue the installation.



After everything is installed Eclipse will ask you for a restart.



Click the Yes button to restart Eclipse. When the restart is done you will be able to select the Fuse Integration perspective from the perspectives list.






Enjoy the Camel route builder and ease your development efforts. It will help you understand the wiring before you start your development.

Thursday 24 March 2016

Apache Camel

Apache Camel for Beginners

Camel is a versatile integration framework: it provides simple, manageable abstractions for the complex systems you are integrating and the “glue” for plugging them together seamlessly. It is a versatile open-source integration framework based on the well-known Enterprise Integration Patterns.

Camel has a very important feature named "message routing". It has two main ways of defining routing rules: the Java-based domain-specific language (DSL) and the Spring XML configuration format. Camel is a mature open source project, available under the liberal Apache 2 license, and it has a strong community.


What is Camel?

Camel is a routing engine, or rather a routing engine builder. By allowing you to define your own routing rules, it lets you decide from which source to accept messages, what processing has to be done, and to which destination to send the messages. An important fact about Camel is that it makes no assumptions about the type of data you need to process. This gives the developer an opportunity to integrate any kind of system, without the need to convert the data.

Camel isn’t an enterprise service bus (ESB), but some call Camel a lightweight ESB because of its support for routing, transformation, monitoring, orchestration, and so forth. Camel doesn’t have a container or a reliable message bus, but it can be deployed in one, such as Open-ESB or the previously mentioned ServiceMix. For that reason, we prefer to call Camel an integration framework rather than an ESB.

You can define routing and mediation rules in a variety of domain-specific languages, including a Java-based fluent API, Spring or Blueprint XML configuration files, and a Scala DSL.

Java DSL

from ("file:/sample").to("jms:sampleQueue");

Spring DSL

<route>
  <from uri="file:/sample"/>
  <to uri="jms:sampleQueue"/>
</route>

Scala DSL

from "file:/sample" -> "jms:sampleQueue"


In all the above examples we define a routing rule that will load files in the “/sample” directory into memory, create a new JMS message with the file contents, and send that message to a JMS queue named sampleQueue.
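To see that route in a runnable context, here is a minimal Java sketch. Note the assumptions: Camel 2.x on the classpath, and a JMS component (such as ActiveMQ) registered in the CamelContext under the name "jms", which is omitted here.

import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class FileToJmsExample {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        // NOTE: a JMS component (e.g. ActiveMQ) must be registered under the
        // name "jms" for the jms: endpoint to resolve; omitted in this sketch.
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // poll files from /sample and send their contents to sampleQueue
                from("file:/sample").to("jms:sampleQueue");
            }
        });
        context.start();
        Thread.sleep(10000); // let the route run for a while
        context.stop();
    }
}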

These are the concepts that Camel was built upon. Since then many other interesting features have been added. I recommend reading Camel in Action.

•Pluggable data formats and type converters for easy message transformation between CSV, JAXB, JSON, XmlBeans, XStream, Zip, etc.

•Pluggable languages to create expressions or predicates for use in the DSL. Some of these languages include: OGNL, JavaScript, Groovy, Python, PHP, Ruby, SQL, XPath, XQuery, etc. (see the sketch after this list)

•Support for the integration of beans and POJOs in various places in Camel.

•Excellent support for testing distributed and asynchronous systems using a messaging approach
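Here is a small sketch of one of those pluggable languages in action: a route that uses an XPath predicate as a filter. The endpoint URIs and the /order[@type='gold'] expression are made up for illustration, not taken from the original post.

import org.apache.camel.builder.RouteBuilder;

public class GoldOrderRoute extends RouteBuilder {
    @Override
    public void configure() {
        // only messages matching the XPath predicate pass the filter;
        // endpoint names and the XML structure are assumptions for the example
        from("file:/orders")
            .filter().xpath("/order[@type='gold']")
            .to("jms:goldOrders");
    }
}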



Own custom Solution: 

Implement an individual solution that works for your problem without separating the problem into little pieces. This works and is probably the fastest alternative for small use cases. But you have to code everything yourself.


Integration Framework: 

Use a framework, which helps to integrate applications in a standardized way using several integration patterns. It reduces efforts a lot. Every developer will easily understand what you did. You do not have to reinvent the wheel each time.


Enterprise Service Bus (ESB): 

Use an ESB to integrate your applications. The ESB often uses an integration framework internally, but there is much more functionality, such as business process management, a registry, or business activity monitoring. You can usually configure routing and similar concerns within a graphical user interface. Usually, an ESB is a complex product, and the learning curve is much higher than with a lightweight integration framework.

If you decide to use an integration framework, you still have three good alternatives in the JVM environment: Spring Integration, Mule, and Apache Camel. They are all lightweight, easy to use, and implement the EIPs. Therefore, they offer a standardized way to integrate applications and can be used even in very complex integration projects.

My personal favorite is Apache Camel due to its awesome Java DSLs, combined with many supported technologies.

Examples:
Camel In Action

Apache Kafka

Apache Kafka for Beginners


Message queuing allows applications to communicate by sending messages to each other. A message queue provides temporary message storage when the destination program is busy or not connected. Most of you know about message queues. A normal message queue is not capable of handling big data, which is where a distributed messaging queue comes to the rescue.

Big data needs a scalable messaging system, meaning it should easily scale to thousands of nodes. The system should be fault tolerant in such a way that it keeps working even if some nodes in the cluster go down, and it should support replication. In short, there should be no single point of failure. The messaging system should also support high throughput, handling millions of messages in a short time.

Here Apache Kafka fits in the world of distributed messaging. Kafka is a distributed, partitioned, replicated commit log service. It provides the functionality of a messaging system, but with a unique design.

Features of Apache Kafka

  • No single point of failure: peer-to-peer architecture, no master-slave setup
  • High throughput
  • Easily scales to thousands of nodes
  • Replication support, so that messages are replicated across the cluster
  • Durable: messages are persisted to the file system
  • Open-sourced by LinkedIn to the Apache community

Apache Kafka was conceptually designed to partition and persist large amounts of messages regardless of whether the consumers are online or not. The main point is not to throttle producers because consumers are failing to consume data fast enough, but to provide a buffer between the flood of events and the system/consumers.
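As a minimal sketch of how a producer hands events to that buffer, here is a hedged Java example. The broker address localhost:9092 and the topic name "events" are assumptions for the example, not from the original post.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SimpleProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        // the broker persists these events whether or not a consumer is online
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 100; i++) {
                producer.send(new ProducerRecord<>("events",
                        Integer.toString(i), "event-" + i));
            }
        }
    }
}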

The AMQP (Advanced Message Queuing Protocol) standard defines that one of each producer, channel, exchange, queue and consumer is required for ordered delivery. This breaks the philosophy of no single point of failure.

Looking at performance, Kafka can sustain a million messages produced per second on just a couple of nodes while keeping the durability and ordered partitioning of data. This performance is considered high, and only the top few companies have higher requirements than this.

Being distributed, Kafka has a fail-over mechanism: if the master node goes down, one of the existing nodes is automatically elected and promoted to master.

If you need to push large messages, or if simplicity and ease of use are what you are after, you should consider one of the lightweight brokers. But if you need reliability and performance at scale and are pushing large amounts of data through your system, then Kafka is the perfect choice.

Example:
Kafka Example

Monday 21 March 2016

Jar was compiled on a 64bit or 32bit system?

Jar was compiled on a 64bit or 32bit system?


Java byte code is platform independent: it doesn't matter whether it was built with a 32-bit or 64-bit JDK, and there is no way to figure this out from the jar.

There is no difference between a jar compiled on a 32-bit system and one compiled on a 64-bit system. It is platform-independent, unless you have a native library dependency.
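What you can check at runtime is the bitness of the JVM executing the jar, not of the jar itself. A minimal sketch, assuming a Sun/Oracle JVM (the sun.arch.data.model property is vendor-specific and may be missing on other JVMs):

public class JvmBitness {
    public static void main(String[] args) {
        // these properties describe the running JVM, not the jar itself;
        // "sun.arch.data.model" is Sun/Oracle-specific
        System.out.println("data model: "
                + System.getProperty("sun.arch.data.model", "unknown"));
        System.out.println("os.arch:    " + System.getProperty("os.arch"));
    }
}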

Also, some 64-bit JVMs use compressed oops. Because objects are aligned on 8- or 16-byte boundaries, a 32-bit reference can be shifted to address 2^35 or 2^36 bytes of heap on a 64-bit platform.


Wednesday 9 March 2016

Java Socket Programming

Java Socket Programming

A socket is one end-point of a two-way communication link between two programs running on the network. Socket classes are used to represent the connection between a client program and a server program when you are writing a client-server application.

TCP/IP and UDP/IP communications

There are two communication protocols that one can use for socket programming: datagram communication and stream communication.

Datagram communication:

The datagram communication protocol (UDP) is a connectionless protocol, meaning that each time you send a datagram, you also need to send the local socket descriptor and the receiving socket's address. This means additional data must be sent each time a communication is made.
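A minimal sketch of sending a datagram in Java (the destination "localhost" and port 4321 are arbitrary values for this example):

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;

    public class DatagramSender {
        public static void main(String[] args) throws Exception {
            DatagramSocket socket = new DatagramSocket();
            byte[] data = "hello".getBytes();
            // the destination address and port travel with each packet;
            // no connection is established first
            DatagramPacket packet = new DatagramPacket(
                    data, data.length, InetAddress.getByName("localhost"), 4321);
            socket.send(packet);
            socket.close();
        }
    }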

Stream communication:

The stream communication protocol is known as TCP (Transmission Control Protocol).
TCP is a connection-oriented protocol. In order to communicate over the TCP protocol, a connection must first be established between the pair of sockets. Once two sockets have been connected, they can be used to transmit data in both directions.


Example of Java Socket Programming 

Server:

    import java.io.BufferedReader;
    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.io.InputStreamReader;
    import java.net.ServerSocket;
    import java.net.Socket;

    class SampleServer {
        public static void main(String[] args) throws Exception {
            ServerSocket serverSocket = new ServerSocket(1234);
            // wait for a client to connect
            Socket socket = serverSocket.accept();
            DataOutputStream dataOuSt = new DataOutputStream(socket.getOutputStream());
            DataInputStream dataInSt = new DataInputStream(socket.getInputStream());
            BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
            String string = "";
            String string2 = "";
            while (!string.equals("stop")) {
                string = dataInSt.readUTF();   // message from the client
                System.out.println("Message: " + string);
                string2 = br.readLine();       // reply typed on the console
                dataOuSt.writeUTF(string2);
                dataOuSt.flush();
            }
            dataInSt.close();
            socket.close();
            serverSocket.close();
        }
    }

Client:

    import java.io.BufferedReader;
    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.io.InputStreamReader;
    import java.net.Socket;

    class SampleClient {
        public static void main(String[] args) throws Exception {
            Socket socket = new Socket("localhost", 1234);
            DataInputStream dataInSt = new DataInputStream(socket.getInputStream());
            DataOutputStream dataOuSt = new DataOutputStream(socket.getOutputStream());
            BufferedReader br = new BufferedReader(new InputStreamReader(System.in));

            String string = "";
            String string2 = "";
            while (!string.equals("stop")) {
                string = br.readLine();        // message typed on the console
                dataOuSt.writeUTF(string);     // send it to the server
                dataOuSt.flush();
                string2 = dataInSt.readUTF();  // read the server's reply
                System.out.println("Message: " + string2);
            }

            dataOuSt.close();
            socket.close();
        }
    }

------------------------------




What is new in BizViz 2.0.0 (2.x)?

Big Data BizViz

BizViz Analytic platform version 2.0.0 is released with all its plugins - Report plugin, Self Service BI, Predictive Analysis, Survey, Social Media Browser, Sentiment Analysis, Dashboard Designer - all integrated together in one platform.


This is a major new release. More than just bug fixes, it brings several refactorings and new features.

  • Introducing Business Story: 

Offering Self Service BI

  • Introducing Predictive Analysis:

Offering Statistical Analysis

  • Dashboard Designer: 

Advanced version

  • SMB:

The Social Media Browser has been improved for better extraction of data from the web and social media.

  • Survey:

Added 16 more question types, bringing the total to 27.


It’s time to take a tour in the new BizViz 2.0.0.
http://bdbizviz.com/releasenotes.html

Monday 7 March 2016

Load balancing with Apache Karaf Cellar

Load balancing with Apache Karaf Cellar

Consider that your application has different components like CXF services, Camel routes, DOSGi services, distributed sessions, etc. If you deploy your application on several Karaf nodes with the help of Cellar, you may need load balancing for the CXF/HTTP endpoints; that is, the HTTP requests need to be load balanced across the Karaf nodes.


Apache httpd with mod_proxy_balancer

There are different ways to do that; here I am explaining Apache httpd with mod_proxy_balancer. It’s a very stable solution and easy to set up.

Consider you have three Karaf nodes, having the following services:
  • http://192.168.1.10:8181/services
  • http://192.168.1.11:8181/services
  • http://192.168.1.12:8181/services

Install Apache httpd:

# on Debian/Ubuntu system
aptitude install apache2

# on CentOS/RHEL/Fedora system
yum install httpd

# on SELinux-enabled systems, allow httpd to make network connections
/usr/sbin/setsebool -P httpd_can_network_connect 1

Just check that the mod_proxy, mod_proxy_http, and mod_proxy_balancer modules are enabled in the main httpd.conf.

Edit the main httpd.conf or create a conf file in etc/httpd/conf.d and add

<Proxy balancer://mycluster>
  BalancerMember http://192.168.1.10:8181
  BalancerMember http://192.168.1.11:8181
  BalancerMember http://192.168.1.12:8181
</Proxy>
ProxyPass /services balancer://mycluster

This load balancer will proxy the /services requests to the different Karaf nodes.
The mod_proxy_balancer module is able to support sticky sessions as well. This means that when a request is proxied to one node, all following requests from the same user should be proxied to the same node.
For this, you can use a cookie in the header to define the session:

Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED
<Proxy balancer://mycluster>
  BalancerMember http://192.168.1.10:8181 route=1
  BalancerMember http://192.168.1.11:8181 route=2
ProxySet stickysession=ROUTEID
</Proxy>
ProxyPass /myapp balancer://mycluster

mod_proxy_balancer: web manager

This allows you to see whether your Karaf nodes are up or not, the number of requests received by each node, the current load balancing method in use, and so on.

<Location /balancer-manager>
  SetHandler balancer-manager
  Order allow,deny
  Allow from all
</Location>

Point your browser to http://host:port/balancer-manager and you will see the manager page.
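If you prefer to check from code rather than the browser, here is a small hedged Java sketch that fires a few requests through the balancer. The front-end host, port and path are assumptions; adjust them to your setup.

import java.net.HttpURLConnection;
import java.net.URL;

public class BalancerSmokeTest {
    public static void main(String[] args) throws Exception {
        // hypothetical front-end address; adjust host/port/path to your setup
        URL url = new URL("http://localhost/services");
        for (int i = 0; i < 5; i++) {
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            // without the ROUTEID cookie, successive requests may land on different nodes
            System.out.println("request " + i + " -> HTTP " + conn.getResponseCode());
            conn.disconnect();
        }
    }
}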

Reference:

http://httpd.apache.org/docs/2.2/mod/mod_proxy_balancer.html



Saturday 5 March 2016

Apache Karaf Cellar and DOSGi

Apache Karaf Cellar and DOSGi


Distributed OSGi enables the distribution of OSGi services across the Cellar nodes.

The purpose of Cellar DOSGi is to leverage the Cellar resources (Hazelcast instances, distributed maps, etc.) and to be very easy to use.

Karaf Cellar DOSGi installation

DOSGi is provided by installing the optional feature cellar-dosgi.

karaf@root> feature:repo-add mvn:org.apache.karaf.cellar/apache-karaf-cellar/3.0.3/xml/features
karaf@root> feature:install cellar
karaf@root> feature:install cellar-dosgi

List Distributed services

You can see all the OSGi services flagged as distributed using the cluster:service-list command:

karaf@root()> cluster:service-list

DOSGi Example

  • The service provider bundle is installed on node A and “flags” a distributed service
  • The client bundle is installed on node B and will use the provider service

Provider bundle


The provider bundle exposes an OSGi service as a distributed service.
The service does a simple echo.

Interface:
package com.vimal.karaf.cellar.sample;

public interface SampleService {

  String process(String message);

}

Implementation:
package com.vimal.karaf.cellar.sample;

public class SampleServiceImpl implements SampleService {

  public String process(String message) {
    return "Distributed service sample: " + message;
  }

}

To expose the service in Karaf, we create a blueprint descriptor:

<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

  <bean id="sampleService" class="com.vimal.karaf.cellar.sample.SampleServiceImpl"/>

  <service ref="sampleService" interface="com.vimal.karaf.cellar.sample.SampleService">
    <service-properties>
      <entry key="service.exported.interfaces" value="*"/>
    </service-properties>
  </service>

</blueprint>

The only “important” part is that we added the service.exported.interfaces property, which is just a flag to define which interface/service to expose as distributed. This means it’s really easy to turn an existing service into a distributed service.

Client Bundle

The client bundle will get a reference to the sampleService. The reference will be a kind of proxy to the service implementation located remotely, on another node.

package com.vimal.karaf.cellar.sample.client;

import com.vimal.karaf.cellar.sample.SampleService;

public class SampleClient {

  private SampleService sampleService;

  public void trigger() throws Exception {
    int i = 0;
    while (true) {
      System.out.println(sampleService.process("Count" + i));
      Thread.sleep(10000);
      i++;
    }
  }

  public SampleService getSampleService() {
    return this.sampleService;
  }

  public void setSampleService(SampleService sampleService) {
    this.sampleService = sampleService;
  }

}

Blueprint:
<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

  <reference id="sampleService" interface="com.vimal.karaf.cellar.sample.SampleService"/>

  <bean id="sampleClient" class="com.vimal.karaf.cellar.sample.client.SampleClient" init-method="trigger">
    <property name="sampleService" ref="sampleService"/>
  </bean>

</blueprint>

If a local sampleService is available, the OSGi framework will bind the reference to this service; otherwise Cellar will look for a distributed service (on all nodes) exporting the SampleService interface and bind a proxy to the distributed service.

Leave your comments.