In this article I will show how Apache Camel can turn a full-stack Linux microcomputer (such as a Raspberry Pi) into a device that collects GPS coordinates.
I've built a small example of a standalone Java application that serves static HTML, JavaScript, and CSS content and also publishes a REST web service.
A step-by-step guide for migrating a web application from Struts to Spring MVC, covering essential changes in libraries, configurations, and code structure.
[This article was written by Juergen Hoeller.] Spring is well known for actively supporting the latest versions of common open source projects out there, e.g. Hibernate and Jackson, but also common server engines such as Tomcat and Jetty. We usually do this in a backwards-compatible fashion, supporting older versions at the same time - either through reflective adaptation or through separate support packages. This allows applications to selectively decide about upgrades, e.g. upgrading to the latest Spring and Jackson versions while preserving an existing Hibernate 3 investment. With the upcoming Spring Framework 4.2, we are taking the opportunity to support quite a list of new open source project versions, including some rather major ones:

- Hibernate ORM 5.0
- Hibernate Validator 5.2
- Undertow 1.2 / WildFly 9
- Jackson 2.6
- Jetty 9.3
- Reactor 2.0
- SockJS 1.0 final
- Moneta 1.0 (the JSR-354 Money & Currency reference implementation)

While early support for the above is shipping in the Spring Framework 4.2 RCs already, the ultimate point we're working towards is of course 4.2 GA - scheduled for July 15th. At this point, we're eagerly waiting for Hibernate ORM 5.0 and Hibernate Validator 5.2 to go GA (both of them are currently at RC1), as well as WildFly 9.0 (currently at RC2) and Jackson 2.6 (currently at RC3). Tight timing... If they are not all final by our own 4.2 GA on July 15th, we'll keep supporting the latest release candidates, rolling any remaining GA support into our 4.2.1 if necessary. If you'd like to give some of those current release candidates a try with Spring, let us know how it goes. Now is a perfect time for such feedback towards Spring Framework 4.2 GA!

P.S.: Note that you may of course keep using e.g. Hibernate ORM 3.6+ and Hibernate Validator 4.3+ even with Spring Framework 4.2. A migration to Hibernate ORM 5.0 in particular is likely to affect quite a bit of your setup, so we only recommend it as part of a major revision of your application, whereas Spring Framework 4.2 itself is designed as a straightforward upgrade path with no impact on existing code and is therefore immediately recommended to all users.
Continuing my Spring Cloud learning journey, earlier I had covered how to write the infrastructure components of a typical Spring Cloud and Netflix OSS based microservices environment - in that instance two critical components: Eureka, to register and discover services, and Spring Cloud Config, to maintain a centralized repository of configuration for a service. Here I will show how I developed two dummy microservices, one a simple "pong" service and a "ping" service which uses the "pong" service.

Sample-Pong microservice

The endpoint handling the "ping" requests is a typical Spring MVC based endpoint:

@RestController
public class PongController {

    @Value("${reply.message}")
    private String message;

    @RequestMapping(value = "/message", method = RequestMethod.POST)
    public Resource<MessageAcknowledgement> pongMessage(@RequestBody Message input) {
        return new Resource<>(
                new MessageAcknowledgement(input.getId(), input.getPayload(), message));
    }
}

It gets a message and responds with an acknowledgement. The service uses the Configuration server to source the "reply.message" property. So how does the "pong" service find the Configuration server? There are potentially two ways - directly, by specifying the location of the Configuration server, or by finding the Configuration server via Eureka. I am used to an approach where Eureka is considered the source of truth, so in that spirit I am using Eureka to find the Configuration server. Spring Cloud makes this entire flow very simple; all it requires is a "bootstrap.yml" property file with entries along these lines:

---
spring:
  application:
    name: sample-pong
  cloud:
    config:
      discovery:
        enabled: true
        serviceId: SAMPLE-CONFIG

eureka:
  instance:
    nonSecurePort: ${server.port:8082}
  client:
    serviceUrl:
      defaultZone: http://${eureka.host:localhost}:${eureka.port:8761}/eureka/

The location of Eureka is specified through the "eureka.client.serviceUrl" property, and "spring.cloud.config.discovery.enabled" is set to "true" to specify that the Configuration server is discovered via the specified Eureka server.

Just a note: this means that Eureka and the Configuration server have to be completely up before trying to bring up the actual services; they are prerequisites, and the underlying assumption is that the infrastructure components are available at application boot time.

The Configuration server holds the properties for the "sample-pong" service; this can be validated by using the Config server's endpoint - http://localhost:8888/sample-pong/default (8888 is the port I had specified for the server endpoint), which should respond with content along these lines:

{
  "name": "sample-pong",
  "profiles": [
    "default"
  ],
  "label": "master",
  "propertySources": [
    {
      "name": "classpath:/config/sample-pong.yml",
      "source": {
        "reply.message": "Pong"
      }
    }
  ]
}

As can be seen, the "reply.message" property from this central Configuration server will be used by the pong service as the acknowledgement message.

Now, to set up this endpoint as a service, all that is required is a Spring Boot based entry point along these lines:

@SpringBootApplication
@EnableDiscoveryClient
public class PongApplication {

    public static void main(String[] args) {
        SpringApplication.run(PongApplication.class, args);
    }
}

and that completes the code for the "pong" service.
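One piece the post does not list is the pair of domain classes exchanged by both services. A minimal sketch of what they might look like follows - the field names are assumptions derived from the getId()/getPayload() calls and the three-argument acknowledgement constructor above, not the original code (and in practice each public class lives in its own file):

// Hypothetical sketch of the request payload.
public class Message {

    private String id;
    private String payload;

    public Message() { }

    public Message(String id, String payload) {
        this.id = id;
        this.payload = payload;
    }

    public String getId() { return id; }
    public String getPayload() { return payload; }
}

// Hypothetical sketch of the response payload carrying the acknowledgement text.
public class MessageAcknowledgement {

    private String id;
    private String payload;
    private String ack;

    public MessageAcknowledgement() { }

    public MessageAcknowledgement(String id, String payload, String ack) {
        this.id = id;
        this.payload = payload;
        this.ack = ack;
    }

    public String getId() { return id; }
    public String getPayload() { return payload; }
    public String getAck() { return ack; }
}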
Sample-ping microservice

So now onto a consumer of the "pong" microservice, very imaginatively named the "ping" microservice.

Spring Cloud and Netflix OSS offer a lot of options to invoke endpoints on Eureka-registered services. To summarize, the options that I had:

1. Use the raw Eureka DiscoveryClient to find the instances hosting a service and make calls using Spring's RestTemplate.
2. Use Ribbon, a client-side load balancing solution which can use Eureka to find service instances.
3. Use Feign, which provides a declarative way to invoke a service call. It internally uses Ribbon.

I went with Feign. All that is required is an interface which shows the contract to invoke the service:

package org.bk.consumer.feign;

import org.bk.consumer.domain.Message;
import org.bk.consumer.domain.MessageAcknowledgement;
import org.springframework.cloud.netflix.feign.FeignClient;
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.ResponseBody;

@FeignClient("samplepong")
public interface PongClient {

    @RequestMapping(method = RequestMethod.POST, value = "/message",
            produces = MediaType.APPLICATION_JSON_VALUE,
            consumes = MediaType.APPLICATION_JSON_VALUE)
    @ResponseBody
    MessageAcknowledgement sendMessage(@RequestBody Message message);
}

The annotation @FeignClient("samplepong") internally points to a Ribbon "named" client called "samplepong". This means that there has to be an entry in the property files for this named client; in my case I have these entries in my application.yml file:

samplepong:
  ribbon:
    DeploymentContextBasedVipAddresses: sample-pong
    NIWSServerListClassName: com.netflix.niws.loadbalancer.DiscoveryEnabledNIWSServerList
    ReadTimeout: 5000
    MaxAutoRetries: 2

The most important entry here is "samplepong.ribbon.DeploymentContextBasedVipAddresses", which points to the "pong" service's Eureka registration address, through which the service instance will be discovered by Ribbon.

The rest of the application is a routine Spring Boot application. I have exposed this service call behind Hystrix, which guards against service call failures and essentially wraps around this FeignClient:

package org.bk.consumer.service;

import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand;
import org.bk.consumer.domain.Message;
import org.bk.consumer.domain.MessageAcknowledgement;
import org.bk.consumer.feign.PongClient;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.stereotype.Service;

@Service("hystrixPongClient")
public class HystrixWrappedPongClient implements PongClient {

    @Autowired
    @Qualifier("pongClient")
    private PongClient feignPongClient;

    @Override
    @HystrixCommand(fallbackMethod = "fallBackCall")
    public MessageAcknowledgement sendMessage(Message message) {
        return this.feignPongClient.sendMessage(message);
    }

    public MessageAcknowledgement fallBackCall(Message message) {
        MessageAcknowledgement fallback = new MessageAcknowledgement(message.getId(),
                message.getPayload(), "FAILED SERVICE CALL! - FALLING BACK");
        return fallback;
    }
}
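The post describes the rest of the ping application as routine, so the consumer endpoint itself is not shown. A minimal sketch of what such an endpoint could look like is below; the controller name and the "/ping" mapping are assumptions for illustration, not the original code:

package org.bk.consumer.web; // hypothetical package, for illustration only

import org.bk.consumer.domain.Message;
import org.bk.consumer.domain.MessageAcknowledgement;
import org.bk.consumer.feign.PongClient;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

// Sketch of a "ping" endpoint that delegates to the Hystrix-wrapped Feign client above.
@RestController
public class PingController {

    private final PongClient pongClient;

    // Inject the Hystrix-wrapped bean rather than the raw Feign client
    @Autowired
    public PingController(@Qualifier("hystrixPongClient") PongClient pongClient) {
        this.pongClient = pongClient;
    }

    @RequestMapping(value = "/ping", method = RequestMethod.POST)
    public MessageAcknowledgement ping(@RequestBody Message message) {
        return pongClient.sendMessage(message);
    }
}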
Boot"ing up

I have dockerized my entire set-up, so the simplest way to start up the set of applications is to first build the docker images for all of the artifacts this way:

mvn clean package docker:build -DskipTests

and bring all of them up using the following command, the assumption being that both docker and docker-compose are available locally:

docker-compose up

Assuming everything comes up cleanly, Eureka should show all the registered services at the http://dockerhost:8761 url. The UI of the ping application should be available at the http://dockerhost:8080 url. Additionally, a Hystrix dashboard should be available to monitor the requests to the "pong" app at this url: http://dockerhost:8989/hystrix/monitor?stream=http%3A%2F%2Fsampleping%3A8080%2Fhystrix.stream

References
1. The code is available at my github location - https://github.com/bijukunjummen/spring-cloud-ping-pong-sample
2. Most of the code is heavily borrowed from the spring-cloud-samples repository - https://github.com/spring-cloud-samples
When creating a Camel route using HTTP, the destination might require an SSL connection with a self-signed certificate. Therefore, on our HTTP client we should register a TrustManager that supports the certificate. In our case we will use the https4 component of Apache Camel.

First we configure the routes and add them to the Camel context:

RouteBuilder routeBuilder = new RouteBuilder() {
    @Override
    public void configure() throws Exception {
        from("http://localhost")
            .to("https4://securepage");
    }
};
routeBuilder.addRoutesToCamelContext(camelContext);

But before we proceed to start the Camel context, we should register the trust store on the component we are going to use. Therefore we implement a method that creates an SSL context with the trust store and configures the HTTP client of the https4 component. Suppose the JKS file with the imported certificate is located at the root of our classpath:

private void registerTrustStore(CamelContext camelContext) {
    try {
        // Load the trust store containing the self-signed certificate
        KeyStore truststore = KeyStore.getInstance("JKS");
        truststore.load(getClass().getClassLoader().getResourceAsStream("example.jks"),
                "changeit".toCharArray());

        TrustManagerFactory trustFactory = TrustManagerFactory.getInstance("SunX509");
        trustFactory.init(truststore);

        SSLContext sslcontext = SSLContext.getInstance("TLS");
        sslcontext.init(null, trustFactory.getTrustManagers(), null);

        final SSLSocketFactory factory =
                new SSLSocketFactory(sslcontext, SSLSocketFactory.ALLOW_ALL_HOSTNAME_VERIFIER);

        // Configure the https4 component's underlying HttpClient with the socket factory
        HttpComponent https4 = camelContext.getComponent("https4", HttpComponent.class);
        https4.setHttpClientConfigurer(new HttpClientConfigurer() {
            @Override
            public void configureHttpClient(HttpClientBuilder builder) {
                builder.setSSLSocketFactory(factory);
                Registry<ConnectionSocketFactory> registry =
                        RegistryBuilder.<ConnectionSocketFactory>create()
                                .register("https", factory)
                                .build();
                HttpClientConnectionManager connectionManager =
                        new BasicHttpClientConnectionManager(registry);
                builder.setConnectionManager(connectionManager);
            }
        });
    } catch (IOException | NoSuchAlgorithmException | CertificateException
            | KeyStoreException | KeyManagementException e) {
        e.printStackTrace();
    }
}

After that, our route will be able to access the destination securely.
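Putting it together, the ordering matters: the trust store has to be registered on the https4 component before the context starts routing. Here is a minimal bootstrap sketch under a few assumptions - the class name and the timer trigger endpoint are made up, and registerTrustStore(...) is the method shown above (its body is omitted here):

import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class SecureRouteMain {

    public static void main(String[] args) throws Exception {
        new SecureRouteMain().run();
    }

    private void run() throws Exception {
        CamelContext camelContext = new DefaultCamelContext();

        // 1. Configure the https4 component's trust store before anything starts.
        registerTrustStore(camelContext);

        // 2. Add a route that calls the endpoint protected by the self-signed certificate.
        camelContext.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                from("timer:poll?period=10000")   // hypothetical trigger endpoint
                    .to("https4://securepage");
            }
        });

        // 3. Only now start the context, run for a while, then shut down.
        camelContext.start();
        Thread.sleep(30000);
        camelContext.stop();
    }

    private void registerTrustStore(CamelContext camelContext) {
        // plug in the registerTrustStore(...) implementation shown above
    }
}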
I have just recorded a 5-minute video that demonstrates running an out-of-the-box example from the Apache Camel release, camel-example-servlet, packaged as a docker container and running on a kubernetes platform such as OpenShift 3. The camel-servlet-example is scaled up to 3 running containers (pods), which is easy with kubernetes and fabric8. In the video I have already deployed the example, and I then demonstrate how we can use the fabric8 web console to manage our application, and also connect to the running container and see inside it, such as the Camel routes visualized as shown above. Then I run a simple bash script from my laptop that sends an HTTP GET to the Camel example and prints the response. The script runs in an endless loop and demonstrates how kubernetes can easily scale up and down multiple Camel containers and load balance across the running containers. At the end it is even self-healing when I forcibly kill docker containers. So I suggest you grab a fresh cup of tea or coffee, sit back and play the 5-minute video. The video is hosted on vimeo and can be seen from this link.
As I get older, I come increasingly to a functionalist position on the world around me: that the social structures and constructs we see around us are there through a process of evolution and serve some sort of positive benefit, because otherwise they would have fallen foul of natural selection. Sometimes this can seem counter-intuitive; crime, for example, has a positive benefit in allowing the vast majority of us to know clearly the boundaries of things that are wrong – a world without law-breaking would probably be a world without laws, not one without crime.

When I look at organisations, I seem to see an inexorable move away from structures based on professional expertise. The boundaries by which we define organisations seem to be caught in a previous age, and their utility is becoming less and less. I had the privilege yesterday to spend my time working with a group of Market Insight professionals from a very broad set of organisations. The challenges that they are facing seem in many ways to be linked to a sense of purpose: what the heck is “insight”, and why should a single operational part of an organisation have sole responsibility for it?

Organising an organisation by specialism – sales, marketing, operations, production, HR, Finance, IT, legal and so on – is decreasing in function. If that sounds outlandish, remember that most manufacturing organisations moved from specialism-focus to product-focus at the beginning of the last century with the advent of the production line. And today, with matrix management and project-centric management the norm rather than the exception, we’ve implicitly acknowledged the passing of professional specialism. The creation of new divisions like “digital” or “innovation” – at their core, multi-disciplinary activities – again implicitly acknowledges that modern organisations need to have a design based on the things that people are doing, not the skills that they collectively have in common. With the ways in which we can communicate and collaborate (and the breaking down of 9-5 “in the office” through flexible working patterns), the big benefit of specialist operating divisions, communities of practice, is lessened dramatically too. Do I gain more professionally (and does my organisation gain more) by surrounding myself with colleagues who have skills, rather than activity, in common?

This isn’t going to happen overnight. Our professional structures are so ingrained that change will permeate at an evolutionary rather than revolutionary pace; some of the memes (like matrix management) are already in place; there are some mutations that will probably wither (looking at you, Holacracy).

How to prepare? I’ve heard a lot in recent years about “T-shaped” workers: people with a strong expertise (the down stroke of the T) but with a breadth of knowledge and experience (the horizontal stroke). I personally am hedging more on being “comb-shaped” – many more areas of domain knowledge, and enough understanding to be able to bring in an expert where necessary. The irony that because of follicle challenge I haven’t used a comb in many years isn’t lost…
Written by Craig Wentworth. To understand the furor that’s greeted recent vendor announcements around open source analytics computing engine Spark, and some commentary seemingly setting up a Spark versus Hadoop battle, it’s worth taking a moment to recap on what each actually is (and is not). As I covered in last year’s MWD report on Hadoop and its family of tools, when people talk about Apache Hadoop they’re often referring to a whole framework of tools designed to facilitate distributed parallel processing of large datasets. That processing was traditionally confined to MapReduce batch jobs in Hadoop’s early days, though Hadoop 2 brought the YARN resource scheduler and opened up Hadoop to streaming, real-time querying and a wider array of analytical programming applications (beyond MapReduce). Spark has been designed to run on top of Hadoop’s Distributed File System (amongst other data platforms) as an alternative to MapReduce – tuned for real-time streaming data processing and fast interactive queries, and with multi-genre analytics applicability (machine learning, time series, graph, SQL, streaming out-of-the-box). It gets that speed advantage by caching in-memory (rather than writing interim results to disk, as MapReduce does), but with that approach comes a need for higher-spec physical machines (compared with MapReduce’s tolerance for commodity hardware). So, Spark isn’t about to replace Hadoop -- but it may well supplant MapReduce (especially in growing real-time use cases). Those “Spark vs Hadoop” headlines are about as meaningful as one proclaiming “mushrooms vs pizza." Yes, mushroom might be a more suitable topping than, say, pepperoni (especially in a vegetarian use case), but it’ll still be deployed on the same dough and tomato sauce pizza platform. Nobody’s about to suggest the mushroom should go it alone! But what’s behind the headlines and the hype is a story of enterprise adoption – or at least vendors anticipating that adoption and investing in ‘the weaponization of Spark’ as it faces the more exacting standards of security, scaling performance, consistency, etc. which come with mainstream enterprise deployment. Big names like IBM, Databricks (the company formed by the originators of Spark), and MapR made commitments in and around the Spark Summit earlier this month. MapR has announced three new Quick Start Solutions for its Hadoop distribution to help customers get started with Spark in real-time security log analytics, genome sequencing, and time series analytics; Databricks’ cloud-hosted Spark platform (formerly known as Databricks Cloud) has become generally available; and IBM announced a raft of measures designed to give Spark a significant shot in the arm – it’s open sourcing its SystemML technology to bolster Spark’s machine learning capabilities, integrating Spark into its own analytics platforms, investing in Spark training and education, committing 3,500 of its researchers and developers to work on Spark-related projects, and offering Spark as a service on its Bluemix developer cloud. Given the overlap with Databricks’ business model (of offering development, certification, and support for Spark), IBM’s intentions are likely to tread on some toes before long – but for now, at least, both companies are content to focus on the combined push benefiting the Spark community and its enterprise aspirations overall (though clearly IBM’s betting on all this investment buying it some influence over where Spark goes next). 
It’s worth bearing in mind that not all its supporters champion Spark wholesale; the interested parties tend to be interested in particular bits of Spark (wide-ranging as it is) because of overlaps with their own preferred toolsets. For instance, although Spark supports many analytics genres, Cloudera focuses on its machine learning capabilities (as it has its own SQL-on-Hadoop tool in Impala), and MapR and Hortonworks also promote Drill and Hive respectively as their favoured sources of SQL-on-Hadoop. IBM’s support is focused on Spark’s machine learning and in-memory capabilities (hence the SystemML open sourcing news). In the face of such strong vendor preferences, how long before some of Spark’s current features fall away (or at least start to show the effects of being starved of as much care and feeding as is bestowed upon vendors’ favourite Spark components)? The Spark community is at much the same place the Hadoop one was at a while back – it’s showing great promise and suitability in key growth workloads (in Spark’s case, such as real-time IoT applications). However, the product as it stands is too immature for many enterprise tastes. Cue enterprise software vendors stepping up to help grow Spark up fast. Their challenge, though, is to smooth out the edges without smothering what made it so interesting in the first place.
One mystery in ASP.NET 5 that people are asking me about is ASP.NET 5 console applications. We have web applications running on some web server - so what is the point of a new type of command-line application that refers by name to a web framework? Here's my explanation.

What do we have with ASP.NET 5?

CoreCLR - a minimal subset of the .NET Framework that is ~11MB in size and supports true side-by-side execution. Yes, your application can specify the exact version of the CLR it needs to run, and it doesn't conflict with other versions of the CLR on the same box.

DNX runtime - formerly known as the K runtime, a console-based runtime to manage CLR versions, restore packages and run the commands that our application defines.

Framework-level dependency injection - this is something we don't have with classic console applications that have a static entry point, but we do have it with ASP.NET console applications (they also have a Main method, but it's not static).

More independence from Visual Studio - it's easier to build and run applications on build and continuous integration servers, as there is no need (or less need) for Visual Studio and its components.

Applications can define their own commands for different things, like generating EF migrations and running unit tests. Also, ASP.NET 5 is more independent from IIS and can be hosted by far smaller servers. Microsoft provides with ASP.NET 5 a simple web listener server and a new server called Kestrel that is based on libuv and can also be used in Unix-based environments.

Application commands

Your application can define commands that the DNX runtime is able to read from your application configuration file. All these commands are actually ASP.NET console applications that run on the command line with no need for Visual Studio installed on your box. When you run a command using DNX, DNX creates an instance of a class and looks for its Main() method. I will come back to those commands in future posts.

Framework-level dependency injection

What we don't have with classic console applications is framework-level dependency injection. I think it's not easy to implement when the application is actually a class with one static entry point. ASP.NET console applications can be more aware of the technical environment around them by supporting dependency injection. Also, we can take our console program to all environments where CoreCLR is running and we don't have to worry about the platform.

New environments

Speculation alert! All ideas in this little chapter are pure speculation and I have no public or secret sources to refer to. This is just what I see possibly coming in the near future. But I'm not a successful sidekick.

CoreCLR can take our ASP.NET applications to different new environments. On the Azure cloud we will possibly see that WebJobs can be built as ASP.NET console applications and we can host them with web applications built for CoreCLR. As CoreCLR is very small - remember, just 11MB - I'm almost sure that ASP.NET 5 and console applications will find their way to small devices like the Raspberry Pi, routers, wearables and so on. It's possible we don't need web server support in those environments but we still want to use CoreCLR from the console. Maybe this market is not big today, but it will be huge tomorrow.

Wrapping up

Although the name "ASP.NET console application" is a little confusing, we can think of these applications as console applications for DNX. Currently the main usage for them is ASP.NET 5 commands, but by my speculation we will see many more scenarios for these applications in the near future.
When running a Maven build with many plugins (e.g. the jOOQ or Flyway plugins), you may want to have a closer look under the hood to see what's going on internally in those plugins, or in your extensions of those plugins. This may not appear obvious when you're running Maven from the command line, e.g. via:

C:\Users\jOOQ\workspace>mvn clean install

Luckily, it is rather easy to debug Maven. In order to do so, just create the following batch file on Windows:

@ECHO OFF
IF "%1" == "off" (
    SET MAVEN_OPTS=
) ELSE (
    SET MAVEN_OPTS=-Xdebug -Xnoagent -Djava.compile=NONE -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=5005
)

Of course, you can do the same on a Mac OS X or Linux box by using export instead of SET. Now, run the above batch file and proceed again with building:

C:\Users\jOOQ\workspace>mvn_debug

C:\Users\jOOQ\workspace>mvn clean install
Listening for transport dt_socket at address: 5005

Your Maven build will now wait for a debugger client to connect to your JVM on port 5005 (change it to any other suitable port). We'll do that now with Eclipse. Just add a new Remote Java Application that connects on a socket, and hit "Debug". That's it. We can now set breakpoints and debug through our Maven process like through any other similar kind of server process. Of course, things work exactly the same way with IntelliJ or NetBeans. Once you're done debugging your Maven process, simply call the batch file again with the parameter off:

C:\Users\jOOQ\workspace>mvn_debug off

C:\Users\jOOQ\workspace>mvn clean install

And your Maven builds will no longer be debugged. Happy debugging!
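If the plugin you want to step through is one of your own, the suspended build will stop on any breakpoint you set in its Mojo once the debugger attaches. A purely hypothetical, minimal Mojo for such an experiment might look like this - the goal name and parameter are made up for the sketch, not taken from the article:

import org.apache.maven.plugin.AbstractMojo;
import org.apache.maven.plugin.MojoExecutionException;
import org.apache.maven.plugins.annotations.Mojo;
import org.apache.maven.plugins.annotations.Parameter;

/**
 * Hypothetical goal used only to illustrate where a breakpoint could go
 * while the build waits on port 5005 as described above.
 */
@Mojo(name = "greet")
public class GreetMojo extends AbstractMojo {

    // Configurable from the <configuration> section of the POM
    @Parameter(property = "greet.message", defaultValue = "Hello from the debugged build")
    private String message;

    @Override
    public void execute() throws MojoExecutionException {
        // Set a breakpoint on the next line, attach the remote debugger on port 5005,
        // and inspect 'message' while the build is suspended.
        getLog().info(message);
    }
}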