So I finally decided to try to learn Scala. Little did I know I was in for another round of IntelliJ integration hell. Let me rephrase that: IntelliJ with Gradle hell. I love Gradle. I love IntelliJ. However, the combination of the two is sometimes enough to drive me utterly crazy. Take, for example, the Scala integration. I made the simplest Gradle build possible that compiles a standard Hello World application:

apply plugin: 'scala'
apply plugin: 'idea'

repositories {
    mavenCentral()
    mavenLocal()
}

dependencies {
    compile 'org.slf4j:slf4j-api:1.7.5'
    compile "org.scala-lang:scala-library:2.10.4"
    compile "org.scala-lang:scala-compiler:2.10.4"
    testCompile "junit:junit:4.11"
}

task run(type: JavaExec, dependsOn: classes) {
    main = 'Main'
    classpath sourceSets.main.runtimeClasspath
    classpath configurations.runtime
}

I immediately stumbled upon the first issue: the Scala Gradle plugin is incompatible with Java 8. Not a big problem, but it meant changing my Java environment for this build, which is a nuisance. Once that was fixed, the Gradle build succeeded and Hello World was printed out. I opened up IntelliJ and made sure the Scala plugin was installed. Then I imported the project using the Gradle build file. Everything looked okay: IntelliJ recognized the Scala source folder and provided the correct editor for the Scala source file. Then I tried to run the Main class. This resulted in a NoClassDefFoundError; IntelliJ didn't want to compile my source classes. So I started digging. Apparently, the project was lacking a Scala facet. I expected IntelliJ to add this automatically once it saw I was using the Scala plugin, but it didn't. So I tried adding the facet manually, and there I got stuck. See, the facet requires you to state which Scala compiler library you want to use. Luckily IntelliJ correctly added the jars to the classpath, so I was able to choose the correct jar. This, however, did not fix the issue, as IntelliJ now complained it could not locate the Scala runtime library (scala-library*.jar), even though that library was included in the build. If you chose the runtime library as the Scala library instead, it would complain it cannot find the compiler library. And this is where I am now: deadlocked. There is an issue for this in the IntelliJ bug tracker, but it's been eerily quiet at JetBrains on it. As it is, it's impossible to use IntelliJ with Gradle and Scala unless you're willing to execute every bit of code, including unit tests, with Gradle instead of the IDE (which in effect defeats the purpose of an IDE). And I'll die before adopting yet another build framework (SBT) that is supposed to work. Honestly, I really don't know whether I want to learn Scala anymore. The mere fact that you can't compile Scala in the most popular IDE of the moment when using the most popular build tool of the moment is something I cannot comprehend. Forcing me to adopt a Scala-specific build tool is unacceptable to me. If I were Typesafe, I'd put an engineer on this and fix it, as that would seriously aid in promoting the language. If it were easy to adopt Scala in an existing build cycle, it would pop up on more radars than it does right now. But it's not just Scala and IntelliJ: most newer JVM languages struggle with IntelliJ. This is a real pity, as it either forces me to change my IDE (e.g. Ceylon has its own IDE based on Eclipse) or not consider the language at all.
As it is, the currently viable options with IntelliJ are Java and Groovy (and Kotlin, but it's nowhere near production-ready quality). Wouldn't it be nice to need only one IDE for all development? I couldn't care less if it cost $500, I just want things to work. I'd love to be able to write my AngularJS front-end that consumes my Scala/Java hybrid backend reading data from a MongoDB that's fed data from my Arduino sensors (for which I've written and uploaded the sketch from that same IDE).
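For reference, the Main class that the Gradle run task above points at is nothing more exotic than a standard hello world; a minimal sketch (the object name simply has to match the main = 'Main' setting in the build file):

object Main {
  def main(args: Array[String]): Unit = {
    // Printed by `gradle run`, and ideally by the IDE's run configuration too.
    println("Hello, World!")
  }
}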
Performance tuning SELECT statements can be a time-consuming task which, in my opinion, follows the Pareto principle: 20% of the effort is likely to give you an 80% performance improvement, and to get another 20% improvement you probably need to spend 80% of the time. Unless you work on the planet Venus, where each day equals 243 Earth days, delivery deadlines are likely to mean you will not have enough time to put into tuning your SQL queries. After years of writing and running SQL statements I began to develop a mental checklist of things I look at when trying to improve query performance. These are the things I check before moving on to query plans and reading the sometimes complicated documentation of the database I am working on. My checklist is by no means comprehensive or scientific; it is more of a back-of-the-envelope calculation, but I can say that most of the time I do get performance improvements by following these simple steps. The checklist follows.

Check Indexes
There should be indexes on all fields used in the WHERE and JOIN portions of the SQL statement. Take the 3-Minute SQL performance test. Regardless of your score, be sure to read through the answers as they are informative.

Limit the Size of Your Working Data Set
Examine the tables used in the SELECT statement to see if you can apply filters in the WHERE clause of your statement. A classic example is a query that initially worked well when there were only a few thousand rows in the table; as the application grew, the query slowed down. The solution may be as simple as restricting the query to looking at the current month's data. When you have queries that contain sub-selects, look to apply filtering to the inner statement of the sub-selects as opposed to the outer statements.

Only Select Fields You Need
Extra fields often increase the grain of the data returned and thus result in more (detailed) data being returned to the SQL client. Additionally: when using reporting and analytical applications, slow report performance is sometimes caused by the reporting tool having to aggregate data that it receives in detailed form. Occasionally the query may run quickly enough, but your problem could be network related as large amounts of detailed data are sent to the reporting server across the network. When using a column-oriented DBMS, only the columns you have selected will be read from disk; the fewer columns you include in your query, the less I/O overhead.

Remove Unnecessary Tables
The reasons for removing unnecessary tables are the same as the reasons for removing fields not needed in the SELECT statement. Writing SQL statements is a process that usually takes a number of iterations as you write and test your SQL. During development it is possible that you add tables to the query that have no impact on the data returned by the SQL code. Once the SQL is correct, I find many people do not review their script and remove tables that have no impact on, or use in, the final data returned. By removing the JOINs to these unnecessary tables you reduce the amount of processing the database has to do. Sometimes, much like removing columns, you may find you also reduce the amount of data brought back by the database.

Remove OUTER JOINs
This can be easier said than done and depends on how much influence you have in changing table content. One solution is to remove OUTER JOINs by placing placeholder rows in both tables.
Say you have the following tables with an OUTER JOIN defined to ensure all data is returned:

customer_id  customer_name
1            John Doe
2            Mary Jane
3            Peter Pan
4            Joe Soap

customer_id  sales_person
NULL         Newbee Smith
2            Oldie Jones
1            Another Oldie
NULL         Greenhorn

The solution is to add a placeholder row in the customer table and update all NULL values in the sales table to the placeholder key:

customer_id  customer_name
0            NO CUSTOMER
1            John Doe
2            Mary Jane
3            Peter Pan
4            Joe Soap

customer_id  sales_person
0            Newbee Smith
2            Oldie Jones
1            Another Oldie
0            Greenhorn

Not only have you removed the need for an OUTER JOIN, you have also standardised how sales people with no customers are represented. Other developers will not have to write statements such as ISNULL(customer_id, "No customer yet").

Remove Calculated Fields in JOIN and WHERE Clauses
This is another one of those that may at times be easier said than done, depending on your permissions to make changes to the schema. It can be done by adding a field to the table that holds the calculated value used in the join. Given the following SQL statement:

SELECT *
FROM sales a
JOIN budget b ON ((year(a.sale_date) * 100) + month(a.sale_date)) = b.budget_year_month

performance can be improved by adding a column with the year and month to the sales table (a sketch of that schema change appears after the conclusion below). The updated SQL statement would be as follows:

SELECT *
FROM sales a
JOIN budget b ON a.sale_year_month = b.budget_year_month

Conclusion
The recommendations boil down to a few short pointers: check for indexes, work with the smallest data set required, remove unnecessary fields and tables, and remove calculations in your JOIN and WHERE clauses. If all these recommendations fail to improve your SQL query performance, my last suggestion is that you move to Venus. All you will need is a single day to tune your SQL.
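As a footnote to the checklist, here is a hedged sketch of the calculated-column change described above. The table and column names (sales, sale_date, sale_year_month, budget) come from the example; the exact DDL syntax varies by database, so treat this as illustrative rather than definitive:

-- Add a column that pre-computes the year/month bucket used in the join.
ALTER TABLE sales ADD sale_year_month INT;

-- Backfill it once from the existing sale_date values.
UPDATE sales
SET sale_year_month = (YEAR(sale_date) * 100) + MONTH(sale_date);

-- Index the join columns on both sides so the optimizer can use them.
CREATE INDEX ix_sales_sale_year_month ON sales (sale_year_month);
CREATE INDEX ix_budget_budget_year_month ON budget (budget_year_month);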
Want to add Java 8 support to Kepler? Java 8 has not yet landed in our standard download packages, but you can add it to your existing Eclipse Kepler package. I've got three different Eclipse installations running Java 8: a brand new Kepler SR2 installation of the Eclipse IDE for Java Developers; a slightly used Kepler SR1 installation of the Eclipse for RCP/RAP Developers (with lots of other features already added); and a nightly build (dated March 24, 2014) of the Eclipse 4.4 SDK. The JDT team recommends that you start from Kepler SR2, the second and final service release for Kepler (but using the exact same steps, I've installed it into Kepler SR1 and SR2 packages). There are detailed instructions for adding Java 8 support by installing a feature patch in the Eclipsepedia wiki. The short version is this: from Kepler SR2, use the "Help > Install New Software…" menu option to open the "Available Software" dialog; enter http://download.eclipse.org/eclipse/updates/4.3-p-builds/ into the "Work with" field; put a checkbox next to "Eclipse Java 8 Support (for Kepler SR2)"; click "Next", click "Next", read and accept the license, and click "Finish"; watch the pretty progress bar move relatively quickly across the bottom of the window; and restart Eclipse when prompted. Voila! Support for Java 8 is installed. If you've already got the Java 8 JDK installed and the corresponding JRE is the default on your system, you're done. If you're not quite ready to make the leap to a Java 8 JRE, there's still hope (my system is still configured with Java 7 as the default). Install the Java 8 JDK; open the Eclipse preferences and navigate to "Java > Installed JREs"; click "Add…"; select "Standard VM" and click "Next"; enter the path to the Java 8 JRE (note that this varies depending on platform, and on how you obtain and install the bits); and click "Finish". Before closing the preferences window, you can set your workspace preference to use the newly installed Java 8 JRE. Or, if you're just planning to experiment with Java 8 for a while, you can configure this on a project-by-project basis: in the Create a Java Project dialog, specify that your project will use a JavaSE-1.8 JRE. It's probably better to do this on the project, as this becomes a project setting that will follow the project into your version control system. Next step… learn how wrong my initial impressions of Java 8 were (hint: it's far better). The lambda is so choice. If you have the means, I highly recommend picking one up.
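Once the feature patch and a Java 8 JRE are in place, a quick way to confirm the compiler really accepts Java 8 syntax is to drop a trivial lambda into a scratch project; a minimal sketch (class and project names are arbitrary):

import java.util.Arrays;
import java.util.List;

public class Java8SmokeTest {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("Kepler", "Luna", "Mars");
        // A method reference and a lambda: both require source level 1.8.
        names.stream()
             .map(String::toUpperCase)
             .forEach(name -> System.out.println(name));
    }
}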
Recently, I had to take some content in Markdown, specifically Markdown Extra, and convert it to a series of PDFs styled with a specific branding. While some will argue that PDFs are dead and "long live the web", many of us still need to produce PDFs for one reason or another. In this case I had to take Markdown Extra, with some HTML sprinkled in, clean it up, and convert it to a styled PDF. What follows is how I did that using QueryPath for the cleanup and DOMPDF to make the conversion.

The Setup
At the root of this little app was a PHP script with the dependencies managed through Composer. The composer.json file looked like:

{
    "name": "foo/bar",
    "description": "Convert markdown to PDF.",
    "type": "application",
    "require": {
        "php": ">=5.3.0",
        "michelf/php-markdown": "1.4.*",
        "dompdf/dompdf" : "0.6.*",
        "querypath/querypath": "3.*",
        "masterminds/html5": "1.*"
    }
}

Turning the Markdown into HTML
Within the script I started with a file we'll call $file. From here it was easy using the official Markdown Extra conversion utility:

$markdown = file_get_contents($file);
$markdownParser = new \Michelf\MarkdownExtra();
$html = $markdownParser->transform($markdown);

This produces the HTML needed to go inside the body of an HTML page. From here I wrapped it in a full document so I could easily link to a CSS file for styling purposes; DOMPDF supports quite a bit of CSS 2.1. The wrapper boiled down to something like:

$html = '<html><head><link rel="stylesheet" href="pdf.css"></head><body>' . $html . '</body></html>';

pdf.css is where you can style the PDF. If you know how to style web pages using CSS, you can manage to style a PDF document.

Cleaning Up the Content
There were a number of places where HTML had been injected into the Markdown that was either broken, unwanted in a PDF, or an edge case that DOMPDF didn't support. To make these changes I used QueryPath. For example, I needed to take relative links, normally used in generating a website, and add a domain name to them:

$dom = \HTML5::loadHTML($html);
$links = htmlqp($dom, 'a');
foreach ($links as $link) {
    $href = $link->attr('href');
    if (substr($href, 0, 1) == '/' && substr($href, 1, 1) != '/') {
        $link->attr('href', $domain_name . $href);
    }
}
$html = \HTML5::saveHTML($dom);

Note, I used the HTML5 parser and writer rather than the built-in one designed for XHTML and HTML 4. This is because DOMPDF attempts to work with HTML5 and I wanted to keep that consistent from the beginning.

Converting to PDF
There is a little setup before using DOMPDF. It has a built-in autoloader which should be disabled, and it needs a config file. In my case I used the default config file and handled this with:

define('DOMPDF_ENABLE_AUTOLOAD', false);
require_once __DIR__ . '/vendor/dompdf/dompdf/dompdf_config.inc.php';

The conversion was fairly straightforward. I used a snippet like:

$dompdf = new DOMPDF();
$dompdf->load_html($html);
$dompdf->render();
$output = $dompdf->output();
file_put_contents('path/to/file.pdf', $output);

DOMPDF has a lot of options and some quirks. It wasn't exactly designed for Composer. For example, if you want to work with custom fonts you need to get the project from git and install submodules. Despite the quirks, the need to clean up some of the HTML, and the branding of the documents, I was able to write a conversion script that handled dozens of documents quickly. Almost all of my time went into HTML cleanup and CSS styling.
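Putting the pieces together, the whole thing can be driven by one short script. The sketch below only reuses the calls shown above; the source directory, $domain_name value, and output paths are assumptions standing in for whatever a real project needs:

<?php
require_once __DIR__ . '/vendor/autoload.php';

define('DOMPDF_ENABLE_AUTOLOAD', false);
require_once __DIR__ . '/vendor/dompdf/dompdf/dompdf_config.inc.php';

$domain_name = 'http://example.com';          // assumed value
$files = glob(__DIR__ . '/docs/*.md');        // assumed source location

foreach ($files as $file) {
    // Markdown Extra -> HTML fragment.
    $markdownParser = new \Michelf\MarkdownExtra();
    $html = $markdownParser->transform(file_get_contents($file));

    // Wrap in a document that links the print stylesheet.
    $html = '<html><head><link rel="stylesheet" href="pdf.css"></head><body>' . $html . '</body></html>';

    // Clean up relative links with QueryPath, as shown above.
    $dom = \HTML5::loadHTML($html);
    foreach (htmlqp($dom, 'a') as $link) {
        $href = $link->attr('href');
        if (substr($href, 0, 1) == '/' && substr($href, 1, 1) != '/') {
            $link->attr('href', $domain_name . $href);
        }
    }
    $html = \HTML5::saveHTML($dom);

    // Render the PDF next to the source file (assumed output layout).
    $dompdf = new DOMPDF();
    $dompdf->load_html($html);
    $dompdf->render();
    file_put_contents(__DIR__ . '/pdf/' . basename($file, '.md') . '.pdf', $dompdf->output());
}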
In SQL Server Management Studio, use View > Registered Servers (Ctrl+Alt+G) to set up the servers you want to run the same query against, then right-click the group and select New Query. When you execute the query, the results come back with an extra first column showing which database instance each row came from.
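For example, a harmless query to run across the whole group to check versions and basic inventory might be the following (entirely illustrative):

-- Runs once per registered server; SSMS prefixes each result row with the server name.
SELECT
    @@SERVERNAME AS server_name,
    SERVERPROPERTY('ProductVersion') AS product_version,
    (SELECT COUNT(*) FROM sys.databases) AS database_count;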
A Mule application which uses the JBoss TX transaction manager needs a persistent object store to hold the objects and states of the transactions being processed (further information about the different object stores can be found on the following page). By default Mule uses the ShadowNoFileLockStore, which uses the file system to store the objects. As one can guess, if an application does not have permission to write the object store to the file system, the JBoss transaction manager will not be able to work properly and will throw an exception similar to the following:

com.arjuna.ats.arjuna: ARJUNA12218: cant create new instance of {0} java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
...
Caused by: com.arjuna.ats.arjuna.exceptions.ObjectStoreException: ARJUNA12225: FileSystemStore::setupStore - cannot access root of object store: //ObjectStore/ShadowNoFileLockStore/defaultStore/
at com.arjuna.ats.internal.arjuna.objectstore.FileSystemStore.(FileSystemStore.java:482)
at com.arjuna.ats.internal.arjuna.objectstore.ShadowingStore.(ShadowingStore.java:619)
at com.arjuna.ats.internal.arjuna.objectstore.ShadowNoFileLockStore.(ShadowNoFileLockStore.java:53)
... 36 more

Since the object store is not created, the XA transaction manager is not initialised properly. This will throw a 'Could not initialize class' exception whenever the transaction manager is invoked.

org.mule.exception.DefaultSystemExceptionStrategy: Caught exception in Exception Strategy: errorCode: 0
javax.resource.spi.work.WorkCompletedException: errorCode: 0
at org.mule.work.WorkerContext.run(WorkerContext.java:335)
at java.util.concurrent.ThreadPoolExecutor$CallerRunsPolicy.rejectedExecution(ThreadPoolExecutor.java:2025)
...
Caused by: java.lang.NoClassDefFoundError: Could not initialize class com.arjuna.ats.arjuna.coordinator.TxControl
at com.arjuna.ats.internal.jta.transaction.arjunacore.BaseTransaction.begin(BaseTransaction.java:87)
at org.mule.transaction.XaTransaction.doBegin(XaTransaction.java:63)
at org.mule.transaction.AbstractTransaction.begin(AbstractTransaction.java:66)
at org.mule.transaction.XaTransactionFactory.beginTransaction(XaTransactionFactory.java:32)
at org.mule.execution.BeginAndResolveTransactionInterceptor.execute(BeginAndResolveTransactionInterceptor.java:51)
at org.mule.execution.ResolvePreviousTransactionInterceptor.execute(ResolvePreviousTransactionInterceptor.java:48)
at org.mule.execution.SuspendXaTransactionInterceptor.execute(SuspendXaTransactionInterceptor.java:54)
at org.mule.execution.ValidateTransactionalStateInterceptor.execute(ValidateTransactionalStateInterceptor.java:44)
at org.mule.execution.IsolateCurrentTransactionInterceptor.execute(IsolateCurrentTransactionInterceptor.java:44)
at org.mule.execution.ExternalTransactionInterceptor.execute(ExternalTransactionInterceptor.java:52)
at org.mule.execution.RethrowExceptionInterceptor.execute(RethrowExceptionInterceptor.java:32)
at org.mule.execution.RethrowExceptionInterceptor.execute(RethrowExceptionInterceptor.java:17)
at org.mule.execution.TransactionalErrorHandlingExecutionTemplate.execute(TransactionalErrorHandlingExecutionTemplate.java:113)
at org.mule.execution.TransactionalErrorHandlingExecutionTemplate.execute(TransactionalErrorHandlingExecutionTemplate.java:34)
at org.mule.transport.jms.XaTransactedJmsMessageReceiver.poll(XaTransactedJmsMessageReceiver.java:214)
at org.mule.transport.AbstractPollingMessageReceiver.performPoll(AbstractPollingMessageReceiver.java:219)
at org.mule.transport.PollingReceiverWorker.poll(PollingReceiverWorker.java:84)
at org.mule.transport.PollingReceiverWorker.run(PollingReceiverWorker.java:53)
at org.mule.work.WorkerContext.run(WorkerContext.java:311)
... 15 more

Mule computes the default directory where the object store is written as follows:

muleInternalDir = config.getWorkingDirectory();

(see the code for further analysis). If Mule is started from a directory where the user does not have write permissions, you will run into the problems mentioned above. The easiest way to fix this issue is to make sure that the user running Mule has full write permission to the working directory. If that cannot be achieved, fear not, there is a solution. On first analysis, one would be tempted to set the object store directory through a Spring property in the Mule configuration. Unfortunately this will not work, since the JBoss transaction manager is a singleton and this property is used in the constructor of the object. Hence a behaviour similar to the following will be experienced:

Caused by: com.arjuna.ats.arjuna.exceptions.ObjectStoreException: ARJUNA12225: FileSystemStore::setupStore - cannot access root of object store: PutObjectStoreDirHere/ShadowNoFileLockStore/defaultStore/

(Please note that "PutObjectStoreDirHere" is the default directory assigned by the JBoss TX transaction manager.) The way around this issue is to make sure these properties are set before the object is initialised. There are at least two ways to achieve this:

1. Set the properties on start-up as follows:

./mule -M-Dcom.arjuna.ats.arjuna.objectstore.objectStoreDir=/path/to/objectstoreDir -M-DObjectStoreEnvironmentBean.objectStoreDir=/path/to/objectstoreDir

2.
Set the properties in the wrapper.config as follows:

wrapper.java.additional.x=-Dcom.arjuna.ats.arjuna.objectstore.objectStoreDir=/path/to/objectstoreDir
wrapper.java.additional.x=-DObjectStoreEnvironmentBean.objectStoreDir=/path/to/objectstoreDir

(x is the next number available in the wrapper.config; by default this is 4.) Otherwise, take the easiest route and make sure that Mule can write to the start-up directory.
Postgres and Oracle compatibility with Hibernate

There are situations where your JEE application needs to support both Postgres and Oracle as a database. Hibernate should do the job here; however, there are some specifics worth mentioning. While enabling Postgres for an application already running on Oracle, I came across the following tricky parts: BLOB support, CLOB support, Oracle not knowing the Boolean type (it uses Integer instead), and the DUAL table. These were the tricks I had to apply to make the @Entity classes run on both of these. Please note I've used Postgres 9.3 with Hibernate 4.2.1.SP1.

BLOBs support
The problem with Postgres is that it offers two types of BLOB storage:
bytea - data stored in the table
oid - the table holds just an identifier to data stored elsewhere
I guess in most situations you can live with bytea, as I did. The other one, as far as I've read, is meant for huge data (in gigabytes), as it supports streams for IO operations. It sounds nice that such support exists, however using it from Hibernate can make things quite problematic (due to the need for specific annotations), especially if you try to achieve compatibility with Oracle. To see the trouble here, see StackOverflow: proper hibernate annotation for byte[]. All the combinations are described there:

annotation            postgres    oracle      works on
-------------------------------------------------------
byte[] + @Lob         oid         blob        oracle
byte[]                bytea       raw(255)    postgresql
byte[] + @Type(PBA)   oid         blob        oracle
byte[] + @Type(BT)    bytea       blob        postgresql

where @Type(PBA) stands for @Type(type="org.hibernate.type.PrimitiveByteArrayBlobType") and @Type(BT) stands for @Type(type="org.hibernate.type.BinaryType"). These result in all sorts of Postgres errors, like:

ERROR: column "foo" is of type oid but expression is of type bytea

or

ERROR: column "foo" is of type bytea but expression is of type oid

Well, there seems to be a solution, but it involves patching the Hibernate library (something I see as the last option when playing with a third-party library). There is also a reference to an official blog post from the Hibernate guys on the topic: PostgreSQL and BLOBs. Still, the solution described in that blog post did not work for me and, based on the comments, seems to be invalid for other people as well.

BLOBs solved
OK, so now the optimistic part. After quite some debugging I ended up with an entity definition like this:

@Lob
private byte[] foo;

Oracle has no trouble with that; moreover, I had to customize the Postgres dialect this way:

public class PostgreSQLDialectCustom extends PostgreSQL82Dialect {

    @Override
    public SqlTypeDescriptor remapSqlTypeDescriptor(SqlTypeDescriptor sqlTypeDescriptor) {
        if (sqlTypeDescriptor.getSqlType() == java.sql.Types.BLOB) {
            return BinaryTypeDescriptor.INSTANCE;
        }
        return super.remapSqlTypeDescriptor(sqlTypeDescriptor);
    }
}

That's it! Quite simple, right? This works for persisting to bytea-typed columns in Postgres (which fits my use case).

CLOBs support
The errors from my misconfiguration looked something like this:

org.postgresql.util.PSQLException: Bad value for type long : ...

So first I found (in String LOBs on PostgreSQL with Hibernate 3.6) the following solution:

@Lob
@Type(type = "org.hibernate.type.TextType")
private String foo;

Well, that works, but for Postgres only.
Then there was a suggestion (on StackOverflow: Postgres UTF-8 clobs with JDBC) to go for:

@Lob
@Type(type="org.hibernate.type.StringClobType")
private String foo;

That pointed me in the right direction (the funny part was that it was just a comment on one of the answers). It was quite close, but it didn't work for me in all cases and still resulted in errors in my tests.

CLOBs solved
The important hint was the @deprecated javadoc on org.hibernate.type.StringClobType, which brought me to the working one:

@Lob
@Type(type="org.hibernate.type.MaterializedClobType")
private String foo;

That works for both Postgres and Oracle, without any further hacking (on the Hibernate side) needed.

Boolean type
Oracle knows no Boolean type, and the trouble is that Postgres does. As there was also some plain SQL present, I ended up in Postgres with the error:

ERROR: column "foo" is of type boolean but expression is of type integer

I decided to enable the cast from Integer to Boolean in Postgres rather than fixing all the plain SQL places (in the way found in Forum: Automatically Casting From Integer to Boolean):

update pg_cast set castcontext = 'i' where oid in (
    select c.oid
    from pg_cast c
    inner join pg_type src on src.oid = c.castsource
    inner join pg_type tgt on tgt.oid = c.casttarget
    where src.typname like 'int%' and tgt.typname like 'bool%');

Please note you should run this SQL update as a user with privileges to update catalogs (probably not the postgres user used for the DB connection from your application), as I learned on StackOverflow: Postgres - permission denied on updating pg_catalog.pg_cast.

DUAL table
There is one more Oracle specific I came across. If you have plain SQL, Oracle provides a DUAL table (see more info on Wikipedia) that might harm you in Postgres. Still, the solution is simple: in Postgres, create a view that fills a similar purpose. It can be created like this:

create or replace view dual as select 1;

Conclusion
Well, that should be it. Enjoy your cross-DB compatible JEE apps.
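One more wiring detail worth spelling out: the customized dialect above still has to be handed to Hibernate. A minimal sketch of how that could look in persistence.xml, where the persistence unit name and the com.example package are placeholders and only the hibernate.dialect property name is standard Hibernate configuration:

<persistence-unit name="app">
    <properties>
        <!-- Use the customized dialect when running against Postgres;
             keep the stock Oracle dialect for Oracle environments. -->
        <property name="hibernate.dialect" value="com.example.PostgreSQLDialectCustom"/>
    </properties>
</persistence-unit>

Switching this one property per environment (for example via a build profile) keeps the same @Entity classes portable across both databases.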
Sometimes we want to use Hibernate native SQL in our code. For example, we might need to invoke a selectable stored procedure that we cannot invoke in another way. To run a native SQL query we use the method createSQLQuery(), which is available from the Hibernate session object. In our Grails code we must then first get access to the current Hibernate session. Luckily we only have to inject the sessionFactory bean in our Grails service or controller. To get the current session we invoke the getCurrentSession() method, and we are ready to execute a native SQL query. The query itself is defined as a String value and we can use placeholders for variables, just like with other Hibernate queries. In the following sample we create a new Grails service and use a Hibernate native SQL query to execute a selectable stored procedure with the name organisation_breadcrumbs. This stored procedure takes one argument, startId, and returns a list of results with an id, name and level column.

// File: grails-app/services/com/mrhaki/grails/OrganisationService.groovy
package com.mrhaki.grails

import com.mrhaki.grails.Organisation

class OrganisationService {

    // Auto inject SessionFactory we can use
    // to get the current Hibernate session.
    def sessionFactory

    List breadcrumbs(final Long startOrganisationId) {
        // Get the current Hibernate session.
        final session = sessionFactory.currentSession

        // Query string with :startId as parameter placeholder.
        final String query = 'select id, name, level from organisation_breadcrumbs(:startId) order by level desc'

        // Create native SQL query.
        final sqlQuery = session.createSQLQuery(query)

        // Use Groovy with() method to invoke multiple methods
        // on the sqlQuery object.
        final results = sqlQuery.with {
            // Set domain class as entity.
            // Properties in domain class id, name, level will
            // be automatically filled.
            addEntity(Organisation)

            // Set value for parameter startId.
            setLong('startId', startOrganisationId)

            // Get all results.
            list()
        }

        results
    }
}

In the sample code we use the addEntity() method to map the query results to the domain class Organisation. To transform the results from a query into other objects we can use the setResultTransformer() method. Hibernate (and therefore Grails, if we use the Hibernate plugin) already has a set of transformers we can use. For example, with the org.hibernate.transform.AliasToEntityMapResultTransformer each result row is transformed into a Map where the column aliases are the keys of the map.

// File: grails-app/services/com/mrhaki/grails/OrganisationService.groovy
package com.mrhaki.grails

import org.hibernate.transform.AliasToEntityMapResultTransformer

class OrganisationService {

    def sessionFactory

    List<Map> breadcrumbs(final Long startOrganisationId) {
        final session = sessionFactory.currentSession

        final String query = 'select id, name, level from organisation_breadcrumbs(:startId) order by level desc'

        final sqlQuery = session.createSQLQuery(query)

        final results = sqlQuery.with {
            // Assign result transformer.
            // This transformer will map columns to keys in a map for each row.
            resultTransformer = AliasToEntityMapResultTransformer.INSTANCE

            setLong('startId', startOrganisationId)

            list()
        }

        results
    }
}

Finally, we can execute a native SQL query and handle the raw results ourselves using the Groovy Collection API enhancements. The result of the list() method is a List of Object[] objects.
In the following sample we use Groovy syntax to handle the results:

// File: grails-app/services/com/mrhaki/grails/OrganisationService.groovy
package com.mrhaki.grails

class OrganisationService {

    def sessionFactory

    List<Map> breadcrumbs(final Long startOrganisationId) {
        final session = sessionFactory.currentSession

        final String query = 'select id, name, level from organisation_breadcrumbs(:startId) order by level desc'

        final sqlQuery = session.createSQLQuery(query)

        final queryResults = sqlQuery.with {
            setLong('startId', startOrganisationId)
            list()
        }

        // Transform resulting rows to a map with key organisationName.
        final results = queryResults.collect { resultRow ->
            [organisationName: resultRow[1]]
        }

        // Or to only get a list of names.
        //final List names = queryResults.collect { it[1] }

        results
    }
}

Code written with Grails 2.3.7.
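As a usage sketch, the service could be called from a controller and the rows rendered as JSON. The controller below is an assumption for illustration; only the breadcrumbs() method comes from the service above:

// File: grails-app/controllers/com/mrhaki/grails/OrganisationController.groovy
package com.mrhaki.grails

import grails.converters.JSON

class OrganisationController {

    // Grails injects the service bean by name.
    def organisationService

    def breadcrumbs(Long id) {
        // Delegate to the native SQL query and render the rows as JSON.
        final rows = organisationService.breadcrumbs(id)
        render rows as JSON
    }
}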
As businesses accelerate their move toward making B2E applications available to employees on mobile devices, the subject of mobile application security is getting more attention. Mobile Device Management (MDM) solutions are being deployed in the largest enterprises, but there are still application-level security issues that are important to consider. Furthermore, medium-size businesses are moving to mobilize their applications prior to having a formalized MDM solution or policy in place. A key element of a mobile app strategy is whether to go Native, Hybrid, or pure HTML5. As an early proponent of HTML5 platforms, Gizmox has been thinking about the security angle of HTML5 applications for a long time. In a recent webinar, we discussed 4 ways that HTML5 - done right - can be more secure than native apps.

1. Applications should leverage HTML5's basic security model
HTML5 represents a revolutionary step for HTML-based browsers as the first truly cross-platform technology for rich, interactive applications. It has earned endorsements by all the major IT vendors (e.g. Google, Microsoft, IBM, Oracle, etc.). Security of applications and websites has been a consideration from the start of HTML5 development. The first element of the security model is that HTML5 applications live within the secure shell of the browser sandbox. Application code is to a large degree insulated from the device. The browser's interaction with the device and any other application on the device is highly limited. This makes it difficult for HTML5 application code to influence other applications or data on the device, or for other applications to interact with the application running in the browser. The second element is that, built correctly, HTML5 thin clients are "secure by design." Application logic running on the server insulates sensitive intellectual property from the client. Proper design strategies include minimal or no data caching; keeping tokens, passwords, credentials, and security profiles on the server; and minimizing logic on the client, focusing on pure UI interaction with the server. Finally, HTML5 apps should be architected to ensure that no data is left behind in cache.

2. HTML5 apps can be containerized within secure browsers
Secure browsers are just one element of MDM that can be deployed on their own to enhance application security. HTML5 application security can be extended with the use of secure browsers that restrict access to enterprise-approved URLs, prevent cross-site scripting, and integrate with company VPNs. Furthermore, secure browsers further harden the interaction between HTML5 applications and the device, the device OS and other applications on the device.

3. Integration with Mobile Device Management
MDM solutions play a variety of security roles including application inventory management (i.e. who gets access to what on which device), application distribution (i.e. through an enterprise app store), implementation of security standards (e.g. passwords, encryption, VPN, authentication, etc.), and implementation of enterprise access control policies. While MDM was in part conceived to enable secure distribution and control of native applications, HTML5 apps can be managed and further secured as well. While full MDM solutions are not required for HTML5 security, HTML5 apps can be integrated into a broader mobile security strategy that incorporates MDM.

4.
HTML5 was conceived for the BYOD world
The complexity of managing security for native apps gets multiplied as application variants are created for different mobile device form factors and operating systems. With cross-platform HTML5 applications that run on any desktop, tablet, or smartphone, security strategy is implemented and controlled centrally. Updates and security fixes are implemented on the server, and there are no concerns about users not applying updates to the apps on their devices. There are many reasons to evaluate HTML5 as the platform for mobile business applications. The security of HTML5 apps (built with good practices and leveraging a full platform like Visual WebGui) is a particularly compelling reason to consider it. Check out the slide share from our recent webinar on HTML5 security strategies: Security strategies for HTML5 enterprise mobile apps, from Gizmox.
Digital signing is a widely used mechanism for making digital content authentic. By producing a digital signature for some content, we enable another party to validate that content: the validation provides a guarantee that the content has not been altered after we signed it. In this sample I am going to share how to generate a signature for a SOAP envelope, but of course it is valid for signing any other content as well. Here I will sign the SOAP envelope itself and an attachment, and place the signature inside the SOAP header. Since the signature is placed inside the SOAP header of the envelope it covers, this becomes a demonstration of an enveloped signature. I am using the Apache Santuario library for signing. Following is the code segment I used; I have shared the complete sample here to be downloaded.

public static void main(String unused[]) throws Exception {
    String keystoreType = "JKS";
    String keystoreFile = "src/main/resources/PushpalankaKeystore.jks";
    String keystorePass = "pushpalanka";
    String privateKeyAlias = "pushpalanka";
    String privateKeyPass = "pushpalanka";
    String certificateAlias = "pushpalanka";
    File signatureFile = new File("src/main/resources/signature.xml");
    Element element = null;
    String BaseURI = signatureFile.toURI().toURL().toString();
    //SOAP envelope to be signed
    File attachmentFile = new File("src/main/resources/sample.xml");

    //get the private key used to sign, from the keystore
    KeyStore ks = KeyStore.getInstance(keystoreType);
    FileInputStream fis = new FileInputStream(keystoreFile);
    ks.load(fis, keystorePass.toCharArray());
    PrivateKey privateKey = (PrivateKey) ks.getKey(privateKeyAlias, privateKeyPass.toCharArray());

    //create basic structure of signature (namespace awareness is required)
    DocumentBuilderFactory dbFactory = DocumentBuilderFactory.newInstance();
    dbFactory.setNamespaceAware(true);
    DocumentBuilder dBuilder = dbFactory.newDocumentBuilder();
    Document doc = dBuilder.parse(attachmentFile);
    XMLSignature sig = new XMLSignature(doc, BaseURI, XMLSignature.ALGO_ID_SIGNATURE_RSA_SHA1);

    //optional, but better
    element = doc.getDocumentElement();
    element.normalize();
    element.getElementsByTagName("soap:Header").item(0).appendChild(sig.getElement());

    {
        Transforms transforms = new Transforms(doc);
        transforms.addTransform(Transforms.TRANSFORM_C14N_OMIT_COMMENTS);
        //Sign the content of SOAP Envelope
        sig.addDocument("", transforms, Constants.ALGO_ID_DIGEST_SHA1);
        //Adding the attachment to be signed
        sig.addDocument("../resources/attachment.xml", transforms, Constants.ALGO_ID_DIGEST_SHA1);
    }

    //Signing procedure
    {
        X509Certificate cert = (X509Certificate) ks.getCertificate(certificateAlias);
        sig.addKeyInfo(cert);
        sig.addKeyInfo(cert.getPublicKey());
        sig.sign(privateKey);
    }

    //write signature to file
    FileOutputStream f = new FileOutputStream(signatureFile);
    XMLUtils.outputDOMc14nWithComments(doc, f);
    f.close();
}

First it reads in the private key which is to be used in signing; to create a key pair of your own, this post will be helpful. Then it creates the signature and adds the SOAP message and the attachment as the documents to be signed. Finally it performs the signing and writes the signed document to a file. The structure of the resulting signed SOAP message is as follows.
The envelope still carries the original message content (the FUN PARTY sender, the uri:www.pjxml.org/socialService/Ping service, message id 59c64t0087fg3kfs000003n9 and the 2013-10-22T17:12:20 timestamp), and its SOAP header now contains the Signature element: a SignedInfo listing one Reference for the envelope and one for the attachment with their digest values (9RXY9kp/Klx36gd4BULvST4qffI= and 3JcccO8+0bCUUR3EJxGJKJ+Wrbc=), the Base64-encoded SignatureValue, and a KeyInfo carrying the X.509 certificate and the RSA public key. In a later post let's see how to verify this signature, so that we can guarantee signed documents have not been changed. Cheers!
We all know what XML is, right? Just in case not, no problem: the whole idea is that a value, say the number 5, gets wrapped in named tags. Now, what the computer really needs is the number five and some context around it, and in XML you (human and computer) can see how the markup gives context to the five. Now let's say instead you have a business XML document, such as an FPML trade confirmation, where element names wrap every one of a handful of values like 32.00, 150000, 1.00, EUR, 405000, 2001-07-17Z, NONE, 2.70, ISDA2002, ISDA2002Equity, GBEN, Party A and Party B. That is a lot of extra, unnecessary data points. Now let's look at this using Apache Avro. With Avro, the context and the values are separated. This means the schema/structure of what the information is does not get stored or streamed over and over and over and over (and over) again. The Avro schema is hashed, so the data structure only holds the values, and the computer understands the fingerprint (the hash) of the schema and can retrieve the schema using that fingerprint:

0x d7a8fbb307d7809469ca9abcb0082e4f8d5651e46d3cdb762d02d0bf37c9e592

This type of implementation is pretty typical in the data space. When you do this you can reduce your data by 20%-80%. When I tell folks this they immediately ask why such a large range of unknowns. The answer is that not every XML document is created the same. But that is exactly the problem: you are duplicating the information the computer needs to understand the data. XML is nice for humans to read, sure … but it is not optimized for the computer. Here is a converter we are working on, https://github.com/stealthly/xml-avro, to help get folks off of XML and onto lower-cost, open source systems. This lets you keep parts of your systems (specifically the domain business code) using XML without having to change them (risk mitigation), while storing and streaming the data with less overhead (optimizing budget).
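As an illustration of "context and values separated", a hypothetical Avro schema for the age example might look like the record below; only the schema (and its fingerprint) is shared once, while each datum on the wire is essentially just the encoded value 5:

{
  "type": "record",
  "name": "Person",
  "namespace": "com.example",
  "fields": [
    {"name": "age", "type": "int", "doc": "What the XML spelled out with tags wrapped around the number 5"}
  ]
}

A consumer that knows the schema (looked up by its fingerprint) can decode the binary value back into the named field without the tags ever travelling with the data.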
Here is a short snippet of an Ansible playbook that installs R and any required packages on the nodes of a cluster:

- name: Making sure R is installed
  apt: pkg=r-base state=installed

- name: adding a few R packages
  command: /usr/bin/Rscript --slave --no-save --no-restore-history -e "if (! ('{{item}}' %in% installed.packages()[,'Package'])) install.packages(pkgs='{{item}}', repos=c('http://www.freestatistics.org/cran/'))"
  with_items:
    - rjson
    - rPython
    - plyr
    - psych
    - reshape2

You should replace the repos value with one chosen from the list of CRAN mirrors. Note that the command above installs each package only if it is not already present, but it messes up the "changed" status in Ansible's PLAY RECAP by incorrectly reporting a change per R package on every run. Find more big data technical posts on my blog.
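To apply them, the tasks just need to live inside a play targeting your cluster hosts; a hedged sketch, where the group name, privilege-escalation choice and file names are assumptions:

- hosts: cluster          # assumed inventory group for the nodes
  become: yes             # package installation needs root (older Ansible used "sudo: yes")
  tasks:
    # the two tasks shown above go here

Then run it with something like: ansible-playbook -i hosts r-packages.yml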
A while back, I wrote an article showing how to Live Migrate Your VMs in One Line of PowerShell between non-clustered Windows Server 2012 Hyper-V hosts using Shared Nothing Live Migration. Since then, I've been asked a few times how this type of parallel Live Migration would be performed for highly available virtual machines between Hyper-V hosts within a cluster. In this article, we'll walk through the steps of doing exactly that … via Windows PowerShell on Windows Server 2012 or 2012 R2, or our FREE Hyper-V Server 2012 R2 bare-metal, enterprise-grade hypervisor, in a clustered configuration.

Wait! Do I need PowerShell to Live Migrate multiple VMs within a cluster? Well, actually … No. You could certainly use the Failover Cluster Manager GUI tool to select multiple highly available virtual machines, right-click and select Move | Live Migration. But you may wish to script this process for other reasons … perhaps to efficiently drain all VMs from a host as part of a maintenance script that will be performing other tasks.

Can I use the same PowerShell cmdlets for Live Migrating within a cluster? Well, actually … No again. When VMs are made highly available resources within a cluster, they're managed as cluster group resources instead of standalone VM resources. As a result, we have a different set of cluster-aware PowerShell cmdlets that we use when managing these cluster groups. To perform a scripted multi-VM Live Migration, we'll be leveraging three of these cmdlets: Get-ClusterNode, Get-ClusterGroup and Move-ClusterVirtualMachineRole.

Now, let's see that one line of PowerShell! Before actually performing the multi-VM Live Migration in a single PowerShell command line, we first need to set up a few variables to handle the "what" and "where" of moving these VMs. First, let's specify the name of the cluster with which we'll be working. We'll store it in a $clusterName variable.

$clusterName = read-host -Prompt "Cluster name"

Next, we'll need to select the cluster node to which we'll be Live Migrating the VMs. Let's use the Get-ClusterNode and Out-GridView cmdlets together to prompt for the cluster node and store the value in a $targetClusterNode variable.

$targetClusterNode = Get-ClusterNode -Cluster $clusterName |
    Out-GridView -Title "Select Target Cluster Node" `
        -OutputMode Single

And then we'll need to create a list of all the VMs currently running in the cluster. We can use the Get-ClusterGroup cmdlet to retrieve this list. Below, we have an example where we are combining this cmdlet with a Where-Object cmdlet to return only the virtual machine cluster groups that are running on any node except the selected target cluster node. After all, it really doesn't make any sense to Live Migrate a VM to the same node on which it's currently running!

$haVMs = Get-ClusterGroup -Cluster $clusterName |
    Where-Object {($_.GroupType -eq "VirtualMachine") `
        -and ($_.OwnerNode -ne $targetClusterNode.Name)}

We've stored the resulting list of VMs in a $haVMs variable. Ready to Live Migrate! OK … now we have all of our variables defined for the cluster, the target cluster node and the list of VMs from which to choose.
Here's our single line of PowerShell to do the magic …

$haVMs | Out-GridView -Title "Select VMs to Move" -PassThru |
    Move-ClusterVirtualMachineRole -MigrationType Live `
        -Node $targetClusterNode.Name -Wait 0

Proceed with care: keep in mind that your target cluster node will need to have sufficient available resources to run the VMs that you select for Live Migration. Of course, it's best to test tasks like this in your lab environment first. Here's what is happening in this single PowerShell command line: we're passing the list of VMs stored in the $haVMs variable to the Out-GridView cmdlet. Out-GridView prompts for which VMs to Live Migrate and then passes the selected VMs down the PowerShell object pipeline to the Move-ClusterVirtualMachineRole cmdlet. This cmdlet initiates the Live Migration for each selected VM, and because it's using a -Wait 0 parameter, it initiates each Live Migration one after another without waiting for the prior task to finish. As a result, all of the selected VMs will Live Migrate in parallel, up to the maximum number of concurrent Live Migrations that you've configured on these cluster nodes. The VMs selected beyond this maximum will simply queue up and wait their turn. Unlike some competing hypervisors, Hyper-V doesn't impose an artificial hard-coded limit on how many VMs you can Live Migrate concurrently. Instead, it's up to you to set the maximum to a sensible value based on your hardware and network capacity. Do you have your own PowerShell automation ideas for Hyper-V? Feel free to share your ideas in the Comments section below. See you in the Clouds! - Keith
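And for the drain-a-host maintenance scenario mentioned earlier, a hedged variation of the same pipeline skips the interactive selection and simply moves every VM currently owned by a given node; $sourceNode and its value are assumptions for illustration:

# Live Migrate every clustered VM off the node being drained to the chosen target node.
$sourceNode = "HV-NODE-01"   # assumed name of the host being drained

Get-ClusterGroup -Cluster $clusterName |
    Where-Object { ($_.GroupType -eq "VirtualMachine") -and ($_.OwnerNode -eq $sourceNode) } |
    Move-ClusterVirtualMachineRole -MigrationType Live -Node $targetClusterNode.Name -Wait 0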
If you used earlier versions of Neo4j via its Java API with Java 6, you probably have code similar to the following to ensure write operations happen within a transaction:

public class StylesOfTx {
    public static void main( String[] args ) throws IOException {
        String path = "/tmp/tx-style-test";
        FileUtils.deleteRecursively(new File(path));

        GraphDatabaseService db = new GraphDatabaseFactory().newEmbeddedDatabase( path );

        Transaction tx = db.beginTx();
        try {
            db.createNode();
            tx.success();
        } finally {
            tx.close();
        }
    }
}

In Neo4j 2.0 Transaction started extending AutoCloseable, which means that you can use 'try with resources' and the 'close' method will be called automatically when the block finishes:

public class StylesOfTx {
    public static void main( String[] args ) throws IOException {
        String path = "/tmp/tx-style-test";
        FileUtils.deleteRecursively(new File(path));

        GraphDatabaseService db = new GraphDatabaseFactory().newEmbeddedDatabase( path );

        try ( Transaction tx = db.beginTx() ) {
            Node node = db.createNode();
            tx.success();
        }
    }
}

This works quite well, although it's still possible to have transactions hanging around in an application when people don't use this syntax – the old style is still permissible. In Venkat Subramaniam's Java 8 book he suggests an alternative approach where we use a lambda-based style:

public class StylesOfTx {
    public static void main( String[] args ) throws IOException {
        String path = "/tmp/tx-style-test";
        FileUtils.deleteRecursively(new File(path));

        GraphDatabaseService db = new GraphDatabaseFactory().newEmbeddedDatabase( path );

        Db.withinTransaction(db, neo4jDb -> {
            Node node = neo4jDb.createNode();
        });
    }

    static class Db {
        public static void withinTransaction(GraphDatabaseService db, Consumer<GraphDatabaseService> fn) {
            try ( Transaction tx = db.beginTx() ) {
                fn.accept(db);
                tx.success();
            }
        }
    }
}

The 'withinTransaction' function would actually go on GraphDatabaseService or similar rather than on that Db class, but it was easier to put it there for this example. A disadvantage of this style is that you don't have explicit control over the transaction for handling the failure case – it's assumed that if 'tx.success()' isn't called then the transaction failed and is rolled back. I'm not sure what percentage of use cases actually need such fine-grained control, though. Brian Hurt refers to this as the 'hole in the middle' pattern, and I imagine we'll start seeing more code of this ilk once Java 8 is released and becomes more widely used.
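If you do want explicit control over the failure case, one option (a sketch of my own, not an official Neo4j API) is a variant of the helper that marks the transaction as failed when the lambda throws:

static class Db {
    public static void withinTransaction(GraphDatabaseService db, Consumer<GraphDatabaseService> fn) {
        try ( Transaction tx = db.beginTx() ) {
            try {
                fn.accept(db);
                tx.success();
            } catch (RuntimeException e) {
                // Explicitly mark the transaction for rollback before rethrowing.
                tx.failure();
                throw e;
            }
        }
    }
}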
Sometimes it is useful to "backcast" a time series — that is, forecast in reverse time. Although there are no built-in R functions to do this, it is very easy to implement. Suppose x is our time series and we want to backcast for h periods. Here is some code that should work for most univariate time series. The example is non-seasonal, but the code will also work with seasonal data.

library(forecast)

x <- WWWusage
h <- 20
f <- frequency(x)

# Reverse time
revx <- ts(rev(x), frequency=f)

# Forecast
fc <- forecast(auto.arima(revx), h)
plot(fc)

# Reverse time again
fc$mean <- ts(rev(fc$mean), end=tsp(x)[1] - 1/f, frequency=f)
fc$upper <- fc$upper[h:1,]
fc$lower <- fc$lower[h:1,]
fc$x <- x

# Plot result
plot(fc, xlim=c(tsp(x)[1]-h/f, tsp(x)[2]))
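Wrapped up as a reusable helper, the same steps might look like the sketch below; the function name backcast is just a convenient label, not part of the forecast package:

library(forecast)

# Backcast a univariate time series x by h periods using the steps above.
backcast <- function(x, h) {
  f <- frequency(x)
  fc <- forecast(auto.arima(ts(rev(x), frequency = f)), h)
  # Put the results back onto the original (forward) time scale.
  fc$mean  <- ts(rev(fc$mean), end = tsp(x)[1] - 1/f, frequency = f)
  fc$upper <- fc$upper[h:1, ]
  fc$lower <- fc$lower[h:1, ]
  fc$x <- x
  fc
}

fc <- backcast(WWWusage, 20)
plot(fc, xlim = c(tsp(WWWusage)[1] - 20/frequency(WWWusage), tsp(WWWusage)[2]))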