For people in a hurry, get the code from GitHub. In continuation of my earlier blog on spring-test-mvc JUnit testing of the Spring Security layer with InMemoryDaoImpl, in this blog I will discuss how to achieve method-level access control. Please follow the steps in that blog to set up spring-test-mvc and run the test case below. mvn test -Dtest=com.example.springsecurity.web.controllers.SecurityControllerTest The JUnit test case looks as below, @RunWith(SpringJUnit4ClassRunner.class) @ContextConfiguration(loader = WebContextLoader.class, value = { "classpath:/META-INF/spring/services.xml", "classpath:/META-INF/spring/security.xml", "classpath:/META-INF/spring/mvc-config.xml" }) public class SecurityControllerTest { @Autowired CalendarService calendarService; @Test public void testMyEvents() throws Exception { Authentication auth = new UsernamePasswordAuthenticationToken("user1@example.com", "user1"); SecurityContext securityContext = SecurityContextHolder.getContext(); securityContext.setAuthentication(auth); calendarService.findForUser(0); SecurityContextHolder.clearContext(); } @Test(expected = AuthenticationCredentialsNotFoundException.class) public void testForbiddenEvents() throws Exception { calendarService.findForUser(0); } @Test(expected = AccessDeniedException.class) public void testWrongUserEvents() throws Exception { Authentication auth = new UsernamePasswordAuthenticationToken("user2@example.com", "user2"); SecurityContext securityContext = SecurityContextHolder.getContext(); securityContext.setAuthentication(auth); calendarService.findForUser(0); SecurityContextHolder.clearContext(); } } Notice that if the user has not logged in, or is trying to access another user's information, an exception is thrown. The access control on the interface looks as below, public interface CalendarService { @PreAuthorize("hasRole('ROLE_ADMIN') or principal.id == #userId") List findForUser(int userId); } Because @PreAuthorize is declared on the interface, any implementation of that interface gets the same access control. I hope this blog helps you.
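Note that for the @PreAuthorize expression above to be enforced, the security.xml referenced in @ContextConfiguration must enable global method security with pre/post annotations (in the Spring Security 3 XML namespace this is the global-method-security element with pre-post-annotations="enabled"). As a small refactoring sketch of my own, not from the original post, the SecurityContext handling used in the tests can be pulled into helpers inside SecurityControllerTest (the only extra import needed is org.junit.After):

// Sketch only: same classes as the tests above, factored into reusable methods.
private void loginAs(String username, String password) {
    Authentication auth = new UsernamePasswordAuthenticationToken(username, password);
    SecurityContextHolder.getContext().setAuthentication(auth);
}

@After
public void clearSecurityContext() {
    // guarantees no authentication leaks from one test into the next
    SecurityContextHolder.clearContext();
}

With these in place, testMyEvents() reduces to loginAs("user1@example.com", "user1") followed by calendarService.findForUser(0).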
Dependency inversion is the idea that interfaces should depend on abstractions not on specifics. According to Wikipedia, the principle states: A. High-level modules should not depend on low-level modules. Both should depend on abstractions. B. Abstractions should not depend upon details. Details should depend upon abstractions. Of course the second part of this principle is impossible if read literally. You can't have an abstraction until you know what details are to be covered, and so the abstraction and details are both co-dependent. If the covered details change sufficiently the abstraction will become either leaky or inadequate and so it is worth seeing these as intertwined to some extent. The focus on abstraction is helpful because it suggests that the interface contract should be designed in such a way that neither side really has to understand any internal details of the other in order to make things work. Both sides depend on well-encapsulated API's and neither side has to worry about what the other side is really doing. This is what is meant by details depending on abstractions rather than the other way around. This concept is quite applicable beyond object oriented programming because it covers a very basic aspect of API contract design, namely how well an API should encapsulate behavior. This principle is first formulated in its current form in the object oriented programming paradigm but is generally applicable elsewhere. SQL as an Abstraction Layer, or Why RDBMS are Still King There are plenty of reasons to dislike SQL, such as the fact that nulls are semantically ambiguous. As a basic disclaimer I am not holding SQL up to be a paragon of programming languages or even db interfaces, but I think it is important to discuss what SQL does right in this regard. SQL is generally understood to be a declarative language which approximates relational mathematics for database access purposes. With SQL, you specify what you want returned, not how to get it, and the planner determines the best way to get it. SQL is thus an interface language rather than a programming language per se. With SQL, you can worry about the logical structure, leaving the implementation details to the db engine. SQL queries are basically very high level specifications of operations, not detailed descriptions of how to do something efficiently. Even update and insert statements (which are by nature more imperative than select statements) leave the underlying implementation entirely to the database management system. I think that this, along with many concessions the language has made to real-world requirements (such as bags instead of sets and the addition of ordering to bags) largely account for the success of this language. SQL, in essence, encapsulates a database behind a mature mathematical, declarative model in the same way that JSON and REST do (in a much less comprehensive way) in many NoSQL db's. In essence SQL provides encapsulation, interface, and abstraction in a very full-featured way and this is why it has been so successful. SQL Abstraction as Imperfect One obvious problem with treating SQL as an abstraction layer in its own right is that one is frequently unable to write details in a way that is clearly separate from the interface. Often storage tables are hit directly, and therefore there is little separation between logical detail and logical interface, and so this can break down when database complexity reaches a certain size. 
Approaches to managing this problem include using stored procedures or user defined functions, and using views to encapsulate storage tables. Stored Procedures and User Defined Functions Done Wrong Of the above methods, stored procedures and functional interfaces have bad reputations frequently because of bad experiences that many people have with them. These include developers pushing too much logic into stored procedures, and the fact that defining functional interfaces in this way usually produces a very tight binding between database code and application code, often leading to maintainability problems. The first case is quite obvious, and includes the all-too-frequent case of trying to send emails directly from stored procedures (always a bad idea). This mistake leads to certain types of problems, including the fact that ACID-compliant operations may be mixed with non-ACID-compliant ones, leading to cases where a transaction can only be partially rolled back. Oops, we didn't actually record the order as shipped, but we told the customer it was..... MySQL users will also note this is an argument against mixing transactional and nontransactional backend table types in the same db..... However that problem is outside the scope of this post. Additionally, MySQL is not well suited for many applications against a single set of db relations. The second problem, though, is more insidious. The traditional way stored procedures and user defined functions are typically used, the application has to be deeply aware of the interface to the database, but the rollout for these aspects is different leading to the possibility or service interruptions, and a need to very carefully and closely time rollout of db changes with application changes. As more applications use the database, this becomes harder and the chance of something being overlooked becomes greater. For this reason the idea that all operations must go through a set of stored procedures is a decision fraught with hazard as the database and application environment evolves. Typically it is easier to manage backwards-compatibility in schemas than it is in functions and so a key question is how many opportunities you have to create new bugs when a new column is added. There are, of course, more hazards which I have dealt with before, but the point is that stored procedures are potentially harmful and a major part of the reason is that they usually form a fairly brittle contract with the application layer. In a traditional stored procedure, adding a column to be stored will require changing the number of variables in the stored procedure's argument list, the queries to access it, and each application's call to that stored procedure. In this way, they provide (in the absence of other help) at best a leaky abstraction layer around the database details. This is the sort of problem that dependency inversion helps to avoid. Stored Procedures and User Defined Functions Done Right Not all stored procedures are done wrong. In the LedgerSMB project we have at least partially solved the abstraction/brittleness issue by looking to web services for inspiration. Our approach provides an additional mapping layer and dynamic query generation around a stored procedure interface. By using a service locator pattern, and overloading the system tables in PostgreSQL as the service registry, we solve the problem of brittleness. Our approach of course is not perfect and it is not the only possibility. 
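To make the brittle contract described above concrete, here is a hedged illustration of my own (the save_customer procedure and the DAO are hypothetical, not LedgerSMB code): the caller has to mirror the procedure's argument list exactly, so adding a stored column means touching the procedure, this call, and every other application that calls it, in lockstep.

// Hypothetical example of a tightly bound stored procedure call via JDBC.
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.SQLException;

public class CustomerDao {
    public void save(Connection connection, String name, String email, String phone) throws SQLException {
        try (CallableStatement cs = connection.prepareCall("{ call save_customer(?, ?, ?) }")) {
            cs.setString(1, name);
            cs.setString(2, email);
            cs.setString(3, phone); // a new column forces a fourth "?" here and in every caller
            cs.execute();
        }
    }
}

A discoverable interface, such as the service locator approach the LedgerSMB project uses (described above), avoids exactly this kind of positional coupling.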
One shortcoming of our approach is that the invocation of the service locator is relatively spartan. We intend to allow more options there in the future. However, one thing I have noticed is that there are far fewer places where bugs can hide, and therefore development is faster and more robust. Additionally, a focus on clarity of code in stored procedures has eliminated a number of important performance bottlenecks, and it limits the number of places where a given change propagates to. Other Important Options in PostgreSQL Stored procedures are not the only abstraction mechanisms available from PostgreSQL. In addition to views, there are also other interesting ways of using functions to accomplish this without insisting that all access goes through stored procedures. In addition, these methods can be freely mixed to produce very powerful, intelligent database systems. Such options include custom types, written in C, along with custom operators, functions and the like. These would then be stored in columns, and SQL can be used to provide an abstraction layer around the types. In this way SQL becomes the abstraction and the C programs become the details. A future post will cover the use of ip4r in network management with PostgreSQL databases as an example of what can be done here. Additionally, things like triggers and notifications can be used to ensure that appropriate changes trigger other changes in the same transaction or, upon transaction commit, hand off control to other programs in subsequent transactions (allowing for independent processing and error control for things like sending emails). Recommendations Rather than specific recommendations, the overall point here is to look at the database itself as an application running in an application server (the RDBMS) and design it as an application with an appropriate API. There are many ways to do this, from writing components in C and using SQL as an abstraction mechanism to writing things in SQL and using stored procedures as a mechanism. One could even write code in SQL and still use SQL as an abstraction mechanism. The key point, however, is to be aware of the need for discoverable abstraction, a need which to date things like ORMs and stored procedures often fill very imperfectly. A well-designed database with appropriate abstraction in its interfaces should be able to be seen as an application in its own right, engineered as such, and capable of serving multiple client apps through a robust and discoverable API. As with all things, it starts by recognizing the problems and making solutions priorities from the design stage onward.
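As a concrete sketch of the notification hand-off described above (my own example, assuming the stock PostgreSQL JDBC driver and a hypothetical email_queue channel raised by a trigger), a separate worker can pick up NOTIFY events and send the emails in its own, later transaction:

// Minimal sketch: a worker that LISTENs on a channel a trigger NOTIFYs on.
// Connection details and the "email_queue" channel are assumptions for illustration.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import org.postgresql.PGConnection;
import org.postgresql.PGNotification;

public class EmailQueueListener {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection("jdbc:postgresql://localhost/appdb", "app", "secret");
        try (Statement st = conn.createStatement()) {
            st.execute("LISTEN email_queue");
        }
        PGConnection pgConn = conn.unwrap(PGConnection.class);
        while (true) {
            PGNotification[] notifications = pgConn.getNotifications();
            if (notifications != null) {
                for (PGNotification n : notifications) {
                    // the email is sent outside the transaction that queued it,
                    // so a failure here cannot roll back the original business change
                    System.out.println("would send email for payload: " + n.getParameter());
                }
            }
            Thread.sleep(500); // simple polling; a production worker would block on the socket
        }
    }
}

The point of the design is the one made above: the ACID part of the work commits first, and the non-transactional side effect runs afterwards with its own error handling.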
One of the most popular posts on my blog is the short tutorial about the JDBC Security Realm and form based Authentication on GlassFish with Primefaces. After I received some comments about it that it isn't any longer working with latest GlassFish 3.1.2.2 I thought it might be time to revisit it and present an updated version. Here we go: Preparation As in the original tutorial I am going to rely on some stuff. Make sure to have a recent NetBeans 7.3 beta2 (which includes GlassFish 3.1.2.2) and the MySQL Community Server (5.5.x) installed. You should have verified that everything is up an running and that you can start GlassFish and the MySQL Server also is started. Some Basics A GlassFish authentication realm, also called a security policy domain or security domain, is a scope over which the GlassFish Server defines and enforces a common security policy. GlassFish Server is preconfigured with the file, certificate, and administration realms. In addition, you can set up LDAP, JDBC, digest, Oracle Solaris, or custom realms. An application can specify which realm to use in its deployment descriptor. If you want to store the user credentials for your application in a database your first choice is the JDBC realm. Prepare the Database Fire up NetBeans and switch to the Services tab. Right click the "Databases" node and select "Register MySQL Server". Fill in the details of your installation and click "ok". Right click the new MySQL node and select "connect". Now you see all the already available databases. Right click again and select "Create Database". Enter "jdbcrealm" as the new database name. Remark: We're not going to do all that with a separate database user. This is something that is highly recommended but I am using the root user in this examle. If you have a user you can also grant full access to it here. Click "ok". You get automatically connected to the newly created database. Expand the bold node and right click on "Tables". Select "Execute Command" or enter the table details via the wizard. CREATE TABLE USERS ( `USERID` VARCHAR(255) NOT NULL, `PASSWORD` VARCHAR(255) NOT NULL, PRIMARY KEY (`USERID`) ); CREATE TABLE USERS_GROUPS ( `GROUPID` VARCHAR(20) NOT NULL, `USERID` VARCHAR(255) NOT NULL, PRIMARY KEY (`GROUPID`) ); That is all for now with the database. Move on to the next paragraph. Let GlassFish know about MySQL First thing to do is to get the latest and greatest MySQL Connector/J from the MySQL website which is 5.1.22 at the time of writing this. Extract the mysql-connector-java-5.1.22-bin.jar file and drop it into your domain folder (e.g. glassfish\domains\domain1\lib). Done. Now it is finally time to create a project. Basic Project Setup Start a new maven based web application project. Choose "New Project" > "Maven" > Web Application and hit next. Now enter a name (e.g. secureapp) and all the needed maven cordinates and hit next. Choose your configured GlassFish 3+ Server. Select Java EE 6 Web as your EE version and hit "Finish". Now we need to add some more configuration to our GlassFish domain.Right click on the newly created project and select "New > Other > GlassFish > JDBC Connection Pool". Enter a name for the new connection pool (e.g. SecurityConnectionPool) and underneath the checkbox "Extract from Existing Connection:" select your registered MySQL connection. Click next. review the connection pool properties and click finish. The newly created Server Resources folder now shows your sun-resources.xml file. 
Follow the steps and create a "New > Other > GlassFish > JDBC Resource" pointing the the created SecurityConnectionPool (e.g. jdbc/securityDatasource).You will find the configured things under "Other Sources / setup" in a file called glassfish-resources.xml. It gets deployed to your server together with your application. So you don't have to care about configuring everything with the GlassFish admin console.Additionally we still need Primefaces. Right click on your project, select "Properties" change to "Frameworks" category and add "JavaServer Faces". Switch to the Components tab and select "PrimeFaces". Finish by clicking "OK". You can validate if that worked by opening the pom.xml and checking for the Primefaces dependency. 3.4 should be there. Feel free to change the version to latest 3.4.2. Final GlassFish Configuration Now it is time to fire up GlassFish and do the realm configuration. In NetBeans switch to the "Services" tab again and right click on the "GlassFish 3+" node. Select "Start" and watch the Output window for a successful start. Right click again and select "View Domain Admin Console", which should open your default browser pointing you to http://localhost:4848/. Select "Configurations > server-config > Security > Realms" and click "New..." on top of the table. Enter a name (e.g. JDBCRealm) and select the com.sun.enterprise.security.auth.realm.jdbc.JDBCRealm from the drop down. Fill in the following values into the textfields: JAAS jdbcRealm JNDI jdbc/securityDatasource User Table users User Name Column username Password Column password Group Table groups Group Name Column groupname Leave all the other defaults/blanks and select "OK" in the upper right corner. You are presented with a fancy JavaScript warning window which tells you to _not_ leave the Digest Algorithm Field empty. I field a bug about it. It defaults to SHA-256. Which is different to GlassFish versions prior to 3.1 which used MD5 here. The older version of this tutorial didn't use a digest algorithm at all ("none"). This was meant to make things easier but isn't considered good practice at all. So, let's stick to SHA-256 even for development, please. Secure your application Done with configuring your environment. Now we have to actually secure the application. First part is to think about the resources to protect. Jump to your Web Pages folder and create two more folders. One named "admin" and another called "users". The idea behind this is, to have two separate folders which could be accessed by users belonging to the appropriate groups. Now we have to create some pages. Open the Web Pages/index.xhtml and replace everything between the h:body tags with the following: Select where you want to go: Now add a new index.xhtml to both users and admin folders. Make them do something like this: Hello Admin|User On to the login.xhtml. Create it with the following content in the root of your Web Pages folder. Username: Password: As you can see, whe have the basic Primefaces p:panel component which has a simple html form which points to the predefined action j_security_check. This is, where all the magic is happening. You also have to include two input fields for username and password with the predefined names j_username and j_password. Now we are going to create the loginerror.xhtml which is displayed, if the user did not enter the right credentials. (use the same DOCTYPE and header as seen in the above example). Sorry, you made an Error. 
Please try again: Login The only magic here is the href link of the Login anchor. We need to get the correct request context and this could be done by accessing the faces context. If a user without the appropriate rights tries to access a folder he is presented a 403 access denied error page. If you like to customize it, you need to add it and add the following lines to your web.xml: 403 /faces/403.xhtml That snippet defines, that all requests that are not authorized should go to the 403 page. If you have the web.xml open already, let's start securing your application. We need to add a security constraint for any protected resource. Security Constraints are least understood by web developers, even though they are critical for the security of Java EE Web applications. Specifying a combination of URL patterns, HTTP methods, roles and transport constraints can be daunting to a programmer or administrator. It is important to realize that any combination that was intended to be secure but was not specified via security constraints, will mean that the web container will allow those requests. Security Constraints consist of Web Resource Collections (URL patterns, HTTP methods), Authorization Constraint (role names) and User Data Constraints (whether the web request needs to be received over a protected transport such as TLS). Admin Pages Protected Admin Area /faces/admin/* GET POST HEAD PUT OPTIONS TRACE DELETE admin NONE All Access None Protected User Area /faces/users/* GET POST HEAD PUT OPTIONS TRACE DELETE NONE If the constraints are in place you have to define, how the container should challenge the user. A web container can authenticate a web client/user using either HTTP BASIC, HTTP DIGEST, HTTPS CLIENT or FORM based authentication schemes. In this case we are using FORM based authentication and define the JDBCRealm FORM JDBCRealm /faces/login.xhtml /faces/loginerror.xhtml The realm name has to be the name that you assigned the security realm before. Close the web.xml and open the sun-web.xml to do a mapping from the application role-names to the actual groups that are in the database. This abstraction feels weird, but it has some reasons. It was introduced to have the option of mapping application roles to different group names in enterprises. I have never seen this used extensively but the feature is there and you have to configure it. Other appservers do make the assumption that if no mapping is present, role names and group names do match. GlassFish doesn't think so. Therefore you have to put the following into the glassfish-web.xml. You can create it via a right click on your project's WEB-INF folder, selecting "New > Other > GlassFish > GlassFish Descriptor" admin admin hat was it _basically_ ... everything you need is in place. The only thing that is missing are the users in the database. It is still empty ...We need to add a test user: Adding a Test-User to the Database And again we start by right clicking on the jdbcrealm database on the "Services" tab in NetBeans. Select "Execute Command" and insert the following: INSERT INTO USERS VALUES ("admin", "8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918"); INSERT INTO USERS_GROUPS VALUES ("admin", "admin"); You can login with user: admin and password: admin and access the secured area. 
Sample code to generate the hash could look like this: try { MessageDigest md = MessageDigest.getInstance("SHA-256"); String text = "admin"; md.update(text.getBytes("UTF-8")); // Change this to "UTF-16" if needed byte[] digest = md.digest(); BigInteger bigInt = new BigInteger(1, digest); String output = bigInt.toString(16); System.out.println(output); } catch (NoSuchAlgorithmException | UnsupportedEncodingException ex) { Logger.getLogger(PasswordTest.class.getName()).log(Level.SEVERE, null, ex); } Have fun securing your apps and keep the questions coming! In case you need it, the complete source code is on https://github.com/myfear/JDBCRealmExample
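One caveat about the hashing snippet above, an observation of mine rather than something from the original post: BigInteger.toString(16) drops leading zeros, so for some inputs the resulting hex string is shorter than 64 characters and will not match a digest stored in full-length hex (the "admin" hash happens to be unaffected). A padded variant of the two lines that build the output could look like this:

// Hedged variant: keeps leading zeros so the hex digest is always 64 characters.
String output = String.format("%064x", new BigInteger(1, digest));

Everything else in the snippet stays the same.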
Almost all the implementations I see today are based on the OAuth 2.0 Bearer Token Profile. Of course, it's an RFC proposed standard today. The OAuth 2.0 Bearer Token profile brings a simplified scheme for authentication. This specification describes how to use bearer tokens in HTTP requests to access OAuth 2.0 protected resources. Any party in possession of a bearer token (a "bearer") can use it to get access to the associated resources (without demonstrating possession of a cryptographic key). To prevent misuse, bearer tokens need to be protected from disclosure in storage and in transport. Before digging into the OAuth 2.0 MAC profile, let's have a quick high-level overview of the OAuth 2.0 message flow. OAuth 2.0 has mainly three phases. 1. Requesting an Authorization Grant. 2. Exchanging the Authorization Grant for an Access Token. 3. Accessing the resources with the Access Token. Where does the token type come into action? The OAuth 2.0 core specification does not mandate any token type. At the same time, the token requester - the client - cannot decide which token type it needs. It's purely up to the Authorization Server to decide which token type is returned in the Access Token response. So, the token type comes into action in phase 2, when the Authorization Server returns the OAuth 2.0 Access Token. The access token type provides the client with the information required to successfully utilize the access token to make a protected resource request (along with type-specific attributes). The client must not use an access token if it does not understand the token type. Each access token type definition specifies the additional attributes (if any) sent to the client together with the "access_token" response parameter. It also defines the HTTP authentication method used to include the access token when making a protected resource request. For example, the following is what you get in the Access Token response irrespective of which grant type you use. HTTP/1.1 200 OK Content-Type: application/json;charset=UTF-8 Cache-Control: no-store Pragma: no-cache { "access_token":"mF_9.B5f-4.1JqM", "token_type":"Bearer", "expires_in":3600, "refresh_token":"tGzv3JOkF0XG5Qx2TlKWIA" } The above is for Bearer - the following is for MAC. HTTP/1.1 200 OK Content-Type: application/json Cache-Control: no-store { "access_token":"SlAV32hkKG", "token_type":"mac", "expires_in":3600, "refresh_token":"8xLOxBtZp8", "mac_key":"adijq39jdlaska9asud", "mac_algorithm":"hmac-sha-256" } Here you can see the MAC Access Token response has two additional attributes: mac_key and mac_algorithm. Let me rephrase this - "Each access token type definition specifies the additional attributes (if any) sent to the client together with the "access_token" response parameter". The MAC Token Profile defines the HTTP MAC access authentication scheme, providing a method for making authenticated HTTP requests with partial cryptographic verification of the request, covering the HTTP method, request URI, and host. In the above response, access_token is the MAC key identifier. Unlike Bearer, the MAC token profile never passes its top secret over the wire. The access_token, or the MAC key identifier, is a string identifying the MAC key used to calculate the request MAC. The string is usually opaque to the client. The server typically assigns a specific scope and lifetime to each set of MAC credentials. The identifier may denote a unique value used to retrieve the authorization information (e.g.
from a database), or self-contain the authorization information in a verifiable manner (i.e. a string consisting of some data and a signature). The mac_key is a shared symmetric secret used as the MAC algorithm key. The server will not reissue a previously issued MAC key and MAC key identifier combination. Now let's see what happens in phase 3. The following shows how the Authorization HTTP header looks when a Bearer Token is used. Authorization: Bearer mF_9.B5f-4.1JqM This adds very low overhead on the client side. It simply needs to pass the exact access_token it got from the Authorization Server in phase 2. Under the MAC token profile, this is how it looks. Authorization: MAC id="h480djs93hd8", ts="1336363200", nonce="dj83hs9s", mac="bhCQXTVyfj5cmA9uKkPFx1zeOXM=" This needs a bit more attention. id is the MAC key identifier, or the access_token from phase 2. ts is the request timestamp. The value is a positive integer set by the client for each request, equal to the number of seconds elapsed from a fixed point in time (e.g. January 1, 1970 00:00:00 GMT). nonce is a unique string generated by the client. The value is unique across all requests with the same timestamp and MAC key identifier combination. The client uses the MAC algorithm and the MAC key to calculate the request mac. This is how you derive the normalized string to generate the HMAC. The normalized request string is a consistent, reproducible concatenation of several of the HTTP request elements into a single string. By normalizing the request into a reproducible string, the client and server can both calculate the request MAC over the exact same value. The string is constructed by concatenating together, in order, the following HTTP request elements, each followed by a new line character (%x0A): 1. The timestamp value calculated for the request. 2. The nonce value generated for the request. 3. The HTTP request method in upper case. For example: "HEAD", "GET", "POST", etc. 4. The HTTP request-URI as defined by [RFC2616] section 5.1.2. 5. The hostname included in the HTTP request using the "Host" request header field in lower case. 6. The port as included in the HTTP request using the "Host" request header field. If the header field does not include a port, the default value for the scheme MUST be used (e.g. 80 for HTTP and 443 for HTTPS). 7. The value of the "ext" "Authorization" request header field attribute if one was included in the request (this is optional), otherwise, an empty string. Each element is followed by a new line character (%x0A), including the last element and even when an element value is an empty string. Whether you use Bearer or MAC, the end user or the resource owner is identified using the access_token. Authorization, throttling, monitoring or any other quality of service operations can be carried out against the access_token irrespective of which token profile you use.
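To make the construction above concrete, here is a minimal sketch (my own, not from the post) that builds the normalized request string for the sample Authorization header shown earlier and computes the request MAC with the issued hmac-sha-256 mac_key; the request URI and host are assumed values for illustration:

// Sketch: normalized request string + request MAC for the MAC token profile.
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class MacTokenExample {
    public static void main(String[] args) throws Exception {
        String normalized =
                "1336363200" + "\n" +            // 1. timestamp (ts)
                "dj83hs9s" + "\n" +              // 2. nonce
                "GET" + "\n" +                   // 3. HTTP method, upper case
                "/resource/1?b=1&a=2" + "\n" +   // 4. request URI (assumed)
                "example.com" + "\n" +           // 5. host, lower case (assumed)
                "80" + "\n" +                    // 6. port, default for the scheme
                "" + "\n";                       // 7. ext value, empty string here

        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec("adijq39jdlaska9asud".getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        byte[] raw = mac.doFinal(normalized.getBytes(StandardCharsets.UTF_8));
        System.out.println("mac=\"" + Base64.getEncoder().encodeToString(raw) + "\"");
    }
}

The server repeats exactly the same calculation with the stored mac_key and compares the result with the mac attribute of the Authorization header.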
In this post we will Utilize GitHub and/or BitBucket's static web page hosting capabilities to publish our project's Maven 3 Site Documentation. Each of the two SCM providers offer a slightly different solution to host static pages. The approach spelled out in this post would also be a viable solution to "backup" your site documentation in a supported SCM like Git or SVN. This solution does not directly cover site documentation deployment covered by the maven-site-plugin and the Wagon library (scp, WebDAV or FTP). There is one main project hosted on GitHub that I have posted with the full solution. The project URL is https://github.com/mike-ensor/clickconcepts-master-pom/. The POM has been pushed to Maven Central and will continue to be updated and maintained. com.clickconcepts.project master-site-pom 0.16 GitHub Pages GitHub hosts static pages by using a special branch "gh-pages" available to each GitHub project. This special branch can host any HTML and local resources like JavaScript, images and CSS. There is no server side development. To navigate to your static pages, the URL structure is as follows: http://.github.com/ An example of the project I am using in this blog post: http://mike-ensor.github.com/clickconcepts-master-pom/ where the first bold URL segment is a username and the second bold URL segment is the project. GitHub does allow you to create a base static hosted static site for your username by creating a repository with your username.github.com. The contents would be all of your HTML and associated static resources. This is not required to post documentation for your project, unlike the BitBucket solution. There is a GitHub Site plugin that publishes site documentation via GitHub's object API but this is outside the scope of this blog post because it does not provide a single solution for GitHub and BitBucket projects using Maven 3. BitBucket BitBucket provides a similar service to GitHub in that it hosts static HTML pages and their associated static resources. However, there is one large difference in how those pages are stored. Unlike GitHub, BitBucket requires you to create a new repository with a name fitting the convention. The files will be located on the master branch and each project will need to be a directory off of the root. mikeensor.bitbucket.org/ /some-project +index.html +... /css /img /some-other-project +index.html +... /css /img index.html .git .gitignore The naming convention is as follows: .bitbucket.org An example of a BitBucket static pages repository for me would be: http://mikeensor.bitbucket.org/. The structure does not require that you create an index.html page at the root of the project, but it would be advisable to avoid 404s. Generating Site Documentation Maven provides the ability to post documentation for your project by using the maven-site-plugin. This plugin is difficult to use due to the many configuration options that oftentimes are not well documented. There are many blog posts that can help you write your documentation including my post on maven site documentation. I did not mention how to use "xdoc", "apt" or other templating technologies to create documentation pages, but not to fear, I have provided this in my GitHub project. Putting it all Together The Maven SCM Publish plugin (http://maven.apache.org/plugins/maven-scm-publish-plugin/ publishes site documentation to a supported SCM. In our case, we are going to use Git through BitBucket or GitHub. 
Maven SCM Plugin does allow you to publish multi-module site documentation through the various properties, but the scope of this blog post is to cover single/mono module projects and the process is a bit painful. Take a moment to look at the POM file located in the clickconcepts-master-pom project. This master POM is rather comprehensive and the site documentation is only one portion of the project, but we will focus on the site documentation. There are a few things to point out here, first, the scm-publish plugin and the idiosyncronies when implementing the plugin. In order to create the site documentation, the "site" plugin must first be run. This is accomplished by running site:site. The plugin will generate the documentation into the "target/site" folder by default. The SCM Publish Plugin, by default, looks for the site documents to be in "target/staging" and is controlled by the content parameter. As you can see, there is a mismatch between folders. NOTE: My first approach was to run the site:stage command which is supposed to put the site documents into the "target/staging" folder. This is not entirely correct, the site plugin combines with the distributionManagement.site.url property to stage the documents, but there is very strange behavior and it is not documented well. In order to get the site plugin's site documents and the SCM Publish's location to match up, use the content property and set that to the location of the Site Plugin output (). If you are using GitHub, there is no modification to the siteOutputDirectory needed, however, if you are using BitBucket, you will need to modify the property to add in a directory layer into the site documentation generation (see above for differences between GitHub and BitBucket pages). The second property will tell the SCM Publish Plugin to look at the root "site" folder so that when the files are copied into the repository, the project folder will be the containing folder. The property will look like: ${project.build.directory}/site/ ${project.artifactId} ${project.build.directory} /site Next we will take a look at the custom properties defined in the master POM and used by the SCM Publish Plugin above. Each project will need to define several properties to use the Master POM that are used within the plugins during the site publishing. Fill in the variables with your own settings. BitBucket ... ... master scm:git:git@bitbucket.org:mikeensor/mikeensor.bitbucket.org.git ${project.build.directory}/site/${project.artifactId} ${project.build.directory}/site ${changelog.bitbucket.fileUri} ${changelog.revision.bitbucket.fileUri} ... ... GitHub ... ... gh-pages scm:git:git@github.com:mikeensor/clickconcepts-master-pom.git ${changelog.github.fileUri} ${changelog.revision.github.fileUri} ... ... NOTE: changelog parameters are required to use the Master POM and are not directly related to publishing site docs to GitHub or BitBucket How to Generate If you are using the Master POM (or have abstracted out the Site Plugin and the SCM Plugin) then to generate and publish the documentation is simple. mvn clean site:site scm-publish:publish-scm mvn clean site:site scm-publish:publish-scm -Dscmpublish.dryRun=true Gotchas In the SCM Publish Plugin documentation's "tips" they recommend creating a location to place the repository so that the repo is not cloned each time. There is a risk here in that if there is a git repository already in the folder, the plugin will overwrite the repository with the new site documentation. 
This was discovered by publishing two different projects and having my root repository wiped out by documentation from the second project. There are ways to mitigate this by adding in another folder layer, but make sure you test often! Another gotcha is to use -Dscmpublish.dryRun=true to test out the site documentation process without making the SCM commit and push. Project and Documentation URLs Here is a list of the fully working projects used to create this blog post: Master POM with Site and SCM Publish plugins – https://github.com/mike-ensor/clickconcepts-master-pom. Documentation URL: http://mike-ensor.github.com/clickconcepts-master-pom/ Child Project using Master POM – http://mikeensor.bitbucket.org/fest-expected-exception. Documentation URL: http://mikeensor.bitbucket.org/fest-expected-exception/
I am thrilled to announce the first version of my Spring Data JDBC repository project. The purpose of this open source library is to provide a generic, lightweight and easy to use DAO implementation for relational databases based on JdbcTemplate from the Spring framework, compatible with the Spring Data umbrella of projects. Design objectives Lightweight, fast and low-overhead. Only a handful of classes, no XML, annotations, reflection This is not a full-blown ORM. No relationship handling, lazy loading, dirty checking, caching CRUD implemented in seconds For small applications where JPA is an overkill Use when simplicity is needed or when future migration e.g. to JPA is considered Minimalistic support for database dialect differences (e.g. transparent paging of results) Features Each DAO provides built-in support for: Mapping to/from domain objects through RowMapper abstraction Generated and user-defined primary keys Extracting generated key Compound (multi-column) primary keys Immutable domain objects Paging (requesting subset of results) Sorting over several columns (database agnostic) Optional support for many-to-one relationships Supported databases (continuously tested): MySQL PostgreSQL H2 HSQLDB Derby ...and most likely most of the others Easily extendable to other database dialects via SqlGenerator class. Easy retrieval of records by ID API Compatible with the Spring Data PagingAndSortingRepository abstraction, all these methods are implemented for you: public interface PagingAndSortingRepository<T, ID extends Serializable> extends CrudRepository<T, ID> { T save(T entity); Iterable<T> save(Iterable<? extends T> entities); T findOne(ID id); boolean exists(ID id); Iterable<T> findAll(); long count(); void delete(ID id); void delete(T entity); void delete(Iterable<? extends T> entities); void deleteAll(); Iterable<T> findAll(Sort sort); Page<T> findAll(Pageable pageable); } Pageable and Sort parameters are also fully supported, which means you get paging and sorting by arbitrary properties for free. For example, say you have userRepository extending the PagingAndSortingRepository interface (implemented for you by the library) and you request the 5th page of the USERS table, 10 per page, after applying some sorting: Page<User> page = userRepository.findAll( new PageRequest( 5, 10, new Sort( new Order(DESC, "reputation"), new Order(ASC, "user_name") ) ) ); The Spring Data JDBC repository library will translate this call into (PostgreSQL syntax): SELECT * FROM USERS ORDER BY reputation DESC, user_name ASC LIMIT 10 OFFSET 50 ...or even (Derby syntax): SELECT * FROM ( SELECT ROW_NUMBER() OVER () AS ROW_NUM, t.* FROM ( SELECT * FROM USERS ORDER BY reputation DESC, user_name ASC ) AS t ) AS a WHERE ROW_NUM BETWEEN 51 AND 60 No matter which database you use, you'll get a Page<User> object in return (you still have to provide a RowMapper<User> yourself to translate from the ResultSet to the domain object). If you don't know the Spring Data project yet, Page is a wonderful abstraction, not only encapsulating a List<T>, but also providing metadata such as the total number of records, which page we are currently on, etc. Reasons to use You consider migration to JPA or even some NoSQL database in the future. Since your code will rely only on methods defined in PagingAndSortingRepository and CrudRepository from the Spring Data Commons umbrella project, you are free to switch from the JdbcRepository implementation (from this project) to: JpaRepository, MongoRepository, GemfireRepository or GraphRepository. They all implement the same common API.
Of course don't expect that switching from JDBC to JPA or MongoDB will be as simple as switching imported JAR dependencies - but at least you minimize the impact by using the same DAO API. You need a fast, simple JDBC wrapper library. JPA or even MyBatis is an overkill You want to have full control over generated SQL if needed You want to work with objects, but don't need lazy loading, relationship handling, multi-level caching, dirty checking... You need CRUD and not much more You want to be DRY You are already using Spring or maybe even JdbcTemplate, but still feel like there is too much manual work You have very few database tables Getting started For more examples and working code don't forget to examine the project tests. Prerequisites Maven coordinates: com.blogspot.nurkiewicz jdbcrepository 0.1 Unfortunately the project is not yet in the Maven central repository. For the time being you can install the library in your local repository by cloning it: $ git clone git://github.com/nurkiewicz/spring-data-jdbc-repository.git $ git checkout 0.1 $ mvn javadoc:jar source:jar install In order to start, your project must have a DataSource bean present and transaction management enabled. Here is a minimal MySQL configuration: @EnableTransactionManagement @Configuration public class MinimalConfig { @Bean public PlatformTransactionManager transactionManager() { return new DataSourceTransactionManager(dataSource()); } @Bean public DataSource dataSource() { MysqlConnectionPoolDataSource ds = new MysqlConnectionPoolDataSource(); ds.setUser("user"); ds.setPassword("secret"); ds.setDatabaseName("db_name"); return ds; } } Entity with auto-generated key Say you have the following database table with an auto-generated key (MySQL syntax): CREATE TABLE COMMENTS ( id INT AUTO_INCREMENT, user_name varchar(256), contents varchar(1000), created_time TIMESTAMP NOT NULL, PRIMARY KEY (id) ); First you need to create a domain object Comment mapping to that table (just like in any other ORM): public class Comment implements Persistable<Integer> { private Integer id; private String userName; private String contents; private Date createdTime; @Override public Integer getId() { return id; } @Override public boolean isNew() { return id == null; } //getters/setters/constructors/... } Apart from standard Java boilerplate you should notice implementing Persistable<Integer>, where Integer is the type of the primary key. Persistable is an interface coming from the Spring Data project and it's the only requirement we place on your domain object. Finally we are ready to create our CommentRepository DAO: @Repository public class CommentRepository extends JdbcRepository<Comment, Integer> { public CommentRepository() { super(ROW_MAPPER, ROW_UNMAPPER, "COMMENTS"); } public static final RowMapper<Comment> ROW_MAPPER = //see below private static final RowUnmapper<Comment>
ROW_UNMAPPER = //see below @Override protected Comment postCreate(Comment entity, Number generatedId) { entity.setId(generatedId.intValue()); return entity; } } First of all we use @Repository annotation to mark DAO bean. It enables persistence exception translation. Also such annotated beans are discovered by CLASSPATH scanning. As you can see we extend JdbcRepository
which is the central class of this library, providing implementations of all PagingAndSortingRepository methods. Its constructor has three required dependencies: RowMapper , RowUnmapper and table name. You may also provide ID column name, otherwise default "id" is used. If you ever used JdbcTemplate from Spring, you should be familiar with RowMapper interface. We need to somehow extract columns from ResultSet into an object. After all we don't want to work with raw JDBC results. It's quite straightforward: public static final RowMapper
<Comment> ROW_MAPPER = new RowMapper<Comment>() { @Override public Comment mapRow(ResultSet rs, int rowNum) throws SQLException { return new Comment( rs.getInt("id"), rs.getString("user_name"), rs.getString("contents"), rs.getTimestamp("created_time") ); } }; RowUnmapper comes from this library and it's essentially the opposite of RowMapper: it takes an object and turns it into a Map. This map is later used by the library to construct SQL CREATE / UPDATE queries: private static final RowUnmapper<Comment> ROW_UNMAPPER = new RowUnmapper<Comment>() { @Override public Map<String, Object> mapColumns(Comment comment) { Map<String, Object> mapping = new LinkedHashMap<String, Object>
(); mapping.put("id", comment.getId()); mapping.put("user_name", comment.getUserName()); mapping.put("contents", comment.getContents()); mapping.put("created_time", new java.sql.Timestamp(comment.getCreatedTime().getTime())); return mapping; } }; If you never update your database table (just reading some reference data inserted elsewhere) you may skip RowUnmapper parameter or use MissingRowUnmapper. Last piece of the puzzle is the postCreate() callback method which is called after an object was inserted. You can use it to retrieve generated primary key and update your domain object (or return new one if your domain objects are immutable). If you don't need it, just don't override postCreate() . Check out JdbcRepositoryGeneratedKeyTest for a working code based on this example. By now you might have a feeling that, compared to JPA or Hibernate, there is quite a lot of manual work. However various JPA implementations and other ORM frameworks are notoriously known for introducing significant overhead and manifesting some learning curve. This tiny library intentionally leaves some responsibilities to the user in order to avoid complex mappings, reflection, annotations... all the implicitness that is not always desired. This project is not intending to replace mature and stable ORM frameworks. Instead it tries to fill in a niche between raw JDBC and ORM where simplicity and low overhead are key features. Entity with manually assigned key In this example we'll see how entities with user-defined primary keys are handled. Let's start from database model: CREATE TABLE USERS ( user_name varchar(255), date_of_birth TIMESTAMP NOT NULL, enabled BIT(1) NOT NULL, PRIMARY KEY (user_name) ); ...and User domain model: public class User implements Persistable
<String> { private transient boolean persisted; private String userName; private Date dateOfBirth; private boolean enabled; @Override public String getId() { return userName; } @Override public boolean isNew() { return !persisted; } public User withPersisted(boolean persisted) { this.persisted = persisted; return this; } //getters/setters/constructors/... } Notice that a special persisted transient flag was added. The contract of CrudRepository.save() from the Spring Data project requires that an entity knows whether it was already saved or not (the isNew() method) - there are no separate create() and update() methods. Implementing isNew() is simple for auto-generated keys (see Comment above) but in this case we need an extra transient field. If you hate this workaround and you only insert data and never update, you'll get away with returning true all the time from isNew(). And finally our DAO, the UserRepository bean: @Repository public class UserRepository extends JdbcRepository<User, String>
{ public UserRepository() { super(ROW_MAPPER, ROW_UNMAPPER, "USERS", "user_name"); } public static final RowMapper<User> ROW_MAPPER = //... public static final RowUnmapper<User> ROW_UNMAPPER = //... @Override protected User postUpdate(User entity) { return entity.withPersisted(true); } @Override protected User postCreate(User entity, Number generatedId) { return entity.withPersisted(true); } } The "USERS" and "user_name" parameters designate the table name and the primary key column name. I'll leave out the details of the mapper and unmapper (see the source code). But please notice the postUpdate() and postCreate() methods. They ensure that once an object was persisted, the persisted flag is set so that subsequent calls to save() will update the existing entity rather than trying to reinsert it. Check out JdbcRepositoryManualKeyTest for working code based on this example. Compound primary key We also support compound primary keys (primary keys consisting of several columns). Take this table as an example: CREATE TABLE BOARDING_PASS ( flight_no VARCHAR(8) NOT NULL, seq_no INT NOT NULL, passenger VARCHAR(1000), seat CHAR(3), PRIMARY KEY (flight_no, seq_no) ); I would like you to notice the type of the primary key in Persistable<Object[]>: public class BoardingPass implements Persistable<Object[]>
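To round this off, here is a short usage sketch of my own (not from the original post): it assumes Spring injects the CommentRepository defined earlier and that Comment has a no-arg constructor plus the usual setters hinted at by the //getters/setters/constructors/... comment.

// Usage sketch for the repository defined above.
import java.util.Date;
import org.springframework.data.domain.Page;
import org.springframework.data.domain.PageRequest;
import org.springframework.data.domain.Sort;
import org.springframework.data.domain.Sort.Order;
import static org.springframework.data.domain.Sort.Direction.DESC;

public class CommentRepositoryUsage {

    private final CommentRepository commentRepository;

    public CommentRepositoryUsage(CommentRepository commentRepository) {
        this.commentRepository = commentRepository;
    }

    public void demo() {
        Comment comment = new Comment();
        comment.setUserName("john");
        comment.setContents("first!");
        comment.setCreatedTime(new Date());

        // save() issues an INSERT and postCreate() copies the generated key back
        Comment saved = commentRepository.save(comment);
        System.out.println("generated id: " + saved.getId());

        // paging and sorting are translated into dialect-specific SQL by the library
        Page<Comment> page = commentRepository.findAll(
                new PageRequest(0, 20, new Sort(new Order(DESC, "created_time"))));
        System.out.println(page.getTotalElements() + " comments total, page " + page.getNumber());
    }
}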
The term security has many meanings based on the context and perspective in which it is used. Security from the perspective of software/system development is the continuous process of maintaining confidentiality, integrity, and availability of a system, sub-system, and system data. This definition at a very high level can be restated as the following: Computer security is a continuous process dealing with confidentiality, integrity, and availability on multiple layers of a system. Key Aspects of Software Security Integrity Confidentiality Availability Integrity within a system is the concept of ensuring that only authorized users can manipulate information, and only through authorized methods and procedures. An example of this can be seen in a simple lead management application. If the business decided to allow each sales member to update only their own leads in the system, while sales managers can update all leads in the system, then an integrity violation would occur if a sales member attempted to update someone else's leads: the lead was not entered by that sales member, and the change would break the business rule that leads can only be updated by the originating sales member. Confidentiality within a system is the concept of preventing unauthorized access to specific information or tools. In a perfect world, the very existence of confidential information/tools would be unknown to all those who do not have access. When this concept is applied within the context of an application, only the authorized information/tools will be available. If we look at the sales lead management system again, leads can only be updated by originating sales members. Under this rule we can say that each sales lead is confidential between the system and the sales person who entered the lead into the system. The other sales team members would not need to know about the leads, let alone need to access them. Availability within a system is the concept of authorized users being able to access the system. A real world example can be seen again in the lead management system. If that system was hosted on a web server, then IP restrictions can be put in place to limit access to the system based on the requesting IP address. If, in this example, all of the sales members were accessing the system from the 192.168.1.23 IP address, then removing access from all other IPs would be needed to ensure that improper access to the system is prevented while approved users can access the system from an authorized location. In essence, if the requesting user is not coming from an authorized IP address, the system will appear unavailable to them. This is one way of controlling where a system is accessed from. Through the years several design principles have been identified as being beneficial when integrating security aspects into a system. These principles, in various combinations, allow for a system to achieve the previously defined aspects of security based on generic architectural models. Security Design Principles Least Privilege Fail-Safe Defaults Economy of Mechanism Complete Mediation Open Design Separation Privilege Least Common Mechanism Psychological Acceptability Defense in Depth Least Privilege Design Principle The Least Privilege design principle requires a minimalistic approach to granting user access rights to specific information and tools.
Additionally, access rights should be time-based, so as to limit resource access to the time needed to complete necessary tasks. Granting access beyond this scope allows for unnecessary access and the potential for data to be updated out of the approved context. Assigning access rights in this way limits system-damaging attacks from users, whether they are intentional or not. This principle attempts to limit data changes and prevents potential damage from occurring by accident or error by reducing the amount of potential interactions with a resource. Fail-Safe Defaults Design Principle The Fail-Safe Defaults design principle pertains to allowing access to resources based on granted access over access exclusion. This principle is a methodology for allowing resources to be accessed only if explicit access is granted to a user. By default users do not have access to any resources until access has been granted. This approach prevents unauthorized users from gaining access to a resource until access is given. Economy of Mechanism Design Principle The Economy of Mechanism design principle requires that systems should be designed as simple and small as possible. Design and implementation errors result in unauthorized access to resources that would not be noticed during normal use. Complete Mediation Design Principle The Complete Mediation design principle states that every access to every resource must be validated for authorization. Open Design Design Principle The Open Design design principle is the concept that the security of a system and its algorithms should not be dependent on the secrecy of its design or implementation. Separation Privilege Design Principle The Separation Privilege design principle requires that all approved resource access attempts be granted based on more than a single condition. For example, a user should be validated both for active status and for access to the specific resource. Least Common Mechanism Design Principle The Least Common Mechanism design principle declares that mechanisms used to access resources should not be shared. Psychological Acceptability Design Principle The Psychological Acceptability design principle refers to security mechanisms not making resources more difficult to access than if the security mechanisms were not present. Defense in Depth Design Principle The Defense in Depth design principle is the concept that layering resource access authorization verification in a system reduces the chance of a successful attack. This layered approach to resource authorization requires unauthorized users to circumvent each authorization attempt to gain access to a resource. When designing a system that must meet a security quality attribute, architects need to consider the scope of security needs and the minimum required security qualities. Not every system will need to use all of the basic security design principles; most will use one or more in combination, based on a company's and architect's threshold for system security, because the existence of security in an application adds an additional layer to the overall system and can affect performance. That is why a definition of minimum acceptable security is needed when a system is designed: this quality attribute needs to be factored in with the other system quality attributes so that the system in question adheres to all qualities based on their priorities. Resources: Barnum, Sean. Gegick, Michael. (2005). Least Privilege.
Retrieved on August 28, 2011 from https://buildsecurityin.us-cert.gov/bsi/articles/knowledge/principles/351-BSI.html Saltzer, Jerry. (2011). BASIC PRINCIPLES OF INFORMATION PROTECTION. Retrieved on August 28, 2011 from http://web.mit.edu/Saltzer/www/publications/protection/Basic.html Barnum, Sean. Gegick, Michael. (2005). Defense in Depth. Retrieved on August 28, 2011 from https://buildsecurityin.us-cert.gov/bsi/articles/knowledge/principles/347-BSI.html Bertino, Elisa. (2005). Design Principles for Security. Retrieved on August 28, 2011 from http://homes.cerias.purdue.edu/~bhargav/cs526/security-9.pdf
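Since the lead management example above is used for several of these principles, here is a tiny, self-contained sketch of my own (not from the article) showing two of them in code: Fail-Safe Defaults (no grant recorded means access is denied) and Complete Mediation (every update request goes through the same check).

// Illustration only: deny-by-default authorization for the lead example.
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class LeadAccessPolicy {

    private final Map<String, Set<String>> grants = new HashMap<String, Set<String>>();

    // Explicit grant: a sales member gets access to one of their own leads.
    public void grant(String userId, String leadId) {
        Set<String> leads = grants.get(userId);
        if (leads == null) {
            leads = new HashSet<String>();
            grants.put(userId, leads);
        }
        leads.add(leadId);
    }

    // Complete mediation: every update attempt is validated here, with no side doors.
    public boolean mayUpdate(String userId, String leadId) {
        Set<String> leads = grants.get(userId);
        // Fail-safe default: no entry means no access.
        return leads != null && leads.contains(leadId);
    }

    public static void main(String[] args) {
        LeadAccessPolicy policy = new LeadAccessPolicy();
        policy.grant("alice", "lead-42");
        System.out.println(policy.mayUpdate("alice", "lead-42")); // true
        System.out.println(policy.mayUpdate("bob", "lead-42"));   // false, denied by default
    }
}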
It is easy to do - a few button clicks (generally) - but the button location is damn unintuitive. So, this is what you have got to do Go to "Help" menu item. Click on "About ..." button (why on earth should I click that when I am trying to un-install a plugin. By the way, the menu item just above "About ..." is "Install New Software ...". Would it have been too much pain to have a "Manage plugins" and / or "Un-install plugins" right underneath it?) A form opens up. At the bottom of it there is button "Installation details". Click that. (Again, why on earth would anyone think "Installation details" would have anything to do with un-installing stuff. I would have expected only a static display of stuff that are already installed.) Another multi tabbed form opens up (Anyone keeping count of the number of windows opened already. This is the 3rd window by now, including the parent editor window) which shows all the installed plugins. If you select any of the installed plugins, a button to "uninstall" becomes available. Click that and you should be able to un-install and after a restart everything should be fine. My interest in software and IT has always been much more than a 9 to 5 job (and I am sure there is a huge population that it holds equally true for). I have always wanted software to be efficient and beautiful apart from doing it's job. However, it took an excellent session on usability (which I joined only with casual curiosity but left with renewed interest in the subject and admiration for David Travis who delivered the course) to get me to start looking at all software with an "user's" perspective. And I was surprised with what I found and how it changed my coding. I have been using Eclipse and STS for years now (nearing a decade now) and I absolutely love these software. However when you start looking at them as a "user" and not only as a developer, there are quite a few usability opportunities of improvement that meets the eye. This article - apart from helping folks looking to un-install plugins in Eclipse - is also intended at folks who design Eclipse - just a humble request to consider this also as a usability improvement.
Lately I encountered a configuration tweak I was not aware of. The problem: I had a single Java installation on a Linux machine from which I had to start two JVM instances - each using a different set of JCE providers. A reminder: the JVM loads its security configuration, including the JCE providers list, from a master security properties file within the JRE folder (JRE_HOME/lib/security/java.security); the location of that file is fixed in the JVM and cannot be modified. Going over the documentation (not too helpful, I must admit) and the code (more helpful - look for Security.java, for example here) revealed the secret. security.overridePropertiesFile It all starts within the default java.security file provided with the JVM. Looking at it we will find the following (somewhere around the middle of the file) # # Determines whether this properties file can be appended to # or overridden on the command line via -Djava.security.properties # security.overridePropertiesFile=true If overridePropertiesFile is not equal to true we can stop here - the rest of this article is irrelevant (unless we have the option to change it, but I didn't have that). Luckily for me, by default it does equal true. java.security.properties The next step, the interesting one, is to override or append configuration to the default java.security file per JVM execution. This is done by setting the 'java.security.properties' system property to point to a properties file as part of the JVM invocation; it is important to notice that referencing the file can be done in one of two flavors: Overriding the entire file provided by the JVM - if the first character of the java.security.properties value is the equals sign, the default configuration file will be entirely ignored and only the values in the file we are pointing to will be effective. Appending to and overriding values of the default file - any other first character in the property's value (that is, the first character of the alternate configuration file path) means that the alternate file will be loaded and appended to the default one. If the alternate file contains properties which are already in the default configuration file, the alternate file will override those properties. Here are two examples # # Completely override the default java.security file content # (notice the *two* equal signs) # java -Djava.security.properties==/etc/sysconfig/jvm1.java.security # # Append or override parts of the default java.security file # (notice the *single* equal sign) # java -Djava.security.properties=/etc/sysconfig/jvm.java.security Be Careful As important as this configuration option is, we must not forget its security implications. We should always make sure that no one can tamper with the value of the property, and that no one who shouldn't be allowed to can tamper with the content of the alternate file.
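One simple way to verify which security configuration a given JVM instance actually picked up is to print the registered providers at startup. Here is a minimal sketch; the file paths reused from the examples above are just placeholders, and the class name is made up for this illustration:

import java.security.Provider;
import java.security.Security;

// Minimal sketch: run each JVM with its own alternate file, e.g.
//   java -Djava.security.properties==/etc/sysconfig/jvm1.java.security ListProviders
// (double '=' to fully override, single '=' to append/override) and compare the output.
public class ListProviders {
    public static void main(String[] args) {
        for (Provider provider : Security.getProviders()) {
            // Prints the providers in the order the JVM registered them.
            System.out.println(provider.getName() + " " + provider.getVersion());
        }
    }
}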
The following post will show how, in one of the projects that I took part in, we used Spring's AOP to introduce some security related functionality. The concept was that in order for the user to see some UI components he needed to have a certain level of security privileges. If that requirement was not met, the UIComponent was not presented. Let's take a look at the project structure and at the aopApplicationContext.xml that wires it together. Now let's go through the most interesting lines of Spring's application context. First we have all the required schemas - I don't think that this needs to be explained in more depth. Then we have the element which enables the @AspectJ support. Next we turn on Spring configuration via annotations. Then we deliberately exclude aspects from being initialized as beans by Spring itself. Why? Because... we want to create the aspect by ourselves and provide the factory-method="aspectOf". By doing so our aspect will be included in the autowiring process of our beans - thus all the fields annotated with the @Autowired annotation will get the beans injected. Now let's move on to the code: UserServiceImpl.java package pl.grzejszczak.marcin.aop.service; import org.springframework.stereotype.Service; import pl.grzejszczak.marcin.aop.type.Role; import pl.grzejszczak.marcin.aop.user.UserHolder; @Service public class UserServiceImpl implements UserService { private UserHolder userHolder; @Override public UserHolder getCurrentUser() { return userHolder; } @Override public void setCurrentUser(UserHolder userHolder) { this.userHolder = userHolder; } @Override public Role getUserRole() { if (userHolder == null) { return null; } return userHolder.getUserRole(); } } The class UserServiceImpl imitates a service that would get the current user information from the db or from the current application context. UserHolder.java package pl.grzejszczak.marcin.aop.user; import pl.grzejszczak.marcin.aop.type.Role; public class UserHolder { private Role userRole; public UserHolder(Role userRole) { this.userRole = userRole; } public Role getUserRole() { return userRole; } public void setUserRole(Role userRole) { this.userRole = userRole; } } This is a simple holder class that holds information about the current user's Role. Role.java package pl.grzejszczak.marcin.aop.type; public enum Role { ADMIN("ADM"), WRITER("WRT"), GUEST("GST"); private String name; private Role(String name) { this.name = name; } public static Role getRoleByName(String name) { for (Role role : Role.values()) { if (role.name.equals(name)) { return role; } } throw new IllegalArgumentException("No such role exists [" + name + "]"); } public String getName() { return this.name; } @Override public String toString() { return name; } } Role is an enum that defines a role for a person being an Admin, a Writer or a Guest. UIComponent.java package pl.grzejszczak.marcin.aop.ui; public abstract class UIComponent { protected String componentName; protected String getComponentName() { return componentName; } } An abstraction over concrete implementations of some UI components.
SomeComponentForAdminAndGuest.java package pl.grzejszczak.marcin.aop.ui; import pl.grzejszczak.marcin.aop.annotation.SecurityAnnotation; import pl.grzejszczak.marcin.aop.type.Role; @SecurityAnnotation(allowedRole = { Role.ADMIN, Role.GUEST }) public class SomeComponentForAdminAndGuest extends UIComponent { public SomeComponentForAdminAndGuest() { this.componentName = "SomeComponentForAdmin"; } public static UIComponent getComponent() { return new SomeComponentForAdminAndGuest(); } } This component is an example of a UI component extension that can be seen only by users who have the Admin or Guest role. SecurityAnnotation.java package pl.grzejszczak.marcin.aop.annotation; import java.lang.annotation.Retention; import java.lang.annotation.RetentionPolicy; import pl.grzejszczak.marcin.aop.type.Role; @Retention(RetentionPolicy.RUNTIME) public @interface SecurityAnnotation { Role[] allowedRole(); } An annotation that defines the roles that are allowed to have this component created. UIFactoryImpl.java package pl.grzejszczak.marcin.aop.ui; import org.apache.commons.lang.NullArgumentException; import org.springframework.stereotype.Component; @Component public class UIFactoryImpl implements UIFactory { @Override public UIComponent createComponent(Class componentClass) throws Exception { if (componentClass == null) { throw new NullArgumentException("Provide class for the component"); } return (UIComponent) Class.forName(componentClass.getName()).newInstance(); } } A factory class that, given the class of an object that extends UIComponent, returns a new instance of the given UIComponent. SecurityInterceptor.java package pl.grzejszczak.marcin.aop.interceptor; import java.lang.annotation.Annotation; import java.lang.reflect.AnnotatedElement; import java.util.Arrays; import java.util.List; import org.aspectj.lang.ProceedingJoinPoint; import org.aspectj.lang.annotation.Around; import org.aspectj.lang.annotation.Aspect; import org.aspectj.lang.annotation.Pointcut; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.springframework.beans.factory.annotation.Autowired; import pl.grzejszczak.marcin.aop.annotation.SecurityAnnotation; import pl.grzejszczak.marcin.aop.service.UserService; import pl.grzejszczak.marcin.aop.type.Role; import pl.grzejszczak.marcin.aop.ui.UIComponent; @Aspect public class SecurityInterceptor { private static final Logger LOGGER = LoggerFactory.getLogger(SecurityInterceptor.class); public SecurityInterceptor() { LOGGER.debug("Security Interceptor created"); } @Autowired private UserService userService; @Pointcut("execution(pl.grzejszczak.marcin.aop.ui.UIComponent pl.grzejszczak.marcin.aop.ui.UIFactory.createComponent(..))") private void getComponent(ProceedingJoinPoint thisJoinPoint) { } @Around("getComponent(thisJoinPoint)") public UIComponent checkSecurity(ProceedingJoinPoint thisJoinPoint) throws Throwable { LOGGER.info("Intercepting creation of a component"); Object[] arguments = thisJoinPoint.getArgs(); if (arguments.length == 0) { return null; } Annotation annotation = checkTheAnnotation(arguments); boolean securityAnnotationPresent = (annotation != null); if (securityAnnotationPresent) { boolean userHasRole = verifyRole(annotation); if (!userHasRole) { LOGGER.info("Current user doesn't have permission to have this component created"); return null; } } LOGGER.info("Current user has required permissions for creating a component"); return (UIComponent) thisJoinPoint.proceed(); } /** * Based on the method's argument, check if the class is annotated with * {@link
SecurityAnnotation} * * @param arguments * @return */ private Annotation checkTheAnnotation(Object[] arguments) { Object concreteClass = arguments[0]; LOGGER.info("Argument's class - [{}]", new Object[] { arguments }); AnnotatedElement annotatedElement = (AnnotatedElement) concreteClass; Annotation annotation = annotatedElement.getAnnotation(SecurityAnnotation.class); LOGGER.info("Annotation present - [{}]", new Object[] { annotation }); return annotation; } /** * The function verifies if the current user has sufficient privileges to * have the component built * * @param annotation * @return */ private boolean verifyRole(Annotation annotation) { LOGGER.info("Security annotation is present so checking if the user can use it"); SecurityAnnotation annotationRule = (SecurityAnnotation) annotation; List requiredRolesList = Arrays.asList(annotationRule.allowedRole()); Role userRole = userService.getUserRole(); return requiredRolesList.contains(userRole); } } This is the aspect defined at the pointcut of executing the createComponent function of the UIFactory interface. Inside the Around advice there is logic that first checks what kind of argument has been passed to the method createComponent (for example SomeComponentForAdminAndGuest.class). Next it checks whether this class is annotated with SecurityAnnotation and, if that is the case, what kind of Roles are required to have the component created. Afterwards it checks if the current user (obtained from the UserService and its UserHolder) has the required role to present the component. If that is the case, thisJoinPoint.proceed() is called, which in effect returns the object of the class that extends UIComponent. Now let's test it - here comes the SpringJUnit4ClassRunner based AopTest.java package pl.grzejszczak.marcin.aop; import org.junit.Assert; import org.junit.Test; import org.junit.runner.RunWith; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.test.context.ContextConfiguration; import org.springframework.test.context.junit4.SpringJUnit4ClassRunner; import pl.grzejszczak.marcin.aop.service.UserService; import pl.grzejszczak.marcin.aop.type.Role; import pl.grzejszczak.marcin.aop.ui.SomeComponentForAdmin; import pl.grzejszczak.marcin.aop.ui.SomeComponentForAdminAndGuest; import pl.grzejszczak.marcin.aop.ui.SomeComponentForGuest; import pl.grzejszczak.marcin.aop.ui.SomeComponentForWriter; import pl.grzejszczak.marcin.aop.ui.UIFactory; import pl.grzejszczak.marcin.aop.user.UserHolder; @RunWith(SpringJUnit4ClassRunner.class) @ContextConfiguration(locations = { "classpath:aopApplicationContext.xml" }) public class AopTest { @Autowired private UIFactory uiFactory; @Autowired private UserService userService; @Test public void adminTest() throws Exception { userService.setCurrentUser(new UserHolder(Role.ADMIN)); Assert.assertNotNull(uiFactory.createComponent(SomeComponentForAdmin.class)); Assert.assertNotNull(uiFactory.createComponent(SomeComponentForAdminAndGuest.class)); Assert.assertNull(uiFactory.createComponent(SomeComponentForGuest.class)); Assert.assertNull(uiFactory.createComponent(SomeComponentForWriter.class)); } } And the logs: pl.grzejszczak.marcin.aop.interceptor.SecurityInterceptor:26 Security Interceptor created pl.grzejszczak.marcin.aop.interceptor.SecurityInterceptor:38 Intercepting creation of a component pl.grzejszczak.marcin.aop.interceptor.SecurityInterceptor:48 Argument's class - [[class pl.grzejszczak.marcin.aop.ui.SomeComponentForAdmin]]
pl.grzejszczak.marcin.aop.interceptor.SecurityInterceptor:54 Annotation present - [@pl.grzejszczak.marcin.aop.annotation.SecurityAnnotation(allowedRole=[ADM])] pl.grzejszczak.marcin.aop.interceptor.SecurityInterceptor:57 Security annotation is present so checking if the user can use it pl.grzejszczak.marcin.aop.interceptor.SecurityInterceptor:70 Current user has required permissions for creating a component pl.grzejszczak.marcin.aop.interceptor.SecurityInterceptor:38 Intercepting creation of a component pl.grzejszczak.marcin.aop.interceptor.SecurityInterceptor:48 Argument's class - [[class pl.grzejszczak.marcin.aop.ui.SomeComponentForAdminAndGuest]] pl.grzejszczak.marcin.aop.interceptor.SecurityInterceptor:54 Annotation present - [@pl.grzejszczak.marcin.aop.annotation.SecurityAnnotation(allowedRole=[ADM, GST])] pl.grzejszczak.marcin.aop.interceptor.SecurityInterceptor:57 Security annotation is present so checking if the user can use it pl.grzejszczak.marcin.aop.interceptor.SecurityInterceptor:70 Current user has required permissions for creating a component pl.grzejszczak.marcin.aop.interceptor.SecurityInterceptor:38 Intercepting creation of a component pl.grzejszczak.marcin.aop.interceptor.SecurityInterceptor:48 Argument's class - [[class pl.grzejszczak.marcin.aop.ui.SomeComponentForGuest]] pl.grzejszczak.marcin.aop.interceptor.SecurityInterceptor:54 Annotation present - [@pl.grzejszczak.marcin.aop.annotation.SecurityAnnotation(allowedRole=[GST])] pl.grzejszczak.marcin.aop.interceptor.SecurityInterceptor:57 Security annotation is present so checking if the user can use it pl.grzejszczak.marcin.aop.interceptor.SecurityInterceptor:66 Current user doesn't have permission to have this component created pl.grzejszczak.marcin.aop.interceptor.SecurityInterceptor:38 Intercepting creation of a component pl.grzejszczak.marcin.aop.interceptor.SecurityInterceptor:48 Argument's class - [[class pl.grzejszczak.marcin.aop.ui.SomeComponentForWriter]] pl.grzejszczak.marcin.aop.interceptor.SecurityInterceptor:54 Annotation present - [@pl.grzejszczak.marcin.aop.annotation.SecurityAnnotation(allowedRole=[WRT])] pl.grzejszczak.marcin.aop.interceptor.SecurityInterceptor:57 Security annotation is present so checking if the user can use it pl.grzejszczak.marcin.aop.interceptor.SecurityInterceptor:66 Current user doesn't have permission to have this component created The unit test shows that for the given Admin role only the first two components get created, whereas for the other two nulls are returned (because the user doesn't have the proper rights). That is how in our project we used Spring's AOP to create a simple framework that checks whether the user can have the given component created or not. Thanks to this, once the aspects are programmed, one doesn't have to remember to write any security-related code, since it is done automatically. If you have any suggestions related to this post, please feel free to leave a comment :)
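To complement the Admin scenario above, here is a sketch of an analogous test for the GUEST role. It is not part of the original post; it assumes the same application context and component classes shown earlier, and the expected results simply follow from the @SecurityAnnotation declarations.

package pl.grzejszczak.marcin.aop;

import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

import pl.grzejszczak.marcin.aop.service.UserService;
import pl.grzejszczak.marcin.aop.type.Role;
import pl.grzejszczak.marcin.aop.ui.SomeComponentForAdmin;
import pl.grzejszczak.marcin.aop.ui.SomeComponentForAdminAndGuest;
import pl.grzejszczak.marcin.aop.ui.SomeComponentForGuest;
import pl.grzejszczak.marcin.aop.ui.SomeComponentForWriter;
import pl.grzejszczak.marcin.aop.ui.UIFactory;
import pl.grzejszczak.marcin.aop.user.UserHolder;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = { "classpath:aopApplicationContext.xml" })
public class AopGuestTest {

    @Autowired
    private UIFactory uiFactory;
    @Autowired
    private UserService userService;

    // A guest should only get the components whose annotation includes Role.GUEST.
    @Test
    public void guestTest() throws Exception {
        userService.setCurrentUser(new UserHolder(Role.GUEST));
        Assert.assertNull(uiFactory.createComponent(SomeComponentForAdmin.class));
        Assert.assertNotNull(uiFactory.createComponent(SomeComponentForAdminAndGuest.class));
        Assert.assertNotNull(uiFactory.createComponent(SomeComponentForGuest.class));
        Assert.assertNull(uiFactory.createComponent(SomeComponentForWriter.class));
    }
}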
We can exclude transitive dependencies easily from specific configurations. To exclude them from all configurations we can use Groovy's spread-dot operator and invoke the exclude() method on each configuration. We can only define the group, module or both as arguments for the exclude() method. The following part of a build file shows how we can exclude a dependency from all configurations: ... configurations { all*.exclude group: 'xml-apis', module: 'xmlParserAPIs' } // Equivalent to: configurations { all.collect { configuration -> configuration.exclude group: 'xml-apis', module: 'xmlParserAPIs' } } ...
It's awesome that JUnit is recognizing the usefulness of Hamcrest, because I use these two a lot. However, I find JUnit's packaging of its dependencies odd; it can cause class loading problems if you are not careful. Let's take a closer look. If you look at junit:junit:4.10 from Maven Central, you will see that it has this dependencies graph: +- junit:junit:jar:4.10:test | \- org.hamcrest:hamcrest-core:jar:1.1:test This is great, except that inside junit-4.10.jar you will also find the hamcrest-core-1.1.jar content embedded! But why??? I suppose it's a convenience for folks who use Ant, so that they have one less jar to package in their lib folder, but it's not very Maven friendly. And you can also expect classloading trouble if you want to upgrade Hamcrest or use extra Hamcrest modules. Now if you have used Hamcrest long enough, you know that most of the goodies are in a second module named hamcrest-library, which JUnit didn't package in. JUnit, however, chose to include some JUnit+Hamcrest extensions of its own. Including duplicated classes in a jar is a real troublemaker, so JUnit has a separate module, junit-dep, that doesn't include the Hamcrest core packages and helps you avoid this issue. So if you are using a Maven project, you should use this instead:
<dependency>
  <groupId>junit</groupId>
  <artifactId>junit-dep</artifactId>
  <version>4.10</version>
  <scope>test</scope>
  <exclusions>
    <exclusion>
      <groupId>org.hamcrest</groupId>
      <artifactId>hamcrest-core</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<dependency>
  <groupId>org.hamcrest</groupId>
  <artifactId>hamcrest-library</artifactId>
  <version>1.2.1</version>
  <scope>test</scope>
</dependency>
See how I have to exclude hamcrest-core from junit-dep. This is needed if you want a hamcrest-library version higher than the one JUnit comes with, which is 1.1. Interestingly enough, Maven's dependency resolution is order sensitive when it comes to automatically resolving conflicting versions: it just picks the first one found and ignores the rest. So you can shorten the above without the exclusion if, and only if, you place the Hamcrest dependency before JUnit like this:
<dependency>
  <groupId>org.hamcrest</groupId>
  <artifactId>hamcrest-library</artifactId>
  <version>1.2.1</version>
  <scope>test</scope>
</dependency>
<dependency>
  <groupId>junit</groupId>
  <artifactId>junit-dep</artifactId>
  <version>4.10</version>
  <scope>test</scope>
</dependency>
This should make Maven use the following dependencies: +- org.hamcrest:hamcrest-library:jar:1.2.1:test | \- org.hamcrest:hamcrest-core:jar:1.2.1:test +- junit:junit-dep:jar:4.10:test However, I think using the exclusion tag would probably give you a more stable build than relying on Maven's implicit ordering rule, and it avoids an easy mistake for Maven beginners. Still, I wish JUnit would do a better job at packaging and remove the duplicated classes from its jar. I personally think it would be more productive for JUnit to also include hamcrest-library instead of just hamcrest-core. What do you think?
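Once hamcrest-library 1.2.1 is on the test classpath (with hamcrest-core excluded from junit-dep as above), the richer org.hamcrest.Matchers class becomes available in tests. Here is a small sketch of such a test; the class name and the values it checks are made up purely for illustration:

import static org.hamcrest.Matchers.containsString;
import static org.hamcrest.Matchers.greaterThan;
import static org.hamcrest.Matchers.hasItem;
import static org.junit.Assert.assertThat;

import java.util.Arrays;
import java.util.List;

import org.junit.Test;

public class HamcrestLibraryTest {

    @Test
    public void usesMatchersFromHamcrestLibrary() {
        List<String> artifacts = Arrays.asList("junit-dep", "hamcrest-library");

        // hasItem and containsString also exist in hamcrest-core's CoreMatchers...
        assertThat(artifacts, hasItem("hamcrest-library"));
        assertThat("hamcrest-library", containsString("library"));

        // ...while greaterThan is contributed by hamcrest-library and is not part of
        // the hamcrest-core classes bundled inside junit-4.10.jar.
        assertThat(artifacts.size(), greaterThan(1));
    }
}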
This article aims to show you how to quickly fix the most common Java security code violations. It assumes that you are familiar with the concept of code rules and violations and how Sonar reports on them. However, if you haven't heard these terms before, you might take a look at Sonar Concepts or the forthcoming book about Sonar for a more detailed explanation. To get an idea: during Sonar analysis, your project is scanned by many tools to ensure that the source code conforms to the rules you've created in your quality profile. Whenever a rule is violated... well, a violation is raised. With Sonar you can track these violations in the violations drilldown view or in the source code editor. There are hundreds of rules, categorized based on their importance. I'll try, in future posts, to cover as many as I can, but for now let's take a look at some common security rules / violations. There are two pairs of rules (all of them ranked as critical in Sonar) we are going to examine right now. 1. Array is Stored Directly (PMD) and Method returns internal array (PMD) These violations appear in cases where an internal array is stored or returned directly from a method. The following example illustrates a simple class that violates these rules. public class CalendarYear { private String[] months; public String[] getMonths() { return months; } public void setMonths(String[] months) { this.months = months; } } To eliminate them you have to clone the array before storing / returning it, as shown in the following class implementation, so no one can modify or get the original data of your class, but only a copy of it. public class CalendarYear { private String[] months; public String[] getMonths() { return months.clone(); } public void setMonths(String[] months) { this.months = months.clone(); } } 2. Nonconstant string passed to execute method on an SQL statement (findbugs) and A prepared statement is generated from a nonconstant String (findbugs) Both rules are related to database access when using JDBC libraries. Generally there are two ways to execute an SQL command via a JDBC connection: Statement and PreparedStatement. There is a lot of discussion about their pros and cons, but that is out of the scope of this post. Let's see how the first violation is raised, based on the following source code snippet. Statement stmt = conn.createStatement(); String sqlCommand = "Select * FROM customers WHERE name = '" + custName + "'"; stmt.execute(sqlCommand); You've already noticed that the sqlCommand parameter passed to the execute method is dynamically created at run-time, which is not acceptable to this rule. Similar situations cause the second violation. String sqlCommand = "insert into customers (id, name) values (?, ?)"; Statement stmt = conn.prepareStatement(sqlCommand); You can overcome these problems in three different ways. You can either use StringBuilder or the String.format method to create the values of the string variables. If applicable, you can define the SQL command as a constant in the class declaration, but that only works when the SQL command is not required to change at runtime. (A parameter-binding sketch with PreparedStatement follows at the end of this post.) Let's re-write the first code snippet using StringBuilder: Statement stmt = conn.createStatement(); stmt.execute(new StringBuilder("Select * FROM customers WHERE name = '"). append(custName). append("'").toString()); and using String.format: Statement stmt = conn.createStatement(); String sqlCommand = String.format("Select * from customers where name = '%s'", custName); stmt.execute(sqlCommand); For the second example you can just declare the sqlCommand as follows: private static final String SQLCOMMAND = "insert into customers (id, name) values (?, ?)"; There are more security rules, such as the blocker Hardcoded constant database password, but I assume that nobody still hardcodes passwords in source code files... In following articles I'm going to show you how to adhere to performance and bad practice rules. Until then I'm waiting for your comments or suggestions.
Today we're going to build a simple TCP proxy server. The scenario: we've got one host (the client) that establishes a TCP connection to another one (the remote). client —> remote We want to set up a proxy server in the middle, so the client will establish the connection with the proxy and the proxy will forward it to the remote, relaying the remote's responses back as well. With node.js it is really simple to perform this kind of network operation. client —> proxy —> remote var net = require('net'); var LOCAL_PORT = 6512; var REMOTE_PORT = 6512; var REMOTE_ADDR = "192.168.1.25"; var server = net.createServer(function (socket) { socket.on('data', function (msg) { console.log(' ** START **'); console.log('<< From client to proxy ', msg.toString()); var serviceSocket = new net.Socket(); serviceSocket.connect(parseInt(REMOTE_PORT), REMOTE_ADDR, function () { console.log('>> From proxy to remote', msg.toString()); serviceSocket.write(msg); }); serviceSocket.on("data", function (data) { console.log('<< From remote to proxy', data.toString()); socket.write(data); console.log('>> From proxy to client', data.toString()); }); }); }); server.listen(LOCAL_PORT); console.log("TCP server accepting connection on port: " + LOCAL_PORT); Simple, isn't it? The source code is on GitHub.
While exploring HDFS, I came across these two syntaxes for querying HDFS: > hadoop dfs > hadoop fs Initially I couldn't differentiate between the two, and kept wondering why we have two different syntaxes for a common purpose. I found a number of people online with the same question -- their thoughts are below. Per Chris's explanation, it seems there is no difference between the two syntaxes: if we look at the definitions of the two commands (hadoop fs and hadoop dfs) in $HADOOP_HOME/bin/hadoop, both point at the same class ... elif [ "$COMMAND" = "datanode" ] ; then CLASS='org.apache.hadoop.hdfs.server.datanode.DataNode' HADOOP_OPTS="$HADOOP_OPTS $HADOOP_DATANODE_OPTS" elif [ "$COMMAND" = "fs" ] ; then CLASS=org.apache.hadoop.fs.FsShell HADOOP_OPTS="$HADOOP_OPTS $HADOOP_CLIENT_OPTS" elif [ "$COMMAND" = "dfs" ] ; then CLASS=org.apache.hadoop.fs.FsShell HADOOP_OPTS="$HADOOP_OPTS $HADOOP_CLIENT_OPTS" elif [ "$COMMAND" = "dfsadmin" ] ; then CLASS=org.apache.hadoop.hdfs.tools.DFSAdmin HADOOP_OPTS="$HADOOP_OPTS $HADOOP_CLIENT_OPTS" ... That was his reasoning. Unconvinced, I kept looking for a more persuasive answer, and these excerpts made more sense to me: FS relates to a generic file system which can point to any file system, such as the local file system or HDFS, whereas dfs is very specific to HDFS. So when we use FS it can perform operations from/to either the local file system or the Hadoop distributed file system, while DFS operations relate only to HDFS. Below are two excerpts from the Hadoop documentation that describe these two as different shells. FS Shell The FileSystem (FS) shell is invoked by bin/hadoop fs. All the FS shell commands take path URIs as arguments. The URI format is scheme://authority/path. For HDFS the scheme is hdfs, and for the local filesystem the scheme is file. The scheme and authority are optional. If not specified, the default scheme specified in the configuration is used. An HDFS file or directory such as /parent/child can be specified as hdfs://namenodehost/parent/child or simply as /parent/child (given that your configuration is set to point to hdfs://namenodehost). Most of the commands in the FS shell behave like corresponding Unix commands. DFShell The HDFS shell is invoked by bin/hadoop dfs. All the HDFS shell commands take path URIs as arguments. The URI format is scheme://authority/path. For HDFS the scheme is hdfs, and for the local filesystem the scheme is file. The scheme and authority are optional. If not specified, the default scheme specified in the configuration is used. An HDFS file or directory such as /parent/child can be specified as hdfs://namenode:namenodeport/parent/child or simply as /parent/child (given that your configuration is set to point to namenode:namenodeport). Most of the commands in the HDFS shell behave like corresponding Unix commands. So, based on the above, we can conclude that it all depends on the scheme configuration. When using these two commands with an absolute URI (i.e. scheme://a/b) the behavior is identical. The only difference is the default configured scheme value (file for fs and hdfs for dfs, respectively), which is what causes any difference in behavior.
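For completeness, the same scheme-resolution rule can be observed from the Java side through the generic FileSystem API. This sketch is not part of the original discussion, and the namenode host is a placeholder:

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class SchemeDemo {
    public static void main(String[] args) throws Exception {
        // Picks up the cluster configuration, including the default file system scheme.
        Configuration conf = new Configuration();

        // No scheme given: the configured default file system is used (local or HDFS).
        FileSystem defaultFs = FileSystem.get(conf);
        System.out.println("Default file system: " + defaultFs.getUri());

        // Absolute URI: the scheme in the URI wins, regardless of the default
        // ("namenodehost" is a placeholder taken from the documentation excerpt above).
        FileSystem hdfs = FileSystem.get(new URI("hdfs://namenodehost/"), conf);
        System.out.println("Explicit file system: " + hdfs.getUri());
    }
}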
The more I use dependency injection (DI) in my code, the more it alters the way I see both my design and implementation. Injection is so convenient and powerful that you end up wanting to make sure you use it as often as you can. And as it turns out, you can use it in many, many places. Let's cover briefly the most obvious scenarios where DI, and more specifically Guice, is a good fit: objects created either at class loading time or very early in your application. These two aspects are covered by either direct injection or by providers, which allow you to start building some of your object graph before you can inject more objects. I won't go into much detail about these two use cases since they are explained in pretty much any Guice tutorial you can find on the net. Once the injector has created your graph of objects, you are pretty much back to normal, instantiating your "runtime objects" (the objects you create during the lifetime of your application) the usual way, most likely with "new" or factories. However, you will quickly start noticing that while you need some runtime information to create these objects, other parts of them could be injected. Let's take the following example: we have a GeoService interface that provides various geolocation functions, such as telling you if two addresses are close to each other: public interface GeoService { /** * @return true if the two addresses are within @param{miles} * miles of each other. */ boolean isNear(Address address1, Address address2, int miles); } Then you have a Person class which uses this service and also needs a name and an address to be instantiated: public class Person { // Fields omitted public Person(String name, Address address, GeoService gs) { this.name = name; this.address = address; this.geoService = gs; } public boolean livesNear(Person otherPerson) { return geoService.isNear(address, otherPerson.getAddress(), 2 /* miles */); } } Something odd should jump out at you right away with this class: while name and address are part of the identity of a Person object, the presence of the GeoService instance in it feels wrong. The service is a singleton that is created on start-up, so it is a perfect candidate to be injected; but how can I create a Person object when some of its information is supplied by Guice and the rest by myself? Guice gives you a very elegant and flexible way to implement this scenario with "assisted injection". The first step is to define a factory for our objects that represents exactly how we want to create them: public interface PersonFactory { Person create(String name, Address address); } Since only name and address participate in the identity of our Person objects, these are the only parameters we need to construct our objects. The other parameters should be supplied by Guice, so we modify our Person constructor to let Guice know: @Inject public Person(@Assisted String name, @Assisted Address address, GeoService geoService) { this.name = name; this.address = address; this.geoService = geoService; } In this code, I have added an @Inject annotation on the constructor and an @Assisted annotation on each parameter that I will be providing. Guice will take care of injecting the rest. Finally, we connect the factory to its objects when creating the module: Module module1 = new FactoryModuleBuilder() .build(PersonFactory.class); The important part here is to realize that we will never instantiate PersonFactory: Guice will.
From now on, all we need to do whenever we want to instantiate a Person object is to ask Guice to hand us a factory: @Inject private PersonFactory personFactory; // ... Person p = personFactory.create("Bob", new Address("1 Ocean st")); If you want to find out more, take a look at the main documentation for assisted injection, which explains how to support overloaded constructors and also how to create different kinds of objects within the same factory. Wrapping up Let’s take a look at what we did. First, we started with a suspicious looking constructor: public Person(String name, Address address, GeoService s) { This constructor is suspicious because it accepts parameters that do not participate in the identity of the object (you won’t use the GeoService parameter when calculating the hash code of a Person object). Instead, we replaced this constructor with a factory that only accepts identity fields: public interface PersonFactory { Person create(String name, Address address); } and we let Guice’s assisted injection take care of creating a fully formed object for us. This observation leads us to the Identity Constructor rule: If a constructor accepts parameters that are not used to define the identity of the objects, consider injecting these parameters. Once you start looking at your objects with this rule in mind, you will be surprised to find out how many of them can benefit from assisted injection.
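To round this out, here is a sketch of how the pieces above could be wired together and used. GeoServiceImpl is a hypothetical implementation class introduced only for this example; the rest follows the Guice APIs and the Person, Address, GeoService and PersonFactory types shown in the post.

import com.google.inject.AbstractModule;
import com.google.inject.Guice;
import com.google.inject.Injector;
import com.google.inject.Singleton;
import com.google.inject.assistedinject.FactoryModuleBuilder;

public class AppModule extends AbstractModule {

    @Override
    protected void configure() {
        // GeoServiceImpl is a hypothetical implementation of GeoService for this sketch.
        bind(GeoService.class).to(GeoServiceImpl.class).in(Singleton.class);

        // Let Guice generate the PersonFactory implementation (assisted injection).
        install(new FactoryModuleBuilder().build(PersonFactory.class));
    }

    public static void main(String[] args) {
        Injector injector = Guice.createInjector(new AppModule());
        PersonFactory personFactory = injector.getInstance(PersonFactory.class);

        // Only the identity fields are supplied here; GeoService is injected by Guice.
        Person bob = personFactory.create("Bob", new Address("1 Ocean st"));
        Person alice = personFactory.create("Alice", new Address("2 Ocean st"));
        System.out.println(bob.livesNear(alice));
    }
}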
We are proud to announce that the newest major release of our Eclipse-based developer tooling is now available. This is a major release not only in terms of new features but also because of other significant changes like project componentization, open-sourcing, and the fact that for the first time we are making multiple distributions available, each tailored for a different kind of developer. Check out the release announcement on Martin Lippert's Blog. 100% Open Sourced – All STS features that were previously under a free commercial license have been donated under the Eclipse Public License (EPL) at GitHub! Intelligent Repackaging – Repackaging the product itself makes identifying what tools you need, and getting started with them, much easier. In the past, Groovy/Grails developers had to install several extensions manually into Eclipse to get started. Now there are two full Eclipse distributions, one targeted at Spring developers, the other at Groovy/Grails developers – just download, install and go, no assembly required. Componentized projects: Componentizing allows installation and configuration flexibility – developers can install components individually into their existing, plain Eclipse Java EE installations if they wish, preserving their hard work of configuring their Eclipse IDEs just the way they like them. Downloads, more information and FAQ You can find the downloads as well as more information on the project websites for the tool suites: Spring Tool Suite Groovy/Grails Tool Suite Installation Instructions FAQ Feedback and discussions If you have feedback or questions for us, please do not hesitate to contact us via our SpringSource Tool Suite forum. Bugs and feature requests are always welcome as tickets in our JIRA or, even better, as pull requests on GitHub.