Compute Grids vs. Data Grids
Compute Grid
Compute grids allow you to take a computation, optionally split it into multiple parts, and execute them on different grid nodes in parallel. The obvious benefit is that your computation runs faster, as it can now use the resources of all grid nodes in parallel. One of the most common design patterns for parallel execution is MapReduce (see the sketch after this feature list). However, compute grids are useful even if you don't need to split your computation: they improve the overall scalability and fault tolerance of your system by offloading computations onto the most available nodes. Some of the "must have" compute grid features are:
- Automatic deployment - allows classes and resources to be deployed onto the grid automatically, without any extra steps from the user. This feature alone provides one of the largest productivity boosts in distributed systems. Users can simply execute a task from one grid node, and as task execution spreads across the grid, all classes and resources are deployed automatically as well.
- Topology resolution - allows you to provision nodes based on any node characteristic or user-specific configuration. For example, you can decide to include only Linux nodes for execution, or only a certain group of nodes within a certain time window. You should also be able to choose all nodes with CPU load under, say, 50% that have more than 2GB of available heap memory.
- Collision resolution - allows users to control which jobs get executed, which jobs get rejected, how many jobs can be executed in parallel, the order of overall execution, etc.
- Load balancing - allows you to properly balance system load within the grid. The range of load balancing policies varies between products; some of the most common are round-robin, random, and adaptive. More advanced vendors also provide affinity load balancing, where grid jobs always end up on the same node based on the job's affinity key. This policy works well with the data grids described below.
- Fail-over - grid jobs should automatically fail over onto other nodes in case of a node crash or some other job failure.
- Checkpoints - long-running jobs should be able to periodically store their intermediate state. This is useful for fail-over: a failed job can pick up its execution from the latest checkpoint rather than starting from scratch (a minimal sketch appears after the vendor list below).
- Grid events - a querying mechanism for all grid events is essential. Any grid node should be able to query all events that happened on remote grid nodes during grid task execution.
- Node metrics - a good compute grid solution should provide dynamic metrics for all grid nodes. Metrics should include vital node statistics, from CPU load to average job execution time. This is especially useful for load balancing, when the system or the user needs to pick the least loaded node for execution.
- Pluggability - in order to blend into any environment, a good compute grid should have well-thought-out pluggability points. For example, when running on top of JBoss, a compute grid should reuse the JBoss communication and discovery protocols.
- Data grid integration - it is important that compute grids are able to integrate natively with data grids, as businesses quite often need both computational and data features working within the same application.
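To make the split/aggregate idea concrete, here is a minimal sketch in plain Java. Since vendor APIs differ, it uses a local java.util.concurrent thread pool as a stand-in for grid nodes; the map (split) and reduce (aggregate) shape is the same one a real compute grid would distribute across machines. All class and variable names here are illustrative, not any product's API.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class WordCountSplitExample {
    public static void main(String[] args) throws Exception {
        List<String> documents = List.of(
            "compute grids split work", "into independent jobs", "and aggregate the results");

        // Stand-in for grid nodes: a local thread pool.
        ExecutorService grid = Executors.newFixedThreadPool(3);

        // "Map" step: split the computation into one job per document.
        List<Future<Integer>> jobs = new ArrayList<>();
        for (String doc : documents) {
            jobs.add(grid.submit(() -> doc.split("\\s+").length));
        }

        // "Reduce" step: aggregate the partial results into the final answer.
        int totalWords = 0;
        for (Future<Integer> job : jobs) {
            totalWords += job.get();
        }
        System.out.println("total words: " + totalWords);
        grid.shutdown();
    }
}

In a real product, each submitted job would also benefit from the features above: automatic deployment of its classes, load balancing across nodes, and fail-over if a node crashes mid-job.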
Some compute grid vendors:
- GridGain - professional open source
- JPPF - open source
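Before moving on to data grids, here is a minimal sketch of the checkpoint feature mentioned above, assuming a simple file-based store. The file name and the 10,000-iteration interval are arbitrary choices for illustration; real products supply their own checkpoint storage.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class CheckpointedJob {
    // Hypothetical checkpoint location; a grid product would provide its own store.
    private static final Path CHECKPOINT = Paths.get("job.checkpoint");

    public static void main(String[] args) throws IOException {
        // Resume from the latest checkpoint if one exists, else start from scratch.
        long start = Files.exists(CHECKPOINT)
            ? Long.parseLong(Files.readString(CHECKPOINT).trim())
            : 0;

        for (long i = start; i < 1_000_000; i++) {
            doUnitOfWork(i);
            if (i % 10_000 == 0) {
                // Periodically persist intermediate state for fail-over.
                Files.writeString(CHECKPOINT, Long.toString(i));
            }
        }
        Files.deleteIfExists(CHECKPOINT); // job finished; clear the checkpoint
    }

    private static void doUnitOfWork(long i) {
        // placeholder for the actual computation step
    }
}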
Data Grid
Data grids allow you to distribute your data across the grid. Most of us are used to the term distributed cache rather than data grid (although data grid does sound more savvy). The main goal of a data grid is to serve as much data as possible from memory on every grid node while ensuring data coherency. Some of the important data grid features include:
- Data replication - all data is fully replicated to every node in the grid. This strategy consumes the most resources, but it is the most effective solution for read-mostly scenarios, as data is available everywhere for immediate access.
- Data invalidation - in this scenario, nodes load data on demand. Whenever data changes on one of the nodes, the same data on all other nodes is purged (invalidated) and will be reloaded on demand the next time it is accessed.
- Distributed transactions - transactions are required to ensure data coherency. Cache updates must work just like database updates: whenever an update fails, the whole transaction must be rolled back. Most data grids support various transaction isolation levels, such as read committed, repeatable read, and serializable.
- Data backups - useful for fail-over. Some data grid products provide the ability to assign backup nodes for data. This way, whenever a node crashes, its data is immediately available from another node.
- Data affinity/partitioning - data affinity allows you to split/partition your whole data set into multiple subsets and assign every subset to a grid node. In its purest form, data is not replicated between nodes at all; every node is responsible only for its own subset of the data. However, various data grid products may provide different flavors of data affinity, such as replication only to backup nodes.
Data affinity is one of the more advanced features and is not provided by every vendor. To my knowledge, according to product websites, among commercial vendors Oracle Coherence and GemStone have it (there may be others). In the professional open source space, you can look at the combination of GridGain with affinity load balancing and JBoss Cache. The sketch below shows the routing idea in its simplest form.
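This is a minimal, assumption-laden toy: it maps every key deterministically to one owning node via modulo hashing over a fixed node list. Real products typically use consistent hashing so that a node failure remaps only a fraction of the keys, and the node names here are hypothetical.

import java.util.List;

public class AffinityRouter {
    private final List<String> nodes;

    public AffinityRouter(List<String> nodes) {
        this.nodes = nodes;
    }

    // Every key maps deterministically to one owning node, so a job
    // carrying the same affinity key always lands where its data lives.
    public String nodeFor(Object affinityKey) {
        int bucket = Math.floorMod(affinityKey.hashCode(), nodes.size());
        return nodes.get(bucket);
    }

    public static void main(String[] args) {
        AffinityRouter router = new AffinityRouter(List.of("node-1", "node-2", "node-3"));
        System.out.println("order:42 -> " + router.nodeFor("order:42"));
        System.out.println("order:42 -> " + router.nodeFor("order:42")); // same node every time
    }
}

The same routing function, shared by the data grid and the compute grid's affinity load balancer, is what lets a job execute on the very node that holds its data.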
Some data grid/cache vendors:
- Oracle Coherence - commercial
- GemStone - commercial
- GigaSpaces - commercial
- JBoss Cache - professional open source
- Ehcache - open source