Clustered Quartz Scheduler With Spring Boot and MongoDB
The Spring Boot library for Quartz does not work correctly if you start up two instances of a service in parallel. Each instance starts executing the same jobs, while the expected behavior is that one instance is elected to execute the jobs and the others wait in the background; if the elected instance fails, another one is elected.
Spring's @Scheduled annotation is a very convenient and easy way of scheduling tasks for several reasons. But if you have a clustered environment and must run a given job from only one node, that convenience does not hold: you will find that all instances start executing the scheduled jobs at almost the same time. This may cause problems ranging from unnecessary calls to data inconsistency.
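To make the problem concrete, here is a minimal sketch of the kind of plain @Scheduled task described above; the class and method names are hypothetical, not from the original project. With two instances of the service running, both of them execute it on every tick:

import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

// Hypothetical example: a plain Spring scheduled task.
// With two service instances running, BOTH execute this every minute
// (assuming @EnableScheduling is present on a configuration class).
@Component
public class NaiveSyncTask {

    @Scheduled(fixedRate = 60000) // every 60 seconds, on every instance
    public void sync() {
        // calls an external API, writes to the database, etc.
    }
}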
Searching for a solution to achieve this behavior, I found ShedLock and Quartz to be the most prominent options. The latter is much more robust, can be used in any Java application, and is the most widely used, while the former is much easier to configure.
You can get a good introduction to ShedLock from here.
In this article, I will discuss how to achieve single execution in a clustered environment using Quartz. Do not be frightened of Quartz, though.
So, let us start by adding dependencies to the application. By default, Quartz only provides support for traditional relational databases. But, thanks to Michael Klishin and MuleSoft, there is a MongoDB implementation of the Quartz job store that works in a clustered environment (it can be found here over GitHub).
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-quartz</artifactId>
</dependency>
<dependency>
<groupId>com.novemberain</groupId>
<artifactId>quartz-mongodb</artifactId>
<version>2.1.0</version>
</dependency>
I have set up a config like the following in my application.yml. You can choose a different structure or use the default spring.quartz.YYYY structure (a sketch of that alternative appears after the clustering note below). It is a bit verbose to indent all those properties (laziness is evil though), so I chose this one. You can check all the scheduler properties over here.
quartz:
  properties:
    org.quartz.scheduler.instanceName: MyClusteredScheduler
    org.quartz.scheduler.instanceId: AUTO # you can also define a custom generator of your own or use an existing one like HostnameInstanceIdGenerator
    org.quartz.scheduler.skipUpdateCheck: true
    org.quartz.scheduler.jobFactory.class: org.quartz.simpl.SimpleJobFactory
    org.quartz.scheduler.threadsInheritContextClassLoaderOfInitializer: true
    org.quartz.threadPool.threadCount: 2
    org.quartz.threadPool.threadPriority: 5
    org.quartz.jobStore.isClustered: true # here is the magic which allows creating a lock automatically. Check here for more
    org.quartz.jobStore.misfireThreshold: 30000
    org.quartz.jobStore.class: com.novemberain.quartz.mongodb.MongoDBJobStore
    org.quartz.jobStore.mongoUri: ${spring.data.mongodb.uri}
    org.quartz.jobStore.dbName: ${your_db_name}
    org.quartz.jobStore.collectionPrefix: qrtz
    org.quartz.jobStore.clusterCheckinInterval: 60000 # checks in with the other instances of the cluster, default 15000
As you have seen, I set org.quartz.jobStore.isClustered to true. As per the documentation, clustering currently only works with the JDBC-JobStore (JobStoreTX or JobStoreCMT), and essentially works by having each node of the cluster share the same database; the quartz-mongodb job store provides the same clustering behavior on top of MongoDB.
Load-balancing occurs automatically, with each node of the cluster firing jobs as quickly as it can. When a trigger’s firing time occurs, the first node that acquires a lock will fire it.
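For reference, the same properties can also live under the default spring.quartz structure mentioned earlier, relying on Spring Boot's Quartz auto-configuration instead of the custom factory bean shown below. This is only a sketch of that alternative layout and is not taken from the original setup:

spring:
  quartz:
    properties:
      org.quartz.scheduler.instanceName: MyClusteredScheduler
      org.quartz.scheduler.instanceId: AUTO
      org.quartz.jobStore.isClustered: true
      org.quartz.jobStore.class: com.novemberain.quartz.mongodb.MongoDBJobStore
      org.quartz.jobStore.mongoUri: ${spring.data.mongodb.uri}
      org.quartz.jobStore.dbName: ${your_db_name}
      org.quartz.jobStore.collectionPrefix: qrtz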
Moving on, we then need to let Spring manage the creation of the Quartz scheduler using the SchedulerFactoryBean and load all the properties from application.yml.
prefix = "quartz") (
public class QuartzConfiguration {
private Map<String, String> properties;
public Map<String, String> getProperties() {
return properties;
}
public void setProperties(Map<String, String> properties) {
this.properties = properties;
}
private Properties getAllProperties() {
Properties props = new Properties();
props.putAll(properties);
return props;
}
// Don't Inject this directly in other spring Components since it is a factory, instead use Scheduler
public SchedulerFactoryBean schedulerFactoryBean() {
SchedulerFactoryBean scheduler = new SchedulerFactoryBean();
scheduler.setQuartzProperties(getAllProperties ());
scheduler.setWaitForJobsToCompleteOnShutdown(true);
// Set the key of an ApplicationContext reference to
// expose in the SchedulerContext. If you extend QuartzJobBean instead,
// you have to use setApplicationContext
scheduler.setApplicationContextSchedulerContextKey("applicationContext");
return scheduler;
}
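As the comment in the configuration says, other components should inject the Scheduler rather than the factory bean itself. As a quick sanity check, a small component along these lines (hypothetical, not part of the original code) can log whether the job store is actually running in clustered mode:

import javax.annotation.PostConstruct;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.SchedulerMetaData;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

// Hypothetical helper: logs scheduler metadata on startup.
@Component
public class SchedulerInfoLogger {

    @Autowired
    private Scheduler scheduler;

    @PostConstruct
    public void logMetaData() throws SchedulerException {
        SchedulerMetaData meta = scheduler.getMetaData();
        // With the MongoDB job store configured above, isJobStoreClustered() should report true
        System.out.println("Scheduler " + meta.getSchedulerInstanceId()
                + " clustered: " + meta.isJobStoreClustered());
    }
}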
The job to be executed follows. Note that it is recommended not to add any business logic here unless it is necessary and related to the job itself.
public class SyncJob implements Job {

    @Override
    public void execute(JobExecutionContext context) throws JobExecutionException {
        try {
            ApplicationContext ctx = getContext(context);
            SyncService syncService = ctx.getBean(SyncService.class);
            syncService.start();
        } catch (Exception e) {
            // do something
        }
    }

    private ApplicationContext getContext(JobExecutionContext context) throws Exception {
        ApplicationContext ctx =
                (ApplicationContext) context.getScheduler().getContext().get("applicationContext");
        if (ctx == null) {
            throw new JobExecutionException("No application context available in scheduler context.");
        }
        return ctx;
    }
}
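As noted in the factory bean comments, a job can also extend Spring's QuartzJobBean instead of implementing Job directly. A minimal sketch of that variant (the class name is hypothetical; it pulls SyncService out of the scheduler context in the same way):

import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;
import org.quartz.SchedulerException;
import org.springframework.context.ApplicationContext;
import org.springframework.scheduling.quartz.QuartzJobBean;

// Hypothetical alternative to SyncJob that extends QuartzJobBean.
public class SyncQuartzJobBean extends QuartzJobBean {

    @Override
    protected void executeInternal(JobExecutionContext context) throws JobExecutionException {
        try {
            ApplicationContext ctx = (ApplicationContext)
                    context.getScheduler().getContext().get("applicationContext");
            ctx.getBean(SyncService.class).start();
        } catch (SchedulerException e) {
            throw new JobExecutionException(e);
        }
    }
}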
And finally, let us define Jobs and their Triggers.
@Configuration
public class JobConfiguration {

    private static final String SYNCDATAJOB = "syncDataJob";
    private static final String SYNCDATAGROUP = "syncData_group";
    private static final String SYNCDATATRIGGER = "syncData_Trigger";

    @Autowired
    private Scheduler scheduler;

    @PostConstruct
    private void init() throws Exception {
        scheduler.addJob(syncData(), true, true);
        if (!scheduler.checkExists(new TriggerKey(SYNCDATATRIGGER, SYNCDATAGROUP))) {
            scheduler.scheduleJob(triggerSyncData());
        }
    }

    private JobDetail syncData() {
        JobDetailImpl jobDetail = new JobDetailImpl();
        jobDetail.setKey(new JobKey(SYNCDATAJOB, SYNCDATAGROUP));
        jobDetail.setJobClass(SyncJob.class);
        return jobDetail;
    }

    private Trigger triggerSyncData() {
        return newTrigger()
                .forJob(syncData())
                .withIdentity(SYNCDATATRIGGER, SYNCDATAGROUP)
                .withSchedule(simpleSchedule()
                        .withIntervalInMilliseconds(60000L) // every 60 seconds
                        .repeatForever())
                .build();
    }
}
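If you prefer calendar-style scheduling over a fixed interval, the trigger can also be built from a cron expression. Here is a sketch of such a method that could sit alongside triggerSyncData() in the same configuration class; the cron expression (every minute) is only an example, and it assumes static imports of TriggerBuilder.newTrigger and CronScheduleBuilder.cronSchedule:

// Hypothetical cron variant: fires at second 0 of every minute.
private Trigger triggerSyncDataCron() {
    return newTrigger()
            .forJob(syncData())
            .withIdentity("syncData_CronTrigger", SYNCDATAGROUP)
            .withSchedule(cronSchedule("0 * * * * ?"))
            .build();
}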
Finally, the service to be called upon job execution:
@Service
public class SyncService {

    private Logger logger = LoggerFactory.getLogger(SyncService.class);

    public void start() {
        logger.info("{} Requesting a new token", "SyncService#start()");
        // do the logic
    }
}
When you now run the Spring Boot application, you should see the following collections created (their names follow the org.quartz.jobStore.collectionPrefix configured above, qrtz in this case):
- qrtz_calendars
- qrtz_jobs
- qrtz_locks
- qrtz_schedulers
- qrtz_triggers
Note: If you are interested in having a custom job store, you can follow this article.
If you have read this far, I hope you enjoyed the article. Thank you.