Reputation: 18520
I have a set of Gulp (v4) tasks that do things like compile Webpack and Sass, compress images, etc. These tasks are automated with a "watch" task while I'm working on a project.
When my watch task is running, if I save a file, the "default" set of tasks gets run. If I save again before the "default" task finishes, another "default" task begins, resulting in multiple "default" tasks running concurrently.
I've fixed this by checking that the "default" task isn't already running before triggering a new one, but this causes a slowdown when I save a file, then quickly make another minor tweak and save again. In that case only the first change gets compiled, and I have to wait for the entire process to finish, then save again for the new change to get compiled.
My idea to circumvent this is to kill all the old "default" tasks whenever a new one gets triggered. This way, multiples of the same task won't run concurrently, but I can rely on the most recent code being compiled.
I did a bit of research, but I couldn't locate anything that seemed to match my situation.
How can I kill all the "old" gulp tasks, without killing the "watch" task?
EDIT 1: Current working theory is to store the "default" task set as a variable and somehow use that to kill the process, but that doesn't seem to work how I expected it to. I've placed my watch task below for reference.
// watch task, runs through all primary tasks, triggers when a file is saved
GULP.task("watch", () => {
    // set up a browser_sync server, if --sync is passed
    if (PLUGINS.argv.sync) {
        CONFIG_MODULE.config(GULP, PLUGINS, "browsersync").then(() => {
            SYNC_MODULE.sync(GULP, PLUGINS, CUSTOM_NOTIFIER);
        });
    }

    // watch for any changes
    const WATCHER = GULP.watch("src/**/*");

    // run default task on any change
    WATCHER.on("all", () => {
        if (!currently_running) {
            currently_running = true;
            GULP.task("default")();
        }
    });

    // end the task
    return;
});
EDIT 2: Thinking about this more, maybe this is more a Node.js question than a Gulp question – how can I stop a function from processing from outside that function? Basically I want to store the executing function as a variable somehow, and kill it when I need to restart it.
Upvotes: 3
Views: 719
Reputation: 181339
As @henry stated, if you switch to the non-chokidar version you get queuing for free (because it is the default). See no queue with chokidar.
But that doesn't speed up your task completion time. There was an issue requesting that the ability to stop a running task be added to gulp - how to stop a running task - it was summarily dealt with.
If one of your concerns is to speed up execution time, you can try the lastRun() function option (see the gulp lastRun() documentation):
Retrieves the last time a task was successfully completed during the current running process. Most useful on subsequent task runs while a watcher is running.
When combined with src(), enables incremental builds to speed up execution times by skipping files that haven't changed since the last successful task completion.
const { src, dest, lastRun, watch } = require('gulp');
const imagemin = require('gulp-imagemin');

function images() {
    return src('src/images/**/*.jpg', { since: lastRun(images) })
        .pipe(imagemin())
        .pipe(dest('build/img/'));
}

exports.default = function() {
    watch('src/images/**/*.jpg', images);
};
Example from the same documentation. In this case, if an image was successfully compressed during the current running task, it will not be re-compressed. Depending on your other tasks, this may cut down on your wait time for the queued tasks to finish.
Upvotes: 0
Reputation: 4385
There are two ways to set up a Gulp watch. They look very similar, but have the important difference that one supports queueing (and some other features) and the other does not.
The way you're using, which boils down to
const watcher = watch(<path glob>)
watcher.on(<event>, function(path, stats) {
<event handler>
});
uses the chokidar instance that underlies Gulp's watch(). When using the chokidar instance, you do not have access to the Gulp watch() queue.
The other way to run a watch boils down to
function watch() {
gulp.watch(<path>, function(callback) {
<handler>
callback();
});
}
or more idiomatically
function myTask() {…}
const watch = () => gulp.watch(<path>, myTask);
Set up like this, watch events should queue the way you're expecting, without your having to do anything extra.
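The queueing behaviour you get for free can be sketched in plain Node, with no gulp involved (`makeQueuedRunner` is a hypothetical helper, shown only to make the semantics concrete): if a build is already in flight, any number of saves collapse into exactly one re-run after it finishes, and that re-run sees the latest saved code.

```javascript
// Minimal sketch of "queue one pending run": if the task is already running,
// remember that another save came in and run exactly once more when it ends.
function makeQueuedRunner(task) {
  let running = false;
  let pending = false;
  return async function trigger() {
    if (running) {
      pending = true; // collapse any number of saves into one queued re-run
      return;
    }
    running = true;
    do {
      pending = false;
      await task(); // the re-run always compiles the latest saved state
    } while (pending);
    running = false;
  };
}
```

This is the behaviour the asker wanted from killing old tasks, without the risk of aborting a build halfway through.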
In your case, that means replacing your

const WATCHER = GULP.watch("src/**/*");

with

GULP.watch("src/**/*", GULP.series("default"));

(default is a reserved word in JavaScript, so reference the registered task via GULP.series("default") rather than a bare identifier) and deleting your entire WATCHER.on(…); handler.
That said, be careful with recursion there. I'm extrapolating from your use of a task named "default"… You don't want to find yourself in

const watch = () => gulp.watch("src/**/*", defaultTask);
const defaultTask = gulp.series(clean, build, serve, watch);
Using the chokidar instance can be useful for logging:
function handler() {…}
const watcher = gulp.watch(glob, handler);
watcher.on('all', (event, path) => {
    console.log(path + ': ' + event + ' detected'); // e.g. "src/test.txt: change detected" is logged immediately
});
Typically Browsersync would be set up outside of the watch function, and the watch would end in reloading the server. Something like
…
import browserSync from 'browser-sync';
const server = browserSync.create();

function serve(done) {
    server.init(…);
    done();
}

function reload(done) {
    server.reload();
    done();
}

function changeHandler() {…}

const watch = () => gulp.watch(path, gulp.series(changeHandler, reload));
const run = gulp.series(serve, watch);
Upvotes: 3