Disk quota exceeded

If a user reaches the quota limit, the filesystem that holds the users' home directories on High Performance Computing (HPC) servers (ZFS, in this case) does not even let them delete files. So the following command will not work:

rm my_file.dat
rm: cannot remove file `my_file.dat': Disk quota exceeded

The reason is that the filesystem needs to write a small amount of metadata before it can complete a deletion, and an exhausted quota leaves no room for that write. The workaround is the following pair of commands:

# Copy /dev/null over the file you want to delete, truncating it to zero bytes
cp /dev/null my_file.dat

# remove the now-empty file
rm my_file.dat

The first command overwrites the file you want to delete with /dev/null, truncating it to zero bytes and releasing its space. The second command then removes the empty file. Now you should have enough quota to perform more deletions in your home directory.
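
If many files have to go, the same truncate-then-delete trick can be put in a short loop. This is only a sketch, and the *.dat pattern is a placeholder for whatever files you actually want to remove:

# Truncate each matching file first, then delete it
for f in *.dat
do
    cp /dev/null "$f"   # free the file's space so the deletion can proceed
    rm "$f"
done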

Working with High Performance Computing (HPC) servers, I often need to submit sequential jobs, that is, jobs with dependencies: the newly submitted job stays in the queue and waits for the running job to finish. I use the following command:

qsub -W depend=afterany:JOB_ID qsub_file

For example, if the running job's ID is 4321234 and you would like to submit myrun.qsub, the command looks like this:

qsub -W depend=afterany:4321234 myrun.qsub
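
Since qsub prints the identifier of the job it just submitted, you do not have to copy the ID by hand; you can capture it in a shell variable. A minimal sketch, with myrun1.qsub and myrun2.qsub as placeholder file names:

# Capture the job ID that qsub prints, then use it for the dependency
job_id=$(qsub myrun1.qsub)
qsub -W depend=afterany:$job_id myrun2.qsub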

But sometimes you have tens of jobs you want to submit all at once. Here is a bash script you can use to submit a chain of jobs:

# File name : run_100jobs_for_me.sh

# submit the first job and capture the job ID that qsub prints
job=$(qsub myrun1.qsub)

# submit jobs 2 through 100, each depending on the previous one
for i in {2..100}
do
    job_next=$(qsub -W depend=afterany:$job myrun$i.qsub)
    job=$job_next
done

You can run the script above with the following command at the Linux command line (bash is needed for the {2..100} range):

bash run_100jobs_for_me.sh
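
To check that the chain was set up as intended, you can list your jobs with qstat; jobs waiting on a dependency normally sit in a hold state until the job they depend on finishes:

# Show the status of all your jobs (held jobs are usually marked H)
qstat -u $USER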

And one last point! You can choose the condition under which the next job will run simply by replacing afterany in the commands above. These are the options:

# Job is scheduled if job JOB_ID exits without errors, i.e. completes successfully.
afterok:JOB_ID

# Job is scheduled if job JOB_ID exits with errors.
afternotok:JOB_ID

# Job is scheduled once job JOB_ID exits, with or without errors.
afterany:JOB_ID
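
For instance, if each stage of a pipeline should run only when the previous stage succeeded, replace afterany with afterok. A minimal sketch with placeholder .qsub file names:

# Each stage starts only if the previous one completed successfully
prep=$(qsub preprocess.qsub)
main=$(qsub -W depend=afterok:$prep main_run.qsub)
qsub -W depend=afterok:$main postprocess.qsub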