The amount of time to schedule is problem dependent. The sbatch flag -t 240 used in the examples below requests 240 minutes of scheduled time and was supplied for illustration only; your own job may need significantly less time, or potentially more.
Note: the htc partition has a maximum walltime of 4 hours (240 minutes). To request a walltime beyond 4 hours, add the -p general flag, for example:
sbatch -p general -t 300
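The same time and partition options can also be placed in a job script instead of on the command line. The script below is a minimal sketch rather than part of the original examples; the file name tarball.sbatch and the paths to archive are placeholders:

#!/bin/bash
#SBATCH -p general    # partition; omit to use the default htc partition (4 hour limit)
#SBATCH -t 300        # scheduled time in minutes
#SBATCH -c 1          # one core is sufficient for a single tar process
tar czvf mytarball.tgz paths/to/be/tarred/

Submit the script with: sbatch tarball.sbatch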
Overview
How to tarball data using sbatch, with and without compression depending on the data type.
Without compression (good for binary data)
sbatch -t 240 --wrap="tar cvf mytarball.tar paths/to/be/tarred/"
The above command submits a job to a compute node in the htc partition, requesting 1 core for 4 hours (240 minutes). The job creates the uncompressed archive mytarball.tar containing the contents of the specified paths.
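Once the job finishes, you can confirm what was archived. The listing command below is a standard tar invocation and not part of the original examples; mytarball.tar is the archive name used above:

tar tvf mytarball.tar

Any output or errors from tar are written to the job's sbatch output file (by default slurm-<jobid>.out in the directory the job was submitted from).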
With compression (good for ASCII data)
sbatch -t 240 --wrap="tar czvf mytarball.tgz paths/to/be/tarred/"
The above command submits a job to a compute node in the htc partition, requesting 1 core for 4 hours (240 minutes). The job creates the gzip-compressed archive mytarball.tgz containing the contents of the specified paths.
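When the data is needed again, the archive can be unpacked with a standard tar extraction. This sketch is not part of the original examples; the destination path is a placeholder, and for large archives the extraction can itself be submitted through sbatch in the same way as above:

tar xzvf mytarball.tgz -C /path/to/destination/
sbatch -t 240 --wrap="tar xzvf mytarball.tgz -C /path/to/destination/"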
Additional Help
If you require further assistance on this topic, please don't hesitate to contact the Research Computing Team. To create a support ticket, kindly send an email to rtshelp@asu.edu. For quick inquiries, you're welcome to reach out via our #rc-support Slack Channel or attend our office hours for live assistance.
We also offer a series of workshops. More information here: Educational Opportunities and Workshops