
Re: Looping Shell Scripts and System Load



On Wed, Jun 24, 2020 at 12:19:30PM -0500, Martin McCormick wrote:
> #!/bin/sh

Why?  Use bash.

> unarchive ()  {
>  unzip $1

Quotes.  <https://mywiki.wooledge.org/Quotes>
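
With quoting, the function would look something like:

unarchive () {
  unzip "$1"
}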

> MEDIADIR=`pwd`

Don't use all-caps variable names.  Those are conventionally reserved for
environment variables and the shell's own special variables, so your own
names can collide with them.

Don't use backticks.  Use $() for command substitution.

Don't use $(pwd) to get the current directory.  It's in the PWD variable
already.
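
Putting those three points together, that line would simply become:

mediadir=$PWD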

> mountpoint /mags >/dev/null  ||mount /mags
> mountpoint /mags >/dev/null || exit 1
> cd /mags

Check the result of cd.  Exit if it fails.  cd /mags || exit 1
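
So that whole block might read:

mountpoint /mags >/dev/null || mount /mags
mountpoint /mags >/dev/null || exit 1
cd /mags || exit 1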

> #rm -r -f *
>      for MEDIAFILE in `ls $MEDIADIR/*`; do

Do not use ls.  <https://mywiki.wooledge.org/ParsingLs>

Quotes again.  <https://mywiki.wooledge.org/Quotes>

What you want is:   for mediafile in "$mediadir"/*; do

> dirname=`basename $MEDIAFILE`
> mkdir $dirname
> cd $dirname

Quotes, quotes, quotes.  <https://mywiki.wooledge.org/Quotes>

Always check the result of a cd.  cd "$dirname" || exit 1
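
With quotes, $(), a lowercase name, and the cd check, something like:

dirname=$(basename "$mediafile")
mkdir "$dirname"
cd "$dirname" || exit 1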

> unarchive $MEDIAFILE &

Quotes!  <https://mywiki.wooledge.org/Quotes>
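
I.e.:

unarchive "$mediafile" &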

> 	If there are 3 zipped files, it's probably going to be ok
> and start 3 unzip processes.  This directory had 13 zip files and
> the first 2 or 3 roared to life and then things slowed down as
> they all tried to run.

<https://mywiki.wooledge.org/ProcessManagement> has some examples
for writing "run n jobs at a time".  We found some newer ways as well,
and those haven't all made it to the wiki yet.

One of the better ones is:

13:40 =greybot> Run N processes in parallel (bash 4.3): i=0 n=5; for elem in 
                "${array[@]}"; do if (( i++ >= n )); then wait -n; fi; my_job 
                "$elem" & done; wait

In your script, that would be something like:

#!/bin/bash
# Requires bash 4.3 or higher.

# set mediadir, cd, mount, and so on

i=0 n=3
for f in "$mediadir"/*; do
  if ((i++ >= n)); then wait -n; fi
  unarchive "$f" &
done
wait


If you have to target older versions of bash, see the ProcessManagement
page on the wiki for alternatives.
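
Putting all of the pieces together, the whole thing might look something
like this (untested sketch; the cd back out of the unpack directory is my
guess, since that part of your script wasn't quoted above):

#!/bin/bash
# Requires bash 4.3 or higher for wait -n.

unarchive () {
  unzip "$1"
}

mediadir=$PWD

mountpoint /mags >/dev/null || mount /mags
mountpoint /mags >/dev/null || exit 1
cd /mags || exit 1

i=0 n=3
for mediafile in "$mediadir"/*; do
  if ((i++ >= n)); then wait -n; fi   # at most n unzips at once
  dirname=$(basename "$mediafile")
  mkdir "$dirname"
  cd "$dirname" || exit 1
  unarchive "$mediafile" &
  cd .. || exit 1   # back to /mags for the next file (assumption)
done
wait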

