The link refers to some code like the one I am working on right now:
$ for i in $(seq -w $n); do …; done
And the person replying in that mailing-list thread suggests it would be better written this way:
$ seq -w $n | while read i; do …; done
His reason:
to prevent all of seq’s output having to be buffered at once.
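The contrast is easy to see side by side. A minimal sketch (the loop bodies here just print, as stand-ins for the elided `…`):

```shell
#!/bin/sh
# Command substitution: the shell collects ALL of seq's output first,
# then splits it into words before the for-loop even starts.
for i in $(seq -w 5); do
    printf 'for:   %s\n' "$i"
done

# Pipeline: the while-loop reads one line at a time as seq produces it,
# so the output never has to sit in the command line all at once.
seq -w 5 | while read i; do
    printf 'while: %s\n' "$i"
done
```

For five items the difference is academic, of course; it starts to matter when the producer emits thousands of lines.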
I actually second that, and it reminds me of a rather snobby programmer I met last year. His code looked just like this, and I was close to heart failure, given how many times I had struggled with such code over the last 30 years:
$ for i in $(/bin/ls); do …; done
People even do it with “find”, and then they wonder what “command line too long” is supposed to mean.
Why not simply feed it into a “… | while read …”?
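A sketch of that pattern, run against the current directory here purely for illustration (the `printf` stands in for whatever the loop body would actually do):

```shell
#!/bin/sh
# Risky on big directories: the whole file list gets expanded onto one
# command line, which can hit the kernel's argument-length limit -- and
# it also breaks on names containing whitespace:
#   for f in $(find . -type f); do ...; done

# Safer: stream the names one line at a time instead of expanding them.
# (-r keeps read from mangling backslashes in file names.)
find . -type f | while read -r f; do
    printf 'seen: %s\n' "$f"
done
```

This still trips over the rare file name containing a newline, which is where the NUL-delimited tools come in.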
And if that’s not good enough, have you ever come across “xargs”, and especially “xargs -0”?
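For completeness, a small sketch of the NUL-delimited variant (the `ls -ld` action is just a placeholder):

```shell
#!/bin/sh
# With -print0/-0, file names are delimited by NUL bytes, so even names
# containing spaces or newlines survive intact, and xargs packs them
# into as many command lines as needed -- never one that is too long.
find . -type f -print0 | xargs -0 ls -ld
```

`xargs` also avoids spawning one process per file, batching many names into each invocation of the command.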
He said he had found that all the read-s would get started in their own process (WTF? what a crappy, crappy thought!), and he allowed no doubting at all. The guy (“JF”) was the customer’s guru shell and Perl script programmer, so why struggle over this issue? The lifetime I spent there wasn’t badly paid, but that snobbish programmer was of a rather discouraging kind. It cost me quite some productivity in our intersecting area, but then again: energy not spent in one area is usually available in other areas, which isn’t that bad after all.
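For what it’s worth, there is a small kernel of truth buried somewhere under that claim, though not one read per process: in most shells (bash, dash), the while-loop on the right-hand side of a pipe runs in a single subshell, so variable assignments made inside it do not survive the loop. A minimal sketch:

```shell
#!/bin/sh
count=0
seq 3 | while read i; do
    count=$i    # assignment happens in a subshell in most shells
done
# prints 0 in bash and dash (ksh is a notable exception and prints 3)
echo "after pipe loop: count=$count"

count=0
for i in $(seq 3); do
    count=$i    # assignment happens in the current shell
done
# prints 3 everywhere
echo "after for loop:  count=$count"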
Having said all that, I still like my current code, as I think it reads well, and twelve items will not really exhaust the command line buffer:
$ for i in $(seq -w 1 12); do …; done
I am creating placeholder files and directories for 2015 this way. This has actually long been in place, but coreutils’ seq is buggy on one of my NASes, and I had to find a way around the bug; I finally resorted to BusyBox’s seq, as the NAS runs BusyBox anyway. So now I am calling /usr/bin/seq instead of simply calling “seq” through PATH – there are worse things than that.
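For the record, a sketch of what such a placeholder setup can look like (the `2015/MM` directory layout is my assumption for illustration, not taken from the actual NAS script):

```shell
#!/bin/sh
# Create one zero-padded placeholder directory per month of 2015.
# seq -w pads all numbers to equal width, so we get 01, 02, ..., 12.
for i in $(seq -w 1 12); do
    mkdir -p "2015/$i"
done

ls 2015 | wc -l    # should count 12 entries, one per month
```

On the NAS itself, `seq` above would be spelled `/usr/bin/seq` to pin it to the BusyBox applet.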