Run A Command Line Process On All Files In A Directory And Collect The Output Into A New File
The Script
This was a test script for setting up a larger process. It reads all the files in a directory, passes each one through a secondary command, then collects all of the output into a single file:
#!/bin/bash
OUTPATH="results file.txt"

# Clear the output file, then append each lowercased file to it.
# (input/ here stands in for whatever source directory you use.)
: > "$OUTPATH"
for FILE in input/*
do
    cat "$FILE" | tr '[:upper:]' '[:lower:]' >> "$OUTPATH"
done
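The same pattern can be exercised end to end with throwaway files. This is just a sketch: the `demo_in` directory, the file names, and the `demo results.txt` output name are made up for illustration, not part of the original script.

```shell
#!/bin/bash
# Build a scratch input directory (names are made up for this demo)
mkdir -p demo_in
printf 'ALPHA\n' > demo_in/a.txt
printf 'Beta\n'  > demo_in/b.txt

OUT="demo results.txt"
: > "$OUT"                    # truncate/create the collected-output file
for FILE in demo_in/*
do
    cat "$FILE" | tr '[:upper:]' '[:lower:]' >> "$OUT"
done
cat "$OUT"                    # prints "alpha" then "beta", one per line
```

Because the glob expands in sorted order, the output file ends up with `a.txt`'s contents before `b.txt`'s.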
Details
- Using quotes around `"$OUTPATH"` allows spaces in the path
- The first `: > "$OUTPATH"` line clears out the file. From the bash manual, the `:` builtin does "nothing beyond expanding arguments and performing redirections. The return status is zero." When you point that at a file with `>`, it clears it out
- This version uses a `for` loop. I've seen other stuff that uses `file` but haven't explored the differences yet
- The meat of the script runs `cat` on each file to output its contents, pipes that output to `tr` with `|`, then appends the result to the output file via `>>`
- The `tr` command changes all uppercase letters to lowercase
- Any command that works in a pipeline can be used in place of `tr`. It's simply what's being used for the illustration
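To show that last point, here's a quick sketch swapping other filters into the same pipeline shape (the `demo input.txt` file and its contents are made up for illustration):

```shell
printf 'one\ntwo\nthree\n' > "demo input.txt"

# Same shape as the script's pipeline, with wc -l in place of tr:
cat "demo input.txt" | wc -l       # counts lines instead of lowercasing

# Or grep, keeping only the lines that contain an "o":
cat "demo input.txt" | grep 'o'    # prints "one" then "two"
```

Anything that reads stdin and writes stdout slots in the same way, and the `>> "$OUTPATH"` append at the end of the pipeline stays unchanged.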
Example
Given these three input files:
The script will produce: