awk usage: awk 'pattern {action}' file
Variable   Meaning
ARGC       number of command-line arguments
ARGV       array of command-line arguments
FILENAME   name of the current input file
FNR        record number in the current file
FS         input field separator (default: space)
RS         input record separator
NF         number of fields in the current record
NR         number of records read so far
OFS        output field separator
ORS        output record separator
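A quick way to see several of these variables at work (a minimal sketch; demo.txt and its contents are made up for illustration):

```shell
# Print record number, field count, file name, and the whole record
printf 'alpha beta\ngamma delta epsilon\n' > demo.txt
awk '{print NR, NF, FILENAME, $0}' demo.txt
# Change the output separators: OFS joins fields, ORS terminates records
awk 'BEGIN{OFS="-"; ORS=";"} {print $1,$2}' demo.txt
rm demo.txt
```

The second command prints `alpha-beta;gamma-delta;`, showing that OFS and ORS only affect output, not how the input is split.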
1. awk '/101/' file Print the lines of file that contain 101.
awk '/101/,/105/' file Print from the first line matching /101/ through the next line matching /105/.
awk '$1 == 5' file Print lines whose first field equals 5 (numeric comparison).
awk '$1 == "CT"' file Print lines whose first field is the string CT; note that string comparison requires double quotes.
awk '$1 * $2 > 100' file Print lines where the product of the first two fields exceeds 100.
awk '$2 > 5 && $2 <= 15' file Print lines where the second field is greater than 5 and at most 15.
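The selection patterns above can be tried on a few made-up sample lines (demo.txt and its values are invented for illustration):

```shell
printf '101 CT 20\n102 NY 3\n5 CT 30\n' > demo.txt
awk '/101/' demo.txt                 # lines containing 101 anywhere
awk '$1 == 5' demo.txt               # numeric comparison on field 1
awk '$2 == "CT"' demo.txt            # string comparison needs double quotes
awk '$1 * $3 > 100' demo.txt         # arithmetic inside the pattern
rm demo.txt
```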
2. awk '{print NR,NF,$1,$NF}' file Print, for each line of file, the current record number, the number of fields, and the first and last fields.
awk '/101/ {print $1,$2 + 10}' file Print the first field, and the second field plus 10, for each matching line.
awk '/101/ {print $1$2}' file (equivalently, print $1 $2) Print the first and second fields of each matching line concatenated, with no separator between them.
3. df | awk '$4 > 1000000' Take input from a pipe: here, print the df lines whose 4th field exceeds 1000000.
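Because df output differs from machine to machine, the same idea can be demonstrated by piping deterministic sample data (the values are made up) into the same condition:

```shell
# Only the line whose 4th field exceeds 1000000 passes the filter
printf 'fs1 100 50 2000000\nfs2 100 50 500\n' | awk '$4 > 1000000'
```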
4. awk -F "|" '{print $1}' file Split fields on the delimiter | instead of whitespace.
awk 'BEGIN { FS="[ :\t|]" }
{print $1,$2,$3}' file Change the input field separator by setting it inside a BEGIN block (FS="[ :\t|]").
Sep="|"
awk -F "$Sep" '{print $1}' file Use the value of the shell variable Sep as the separator (quote it so the shell passes it intact).
awk -F '[ :\t|]' '{print $1}' file Use a regular expression as the delimiter; here space, :, TAB, and | all act as delimiters at the same time.
awk -F '[][]' '{print $1}' file Use a regular expression as the delimiter; here [ and ] are the delimiters (note that ] must come first inside the brackets).
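Both regular-expression separators can be checked on one-line samples (the input strings are invented for illustration):

```shell
# Space, colon, TAB, and | all split the line into fields a b c d e
printf 'a:b\tc|d e\n' | awk -F '[ :\t|]' '{print $1, $3, $5}'
# [ and ] as delimiters: fields are x, one, y, two, z
printf 'x[one]y[two]z\n' | awk -F '[][]' '{print $2, $4}'
```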
5. awk -f awkfile file Run the awk program stored in the file awkfile.
cat awkfile
/101/{print "match:", $0}
8. awk 'BEGIN { max=100; print "max=" max}
{max=($1>max ? $1 : max); print $1, "Now max is " max}' file Find the maximum of the first field; the BEGIN block runs before any input line is processed.
(expression1 ? expression2 : expression3 is equivalent to:
if (expression1)
expression2
else
expression3)
awk '{print ($1>4 ? "high "$1 : "low "$1)}' file
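The conditional expression above can be run on a few made-up numbers to see both branches taken:

```shell
# Values above 4 take the "high" branch, the rest the "low" branch
printf '2\n7\n4\n9\n' | awk '{print ($1>4 ? "high "$1 : "low "$1)}'
```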
9. awk '$1 * $2 > 100 {print $1}' file Print the first field of each line (record) where the product of the first two fields exceeds 100.
10. awk '$1 == "Chi" {$3 = "China"; print}' file After finding a matching line, replace the third field before printing the line (record). String constants need double quotes; single quotes would conflict with the shell's quoting of the program.
awk '{$7 %= 3; print $7}' file Take the 7th field modulo 3, assign the remainder back to the 7th field, then print it.
awk '/tom/ {wage=$2+$3; print wage}' file After finding a matching line, assign the sum of fields 2 and 3 to the variable wage, then print it. (Use print here; printf would treat wage as a format string.)
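Both one-liners can be exercised with invented sample lines (the names and numbers are made up):

```shell
# 10 % 3 = 1 and 8 % 3 = 2, assigned back to field 7 and printed
printf 'a b c d e f 10\na b c d e f 8\n' | awk '{$7 %= 3; print $7}'
# Only the tom line matches; wage = 100 + 20
printf 'tom 100 20\nsue 200 30\n' | awk '/tom/ {wage=$2+$3; print wage}'
```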
12. awk '/tom/ {count++}
END {print "tom appears " count " times"}' file Count the matching lines; the END block runs after all input has been read.
13. awk '{gsub(/\$/,""); gsub(/,/,""); cost+=$4}
END {print "total cost:", cost}' file Strip the $ and , characters from each record, then sum the 4th field. Sample input:
1 2 3 $1,200.00
1 2 3 $2,300.00
1 2 3 $4,000.00
awk '{gsub(/\$/,""); gsub(/,/,"");
if ($4>1000 && $4<2000) c1+=$4; else if ($4>2000 && $4<3000) c2+=$4;
else if ($4>3000 && $4<4000) c3+=$4;
else c4+=$4; }
END {printf "c1=[%d];c2=[%d];c3=[%d];c4=[%d]\n",c1,c2,c3,c4}' file
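With the three sample lines shown above, the stripped 4th fields are 1200, 2300, and 4000, so the summing version totals 7500:

```shell
# gsub(/\$/,"") removes dollar signs, gsub(/,/,"") removes commas,
# so $4 becomes a plain number that can be summed
printf '1 2 3 $1,200.00\n1 2 3 $2,300.00\n1 2 3 $4,000.00\n' |
awk '{gsub(/\$/,""); gsub(/,/,""); cost+=$4} END {print cost}'
```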
Chained conditions are built with if and else if.
awk '{gsub(/\$/,""); gsub(/,/,"");
if ($4>3000 && $4<4000) exit;
else c4+=$4; }
END {printf "c1=[%d];c2=[%d];c3=[%d];c4=[%d]\n",c1,c2,c3,c4}' file
exit stops reading input when the condition holds, but the END block still runs.
awk '{gsub(/\$/,""); gsub(/,/,"");
if ($4>3000) next;
else c4+=$4; }
END {printf "c4=[%d]\n",c4}' file
next skips the rest of the program for the current line and moves on to the next line.
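The effect of next can be seen with made-up sample lines: the 3500 line is skipped, so only 500 + 1000 is accumulated:

```shell
# next abandons the current record before c4+=$4 runs for it
printf 'a b c 500\na b c 3500\na b c 1000\n' |
awk '{if ($4>3000) next; c4+=$4} END {printf "c4=[%d]\n", c4}'
```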
14. awk '{print FILENAME,$0}' file1 file2 file3 > fileall Write the contents of file1, file2, and file3 to fileall, prefixing each line with the name of the file it came from.
15. awk '{print substr($0, index($0," ")+1) > $1}' fileall Split the merged file back into 3 files that match the originals (the first field, the file name, selects the output file).
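The merge-then-split round trip can be checked end to end (f1, f2, and their contents are invented; this assumes the reconstructed split command above):

```shell
printf 'one\n' > f1; printf 'two\n' > f2
# Merge with file-name prefixes, then strip the prefix and redirect
# each line back into the file named by its first field
awk '{print FILENAME, $0}' f1 f2 > fileall
awk '{print substr($0, index($0," ")+1) > $1}' fileall
cat f1 f2
rm f1 f2 fileall
```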
16. awk 'BEGIN {"date" | getline d; print d}' Pipe the output of the date command to getline, assign it to the variable d, then print it.
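Any command's output can be read this way; substituting a deterministic command for date makes the mechanism easy to verify:

```shell
# getline reads one line of the command's output into d
awk 'BEGIN {"echo hello" | getline d; print d}'
```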
17. awk 'BEGIN {system("echo \"Input your name: \\c\""); getline d; print "\nYour name is", d, "\b!\n"}'
Read a name interactively with getline, then display it.
awk 'BEGIN {FS=":"; while((getline < "/etc/passwd") > 0) { if($1~"050[0-9]_") print $1}}'
Print the user names in /etc/passwd that match the pattern 050x_ (x is a digit).
18. awk '{i=1; while(i<=NF) {print NF, $i; i++}}' file Loop over the fields of each record with while.
awk '{ for(i=1;i<=NF;i++)
{ if(i==NF) { printf "%s\n",$i }
else { printf "%s/",$i } }}' Join the fields with /, displaying them as the full path of a file.
Use for and if to display the dates of the year:
awk 'BEGIN {
for(j=1;j<=12;j++)
{ flag=0;
printf "\n%d month\n",j;
for(i=1;i<=31;i++)
{ if ((j==2 && i>28) || ((j==4||j==6||j==9||j==11) && i>30)) flag=1;
if (flag==0) {printf "%02d%02d ",j,i}
}
}
}'
Flag=abcd
awk '{print "'$Flag'"}' prints abcd for each input line (the shell expands $Flag outside the single quotes before awk runs)
awk '{print "$Flag"}' prints the literal string $Flag (awk does not expand shell variables)
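The quoting difference can be demonstrated without an input file by using a BEGIN block (a variation on the example above):

```shell
Flag=abcd
# The single quotes end before $Flag, so the shell substitutes its value
awk 'BEGIN {print "'$Flag'"}'    # prints abcd
# Here $Flag stays inside awk's string literal untouched
awk 'BEGIN {print "$Flag"}'      # prints $Flag
```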
The examples above are adapted from chinaunix; what follows is my own summary:
Sum:
$ awk 'BEGIN{total=0} {total+=$4} END{print total}' a.txt -----Sum the fourth field of the file a.txt.
$ awk '/^(no|so)/' test-----Print all lines starting with no or so.
$ awk '/^[ns]/{print $1}' test-----If the record starts with n or s, print its first field.
$ awk '$1 ~ /[0-9][0-9]$/{print $1}' test-----If the first field ends in two digits, print it.
$ awk '$1 == 100 || $2 < 50' test-----If the first field is equal to 100 or the second field is less than 50, print this line.
$ awk '$1 != 10' test-----If the first field is not equal to 10, print the line.
$ awk '/test/{print $1 + 10}' test-----If the record contains the regular expression test, add 10 to the first field and print it.
$ awk '{print ($1 > 5 ? "ok "$1 : "error"$1)}' test-----If the first field is greater than 5, print the expression after the question mark; otherwise print the expression after the colon.
$ awk '/^root/,/^mysql/' test-----Print every record in the range from a record matching ^root to the next record matching ^mysql. If another record starting with root appears later, printing resumes until the next record starting with mysql, or to the end of the file.
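The range pattern can be tried on made-up passwd-style lines; the range opens at the root line and closes at the mysql line, so nobody is excluded:

```shell
# Prints root, bin, and mysql; the nobody line is outside the range
printf 'root:x:0\nbin:x:1\nmysql:x:27\nnobody:x:99\n' |
awk '/^root/,/^mysql/'
```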