How to write an awk command to group line data and dump to file
-
21-12-2019
Question
A data file consists of multiple lines of data. A quick look at the file:
./gc_string/datadata.distr 10 1273377106 2
./gc_string/datadata.distr 10 -540812264 2
./gc_string/datadata.distr 10 318171673 2
./app_fib/datadata.distr 4 -1593911137 3
./app_fib/datadata.distr 4 -1345649758 3
./app_fib/datadata.distr 5 -1545930833 3
./app_fib/datadata.distr 5 1916879527 3
./app_fib/datadata.distr 5 609112984 3
./app_fib/datadata.distr 6 111417553 3
./app_fib/datadata.distr 6 -1545460791 3
.........
What I want to do is group the lines and write them to separate files according to the first column. The rule is: lines that share the same first-column value have all of their columns except the first written to the same file, and the file name is derived from the first-column value. For example, two files would be generated from the data above:
gc_string.txt
-------------------
10 1273377106 2
10 -540812264 2
10 318171673 2
app_fib.txt
-------------------
4 -1593911137 3
4 -1345649758 3
5 -1545930833 3
5 1916879527 3
..
I think awk can perform this task. I tried several ways but failed. Could anyone give me tips? Thanks
Solution
awk '{split($1, a, "/"); print $2, $3, $4 > (a[2] ".txt")}' datafile
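As a quick sanity check, here is a self-contained sketch that builds a small sample from the data above and runs the same logic. `split($1, a, "/")` breaks `./gc_string/datadata.distr` into `a[1]="."`, `a[2]="gc_string"`, `a[3]="datadata.distr"`; assigning the output name to a variable first also sidesteps the parenthesization quirk some awks have with a concatenated filename after `>`:

```shell
# Build a small sample input (subset of the data shown in the question).
cat > datafile <<'EOF'
./gc_string/datadata.distr 10 1273377106 2
./app_fib/datadata.distr 4 -1593911137 3
./app_fib/datadata.distr 5 -1545930833 3
EOF

awk '{
    split($1, a, "/")        # a[2] is the directory name, e.g. "gc_string"
    out = a[2] ".txt"
    # awk truncates "out" on the first write, then appends on later writes,
    # so repeated keys accumulate in the same file.
    print $2, $3, $4 > out
}' datafile
```

This produces `gc_string.txt` with one line and `app_fib.txt` with two. If the real input contains a very large number of distinct first-column values, note that awk keeps every output file open; in that case you would `close()` each file when its group ends (straightforward if the input is already grouped by the first column, as in the sample).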
Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow