Reputation: 6731
Without using sed or awk, only cut, how do I get the last field when the number of fields is unknown or changes with every line?
Upvotes: 569
Views: 495548
Reputation: 13
There is a nice way to print column ranges, including ranges that end at the last or next-to-last column, with a fairly simple set of commands (including cut).
If you have a file table1 with 6 columns delimited by any number of spaces and tabs, then the following pipeline will print columns 2 through 6:
cat table1 | tr -s ' ' '\t' | cut -f 2-`awk 'NR==1{print NF}' table1`
And this one will print columns 3 through 5:
cat table1 | tr -s ' ' '\t' | cut -f 3-$((`awk 'NR==1{print NF}' table1` - 1))
Explanation:
tr -s ' ' '\t' - replaces all spaces with tabs, then squeezes runs of tabs into a single tab
`awk 'NR==1{print NF}' table1` - prints the number of columns in the first line of table1 (last column number = 6)
$((`awk 'NR==1{print NF}' table1` - 1)) - prints (last_column_number - 1) = 6 - 1 = 5
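As a quick illustration, suppose table1 contains this single space-delimited line (a made-up example):
a b c d e f
The first pipeline would then print b through f (tab-separated), and the second would print c through e.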
Upvotes: 0
Reputation: 4682
Without awk? But it's so simple with awk:
echo 'maps.google.com' | awk -F. '{print $NF}'
awk is a far more powerful tool to have in your pocket.
-F sets the field separator
NF is the number of fields, so $NF refers to the last one
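For instance, the same idea with a comma as the field separator prints three:
echo 'one,two,three' | awk -F, '{print $NF}'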
Upvotes: 130
Reputation: 119
It is better to use awk when working with tabular data. If it can be achieved with awk, why not use it? I suggest you do not waste your precious time and just use a handful of commands to get the job done.
Example:
# $NF refers to the last column in awk
ll | awk '{print $NF}'
Upvotes: 7
Reputation: 1088
choose -1
choose supports negative indexing (the syntax is similar to Python's slices).
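For example, a minimal illustration, assuming choose is installed and using its default whitespace delimiter:
echo 'maps google com' | choose -1
> com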
Upvotes: 0
Reputation: 22989
It is not possible using just cut
. Here is a way using grep
:
grep -o '[^,]*$'
Replace the comma with whatever delimiter your data uses.
Explanation:
-o (--only-matching) only outputs the part of the input that matches the pattern (the default is to print the entire line if it contains a match).
[^,] is a character class that matches any character other than a comma.
* matches the preceding pattern zero or more times, so [^,]* matches zero or more non-comma characters.
$ matches the end of the string.
Put together, the pattern matches zero or more non-comma characters at the end of the string; when several matches are possible, grep prefers the one that starts earliest, so the entire last field is matched.
Full example:
If we have a file called data.csv containing
one,two,three
foo,bar
then grep -o '[^,]*$' < data.csv
will output
three
bar
Upvotes: 155
Reputation: 31
An alternative using perl would be:
perl -pe 's/(.*) (.*)$/$2/' file
where you would change the delimiter in the pattern (a space in the example above) to whichever delimiter file uses
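For example, a sketch of the same substitution for comma-delimited input (file is just a placeholder for your input file):
perl -pe 's/(.*),(.*)$/$2/' file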
Upvotes: 3
Reputation: 37129
You could try something like this:
echo 'maps.google.com' | rev | cut -d'.' -f 1 | rev
Explanation
rev reverses "maps.google.com" to be moc.elgoog.spam
cut uses dot (i.e. '.') as the delimiter, and chooses the first field, which is moc
lastly, the second rev reverses it again to give com
Upvotes: 1194
Reputation: 464
Adding an approach to this old question just for the fun of it:
$ cat input.file # file containing input that needs to be processed
a;b;c;d;e
1;2;3;4;5
no delimiter here
124;adsf;15454
foo;bar;is;null;info
$ cat tmp.sh # showing off the script to do the job
#!/bin/bash
delim=';'
while read -r line; do
    while [[ "$line" =~ "$delim" ]]; do
        line=$(cut -d"$delim" -f 2- <<<"$line")
    done
    echo "$line"
done < input.file
$ ./tmp.sh # output of above script/processed input file
e
5
no delimiter here
15454
info
Besides bash, only cut is used. Well, and echo, I guess.
Upvotes: 0
Reputation: 10260
I realized that if we just ensure a trailing delimiter exists, it works. In my case I have comma and whitespace delimiters, so I add a space at the end:
$ ans="a, b"
$ ans+=" "; echo ${ans} | tr ',' ' ' | tr -s ' ' | cut -d' ' -f2
b
Upvotes: -2
Reputation: 3350
If your input string doesn't contain forward slashes then you can use basename
and a subshell:
$ basename "$(echo 'maps.google.com' | tr '.' '/')"
This doesn't use sed or awk, but it also doesn't use cut either, so I'm not quite sure if it qualifies as an answer to the question as it's worded.
This doesn't work well when processing input strings that can contain forward slashes. A workaround for that situation is to replace the forward slash with some other character that you know isn't part of a valid input string. For example, the pipe (|) character is also not allowed in filenames, so this would work:
$ basename "$(echo 'maps.google.com/some/url/things' | tr '/' '|' | tr '.' '/')" | tr '|' '/'
Upvotes: 3
Reputation: 69
The following implements a friend's suggestion:
#!/bin/bash
rcut(){
    nu="$( echo "$1" | cut -d"$DELIM" -f 2- )"
    if [ "$nu" != "$1" ]
    then
        rcut "$nu"
    else
        echo "$nu"
    fi
}
$ export DELIM=.
$ rcut a.b.c.d
d
Upvotes: 2
Reputation: 129
This is the only solution possible using nothing but cut:
echo "s.t.r.i.n.g." | cut -d'.' -f2- [repeat_following_part_forever_or_until_out_of_memory:] | cut -d'.' -f2-
Using this solution, the number of fields can indeed be unknown and can vary from line to line. However, since a line must not exceed LINE_MAX characters (including the newline character), the number of fields is bounded as well, so a finite chain of cut commands is always enough in practice.
Yes, a very silly solution, but I think it is the only one that meets the criteria.
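For instance, assuming no line has more than four fields, three chained cuts are enough:
echo 'a.b.c.d' | cut -d'.' -f2- | cut -d'.' -f2- | cut -d'.' -f2-
> d
Once only one field remains, further cuts leave the line unchanged (cut prints lines without a delimiter in full unless -s is given), so extra repetitions are harmless.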
Upvotes: 12
Reputation: 509
There are multiple ways. You may use this too.
echo "Your string here"| tr ' ' '\n' | tail -n1
> here
Obviously, the blank space passed to the tr command should be replaced with the delimiter you need.
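For example, with a dot as the delimiter:
echo 'maps.google.com' | tr '.' '\n' | tail -n1
> com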
Upvotes: 27
Reputation: 9
If you have a file named filelist.txt that is a list of paths such as the following:
c:/dir1/dir2/file1.h
c:/dir1/dir2/dir3/file2.h
then you can do this:
rev filelist.txt | cut -d"/" -f1 | rev
Upvotes: 0
Reputation: 295835
Use a parameter expansion. This is much more efficient than any kind of external command, cut (or grep) included.
data=foo,bar,baz,qux
last=${data##*,}
See BashFAQ #100 for an introduction to native string manipulation in bash.
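If the input is a stream of lines rather than a single variable, the same expansion can be applied per line in pure bash; a minimal sketch, assuming a comma-delimited file named data.csv:
while IFS= read -r line; do
    printf '%s\n' "${line##*,}"   # everything after the last comma
done < data.csv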
Upvotes: 193