Random line using sed


I want to select a random line with sed. I know shuf -n and sort -R | head -n do the job, but shuf requires installing coreutils, and the sort solution isn't optimal on large data.

Here is what I tested:

echo "$var" | shuf -n1

Which gives the optimal solution, but I'm worried about portability; that's why I want to try it with sed.

var="Hi
i am a student
learning scripts"

output:
i am a student

output:
Hi

It must be random.

6 Answers

3
William Pursell (Best Solution)

It depends greatly on what you want your pseudo-random probability distribution to look like. (Don't try for random, be content with pseudo-random. If you do manage to generate a truly random value, go collect your Nobel Prize.) If you just want a uniform distribution (e.g., each line has an equal probability of being selected), then you'll need to know a priori how many lines are in the file. Getting that distribution is not quite as easy as allowing the earlier lines in the file to be slightly more likely to be selected, but it is still easy, so we'll do it. Assuming the number of lines is less than 32769 (since $RANDOM only takes 32768 distinct values), you can simply do:

N=$(wc -l < input-file)
sed -n -e $((RANDOM % N + 1))p input-file
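
If the range of $RANDOM is a concern (it only yields 32768 distinct values, and the modulo is exactly uniform only when N divides 32768), a rough sketch of one variation is to let awk draw the line number and keep sed for the printing, since sed is what the question asks for:

N=$(wc -l < input-file)
# srand() with no argument seeds from the time in seconds, so two runs
# within the same second will pick the same line
line=$(awk -v n="$N" 'BEGIN { srand(); print int(rand()*n) + 1 }')
sed -n "${line}p" input-file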

-- edit --

After thinking about it for a bit, I realize you don't need to know the number of lines, so you don't need to read the data twice. I haven't done a rigorous analysis, but I believe that the following gives a uniform distribution:

awk 'BEGIN{srand()} rand() < 1/NR { out=$0 } END { print out }' input-file
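
A rough argument for why this is uniform: rand() < 1 always holds, so line 1 is always taken at first; line i then replaces the current pick with probability 1/i and survives each later line j with probability (j-1)/j, so the probability that line i is the final pick is (1/i) * (i/(i+1)) * ... * ((N-1)/N) = 1/N.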

-- edit -- Ed Morton suggests in the comments that we should be able to invoke rand() only once. That seems like it ought to work, but doesn't seem to. Curious:

$ time for i in $(seq 400); do awk -v seed=$(( $(date +%s) + i)) 'BEGIN{srand(seed); r=rand()} r < 1/NR { out=$0 } END { print out}'  input; done | awk '{a[$0]++} END { for (i in a) print i, a[i]}' | sort
1 205
2 64
3 37
4 21
5 9
6 9
7 9
8 46

real    0m1.862s
user    0m0.689s
sys     0m0.907s
$ time for i in $(seq 400); do awk -v seed=$(( $(date +%s) + i)) 'BEGIN{srand(seed)} rand() < 1/NR { out=$0 } END { print out}'  input; done | awk '{a[$0]++} END { for (i in a) print i, a[i]}' | sort
1 55
2 60
3 37
4 50
5 57
6 45
7 50
8 46

real    0m1.924s
user    0m0.710s
sys     0m0.932s
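
A plausible explanation for the skew: with a single draw r made in BEGIN, the test r < 1/NR holds for exactly the lines with NR < 1/r, and out keeps the last of them, so line k is printed with probability 1/k - 1/(k+1), with the last line collecting everything below 1/N. For this 8-line input that predicts roughly 50%, 17%, 8%, 5%, 3%, 2%, 2% for lines 1 through 7 and 12.5% for line 8, which is close to the first set of counts above. Calling rand() on every line, as in the second run, is what restores the uniform distribution.
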
2
Cyrus

var="Hi
i am a student
learning scripts"

mapfile -t array <<< "$var"      # create array from $var

echo "${array[$RANDOM % (${#array}+1)]}"
echo "${array[$RANDOM % (${#array}+1)]}"

Output (e.g.):

learning scripts
i am a student

See: help mapfile
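
mapfile (also spelled readarray) needs bash 4.0 or newer; on an older bash, a sketch of the same idea is to build the array with a read loop instead:

array=()
while IFS= read -r line; do array+=("$line"); done <<< "$var"
echo "${array[RANDOM % ${#array[@]}]}"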

1
Ed Morton

This seems to be the best solution for large input files:

awk -v seed="$RANDOM" -v max="$(wc -l < file)" 'BEGIN{srand(seed); n=int(rand()*max)+1} NR==n{print; exit}' file

as it uses standard UNIX tools, it's not restricted to files of at most 32,768 lines, it doesn't have any bias towards either end of the input, it produces different output even if called twice in the same second (since it is seeded from $RANDOM rather than from the time in seconds), and it exits immediately after the target line is printed rather than continuing to the end of the input.
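
For repeated use the same command can be wrapped in a small shell function; a minimal sketch (the name pick_random_line is made up for illustration):

pick_random_line() {
  awk -v seed="$RANDOM" -v max="$(wc -l < "$1")" '
    BEGIN { srand(seed); n = int(rand()*max) + 1 }   # pick a line number in 1..max
    NR == n { print; exit }                          # print it and stop reading
  ' "$1"
}

pick_random_line file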


Update:

Having said the above, I have no explanation for why a script that calls rand() once per line and reads every line of input is about twice as fast as a script that calls rand() once and exits at the first matching line:

$ seq 100000 > file

$ time for i in $(seq 500); do
    awk -v seed="$RANDOM" -v max="$(wc -l < file)" 'BEGIN{srand(seed); n=int(rand()*max)+1} NR==n{print; exit}' file;
done > o3

real    1m0.712s
user    0m8.062s
sys     0m9.340s

$ time for i in $(seq 500); do
    awk -v seed="$RANDOM" 'BEGIN{srand(seed)} rand() < 1/NR{ out=$0 } END { print out}' file;
done > o4

real    0m29.950s
user    0m9.918s
sys     0m2.501s

They both produced very similar distributions; each summary line below shows the number of distinct lines selected, the total number of runs, and the minimum and maximum number of times any one line was picked:

$ awk '{a[$0]++} END { for (i in a) print i, a[i]}' o3 | awk '{sum+=$2; max=(NR>1&&max>$2?max:$2); min=(NR>1&&min<$2?min:$2)} END{print NR, sum, min, max}'
498 500 1 2

$ awk '{a[$0]++} END { for (i in a) print i, a[i]}' o4 | awk '{sum+=$2; max=(NR>1&&max>$2?max:$2); min=(NR>1&&min<$2?min:$2)} END{print NR, sum, min, max}'
490 500 1 3

Final Update:

Turns out it was calling wc that (unexpectedly to me at least!) was taking most of the time. Here's the improvement when we take it out of the loop:

$ time { max=$(wc -l < file); for i in $(seq 500); do awk -v seed="$RANDOM" -v max="$max" 'BEGIN{srand(seed); n=int(rand()*max)+1} NR==n{print; exit}' file; done } > o3

real    0m24.556s
user    0m5.044s
sys     0m1.565s

So, as expected, the solution that calls wc up front and rand() only once is faster than the one that calls rand() for every input line.

0
abdan

On bash, first initialize the seed, e.g. to the cube of the line count (or any value of your choice):

$ i=; while read a; do let i++; done <<< "$var"; let RANDOM=i*i*i

$ let l=$RANDOM%$i+1; echo "$var" | sed -En "$l p"

If you move your data to a file, varfile:

$ echo "$var" > varfile
$ i=; while read a; do let i++; done < varfile; let RANDOM=i*i*i

$ let l=$RANDOM%$i+1; sed -En "$l p" varfile

To draw several lines, put the last command inside a loop such as for ((c=0; c<9; c++)); do ...; done (see the sketch below).
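
Spelled out, that loop could look like this (a sketch using the same i and varfile as above):

for ((c=0; c<9; c++)); do
  let l=$RANDOM%$i+1
  sed -En "$l p" varfile
done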

0
agc

Using GNU sed and bash; no wc or awk:

f=input-file
sed -n $((RANDOM%($(sed = $f | sed '2~2d' | sed -n '$p')) + 1))p $f

Note: The three seds in $(...) are an inefficient way to fake wc -l < $f. Maybe there's a better way -- using only sed of course.
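
One simpler way that seems to stay within "only sed" is the = command addressed to the last line, which prints the line count directly:

f=input-file
sed -n "$((RANDOM % $(sed -n '$=' "$f") + 1))p" "$f"

Like the original, this assumes a non-empty input file.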

0
James Brown

Using shuf:

$ echo "$var" | shuf -n 1

Output:

Hi