I want to select a random line with `sed`. I know that `shuf -n` and `sort -R | head -n` do the job, but for `shuf` you have to install `coreutils`, and the `sort` solution isn't optimal on large data.

Here is what I tested:

```
echo "$var" | shuf -n1
```

This gives the result I want, but I'm worried about its portability; that's why I want to try it with `sed`.

```
var="Hi
i am a student
learning scripts"
```

Sample outputs from separate runs:

```
i am a student
```

```
Hi
```

The selected line must be random each time.

It depends greatly on what you want your pseudo-random probability distribution to look like. (Don't try for random; be content with pseudo-random. If you do manage to generate a truly random value, go collect your Nobel Prize.) If you just want a uniform distribution (e.g., each line has equal probability of being selected), then you'll need to know a priori how many lines are in the file. Getting that distribution is not quite so easy as allowing the earlier lines in the file to be slightly more likely to be selected (the usual modulo bias), and since that's easy, we'll do that. Assuming that the number of lines is less than 32769, you can simply do:
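The code block is missing from this copy; a sketch of the approach just described, using bash's `$RANDOM` (which ranges over 0..32767) and `wc -l` (my reconstruction, not necessarily the exact original command):

```
# Reconstruction of the described approach: pick a pseudo-random line with sed,
# assuming the file has fewer than 32769 lines. Because $RANDOM spans 0..32767,
# "RANDOM % n" is slightly biased toward earlier lines whenever n does not
# divide 32768 evenly -- the bias the text says we accept.

printf '%s\n' "Hi" "i am a student" "learning scripts" > /tmp/input-file

n=$(wc -l < /tmp/input-file)                  # line count, read once up front
sed -n "$(( RANDOM % n + 1 ))p" /tmp/input-file
```

Note that this reads the data twice: once to count lines, once to print the chosen one.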

-- edit --

After thinking about it for a bit, I realize you don't need to know the number of lines, so you don't need to read the data twice. I haven't done a rigorous analysis, but I believe that the following gives a uniform distribution:
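The code block that followed is not in this copy; the standard single-pass technique matching this description is size-1 reservoir sampling in awk (my reconstruction, not necessarily the author's exact command):

```
# Single-pass selection without knowing the line count in advance: classic
# reservoir sampling of size 1. Line NR replaces the kept line with
# probability 1/NR, which works out to a uniform distribution over all lines.
printf '%s\n' "Hi" "i am a student" "learning scripts" |
awk 'BEGIN { srand() }
     rand() * NR < 1 { line = $0 }   # keep line NR with probability 1/NR
     END { if (NR) print line }'
```

`srand()` with no argument seeds from the time of day, so runs within the same second repeat; seeding with something like `-v seed="$RANDOM"` and `srand(seed)` avoids that in bash.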

-- edit -- Ed Morton suggests in the comments that we should be able to invoke `rand()` only once. That seems like it ought to work, but doesn't seem to. Curious:
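The code discussed here is also missing from this copy. One hypothetical single-call variant (my guess at what was tried, not confirmed by the source) draws `r = rand()` once and reuses it per line, which illustrates why such an approach can fail:

```
# Hypothetical reconstruction (the actual code is missing): draw r once and
# test r * NR < 1 on every line. With r fixed, the condition holds exactly for
# NR < 1/r, so the kept line is simply floor(1/r) -- e.g. line 1 is chosen
# whenever r > 1/2, i.e. about half the time. Heavily biased, not uniform.
printf '%s\n' "Hi" "i am a student" "learning scripts" |
awk 'BEGIN { srand(); r = rand() }
     r * NR < 1 { line = $0 }   # r never changes, so this picks line floor(1/r)
     END { print line }'
```

The per-line `rand()` call in the reservoir version is what makes each replacement decision independent; collapsing it to one draw destroys that.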