
Re: how to save video on web page



On Wed 08 Apr 2020 at 13:25:55 (-0000), Curt wrote:
> On 2020-04-07, Ihor Antonov <ihor@antonovs.family> wrote:
> >> 
> >> Finally, when all else fails, and if you've read this far,
> >> you can just capture the screen contents with ffmpeg's
> >> x11grab and record it to an mpg file. The disadvantages are
> >> that you capture extraneous screen decorations, and you've got
> >> to dedicate the whole screen to watching the video, remembering
> >> to increase your blanking timeout too. If you can only record
> >> audio through the microphone, you get more extraneous rubbish
> >> there too.
> >> 
> >
> > That is one comprehensive write up!
> > Thanks David, today I learned something new thanks to you.
> >
> 
> Yet in all that detailed thoroughness I believe he neglected to mention
> one of the simpler techniques, which is to view the relevant page source
> in your browser and search for the link (mp4 is a good fishing string)
> to the video (far from infallible, of course, but has worked for me on
> occasion).
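
For concreteness, the x11grab last resort quoted above amounts to
something like this (the display, geometry, framerate and pulse audio
source are examples to adjust, not gospel):

    ffmpeg -f x11grab -framerate 25 -video_size 1920x1080 -i :0.0 \
           -f pulse -i default screencast.mpg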

I thought viewing the page source was something everybody did, and not
just for videos; I probably download more PDFs that way myself. But I
thought people might not have come across the techniques I outlined.
Is everyone here doing them already as a matter of routine?
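
Curt's fishing trip can even be done without the browser; a rough
sketch (the URL is a placeholder, and the pattern will miss anything
assembled by javascript):

    wget -q -O - 'https://example.com/page' | grep -oE 'https?://[^" ]*\.mp4'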

Anyway, for links, I use

function http-lines {
    # takes no arguments: reads HTML source on stdin
    [ -n "$1" ] && printf '%s\n' "Usage:	$FUNCNAME < html-source
	breaks lines before any occurrence of \"http\" (whatever the context)
	after first replacing newlines by blanks." >&2 && return 1
    while read -r line; do
	printf '%s ' "$line"	# join lines, replacing each newline by a blank
    done | sed -e 's,\(http[s]\?://\),\n\1,g' | less -S	# break before every URL
}

to make finding links a bit easier. Sometimes you have to copy and
paste bits to assemble the link, e.g. where the filename is given
relative to some base URL elsewhere in the web page.
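
Typical use, with a placeholder URL again:

    wget -q -O - 'https://example.com/page' | http-lines

after which /mp4 or /pdf inside less does the hunting.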

I also have http-watches, a variant that looks specifically for
youtu* references, which can be plugged straight into youtube-dl
(which I have wrapped, see below).
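
In outline it's just the same pipeline with a grep inserted, something
like:

function http-watches {
    while read -r line; do
	printf '%s ' "$line"
    done | sed -e 's,\(http[s]\?://\),\n\1,g' | grep 'youtu' | less -S
}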

And I think I've posted these here before:

function wg-in-quotes {
    [ -z "$1" ] && printf '%s\n' "Usage:	$FUNCNAME 'URL' (must be in single quotes!)
	strips the google prefix (up to url=) and suffix (from &usg=) from the argument,
	translates various common %nn sequences in the URL, and retrieves it." >&2 && return 1
    # quote the substitution so the URL can't be word-split or glob-expanded
    wget "$(sed -e 's/http.*url=//;s/&usg=.*$//;s/%2F/\//g;s/%3A/:/g;s/%3D/=/g;s/%3F/?/g;s/%26/\&/g;s/%25/%/g' <<<"$1")" # % characters must be edited last
}

function gy-in-quotes {
    [ -z "$1" ] && printf '%s\n' "Usage:	$FUNCNAME 'URL' (must be in single quotes!)
	strips the google prefix (up to url=) and suffix (from &usg=) from the argument,
	translates various common %nn sequences in the URL, and retrieves it." >&2 && return 1
    # quote the substitution so the URL can't be word-split or glob-expanded
    geto "$(sed -e 's/http.*url=//;s/&usg=.*$//;s/%2F/\//g;s/%3A/:/g;s/%3D/=/g;s/%3F/?/g;s/%26/\&/g;s/%25/%/g' <<<"$1")" # % characters must be edited last
}

which make it easier to copy and paste Google's Link Location address
directly into the command line. gy and geto[ther] are just my wrappers
round youtube-dl that download to a specific directory and maintain a
history file to avoid accidentally repeating the same download.
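
Stripped to essentials, such a wrapper amounts to something like this
(the directory and history-file paths here are only illustrative):

function gy {
    local dir="$HOME/Videos" hist="$HOME/Videos/.gy-history"
    # refuse to fetch a URL that's already in the history file
    grep -qxF "$1" "$hist" 2>/dev/null &&
	{ printf '%s\n' "$FUNCNAME: already downloaded: $1" >&2; return 1; }
    youtube-dl -o "$dir/%(title)s.%(ext)s" "$1" && printf '%s\n' "$1" >> "$hist"
}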

Cheers,
David.

