
798 points by bertman | 3 comments
embedding-shape No.45900337
Seems it's already in Arch's repositories, and it seems to work; just add another flag to the invocation:

    yt-dlp --cookies-from-browser firefox --remote-components ejs:github -f "bestvideo[ext=mp4]+bestaudio[ext=m4a]/best[ext=mp4]/best" 'https://www.youtube.com/watch?v=XXX'
It downloads a solver at runtime, which took maybe half a second in total; downloads seem to start way faster than before.

    [youtube] [jsc:deno] Solving JS challenges using deno
    [youtube] [jsc:deno] Downloading challenge solver lib script from  https://github.com/yt-dlp/ejs/releases/download/0.3.1/yt.solver.lib.min.js
It would be great if we could download the solver manually with a separate command before running the download command. I'm probably not alone in running yt-dlp in a restricted environment, and being able to package it up together with the solver ahead of time would let me avoid loosening the restrictions for that environment. Not a huge issue though; in general I'm happy that downloads seem to start much faster now.
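A rough sketch of the pre-fetch half of that: grabbing the solver release artifact seen in the log above ahead of time with curl, so it can be bundled into the restricted environment. The version and URL are the ones from my log; how (or whether) yt-dlp can then be pointed at a local copy instead of fetching it itself is not something this covers.

    # Pre-download the challenge solver artifact referenced in the yt-dlp log above,
    # pinning the version and checking we actually got a non-empty JS file.
    EJS_VERSION=0.3.1
    URL="https://github.com/yt-dlp/ejs/releases/download/${EJS_VERSION}/yt.solver.lib.min.js"
    mkdir -p vendor/ejs
    curl -fL --retry 3 -o vendor/ejs/yt.solver.lib.min.js "$URL"
    test -s vendor/ejs/yt.solver.lib.min.js && \
        echo "solver fetched: $(wc -c < vendor/ejs/yt.solver.lib.min.js) bytes"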
Wowfunhappy No.45901292
What environment are you using that:

- Has access to YouTube
- Can run Python code
- Can't run JS code

If the concern is security, it sounds like the team went to great lengths to ensure the JS was sandboxed (as long as you’re using Deno).

If you're using some sort of weird OS or architecture that Deno/Node doesn't support, you might consider QuickJS, which is written in pure C and should work on anything (although it will be a lot slower; I'm not clear just how slow). Admittedly, you then lose the sandboxing, although IMO it should be safe to trust code served by Google on the official YouTube domain. (You don't have to trust Google in general to trust that they won't serve you actual malware.)
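For anyone weighing the two options, a small check of which runtimes are actually installed; the binary names below (deno, node, qjs) are assumptions about typical installs, not a statement of yt-dlp's exact lookup order.

    # List which JS runtimes are present on PATH, to see whether the
    # sandboxed (Deno) path is even available on this machine.
    for rt in deno node qjs; do
      if command -v "$rt" >/dev/null 2>&1; then
        echo "found:   $rt -> $(command -v "$rt")"
      else
        echo "missing: $rt"
      fi
    done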

ivankra No.45902465
> Although it will be a lot slower, I’m not clear just how slow.

Around 30-50x slower than V8 (node/deno).

I've recently been benchmarking a lot of different engines: https://ivankra.github.io/javascript-zoo/

ranger_danger No.45902624
> Around 30-50x slower than V8 (node/deno).

A solver taking 50ms instead of 1ms is, I'd say, practically imperceptible to most users, but I don't know what time spans those numbers are measured over.

ivankra No.45903051
My page is about generic JS benchmarks. I just did a quick run with a sample JavaScript challenge I got via yt-dlp (https://raw.githubusercontent.com/ivankra/javascript-zoo/ref...):

  $ time ./v8 /bench/yt-dlp.js | md5sum -
  a730e32029941bf1f60f9587a6d9554f  -
  real 0m0.252s
  user 0m0.386s
  sys 0m0.074s

  $ time ./quickjs /bench/yt-dlp.js | md5sum -
  a730e32029941bf1f60f9587a6d9554f  -
  real 0m2.280s
  user 0m2.507s
  sys 0m0.031s
So about 10x slower for the current flavor of YouTube challenges: 0.2s -> 2.2s.

A few more results on same input:

  spidermonkey 0.334s
  v8_jitless 1.096s => about the limit for JIT-less interpreters like quickjs
  graaljs 2.396s
  escargot 3.344s
  libjs 4.501s
  brimstone 6.328s
  modernc-quickjs 12.767s (pure Go port of quickjs)
  fastschema-qjs 1m22.801s (Wasm port of quickjs)
  boa 1m28.070s
  quickjs-ng 2m49.202s
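If anyone wants to reproduce this locally, a rough sketch of the loop I use; the engine binary names/paths are placeholders for whatever wrappers you have built locally, and bench/yt-dlp.js is the sample challenge linked above.

    # Time the same yt-dlp challenge script across whichever engine binaries
    # exist locally, and md5sum the output to confirm they all agree.
    CHALLENGE=bench/yt-dlp.js
    for engine in ./v8 ./quickjs ./spidermonkey ./graaljs; do
      [ -x "$engine" ] || { echo "skip: $engine not found"; continue; }
      echo "== $engine =="
      time "$engine" "$CHALLENGE" | md5sum -
    done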
rdtsc No.45904858
Thanks for the benchmark!

I tried it on my slower laptop. I get:

   node(v8)  : 1.25s user 0.12s system 154% cpu 0.892 total
   quickjs   : 6.54s user 0.11s system 99% cpu 6.671 total
   quickjs-ng: 545.55s user 202.67s system 99% cpu 12:32.28 total
A 5x slowdown for an interpreted C JS engine is pretty good, I think, considering all the time, code, and effort that has gone into V8 over the years!
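For what it's worth, the ~5x figure lines up with the ratio of the user-CPU times above; the wall-clock ratio comes out a bit higher. A quick check with bc:

    # Ratios from the timings above: quickjs vs node(v8).
    echo "user CPU: $(echo 'scale=1; 6.54/1.25'   | bc)x"   # ~5.2x
    echo "wall:     $(echo 'scale=1; 6.671/0.892' | bc)x"   # ~7.4x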