Unfortunately, it's not as up-to-date as yt-dlp so it can be fragile against blocks. I'm hoping that yt-dlp adds some functionality for downloading portions of a livestream (i.e. not downloading from the start, 120 hours ago).
You could encode these terms in a contract or something about allowed usage of a service, I believe.
yt-dlp --download-sections "*05:00-05:10" <YouTube URL>
Well, not important to some, but for enthusiasts and people looking to actually archive things, it is very important.
Case in point, hilariously, the last time I used YouTube's video download feature bundled with their Premium offering, I got a way worse quality output than with yt-dlp, which actually ripped the original stream without reencoding it.
I think I saw an idempotent h264 encoder at some point, where you wouldn't suffer generational loss if you matched the encoder settings exactly from run to run. But then you might need the people mastering the content (in this case YouTube) to adopt that same encoder, which they're not going to be "interested" in.
(It won't work for Youtube shorts though, because 10% of a 30s video just isn't enough for reliable smooth playback)
Maybe that was a difference in the stream itself though, since I've experienced both past-seekable and live-only live streams on YouTube.
The problem with this DVR feature is that if your connection is stuttering, it will buffer you backwards a bit. Streamers like to disable it because they want to keep the time-to-deliver as low as possible so chat is more interactive and engaging, especially on YouTube, where your viewership might not qualify for the CCV metrics if the stream is not in a foreground tab. Best to leave it off if that is important to you.
https://greasyfork.org/en/scripts/485020-ytbetter-enable-rew...
In other (less biased) words: these old rules were rescinded and haven't been enforced since 2012 (the last example cited). This article was written in 2025 and is still complaining about something that isn't happening anymore.
I don't think I believe this, as much as I'd like to. How many organizations would really consider this a critical need? My guess is, not enough for Google to care.
If you dive into the yt-dlp source code, you see the insane complexity of the calculations needed to download a video. There is code to handle nsig checks, internal YouTube API quirks, and constant obfuscation that makes it a nightmare (and the maintainers heroes) to keep up. Google frequently rejects download attempts, blocks certain devices or access methods, and breaks techniques that yt-dlp relies on.
Half the battle is working around attempts by Google to make ads unblockable, and the other half is working around their attempts to shut down downloaders. The idea of a "gray market ecosystem" they tacitly approve ignores how aggressively they tweak their systems to make downloading as unreliable as possible. If Google wanted downloaders to thrive, they wouldn't make developers jump through these hoops. Just look at the yt-dlp issue tracker overflowing with reports of broken functionality. There are no secret nods, handshakes, or other winks. As Google cares less and less about compatibility, the doors will close. For example, there is already a secret header used to authenticate that you are using Google's own build of the Chrome browser [1] [2], and it will probably be expanded.
[0] Ask HN: Does anyone else notice YouTube causing 100% CPU usage and stuttering? https://news.ycombinator.com/item?id=45301499
[1] Chrome's hidden X-Browser-Validation header reverse engineered https://news.ycombinator.com/item?id=44527739
[2] https://github.com/dsekz/chrome-x-browser-validation-header
Last time I searched 'stacher open source' on Google, I found a Reddit thread discussing when it might become open source.
EDIT: The reason I ask is that the article says Stacher is open source, and that is news to me.
Once those devices get phased out, it is very likely they will move to Encrypted Media Extensions or something similar. I believe I saw an issue ticket on yt-dlp's repo indicating they are already experimenting with this, as certain formats are DRM protected. Look up all the stuff going on with SABR, which if I remember right is either related to DRM or what they may use to support DRM.
YouTube just doesn't make this available via API, but you've always been able to manually download your uploaded videos from YouTube Studio.
The DRM efforts of companies like Netflix are made because the companies that licensed the content demand it. That doesn't mean the DRM works. You can find torrents of all those shows.
> They perform a valuable role: If it were impossible to download YouTube videos, many organizations would abandon hosting their videos on YouTube for a platform that offered more user flexibility. Or they’d need to host a separate download link and put it in their YouTube descriptions. But organizations don’t need to jump through hoops -- they just let people use YouTube downloaders.
No, organizations simply use YouTube because it's free, extremely convenient, and has been stable enough over the past couple of decades to depend on, and because the organization does not have the resources to set up an alternative.
Also, I'm guessing such organizations represent a vanishingly small segment of YouTube's uploaders.
I don't think people appreciate how much YouTube has created a market. "Youtuber" is a valid (if often derided) job these days, where creators can earn a living wage and maintain whole media companies. Preserving that monetization portal is key to YouTube and its content creators.
Among my favorite YouTube downloaders with a UI, I have:
- Varia https://giantpinkrobots.github.io/varia/
- Media Downloader https://github.com/mhogomchungu/media-downloader/
Wrong question leads to the wrong answer.
The right one is "how much of the ad revenue would be lost if". For now it's cheaper to spend bazillions on a whack-a-mole.
Or do you mean they read the source from hacking into a memory buffer after the player does decryption but before decoding, instead of doing the decryption themselves?
Didn't even mention https://3dyd.com
You should also look at PipePipe, also available on F-Droid and with similar enhancements (e.g., Sponsorblock) over NewPipe.
Seems like a bit of a presumptuous proposition. If it were impossible, the web just might be a bit more shit, and the video monopoly would roll on regardless. Most might just come to take for granted that videos of personal significance (I downloaded WW2 footage of my grandpa, for example, and there was no other version available except the YouTube one) are dependent on YouTube continuing to host them.
Some would, of course, use alternative platforms that offer proper download links, but it's hard to believe that would be most; it's easy to believe it would be well under 5% of those uploading videos that ought to be a kind of permanent archive, or that this loss would amount to anything for Google, commercially. Maybe not something to poke the bear on?
Unlike with Youtube videos, you can't just freely pull something off GitHub and crack Widevine level 1 DRM. The tools and extracted secret keys that release groups use to pirate 4K content are protected and not generally available.
This doesn't matter if you want to find something popular enough for a release group to drop in a torrent, but if you have personal access to some bespoke or very obscure content the DRM largely prevents you from downloading it. (especially at level 1, used for 4K, which requires that only a separate hardware video decoder can access the keys)
tl;dr: DRM works in the sense that it changes things from ~1/100 people being able to download something (YouTube) to ~1/100000.
Can confirm at least one tech news website argued this point and tore down their own video hosting servers in favor of using YouTube links/embeds. Old videos on tweakers.net are simply not accessible anymore; that content is gone now.
This was well after HTML5 was widely supported. As a website owner myself, I don't understand what's so hard now that we can write 1 HTML tag and have an embedded video on the page. They made it sound like they'd need to employ an expensive developer to continuously work on improving this and fixing bugs, whereas from my POV you're pretty much there with running ffmpeg at a few quality settings upon uploading (there are maybe 3 articles with a video per day, so any old server can handle this) and having a quality selector below the video. I can't imagine what about this would have changed in the past decade in a way that requires extra development work. At most you re-evaluate every 5 years which quality levels ffmpeg should generate and change an integer in a config file...
Alas, little as I understand it, this tiny amount of extra effort, even when the development and setup work is already in the past(!), is apparently indeed a driving force in centralizing to YouTube for for-profits.
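For what it's worth, that one-time encode step can be a very short script. A minimal sketch, assuming libx264/aac and an uploaded input.mp4 (the file name and the height ladder are illustrative, not anyone's actual setup):

#!/bin/sh
# one-time transcode of an upload into a few quality levels
for h in 360 720 1080; do
  ffmpeg -i input.mp4 -vf "scale=-2:$h" \
    -c:v libx264 -preset slow -crf 23 \
    -c:a aac -b:a 128k \
    -movflags +faststart "out_${h}p.mp4"
done

Each output can then sit behind a plain <video> tag with a quality selector, as described above.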
Granted, you would have to deal with whatever your display does to the raw video signal - preferable to pointing a camcorder at the display but a little worse than the original file.
You need to breach the terms of service (use a downloader) to exercise the rights of the content license that YouTube supports.
16K = 15360x8640
8K = 7680x4320
4K = 3840x2160
2K = 1920x1080
1K = 960x540
(Every value is a doubling of the tier below it, or in the case of "1K" a halving.)
You acknowledge that it's not that simple:
> running ffmpeg at a few quality settings upon uploading (there are maybe 3 articles with a video per day, so any old server can handle this)
Can any old server really handle that? And can it handle the resulting storage of not only the highest-quality copy but also all the other copies added on top? My $5 Linode ("any old server") does not have the storage space for that. You can switch your argument to "storage is cheap these days," but now you're telling people to upgrade their servers and not actually claiming it's a one-click process anymore.
I use Vimeo as a CDN and pay $240 per year for it ($20/month, 4x more than I spend on the Linode that hosts a dozen different websites). If Vimeo were to shut down tomorrow, I'd be pretty out of luck finding anyone offering pricing even close to that-- for example, ScaleEngine charges a minimum of $25 per month and doesn't even include storage and bandwidth in their account fee. Dailymotion Pro offers a similar service to Vimeo these days, but their $9/month plan wouldn't have enough storage for my catalog, and their next cheapest price is $84/month. If you actually go to build out your own solution with professional hosting, it's not gonna be a whole lot cheaper.
Obviously, large corporations can probably afford to do their own hosting-- and if push came to shove, many of them probably would, or would find one of those more expensive partner options. But again, you're no longer arguing "it's just an HTML tag." You're now arguing they should spend hundreds or thousands per year on something that may be incidental to their business.
I have put serious thought into creating a tool that would automatically yt-dlp every video I open to a giant hard drive and append a simple index with the title, channel, thumbnail and date.
In general, I think people are way too casual about media of all kinds silently disappearing when you're not looking.
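For what it's worth, the tool described above could be little more than a wrapper. A rough sketch, assuming yt-dlp and a mounted archive drive (the path, script name, and index format are made up):

#!/bin/sh
# archive a watched video and append title/channel/date/ID to a simple index
# (/mnt/archive and index.txt are placeholder names)
url="$1"
yt-dlp -P /mnt/archive --write-thumbnail \
  --print-to-file "%(title)s | %(channel)s | %(upload_date)s | %(id)s" \
  /mnt/archive/index.txt "$url"

--print-to-file appends one line per download, so the index grows as you go; wiring it up to "every video I open" is the hard part.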
Looking closely, at least for yt-dlp, you would see it tries multiple methods to grab available formats, tabulates the working ones, and picks from them. Those methods are constantly being peeled away, though some are occasionally added or fixed. The net trend is clear: the ability to download is eroding. There have been moments when you might seriously consider that downloading, at least without a complicated setup (PO tokens, Widevine keys, or something else), is just going to stop working.
As time goes on, even for those rare times you want to grab a video, direct downloading may no longer work. You might have to resort to other methods, like screen recording through software or an actual camera, for as long as your devices will let you do even that.
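For now, you can still watch that method juggling happen with verbose output, and even pin a particular client yourself. A small example (the client name is just an illustration, and any given client may stop working):

yt-dlp -v --extractor-args "youtube:player_client=android" <YouTube URL>

The verbose output shows what the extractor tried and why particular formats were skipped.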
Yes, it could. It'd only just waste more bandwidth having to redownload and cause more pauses. But people who buy things (the important ones to advertise to) have fast connections and everyone's locked in so google can focus more on advertiser issues over the lower end network user experience.
In the end, I decided it is not worth it. In the scenario you described, I would take the video link/ID and paste it into Bing and Yandex. There is a large chance they still have that page cached in their index.
FWIW, if you are going to create your own tool, my advice would be to make it a browser extension and try to pull the video straight from YouTube's <video> element.
This just made me incredibly grateful for the people who do this kind of work. I have no idea who writes all the uBlock Origin filters either, but blessed be the angels, long may their stay in heaven be.
I'm pretty confident I could figure it out eventually, but let's be honest: the chance that I'd ever actually invest that much time and energy approximates zero closely enough that we can just say it's flat nil.
Maybe Santa Claus needs to make some donations tonight. ho ho ho
Someone at google please give us the ability to see titles!
Well, how about thanking the people who maintain the downloader to make it possible?
> they haven't made it impossible to download videos, so that is a win IMO.
At some point you can just fire up OBS Studio and do a screen rip, then cut the ads out manually and put it on Torrent/ED2k.
Will you still think it's a win then?
> If you choose to upload Content, you must not submit to the Service any Content that does not comply with this Agreement (including the YouTube Community Guidelines) or the law. For example, the Content you submit must not include third-party intellectual property (such as copyrighted material) unless you have permission from that party or are otherwise legally entitled to do so. [...]
> By providing Content to the Service, you grant to YouTube a worldwide, non-exclusive, royalty-free, sublicensable and transferable license to use that Content (including to reproduce, distribute, prepare derivative works, display and perform it) in connection with the Service and YouTube's (and its successors' and Affiliates') business, including for the purpose of promoting and redistributing part or all of the Service.
If you include others' work with anything stronger than CC0, that's not a license you can grant. So you'll always be in trouble in principle, regardless of whether or how YouTube decides to exercise that license. In practice, I wouldn't be surprised if the copyright owner could get away with a takedown if they wanted to.
The server is $30/month hosted by OVH, which comes with 2TB of storage. The throughput on the dedicated server is 1Gbps. Unlimited transfer is included (and I've gone through many dozens of TB of traffic in a month).
Bandwidth these days can be less than $0.25/Mbps at a 100G commit in the US/EU, and OVH is pushing dozens of Tb/s.
Big ups on keeping independent.
It's a trade-off, I suppose: you can very well host your own streaming solution, and for the same price you can get a great single node, but if you want good TTFB and nodes in close proximity to many regions, you may as well pay for a managed solution, as the price of multiple VPS/VM stacks up quickly when you have a low budget
Edit: I think I missed your point about bandwidth pricing lol, but the second still stands
The part that is so infuriating is that they try to turn around and capture value that should rightly be owned by the public by offering downloads for premium members, especially when we KNOW the only reason YouTube isn't totally financially ruined is that the ISPs are legally required to price YouTube's worthless (literally AI generated) spam traffic the same as useful services. (Net Neutrality)
And they are using these ill-gotten gains to create their own backbone for yet more profit. An entirely pointless exercise when you realize the government is eventually going to break these companies up and will of course nationalize the one that owns all the infrastructure. Corporations are simply not the correct tool for managing infrastructure that the public relies on. I'm sure anybody who's tried to run a business on top of a Google service can attest that it's a bad strategy and is guaranteed to fail in the long term.
YouTube plays a game with downloaders to get and stay popular, supported even on niche devices.
On the other hand, they will use vague license statements to strike at opponents.
Even YouTube's RSS is accessible to bots, but blocked by robots.txt. They can always claim you are doing something wrong if they wanted to.
I'm glad I wasn't blocked or throttled, but it seems like it'd be trivial to block someone like me.
Am I missing something? It does sort of feel like they're allowing it
EDIT: Spooky skeletons... YouTube suddenly, as of today, forces a "Sign in to confirm you’re not a bot" on both the website and yt-dl. So maybe I've been fingerprinted and blacklisted somehow.
It's "just" a yt-dlp frontend with a nice UI, meaning it works with sites other than youtube as well.
It also adds a quick-download option to the android sharing menu when sharing a link, which I've found incredibly convenient.
YouTube does try to throttle the data speeds. When that first happened, youtube-dl stopped being useful, and everyone upgraded their Python versions and started using yt-dlp instead.
Don’t forget - most “content creators” are not technical - self hosting is not an option.
And even if it were - it costs money.
I very rarely download YouTube videos but simply having done it a few times over the years, and even watching the text fly by in the terminal with yt-dlp, everything you’ve said is obvious.
Screen recording indeed might fail—Apple lets devs block it, so even screen recording the iPhone Mirroring app can result in an all-black recording.
How long until YouTube only plays on authorized devices with screens optimized for anti-camera recording? Silver lining, could birth a new creative industry of storytelling, like courtroom sketch artists with more Mr. Beast.
https://github.com/DialmasterOrg/Youtarr and https://github.com/Jocomol/newsboat_video_downloader were already mentioned in the comments, I think there's a few different apps that support playback, but I've never tried any of them.
To buy premium to support creators.
Once yt becomes hostile the deal between me and yt is off.
There's even a good Jellyfin integration
So there's probably at least some calculation where they have to decide how much effort they're putting into cracking down on these things, simply because on the one hand they don't want to anger Hollywood and music labels, and on the other hand they don't want to kill off 3/4 of media analysis content on the platform.
There's also the fact a lot of creators will deliberately turn a blind eye to people reusing their video footage so long as they're credited in return. For a lot of them, it's less work to just let people figure out how to get the footage from their channels than to set up a third-party hosting service where you can officially download it.
Interesting to hear about the terms of service provision though. Wonder how well it would hold up now given that a lot of modern outlets use donations or paid subscriptions for financing rather than ads? I can see an outlet like 404 Media covering YouTube downloaders at some point because of that.
That is not what we're talking about - the working assumption here is that the DRM scheme is sound and effective. In which case your only possible but also guaranteed stage of recapture is at the analog hole, by which point the media encoding is already undone, incurring a generational loss.
[0] I consider presently existing and historical DRM implementations deeply flawed and misguided; they severely overstep their boundaries implied by the name "DRM", in certain cases quite disgustingly - hence the many added adjectives for clarification
[1] puzzlingly, any access control will actually get you in the same legal situation, regardless of whether the access control mechanism is effective or sound, so this is actually a design decision; but it's pretty universally taken afaik.
Y'all just have two different budgets. For one person $30 / mo is reasonable for the other it's expensive.
But the core claim, that $5 / mo hosts a lot of non-video content but not much video content, holds.
We really dropped the ball when it came to running random js from websites. The number of people who truly run only free software these days is close to zero. I used to block all js about 10 years ago but it was a losing battle and ended up being essentially an opt out from society.
Google needs to be broken up already.
A lot of the reason for that is that yt-dlp explicitly makes it easy for you to update it, so I would imagine that many frontends do so automatically - something which is becoming more necessary as time goes on, as YouTube and yt-dlp play cat and mouse with each other.
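Concretely, updating is a single command (assuming a self-updatable install such as the standalone binary), and there's a nightly channel for when stable lags behind a YouTube change:

yt-dlp -U
yt-dlp --update-to nightly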
Unfortunately, lately, yt-dlp has had to disable by default the downloading of certain formats that it was formerly able to access by pretending to be the YouTube iOS client, because they were erroring too often. There are alternatives, of course, but those ones were pretty good.
A lot of what you see in yt-dlp is there because of the immense amount of work the developers put in to keep it working. Even though it now supports downloading from many more sites than it was originally developed for, they're still not going to give up YouTube support (as long as it still allows DRM-free versions) without a fight.
Once YouTube moves to completely DRM'd videos, however, that may have to be when yt-dlp retires support for YouTube, because yt-dlp very deliberately does not bypass DRM. I'd imagine the name would change at that point.
I used to be obsessed with this.
The way I saw it was the universe took billions of years of concerted effort to generate a random number that represents a unique file such as an interesting video or image. It would be such a shame if all that effort was invalidated due to bullshit YouTube reasons or copyright nonsense or link rot or whatever.
So I started hoarding this data. I started buying hardware and designing a home data center with ZFS and hundreds of terabytes to hold it all. I started downloading things I never actually gave a shit about just because they were rare and I wanted to preserve them.
I think getting married cured me of this. Now it's all moments that will be lost to time, like tears in the rain.
And your ideas about ISP caches are from the prehistoric ages: everything is over HTTPS, and the only caches for YouTube are dedicated servers given to the ISP by Google.
Mostly the only expense the pirates have is the cost of the media itself, so subscription for the streaming service or the cost of Blu-rays or movie rental.
The DRM decryption isn't the hard bit - it's actually mostly a standard thing, and there are plenty of tools on GitHub that will decrypt it for you if you have a key, e.g. Devine.
The issue is mostly around getting a key, but those are easy enough to get if you know where to look (e.g. TV firmware dumps).
Once you have this though, and any piracy group will have this, it’s so much easier to do this than to screen record, and will give you the original quality as well.
Had never heard of Stacher, but it seems to just be a GUI frontend for yt-dlp.
In my experience, NewPipe has never worked properly.
Lol. That is not possible.
If I'm able to watch something, my device must be able to decrypt the DRM. If my device can decrypt the DRM, I can take my device apart and figure out how it does this, and do it myself.
The most DRM-encumbered format is DCP, used by cinemas. Each projector has a unique key burnt into it, the decryption, decoding, and watermarking happen on the same piece of silicon, and the entire device is built like an HSM: opening it wipes the keys.
There are bit-perfect DCP rips on the high seas, with the original compressed data.
HDCP is meant to prevent me from copying HDMI signals. Every conference center and lecture hall has cheap Chinese devices that remove it.
Regarding the analog hole, with a properly calibrated professional video camera recording in RAW, with both camera and monitor genlocked and color calibrated, and the proper postprocessing, you can capture the original pixel values exactly.
I've done that part more often than I'd like...
And worst case, you can then brute force which parameters the original encode used to re-encode your data without generation loss.
Maybe I am too old, but I remember a time when the broad population recorded radio shows on tape and TV shows on VHS, and no one was asking any copyright questions. And yes, recording a TV show on your home VHS recorder and selling/giving it to third parties was illegal in most jurisdictions, and the same laws probably still apply today.
But recording something for your own archive to watch at home (possibly with family members) is probably also still perfectly legal (as long as no DRM bypass was used).
It is possible, both under the conservative interpretation of the word (like how AACS is continuously updated as security holes are found and compromises happen, with the keys being rotated each time), and on a theoretical level (FHE). The latter is not being done because it is not nearly performant enough, and the former is an ongoing cat and mouse game that is once again irrelevant to what we were discussing here.
With FHE, the "take the machine apart and analyze what it does to get the original bitstream" would be cryptographically hard, so good luck with that. With the usual DRM schemes it isn't, so they are pretty much always cracked in a few months or a few years, but until that point they are both sound and effective.
> Regarding the analog hole, with a properly calibrated professional video camera recording in RAW, with both camera and monitor genlocked and color calibrated, and the proper postprocessing, you can capure the original pixel values exactly.
Yes, that's what I'm suggesting too...
> And worst case, you can then brute force which parameters the original encode used to re-encode your data without generation loss.
Can you? I mean in practice, in a practical amount of time. And is that actually done?
This is why I recommend everybody stay AWAY from YouTube Music. I migrated my curated playlists from Spotify a few years ago, and to my surprise I now have dozens of songs that are no longer available, and YouTube doesn't offer a way to at least let me know which songs they were. Indeed, I was a paying user, and YouTube caused intentional and irretrievable data loss.
After a decade of paying for Youtube Premium I have unsubscribed and have vowed never to give them any more money whatsoever.
To date, many Creative Commons licenses do in fact amount to "permission from that party", but if they start using DRM, those licenses would cease to grant YouTube permission.
A generic 404 for something you don't even know exists won't leave a `video_title` sized hole in your heart and chip on your shoulder, and won't give competitors opportunities to serve your needs instead.
And there's plenty of tutorials on using ffmpeg-based tools to make the files. And yes, "oh no, I need to learn something new for my video workflow."
You can just go online and grab software to bypass any and all DRM.
It's called OBS.
All DRM content must be rendered into meatspace at some point and there is literally no possible way to prevent this. Record your screen, record your system audio. It's pretty trivial
And yet with consumer hardware you can accurately rip every UHD Bluray right on release day, without any trouble.
> Can you? I mean in practice, in a practical amount of time. And is that actually done?
Most of the time and effort of encoding is already spent on brute-forcing these parameters to most effectively match the original input. You just need to change the way the encoder scores each possible result.
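As a toy illustration of that search (nowhere near a practical ripper): assuming the capture has been decoded to input.y4m and you still have the original elementary stream original.264 to score against (both file names hypothetical), you could sweep settings and test candidates:

#!/bin/sh
# sweep one x264 parameter and check for a bit-exact match with the original
# (a real search would also cover preset, tune, and rate-control choices)
for crf in $(seq 18 30); do
  ffmpeg -y -i input.y4m -c:v libx264 -crf "$crf" candidate.264 2>/dev/null
  cmp -s candidate.264 original.264 && { echo "match: crf=$crf"; break; }
done

An exact match would also require the same encoder version and threading settings, so this is more "possible in principle" than routine.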
> With FHE, the "take the machine apart and analyze what it does to get the original bitstream" would be cryptographically hard, so good luck with that.
So what? You could just distribute the encrypted datastream and the key you've extracted from a device, say a smartphone you bought from an electronics recycler.
Also, I don't see how FHE with video would be possible in the next decades, considering the limitations to Moore's Law in recent years.
Video codecs are extremely complicated, requiring lots of memory and processing power.
Ten years ago, even the best gaming PCs couldn't handle software decoding. Even today many mid-range systems can't handle software video decoding.
Even hardware implementations regularly require more power and memory than the entire rest of the system. There's a reason Raspberry Pis are primarily a video decoding chip with a CPU as coprocessor, which is why the GPU runs the bootloader.
Video is constantly at the limitations of what we can physically accomplish.
Your suggestion would require spending a massive chunk of silicon area of a SoC on a FHE decoding ASIC that would only be useful for the small minority of DRM-encumbered content people watch.
And every manufacturer, every device, would have to join, for it to make any sense at all.
I don't have a problem with Youtube trying to make downloads difficult, but using its advertising monopoly to bully publications into not discussing software Google doesn't like is outrageous.
YouTube ad money works a bit differently, and downloading the video tends to bypass that process. Also, Streaming is by its nature on-demand, so there are a lot fewer benefits for downloading the video. Furthermore, U.S. copyright law has changed since the previously cited case, so there are some extra restrictions to contend with that the Sony case did not.
Been using it as a replacement for YouTube. I don't stream nowadays, only download.
I'd advise against downloading via someone else's servers anyway, as can be seen by their instance being blocked by Google. Hopefully some day someone manages to port yt-dlp to WASM, then it can be run locally in the browser without needing a download.
Fun fact: yt-dlp is public domain! That's really generous of them and I'm really thankful for their work.
For Linux... youtube-dl I guess, because there's still no really nice alternative to it. It doesn't seem like Stacher supports non-Debian systems, or non-GNU-libc systems, either.
Reddit struggles to provide a video player that is up to YouTube’s par. Do you have more resources than Reddit? Better programmers?
The concern is likely that if they let it become too easy the small minority becomes a large majority and the ad business becomes unsustainable.
Consider the ease and adoption of newsgroups in the early 90s vs Napster/Limewire later and the effect that had on revenues.
Reddit and YouTube just have a massive number of people visiting and trying to watch video at all times. It requires an enormous amount of bandwidth to serve up that video.
Youtube goes through heroic efforts to make videos instantly available and to apply high quality compression on videos that become popular.
If you don't have a huge viewership or dynamic content, then yeah, it's actually pretty easy to set up and run video sites (Infowars has managed it). Target h264 video and aac audio with a set number of resolutions and bitrates and voilà, you've got something that's pretty competitive on the cheap and can play on pretty much any device.
It's not optimal for bandwidth, for that you need to start sniffing client capabilities. However, it'll get the job done while being pretty much universally playable.
It's much more likely YouTube just doesn't want to mess with its caching apparatus unless it really has to. And I think they will eventually, but the numbers just don't quite add up yet.
But even if they did - I don't see why Google would care about these organizations. I expect anyone doing this is not expecting to get any views from the algorithm, is not bringing in many views or monetizing their videos.
To steelman it though, maybe Google cares if their monopoly benefits from nobody even knowing other video sites exist.
Or is it something different you are thinking about?
What benefits does DRM even provide for public, ad-supported content that you don't need to log in to watch?
Does DRM cryptography offer solutions against ad blocking, or downloading videos you have legitimate access to?
Sorry that I'm too lazy to research this, but I'd appreciate if you elaborate more on this.
And also, I think they're playing the long game and will be fine to put up a login wall and aggressively block scraping and also force ID. Like Instagram.
Would be glad if I'm wrong, but I don't think so. They just haven't reached a sufficient level of monopolization for this and at the same time, the number of people watching YouTube without at least being logged in is probably already dwindling.
So they're not waiting anymore to be profitable, they already are, through ads and data collection.
But they have plenty of headroom left to truly start boiling the frog, and become a closed platform.
> No downstream restrictions. You may not offer or impose any additional or different terms or conditions on, or apply any Effective Technological Measures to, the Licensed Material if doing so restricts exercise of the Licensed Rights by any recipient of the Licensed Material.
(https://creativecommons.org/licenses/by-sa/4.0/legalcode.en)
Most of the CC licenses include such language and have since the first versions.
> Reddit struggles to provide a video player that is up to YouTube’s par. Do you have more resources than Reddit? Better programmers?
It's hard to say whether MDN and I have/am better programmer(s) and resource(s) than reddit without having any examples or notions of what issues reddit has run into.
If you mean achieving feature-parity, like automatic subtitles and dubbing and control menus to select languages and format those subtitles etc., that's a whole different ball game of course. The site I was speaking of doesn't support that today either (they don't dub/sub), at best you get automatically generated Dutch subtitles from yt now, i.e. shit subtitles (worse than English autogen and we all know how well those deal with jargon and noise)
But yeah, okay, not any server variant can do this, and the cloud premium is real. You'd need to spend like 5€/month on real hard drives if you want, say, 4TB of video storage on top of your existing website (the Vimeo and Dailymotion price points mentioned suggest that their catalogue is above 1 but below 2 TB). The 5€/month estimate is based on the currently most-viewed 4TB hard drive model in the Tweakers Pricewatch (some 100€), a modest average longevity of 5 years, and triple redundancy: 3 × 100€ over 60 months ≈ 5€/month. It assumes you would otherwise be running on smaller drives anyway for your normal website and images, so there's no (significant) additional labor or electricity cost for storage (as in, you just buy a different variant of the same setup, no need to install additional ones).
I can't speak to Dutch websites but in the U.S., a news website will usually feel obligated to provide subtitles on their videos to avoid lawsuits under the ADA.
A $28/mo (Australian) Vimeo subscription and the "Advanced" $91/mo plan both include the same 2TB of bandwidth per month for viewers of your videos.
If you upload a 100MB video and it gets 20,000 views the whole way through, that's 2TB, and you are now in the "contact sales" category of pricing.
This is why Youtube has a monopoly, because you've been badly tricked into thinking this pricing is fair and that 2TB is in any way shape or form adequate.
Could be a lot cheaper and less need for global distribution if low latency weren't a requirement. And by low latency I mean the video stream you watch is ~2s behind reality just like Youtube, Twitch, Kick, etc. If your use case is VOD or can tolerate 10s latency streaming, single PoP works fine.
The point is that if I chose Vimeo or AWS/GCP/Azure for this, at their bandwidth pricing my (in my opinion modest) usage would be costing tens of thousands of dollars monthly, which it most certainly does not.
Managed service pricing and the perception of it needs to change, because it feels like a sham once you actually do anything yourself.
I'd not be shocked if they do more aggressive compression for well known creators.
For nobodies (like myself) the compression is very basic. Whatever I send ends up compressed with VP9, which I believe YouTube has a bunch of hardware to do really fast.
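If you're curious what that transcode roughly does to an upload, you can approximate it locally. A sketch with libvpx-vp9 (the CRF value is a guess, not YouTube's actual setting):

ffmpeg -i upload.mp4 -c:v libvpx-vp9 -crf 33 -b:v 0 -c:a libopus out.webm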
This is quite different from websites monetizing traffic through ads and trackers placed on their own webpages. Those can still be reliably blocked by preventing websites from loading third-party content.
Indeed the complexity is insane
https://news.ycombinator.com/item?id=45256043
But what is meant by "a video"? Is this referring to the common case or an edge/corner case? Does "a" mean one particular video or all videos?
"There is code to handle nsig checks, internal YouTube API quirks, and constant obfuscation that makes it a nightmare(and the maintainers heroes) to keep up."
True, but is this code required for all YouTube videos?
The majority of YT videos are non-commercial, unpromoted with low view counts. These are simple to download
For example, the current yt-dlp project contains approximately 218 YT IDs. A 2024 version contained approximately 201 YT IDs. These are often for testing edge cases
The example 1,525-character shell script below outputs download URLs for almost all the YT IDs found in yt-dlp. No Python needed
By comparison the yt-dlp project is 15,679,182 characters, approximately
The curl binary is used in the example only because it's popular, not because I use it. I use simpler, more flexible software than curl
I have been using tiny shell script to download YT videos for over 15 years. I have been downloading videos from googlevideo.com for even longer, before Google acquired YouTube.^1 Surprisingly (or not), when YT changes something that requires updating the script (and this has only happened to me about 5 times or less in 15 years) I have generally been able to fix the shell script faster than yt-dl(p) fixes its Python program (same for NewPipe/NewPipeSB)
I prefer non-commercial videos that are not promoted. The ones with relatively low view counts. For more popular videos, I listen to the audio file first before downloading the video file. After listening to the audio, I may decide to skip the video. Also I am not overly concerned about throttling
1. The original Google Video made a distinction between commercial and non-commercial (free) videos. The latter were always easy to download, and no sign-in/log-in was required. This might be a more plausible theory for why YT has always allowed downloads of non-commercial videos
#!/bin/sh
# custom C filters to make scripts faster, easier to write
# yy030 filters URLs from stdin
# yy082 filters various strings from stdin,
# e.g., f == print format descriptions, v == print YT IDs
# x is a YouTube ID
# script accepts YT ID on stdin
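# example usage (hypothetical filename; the optional itag argument filters the output):
#   echo VIDEO_ID | ./yt.sh
#   echo VIDEO_ID | ./yt.sh 140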
read x;
y=https://www.youtube.com/youtubei/v1/player?prettyPrint=false
curl -K/dev/stdin $y <<eof|yy030|if test $# -gt 0;then egrep itag=$1;else yy082 f|uniq;fi;
silent
#verbose
ipv4
http1.0
tlsv1.3
tcp-nodelay
resolve www.youtube.com:443:142.251.215.238
user-agent "com.google.ios.youtube/19.45.4 (iPhone16,2; U; CPU iOS 18_1_0 like Mac OS X;)"
header "content-type: application/json"
header "X-Youtube-Client-Name: 5"
header "X-Youtube-Client-Version: 19.45.4"
header "X-Goog-Visitor-Id: CgtpN1NtNlFnajBsRSjy1bjGBjIKCgJVUxIEGgAgIw=="
cookie "PREF=hl=en&tz=UTC; SOCS=CAI; GPS=1; YSC=4sueFctSML0; __Secure-ROLLOUT_TOKEN=CJO64Zqggdaw7gEQiZW-9r3mjwMYiZW-9r3mjwM%=; VISITOR_INFO1_LIVE=i7Sm6Qgj0lE; VISITOR_PRIVACY_METADATA=CgJVUxIEGgAgIw=="
data "{\"context\": {\"client\": {\"clientName\": \"IOS\", \"clientVersion\": \"19.45.4\", \"deviceMake\": \"Apple\", \"deviceModel\": \"iPhone16,2\", \"userAgent\": \"com.google.ios.youtube/19.45.4 (iPhone16,2; U; CPU iOS 18_1_0 like Mac OS X;)\", \"osName\": \"iPhone\", \"osVersion\": \"18.1.0.22B83\", \"hl\": \"en\", \"timeZone\": \"UTC\", \"utcOffsetMinutes\": 0}}, \"videoId\": \"$x\", \"playbackContext\": {\"contentPlaybackContext\": {\"html5Preference\": \"HTML5_PREF_WANTS\", \"signatureTimestamp\": 20347}}, \"contentCheckOk\": true, \"racyCheckOk\": true}"
eof

As for that video being small and not receiving thousands of simultaneous views: sure, but buying sufficient bandwidth is not a "hire better programmers" problem. You don't need to beat Reddit's skills and resources to provide smoother video playback. Probably the opposite, actually: smaller scale should be easier to handle than Reddit scale, and they already had that all set up.
Use OBS to capture the 4K version of this test pattern[0] off Netflix and I'll buy you lunch!
You can point a camera at your monitor of course, but with level 1 DRM the video decoding happens in a hardware video decoder that's not accessible by the operating system. If you try to screen record or use OBS on 4K content on macOS/Windows, you just get black. Same with phones. It's not "just use OBS". If it works for you it's probably because you're getting 720p content. (which admittedly provides the majority of the value)
[0] https://www.netflix.com/title/80018499 # <-- this test pattern is the most reliable way to actually know what resolution video Netflix is sending you.
So between the LCD and the connection, the signal is passing in cleartext. You can intercept that.
There's HDCP converters available for cheap.
And a static mp4 file encoded with ffmpeg -movflags +faststart -start_at_zero can be seeked in any browser from any static webserver and viewed from any seek point without downloading the rest. Is this streaming too?
Given the first example of fully watching and the second implementation of streaming with a simple static file you can see the streaming/file download distinction you make is much thinner than it first appears. At least from a technical perspective.
In lawyer-ese you are making a valid semantic point which I do acknowledge. "Streaming" in lawyer-ese means a corporation is doing it so there's: 1. legal liability 2. lots of abstraction layers involved so they don't get upset and 3. the local files are hidden from view of the user. Unlike the simple streaming example with a properly encoded .mp4 file.
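For reference, the encode described above is a one-liner (file names assumed):

ffmpeg -i input.mov -c:v libx264 -c:a aac -movflags +faststart -start_at_zero output.mp4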
That's rude and uncalled for. I didn't say I was unwilling to learn something new. I said the economics don't work out for the solution someone else proposed. And I also disputed the statement, "we can write 1 HTML tag and have an embedded video on the page." Now you've moved the goalpost to "learn something new" (which actually means "design and deploy an entire new system").