I'm probably alone in this, but WEI is a good thing. Anyone who's run a site knows the headache around bots. Sites that don't care about bots can simply not use WEI. Of course, we know they will use it, because bots are a headache. Millions of engineer hours are wasted yearly on bot nonsense.
With the improvements in AI this was inevitable anyway. Anyone who thinks otherwise is delusional. Reap what you sow and whatnot.
edit: removing ssl comparison since it's not really my point to begin with
> Can we just refuse to implement it?
> Unfortunately, it’s not that simple this time. Any browser choosing not to implement this would not be trusted and any website choosing to use this API could therefore reject users from those browsers. Google also has ways to drive adoptions by websites themselves.
This is true of any contentious browser feature. Choosing not to implement it means your users will sometimes be presented with a worse UX if a website's developers decide to require that feature. But as a software creator, it's up to you to determine what is best for your customers. If your only hope of not going along with this is having the EU come in and slap Google's wrist, I'm concerned that you aren't willing to take a hard stance on your own.
A TLS client does not contain any trusted private key. You can write one yourself by reading the RFCs. The same is not true for WEI.
The other provides the website the ability to ensure that the user's device is one of an approved set of devices, with an approved set of operating system builds, with an approved set of browsers.
These are fundamentally different, surely you can see that.
> similarly both can be avoided if you're willing to not participate.
Actually, no. Unless your definition of "avoided" is simply not using a website which requires attestation, which, over time, could become most of them
Or they're just walled off from most of the web entirely.
I use a variety of personally developed web scraper scripts. For instance, I have digital copies of every paystub. These will almost all become worthless. My retirement plan at a previous employer would not let me download monthly statements unless I did it manually... it was able to detect the Mechanize library, and responded with some creepy-assed warning against robots.
No one would go to the trouble to do that manually every month, and no one was allowed robots apparently. But at least they needed to install some specialty software somewhere to disallow it. This shit will just make it even easier for the assholes.
I also worry about tools I sometimes use for things like Selenium.
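For scale, the kind of script at stake here is tiny -- a minimal sketch (the portal, paths, and form fields are made up; assumes the requests library and a plain cookie login):

    import pathlib
    import requests

    BASE = "https://payroll.example.com"    # hypothetical portal

    session = requests.Session()
    session.post(f"{BASE}/login", data={"user": "me", "password": "hunter2"})

    # Grab this month's statement and file it away next to the others.
    pdf = session.get(f"{BASE}/statements/latest.pdf")
    pdf.raise_for_status()
    pathlib.Path("paystub-2023-07.pdf").write_bytes(pdf.content)

Under an attestation regime, the portal can simply refuse anything that can't produce a passing token, which nothing outside an approved browser stack ever will.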
This isn't SSL.
Are you sure you actually understand these two technologies (WEI and TLS) sufficiently to make these claims?
This is indeed concerning. I'd like to see Brave's response to this, and we already know how Firefox has responded.
In WEI, the users (the ones being attested) _cannot_ avoid WEI. If a website decides to not allow an unattested user, they can simply decide to refuse access.
The answer to this one is that the fundamental problem that current TPMs aim to "solve" is that of allowing corporate control and inspection of end users' computers. To continue having a free society where individuals have some autonomy over the devices they purportedly own, this needs to be soundly rejected.
Unfortunately, people who have rooted phones or who use nonstandard browsers are no more than 1% of users. It’s important that they exist, but the web is a massive platform. We can not let a tyranny of 1% of users steer the ship. The vast majority of users would benefit from this, if it really works.
However, I could see that this tool would be abused by certain websites and prevent users from logging in if on a non-standard browser, especially banks. Unfortunate, but overall beneficial to the masses.
Edit: Apparently 5% of the time it intentionally omits the result so it can’t be used to block clients. Very reasonable solution.
Other than Encrypted Media Extensions (and these are much more constrained than WEI!), I don't know of any other web standard that does that.
The EV certs still exist, but the browsers don't really differentiate between DV and EV certs anymore.
I don't think it does that. Nothing about this reduces the problem that captchas are attempting to solve.
> I could see that this tool would be abused by certain websites and prevent users from logging in if on a non-standard browser, especially banks.
That's not abusing this tool. That's the very thing that this is intended to allow.
My problem isn't that I as a developer don't have an option to not implement attestation checks on my own web properties. I already know that (and definitely won't be implementing them).
My problem is that a huge number of websites will, ostensibly as an easier way to prevent malicious automation, spam etc, but in doing so will throw the baby out with the bathwater: That users will no longer have OS and browser choice because the web shackles them to approved, signed, and sealed hardware/software combinations primarily controlled by big tech.
In either case, WEI has the potential to be proper DRM, like in the “approved devices” fashion. It’s deeply invasive, and can be used to exclude any type of usage at the whim of mega corps, like screen readers, ad blocking, anti-tracking/fingerprinting, downloading copyrighted content, and anything new they can think of in the future. It’s quite literally the gateway to making the web an App Store (or at best, multiple app stores).
> What's the alternative solution?
To what problem? Bots specifically or humans who want to use the web in any way they want?
If bots, then elaborate. Many bots are good, and ironically the vast majority of bot traffic comes from the very corporations that are behind this stuff. As for the really bad bots, we have IP blocklisting. For the gray/manipulative bots, sure, that’s a problem. What makes you think that problem needs to be addressed with mandatory handcuffs for everyone else?
But TLS certificates solve a much narrower problem than WEI ("are you communicating with the site you think you are") and are widely and cheaply available from multiple organizationally independent certificate authorities.
In particular, TLS certificates don't try to make an assertion about the website visited, i.e. "this site is operated by honest people, not scammers". WEI does, with the assertion being something like "this browser will not allow injecting scripts or blocking elements".
Remember when Apple killed Flash? I heard it was because they wanted people to use their app store more instead of us playing games in the browser, so they could make more money. And Microsoft installing IE and setting it as the default browser? And now, Google is making changes to how we browse the web and adding things like Manifest v3, to boost their ad business.
The most irritating part is that it always gets packaged as being for our safety. The sad thing is I've often seen people even drink this user-safety kool-aid, especially with Apple (like restricting browser choices on mobile - not sure if it's changed now).
I really think there should be some laws in place to prevent this kind of behavior. It's not fair to us, the users, and we can't just rely on the EU to do it all the time.
I already block all ads so I'm obviously not totally sympathetic to developers who make decisions based on what will maximize ad revenue, but it still is not fair to put the burden on developers here and say "it's your choice, just say no".
Depends on what you count as "nonstandard", but various estimates put non-top 6 browser usage at between 3-12% (https://en.wikipedia.org/wiki/Usage_share_of_web_browsers#Su...) and non-Windows/macOS/iOS/Android usage at ~4% (https://en.wikipedia.org/wiki/Usage_share_of_operating_syste....) These also don't take into account traffic on older operating systems or hardware that would be incompatible with these attestations, or clients that spoof their user agent for anonymity.
In an ideal world, we would see this number grow, not shrink. It's not good for consumers if our choices dwindle to just one or two options.
> We can not let a tyranny of 1% of users steer the ship.
Far less than 1% of my users use the accessibility features. In fact, it is closer to 1% of 1%. Does that justify the far, far easier development and bug testing that I would enjoy if I were to stop providing accessibility features?
* Allow web servers to evaluate the authenticity of the device and honest representation of the software stack and the traffic from the device.
* Offer an adversarially robust and long-term sustainable anti-abuse solution.
* Don't enable new cross-site user tracking capabilities through attestation. Continue to allow web browsers to browse the Web without attestation.
From: https://github.com/RupertBenWiser/Web-Environment-Integrity/...
If it actually won't do any of those things, then that should be debated first.
This notion of destroying the open web is so nonsensical. WEI is not obligatory. If it's being implemented it's because it solves a real problem. Think about it. There will still be sites that don't use it.
People's real issue is that the big sites will use WEI because the problem it solves is legitimate but they don't want to identify themselves, which makes sense, but they were never obligated to let you visit their site to begin with.
I can see this show up on Youtube (why not - under Google's control, and they want you to watch the ads on their official browser) and on banking apps. Initially. In the longer run, it either withers and dies, or it leads to antitrust action. I really can't see another way.
Our best hope is kicking up a huge fuss so legislators and media will notice, so Google will be under pressure. It won't make them cancel the feature, but remember that they aren't above antitrust law. There is a significant chance that some competition authority will step in if the issue doesn't die down. Our job is to make sure it won't be forgotten really quickly.
I think this makes a category error. Most browser features/APIs are indeed treated as progressive enhancements by web developers, at least until an overwhelming number of the users have access to that feature. And even then, even if the developer makes assumptions that the feature/API is present, often the result is a degraded experience rather than an all-out broken experience.
The same is not true of web attestation. If a website requires it and a browser refuses to implement it, in at least some cases (probably a concerningly high number of cases though) the result will be that the user is entirely locked out of using that website.
It's also worth noting that _even if_ Vivaldi implements WEI, there's a solid chance that the attestation authority (Google, Microsoft, Apple) or possibly the website itself[1] will not accept it as a valid environment at all! After all, what makes Vivaldi not a "malicious or automated environment" in their eyes? What if Vivaldi allows full ad blocking extensions? User automation/scripting? Or any example of too much freedom to the user. Will the attestation authority decide that it is not worthy of being an acceptable environment?
[1] if this ends up spiralling out of control by allowing the full attestation chain to be inspected by the website
Absolutely zero large web properties do anything based on what's best for users. If this gains traction, Google will simply deny adsense payments for impressions from an "untrusted" page, and thus all the large players that show ads for revenue will immediately implement WEI without giving a single flying shit about the users, as they always have and always will.
Yes. Every SECOPS person let out a collective sigh of relief when the weekly p0 patches for flash stopped coming. Apple may have been trying to push towards 'native' apps but that was almost certainly secondary; safari was leading the way on html5 APIs.
Let's not pretend that the death of Flash was a tragedy.
I do remember the controversy at the time of everybody shifting to HTTPS only, though, and how it might exclude small/hobbyist sites. Fortunately, we've found ways to mitigate that friction in the end. I'm much less optimistic here.
I also think web developers getting together like we did with SOPA/PIPA and raising awareness on our web properties can also help. How do we organize that?
That’s kinda tricky to do well. Tagging your own monitoring traffic you can do with a JWT, but things like enabling chunked transfer in the Python requests lib are the kind of problem you only discover in the wild. An array of attestors could guarantee feature sets.
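To be concrete about the JWT bit (a minimal sketch, assuming PyJWT and a shared secret; all names are made up): your own probes carry a short-lived signed claim, and the server lets them through without a challenge.

    import time

    import jwt  # PyJWT

    SECRET = "shared-between-monitor-and-server"   # hypothetical shared secret

    # Monitoring client: tag the request so the server knows it's ours.
    token = jwt.encode(
        {"sub": "uptime-probe", "exp": int(time.time()) + 60},
        SECRET,
        algorithm="HS256",
    )
    headers = {"X-Monitoring-Token": token}

    # Server side: wave the probe through, or fall back to normal bot handling.
    def is_own_probe(header_value: str) -> bool:
        try:
            claims = jwt.decode(header_value, SECRET, algorithms=["HS256"])
            return claims.get("sub") == "uptime-probe"
        except jwt.InvalidTokenError:
            return False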
This all seems to me that in a decade we'll be having the same discussion, with the same excuse, but eventually the proposal from big corporations will be to require plugging-in a government-issued ID card into a smartcard reader in order to access pre-approved websites with pre-approved client portals running in pre-approved machines.
Google can reduce the page rank of websites that don't enable it (or just not give them any page rank at all), and now everyone who wants to be found has to enable it.
Provenance to the extent it is a problem is already handleable and largely handled. Note that "handled" here does not mean it is 100% gone, only that it is contained. Monopolistic control over the web is not containable.
Insects in a swarm can choose where to go but they can't choose where the swarm goes.
Normally I'd agree with you that the tyranny of the minority is a bad thing, but sometimes the minority actually has a point, and this is one of the cases where the minority is _objectively_ correct and letting the majority decide would end up in a complete dystopia. Democracy only works if everyone is informed (and able to think logically/critically, not influenced (either by force or by salary), etc.), and in this case the 99% simply do not have any clue about the effects of this being implemented (nor do they care). This entire proposal is pure Orwellian shit.
There are a number of sites I frequent but don't log in to or register for an account.
Every single one of them has an absurd number of captchas, or I see the Cloudflare protection thing come up first for 3 seconds.
So while hypothetically it may be true that they don't have to do it, they will. It's not even clear to me that Firefox could implement it too... so do I have to switch back to Chrome (or [barf] Safari?)? Dunno. I can't predict the future, but you'd have to be in some sort of denial to not see where this is going.
> At the end of the day bots are a real issue
Bots are fucking awesome. We should all have bots, out there doing the boring stuff, bringing back the goodies to us. If someone tells you that bots are bad, they're lying to you because they're afraid that you might find out how much you'd want one.
I also don't understand how WEI does much to prevent a motivated user from faking requests. If you have Chrome running on your machine it's not gonna be too hard to extract a signed WEI token from its execution, one way or another, and pass that along with your Python script.
It looks like it basically gives Google another tool to constrain users' choices.
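To make that concrete, here's a rough sketch of the replay idea (the header name, endpoint, and token format are all made up -- this is not the actual WEI wire format): capture whatever signed blob a real, attested Chrome session produced and attach it to scripted requests.

    import requests

    # A token previously captured from a genuine, attested Chrome session
    # (via a proxy, a debugger, or by instrumenting the browser itself).
    CAPTURED_TOKEN = "eyJ...signed-by-the-attester..."   # placeholder value

    resp = requests.get(
        "https://example.com/protected",                 # hypothetical endpoint
        headers={
            "User-Agent": "Mozilla/5.0 ... Chrome/115.0 Safari/537.36",
            "X-Environment-Integrity": CAPTURED_TOKEN,   # made-up header name
        },
    )
    print(resp.status_code)

Content binding and short token lifetimes can narrow this down, but the signature still proves something about the machine that minted the token, not about who is replaying it.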
I'm guessing the reason we want attestation is so that Chrome can drop ad blockers and websites can drop non-Chrome browsers. But there is no reason why you can't do the thing where you point a video camera at a monitor, have AI black out the ads, and then view the edited video feed instead of the real one.
The only use for attestation I see is for work-from-home corporate Intranets. Sure, make sure that OS is up to date before you're willing to send High-Value Intellectual Property to the laptop. That... already works and doesn't involve web standards. (At my current job, I'm in the hilarious position where all of our source code is open-source and anyone on Earth can edit it, but I have to use a trusted computer to do things like anti-discrimination training. It's like opsec backwards. But, the attestation works fine, no new tech needed.)
Even without the incentive of “moar profit$” they never entertained Flash because fundamentally, it sucked. When it landed in Android, it was a bloated mess that sucked the battery dry and was slow as molasses. On every platform it existed on, it was a usability and security nightmare. No, Apple “killed” Flash by making a sane decision not to allow it in their fledgling platform because Flash outright sucked, informed largely by the abhorrent performance on all platforms.
> And Microsoft installing IE and setting it as the default browser?
SMH. There was never an issue with Microsoft providing IE as a default initially - that came later with the EU. The biggest issue was that if an OEM (a Dell or an HP) struck a deal with Netscape to provide that as the default, Microsoft threatened to revoke the OEM's license to distribute Windows. In the late ‘90s and early ‘00s that would have been the death knell of an OEM. And that is the anti-trust part. They abused their position as the number-one desktop OS (by a significant margin) to take control of the then-nascent browser market.
- Display a small text or a link to raise awareness about WEI
- Display a "Works best with Firefox, a browser which respects you and your privacy" banner in a similar way to the chrome nagging popups.
- Display a fullscreen modal (just like the SOPA/PIPA ones) with a detailed write-up of the problem
- Subtly degrade the website's experience on chromium (just check window.chrome)
- Outright block chromium, and explain why.
Nah, let jfoutz pay the fraction of a penny and get no ads. Content producer gets paid, and I get to read the article.
The site can choose, based on attested properties to send to an ad network, or just take the money from the user.
There are other ways to do it, but this makes it a standard.
If anything you are just proving the point of the most paranoid.
I don't even have a strong opinion on this but it's so weird to see this argument over and over. It just calls for an even more extreme reaction to any effort that goes in this direction, just in case it's used to justify a push for even worse stuff down the line.
They're not. Depending on your competency, you have a _ton_ of tools at your disposal for filtering traffic ranging from basic throttle to sophisticated behavior/request profiling.
I've spent more than a little bit of my career dealing with bots and I'm not really sure that a cryptographically signed blob proving that the request came from $thisSpecificVersion of firefox running on $thisExactVersion of osx is really going to help me.
I don't care _what_ made the request because that can always be spoofed; this cat and mouse game always ends at the analog loophole. I care about what the request(s) are trying to do, and that is something that I figure out with just the data I have server side.
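For example, even something as blunt as this toy sketch (real setups would use Redis, a WAF, and far better heuristics) does more against abusive traffic than knowing the exact browser build:

    import time
    from collections import defaultdict, deque

    WINDOW = 60          # seconds
    MAX_REQUESTS = 120   # per client key, per window
    hits = defaultdict(deque)

    def looks_abusive(client_key: str, path: str) -> bool:
        """Basic throttle plus a crude behavioural check, using only server-side data."""
        now = time.time()
        q = hits[client_key]
        q.append((now, path))
        while q and q[0][0] < now - WINDOW:
            q.popleft()
        too_fast = len(q) > MAX_REQUESTS
        # Hammering one expensive endpoint nonstop is suspicious no matter
        # what the client claims to be.
        single_minded = len(q) > 20 and len({p for _, p in q}) == 1
        return too_fast or single_minded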
The whole "this will block bots" part of the spec is complete bollocks and a red herring to distract from the real purpose - to block adblockers and competition from new browsers. And DRM, of course.
Talked to a few friends inside Google as well and they are also against it.
Firefox is going to be my default moving forward.
There is no reason or way to discuss it with technical merits anyway. Nobody can create a new issue on that repo, nor can they create a PR. Comments on reviews are also disabled.
Many of us are at technical spots that can do this. We need to bring back "Works best with Mozilla Firefox" pop-overs.
So… fuck it. Let them DRM their part of the internet. It is mostly shit nowadays anyway. They can index Reddit, X, and a bunch of sites that are GPT SEO trash.
We’re never getting 201X internet back anyway, so let Google and friends do their thing, and everybody who doesn’t want anything to do with it can go back to the 200X internet. It was kind of disorganized, but it is better than fighting them on DRM over and over again.
*exactly*. The analog loophole is where this cat/mouse game must end. Since we already know how it'll play out, can't we invest our time into more useful endeavors?
Edit: Ah, here's something about it from a degoogling perspective: https://www.reddit.com/r/degoogle/comments/x1610t/what_are_y...
Google is already pushing WEI into Chromium - https://news.ycombinator.com/item?id=36876301 - July 2023 (705 comments)
Google engineers want to make ad-blocking (near) impossible - https://news.ycombinator.com/item?id=36875226 - July 2023 (439 comments)
Google vs. the Open Web - https://news.ycombinator.com/item?id=36875164 - July 2023 (161 comments)
Apple already shipped attestation on the web, and we barely noticed - https://news.ycombinator.com/item?id=36862494 - July 2023 (413 comments)
Google’s nightmare “Web Integrity API” wants a DRM gatekeeper for the web - https://news.ycombinator.com/item?id=36854114 - July 2023 (447 comments)
Web Environment Integrity API Proposal - https://news.ycombinator.com/item?id=36817305 - July 2023 (437 comments)
Web Environment Integrity Explainer - https://news.ycombinator.com/item?id=36785516 - July 2023 (44 comments)
Google Chrome Proposal – Web Environment Integrity - https://news.ycombinator.com/item?id=36778999 - July 2023 (93 comments)
Web Environment Integrity – Google locking down on browsers - https://news.ycombinator.com/item?id=35864471 - May 2023 (1 comment)
If it were impossible for a company to have such a high market share in all of these areas at once, this proposal would be much less concerning.
I don't think I've made a category error, that again is true of all browser features. If your browser does not support JavaScript or WebSockets or WebGL, many sites would lock you out of them entirely as well. It's a choice of the website creator what to assume and what to require, and how to degrade the experience or offer alternatives when a feature is missing.
The way I imagine it, WEI will start with skipping CAPTCHA. Then it will be about serving ads (users without WEI would generate no or very limited ad revenue.) Then it's up to the owner of a site whether or not they want to allow non-WEI traffic at all. Some will choose to block users without WEI, and hopefully the number of browsers that have chosen not to implement it, and the number of users on those browsers is high enough that that option will not be appealing.
I hope that Vivaldi remains one of the browsers that doesn't implement it, whether or not the EU rules against it.
But the power is too significant. If it were some small subset of positive assertions I'd be ok with this, but the ability to perform arbitrary attestation is beyond what is required and is far too abusable.
- “I don't know why this enrages folks so much.” Googler re Chrome anti-feature https://news.ycombinator.com/item?id=36868888
I think that just meant some users with sufficient karma flagged it, but I was a bit confused because for a while it didn't say "[flagged]" but didn't show up in the first several pages or continue to get upvotes. Is there a delay in saying "[flagged]"?
It'll end DDOS by botnet. Compromised computers would (presumably) have to run a full browser. That's much more computationally expensive and (presumably) the user would see it running.
And the flaw here is that the proposal doesn't do enough. If that signed blob allowed you to uniquely ID the device it would help solve a lot more problems. That would end DDOS for the most part and make managing abuse a lot easier.
That's just the beginning. Attestation will eventually allow advertisers to demand that user is present and looking at the screen like in Black Mirror episode Fifteen Million Merits.
But many here are (in my view rightly) arguing that this would be too high a price to pay for bot/spam protection, since it would almost inevitably cement the browser, OS, and device monoculture even further.
[1] https://www.cultofmac.com/311171/crazy-iphone-rig-shows-chin...
Also, the fact that you're talking about antivirus shows that you're not really in touch with the gamut of computing. From my perspective, antivirus was something that was relevant two decades ago.
I can understand shopping. And reporters of hot news. But why everything?
Why does my http site, which has nothing important on it at all, get flagged by chrome as "insecure"?
This strikes me as a bunch of bs.
I'm fine with attestation when it comes to high-risk tasks such as confirming financial transactions or signing legal documents, or anonymous "proof-of-humanity" solutions such as Apple's Private Access Tokens (as long as there's a CAPTCHA-based or similar alternative!) for free trials or account creations (beats using SMS/phone number authentication, at least), but applying Trusted Computing to the entire browser just goes much too far.
So is it a headache for all/most sites or is it not?
1) FLoC: https://www.theverge.com/2022/1/25/22900567/google-floc-aban...
2) Dart: Google wanted this to replace JavaScript, but Mozilla and MS both said no way, as they had no part in it. So that project ended up dying.
Google tries lots of things. Mozilla, MS, and Apple are still strong enough (especially outside the US) to push back on things that they think are a bad idea.
How to Email the President and Members of Congress
https://www.whitehouse.gov/contact/
https://www.facebook.com/joebiden/
Write a Letter
The online form is the fastest way to send a message, but if you prefer to write or type a letter, keep the following in mind:
Use 8 1/2 by 11-inch paper
Either type your message or handwrite it as neatly as possible
Include your return address on both the letter and the envelope
Mail the letter to The White House, 1600 Pennsylvania Avenue NW, Washington, DC 20500
Include the appropriate postage (stamp)
Contact the White House By Phone
Even though you can’t email the President, you can call the White House. However, to be clear, you will likely only speak with a staff member. To call, use the following phone numbers:
For general comments, call 202-456-1111
To reach the switchboard, call 202-456-1414
For TTY/TTD, use Comments: 202-456-6213 or the Visitor’s Office: 202-456-2121
It is highly unlikely that you will get to speak with any sitting POTUS directly on the phone.
How to Send an E-mail to Your House Representative
To find your representative, search the House of Representatives database by zip code. As an alternative, visit the Representative’s personal website. Most government websites have email and mailing addresses listed on the Contacts page.
Many websites also offer a contact form, but we recommend using this only as a last resort. Many online contact forms go to the website maintenance team and often don’t reach the representative or their staff. If you want a response, send a direct email or a letter.
How to Send an E-mail to Your Senator
To find your state Senator(s), select your Senator from the state-by-state list on the United States Senate’s Web site. Note the list is in alphabetical order and provides the following information for each senator:
Senator’s full name
Political party affiliation and state they represent
Mailing address
Phone number
Link to an email contact form, usually on the Senator’s website.
Also, you can call the United States Capitol switchboard at (202) 224-3121. A switchboard operator will connect you directly with the state Senator’s office you request.
lifted from, https://www.einvestigator.com/government-email-addresses/
Basic reality and the ease of attacks made it impossible to stick with HTTP for much longer. And hell, if I watch Scammer Payback on YouTube, I'm beginning to think it might be a good idea to disable developer tools on browsers and to only unlock them if you can prove physical, un-remoteable access to a machine, similar to Apple's SIP.
There are a number of issues with your imagined scenario. I'll address two of them. Firstly, as nvy points out[0]:
If this gains traction, Google will simply deny adsense payments for
impressions from an "untrusted" page, and thus all the large players that
show ads for revenue will immediately implement WEI without giving a single
flying shit about the users, as they always have and always will.
This is the primary reason Google wants WEI -- to make it harder for users of ad/tracking blockers to access sites they sell ads on.
The second issue is who is providing this "attestation" and what their criteria might be for "trustworthy" browsers. This will break down to a handful (Google, Microsoft, Apple and maybe Cloudflare and/or one or two others) of trusted "attestors" who will decide which browser/plugins/OS combinations are "trustworthy."
Since these folks all have a stake in walled gardens^W hellscapes, who's to say that Apple won't "attest" that any browser other than Safari on iOS or MacOS isn't trustworthy? Or Google may decide that any browser with uBlockOrigin, uMatrix or NoScript isn't trustworthy -- thus permanently deprecating ad/tracking blockers.
Since the spec doesn't specify the criteria for a "trusted" client, nor does it allow for the web site to determine for itself what constitutes the same, it's almost certain that such "trusted attestors" will penalize those who don't dance to their tune.
There are a host of other issues with WEI, especially privacy and property rights related, but those two (IMHO) are most relevant to your imaginings.
What might those benefits be? Not being snarky here, but AFAICT the only folks who gain any benefit seem to be Google and their customers (advertisers).
What am I missing here?
I see that as a downside, not a benefit -- who decides whether or not a client (i.e., my software running on my hardware) has those "desired properties" and what might those properties be?
Is this truly going to work though? Captcha providers already monitor mouse and keyboard movement while on the page. Can you really "synthesize" human-like mouse movements around the page? I'm not so sure.
If even extensions can be detected, why wouldn't selenium be detected? Granted, I don't know how it works exactly.
In addition to the $5 robot arm, you need to add $200 for the device it is operating. Drastically raising the cost to run a bot farm is key. You can't fully eliminate inauthentic behavior, but you can make a lot of it unprofitable.
Not all of us. _This_ security guy just didn't like having to patch an entire fleet against a new critical exploit in the flash VM every week.
>I can understand shopping. And reporters of hot news. But why everything?
So Google can capture more ad revenue by refusing to "attest" clients who run ad blockers?
And so other attestors can dictate the "approved" software that can be used.
What could go wrong? /s
There's nothing about payments that requires testing client properties though. What you want is the ability to test if there's a corresponding payment, that has nothing really to do with the client's device. It just seems like irrelevant information, what are these "desired properties"?
You want a corresponding token with the request that matches a payment. And WEI seems like a strictly inferior way to get that instead of just... asking a payment provider for the token. What does my hardware/OS/browser have to do with a payment token?
If you can still run extensions you still need captchas. So one possible road this takes is Google launches it, everybody still uses captchas because extensions in desktop browsers still make automating requests trivial -- and then we lock down extensions because "we already locked down the hardware and we really do need to do something about captchas..."
This is more or less what the proposal does? It's akin to the same shady stuff seen here [1] except this time some third party gets to sign it.
> That would end DDOS for the most part and make managing abuse a lot easier.
Not every bot that I'm defending against is a DDoS but I can probably figure out a way to overwhelm the "pre-content" filter that's trying to figure out if a token is legit or not.
If we are serious about protesting this, let’s do as follows: We implement code in our websites that checks whether the user agent implements this API. If the check passes, we tell the user that their browser is not welcome and why that is.
#BoycottGoogle #BoycottChrome #BoycottBullshit
With that, at one point we actually started running low on physical space in the office. We've had a running joke (started by a Flash dev of course) that we'll just move all of the remaining Flash guys to the toilet...
But in all honesty, Flash was a terrible, absolutely horrible technology. I was lucky enough that I've only had to work with it from the backend, but I still remember the dread.
I think Adobe missed a huge opportunity where they could have built new tooling and a framework to target HTML5.
I can't literally emulate mouse movements but the only place that matters is... captchas. If you're not watching for those kinds of behaviors, then a browser even without webdriver can be automated just fine. And if you are watching for those behaviors, then you're running a captcha, so what is WEI helping with?
Google claims this is not going to impact browser extensions, debugging, etc... but if it's not going to impact that stuff, then it's not really helpful for guaranteeing that the user isn't automating requests. What it is helpful for is reducing user freedom around their OS/hardware and setting the stage for attacking extensions like adblockers more directly in the future.
The original iPhone, which killed Flash, didn’t even ship with the App Store. They assumed we’d only be using web apps.
It’s in the original Steve Jobs presentation when he announced the iPhone.
I completely agree about the spec's vagueness about what makes a client trusted, and that attesters can choose arbitrary criteria, and will likely favor things that make the walls on their gardens higher.
I hope you're not misunderstanding my position, I think WEI is bad for users and I'm hoping that alternative browser vendors like Vivaldi take a stand to not implement it.
I don't know much about the online ad market. I assume advertisers will pay more for attested impressions than for unattested ones. But unattested impressions will still be worth something.
Strongest possible disagreement here.
On the other hand, you can bet that that's absolutely something scammers will be able to convince people to do while they're on the phone with them...
The page must first load, then it requests an attestation using JS and sends it back to the server for further use (like a reCAPTCHA token).
So for something like curl there could be no change at all.
https://github.com/RupertBenWiser/Web-Environment-Integrity/...
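Roughly, on the site's side, the flow could look like this (a sketch under big assumptions: the token arrives like a reCAPTCHA token in the request body, and the site verifies the attester's signature locally -- the field names and helper are hypothetical, not the actual proposal):

    def verify_attester_signature(token: str) -> dict | None:
        """Hypothetical helper: check the token against known attester public keys
        and return its claims, or None if it doesn't verify."""
        ...

    def handle_protected_request(form: dict) -> tuple[int, str]:
        token = form.get("integrity_token")          # field name is a guess
        if not token:
            # curl, old browsers, held-back clients, etc. all land here
            return 403, "attestation required"
        verdict = verify_attester_signature(token)
        if not verdict or not verdict.get("meets_integrity", False):
            return 403, "environment not trusted"
        return 200, "full content"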
I definitely agree that AdSense blocking clients that don't implement WEI seems likely. At that point, it will be up to websites that rely on AdSense revenue to decide what to do with customers they aren't monetizing. That's already a question they have from users with ad blockers, although that is a little bit more challenging to detect.
My hope is that the majority of sites accept that they can't rely on ad revenue, and instead resort to directly monetizing users as a way to make ends meet. IMO that's a better relationship than indirectly selling their data and attention.
It's very simple. Google has concerns of click/impression fraud. Unattested traffic would be more likely to be fraudulent. Not paying for unattested impressions/clicks is therefore an easy way to cut costs and combat fraud.
> On the other hand, you can bet that that's absolutely something scammers will be able to convince people to do while they're on the phone with them...
Indeed, but it will slow them down significantly and reduce the number of marks considerably as well.
A bot is just some computer doing what its owner wants. OP is happy because WEI will eliminate bots. OP is inconvenienced by other people using computers in ways they don't like, and wants to take control of the computer away.
As strong AI is knocking on the door, we see people wanting to take general purpose computing away. All the worst outcomes involve people losing the ability to control their own computers.
I wouldn't mind being able to use the TPM to tell me whether the hardware and software are what I expected them to be, but that's different.
- Mozilla is already publicly and officially opposed (https://github.com/mozilla/standards-positions/issues/852#is...), on principle ("Any browser, server, or publisher that implements common standards is automatically part of the Web") as well as on technical concerns around the safeguards and downsides of the proposal.
- WebKit is not committed to a position, but has mentioned several concerns (https://github.com/WebKit/standards-positions/issues/234):
"We have Private Access Tokens (aka Privacy Pass) for some of the claimed use cases of this spec. We think it's a more privacy-respecting solution. The Explainer isn't very clear on why specifically Web Environment Integrity is better. It mentions a feedback mechanism, but not the specific mechanism. It also exposes more info to the page. The Explainer claims this spec is necessary because Privacy Access Tokens don't support feedback from websites on false positives / false negatives, however, neither the spec nor the explainer include a feedback mechanism. Without more specifics, we would not be enthusiastic about duplicating an existing standards-track solution for the same use cases."
- Vivaldi is clearly opposed, per this blog post.
- Holdback as a mechanism is a weak defense against abuse. Some potential stakeholders are already suggesting to scrap holdback to support their use-cases (https://github.com/RupertBenWiser/Web-Environment-Integrity/...), leading to the possibility that it may not even be part of the final standard. Holdback is not technically enforced: a user agent can choose not to hold back, and if they are sufficiently popular they may induce web site operators to rely on their signal (at least for that browser) which would have the exact "DRM" effect that the proposal claims to avoid. The exact implementation of holdback matters a lot: if it's e.g. per-request, a site can simply ask repeatedly; if it's per-session or per-user, a malicious agent can pretend to be heldback the entire time.
- Since holdback is being touted as essentially the only defense against "DRMing" the web, it's a real mistake to have it be so poorly specified. The way it's currently specified makes it sound more like an afterthought than a serious attempt to mitigate harm.
- Compared to Private Access Tokens, WEI leaks far more information. WEI allows attesters to provide arbitrary metadata in their (signed) attestation verdict, whereas PAT tokens are fully opaque and blindly signed. Furthermore, PAT tokens can be in principle obtained through alternate attestation mechanisms (e.g. captcha, authentication, ...) without leaking the details of how that attestation is performed. WEI does not provide for this, and instead is designed around explicitly validating the "web environment".
Or probably CFAA, it seems inevitable to me that these organizations will use state violence to enforce their monopolies.
Isn’t this a no-brainer? Ad-funded websites have zero incentive to serve pages to ad-blocker users. Not only do they not make any money from them, those users cost them money.
https://httptoolkit.com/blog/apple-private-access-tokens-att...
https://toot.cafe/@pimterry/110775130465014555
The sorry state of tech news / blogs. Regurgitating the same drama without ever looking at the greater picture.
streamlink "https://twitch.tv/$streamer" best --twitch-disable-ads --player mpv
No ads, no tracking, no purple screens, no pseudo social network stuff to hijack your dopamine systems.
That's true of every DDoS filter. It doesn't mean that having a cryptographically secure way to make requests more expensive to produce isn't a tremendous help.
>This is more or less what the proposal does? It's akin to the same shady stuff seen here [1] except this time some third party gets to sign it.
The fingerprint isn't unique to the extent that you can rely on it always correctly identifying a single user. So you can't ban based on the fingerprint or automatically log someone in.
But this one’s simple: “literally go fuck yourself with this. we will fight you tooth and fucking nail every fucking angstrom on this one. it’s a bridge too far.”.
It only makes it impossible for legitimate users to run their own code -- people who want to run OpenBSD, or fork Chrome to make sure that ManifestV3 doesn't permanently hobble adblockers, or maintain their own alternative browser UI.
WEI does prevent any customization.
You have lost an acquaintance.
Which a lot of them already do: https://www.youtube.com/watch?v=hsCJU9djdIc
Or just use a botnet to steal use of someone else's hardware, which is also very common for malicious bots.
I am sympathetic, I agree let's all do that....
...I cannot imagine any of the money people I work with agreeing
How?
You see, this is the problem I have with all these debates where advertising is declared the villain. "Directly monetising" usually means subscriptions and logins, which means you lose all anonymity, not just gradually like under an ad targeting regime, but definitively and completely. Now payment processors and banks also get a share of the surveillance cake.
The greatest irony is that you may not even get rid of advertising. Advertising only becomes more valuable and more effective. All the newspaper subscriptions I have run ads.
The second issue is that advertising is paid for by consumers in proportion to their spending power, because a certain share of every £$€ spent is used to buy ads. Therefore, rich people fund more of our free at the point of use online services than poor people do.
If rich people move to subscriptions, this subsidy ends. Poor people will either be cut off from high quality services and relegated to their own low quality information and services (as is already the case with newspapers) or they will have to suffer through even more advertising.
When implemented without holdouts (closed loop), you do have a tight DRM web, which will attract legislators. Or so we hope.
When implemented with holdouts, it's barely useful to websites since they still need the backup mechanisms to detect fraud that they have anyway. If they need to keep it around, might as well use that as singular solution which has the added "benefit" of collecting way more personal data.
The console-ification of personal computing has been moving this way for some time. It’s essentially late-stage capitalism gatekeeping.
As a child of the ’80s, it’s hard to watch things keep moving in this direction :/
Except Google, of course, the only allowed scraper.
A malicious actor wouldn't bother. They'll tap `/dev/random` when it comes time to send the blessed token to the origin. The onus is going to be on the origin to figure out that it's _not_ a valid/signed token. If it's as easy for the origin to do this as it is for the adversary to tap a RNG then what was the point? If it's harder for the origin to figure out my token isn't legit than it was for me to generate one, how is the origin better off?
In any case, you're filtering the DDOS out *after* you've managed to set up the TCP/TLS/HTTP connection. That seems to be a rather late/expensive point to do so!
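A toy illustration of that asymmetry (assuming the `cryptography` package and an Ed25519 stand-in key; nothing here is the real WEI token format):

    import base64
    import os

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Attacker side: a "token" is just noise -- practically free to produce.
    def fake_token() -> bytes:
        return base64.b64encode(os.urandom(64))

    # Origin side: every request still costs a decode and a verify attempt,
    # and only after TCP/TLS/HTTP have already been paid for.
    ATTESTER_PUBLIC_KEY = Ed25519PrivateKey.generate().public_key()  # stand-in key

    def token_is_valid(token: bytes, request_body: bytes) -> bool:
        try:
            ATTESTER_PUBLIC_KEY.verify(base64.b64decode(token), request_body)
            return True
        except (InvalidSignature, ValueError):
            return False

The verify call itself is cheap, but it only runs after the TCP, TLS and HTTP work has already been done, which is exactly the late/expensive point above.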
- cost mostly marginal money
- continue to use your platform, potentially watch ads later
- their usage can be sold to anyone: where are they at a given time and what are they doing
- don't go to rival platforms
- tell their friends about the website
- etc
The people who want to use DRM to solve their problems should just suck it up and find alternatives.
I've opened a brand new Firefox instance and got "Your browser is not currently supported. Please use a recommended browser or learn more here." (linking to https://help.twitch.tv/s/article/supported-browsers?language...) on the login screen.
The login made a zero-payload POST to https://passport.twitch.tv/integrity and it responded with 400 and a JSON body {"error_code": 5025, "error_description": "integrity failed", "error": "Oops! We encountered an unexpected error. Please try again.", ...}.
It seems that this is not about GNU/Linux, though, as it happens at random (searches for `twitch "integrity failed"` produce results from all sorts of platforms and browsers). Must be that some pointy-haired boss had some important ideas about security.
I was able to log in from Firefox on a different GNU/Linux system, so it's not like those are always blocked. I suspect there's some User-Agent whitelist or similar kind of nonsense (but looking at the console logs and a bunch of WebGL errors, it certainly tries to fingerprint the system), but I'm too lazy to investigate this any further.
This tech is not about preventing content from being served to people who adblock; it's about making sure people don't have the ability to make that choice, by forcing setups that prevent adblocking.
Exactly, absent all platforms except OSS operating systems like Linux.
Windows 11, with its required hardware TPM, can absolutely do this. So can modern macOS.
> Now payment processors and banks also get a share of the surveillance cake.
I agree this is a problem. I work on Bitcoin and the Lightning Network, so that's my preferred solution to the problem, but there are other approaches to addressing the poor state of privacy and payments too. I don't think that that being a problem means that the relationship we have with advertising isn't as bad though.
> If rich people move to subscriptions, this subsidy ends.
There are plenty of examples where this is not the case. The freemium model exists in places where injected advertisements are not the norm, such as free to play games. Fortnite whales subsidize millions of low income players to get a high quality game for free. Whether or not you think the relationship between Epic and its players is another question, but it's a model that can continue to exist without advertisement. Especially when free users are necessary to provide content for paying users, like posts on Twitter or Reddit, or players in a game.
Also turn on a VPN some time (a signal to Google et al. that you're trying to bypass content region-restrictions, or funnel mobile traffic through an ad-blocker) and you are basically guaranteed to see nothing but CAPTCHAs from the predominantly CloudFlare owned and operated Internet.
So yes, it's a big problem, but only if your web environment (tracking metadata) are not sufficiently "trusted" :D
That being said, creators need money to keep making what they are making. Too bad ads are such an all-encompassing method. The web is literally worse with them, but would not have been as big without them.
I tried once before, when I quit working at Google and was trying to de-Google a bunch, and I never succeeded.
I plan to move everything over over the next few days. Wish me luck!
Next up: getting my photos out of Google Photos.
The attestation API will allow websites to verify certain things about the user agent which they then may use to either deny access or alter the access for the requested resource. This is similar to existing methods of checking the "User-Agent" header string but is much more robust to tampering because it can rely on a full-chain of trust from the owning website.
So will existing tools work with this?
Websites that do not require attestation should work fine. This will probably be the vast majority of websites.
Websites that require attestation may or may not work depending on the results of the attestation. Since programs like curl do not currently provide a mechanism to perform attestation, they will indicate a failure. If the website is configured to disallow failed attestation attempts, then tools like curl will no longer be able to access the same resources that user agents that pass attestation can.
My opinion is that it is likely that attestation will be used for any website where there is a large media presence (copyright/drm), large data presence (resource utilization/streams), high security, or any large company that is willing to completely segment its web resources into attested and non-attested versions. Tools like curl will no longer work with these sites until either a suitable attestation system is added to them, or the company changes its attestation policy.
Granted, the difference between the tiers may be small enough in some cases for this to be an acceptable compromise, but the principle is still the same.
Can someone send attestation requests from a range of residential IPs with such frequency that the attestation sequence is forced to captcha users, thus defeating it? You don't need the token response back from an attestation, so you could spoof your IP and not worry about getting a response.
I do, however, routinely interact with websites that implement Google Analytics and/or Google ads. If those sites start rejecting my browser of choice I will most certainly be locked out of a significant portion of the internet. And the remaining 60% of all internet users would be essentially forced to accept this technology or else. That's an order of magnitude or two more users, and seems to me like a good reason to raise the alarm.
Unless you have an obvious and accessible way of getting secure third party builds whitelisted, this is still a very anti-user approach, which is not justifiable unless the user of the device isn't its owner (like with company-owned work phones).
Worse than that -- unless you disallow any sort of scripting and accessibility hooks, WEI doesn't prevent malicious requests. It just forces you to script your system via autohotkey or its equivalent.
Apple already built and shipped this same feature last year, so they're not opposed. MS? Probably gonna love this. Mozilla hasn't said anything on it (yet at least). I'm not expecting any of those players to save us.
You can self host it or even use the alternative server implementation Vaultwarden.
I'm also in the process of de-googling, so far I have passwords, contact sync and calendar sync all self hosted.
Photos are tricker since my home server doesn't have a lot of storage right now.
It's a galactic irony that Ben Wiser, the Googler who posted this proposal, has a blog where his most recent post is a rant about how he's being unfairly restricted and can't freely run the software he wants on his own device.
https://benwiser.com/blog/I-just-spent-%C2%A3700-to-have-my-...
Requiring HTTPS means you require clients to have up-to-date TLS certificates and implementations. This provides a ratchet that slowly makes it harder and harder to use old computers and old software to access the web. Forced obsolescence and churn is highly desirable for anybody who controls the new standards, including Google.
You didn't finish your metaphor, let me.
I don't let anyone in my house, therefore what? Therefore I am joining a worldwide program whereby I am able to find out from a source I choose whether I want to let this person into my house. If they don't make their information available to my trusted source, they ain't getting in.
Also my house happens to contain things that billions of people want to see and use, but they have to sit through my time share pitch first. And they HAVE to listen.
> If it's being implemented it's because it solves a real problem.
If something solves a real problem, must it then be implemented?
Also, it solves a problem for web sites, and in such a way that non-malicious users will be less free to use the web the way they want.
A better plan might be for websites to find a better way to sustain themselves, possibly by running ads that are more relevant and less obnoxious so that users wouldn't block them.
It seems that almost any software/website can be framed as having a legitimate benefit for users, e.g., increased convenience and/or security.^1 The more pertinent inquiry is what benefit(s) does it have for its author(s). What does it do (as opposed to "what is it"). Let the user draw their own conclusions from the facts.
1. Arguably it could be a distortion to claim these are not mutually exclusive.
We can use web clients that do not leak excessive data that might be collected and used for advertising and tracking by so-called "tech" companies. Google would prefer that we not use such clients. But why not. A so-called "tech" company might frame all non-approved web clients as "bots" and all web usage without disclosing excessive data about the computer user's setup^2 as relating to "fraud". It might frame all web usage as commercial in nature and thus all websites as receptacles for advertising. This "all or nothing" thinking is a classic cognitive distortion.
2. This was the norm in the early days of the web.
People used browser APIs and some other people thought to take that away. When some people use autohotkey, what will the other people think about doing?
They can buy governments many times over with their vast resources. It may already be too late for that. What ideally should happen is that corporations this big should be split until each of the new entities meets the definition of an SME. That's what is broken in the current iteration of capitalism. There is no real competition any more, so it no longer works.
But… is there scope for the attestor in WEI to be a third party site that does a super fancy “click on all the stop lights / stairs / boats” captcha, and then repurposes that captcha result for every other site? That doesn’t sound like an awful service to add to the web. It would mean each individual site no longer had to do their own captcha.
(Probably impossible without third party cookies. But then that kind of implies that if WEI does make it possible then it could be shown to provide a tracking service equivalent to third party cookies? Again, gross.)
Large companies will invest significant resources with us to achieve AAA compliance with WCAG 2.1
Smaller companies will spend SOME additional budget to achieve AA.
Tiny companies will spend nothing until they get a demand letter.
I was writing Flash-based apps/sites at the time and there wasn't a single device we had in our QA set that we thought was "acceptable" in its performance. It was buggy. It'd crash out of nowhere. It'd consume so much memory that user's apps were force quit left and right. It would kill a battery with a quickness such that we had one customer who had to carry multiple spare batteries just to use the app we wrote for their internal team.
It was bad in every way a thing could be bad.
This is fundamentally different from a world where Google gets to decide if I am a risk to them.
Not really, for two reasons.
First, is that it can be bypassed, for instance with an extension which hides the relevant JS property and/or switches the user agent, or even on-the-fly edits the site's Javascript. The whole point of WEI is that it cannot be bypassed.
Second, is that just blocking Chromium does not prevent the development and use of new web browsers and/or operating systems, while a predictable consequence of WEI is making them non-viable in practice (they'd have to first convince Google that both the browser and operating system is DRM-ed enough that the user does not have enough control over the browser to make it do everything the user wants, and only then the browser would be allowed to access WEI-walled content).
Is that the one rendering [1] text and UI widgets into an HTML canvas element from JavaScript/Dart (completely coincidentally breaking ad blocking in the process)? What a beautiful piece of software.
> Apple already built and shipped this same feature last year,
Are you referring to Private Access Tokens (PAT)? These seem quite a bit more limited in what they do. WEI seems to specifically set out to roll back some of the blinding/anonymization aspects of PAT under the banner of debuggability/providing "feedback" to attesters.
[1] https://docs.flutter.dev/platform-integration/web/renderers
Because an attacker can inject JavaScript code on it, and use it to attack other sites. The most famous example of that is "Great Cannon", which used a MITM attack on http sites to inject JavaScript code which did a distributed denial of service attack on GitHub. Other possibilities include injecting code which uses a browser vulnerability to install malware on the computer of whoever accesses your site (a "watering hole" attack), without having to invade your site first.
This should be against their tactical interests, because it hurts their accuracy and drives users away, but absent a significantly more accurate competitor they'll get away with it for a long time.
Regarding Google Search there are some hopeful signs. For one, some people report Google's accuracy dropping, and Google keeps switching up its idiosyncrasies to avoid spam, but in doing so it devalues the effort people put into SEO and into refining their Google-fu. These might be the same thing, however.
* The device integrity verdict must be low entropy, but what granularity of verdicts should we allow? Including more information in the verdict will cover a wider range of use cases without locking out older devices.
* A granular approach proved useful previously in the Play Integrity API.
* The platform identity of the application that requested the attestation, like com.chrome.beta, org.mozilla.firefox, or com.apple.mobilesafari.
* Some indicator enabling rate limiting against a physical device
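Pulling those points together, here is a rough sketch of what such a verdict payload might contain (hypothetical names and values of my own; not the actual WEI or Play Integrity API):

    // Hypothetical shape, inferred only from the points above; the real
    // WEI verdict format is not finalized and may well differ.
    interface IntegrityVerdict {
      // Coarse, low-entropy device/OS signal, kept to a few buckets to
      // limit fingerprinting while still distinguishing device classes.
      deviceIntegrity: 'MEETS_STRONG' | 'MEETS_BASIC' | 'UNEVALUATED';

      // Platform identity of the requesting application,
      // e.g. "com.chrome.beta" or "org.mozilla.firefox".
      requestingAppId: string;

      // Opaque per-device indicator usable for rate limiting without
      // exposing a stable global identifier to the website.
      rateLimitIndicator: string;
    }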
Are Chrome users really Google's customers, though? Arguably, they're part of the product.
YouTube used to be the same, although that's changing a bit with the current aggressive push for YouTube Premium.
"You" in this scenario being, most likely, an engineer at a large, regulated, risk-averse corporation that might have to justify this choice during an audit.
What would your decision be?
This of course only covers half of the use cases discussed (the half about preventing bots, to say nothing of the more DRM-ey aspects).
Keep in mind that Pinephones and similar are a thing. Lots of people are hoping they don't fizzle out and die off like previous "open" phone projects. :)
Or by isolating the browser from third-party software. Android does not let applications mess with each other. Windows already prevents non-elevated applications from touching elevated applications (i.e., those running as administrator).
What makes you think that Windows won't add an "untouchable" mode to executables belonging to "approved" browsers? The kernel is already locked down so you won't be able to bypass it that easily.
(What's with the trend of completely omitting any dates on a blog?)
Think about "don't use a smartphone" in 2013. That was viable back then.
It isn't anymore. What you can do is live smartphone-lite, using it only as a secondary device (as the grandparent suggested). The same will be true here in a couple of years (if the big G is successful). Until then, yeah, don't use it, and actively campaign against it.
> BezMouse is a lightweight tool written in Python to simulate human-like mouse movements with Bézier curves. Some applications might include:
> BezMouse was originally written for a RuneScape color bot and has never triggered macro detection in over 400 hours of continuous use.
:)
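For the curious, the core trick is nothing exotic: pick random control points, walk a cubic Bézier curve from start to end, and add a little jitter at each step. A sketch of the idea (my own, not BezMouse's actual code):

    // Sketch of Bézier-based pointer paths; not BezMouse's actual code.
    type Point = { x: number; y: number };

    // Evaluate a cubic Bézier curve at parameter t in [0, 1].
    function cubicBezier(p0: Point, p1: Point, p2: Point, p3: Point, t: number): Point {
      const u = 1 - t;
      const blend = (a: number, b: number, c: number, d: number) =>
        u * u * u * a + 3 * u * u * t * b + 3 * u * t * t * c + t * t * t * d;
      return { x: blend(p0.x, p1.x, p2.x, p3.x), y: blend(p0.y, p1.y, p2.y, p3.y) };
    }

    // Build a "human-looking" path from start to end: random control
    // points bend the curve, and small per-step jitter avoids a perfectly
    // smooth trajectory.
    function humanPath(start: Point, end: Point, steps = 50): Point[] {
      const rand = (min: number, max: number) => min + Math.random() * (max - min);
      const c1 = { x: rand(start.x, end.x), y: rand(start.y, end.y) };
      const c2 = { x: rand(start.x, end.x), y: rand(start.y, end.y) };
      const path: Point[] = [];
      for (let i = 0; i <= steps; i++) {
        const p = cubicBezier(start, c1, c2, end, i / steps);
        path.push({ x: p.x + rand(-1, 1), y: p.y + rand(-1, 1) });
      }
      return path;
    }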
Apple already shipped attestation on the web, and we barely noticed https://news.ycombinator.com/item?id=36862494
No it's not? Android has upwards of 70% of the mobile market[0], and Chrome has nearly 65% of the mobile browser market, compared to Safari with under 25%.[1]
> the only choice any iphone users have
Sort of. WebKit is the only choice iOS users have, but there are plenty of browsers available on iOS (including Chrome and Firefox) that use WebKit, not just Safari.
[0]https://gs.statcounter.com/os-market-share/mobile/worldwide
[1]https://gs.statcounter.com/browser-market-share/mobile/world...
<item>
<title>I just spent £700 to have my own app on my iPhone</title>
<link>
https://benwiser.com/blog/I-just-spent-£700-to-have-my-own-app-on-my-iPhone.html
</link>
<pubDate>2022-03-04T11:30:34.067Z</pubDate>
</item>
I mean.. I think you’re answering your own question here.
You can argue that the web shouldn’t be open. In fact, there are many arguments for that, which I don’t mind arguing against.
There are many things that do not belong on the web, precisely because it’s open. For instance, a registry of people’s political views. Or naked pictures you don’t want the world to retain forever. And so on. The fact that the (open) web is not suitable for everything has been true since its inception. Openness comes with trade offs.
The honest way of putting it, is that WEI wants to make the web less open so that it can have more content, or protect content better.
On an opt-in basis, this is fine in theory. But WEI would never, ever be opt-in with meaningful consent. It's entirely dead in the water there, because non-techies will not understand what this is or why it's "needed". Heck, people don't even grok cookies. In practice, this will be enabled by default, which is exactly the fear. Alt browsers would be bullied into supporting it, and users would be forced to use it.
Though, at this point I am the founder of my own company. Any software we use will not require attestation. I would be willing to switch vendors over that.
As for web attestation: the software I use regularly needs to run on OpenBSD. It's that simple.
If you use a browser which supports attestation you will be denied service by companies who disapprove of what you run on your computer.
If you don't use a browser which supports attestation you will be denied service by companies who disapprove of what you run on your computer.
So everyone loses. If this goes live everyone in the world loses.
It is an utterly heinous proposal. It is perhaps the worst thing Google has ever produced. I use Firefox and will never use any browser that implements attestation, even if I have to stop using most of the WWW one day.
But unfortunately individual action is not going to be enough here, because no matter what you do, you lose.
https://www.creativebloq.com/sony-tv-patent
> In it, TV viewers are only able to skip an advert by shouting the name of the brand. Yep, crying 'McDonald's!' is the only way to make the Big Mac disappear.
Companies will do the most insane, terrible things if not stopped. This will happen.
We then got AngularJS, but with Dart (AngularDart). This was again an attempt to improve the coding experience of making web apps.
When TypeScript came along and the Angular team picked it up, TS became the primary path forward (though AngularDart is still getting updates).
At that point Dart wasn't seeing a lot of attention. The Flutter team picked Dart up as its primary owner and has been driving it since.
Ban attestation methods that owners can't control.
It's good for Google to care; it's not good for them to do this.
I couldn't run my bank's app on an up-to-date, security-patched LineageOS ROM thanks to SafetyNet, even when trying the workaround approaches.
They'd happily accept the out-of-date, CVE-riddled official ROM, however, as it had the "Pope's blessing" from Google.
I think it's so that your blog does not run the risk of looking inactive when you stop posting for a while.
It would also drive the point home to the very same legislators that the author is deferring to.
If browsers now start pre-emptively folding, Google just straight up won. It's great that the Vivaldi team is against this change, but a blog post and hoping for regulation just won't cut it. You have actual leverage here, use it.
https://medium.com/@danielraffel/compromised-apple-id-expose...
Those sites that showed you the "disable ad blocker" pop-up that prompted you to leave won't miss you.
The point Google seems to be making quite clearly is that the browser does not serve my needs, but the needs of Google's paying customers.
You're behind the times. It's not widespread but it's been happening for years.
Also, the other day the Selenium author (IIRC) said they are working on such a thing for "automated testing".
So this proposal will do nothing to prevent bots; at most it will increase the cost a little.
On the other hand, it will surely discriminate against people, emerging technologies, and new companies. No other search engines could be built. No new browsers. No openness.
Anyone supporting this proposal is either pure evil or stupid or both.
And Firefox will get blocked by even more sites if they don't implement this shit too?
Because this is an incredible way of exerting their total control over the web across all browsers. If they don't like a feature, they get to downgrade the user's attestation or fail it. If it costs them some unattested traffic in order to create a permanently unassailable market position, it's worth the money.
It'll block all other search engines by preventing web scraping, except by those blessed by Google. For this reason alone many websites will adopt it. This will hurt competition, research, and freedom.
After this, all user choice is gone, and it'll only be governments who can break the racket.
If the CCP don't already do this, I expect they'll quickly implement something similar.
I take umbrage at this implication. When a monopoly like Google takes anti-competitive actions, it's not fair or just to expect individuals to stand up to it. Governments exist to counter anti-competitive behavior like this, and lately they have been doing a terrible job of chopping down companies with too much vertical integration.
I've never seen a use of SafetyNet which I would consider right; pretty much everybody thinks it creates some kind of "security", whereas it doesn't.
One very rare useful application could be keeping bots off game leaderboards, but certainly not banking apps.
Students still forgot in the first year but got heavily marked down for it. It quickly got etched into your brain to date and version just about anything you did.
Today when I see an undated blog entry it seriously affects my perception of the writer's integrity.
But hey, it's great that some people want the devices they own, which hold extremely valuable data about their own person, to be controlled by external entities.
Don't worry, those of us who know our tech and value our privacy will continue not listening to the "just take it" crowd.
That's not the case with GrapheneOS:
https://grapheneos.org/articles/attestation-compatibility-gu...
SafetyNet is deprecated anyway:
https://developer.android.com/training/safetynet/deprecation...
Basically my arguments were that it's anti-competitive, against the open web, and a risk to countries' security agencies. The latter, while a valid argument, is there mainly to rattle politicians and government agencies.
It could also hinder pentesters hired to test the owners website, but they already have to contend with WAFs.
You want to support the ad-funded website you keep coming to, yes or no? Yeah ideally every website would have a paid option for the HN crowd with cushy jobs, but that's not always feasible.
In that case, ads, being psychological manipulation to get users to do things they would not otherwise do, are already highly unethical. The ethical thing to do is to discourage their use, which includes blocking them for yourself, thus making them less profitable overall.
Yes, but you see it. The canonical reasoning I've heard for missing dates is that it avoids SEO penalties for old content.
I await the realisation of the Hitchhiker's guide's remedy for the Marketing department...
You probably recall that mobile internet in general was far from fluid in those days; browsers couldn't handle multiple tabs well, and iOS would show an annoying mosaic if you scrolled web pages too fast (before the browser could render the page). I would rather have the option of having something imperfect available than have the OS vendor lock them out entirely.
Because it's less computationally intensive than serving responses and/or trying to fingerprint malicious actors. It also tells you with near certainty that the request is malicious and that future requests from that IP can be blocked.
SafetyNet is deprecated, but it’s just been rolled into Play Integrity which does all the same things. All the same concerns still apply to Play Integrity.
GrapheneOS is asking developers not to use SafetyNet/Play Integrity (because they presumably block GrapheneOS), but instead to use the native hardware attestation API so they can specifically allow GrapheneOS keys. If a developer doesn’t allow their keys, they’ll be blocked.
Otherwise, what would be the point of using it to, say, protect DRM'd content on a webpage if I can just attach a debugger to the process in question?
Is this not how WEI works?
The internet was already going increasingly-downhill anyway.
thisisfine.png
Yep. I'm not saying Dart is a good thing - I've never used it and don't currently have plans to. All I'm saying is that it is NOT dead, as the GP asserted.
> Are you referring to Private Access Tokens (PAT)? These seem quite a bit more limited in what they do. WEI seems to specifically set out to roll back some of the blinding/anonymization aspects of PAT under the banner of debuggability/providing "feedback" to attesters.
Yes. PATs don't provide as much information about the attestation to the website, but they do provide the critical part, which is "is this person using a blessed client?" That's plenty for a website to block people on.
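To make that concrete, here's a rough sketch of how a site could gate content on whether a blessed token was presented (hypothetical header and helper names of my own; neither Apple's nor Google's actual API):

    import express from 'express';

    const app = express();

    // Hypothetical stand-in for whatever token validation the attester's
    // protocol defines; a real deployment would verify the token
    // cryptographically against the attester's published keys.
    function verifyWithAttester(token: string): boolean {
      return token.length > 0;
    }

    app.get('/article', (req, res) => {
      const token = req.header('x-attestation-token'); // hypothetical header name
      if (!token || !verifyWithAttester(token)) {
        // Unattested clients (new browsers, scrapers, modified OSes) get turned away.
        res.status(403).send('Your browser or device could not be verified.');
        return;
      }
      res.send('Full content here.');
    });

    app.listen(3000);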
So I can still use this to DDoS. My malware running somewhere on your network just needs to submit a bogus request from your IP address. The origin sees the bogus requests from your IP, and now that IP is on the bad list. Later, your legitimate requests from the same IP are denied.
I don't know that an "inverse" DDoS is novel, but it certainly hasn't been common. Perhaps that may change in the future...
https://news.ycombinator.com/item?id=30553448 (5 comments)
If a company wants control over devices they own, that's still fine.
It's like staying on a dancing elephant. And it requires MONEY. Lots of it.
I suspect this is the result Google desires, to protect Chromium despite its being open source.
I don't think Google has actually done anything. The bar for experimenting with new code in Chromium is pretty low. This Chicken Little reaction to a non-starter is just a result of developing in the open.
But you can "care" about something in good and bad ways, and the criticism is not "Google bad".
You won't get the chance to refuse this feature. There'll be too much money at stake for manufacturers not to retool for it. It'll be the only thing they make to sell, so take it or leave it, chump.