... Strangely, despite the XHRs hitting ports and IPs I know are running unsecured web servers, the site sees nothing. Lots of "unreachable".
Firefox 67.0, Arch Linux (5.0.10).
const portsToTry = [
80, 81, 88,
3000, 3001, 3030, 3031, 3333,
4000, 4001, 4040, 4041, 4444,
5000, 5001, 5050, 5051, 5555,
6000, 6001, 6060, 6061, 6666,
7000, 7001, 7070, 7071, 7777,
8000, 8001, 8080, 8081, 8888,
9000, 9001, 9090, 9091, 9999,
];
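A rough sketch (not the article's actual code) of how a page might probe a list like this: fire a request at each port and compare fast connection errors against timeouts. The probe function, host, and timeout value are all illustrative.

  // Illustrative only: probe each port and note whether the request errors out
  // quickly (likely connection refused) or hangs until the timeout.
  async function probe(host, port, timeoutMs = 3000) {
    const controller = new AbortController();
    const timer = setTimeout(() => controller.abort(), timeoutMs);
    const started = performance.now();
    try {
      // 'no-cors' yields an opaque response if anything answers at all.
      await fetch(`http://${host}:${port}/`, { mode: 'no-cors', signal: controller.signal });
      return { port, reachable: true };
    } catch (err) {
      const elapsed = performance.now() - started;
      return { port, reachable: false, fastError: elapsed + 50 < timeoutMs };
    } finally {
      clearTimeout(timer);
    }
  }

  portsToTry.forEach((port) => probe('127.0.0.1', port).then(console.log));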
https://bugs.chromium.org/p/chromium/issues/detail?id=123166
view-source:http://http.jameshfisher.com/2019/05/26/i-can-see-your-local...:125
Edit: My guess is that this thing can only detect servers that send a CORS header permitting cross-domain access.
It could probably do far better detection if, instead of making XHR requests, it added script/css/whatever elements to its own page pointing at localhost and detected whether those error out.
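A sketch of what that element-based approach might look like (my guess, not anything from the article): attach an <img> pointing at the target and use the timing of the error event as a crude open/closed signal. The host, port, and favicon path are placeholders.

  // Guess at the element-based probe: a refused connection usually errors almost
  // instantly, while a non-image response from a live server errors a bit later.
  function probeWithImage(host, port, timeoutMs = 3000) {
    return new Promise((resolve) => {
      const img = new Image();
      const started = performance.now();
      const timer = setTimeout(() => { img.src = ''; resolve('timeout'); }, timeoutMs);
      img.onload = () => { clearTimeout(timer); resolve('open (served an image)'); };
      img.onerror = () => {
        clearTimeout(timer);
        resolve(performance.now() - started < 100 ? 'probably closed' : 'probably open');
      };
      img.src = `http://${host}:${port}/favicon.ico`;
    });
  }

  probeWithImage('127.0.0.1', 3000).then(console.log);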
https://blog.jeremiahgrossman.com/2006/11/browser-port-scann...
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://localhost/. (Reason: CORS header ‘Access-Control-Allow-Origin’ missing)
Anyway TypeError: /(192\.168\.[0-9]+\.)[0-9]+/.exec(...) is null i-can-see-your-local-web-servers:169:41
Most (non-technical) Web users also don't run their own web servers, so they aren't affected. Among technical users, the proportion with NoScript is probably not as small.
I mostly use the web for reading blogs and articles, so the loss of dynamic sites isn't troublesome for me, but it certainly would be for most users.
(Edit: Some numerical context: I have enabled Javascript for 194 sites over the last five years, whereas I encounter several new sites daily.)
Someone's going to access this page at $BIGCORP with an overly trigger-happy IDS and get a fun morning meeting with IT to un-quarantine their machine.
* 127 * block ### block access to IPv4 localhost 127.x.x.x
* localhost * block
* [::1] * block ### block access to IPv6 localhost
* 192.168 * block ### block access to LAN 192.168.x.x
In principle, you can use this without any other blocking, i.e. with the rule:
* * * allow
and hence without disabling javascript on any sites.
[0] https://github.com/ghacksuserjs/ghacks-user.js/wiki/4.2.3-uM...
Edit: as pointed out by DarkWiiPlayer below, if you want to be able to access the localhost websites from the same browser, you need:
localhost localhost * allow
and similarly for the LAN. In full:
127 127 * allow
localhost localhost * allow
[::1] [::1] * allow
192.168 192.168 * allow
This is a better resource on this topic, which involves DNS rebinding: https://medium.com/@brannondorsey/attacking-private-networks...
DNS rebinding also gets around the cross-origin request issue, which some comments here mention.
but to be fair, the point seemed to be more that if you run something that's "only" exposed locally... don't. Securing each and every machine with uMatrix doesn't seem like the answer to this.
Given the long and gory history of companies releasing insecure-by-default devices, methods like this are a legitimate entry point into a network.
sudo lsof -i | grep 3000
To try to see if a process has claimed the port. On Windows:
netstat -ab
I've forgotten so much Windows that I don't know how to filter the result, but it'll give you a list of ports and processes.
https://addons.mozilla.org/en-US/firefox/addon/yt-adblock/reviews/
disguise that you're inserting an iframe linking to your web server into every single page the user opens, by naming your variables and your tracking domain misleadingly and by waiting for an hour after installation (this may also help avoid the automatic tests Mozilla is doing), and then just sit back, wait, and log all the referers and IP addresses. It's a bit stealthier too, but needs users to visit their local web servers. But you'll also get the full URL.
Nobody will report you or care about the report, and users are banned from fixing the extension code locally even if they're able to review it themselves. Bad reviews with some actual text fade away quickly, so if someone warns your other users, it will be pushed to page 2 after a while by other useful one-word or just empty reviews, and it will work out.
I got this address as well, do you have anything running on .4?
It's just weird because I have .1 router, .2 AP, .3 pi-hole
then .10 is when I start my static IPs
and .100 is where my dhcp starts
nmap says that host is down as well
[1] https://support.lenovo.com/th/th/product_security/len_4326
Unfortunately the protocol is vulnerable by design. :(
[1] https://portswigger.net/daily-swig/new-tool-enables-dns-rebi...
[2] https://www.blackhat.com/asia-19/arsenal/schedule/#redtunnel...
localhost localhost * allow
to be able to open sites on localhost directly. I noticed in my tests that it found one server on port 3000 with blanket access, but didn't see one on port 9999 with restricted access (policy => allow from *.mydomains).
[0] https://developer.mozilla.org/en-US/docs/Web/API/RTCPeerConn...
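For reference, a minimal sketch of the classic RTCPeerConnection local-address trick (my own illustration, not taken from the linked page); note that recent browsers often hand back an obfuscated .local mDNS name instead of a raw private IP:

  // Gather ICE candidates and look for a private IPv4 address in them.
  const pc = new RTCPeerConnection({ iceServers: [] });
  pc.createDataChannel('');                    // needed so candidate gathering starts
  pc.onicecandidate = (e) => {
    if (!e.candidate) return;                  // null candidate marks the end of gathering
    const m = /(\d{1,3}(?:\.\d{1,3}){3})/.exec(e.candidate.candidate);
    if (m) console.log('candidate address:', m[1]);
  };
  pc.createOffer().then((offer) => pc.setLocalDescription(offer));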
Also, just don’t test on localhost. You can use a proper domain (or claim one in the .test TLD[1] if you’re fine with self-signed certs) and point it to localhost.
If you’re going to use any redirect flow like OAuth/OpenID you’re going to need this for testing eventually anyway.
Access-Control-Allow-Origin: mypage.com
or
Access-Control-Allow-Origin: *
which is not a default anywhere AFAIK, and is domain-based, not IP-based. And your server would need to respond to the mypage.com Host header.
> you should be in control of a DNS the user relies on
You always are when a user visits your domain – you control the DNS of your domain.
> Access-Control-Allow-Origin: *
You don't need access-control headers, because you stay on the same domain.
> Your server should be enabled to respond to mypage.com host header
Most servers listening on localhost ignore the host header.
I mentioned as much here a few years ago when I first came across this idea of assigning (and remembering) random unique port numbers to every one of your apps in development, and was surprised to hear that it's such a common practice. It seems sub-optimal for a lot of reasons, beyond the obvious one noted in the article.
The big one for me is that none of my apps need to know anything about how to handle port numbers in URLs. They know their own domain name via a config setting that can be flipped on the live server. It's the same pattern (with no colons or numbers to worry about), so there are no edge cases to test.
I can see the request in server logs, but it seems CORS is preventing the response.
I may be missing something.
For these reasons I, at least, simply run applications on different ports. The problem isn't the port, it's the web browser allowing cross domain requests to local networks by default (another reply here suggested it is WebRTC specifically that is flawed).
What would be a better, more secure thing to do when you have multiple web servers on one machine behind a SSL-terminating reverse proxy?
I'm not talking about UIs hosted on local web servers being able to send requests to themselves, I'm talking about UIs hosted on REMOTE web servers being able to send requests to local ones. It seems far worse than a random cross-origin request to me, and for the life of me I can't imagine the use cases.
Think self-hosted wiki/etc. I was never sure (and thus have yet to properly implement it) what would be both secure and good UX. Normal auth + self-signed HTTPS would be simplest, I imagine, but I'm not clear on whether browsers widely accept that. I recall Sandstorm having issues in this area, and it required a domain to fully run properly. Which seems... complex for a minimal install requirement.
Thoughts?
Any person that joins the wifi network and goes to a website that sniffs this out will have access to my computer's local server?
It looks like a mistake because the image URLs are unlikely to exist on YOUR localhost at port 4000 when you load the page.
With the PHP CLI, I can run:
php -S localhost:8000
With Python 3, I can run:
python -m http.server 8000 --bind localhost
The demo fails for me in both cases, even though a request to localhost:8000 is sent. (EDIT: The server log in the terminal window does show that the request arrived at the local server.)
My question is: What is the risk of running one of these servers and then visiting some random web page?
If you know the port number your server is running on, you can also open up command prompt / terminal and check with "netstat -an". Look at the local address column and make sure your web server is listening on 127.0.0.1.
I tried this with a few different services running on my machine (a one-liner WEBrick server in Ruby, Syncthing, a plain-text accounting program calling beancount, etc. etc.) and the script didn't detect any. I take it that means that these services all don't allow CORS?
Maybe browsers should assume a CORS deny all unless otherwise specified?
It depends on what you're exposing on those ports. If it's something sensitive, stop. Any web page can run javascript and as such, any web page has access to every port and service that your machine has access to ... because at that point, the web page is a program running on your machine with full network access.
However, this entire "vulnerability" makes no sense to me. Even if I'm running something on my machine or local network, I am not going to rely on the firewall as a security mechanism. That is profoundly stupid and is well known to be profoundly stupid. So all those servers, including the ones I am creating and running, will have their own security mechanisms. So you can ping my server? So what?
enableButton.src = '//remove' + '.' + 'video/webm';
So the owner of this addon has the remove.video domain. On https://remove.video/webm there's packed javascript code. When I unpacked it I got this: https://paste.ubuntu.com/p/C24bZc9Cn7/
There's a base64-encoded domain list in the packed javascript code. Here's the list of domains: https://paste.ubuntu.com/p/RMKd8Ms5QQ/
It's useful against vulnerable IoT devices or home routers, but is it still effective to breach enterprise perimeters?
What is a local webserver? Running on your machine? Running on your LAN? Running on your corporate intranet? How should a browser differentiate between these things?
What qualifies as a remote server? Did you know, some very large enterprise environments squat on public IP's for private intranet internally due to address space exhaustion (IPv4 anyway)? Just because something appears to be on a public address doesn't mean it actually is.
EDIT: In fact the status of local network scan doesn’t come up at all.
I meant you need to control the poisoned DNS
If I use 8.8.8.8 as DNS you can only work on the domains you already control, which is kinda useless
> You don't need access-control headers, because you stay on the same domain.
No, you don't
My localhost server only responds to the localhost and 127.0.0.1 Host headers
Not to mypage.com
Nginx does that too by default
https://github.com/nginx/nginx/blob/master/conf/nginx.conf#L...
But even if you did, you still haven't resolved the issue: you can't make a call to a different domain without access-control headers, unless it's the same domain
you can't load mypage.com and then fetch from www.mypage.com, even if you resolve www.mypage.com to 127.0.0.1 the browser won't let you do it
> you can't load mypage.com and then fetch from www.mypage.com, even if you resolve www.mypage.com to 127.0.0.1 the browser won't let you do it
In this part you’re confusing what a rebinding attack is: by serving a DNS response with a short TTL, an attacker is able to associate two different IPs with the same query, so it'd be mypage.com and mypage.com (not www.mypage.com), bypassing the browser's same-origin restrictions.
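To make that concrete, a hypothetical sketch of the attacker-page side, assuming the attacker's DNS first answers attacker.example with its public IP at a very low TTL and later answers 127.0.0.1 for the same name (all names and the endpoint are placeholders, and real browsers add their own DNS caching/pinning on top of this):

  // Served from http://attacker.example/ — after the short TTL expires and the
  // name re-resolves to 127.0.0.1, a same-origin fetch lands on the victim's
  // local server, so no CORS check is involved at all.
  async function rebindThenRead() {
    await new Promise((resolve) => setTimeout(resolve, 60000)); // wait out the TTL
    const res = await fetch('/some-local-endpoint');            // same origin, new IP
    console.log(await res.text());
  }
  rebindThenRead();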
First thing you do when you enable CORS is to configure it to only respond to specific domains, so when you deploy to production you don't leave it open by accident.
BTW if you enable CORS as in 'simple usage' in the docs, chances are the home page is a blank page and there will be nothing to be stolen
The fingerprinting db it used can be found in the repo: https://github.com/joevennix/lan-js/blob/master/src/db.js
Anyway, do not rely on firewalls (and CORS is a firewall) as the sole security measure. Do not create unauthenticated endpoints unless you want everybody to use them.
But it doesn't really work.
I query my DNS, on my home router, not your DNS.
And the DNS on my home router query the ISP's DNS, which caches requests.
I bet you can't go below a few minutes' resolution.
I had this problem when validating the Let's Encrypt DNS challenge; I had to let certbot run for almost 20 minutes before my home router picked up the new value.
When I'm at work, I use the company's DNS, which ignores non-standard TTLs, caches the first answer forever (well... almost), and disallows external domains that resolve to reserved IP addresses.
1) Get a domain name for the project, e.g. mycoolwiki.tld
2) In the installer/setup provision for the user a random subdomain, e.g. d2c8116f19d0.mycoolwiki.tld
3) Use Let’s Encrypt DNS method to provision cert
4) Redirect d2c8116f19d0.mycoolwiki.tld to LAN IP
It’s not ideal because you need some external infrastructure and it assumes no DNS rebind protection.
However, if your webapp has a client and a server, that is, communicates via API only, you can actually do a lot better:
4) Set up the local server to accept CORS requests from d2c8116f19d0.mycoolwiki.tld only (see the sketch at the end of this comment)
5) Host the client at d2c8116f19d0.mycoolwiki.tld
Additionally,
6) Make the client a PWA with offline support
and/or
6) Offer browser extension to use local copy of the client when user visits ∗.mycoolwiki.tld
Though for my use case I actually wanted to have ∗.mycoolwiki.tld/ipfs/<hash> be backed by IPFS and offer generic extension that both verifies that the IPFS gateway is playing nice and (if configured) redirect to local gateway.
Also, offering an Electron client instead of the browser would work as well and saves you getting the cert.
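A minimal sketch of step 4 under the assumptions above (the subdomain is the placeholder from the steps, the port is arbitrary): the local API only echoes Access-Control-Allow-Origin back for the provisioned origin.

  // Local API server: only the provisioned origin may read responses cross-origin.
  const http = require('http');
  const ALLOWED_ORIGIN = 'https://d2c8116f19d0.mycoolwiki.tld'; // placeholder

  http.createServer((req, res) => {
    if (req.headers.origin === ALLOWED_ORIGIN) {
      res.setHeader('Access-Control-Allow-Origin', ALLOWED_ORIGIN);
      res.setHeader('Vary', 'Origin');
    }
    res.setHeader('Content-Type', 'application/json');
    res.end(JSON.stringify({ ok: true }));
  }).listen(8080, '127.0.0.1');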
> Time to live minimum for RRsets and messages in the cache. Default is 0. If the minimum kicks in, the data is cached for longer than the domain owner intended, and thus less queries are made to look up the data. Zero makes sure the data in the cache is as the domain owner intended, higher values, especially more than an hour or so, can lead to trouble as the data in the cache does not match up with the actual data any more.
Which is one of the reasons I think this is mainly effective against home networks.
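The quoted text reads like the unbound documentation for cache-min-ttl; if that's your resolver, a sketch of the relevant knobs (values are illustrative) might look like:

  server:
    cache-min-ttl: 60                 # floor TTLs so rapid rebinds get cached away
    private-address: 127.0.0.0/8      # refuse public names resolving to local ranges
    private-address: 10.0.0.0/8
    private-address: 172.16.0.0/12
    private-address: 192.168.0.0/16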
> Did you know, some very large enterprise environments squat on public IP's for private intranet internally due to address space exhaustion (IPv4 anyway)?
Sucks to be them. If they've exhausted their private use IPv4 addresses, they can either rest comfortably knowing that NAT and IPv6 can solve their problem or they can ignore IETF recommendations and build a card house network that breaks if you give it a stern look.
> Sucks to be them. If they've exhausted their private use IPv4 addresses, they can either rest comfortably knowing that NAT and IPv6 can solve their problem or they can ignore IETF recommendations and build a card house network that breaks if you give it a stern look.
This will only lead to organizations running IE7 (or whatever outdated IE version is most common now) forever.
However, they'd have to know which routes to request (or proxy all requests and do it in real time), which isn't very likely if it's just some development application specific to you.
Basically there really isn't much risk if you aren't exposing anything interesting. Maybe if you're working on something proprietary it could be leaked?
Either way you may as well reconfigure your applications if the webpage can detect them, the risk is low but still existent.
simply being accessed through a reverse proxy instead of directly doesn't add any additional security
I think it’s probably something like “security, privacy, anonymity... pick two.”
Many addons will use some packing method, bundle all kinds of stuff into their content scripts (jQuery, etc.). It can be hard to review.
Some addons are quite horrifying (you see stuff like `<span ...>${someText}</span>` with missing escaping, etc.). I'm quite sure there are content scripts out there with XSS issues that can be triggered from the page itself. This is great on pages like GitHub, where there's plenty of user-controlled content.
So if you want a suggestion for a clever attack:
1] make an extension for facebook or twitter or github that reorganizes the wall somewhat and make a `mistake` like assigning some user controlled content via innerHTML. This will probably pass review.
2] Suggest your addon to your target.
3] Post your payload as a message/tweet/whatever to your target. Now you have extension assisted XSS.
Pretty easy to add XSS to any page, with plausible deniability.
So this is something that's secure by default, but can be broken if the "random service you run on your computer" decides to break it. I don't think that's an issue with the browser's security model.
[Error] Failed to load resource: Origin http://http.jameshfisher.com is not allowed by Access-Control-Allow-Origin. (localhost, line 0)
* localhost * block
localhost localhost * allow
This should block anything non-localhost from accessing localhost. (Note: this only protects you superficially, based on DNS; what we'd want is protection based on IP. Otherwise you're still exposed to anyone pointing their own DNS at 127.0.0.1. But it's something...)
Otherwise other people on the network can see your frontend code which you are probably compiling with sourcemaps, which will give the attacker almost the complete source code of your SPA.)
app.use(cors());
defaults to Access-Control-Allow-Origin: *
If you know how CORS works, you already know that even if the resource is on localhost, it's open to any web page, including ones not on localhost. You won't find anything enlightening here.
If you don't know how CORS works but you're using the Express middleware for it anyway, read the documentation: https://expressjs.com/en/resources/middleware/cors.html#conf...
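For comparison, a small sketch of locking the same middleware down to a single origin via its origin option (the origin value is a placeholder):

  const express = require('express');
  const cors = require('cors');

  const app = express();
  // Only this origin gets Access-Control-Allow-Origin echoed back.
  app.use(cors({ origin: 'https://app.example.com' }));
  app.get('/', (req, res) => res.json({ ok: true }));
  app.listen(3000, '127.0.0.1');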
I'm sure the Container Culture Kids have their own overly-complicated thing, though.
Do you have some kind of security model in mind that would work better than same-origin policy in this case? I.e. cross-origin requests are still allowed to happen somehow, but users are still protected against random services intentionally disabling your security measures?
Yes.
Joking aside, I will add that I've been a NoScript/FlashBlock user for quite some time (more than a decade? I honestly can't remember), and while I run into some things that are frustrating (just had to disable NoScript for a tab to order plane tickets), it is refreshingly uncommon.
Yes, you can browse with default deny to JS and Flash.
What you see in the browser developer tools if you open them up is that a bunch of requests are being made, but are being denied because they are failing CORS checks.
Where data is being leaked (in the context of security) is that there's a difference in how the responses are handled. Either the request is allowed (because the remote side's CORS is too loose), in which case the page will show a message that the specific host/port combo is available; or it gets no response and times out, in which case they skip checking that host any further and assume there's nothing at that IP (which is when they print the "unreachable" message); or it continues on with the next port. If at the end there's been no success and no timeout, it prints the "complete" message, and that means there's probably something at that IP.
An important thing to note is that CORS is not like a firewall, and it doesn't actually stop all traffic from happening, so that can sometimes be used to get additional information that's not necessarily meant to be exposed. That said, what the page is showing is that the specific way CORS functions (that is, asking the remote side whether it's accessible), and the fact that Javascript runs locally in your browser, means there are some interesting ways those interact which can cause security concerns.
As an idea of how this could be used to more nefarious ends: if it found listening and accessible servers on localhost or the local network, it could then try to identify them based on the headers/content returned, and try to do something with that.
Given that you could possibly even compile some network vulnerability scanner/exploiter to WASM and use it for only the subset of vulnerabilities it could accomplish through plain HTTP requests (a lot of work, it's probably easier to just write your own shim and crib their exploit library), this could be very easily weaponized.
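Concretely, the identification step can be as simple as the image-based fingerprinting the lan-js db linked elsewhere in this thread does: known devices serve predictable resources, and onload vs. onerror tells you whether a given path exists. The path and device name below are made up for illustration.

  // Rough fingerprinting sketch: if a device-specific image loads, it's a likely match.
  function fingerprint(host, path, deviceName) {
    const img = new Image();
    img.onload = () => console.log(`${host} looks like: ${deviceName}`);
    img.onerror = () => {};                       // not a match (or not an image)
    img.src = `http://${host}${path}`;
  }

  fingerprint('192.168.1.1', '/example-router-logo.png', 'Example Router'); // placeholder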
https://en.wikipedia.org/wiki/Private_network
10.0.0.0 – 10.255.255.255 (10.0.0.0/8)
172.16.0.0 – 172.31.255.255 (172.16.0.0/12)
192.168.0.0 - 192.168.255.255 (192.168.0.0/16)
127.0.0.0 - 127.255.255.255 (127.0.0.0/8)
https://tools.ietf.org/html/rfc1918
Possibly also 100.64.0.0/10 for carriers.
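For illustration only (not from any of the linked pages), a small helper that checks whether a dotted-quad address falls inside these ranges, including the carrier-grade NAT block:

  // True if the IPv4 address is loopback, RFC 1918 private, or CGN (100.64.0.0/10).
  function isPrivateIPv4(ip) {
    const parts = ip.split('.').map(Number);
    if (parts.length !== 4 || parts.some((n) => !Number.isInteger(n) || n < 0 || n > 255)) {
      return false;
    }
    const [a, b] = parts;
    return a === 10 ||
           a === 127 ||
           (a === 172 && b >= 16 && b <= 31) ||
           (a === 192 && b === 168) ||
           (a === 100 && b >= 64 && b <= 127);
  }

  console.log(isPrivateIPv4('192.168.1.4')); // true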
Scenarios like that should be the foundation of a sensible security model, not an afterthought achieved by applying layers and layers of security ducktape in every single instance.
Sending CORS headers isn't "some random thing" though, it's specifically the one thing that stops the security model that's in place from working.
There's a lot of bad security practices that you could define as "some random thing", and the fact that some people might do that thing doesn't make the whole model around it invalid.
Edit: But there may be other ways to do it at an OS level, depending on your OS.
It's also entirely possible you have some development api running on a particular port on localhost, and some app running in a container or VM that wants to make calls to it.
So let's let perfect be the enemy of good? What's the logic here? Let's leave a huge security hole because we can't achieve perfection in all scenarios?
> It's also entirely possible you have some development api running on a particular port on localhost, and some app running in a container or VM that wants to make calls to it.
The VM should be on localhost or you should jump through a few hoops to whitelist it somehow. I see no reason why this should be allowed by default.
The logic here is don't hastily start implementing changes to how http requests currently work without a well established plan for doing so. I think there would be a good deal of corner cases you need to account for to successfully implement this feature.
Anyway, this problem seems pretty obvious, and I'm sure this discussion has been had elsewhere already.
As if the point of this thread was to push browser vendors to hastily implement this without thinking it through? Are you just trying to find something to have an argument over? I'm tired of this.
This is where you're missing the fundamental nature of the issue in this article. The sensible security model is there by default. An additional layer is added to make a resource available cross-origin, and the article merely serves to remind people that making a resource available cross-origin is still making it available cross-origin when the origin is localhost.
Yet I am running Node.js http-server[1], and see the request in the logs:
[Tue May 28 2019 23:56:54 GMT-0500 (Central Daylight Time)] "GET /" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36"
[1] https://github.com/indexzero/http-server
But in my case my network is configured to always reach the in-house DNS first, to keep latency low
[1] https://github.com/letsencrypt/boulder/blob/8167abd5e3c7a142...
https://en.wikipedia.org/wiki/Unique_local_address
Not sure if those can be expressed in uMatrix as a prefix rule.
My preferred solution would be not to use web browsers at all, but our preferred solutions are much harder to make a case for than a simple security policy.
> This will only lead to organizations running IE7 (or whatever outdated IE version is most common now) forever.
In general, let's let them make that choice, but this could be configurable in the browser in the same way Javascript and cookie policy exceptions are handled.