If it's a global blacklist on Apple servers, it should instead be downloaded to the client and used as a local blacklist.
Too big? Use a bloom filter. Then you only end up keeping less than one byte per blacklisted item. Update the bloom filter with an auto-updater. Any positive hit can be checked against the server, in case it's a false positive.
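To put rough numbers on it, using the standard bloom filter formulas (false positive rate p ≈ (1 - e^(-kn/m))^k, optimal hash count k ≈ (m/n)·ln 2): budgeting 8 bits per item gives k ≈ 6 and p ≈ 2%. So at under a byte per blacklisted hash, only about 1 in 50 lookups would ever need the confirming round trip to the server.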
My understanding is the only thing exclusive to Apple systems is developing Apple apps. But anything related to a platform like iOS is already going to be public knowledge.
Also, the latest Mac hardware seems to go to great pains to make sure the primary storage device is encrypted, unrecoverable if the keys are unknown (T2 security chip), and non-removable (SSD soldered to the board). So why would they make this back door for surveillance?
Sounds very Orwellian for a privacy-focused company...
That data would be a wet dream for some IP lawyer looking for pirated copies of software...
It’s not really that great for mass surveillance. It “phones home” only on first run. And it doesn’t look like it sends data about your identity, device, or location. They can have your IP address, but another mechanism would be needed to relate that to you. (I doubt they are logging the IP address anyway. The only purpose would be to surveil you, but if they wanted to do that they would surely use a more capable mechanism. That makes keeping the IP addresses a burden and a risk.)
Well, this is the problem people have, I think - that it comes down to good intentions on Apple's part, no matter how trustworthy they are deemed to be.
I'm concerned about this behavior (both from privacy and performance perspectives), but I'm also not (quite) convinced this is working as described/implied here.
Before I get started: if you poke at this, open Console.app first. You can see recent "assessment" checks logged in "Mac Analytics Data" with the search "process:syspolicyd". You can use the same search in the device log to watch the log messages (including all of the TLS negotiation etc.) for the checks.
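(If you'd rather watch from the command line, something like this should show the same messages as the Console.app search:)

log stream --level debug --predicate 'process == "syspolicyd"'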
The part that seems weird is that, if it is transmitting a hash (which seems possible/logical), the caching behavior doesn't appear to care about or respect it.
The article suggests the following test:
echo $'#!/bin/sh\necho Hello' > /tmp/test.sh && chmod a+x /tmp/test.sh
time /tmp/test.sh && time /tmp/test.sh
I tried this test and got real runtimes of 0m0.289s and 0m0.006s. Then, I changed the file: echo $'#!/bin/sh\necho Hellok' > /tmp/test.sh && chmod a+x /tmp/test.sh
When I re-ran the script, both runs were under 10ms. The content changed, but it didn't bother re-checking. I wrote the original script to a new file path: echo $'#!/bin/sh\necho Hello' > /tmp/test2.sh && chmod a+x /tmp/test2.sh
This ran with runtimes similar to the original (0m0.232s and 0m0.006s). Same content, new path, new check. Here too, if it cares about the hash, it either isn't bothering to use it for caching decisions, or the hash includes the path.

Then I tried rm-ing the file, writing it again, and running it. Once again, it checks on the first request. I think this suggests it may be caching the result by inode? The author said they saw new checks after saving changes in TextEdit--I don't know much about TextEdit, but I'd guess it is doing an atomic write/rename here.
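If anyone wants to poke at the inode hypothesis, here's a quick sketch (assuming the cache key really is the inode, an in-place edit should stay fast while an atomic replace should trigger a fresh check):

ls -i /tmp/test.sh # note the inode number
echo 'echo again' >> /tmp/test.sh # in-place edit: same inode
time /tmp/test.sh # expect: still fast (cached verdict)
echo $'#!/bin/sh\necho Hello' > /tmp/test3.sh && chmod a+x /tmp/test3.sh
mv /tmp/test3.sh /tmp/test.sh # atomic replace: same path, new inode
ls -i /tmp/test.sh # the inode number has changed
time /tmp/test.sh # expect: the ~300ms delay again (fresh check)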
Other random details I noticed:
1. It holds the connection open for a minute, presumably to minimize connection overhead for executions that'll generate many checks. My first checks were all in the 280-300ms range, but I tried one additional check within the minute and it only took 72ms. Making multiple requests in less than a minute may make the delay harder to notice (see the sketch after this list).
2. The device log has a "summary for task success" entry with pretty precise timing details on all parts of the request.
3. On my system, each of these attempts produces a "os_unix.c:43353: (2) open(/var/db/DetachedSignatures) - No such file or directory" error in the log from the libsqlite3 subsystem after the response comes back.
4. The "Mac Analytics Data" log entry for each request has a good summary that looks like:
assessment denied for test.sh
com.apple.message.domain: com.apple.security.assessment.outcome2
com.apple.message.signature2: bundle:UNBUNDLED
com.apple.message.signature3: test.sh
com.apple.message.signature5: UNKNOWN
com.apple.message.signature4: 1
com.apple.message.signature: denied:no usable signature
SenderMachUUID: ...snip...
5. When I add Terminal to the Developer Tools exemption on the privacy tab, it does appear to kill the check. I'm not sure if there's genuine protection this check provides at some level, but I'll be considering adding either Terminal or at least some specific build tools to the exemptions I add on a new system.

6. After adding the Developer Tools exemption, if you have the app open, it'll ask if it can quit it for you. I took the hint and restarted Terminal. It'll do the same thing when you remove it from the list. But I didn't see the checks actually return until I rebooted. Also, my system froze during reboot. Hopefully a coincidence. :)
7. To put a better number on how this performance impact can compound for the kinds of builds I do all of the time, I ran `nix-build ci.nix` in the local directory for one of my projects before and after enabling the Developer Tools exemption for ~/.nix-profile/bin/nix. The run took 1m22s before, 45.5s after.
8. Looks like this is the same check as is run by `spctl --assess -v <path>` (at least, per the Console.app logs). That may make it easier to play with.
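Re: point 1, the connection reuse is easy to see by generating several fresh scripts within the one-minute window (a sketch; exact timings will obviously vary):

for i in 1 2 3; do
  f=/tmp/conn-test-$i.sh
  echo $'#!/bin/sh\necho hi' > "$f" && chmod a+x "$f"
  time "$f" # the first iteration pays the TLS handshake; the rest reuse the connection
done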
I recently bought a new System76 laptop as a stopgap, but it might end up becoming permanent. Kind of a sad end for 25+ years of Mac use.
Edit: What I should have said is that the binary is signed, notarized, and the notarization stapled to it, as described here: https://developer.apple.com/documentation/xcode/notarizing_m...
Feels like somebody could flesh out this argument in terms of accidental vs. necessary complexity, but applied to how much you need to trust the other party.
Few would accept the argument "This code is already very complex, why do you have a problem with doubling the complexity?" on its own merits, so why is it sensible in terms of trust?
Do we know that? The cache may well have an expiration date. Does the cache stay after an OS upgrade? I suspect further research will discover it's not just 'first time'.
>They can have your IP address but another mechanism would be needed to relate that to you.... if they wanted to do that they would surely use a more capable mechanism.
The app profile would already tell a great deal. Once enough of these 'non-capable' surveillance mechanisms add up, you end up with a very capable surveillance mechanism.
It'll show the total capacity for each partition (the normal one and the new read-only system partition) on its own line, thereby giving a false total. E.g., a single 100GB disk with a 50GB normal partition and a 50GB system partition will show a capacity of 100GB for both partitions, which would imply a 200GB disk.
Small things like this just make me completely lose faith in Catalina.
EDIT: Other "fun" things I noticed within half an hour:
a. Text search in PDF no longer works
b. I can't create anything under /
c. I have to use synthetic.conf to map paths from / to my real partition, but the parser of synthetic.conf is very particular about tabs/spaces, unlike any other /etc/ file format
d. Xcode wants to ask for my password to debug every single time I reboot and debug a C++ app. This is incredibly incredibly incredibly incredibly annoying.
Safari is faster in general use. But that's so far the only good point.
I'll keep it on an SSD for App Store submissions and keep my machine on an older decent version, thanks
I’m torn. I don’t want to return the machine because everything else is crap. At least the 16” works well as a laptop so long as you don’t plug anything into the ports. But Apple’s QA has seriously gone down the toilet ever since Steve Jobs died. Clearly him throwing staplers at people was the glue holding Apple together.
I had a similar problem, and it turned out that the dock needed a driver. I don't think I've installed a driver for an external device since I switched to Macs years ago, so it never occurred to me that something like a dock would need a driver.
But it turns out that once I installed the vendor's driver, the problems all went away. I'm not sure whose fault that is.
It's the height of insanity to think that Apple or anyone else would want this data or use it for some nefarious purpose... anonymous hashes of junk data are essentially useless outside of this purpose. It's fine to claim not to trust anyone for anything, but most of us aren't willing or able to build our own hardware, write our own operating system, and write our own applications. We have asked our vendors for devices and code that are more trustworthy, and when they give them to us we COMPLAIN about it incessantly. This makes no sense to me.
Oh is this macOS? I'd just assumed all the PDFs I've tried to search for the past while have been poorly formatted with the text as images, but that makes more sense.
> I'll keep it on an SSD for App Store submissions and keep my machine on an older decent version, thanks
FYI it's pretty easy to integrate binary upload to App Store Connect on the CLI of your CI system.
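For example, with altool (the app path and credentials here are placeholders; an app-specific password stored in the keychain is the usual way to authenticate):

xcrun altool --upload-app -f build/MyApp.ipa -t ios -u "you@example.com" -p "@keychain:AC_PASSWORD"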
While this is an awful stopgap solution, at least I can get back to work.
I myself have a maxed-out 16" MacBook Pro, and for the first few weeks after the upgrade it was literally unusable because routine user input would result in the entire system locking up. I suspect it was actually this issue but, thankfully, the issue is now resolved.
You may very well be right. I used TextEdit simply because it was easiest for me to guarantee a new notarization check every time, but I don't know the exact criteria that macOS uses to identify an executable as "the same". There's probably some combination of path and/or inode in addition to the hash.
Edit: Checked and double checked: When I run a new shell script, syspolicyd just makes a connection with no application data
This is no longer true. It is a very similar and annoying experience for me.
I use OSX, Windows and various versions of Linux.
The browser is the real platform at this point and is the shared experience between all three.
Much easier and better to just stop using it all and move to a system like Linux or BSD. 99% of people do everything in a browser these days anyhow.
My guess is that it's strictly for banning App Store apps that they pull from the App Store but would also like to cripple retroactively on installed machines. But that doesn't explain why it had to run against random shell scripts. This is all still confusing. We don't have all the info.
1. Upgrades are not optional. The system will relentlessly nag me until I upgrade even if I don't want to upgrade.
2. Upgrades are crap shoots. An Apple upgrade nowadays is as likely to break things as it is to fix things.
3. Upgrades are difficult and sometimes impossible to revert. If an upgrade breaks something, I'm just screwed.
So I'm still running Mavericks. It works. It's reliable. It does everything I need it to do. And I can count on that still being the case tomorrow. If I upgrade, all bets are off.
https://www.youtube.com/watch?v=J-RXqNafscs
And if something breaks on your MacBook Pro, most likely you will have to replace the entire motherboard or display.
PS: I own lots of Macs but sad to see the direction Apple is heading in.
It's easy for us tech nerds in our little gadget bubbles to suppose that everybody is like us. But most people are simple browser users, and Office 365 and Google Docs have all but killed off office software on the desktop for many users.
$ echo $'#!/bin/sh\necho Hello' > /tmp/test.sh && chmod a+x /tmp/test.sh
$ time /tmp/test.sh && time /tmp/test.sh
Hello
real 0m0.016s
user 0m0.002s
sys 0m0.010s
Hello
real 0m0.006s
user 0m0.002s
sys 0m0.004s
I don't believe the 0.01s difference is long enough, and it could easily be explained by filesystem caching. The article says:

> Some people try to explain away the delay, e.g., "I would put the 300 vs 5 ms down to filesystem caching", but such hand waving doesn't stand up to further scrutiny.
...but does not provide any "further scrutiny", so for me, Occam's razor applies.
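FWIW, one way to apply that scrutiny yourself is to warm the filesystem cache before the very first run, so whatever delay remains can't be fs caching (a sketch, assuming assessments are enabled per spctl --status):

echo $'#!/bin/sh\necho Hello' > /tmp/test-fresh.sh && chmod a+x /tmp/test-fresh.sh
cat /tmp/test-fresh.sh > /dev/null # the file contents are now in the fs cache
time /tmp/test-fresh.sh # if this still takes ~300ms, it isn't fs caching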
It pushed me to buy Little Snitch to block it, so I guess somebody won out.
My 2017 13" MBP (without discrete GPU) was barely usable when powering my 4K monitor, but at least it was quiet. It makes me think that the more modern integrated Intel GPU in the 16" should be enough to power my monitor without fan noise. Sadly, Apple has decided I can't have that option.
That hasn't been my experience at all. While those tools are definitely used - especially for collaboration - most people on my company's Office 365 subscription are downloading and using the full products for their daily work. This is true in both very large companies and the (non-tech) startup I work at now.
It's akin to the people spending tens of thousands of dollars on disaster prep.
The only people who lose are the end-users of software, who are forced to use crappy software.
In reality, I see most people use desktop software instead of the browser (without using the internet, in some cases) to do their work -- think CAD, Adobe, DAW software, Excel, video production software. Even on mobile/tablets, Office can be used where no internet connection is available.
I seriously doubt that users would spend all their time in a browser window other than for consumption purposes like social media and video sites. The idea of 99% of people doing everything in the browser seems questionable to me and some data about this would be helpful here.
Apart from the computer science department, I also doubt that people would find it easier to go to Linux, BSD or the other galaxy of distros.
1. confirm the checks are enabled:
spctl --status
2. Make sure your terminal/shell/etc. aren't already exempted in System Preferences > Security & Privacy > Developer Tools.

3. If you already ran something that could generate a check in the last minute, the connection is still open. Most of the overhead people are recording is negotiation/handshake. If you're fairly close to the server, it seems plausible your observed time could be enough for the communication minus the negotiation. You can open Console.app and search `process:syspolicyd` in the device log to see the entries for the negotiation; wait for it to terminate.
4. Try removing and re-creating a new file as in the test you did before and check it a little more directly:
spctl --assess -v --ignore-cache --no-cache /tmp/test.sh
If it's working, you should see a log entry with the text "summary for task success" in it, with a detailed breakdown of the request (times taken per phase, bytes sent/received, etc.).

I'm not sure why you couldn't reproduce the delay. There are several possibilities I can imagine, but these could only be proved or disproved by more testing. In any case, many people have reproduced the delay, on close to "factory default" Mac installs.
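Putting those steps together, a minimal repro would look like this (a fresh path each run so the verdict cache can't interfere):

spctl --status # should print: assessments enabled
f=/tmp/nt-test-$(date +%s).sh # unique path every run
echo $'#!/bin/sh\necho Hello' > "$f" && chmod a+x "$f"
sleep 60 # optional: let any open syspolicyd connection expire first
time "$f"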
And basically everybody, whom I know personally, complains about the UX of anything web-based, so don't even think about putting CAD, CAS or InDesign into the browser.
I'm running both experiments. I've tested and compared script notarization to app notarization.
You're getting apparently unusual results with script notarization. So the natural next step would be to compare against app notarization.
In Safari or in general? I have only noticed the former.
> I have to use synthetic.conf to map paths from / to my real partition, but the parser of synthetic.conf is very particular about tabs/spaces, unlike any other /etc/ file format
You may already know this, but man synthetic.conf will explain that you must use tabs.
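For reference, a working entry looks something like this (the name and target here are just examples; the two fields must be separated by a single literal tab, and the target is given relative to /):

# /etc/synthetic.conf -- <name><TAB><target>
data	System/Volumes/Data/data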
> Xcode wants to ask for my password to debug every single time I reboot and debug a C++ app. This is incredibly incredibly incredibly incredibly annoying.
I can only offer you my condolences as I cruise by with SIP off.
Perhaps because such a system is extremely centralized?
> we are complaining because our shell scripts take n seconds to run?
Yes? If I buy a computer and it spends time doing stupid stuff, then I think I am fairly justified in being angry.
> If you are running signed and notarized (stapled) binaries, the system never even reports them to Apple in the first place.
Great, let's notarize and staple tickets to every little piece of software you write…
> This is the height of insanity to think that Apple or anyone else would want this data or use it for some nefarious purposes...anonymous hashes of junk data are essentially useless outside of this purpose.
Executable hashes tell you if someone is running a specific piece of code.
> It's fine to claim not to trust anyone for anything, but most of us aren't willing or able to build our own hardware, write our own operating system, and write our own applications.
That's why I buy them from other people and would desire them to be good.
> We have asked our vendors for devices and code that are more trustworthy, and when they give them to us we COMPLAIN about it incessantly.
How exactly does this make the device more trustworthy?
The other day I tried for the 100th time to move to Linux. I installed a recent build of a maintained, popular distribution (no, it doesn't matter which one - I have tried them all), on hardware that is famous for its Linux support.
Everything worked for a day and a half, then the sound just fucking died. No input or output.
I get millions of people use Linux daily, and are happy with it -- I'm genuinely grateful that's a thing. I would love to also use Linux, but I really don't have the time to diagnose why it broke yet again.
Any suggestions for people stuck on macOS? I guess I could block all Apple domains in my DNS resolver? Other than app updates, I can't think of anything that would stop working. That still sounds less painful than trying to deal with Linux's atrocious UX.
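If the goal is just to kill this particular check rather than block all Apple domains, null-routing the endpoint in /etc/hosts may be enough (assuming api.apple-cloudkit.com, the hostname others have traced these checks to; caveat: this could break other CloudKit-backed features):

# /etc/hosts
0.0.0.0 api.apple-cloudkit.com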
Telemetry has a specific use case: taking measurements in a place you can't go. What industry employs it for nowadays is much closer to spyware, in the sense that you can get so much more of it done without producing a noticeable effect for the user in terms of how much work their computer is actually doing. So what if you spin through a couple rounds of telemetry gathering while the user's process is blocked, am I right? Not like they're using it. /s
https://www.vagrantup.com/downloads.html
Spin up a Linux box in macOS and ssh into it directly. It is a true joy if you are comfortable working with text files (programming, admin, focused writing, etc.)
It will default to using VirtualBox as the underlying virtualization. That works a treat and hides all the GUI madness of VirtualBox.
However, if you open up VirtualBox then you can interact with the guest you just created with “vagrant up” just fine, including using a graphical environment.
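Getting a box up is only a few commands (the box name is just an example):

vagrant init ubuntu/bionic64 # writes a Vagrantfile to the current directory
vagrant up # downloads the box and boots it under VirtualBox
vagrant ssh # drops you into a shell on the guest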
where people such as doctors and lawyers might be violating the law by using a modern computer
That reminds me of a story I heard not long ago --- a company wanted more defense against malware, so it signed up for a "security solution" from one of the big vendors and got it installed on all the company's machines. After a developer doing network tracing discovered that it was phoning home on every executable being run, and further digging revealed that it was periodically uploading file hashes and sometimes actual files --- not just the executables being run but other random files --- to the security vendor's servers, the reaction was "oh hell no!" and they immediately terminated the service and removed the product from all their machines.
Nobody seems to have an issue with checking this for apps -- it's a good security feature to protect from malware, right? And one which everyone knows about? And it only happens the first time you run something, so it's not a performance issue in everyday usage.
And the article even states that there seems to be a valid reason for checking shell scripts, because they can be used to compile malware.
The original complaint was about slowness, but how often do you run something for the first time? The only scenario in which I can imagine this would become a practical performance problem is if, somehow, you have an app that spawns new shell scripts all day long to execute, every few seconds, and a really flaky internet connection. Or new shell scripts hundreds of times a second, even with a good internet connection.
Is that something anyone ever needs to do? Programs can run shell commands directly, without a file, so it seems unlikely. Also, another comment here suggests that even if a shell script is modified, it isn't re-verified, so there would seem to be a trivial workaround anyways.
Or is the issue just that this is undocumented behavior? Or what am I missing here?
It runs Adobe software like total shit.
I think it's something to do with Catalina + accessing files in Google Drive File Stream + Adobe.
It runs Illustrator horribly.
It's basically the saddest thing I've ever seen.
I think I'll get her a 17" XPS for Christmas this year.
Lenovo has a couple of AMD-based ThinkPads that are looking pretty appealing, although they come in around $1500 with the configuration I'd want.
[1] https://stackoverflow.com/questions/16647069/should-i-use-va...
I don't know about Android. I only have one Android device. It is so old I don't even remember how old it is and I've only ever upgraded it once. It still works like a charm for all the things I need it to do.
It's a mystery. I'd certainly be much more willing to buy a machine if it came with good documentation. Back in the 1980s, they (Apple and others) used to include complete schematics for their computers.
Unfortunately, while the kernel panics will most likely be fixed eventually (10.15.4 is a complete shitshow, even by Catalina standards), it seems the dGPU is actually working as designed with the high idle power draw. If you search for “navi multiple monitor power draw” you can find reports of desktop AMD cards that predate the 16” MacBook Pro that exhibit the exact same behaviour. It’s something to do with memory clocks and mismatched resolutions/refresh rates between monitors, and I very much doubt it will ever be addressed via software (if it even can be).
Very annoying as it causes the fans to spin up audibly when you put it under the slightest stress.
Like you I don’t know what to do. I’m able to return it due to the extended return window they have currently, but I have absolutely no intent of switching to Windows or Linux.
I had OSX lock up or lose the display entirely on MBPs with NVidia chips.
On my wife's old windows desktop I had to plug a USB audio dongle, because of audio glitches.
Some of it is sloppy drivers, some, faulty or poorly designed hardware.
"Sound just died" is, unfortunately, not specific to Linux in any way.
I tried Murus/Vallum[1] too for a while but it was still a bit too complicated and demanding. There's still a space for a firewall with a decent UI available.
Won't be a Mac for me after this machine though.
At least when operating in clamshell mode with one external monitor, I can get the power usage to drop from 20W down to 5W by using SwitchResX and dropping the refresh rate from 59.88 to 56.88 Hz. When I do that, even light WebGL work doesn’t cause it to exceed 7-8 watts.
It sounds like some work around for the special case of a single external monitor with the internal display closed isn’t kicking in like it’s supposed to.
Linux really sucks for anything other than servers. I hate to say that because I badly wish it weren't true, but it is.
A Linux conference is usually focused on the Linux kernel, drivers, filesystems, networking, more or less everything POSIXy.
If you want to learn about improvements at the UI level, there are XDG, GUADEC, and Akademy, each focused on their own silo, and other parts of the stack or UI tooling don't have any at all.
Meanwhile WWDC, Google IO, BUILD / Ignite are about all levels of the stack.
AirDrop requires touch-based selection and sharing on the iOS device, and the transfer is very quick; if it’s your Mac, the files go straight to your Downloads folder.
So it’s more streamlined overall with AirDrop.
I had a colleague who used to do a similar dance with Image Capture. He had no idea he could AirDrop photos even though he airdropped files from Finder to others all the time.
Personally I just have them all sync via iCloud Photo Library.
It has worked very well for me. I originally installed Qubes years ago, but it was all the security of VMs/containers with 1/10th of the convenience. I switched to Arch; it was a completely painless install and that's what I have now.
(hardware-wise it is more of the same - standardized screws on the case, 19v power adapter with standard barrel jack, socketed standard memory, m.2, sata)
A huge privacy settings pane full of legalese; motd-news phones home; snapd continues to reinstall itself and use resources; whoopsie and kerneloops phone home. Then there's the Amazon app, apport, ubuntu-report, unattended-upgrades...
I haven't tried 20.04 yet, don't know if it is worse or better.
Arch didn't seem to do anything.
Heck, even pfSense phones home. Last I remember, there was some data file it downloaded each time that they used for metrics.
Ubuntu phones home a lot.
I have decided to give up trying Linux, at least for a few years.
My understanding of the issue is that the card has variable memory clocks to save power. However, to avoid visual distortion/tearing, the clocks can only be changed during the monitor's v-blank. But when you have multiple monitors, presumably you would need extra circuitry or at least some mechanism to ensure each monitor is in sync, or to detect when the blanking intervals match when using monitors with different refresh rates. I don't have a strong knowledge of this sort of thing, so I don't know how exactly this is achieved, but in this case AMD has "solved" the problem by simply running the memory clocks at full tilt 100% of the time, thereby avoiding the need to precisely time changes in speed.
Do you have an obscure sound card or something? With consumer-grade hardware I have rarely had compatibility issues. Well, yes, recently with WiFi USB adaptors.
Finding a USB cable though, sometimes that does take a search and a wee bit of cursing!
https://support.apple.com/guide/security/app-security-overvi...
IMO, you have to pick a security vendor to trust (only as far as you can throw them). Apple is incentivized to provide better end-user experiences as a matter of having end to end authority for their products. They’ve built a robust (albeit prescriptive) security/sandboxing ecosystem to avoid just pointing at software vendors if a customer is unhappy.
I like these features, but the fact that they’re not always optional is frustrating. Note also that soon macOS will be running on Apple ARM SoCs; similar trade off.
Buy any other computer, and you now have Microsoft or Google, a CPU vendor, a consumer hardware vendor, and possibly also an antivirus company installing background services.
Maybe OS X 10.4 worked for me out of the box.
The longer you use a computer, the more you expect certain behavior, and the less you get it out of the box.
* https://app.bitrise.io/integrations/steps/deploy-to-itunesco...
* https://developer.apple.com/documentation/xcode/notarizing_m...