EDIT: I do this more for avoiding certain disk reads/writes than for security, actually.
> There should be per-user temporary directories. In fact, on modern systems there are per-user temporary directories! But this solution came several decades too late.
> If you have per-user $TMPDIR then temporary filenames can safely be created using the simple mechanisms described in the mktemp(1) rationale or used by the old deprecated C functions. There’s no need to defend against an attacker who doesn’t have sufficient access to mount an attack! There’s no need for sticky directories because there aren’t any world-writable directories.
May I introduce you to PrivateTmp= ?
> PrivateTmp=¶
> Takes a boolean argument. If true, sets up a new file system namespace for the executed processes and mounts private /tmp/ and /var/tmp/ directories inside it that are not shared by processes outside of the namespace
https://www.freedesktop.org/software/systemd/man/latest/syst...
Notably, you don't even need to change how programs work (no $TMPDIR necessary)! It creates a filesystem namespace for your process, such that you see the normal fs, but with your own /tmp! That way your program behaves as convention dictates everywhere else, and existing programs you run can also benefit without being rewritten!
I cannot emphasize enough how many excellent, well-integrated, kick-ass security features systemd gives you totally for free. DynamicUser= turns on PrivateTmp= by default and is an easy way to ensure isolation without having to hand-code & safely manage uid/gids yourself; I'd start there if you can.
There's so so so many great isolation features in this man page.
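A quick way to see it in action (a sketch: run as root, and the unit name and command are just placeholders):

systemd-run -p DynamicUser=yes -p PrivateTmp=yes --unit=tmp-demo \
    sh -c 'touch /tmp/inside && sleep 60'

ls /tmp/inside   # fails on the host: the file only exists inside the
                 # service's private mount namespace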
Pages in physical memory are not typically zeroed out upon disuse. Yes, they're temporary... but only guaranteed temporary if you turn the system off and the DRAM cells bleed out their voltage.
As someone said, you can mount /tmp as tmpfs if you can spare the memory.
I recall using 'shared hosting' where instead of using your default IP address for fetching anything from the network, you could do some funky stuff in the shared environment to discover many more IPs that could be used. Useful for scraping and such. Generally any shared hosting that used cpanel would expose all their network interfaces, often a /24 or two.
Because at some point a few months later people will say "where did the database go?" and you'll have a lot of explaining and reconstruction to do.
(mid 90s)
If it gets too full for regular OS operations, you get the fun of the OOM Killer shutting down services (tmpfs is never targeted by the OOM Killer) until the entire OS just deadlocks if you somehow manage to fill the tmpfs up entirely.
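One mitigation is to cap the tmpfs at mount time so a runaway writer gets ENOSPC instead of eating all of RAM; a sketch, with limits you'd tune for your own box:

mount -t tmpfs -o size=2G,nr_inodes=1M,mode=1777 tmpfs /tmp

# or persistently, via /etc/fstab:
# tmpfs  /tmp  tmpfs  size=2G,mode=1777  0 0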
Something like
tmpdir := "/tmp/${USERNAME}"
loop:
rmdir(tmpdir, recurse=true)
while not mkdir(tmpdir, 0o700, must-create=true)
chown(tmpdir, user=$USERNAME, group=$USERGROUP)
export("TMPDIR", tmpdir)
with /tmp having root:root owner with 0o775 permissions on it? Yeah, would've been nice. It's relatively easy to set up[2] and provides a pretty huge mitigation against abuse of /tmp.
[1] https://www.man7.org/linux/man-pages/man8/pam_namespace.8.ht...
[2] https://docs.redhat.com/en/documentation/red_hat_enterprise_...
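For the curious, the setup those links describe boils down to roughly this (a sketch only: the paths are the conventional examples, the 'user' method avoids the SELinux-based 'context'/'level' methods, and you should verify against the linked docs before touching PAM config):

# /etc/security/namespace.conf: per-user instances of /tmp and /var/tmp,
# skipping root and adm
/tmp      /tmp-inst/      user  root,adm
/var/tmp  /var/tmp-inst/  user  root,adm

# plus enabling the module for login sessions, e.g. in /etc/pam.d/login:
session  required  pam_namespace.so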
That defeats the idea GP presented.
I'm wondering if there are programs that will break with such a change. One example would be if multiple users in a group need access to the same file under /tmp.
$HOME/.tmp for user operations and /tmp for system operations?
EDIT: I see from other posters it can be done. Why the heck isn't this the default?!
For practical reasons, swapspace isn't really the same thing as keeping it in an actual storage folder - the OS treats swapspace as essentially being empty data on each reboot. (You'd probably be able to extract data from swapspace with disk recovery tools though.)
On a literal level it's not the same as "keep it in RAM", but practically speaking swapspace is treated as a seamless (but slower) extension of installed RAM.
Arguably, many are not relevant to /tmp, but it's good to keep in mind.
I wonder if Fedora does this by default?
1) You could modify the namespace init script used by pam_namespace to also mount a shared directory under each user's /tmp, and do this only for the users who need it.
2) Rely on a different shared directory for the users who need it (see the sketch after this list).
3) Configure namespace.conf to isolate by SELinux context and put each user who needs a shared /tmp into the same SELinux role.
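A sketch of option 2, with made-up group and path names:

groupadd scratch-share
usermod -aG scratch-share alice
usermod -aG scratch-share bob
mkdir -p /srv/scratch
chgrp scratch-share /srv/scratch
chmod 3770 /srv/scratch   # setgid so new files inherit the group, sticky so
                          # members can only delete their own files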
Not to say they couldn't have one!
I read the GP as 'literal level' in-RAM. If I interpreted that incorrectly, apologies to GP.
Normal case: tmpfs data stays in RAM
Worst case: it is pushed to swap partitions/files, which is no worse than it being in a filesystem on physical media to start with (depending on access patterns and how swap space is arranged it may still be a little more efficient).
It isn't quite the same as /tmp being on disk anyway but normally sitting in the page cache, because with a disk-backed /tmp the data will usually get written to disk even if it is only ever read back from cache, and cached disk data will be evicted to make room for caching other data, which tmpfs data is less likely to be.
I agree, but I think that shared mutable global state is a bad default. I think it'd be better to be opt-in (eg, you get a `/tmp/${USER}` and your user can `chmod o+rw` during setup if it needs to be globally mutable.
Yeah it’s a Linux/FHS thing.
In other words, for many systems a home-level temp directory is virtually the same as /tmp, since other than system daemons, all applications are being started as a single user anyway.
And that might be a security regression. For servers you're spinning up most services at bootup and those should either be running fully sandboxed from each other (containerization) or at least as separate system users.
But malware doesn't necessarily need root, or a daemon process user id to inflict harm if it's running as the human user's id and all temp files are in $HOME/.tmp.
What you really want is transient application-specific disk storage that is isolated to the running process and protected, so that any malware that tries to attack another running application's temp files can't since they don't have permission even when both processes are running under the same user id.
At that point malware requires privilege escalation to root first to be able to attack temp files. And again, if we're talking about a server, you're better off running your services in sandboxes when you can because then even root privilege escalation limits the blast radius.
In these systems, the responsibility passes to EDRs or similar. But neither a $HOME/.tmp or /tmp matter in these scenarios. _Shared_ systems are where the concept of $HOME/.tmp might be more interesting.
Microsoft have tried to get people to use the newer, more heavily sandboxed APIs like UWP, but only very weakly, and they haven't committed to transitioning their own apps over as dogfood. The nearest they've got is actually migrating a lot of Office to the cloud as Office365.
Sunsetting Win32, or even having significant backwards-compatibility breaks, would upset so many corporate customers who would then refuse to upgrade.
Very true, and this is a real weakness of the UNIX (and Windows, even worse!) style security model in the modern environment. Android/iOS do a lot better.
I think there is a use for such a thing (I take advantage of these features somewhat regularly), but having it also be the default $TMPDIR is definitely a bad idea.
Back when it was just environment variables, I could pipe /proc/PID/environ to xargs and get basically the same state. Given that things like unix domain sockets may end up in $TMPDIR, I can be left unable to do certain things.
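Something along these lines still works for the environment part (a small illustration, not the exact command; environ entries are NUL-separated):

xargs -0 -n1 < /proc/$PID/environ
# or: tr '\0' '\n' < /proc/$PID/environ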
https://magnusviri.com/what-is-var-folders.html
That's not the reason for /private though. Rather, /private is a holdover from NeXTSTEP days which could mount the OS via NFS (NetBoot), and where /private was local to the machine:
"Each NetBoot client will share the server's root file system, but there are several administrative files (such as the NetInfo database, log files, and the swapfile) that must be unique to each client. The server must have a separate directory tree for each client, which the client mounts on its own /private directory during startup. This lets a client keep its own files separate from those of other clients."
https://www.nextcomputers.org/files/manuals/nsa/13_NetBoot.h...
Unfortunately Ubuntu 24.04 has put restrictions on unprivileged user namespaces, so that it no longer works out of the box :(
> Firejail is a SUID sandbox program that reduces the risk of security breaches by restricting the running environment of untrusted applications using Linux namespaces, seccomp-bpf and Linux capabilities. It allows a process and all its descendants to have their own private view of the globally shared kernel resources, such as the network stack, process table, mount table. Firejail can work in a SELinux or AppArmor environment, and it is integrated with Linux Control Groups.
It supports "--private" (mounts new /root and /home/user directories in temporary filesystems), along with "--private-{bin,cache,cwd,dev,etc,home,lib,opt,srv,tmp} (plus "noexec /tmp")". It also supports "keep-config-pulse", "keep-dev-shm", and so forth, meaning you can have shared files between process if you so wish (for DBus, etc.).
> I can be left unable to do certain things
Most of what I can imagine of "certain things" falls into two categories: debugging (for which much better tools exist), or concerns that would be better served by a program providing an API of some kind rather than "go muck with state in $TMPDIR".
I replied to your similar comment upthread as well.
Also, /proc/ is (among other things) a debug interface.
On Linux+systemd, I think this is referring to /run/user/$UID. $XDG_RUNTIME_DIR is set to this path in a session by default. There's a spec for that environment variable at <https://specifications.freedesktop.org/basedir-spec/latest/>. I assume there's also some systemd doc talking about this.
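On such a system you can check it directly (just an illustration):

echo "$XDG_RUNTIME_DIR"       # typically /run/user/1000
ls -ld "/run/user/$(id -u)"   # drwx------, owned by you, on tmpfs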
On macOS, I see that $TMPDIR points to a path like /var/folders/jd/d94zfh8d1p3bv_q56wmlxn6w0000gq/T/ that appears to be per-user also.
What do FreeBSD/OpenBSD/NetBSD do?
I think /tmp is a poor solution even if we are going to use the filesystem for this (some sort of per-user spool makes far more sense), but its value is in its ubiquity.
nsenter --all --target $PID
or something like that?

If the user tmp files were placed in /tmp/${USER}/ then that would achieve the same goal.
So "echo $API_TOKEN" failed, but getting the output of the complete environment was as easy as "env | base64".
Its origins are quite clear: QDOS, the Quick and Dirty Operating System.
For example, the Android (/iOS?) permission-based model at the kernel level, where apps (which could be processes in general?) can only access some private storage (which presumably has its own isolated tmp/ directory), really should be the default, and permissions should be opt-in (of course, there should be a 'legacy permission' that makes things work as before).
(I believe most of this permission functionality is technically possible through SELinux (??), or you could use containers, but it's neither easy to use nor the default.)
I think containers arose partially to provide some of this isolation? But they have their own overhead and redundancy issues.
---
It seems some of this work is being done in the SELinux project? Is it going to be enough? (And easy enough to use by default?)
https://wiki.archlinux.org/title/SELinux
I think a simple permission model might have been more elegant than the SELinux model?
They would if they were designed with the user's security in mind, instead of Google's/Apple's control.
But I disagree, they don't do better at all. Any software that wants to get access to everything just needs to insist.
So, not really a fan of /tmp in memory. (And I don't much run massive and bloated browsers that may murder your SSD lifetime with excessive file writes better diverted to an in-memory /tmp.)
The problem is that Unices use access control, rather than capabilities, so ensuring state is shared only by those who need it is quite a bit more difficult than just punting, and declaring that 'those who need it' is 'everyone'.
Nor has the design problem of a user-friendly capabilities architecture truly been solved, IMHO. Nonetheless, we shouldn't confuse convenience with correctness.
On Linux it's typically created by a PAM, so if you're not using PAM then it doesn't exist. This means that on Kubernetes pods/containers... it doesn't exist!
Yes, /tmp/ is a security nightmare on multi-user systems, but those are a rarity nowadays.
Lots of things want to write things into /tmp, like Kerberos, but not only. I recently implemented a token file-based cache for JWT that... is a lot like a Kerberos ticket cache. I needed it because the tokens all have specific aud (audience) values. Now where to keep that cache?? The only reasonable place turned out to be /tmp/ precisely because /run/user/$UID/ is not universally available, not even on Linux.
This is supremely annoying. /run/user/${UID}/ needs to exist universally. Ugh.
For example, Kubernetes doesn't use PAM in the pods it creates to run your containers.
You might think "who cares", but I've written code that is agnostic as to whether it's running in a logged-in user's session or something else. https://news.ycombinator.com/item?id=41916623
Not as good as a real capability-based access control, but quite good compared to the other things that are usable on Linux.
Anything that requires login(8) or PAM to make it happen is insufficient. This has to happen in environments like Kubernetes too.
I get your point. Yeah, as a newbie, flipping on random options listed under "sandbox" may be bad for you. But this hardly seems like a good dig against a well-integrated unit process that has lots on tap to do the job very, very well, in a succinct manner.
shm and memory mounts use half the available system memory by default, so this is not typically possible.
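You can see the default cap directly:

df -h /dev/shm   # Size is usually 50% of RAM, the tmpfs default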
> are not typically zeroed out upon disuse
They're zeroed when they're reallocated.
> and the DRAM cells bleed out their voltage.
This occurs in less than a second in almost every room temperature environment.
What's not a rarity though is apps (or code in general) that you don't fully trust, and that you don't want to give a chance to exfiltrate all your data for example.
Sadly, the POSIX permission model is entirely ill-suited for that, precisely because it tries to solve the multi-user problem, wherein all code belonging to a single user is effectively treated as omnipotent within that user's domain (i.e. the files the user owns). That's why iOS and macOS (the non-POSIX parts) have a container model with strong sandboxing, entitlements, etc.
It's very different from 20-30 years ago.
That's why you still usually see machines with unreasonable amounts of GB of RAM having swap partitions: Instead of having data that's rarely, if ever, used occupy precious DRAM, it's much better to have that data in swap so that the DRAM can contain, say, filesystem caches.
Rarely used data that got evicted then behaves more or less like a normal /tmp filesystem when it does eventually get accessed, i.e. it gets read in from disk, while other data still gets all the benefits from tmpfs (e.g. ephemerality).
(If you take the thought experiment to its logical conclusion, you'll anyway end up in transparent hierarchical storage a la AS/400, where all data is just addressed by a single pointer in a very very large address space and the OS decides where that currently points to, but let's stay within the confines of what we're mostly used to...)
How many of these do you have? I have 1 and I have it installed via a flatpak with sandboxing (that has no access to /tmp).
Flatpaks are an implementation of that container model for software on Linux.
It's even worse than that: We're all using the same shared applications on some cloud.
Browser security would be a lot less time-sensitive if that were the case.
The same logic applies to games, etc. etc. I do NOT trust the developers of these things to get things right 100% of the time, so why even take the risk of allowing their programs unfettered access to all of my files? As a dev, I don't even trust myself to be perfect, and I'd like to be able (in my program) to state up front "my code will never touch anything outside Downloads/" or whatever.
ETA: The point is minimal trust for any given program to do its thing. I'd like to be even more pithy with something about "trust but verify", but that doesn't quite fit, alas.
Don't put . in your PATH, that's for sure.
And of course those libraries' code that uses those files had to be written very carefully.
Sure, the more modern thing is to have a daemon called `kcm` that does that and which has an AF_LOCAL socket in... /var/run/, but it's a multi-user-capable daemon, so it doesn't need /var/run/user/${UID}, which as I've noted elsewhere here, is not universally available (for the same reasons that /run/user/${UID} is not either).
On my Mac? Less, but it happens. But text messages, photos, and the banking apps installed there etc. are still inaccessible by anything except the things I've explicitly given access.
joker@e2509h:~/test_tmp$ ll
total 12K
drwxr-xr-x 3 joker joker 4.0K Oct 22 22:12 ./
drwxr-x--- 11 joker joker 4.0K Oct 22 22:12 ../
drwxr-xr-x 3 root root 4.0K Oct 22 22:13 tmp/
joker@e2509h:~/test_tmp$ cd tmp
joker@e2509h:~/test_tmp/tmp$ ll
total 12K
drwxr-xr-x 3 root root 4.0K Oct 22 22:13 ./
drwxr-xr-x 3 joker joker 4.0K Oct 22 22:12 ../
drwxr-xr-x 2 joker joker 4.0K Oct 22 22:13 joker/
-rw-r--r-- 1 root root 0 Oct 22 22:15 z
joker@e2509h:~/test_tmp/tmp$ touch x
touch: cannot touch 'x': Permission denied
joker@e2509h:~/test_tmp/tmp$ rm z
rm: remove write-protected regular empty file 'z'? y
rm: cannot remove 'z': Permission denied
joker@e2509h:~/test_tmp/tmp$ touch joker/x
joker@e2509h:~/test_tmp/tmp$ ll joker
total 8.0K
drwxr-xr-x 2 joker joker 4.0K Oct 22 22:13 ./
drwxr-xr-x 3 root root 4.0K Oct 22 22:15 ../
-rw-r--r-- 1 joker joker 0 Oct 22 22:13 x
joker@e2509h:~/test_tmp/tmp$ rm joker/x
joker@e2509h:~/test_tmp/tmp$
Looks like it works just fine.

In principle, there's nothing precluding e.g. having a separate user per app on Linux, either...
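Presumably the setup behind that listing was something like this (a reconstruction; the original commands weren't shown):

mkdir ~/test_tmp && cd ~/test_tmp
sudo mkdir -m 0755 tmp            # root-owned stand-in for /tmp, not world-writable
sudo mkdir tmp/joker && sudo chown joker:joker tmp/joker
sudo touch tmp/z                  # a root-owned file, to test deletion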
On DOS it wasn't a thing because it didn't have a standard temp directory to begin with, so when the need arose, it was bolted on in an ad hoc way (I remember the apps couldn't even agree on whether it should be %TEMP% or %TMP% or something else). Windows introduced it as a proper first-party concept, but then moved it around so much that you pretty much had to use the API to retrieve the actual value if you wanted it to work. And then NT made that API return a path to a per-user directory by default.
I assume (hope?) that's the intention, that nobody is advertising this as a way to prevent exfiltration of secrets.
They have access to _all_ storage. Permissions on Android are at the DOS level, all or nothing.
Creating solutions in search of a problem doesn't work. I have to do my job, and my computer should help me, not get in the way. We are at the end of 2024 and I need admin access to install and use a USB-to-serial converter (or any other HW device) in Windows.
Modern high-performance clusters still follow that logic, and are found in almost all large universities or companies doing research on heavy computational topics (artificial intelligence, comp. chemistry, comp. biology, comp. engineering, and so on).
> So where should temporary files have gone, if not in /tmp?
> There should have been per-user temporary directories in different per-user locations. In fact, on some modern systems there are per-user temporary directories! But this solution came several decades too late.
And no good system makes it into Linux, because it already has a huge, well-supported one, and some 3 other candidates pushing to get there already.
I love Linux and many of the fruits of its messy evolution, but such fruits are certainly not all equally delicious. :(