The relevant line from fstab is:
tmpfs /tmp tmpfs noatime 0 2
Now any program that writes to /tmp will be writing to a RAM disk, thus sparing unnecessary wear on my SSD.
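To sanity-check that the entry actually took effect (a quick check of my own, not from the article):
$ # Mount the new fstab entry (or just reboot), then confirm /tmp is tmpfs.
$ sudo mount /tmp
$ findmnt -no FSTYPE,OPTIONS /tmp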
Some of my kernels have been running for weeks, as I come back and redo or rework some processing.
The neat thing about Jupyter notebooks is that you can interleave Python one-liners with bash one-liners and mix them as you wish.
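For example, an IPython cell can capture a bash one-liner's output straight into a Python variable (a tiny, made-up illustration):
files = !ls /dev/shm        # bash one-liner, captured as a Python list of lines
print(len(files), "entries in /dev/shm")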
$ ls -ld /dev/shm
drwxrwxrwt 3 root root 120 Jun  2 02:47 /dev/shm/
Incidentally, "30 years ago" is the cutoff date for music being considered the oldies. This just made me realize Nevermind is now an oldie, and soon The Lonesome Crowded West will be too.
For the author's purposes, any benefit is just placebo.
There absolutely are times where /dev/shm is what you want, but it requires understanding nuances and tradeoffs (e.g. you are already thinking a lot about the memory management going on, including potentially swap).
Don't use -funroll-loops either.
You are relying on random implementation details instead of universal APIs that work across OSes and environments. Please stop.
So help me God, if I make a Linux system, I will make it _not_ have a /dev/shm just to avoid people relying on non-standard stuff for no good reason. Honestly, it's because of stuff like this that we need Docker.
So in theory some program might pass a name to shm_open that collides with whatever you put in /dev/shm.
Unlikely but possible
I'm not really seeing a right or wrong here anyway unless you're distributing a script that's meant to run on all sorts of Linux systems. In which case you probably aren't concerned with the physical storage medium being used.
Some hosts should have tmpfs mounted and some shouldn't. For those that don't, I can just use /dev/shm. This isn't a "right" or "wrong" sorta thing.
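A sketch of that kind of fallback, picking whichever RAM-backed location a host actually has (paths and the job name are just illustrative):
# Prefer a tmpfs /tmp; fall back to /dev/shm on hosts that don't mount one.
if [ "$(findmnt -no FSTYPE /tmp 2>/dev/null)" = "tmpfs" ]; then
    scratch=/tmp
else
    scratch=/dev/shm
fi
workdir=$(mktemp -d "$scratch/job.XXXXXX")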
I’m wondering if I can completely hide away that detail, so that I work exclusively in memory (even when I habitually save my code) and “reconcile” to disk as some task I do before shutdown.
In fact, that doesn’t even feel necessary… I git push my day’s work a number of times. None of that needs a local disk. And 64GB of memory was surprisingly affordable.
"This optimization [of putting files directly into RAM instead of trusting the buffers] is unnecessary" was an interesting claim, so I decided to put it to the test with `time`.
$ # Drop any disk caches first.
$ sudo sh -c 'sync; echo 3 > /proc/sys/vm/drop_caches'
$
$ # Read a 3.5 GB JSON Lines file from disk.
$ time wc -l /home/andrew/Downloads/kaikki.org-dictionary-Finnish.jsonl
255111 /home/andrew/Downloads/kaikki.org-dictionary-Finnish.jsonl
real 0m2.249s
user 0m0.048s
sys 0m0.809s
$ # Now read the same file from disk again, this time served from the page cache.
$ time wc -l /home/andrew/Downloads/kaikki.org-dictionary-Finnish.jsonl
255111 /home/andrew/Downloads/kaikki.org-dictionary-Finnish.jsonl
real 0m0.528s
user 0m0.028s
sys 0m0.500s
$
$ # Drop caches again, just to be certain.
$ sudo sh -c 'sync; echo 3 > /proc/sys/vm/drop_caches'
$
$ # Read that same 3.5 GB JSON Lines file from /dev/shm.
$ time wc -l /dev/shm/kaikki.org-dictionary-Finnish.jsonl
255111 /dev/shm/kaikki.org-dictionary-Finnish.jsonl
real 0m0.453s
user 0m0.049s
sys 0m0.404s
Compared to the first read there is indeed a large speedup, from 2.2s down to under 0.5s. After the file had been loaded into the page cache from disk by the first `wc -l`, however, the difference dropped to /dev/shm being about 20% faster. Still significant, but not game-changingly so.
I'll probably come back to this and run more tests with some of the more complex `jq` query stuff I have to see if we stay at that 20% mark, or if it gets faster or slower.
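For reference, that follow-up test would look something like this (the jq filter is just a stand-in, not the actual query I'll use):
$ sudo sh -c 'sync; echo 3 > /proc/sys/vm/drop_caches'
$ time jq -r 'select(.pos == "noun") | .word' /home/andrew/Downloads/kaikki.org-dictionary-Finnish.jsonl > /dev/null
$ time jq -r 'select(.pos == "noun") | .word' /dev/shm/kaikki.org-dictionary-Finnish.jsonl > /dev/null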
> Usually, it is a better idea to use memory mapped files in /run/ (for system programs) or $XDG_RUNTIME_DIR (for user programs) instead of POSIX shared memory segments, since these directories are not world-writable and hence not vulnerable to security-sensitive name clashes.
$XDG_RUNTIME_DIR usually points to /run/user/${uid}, so you're guaranteed that other users won't write there, and possibly won't even be able to read there.
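A quick way to see the difference from a shell (assuming a systemd-style /run/user/${uid}):
$ # /dev/shm is world-writable with the sticky bit (1777); the per-user runtime dir is not.
$ stat -c '%a %U %n' /dev/shm "${XDG_RUNTIME_DIR:-/run/user/$(id -u)}"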
https://pubs.opengroup.org/onlinepubs/9799919799/
It doesn't get more standard than that.
It's because of people doing random nonstandard shit that we need to Docker-ize a lot of software these days. People refuse to lift a single finger to adhere to conventions that let programs co-exist without simulating a whole god damn computational universe for each damn program.
I have it running on a Raspberry Pi so that my already sparingly-used SD card's lifespan gets extended to, hopefully, several years. I have never seen the green writing LED light blink on without me specifically triggering it.
I primarily use it as a cronslave [1]. It has ~50 separate cronjobs on it by now, all wheedling away at various things I want to make happen for free on a clock. But if you live out of a terminal and could spend your days happily inside tmux + vim or emacs -nw, there's nothing stopping you from just doing this. Feels a lot like driving stick shift.
[0]: http://tinycorelinux.net/
[1]: https://hiandrewquinn.github.io/til-site/posts/consider-the-...
For the more venturous there is GPURamDrive [1], which doesn't have as many options since it was made more as an experiment, but with GPUs adding more and more VRAM, why not?
It doesn't say anything about what it's backed by.
The article has been corrected.
Swap on an SSD isn't even that slow.
#!/bin/bash
ramfs_size_mb=1024
mount_point=/private/tmp
counter=0
ramfs_size_sectors=$((${ramfs_size_mb}*2048))
ramdisk_dev=`hdiutil attach -nomount ram://${ramfs_size_sectors}`
while [[ ! -d "/Volumes" ]]
do
sleep 1
counter=$((counter + 1))
if [[ $counter -gt 10 ]]
then
echo "$0: /Volumes never created"
exit 1
fi
done
diskutil eraseVolume HFS+ 'RAM Disk' ${ramdisk_dev} || {
echo "$0: unable to create RAM Disk on: ${ramdisk_dev}"
exit 2
}
umount '/Volumes/RAM Disk'
mkdir -p ${mount_point} 2>/dev/null
mount -o noatime -t hfs ${ramdisk_dev} ${mount_point} || {
echo "$0: unable to mount ${ramdisk_dev} ${mount_point}"
exit 3
}
chown root:wheel ${mount_point}
chmod 1777 ${mount_point}
Adding a plist definition to /Library/LaunchDaemons can ensure the above is executed when the system starts (a sketch follows below).
My use case is to use yt-dlp to download videos to the ramfs, watch them, and then delete them. Before I switched to the ramfs, the final pass of yt-dlp (where the audio and video tracks are merged into one file) usually made the system choppy.
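A minimal sketch of such a plist, written from a shell (the label and script path are placeholders; adjust to wherever you saved the script above):
sudo tee /Library/LaunchDaemons/local.ramdisk.plist >/dev/null <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>local.ramdisk</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/sbin/ramdisk.sh</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
</dict>
</plist>
EOF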
Back in the day you might place /tmp in a good spot for random access of small files on a disk platter. /var is vaguely similar but intended for things that need to be persistent.
Anyway it's not uncommon for systems to persist /tmp and clean it periodically from cron using various retention heuristics.
Ultimately POSIX concepts of mountpoints are strongly tied to optimizing spinning rust performance and maintenance and not necessarily relevant for SSD/NVME.
This is not the case. RAM-based file system capacities are unrelated to process memory usage; swap space is for the latter.
1 - Programs such as wc (or jq) do sequential reads, which benefit from file systems optimistically prefetching contents in order to reduce read delays.
2 - Check to see if file access time tracking is enabled for the disk-based file system (see mount(8)). This may explain some of the 20% difference.
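Something like this shows which atime behavior the source filesystem has (my own check; adjust the path to wherever the file lives):
$ findmnt -no OPTIONS --target /home/andrew/Downloads
$ # Look for noatime / relatime / atime among the mount options.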
Aren't both solved by swapping?
Although I suppose on Linux, neither having swap, nor it being backed by dynamically growing files, is guaranteed.
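On Linux you can at least check quickly whether any swap is configured and what backs it:
$ swapon --show
$ # Empty output means no swap; otherwise NAME shows the backing device or file.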
Glad to help out. Here[0] is more information regarding Linux swap space as it relates to processes and the VMM subsystem.
> I stand by my original point, downvotes be damned.
:-D
I do not run systemd-based distros, so cannot relate.
1. I use removable, external drives for anything I want to save long-term. No "cloud" storage.
Maybe some other ram disk things won't.
It seems the advantage of this has been mostly forgotten.