I accidentally untarred an archive intended to be extracted in the root directory, which among other things included some files for the /etc directory.
I went to clean up with rm -rv ~/etc, but in my haste I typed rm -rv /etc instead and hit enter, while on a root account.
Reminds me of the t-shirt: “don’t drink and root”
I fucking hate using rm for these very reasons. There’s another program called “trash-cli” that gives you a trash command instead of going straight to deletion. I’m not sure why more distros don’t include it by default, or why more tutorials don’t mention it.
It could be worse:
rm -rv ~ /etc
OOOOOOOOOOOF!!
One trick I use, because I’m SUPER paranoid about this, is to mv things I intend to delete to /tmp, or make /tmp/trash or something.
That way, I can move it back if I have a “WHAT HAVE I DONE!?” moment, or it just deletes itself upon reboot.
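Something like this in ~/.bashrc captures that trick; soft_rm and /tmp/trash are just illustrative names, nothing standard:

    # Move things into /tmp/trash instead of deleting them outright.
    # On distros that clear /tmp, whatever lands here goes away on reboot anyway.
    soft_rm() {
        mkdir -p /tmp/trash
        mv -v -- "$@" /tmp/trash/
    }

    # Usage: soft_rm ./etc stray-file.txt
    # Undo:  mv /tmp/trash/etc ~/etc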
Just get a CLI trash tool and alias it to rm. Arch wiki
That’s certainly something you can do! I would personally follow the recommendation against aliasing rm though, either just using the trash tool’s autocomplete or a different alias altogether.
Reason being, as someone mentioned below: you don’t want to give yourself a false sense of security or complacency with such a dangerous command, especially if you use multiple systems.
I liken it to someone starting to handle weapons more carelessly because the one they have at home is “never loaded.” Better safe than sorry.
Lol we should have “rules of rm safety”:
- Assume rm is always sudo unless proven otherwise.
- (EDIT) Keep your finger off the Enter key until you are certain you are ready to delete.
- Never point rm at something you aren’t willing to permanently destroy.
- Always be aware of your target directory, and what is recursively behind it!
Yeah, there’s no need to alias it. Trash-cli comes with its own trash command.
I think this is the best approach. I’ve created a short alias for my trash tool and also aliased rm to do nothing except print a warning. This way you train yourself to avoid using it. And if I really need it for some reason I can just type \rm.
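A rough sketch of that setup for ~/.bashrc; the tt alias name is just an example, and trash-put is trash-cli's put command:

    # Make the safe path the short one.
    alias tt='trash-put'

    # Neuter rm so muscle memory gets retrained; the real binary stays reachable.
    alias rm='echo "rm is disabled in this shell; use tt, or \rm if you really mean it"'

    # A leading backslash bypasses the alias and runs the real command:
    #   \rm -rv ./build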
If you want to train yourself even more effectively you can also alias rm to run sl instead :)
you can also alias rm to run sl instead :)
Choo-choo!!
Hehe I just thought of a hilariously nefarious prank: alias ls to sl. 😂
i always do “read;rm ./file” which gives me a second to confirm and also makes it so i don’t accidentally execute it out of my bash history with control-r
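A quick illustration of how that behaves; ./old-logs is just a placeholder target:

    # read pauses for input before rm runs; Ctrl-C at that pause aborts the whole line.
    # Recalled later with Ctrl-R, the line still stops at read instead of deleting right away.
    read; rm -v ./old-logs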
Also stealing this. What an awesome tip
Hey that’s a pretty good idea. I’m stealing that.
After being bitten by rm a few times, the impulse rises to alias the rm command so that it does an “rm -i” or, better yet, to replace the rm command with a program that moves the files to be deleted to a special hidden directory, such as ~/.deleted. These tricks lull innocent users into a false sense of security.
I’ve read this somewhere too! Where are you quoting it from if I may ask?
But yes I also agree 💯%. rm should always be treated with respect and care by default rather than “customizing the danger away.”
Quoting from Linux Hater’s Handbook, lovely read
EDIT: UNIX Haters, not Linux hater, my bad
… is it the “UNIX-Hater’s Handbook” from 1994 with a parody of “The Scream” on the cover?
Yup, that one. It’s also available here, sans cover - https://web.mit.edu/~simsong/www/ugh.pdf
LOL nice, I’ll have to check it out. :) Thanks!
This needs to be higher in the comments!
Next time:
ls ~/etc
rm -rv !$
Or press alt+. to paste the final argument of the previous command
This is also dangerous because you could run the second command by accident later when browsing command history
with tab you can expand the !$, should be a zsh thing
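Roughly how both tricks look, assuming zsh for the Tab expansion (Alt+. works in bash too):

    ls ~/etc       # eyeball the target first
    rm -rv !$      # !$ is the last argument of the previous command, here ~/etc
    # In zsh, typing !$ and hitting Tab expands it in place so you can see what
    # you're about to delete; Alt+. pastes the previous command's last argument directly.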
Genuinely curious… why using root for operations like these?
To feel the thrill
Reminds me of when I had a rogue ~ directory sitting in my own home directory (probably from a badly written script). Three seconds into rm -rf ~ and me wondering why it was taking so long to complete, I CTRL+C, reboot, and pray.
Alas, it was a reinstall for me that day (good excuse to distro hop, anyway). Really glad I don’t mount my personal NAS folder in my home directory anymore, holy shit.
Bruh
Reusing names of critical system directories as subdirectories in your home dir.

I agree with this take, don’t wanna blame the victim but there’s a lesson to be learned.
Except if you read the accompanying text, they already stated the issue: they accidentally unpacked an archive to their user directory that was intended for the root directory. That’s how they got an etc dir in their user directory in the first place.
They could make one archive intended to be unpacked from /etc/ and one intended to be unpacked from /home/Alice/; that way they wouldn’t need to be root for the user bit, and there would never be an etc directory to delete. And if they ran tar’s test mode (t) and pwd first, they could check the intended actions were correct before running the full tar. Some tools can be dangerous, so the user should be aware and have safety measures.
they acquired a tar package from somewhere else. the instructions said to extract it to the root directory (because of its file structure). they accidentally extracted it to their home dir
that is how this happened. not anything like what you were saying
I understand that they were intending to unpack from / and they unpacked from /home/ instead. I’m just arguing that the unpack was already a potentially dangerous action, especially if it had the potential to overwrite any system file on the drive. It’s in the category of “don’t run stuff unless you are certain of what it will do”. For this reason it would make sense to have some way of checking it was correct before running it. Any rms to clean up files will need similar steps before running as well. Yes this is slower, but I would argue deleting /etc by mistake and fixing it is slower still.
I’m suggesting 3 things:
- Confirm the contents of the tar
- Confirm where you want to extract the contents
- Have backups in case this goes wrong somehow
Check the contents:
- Use "tar t" to print the contents before extracting; this lists all the files in the tar without extracting them. Read the output and check you are happy with it.
Confirm where:
- Run pwd first, or specify "-C '/output-place/'" during extraction, to prevent output going to the wrong folder.
Have backups:
- Assume this potentially dangerous process of extracting to /etc (you know this because you checked) may break some critical files there, so make sure that directory is properly backed up first, and check those backups are current. (A sketch of these checks follows below.)
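A minimal sketch of those three steps; package.tar.gz and the backup path are placeholders:

    tar -tf package.tar.gz | less      # 1. list the contents without extracting anything
    pwd                                # 2. confirm where you are before extracting...
    sudo tar -xvf package.tar.gz -C /  #    ...or pin the destination explicitly with -C
    # 3. back up /etc before letting anything overwrite it
    sudo tar -czf /root/etc-backup-$(date +%F).tar.gz /etc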
I’m not suggesting that everyone knows they should do this. But I’m saying that problems are only avoidable by being extra careful. And with experience people build a knowledge of what may be dangerous and how to prevent that danger. If pwd is /, be extra careful; typos here may have greater consequences. Always typing the full path, always using tab completion, and using “trash-cli” instead of rm would be ways to make rm safer.
If you’re going to be overwriting system files as root, or deleting files without checking, I would argue that’s where the error happened. If they want to do this casually without checking first, they have to accept it may cause problems or loss of data.
I’ll provide some cover. This is my current home directory:
bin/ bmp/ cam/ doc/ eot/ hhc/ img/ iso/ mix/ mku/ mod/ mtv/ mus/ pkg/ run/ src/ tmp/ vid/ zim/
It’s your home directory, enjoy it however you like.
[OP] accidentally untarred an archive intended to be extracted in the root directory, which among other things included some files for the /etc directory.
Oh, my! Perfect use of that scene. I don’t always lol, when I say lol. But I lol’ed at this for real.
I dunno, ~/bin is a fairly common thing in my experience, not that it ends up containing many actual binaries. (The system started it, miss, honest. A quarter of the things in my system’s /bin are text based.)
~/etc is seriously weird though. Never seen that before. On Debians, most of the user copies of things in /etc usually end up under ~/.local/ or at ~/.filenamehere
It should be ~/.local/bin
~/bin is the old-school location from before .local became a thing, and some of us have stuck to that ancient habit.
I think the home directory version of etc is ~/.config as per XDG.
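For reference, a small sketch of where XDG points things when the variables are unset (these are the spec's documented fallbacks):

    echo "${XDG_CONFIG_HOME:-$HOME/.config}"       # per-user "etc": config files
    echo "${XDG_DATA_HOME:-$HOME/.local/share}"    # per-user application data
    echo "${XDG_CACHE_HOME:-$HOME/.cache}"         # per-user cache
    # user-specific executables conventionally go in ~/.local/bin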
I use ~/config/* to put directories named the same as system ones. I got used to it in BeOS and brought it to LFS when I finally accepted BeOS wasn’t doing what I needed anymore, kept doing it ever since.
So, you don’t do backups of /etc? Or parts of it?
I have tars of those dirs: ssh, pam, and portage for Gentoo systems. Quick way to set stuff up.
And before you start whining about ansible or puppet or whatever, I need those maybe 3-4 times a year to set up a temporary hardened system.
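Something along these lines, presumably; the backup paths are just examples:

    # Snapshot individual /etc subdirectories so they can be dropped onto a fresh system later.
    sudo tar -czf ~/backups/etc-ssh.tar.gz -C /etc ssh
    sudo tar -czf ~/backups/etc-portage.tar.gz -C /etc portage

    # Restore on the new machine:
    sudo tar -xzf ~/backups/etc-ssh.tar.gz -C /etc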
But maybe, just maybe, don’t assume everyone is a fucking moron or has no idea.
Edit: Or just read what OP did, I think that is pretty much the same.
But maybe, just maybe, don’t assume everyone is a fucking moron or has no idea.
Well, OP didn’t say they used Arch, btw so it’s safe to assume.
(I hate that this needs a /s)
I am new to Linux and just getting somewhat comfortable as my daily driver, very proud of myself that I got the joke pretty quickly :)
💀
Sudo apt-get install /etc
Ok speaking of this, where do a distro’s config and boot scripts even come from? Are they in a package? Like on Debian, do the .debs have metadata that can add cron jobs and such?
Yup
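For what it's worth, on Debian-family systems you can trace where /etc files come from; crontab and the cron package are just examples:

    dpkg -S /etc/crontab          # which package owns this file
    dpkg -L cron | grep '^/etc'   # everything that package installed under /etc
    # maintainer scripts and conffile lists live under /var/lib/dpkg/info/<package>.*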
HAH rookie, I once forgot the . before the /
Nvidia once did it in their install script
o.7
Be happy that you didn’t remember the ~ and put a space between it and etc 😃.
Yeah, same as with unclosed bottles, a cup too close to the table edge, etc.: accidents that can happen, will happen.
Better name them something else in your user dir.
And yes, painful experience.
So good to see that, even in 2026, the UNIX-Haters Handbook’s section on rm is still valid. See page 59 of the pdf.
The biggest flaw with cars is when they crash. When I crash my car due to user error, because I made a small mistake, this proves that cars are dangerous. Some other vehicles like planes get around this by only allowing trusted users to do dangerous actions, why can’t cars be more like planes? /s
Always backup important data, always have the ability to restore your backups. If rm doesn’t get it, ransomware or a bad/old drive will.
A sysadmin deleting /bin is annoying, but it shouldn’t take them more than a few mins to get a fresh copy from a backup or a donor machine. Or to just be more careful instead.
Unix aficionados accept occasional file deletion as normal. For example, consider the following excerpt from the comp.unix.questions FAQ:
6) How do I “undelete” a file?
Someday, you are going to accidentally type something like:
rm * .foo
and find you just deleted “*” instead of “*.foo”. Consider it a rite of passage.
Of course, any decent systems administrator should be doing regular backups. Check with your sysadmin to see if a recent backup copy of your file is available.
“A rite of passage”? In no other industry could a manufacturer take such a cavalier attitude toward a faulty product. “But your honor, the exploding gas tank was just a rite of passage.”
There’s a reason sane programs ask for confirmation for potentially dangerous commands
True, in this case trash-cli is the sane command though; it has a much different job than rm. One is remove forever, no take-backs; the other is more mark-for-deletion. It’s good to have both options imo. There are a lot of low-level interfaces that are dangerous; if they’re not the correct tool for the job then they don’t have to be used. Trying to make every low-level tool safe for all users just leads to a lot of unintended consequences and inefficiencies. kill or ip address del can be just as bad, but netplan try or similar also exist.
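An example of that "safe by design" pattern for network changes (roughly how netplan try behaves, from memory):

    # Applies the new network config, then rolls it back automatically
    # unless you confirm at the prompt, so a bad change can't lock you out for good.
    sudo netplan try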
The handbook has numbered pages, so why use “page X of the pdf”? I don’t see the page count in my mobile browser - you made me do math.
(I think it’s page number 22 btw, for anyone else wondering)
The handbook has numbered pages, so why use “page X of the pdf”?
Because the book’s page 1 is the pdf’s page 41; everything before is numbered with roman numerals :)
I also wasn’t expecting anyone to try and read with a browser or reader that doesn’t show the current page number
I don’t know if you use Firefox on your phone, but I do, and I fucking hate that I can’t jump to a page or see the page number I’m on.
That is what I’m using. I don’t really read enough pdfs to notice it normally, but I guess it’s another reason to get off my ass about switching browsers ¯\_(ツ)_/¯
Mjpdf is decent, while still zen.
deleted by creator
Edit: nevermind, wrong section.
Btw, what’s this about QWERTY to slow them down?
Far as I know, it’s to reduce finger travel?
QWERTY was developed so that typewriter hammers have a low chance of hitting each other and getting stuck. It was never about finger travel or ergonomics.
PCs adopted the layout and unfortunately we’ve stuck with it ever since. There are many better layouts, some more extreme in terms of difference from QWERTY, some that just fix the most blatant problems. Colemak and Dvorak, for example.
On mechanical typewriters the little arms that slap the steel letters onto the ink ribbon/paper could get physically jammed. QWERTY was designed to make it so that was less likely to happen by placing the keys in an order that discouraged it.
At least, that’s the way I learned it.
Source: trust me bro