Error handling in bash
(notifox.com)
from BlackEco@lemmy.blackeco.com to programming@programming.dev on 11 Feb 22:31
https://lemmy.blackeco.com/post/2647893
#programming
I am so glad this article is not following blind recommendations, as a lot of people usually do. It’s better to handle the error instead of closing the script that caused it. I think the option -e should be avoided by default, unless there is a really good reason to use it.
BTW, here is an interesting discussion on GitHub about this topic: bash_strict_mode.md
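For illustration, handling a failure explicitly instead of dying on it can look like this (a minimal sketch; fetch_data and use_cached_data are made-up commands):

    # React to the failure instead of letting the whole script die.
    if ! fetch_data; then
        echo "warning: fetch failed, falling back to cache" >&2
        use_cached_data
    fi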
The point of using -e is that it forces you to handle the error, or even be aware that there is one.
In my experience this option is too risky. Making simple changes to the script without rigorously proving and testing that it works in all cases becomes impossible (depending on how complex the script and the task itself are). It has a bit of the energy of “well, you just have to make no errors in C, then you can write good code and it never fails”.
This option is good if the script MUST fail whenever any program it calls returns an error. That is usually not the case for most scripts. It’s also useful in testing, when debugging, or when developing, and if you purposefully enable and disable the option on the fly for sensitive segments of the script. I do not like this option as a default.
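Toggling it around a sensitive segment looks like this (a sketch; the command names are made up):

    set -e              # from here on, any unhandled failure aborts the script
    critical_download
    critical_install
    set +e              # back to the default: failures are tolerated again
    optional_cleanup    # may fail without killing the script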
I don’t have the Bash experience to argue against that, but from a general programming experience, I want things to crash as loudly as possible when anything unexpected happens. Otherwise, you might never spot it failing.
Well, and never mind that it could genuinely break things if an intermediate step fails but the script keeps running.
Bash and the command line are designed to keep working after an error. I don’t want it to fail after an error. It depends on the error, though, and how critical it is, and this option makes no distinction. There are a lot of commands where a failure is part of normal execution. As I said before, this option can be helpful when developing, but I do not want it in production. Often “silent” fails are a good thing (but, as said, it depends on the type). The entire language is designed to sometimes fail and keep working as intended.
You really can’t compare Bash to a normal programming language, because a normal language is self-contained and developed in itself, while Bash relies on random, unrelated applications. That’s why I do not like comparisons like that.
Edit: I do not want to exit the script on random error codes; I want to maybe handle the error. With that option in place, I have to make sure an error never happens, which is not what I want.
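A well-known example of a command where failure is part of normal execution is grep: it exits with status 1 when it simply finds no match, which set -e turns into a fatal error:

    set -e
    # "no match" is a normal outcome here, but set -e makes it fatal.
    grep 'pattern' input.txt
    echo 'never reached when input.txt contains no match'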
Silent fails have caused me to waste many hours of my time trying to figure out what the fuck was happening with a simple script. I’ve been using -e on nearly all bash code I’ve written for years - with the exception of sourced ones - and wouldn’t go back.
If an unhandled error happened, I want my program to crash so I can evaluate whether I need to ignore it, or actually handle it.
But you can just as well make an exception to allow errors when -e is enabled, with something like command || true, or even some warning message. I feel like, while it does occur, allowing errors like this is more unusual than stopping the script on an error, so it’s good to mark this case explicitly; therefore -e is still a reasonable default in most cases.
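Concretely, the opt-out can be silent or noisy (a sketch; optional_step is a made-up command):

    set -e
    optional_step || true                                        # tolerate the failure silently
    optional_step || echo 'warning: optional_step failed' >&2    # tolerate it, but warn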
Exactly: if an unhandled error happens, I want my program to terminate.
-e is a better default.
Ehhh, I don’t think I’ve used bash outside of random stuff on my machine in years, except in CI pipelines, and wanting them to stop and fail the pipeline the second anything goes wrong is exactly what I want.
I do not want to think about every possible error that can happen. I do not want to study every program I call to look for any possible errors. Only errors that are important to my task.
As I said, there are reasons to use this option when the script MUST fail on error. And it’s helpful while creating the script. I just don’t like the generalization to always enable this option.
I mean, it’s that or file mangling, because you didn’t catch an error in some unplanned use case.
Errors in command substitution, e.g. $(cat file), are ignored by set -e; that is one example of its confusing nature. It does not force you to handle all errors, just some of them, and which ones depends on the code you write.
https://mywiki.wooledge.org/BashPitfalls#set_-euo_pipefail
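To make that concrete: a failing substitution inside another command’s arguments is swallowed, while a bare assignment is not (a sketch):

    set -e
    # The failure of $(cat missing-file) only contributes to echo's
    # argument list; echo itself succeeds, so the script keeps going.
    echo "contents: $(cat missing-file)"
    # A bare assignment takes the substitution's exit status,
    # so this line does abort the script:
    contents=$(cat missing-file)
    echo 'never reached'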
This is a great article. I just want to highlight this insane behavior in particular (slightly dramatized):
    set -e
    safeDelete() {
        false   # Surely we don't reach this, right?
        # Wrong: set -e is suspended while a function runs as an if
        # condition, so the failing 'false' does not stop us and the
        # "rm" below runs anyway.
        echo "rm $@ goes brr..."
    }
    if safeDelete all of my files; then
        :   # do more stuff
    fi

Frankly, if you actually need robustness (which is not always), you should be using a real programming language with exceptions or result types or both (i.e. not C). UNIX processes are just not really up to the task.
EXCELLENT Article!
Now interested in Notifox, that person’s pet-project, too…
( :
What I always find frustrating about that is that even a colleague with much more Bash experience than me will ask me what those options are if I slap a set -euo pipefail or similar in there.
I guess I could prepare a snippet like in the article, with proper comments, instead:
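Something along these lines (a sketch; the comments paraphrase what each flag does):

    set -e          # exit immediately when a command fails with a non-zero status
    set -u          # treat expansion of an unset variable as an error
    set -o pipefail # a pipeline fails if any command in it fails, not just the last one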
Maybe with the whole trapping thing, too.
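For example, one common form (assuming Bash, since ERR traps aren’t POSIX):

    trap 'echo "error at line ${LINENO}" >&2' ERR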
But yeah, I will have to remember to use that. Most Bash scripts start out as just quickly trying something out, so it’s easy to forget to set the proper options…
Problem is, -o pipefail isn’t portable.
I only use -eu. -o sometimes breaks.
If you think you need this, you’re doing it wrong. Nobody should be writing bash scripts more than a few lines long. Use a more sane language. Deno is pretty nice for scripting.
Let me just drop my materials for a talk I’ve given about basically this topic: codeberg.org/flart/…/you_suck.md
Mainly because the linked article is all nice and dandy, but it completely ignores the topic of double brackets and why they’re nice.
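For anyone wondering (my summary, not the article’s): double brackets are a Bash keyword, so the shell doesn’t word-split or glob-expand variables inside them, and == does pattern matching:

    var='some value'
    # With [ ] this would need careful quoting; [[ ]] handles $var safely,
    # and == matches the glob pattern on the right.
    if [[ -n $var && $var == some* ]]; then
        echo "starts with 'some'"
    fi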
And also, and this is my very strong opinion: if you end up thinking about exception handling (like the mentioned traps) in shell scripts, you should stop immediately and switch to a proper programming language.
Shell scripts are great, i love them. But they have an area they’re good for and a lot of areas where they aren’t.
But call it a bash script then! Remember: #!/bin/sh is run by all kinds of shells; consider them POSIX. Bash is #!/bin/bash.
Fully agree. Shell scripts quickly get ugly over 50 LOC. Please avoid spaghetti code in shell scripts too. The usual

    if [ -n "$var" ]; then
        xyz "$var"
    fi

is OK once or twice. But if you have tens of them,

    [ -n "$var" ] && xyz "$var"

is more readable. Or leave the check away entirely if xyz reports the error too.
And please. do. functions. Especially for error handling, and also for repeated patterns. For example the above: if it’s always xyz, then something like

    checkxyz() { [ -n "$1" ] && xyz "$1"; }

    checkxyz "$var1" && abc
    checkxyz "$var2" && 123
    checkxyz "$var3" || error "failed to get var3" 2

is more readable.
And sometimes a function is better for readability, even if you use it only once. For example, from one of my bigger scripts (I should have done it in Python):

    full_path() {
        case "$1" in
            /*) printf "%s\n" "${1%/}";;
            *)  printf "%s\n" "$PWD/${1%/}";;
        esac
    }

I like using bash a lot for terminal automation, but as soon as anything goes beyond around 7-15 lines I reach for a scripting language like Python or JS. Bash is just really hard and counterintuitive.
I’m curious why there seems to be such hostility toward scripts that are more than X lines long. The threshold before moving to a higher-level language is never the same from one person to the next, either.
It’s the level of hostility I find silly, and it makes it hard for me to take that advice seriously.