Error handling in bash (notifox.com)
from BlackEco@lemmy.blackeco.com to programming@programming.dev on 11 Feb 22:31
https://lemmy.blackeco.com/post/2647893

#programming

thingsiplay@lemmy.ml on 11 Feb 23:13

As you’ll learn later in this blogpost, there are some footguns and caveats you’ll need to keep in mind when using -e.

I am so glad this article doesn’t follow the usual blind recommendations, as a lot of people do. It’s better to handle the error instead of killing the script at the command that caused it. I think the option -e should be avoided by default, unless there is a really good reason to use it.

thingsiplay@lemmy.ml on 11 Feb 23:23

BTW here is an interesting discussion on GitHub about this topic: bash_strict_mode.md

thenextguy@lemmy.world on 12 Feb 00:25

The point of using -e is that it forces you to handle the error, or even be aware that there is one.

thingsiplay@lemmy.ml on 12 Feb 00:49

In my experience this option is too risky. Making simple changes to the script without rigorously proving and testing that it works in all cases becomes impossible (depending on how complex the script and the task are). It has a bit of the energy of “well, you just have to make no errors in C, then you write good code and it never fails”.

This option is good if the script MUST fail under any circumstances where a called program returns an error, which is usually not the case for most scripts. It’s also useful for testing, debugging, and development, or when you purposefully enable and disable the option on the fly for sensitive segments of the script. I do not like this option as a default.
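
For example, toggling it on the fly around a sensitive segment could look something like this (all command names here are made up):

set -e              # strict for the critical part
critical_step       # any failure here should abort the script

set +e              # relax: failures in this segment are expected
best_effort_cleanup
set -e              # strict again for the rest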

Ephera@lemmy.ml on 12 Feb 03:03

I don’t have the Bash experience to argue against that, but from general programming experience, I want things to crash as loudly as possible when anything unexpected happens. Otherwise, you might never spot it failing.

Well, and never mind that it could genuinely break things if an intermediate step fails but the script keeps running.

thingsiplay@lemmy.ml on 12 Feb 03:28

Bash and the command line are designed to keep working after an error. I don’t want it to fail after every error. It depends on the error, though, and how critical it is, and this option makes no distinction. There are a lot of commands where a failure is part of normal execution. As I said before, this option can be helpful when developing, but I do not want it in production. Often “silent” fails are a good thing (but as said, it depends on the type). The entire language is designed to sometimes fail and keep working as intended.
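
grep is the classic example: exit status 1 just means “no match”, yet under -e that perfectly normal outcome kills the script (file and pattern made up):

set -e
matches=$(grep ERROR logfile)          # zero matches: grep exits 1, the whole script dies here
matches=$(grep ERROR logfile || true)  # the usual workaround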

You really can’t compare Bash to a normal programming language, because a normal language is self-contained and developed as a unit, while Bash relies on arbitrary, unrelated applications. That’s why I do not like comparisons like that.

Edit: I do not want to exit the script on arbitrary error codes, but maybe handle the error. With that option in place, I have to make sure an error never happens, which is not what I want.

eager_eagle@lemmy.world on 12 Feb 08:37

Often “silent” fails are a good thing

Silent fails have caused me to waste many hours of my time trying to figure out what the fuck was happening with a simple script. I’ve been using -e on nearly all bash code I’ve written for years - with the exception of sourced ones - and wouldn’t go back.

If an unhandled error happened, I want my program to crash so I can evaluate whether I need to ignore it, or actually handle it.

Gobbel2000@programming.dev on 12 Feb 08:55

But you can just as well make an exception to allow errors when -e is enabled, with something like command || true, or even a warning message.

I feel like, while it does occur, allowing an error is more unusual than stopping the script on one, so it’s good to mark this case explicitly. Therefore -e is still a reasonable default in most cases.
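
A sketch of both variants (command names are placeholders):

set -e
flaky_command || true                                      # failure explicitly tolerated
other_command || echo "warning: other_command failed" >&2  # or at least log it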

eager_eagle@lemmy.world on 12 Feb 08:31

Exactly: if an unhandled error happens, I want my program to terminate. -e is a better default.

Feyd@programming.dev on 12 Feb 04:18

Ehhh, I haven’t used bash in years outside of random stuff on my machine, except in CI pipelines, and having them stop and fail the pipeline the second anything goes wrong is exactly what I want.

thingsiplay@lemmy.ml on 12 Feb 04:27

I do not want to think about every possible error that can happen, and I do not want to study every program I call looking for all its possible errors. Only the errors that are important to my task matter.

As I said, there are reasons to use this option when the script MUST fail on error. And it’s helpful while creating the script. I just don’t like the generalization to always enable this option.

MonkderVierte@lemmy.zip on 12 Feb 19:18

This option is good if the script MUST fail under any circumstances

I mean, it’s that or file mangling, because you didn’t catch an error from some unplanned use case.

IanTwenty@piefed.social on 12 Feb 05:05

Errors in command substitution, e.g. $(cat file), are ignored by ‘set -e’ in some contexts, which is one example of its confusing nature. It does not force you to handle all errors, just some errors, and which ones depends on the code you write.

https://mywiki.wooledge.org/BashPitfalls#set_-euo_pipefail
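
A minimal demonstration (the path is deliberately nonexistent):

#!/bin/bash
set -e
echo "before: $(cat /no/such/file)"   # cat fails, but echo succeeds, so -e sees exit status 0
echo "still running"                  # this line is reached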

Oinks@lemmy.blahaj.zone on 12 Feb 15:18

This is a great article. I just want to highlight this insane behavior in particular (slightly dramatized):

set -e

safeDelete() {
  false

  # Surely we don't reach this, right?
  echo "rm $@ goes brr..."
}

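# NB: because safeDelete is called as an 'if' condition below, set -e is
# suspended inside the entire function, so the 'false' above does not
# abort it and the echo runs anyway.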
if safeDelete all of my files; then
    : # do more stuff
fi

Frankly, if you actually need robustness (which is not always the case), you should be using a real programming language with exceptions or result types or both (i.e. not C). UNIX processes are just not really up to the task.

Paragone@lemmy.world on 12 Feb 02:41

EXCELLENT Article!

Now interested in Notifox, that person’s pet-project, too…

( :

Ephera@lemmy.ml on 12 Feb 03:15

What I always find frustrating about that is that even a colleague with much more Bash experience than me will ask me what those options are if I slap a set -euo pipefail or similar in there.

I guess I could prepare a snippet like in the article, with proper comments instead:

set -e          # exit when a command fails
set -u          # treat expansion of an unset variable as an error
set -o pipefail # a pipeline fails if any command in it fails, not just the last
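
To illustrate the last one, since it’s the least obvious:

set -o pipefail
false | true
echo $?   # prints 1; without pipefail it would print 0, the status of 'true'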

Maybe with the whole trapping thing, too.

But yeah, will have to remember to use that. Most Bash scripts start out as just quickly trying something out, so it’s easy to forget setting the proper options…
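
For the trap, a minimal version could look like this (the message is just an example):

set -Eeuo pipefail   # -E so the ERR trap also fires inside functions
trap 'echo "error: line $LINENO, exit status $?" >&2' ERR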

vext01@lemmy.sdf.org on 12 Feb 07:25

Problem is, -o pipefail isn’t portable.

SlykeThePhoxenix@programming.dev on 12 Feb 07:46

I only use -eu. -o pipefail sometimes breaks things.

FizzyOrange@programming.dev on 12 Feb 07:16

If you think you need this you’re doing it wrong. Nobody should be writing bash scripts more than a few lines long. Use a more sane language. Deno is pretty nice for scripting.

cr1cket@sopuli.xyz on 12 Feb 12:34

Let me just drop my materials for a talk I’ve given about basically this topic: codeberg.org/flart/…/you_suck.md

Mainly because the linked article is all nice and dandy, but it completely ignores the topic of double brackets and why they’re nice.
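
For the unfamiliar, the main draw is that [[ ]] doesn’t word-split or glob unquoted expansions:

var="two words"
[ -n $var ] && echo "single"    # breaks: unquoted $var splits into two arguments
[[ -n $var ]] && echo "double"  # fine: no word splitting or globbing inside [[ ]]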

And also, and this is my very strong opinion: if you end up thinking about exception handling (like the mentioned traps) in shell scripts, you should stop immediately and switch to a proper programming language.

Shell scripts are great, I love them. But they have an area they’re good for, and a lot of areas where they aren’t.

MonkderVierte@lemmy.zip on 12 Feb 16:06

Do you need POSIX compatibility?

  • If not, use bash-isms without shame

But call it a bash script then! Remember: #!/bin/sh is run by all kinds of shells, so treat it as POSIX only. Bash is #!/bin/bash.
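
Concretely (the first line’s result depends on what /bin/sh points to):

sh -c 'x=(1 2 3); echo "${x[0]}"'     # fails where /bin/sh is dash: arrays are a bashism
bash -c 'x=(1 2 3); echo "${x[0]}"'   # prints 1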

MonkderVierte@lemmy.zip on 12 Feb 14:01

When to use what

My advice is to optimize for read- and understand-ability.

This means to use the || operator when the fallback/recovery step is short, such as printing an error or exiting the program right away.

On the flip side, there are many cases where an if else statement is preferred due to the complexity of handling the error.

Fully agree. Shell scripts quickly get ugly over 50 LOC. Please avoid spaghetti code in shell scripts too. The usual

if [ -n "$var" ]; then
    xyz "$var"
fi

is ok once or twice. But if you have tens of them,

[ -n "$var" ] && xyz "$var"

is more readable. Or drop the check entirely if xyz reports the error itself.

And please, do use functions. Especially for error handling, and also for repeated patterns. For example the above: if it’s always xyz, then something like

checkxyz() { [ -n "$1" ] && xyz "$1"; }

checkxyz "$var1" && abc
checkxyz "$var2" && 123
checkxyz "$var3 || error "failed to get var3" 2

is more readable.

And sometimes, a function is better for readability, even if you use it only once. For example, from one of my bigger scripts (I should have done it in Python):

full_path() {
  case "$1" in
    /*)  printf "%s\n" "${1%/}";;        # already absolute: just strip a trailing slash
    *)   printf "%s\n" "$PWD/${1%/}";;   # relative: prefix the current directory
  esac
}

olafurp@lemmy.world on 12 Feb 18:49

I like using bash a lot for terminal automation, but as soon as anything goes beyond around 7-15 lines I reach for a scripting language like Python or JS. Bash is just really hard and counterintuitive.

confusedpuppy@lemmy.dbzer0.com on 13 Feb 02:02

I’m curious why there seems to be such hostility toward scripts that are more than X number of lines. The number of lines considered the threshold before moving to a higher-level language is never the same from one person to the next, either.

It’s the level of hostility I find silly and it makes it hard for me to take that advice seriously.