grumpper
What environment and shell are task commands executed in?
I am trying to evaluate the result of a command that pipes through tee, so that the command's output is printed to stdout and saved to a file at the same time.
The idea is to then invoke a script that logs the outcome of the command, depending on its exit code, to a file.
BUT:
If there are pylint findings, pylint exits with code 16, so a non-zero one, but the end result of the pipe is 0 (because tee succeeds)... Now how can I catch that in Taskfiles?
If I check $? it's 0... In order to catch it correctly I must use set: [pipefail].
But if I use it, the task errors out and never reaches the script that logs the result (not to mention that it interrupts the whole task execution)...
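For reference, a minimal Taskfile sketch of the pipefail situation described above (the task name, src/ path, and log_result.sh script are made up for illustration):

```yaml
# Hypothetical sketch: with pipefail, the pipeline's exit code becomes
# pylint's 16, so the first cmd fails and the logging script never runs.
version: '3'

set: [pipefail]

tasks:
  lint:
    cmds:
      - pylint src/ | tee pylint.log
      - ./log_result.sh   # never reached when pylint reports findings
```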
Ok, so I will ignore the error instead using ignore_error: true.
Still a problem though, as ignore_error is not actually ignoring anything: it is overriding the exit code to be 0 instead of 16. A more appropriate name would have been override_error 🙂
Ok, so let's handle this the appropriate way by utilizing ${PIPESTATUS[0]} (${pipestatus[1]} in zsh): this way the execution will not fail, but I will still catch that pylint's exit code is non-zero.
Awesome! But whatever environment or shell this is running in does not set the pipe status at all...
In fact, if you simply run set, it outputs nothing...
So... how can I untangle this error-handling mess?
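One portable workaround for a shell without PIPESTATUS is to skip the pipe entirely: redirect the linter's output to the log file, remember its exit code, then cat the log so the output still reaches stdout. This is a sketch, not the only option; fake_lint below is a stand-in for pylint so it runs anywhere:

```shell
#!/bin/sh
# Portable sketch: no pipe means no PIPESTATUS is needed.
# fake_lint stands in for pylint; it prints a finding and exits 16.
fake_lint() { echo "lint finding"; return 16; }

rc=0
fake_lint > lint.log 2>&1 || rc=$?  # remember the real exit code
cat lint.log                        # output still reaches stdout, just afterwards
echo "linter exit code: $rc"        # 16 here, not tee's 0
```

The logging script can then be invoked unconditionally with $rc, since the command line itself no longer fails.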
Continue on error
Is there a functionality that could make a task continue execution even when there is an error?
Context:
I am trying taskfile to do CI checks.
So I am currently doing python (pylint & bandit) & terraform (tflint & trivy) checks.
I migrated the checks for each tech into an included Taskfile like this:
So in each of these Taskfiles each check is a separate task.
For example, the pylint checks can be executed as python_tasks:run_pylint, etc.
Naturally I set up the default task to depend on each check (example with the python Taskfile):
Now I have the below tasks defined to trigger the included ones:
So when I run the task I want all the checks to happen (regardless if one of them fails).
Again, an example with the python tasks: if the run_pylint check fails because there are pylint findings, the run_bandit one should still execute.
And the same logic for the parent Taskfile: if the python_checks task fails, the terraform_checks one still executes.
What I have seen so far is that the whole execution halts on error, so even on python check issues the terraform checks also stop. Which defeats the whole purpose...
Now I could work around all that by setting ignore_error: true on every task, but then when this all runs in CI/CD everything will just exit with 0 (no matter the findings), so all checks will be marked successful (even if they weren't)...
What are my options here?
TL;DR: How can I make deps finish even when an error occurs in one of them?
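One pattern that gets the "run everything, fail at the end" behaviour is to drop deps for the top-level CI task and aggregate the exit codes in a single shell command instead. The task names below mirror the ones in the question; this is a sketch under the assumption that the task CLI is available inside cmds, not the only way to do it:

```yaml
version: '3'

tasks:
  ci:
    cmds:
      - |
        rc=0
        task python_tasks:run_pylint    || rc=1
        task python_tasks:run_bandit    || rc=1
        task terraform_tasks:run_tflint || rc=1
        task terraform_tasks:run_trivy  || rc=1
        exit $rc   # non-zero if any check failed, so CI/CD still goes red
```

Every check runs regardless of earlier failures, and the final exit code still reflects whether anything failed.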
dotenv variable value calculation
Hello !
I cannot find anything in the docs so I decided to ask here:
Is there some sort of priority in the dotenv declaration?
Like if I declare it to be ['.env', '$HOME/.env'] and have DEPLOY="true" in .env but DEPLOY="false" in $HOME/.env, what will be the value of DEPLOY in the end?
- true, because .env is first so its values take precedence?
- false, because $HOME/.env is last so its value overrides the rest?