We have a masked variable, for example SECRET_JAMES_BOND with the value /vault/abc. During pipeline execution, the log shows it as [MASKED]. When we changed the value to ABCDE, the value was shown. Even if we don't change it, we can easily get the value from the log by running SECRET_JAMES_BOND=$(echo $SECRET_JAMES_BOND | base64). Echoing the base64 value displays it in the UI because the value has changed. When we copy the encoded base64 string and decode it, we get the actual secret. How can we prevent a masked variable from being echoed if its value has been changed? Shouldn't masked variables show [MASKED] even when the value differs from its original value?
How can we prevent a masked variable from being echoed if its value has been changed?
In short: you can't if someone with access is determined to do so.
Masking is not meant to prevent developers who can control CI steps from revealing secrets through jobs. It is meant to prevent disclosure by accident. There are endless ways a value could be exfiltrated through job output; it would be impossible to cover them all.
If you want to prevent the base64 representation of the secret from being shown, you can register that value as another masked variable, but it must be done in advance.
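For example, here is a minimal Python sketch (the secret value is a placeholder) of computing the base64 forms you would register ahead of time; note that a shell pipeline like echo $SECRET | base64 encodes a trailing newline, so the registered value has to match that exact form:

import base64

secret = "/vault/abc"  # placeholder; compute this locally, never inside a CI job
plain_b64 = base64.b64encode(secret.encode()).decode()            # base64 of the raw value
echoed_b64 = base64.b64encode((secret + "\n").encode()).decode()  # what `echo | base64` would produce
print(plain_b64, echoed_b64)

Both forms would then need to be added as masked variables before any job that might print them runs.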
Shouldn't masked variables show [MASKED] even when the value differs from its original value?
Good arguments could be made that GitLab should mask some common variations of masked variables, such as base64, url-encoded, backwards, etc. Other CI services (Travis CI for example) do this.
However, this would still ONLY help against accidental disclosure.
For example, another way I've seen developers accidentally reveal secrets is by using curl with the -v flag.
script:
- curl -v https://myusername:${SECRET_JAMES_BOND}@myhost.example.com/secret
In the above example, the output of curl will include (in part) a line like:
> Authorization: Basic bXl1c2VybmFtZTpzZWNyZXQ=
So, if GitLab also masked base64 variations of a password, it would have prevented this accidental disclosure.
But as I mentioned, there are endless ways to output a secret: url-encoding, a Caesar cipher, rot-N (rot16, rot24, rot32, etc.), or even custom ways like just echoing each character one line at a time, potentially in reverse order.
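To make the point concrete, here is a rough illustrative sketch (plain Python, placeholder value) of how trivially a secret can be transformed so that no registered masked form appears literally in the job output:

import base64, codecs, urllib.parse

secret = "/vault/abc"  # placeholder value
variations = {
    "base64": base64.b64encode(secret.encode()).decode(),
    "url-encoded": urllib.parse.quote(secret, safe=""),
    "reversed": secret[::-1],
    "rot13": codecs.encode(secret, "rot_13"),
    "one character per line": "\n".join(secret),
}
for name, value in variations.items():
    print(name, "->", value)

Every one of these sails straight past masking, which only matches the literal registered string.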
So, really, you can only realistically prevent accidents, not malicious exfiltration by a user with access to execute code in a job.
Related
My team is working to integrate an infrastructure-as-code scanning solution into our build pipelines, and we've discovered that the string "GCP" is being replaced with three asterisks when tasks are executed in our build pipelines. This isn't unique to one task either: I created a bash script to list out our repository, and all directories that start with "GCP" are replaced by the three asterisks. The only variable set using the "GCP" value is the "system.teamProject" variable, we are not using any secret values that I know of, and there are no variable groups used.
Any help would be greatly appreciated. Thanks!
Screenshots: "Bash Asterisk Output 'ls -a'" and "IaC Scanning Asterisk Task Failure"
If you have set any secret variables in your pipeline, or have linked any variable groups that contain secret variables (including secrets from connected external and remote services), the values of these secrets will generally be masked as asterisks.
When you try to print the values of the secrets to the output logs, the values will display as asterisks in the logs. If you try to output the values into a text file, the values will still display as asterisks in the file.
In addition, if a string is not set as a secret but some of its substrings are the values of existing secrets in the pipeline, those substrings may be masked as asterisks when the string is output.
If you have not set any secrets, then so that we can investigate this issue further, would you mind sharing the actual value that was masked as asterisks in the logs? We will investigate and evaluate whether this string contains any special or sensitive characters that may be automatically identified as secrets by Azure DevOps.
There is quite a common issue in the Unix world: when you start a process with parameters, one of them being sensitive, other users can read it just by executing ps -ef (for example, mysql -u root -psecret_pw).
The most frequent recommendation I found was simply not to do that: never run processes with sensitive parameters; pass that information some other way instead.
However, I found that some processes have the ability to change the parameter line after they have processed the parameters, so that they look, for example, like this in the process list:
xfreerdp -decorations /w:1903 /h:1119 /kbd:0x00000409 /d:HCG /u:petr.bena /parent-window:54526138 /bpp:24 /audio-mode: /drive:media /media /network:lan /rfx /cert-ignore /clipboard /port:3389 /v:cz-bw47.hcg.homecredit.net /p:********
Note the /p:******** parameter, where the password was removed somehow.
How can I do that? Is it possible for a process on Linux to alter the argument list it received? I assume that simply overwriting the char **argv I get in the main() function wouldn't do the trick. I suppose that maybe changing some files in the /proc pseudo-filesystem might work?
"hiding" like this does not work. At the end of the day there is a time window where your password is perfectly visible so this is a total non-starter, even if it is not completely useless.
The way to go is to pass the password in an environment variable.
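For example, a minimal Python sketch of that approach; the client program and the variable name are hypothetical stand-ins for whatever tool you are launching (some real tools support this directly, e.g. mysql reads MYSQL_PWD from the environment):

import os, subprocess

# Put the secret in the child's environment, not on its command line,
# so `ps -ef` only ever shows a harmless argument list.
child_env = dict(os.environ)
child_env["APP_SECRET"] = "secret_pw"  # hypothetical variable name and value
subprocess.run(["/usr/local/bin/myclient", "--user", "root"],  # hypothetical client that reads APP_SECRET
               check=True, env=child_env)

Keep in mind that the environment is still visible to the same user and to root via /proc/<pid>/environ, so this only hides the secret from other unprivileged users.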
I want to run a script which calls tpm_sealdata many times and I don't want to enter the SRK password each time.
In the man page I found this:
-z, --well-known
Use TSS_WELL_KNOWN_SECRET (20 zero bytes) as the SRK password.
You will not be prompted for the SRK password with this option.
However, I couldn't figure out which value I have to use as TSS_WELL_KNOWN_SECRET.
As the name of the constant implies, the value of TSS_WELL_KNOWN_SECRET is well known. It is just 20 bytes of zero.
But you don't actually need the value. The -z option does not require a value, it's just a switch to tell the program to use the well known secret. The help text you cite also states this fact.
So a call to tpm_sealdata might look like this:
tpm_sealdata -z -i data.in -o data.out
However, to use this method the SRK must of course have been created with the well-known secret, e.g. by taking ownership with tpm_takeownership:
tpm_takeownership -z
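Once ownership has been taken with the well-known secret, the batch job from the question is straightforward. Here is a rough Python sketch (the file pattern and output naming are made up) that seals many files without a single password prompt:

import glob, subprocess

# Seal every *.bin file in the current directory with the well-known SRK secret (-z),
# so tpm_sealdata never asks for a password.
for path in glob.glob("*.bin"):  # hypothetical input files
    subprocess.run(["tpm_sealdata", "-z", "-i", path, "-o", path + ".sealed"], check=True)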
Is it safe to pass a key to the openssl command via command-line parameters in Linux? I know it nulls out the actual parameter so that it can't be viewed via /proc, but even with that, is there some way to exploit it?
I have a Python app in which I want to use OpenSSL to do the encryption/decryption through stdin/stdout streaming in a subprocess, but I want to know that my keys are safe.
Passing the credentials on the command line is not safe. It will result in your password being visible in the system's process listing - even if openssl erases it from the process listing as soon as it can, it'll be there for an instant.
openssl gives you a few ways to pass credentials in - the man page has a section called "PASS PHRASE ARGUMENTS", which documents all the ways you can pass credentials into openssl. I'll explain the relevant ones:
env:var
Lets you pass the credentials in an environment variable. This is better than using the process listing, because on Linux your process's environment isn't readable by other users by default - but this isn't necessarily true on other platforms.
The downside is that other processes running as the same user, or as root, will be able to easily view the password via /proc.
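You can see this yourself on Linux; a small sketch that inspects a process's environment (here the current process, but any pid you own, or any pid at all as root, works the same way):

import os

pid = os.getpid()  # placeholder: substitute the pid of the process you want to inspect
# /proc/<pid>/environ is a NUL-separated list of VAR=value entries
with open("/proc/%d/environ" % pid, "rb") as f:
    entries = f.read().split(b"\0")
print([e for e in entries if e.startswith(b"MY_PASSWORD_VAR=")])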
It's pretty easy to use with python's subprocess:
import os, subprocess

# Copy the parent environment and add the secret only to the child's copy
new_env = dict(os.environ)
new_env["MY_PASSWORD_VAR"] = "my key data"
p = subprocess.Popen(["openssl", ..., "-passin", "env:MY_PASSWORD_VAR"], env=new_env)
fd:number
This lets you tell openssl to read the credentials from a file descriptor, which it will assume is already open for reading. By using this you can write the key data directly from your process to openssl, with something like this:
import os, subprocess

r, w = os.pipe()
# pass_fds keeps the read end open in the child (Python 3 closes other fds by default)
p = subprocess.Popen(["openssl", ..., "-passin", "fd:%i" % r], pass_fds=(r,))
os.close(r)                    # the parent no longer needs the read end
os.write(w, b"my key data\n")  # bytes, not str
os.close(w)
This will keep your password secure from other users on the same system, assuming that they are logged in with a different account.
With the code above, you may run into issues with the os.write call blocking. This can happen if openssl waits for something else to happen before reading the key in. It can be addressed with asynchronous I/O (e.g. a select loop) or with an extra thread to do the write() and close().
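For example, a minimal sketch of the extra-thread variant, reusing the r and w descriptors from the snippet above:

import os, threading

def feed_key(fd, data):
    os.write(fd, data)  # may block until openssl starts reading
    os.close(fd)

threading.Thread(target=feed_key, args=(w, b"my key data\n")).start()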
One drawback of this is that it doesn't work if you pass close_fds=True to subprocess.Popen on a Python version without the pass_fds argument, because older versions of subprocess had no way to say "don't close one specific fd". If you're in that situation, I'd suggest using the file: syntax (below) with a named pipe.
file:pathname
Don't use this with an actual file to store passwords! That should be avoided for many reasons, e.g. your program may be killed before it can erase the file, and with most journalling file systems it's almost impossible to truly erase the data from a disk.
However, if used with a named pipe with restrictive permissions, this can be as good as using the fd option above. The code to do this will be similar to the previous snippet, except that you'll need to create a fifo instead of using os.pipe():
# pathToFifo should be a fifo created with restrictive permissions (hypothetical helper)
pathToFifo = my_function_that_securely_makes_a_fifo()
p = subprocess.Popen(["openssl", ..., "-passin", "file:%s" % pathToFifo])
fifo = open(pathToFifo, 'w')   # blocks until openssl opens the fifo for reading
print("my key data", file=fifo)
fifo.close()
The print here can have the same blocking I/O problems as the os.write call above; the resolutions are also the same.
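If you want a concrete version of the fifo helper, here is a minimal sketch (the function name matches the placeholder above; a private temporary directory is just one way to get restrictive permissions):

import os, tempfile

def my_function_that_securely_makes_a_fifo():
    # tempfile.mkdtemp() creates a fresh directory with mode 0700,
    # and the fifo itself gets 0600 so only this user can open it.
    private_dir = tempfile.mkdtemp()
    path = os.path.join(private_dir, "openssl_pass.fifo")
    os.mkfifo(path, 0o600)
    return path

Remember to remove the fifo and its directory once openssl has read the key.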
No, it is not safe. No matter what openssl does with its command line after it has started running, there is still a window of time during which the information is visible in the process' command line: after the process has been launched and before it has had a chance to null it out.
Plus, there are many ways for an accident to happen: for example, the command line gets logged by sudo before it is executed, or it ends up in a shell history file.
Openssl supports plenty of methods of passing sensitive information so that you don't have to put it in the clear on the command line. From the manpage:
pass:password
the actual password is password. Since the password is visible to utilities (like 'ps' under Unix) this form should only be used where security is not important.
env:var
obtain the password from the environment variable var. Since the environment of other processes is visible on certain platforms (e.g. ps under certain Unix OSes) this option should be used with caution.
file:pathname
the first line of pathname is the password. If the same pathname argument is supplied to -passin and -passout arguments then the first line will be used for the input password and the next line for the output password. pathname need not refer to a regular file: it could for example refer to a device or named pipe.
fd:number
read the password from the file descriptor number. This can be used to send the data via a pipe for example.
stdin
read the password from standard input.
All but the first two options are good.
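For completeness, the stdin method is also easy to drive from Python; a rough sketch, with the other openssl arguments elided just as in the snippets above:

import subprocess

# The passphrase arrives on the child's standard input; nothing shows up
# in the process listing or in the environment.
subprocess.run(["openssl", ..., "-passin", "stdin"], input=b"my key data\n", check=True)

Note that this only works if you aren't already streaming the data itself over stdin; in that case fd: or a named pipe is the better fit.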
I have changed the Configure::write('Security.salt', '############'); value in the file
config/core.php
to a '256-bit hex key'. Is it safe or good practice to change these lines for every different installation of a CakePHP application, or should I revert back to the original?
I also changed Configure::write('Security.cipherSeed', '7927237598237592759727'); to a different, longer value.
Please throw some light on this.
Thanks
It is absolutely necessary that you change the salt values. When you do a clean install of CakePHP the default home page will give a warning if you have not changed the salt value.
On the salt length, see this discussion: What is the optimal length for user password salt?
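If you need to generate fresh values for each installation, here is a quick sketch (Python used here purely as a convenient random-string generator; cipherSeed is conventionally a digits-only string in core.php):

import secrets

# Security.salt: a long random string (64 hex characters ~ 256 bits)
print("Security.salt:", secrets.token_hex(32))
# Security.cipherSeed: a random string of digits
print("Security.cipherSeed:", "".join(secrets.choice("0123456789") for _ in range(30)))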