How to authenticate with Google Cloud in a Docker container running Serverless?
I'm trying to authenticate using a service account JSON key file from inside a Docker container so I can store objects in GCS. I've added the JSON file's contents as a Secret, but without success. Am I missing something here, or how would you advise me to authenticate?
Update: Looks like the entire JSON file's contents don't fit in a Secret, which explains why it doesn't work. Still, I'd like to find a way to authenticate.
18 Replies
I've had the same problem. Haven't found a safe workaround yet.
How do you authenticate to Google Cloud Storage?
what library are you using to upload files?
btw I found this, might be worth trying:
https://cloud.google.com/docs/authentication/provide-credentials-adc#on-prem
yep the env won't fit the entire JSON file's contents, but this way it'll work:

Set the environment variable GOOGLE_APPLICATION_CREDENTIALS to the path of the JSON file that contains your credentials. This variable applies only to your current shell session, so if you open a new session, set the variable again.

For example:
export GOOGLE_APPLICATION_CREDENTIALS="/service-folder/service-account-file.json"
Thanks @nerdylive. I'm using GOOGLE_APPLICATION_CREDENTIALS the way you suggest during local development. However, for deployment this would require me to keep the JSON file inside the Docker container, which is not great practice from a security perspective.
This is how I read the credentials and upload the objects:
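Roughly along these lines (a sketch only, assuming the google-cloud-storage client library; the bucket and object names are placeholders, not my real config):

```python
# Sketch: upload a local file to GCS with the google-cloud-storage client.
def upload_to_gcs(local_path: str, bucket_name: str, blob_name: str) -> None:
    # Lazy import so this module still loads without the library installed.
    from google.cloud import storage  # pip install google-cloud-storage

    # storage.Client() picks up GOOGLE_APPLICATION_CREDENTIALS automatically.
    client = storage.Client()
    bucket = client.bucket(bucket_name)
    bucket.blob(blob_name).upload_from_filename(local_path)

# usage: upload_to_gcs("/tmp/result.png", "my-bucket", "outputs/result.png")
```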
@zacksparrow would it be possible to extend the Secret length to 4000 characters similar to the Registry Credentials? That way I would be able to read the credentials directly as follows:
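With a 4000-character Secret, the idea would be something like this (a sketch; the secret/env var name GCP_SA_KEY is made up):

```python
import json
import os


def load_sa_info(env_var: str = "GCP_SA_KEY") -> dict:
    """Parse the service-account JSON stored directly in a secret/env var."""
    return json.loads(os.environ[env_var])


# With google-auth installed, the parsed dict becomes credentials without
# ever writing the key file into the container image:
# from google.oauth2 import service_account
# creds = service_account.Credentials.from_service_account_info(load_sa_info())
```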
Try this if you're following a good security practice @sallok00
I'm trying the service account approach as I explained above, but it's not working. Workload Identity Federation seems a little bit trickier (and it's not clear whether that key is shorter), so I think I'll just wait to see if they can extend the Secret length to 4k instead; it seems like a potentially easier solution 🙂
Alright then
Friendly ping @flash-singh, is this something you could help out with? Similar to discussed here:
https://discord.com/channels/912829806415085598/1195092101671690260/1195092101671690260
thanks, will bring this up, should be easy. Does it let you save the secret, or does it save the secret and truncate some parts?
truncates
lets you save, but the value is truncated when you retrieve it
I save a base64-encoded JSON credentials file (à la service-account.json) in a secret (previously an env var). I opened a ticket a while back to have the limit increased to support my use case. You can look for and decode the file in your entrypoint. If you have questions or anything, please @ me, because I start work soon and might not see your messages.
This code snippet might help. It's easy to accomplish in any language; it could be done in straight bash easily.
@sallok00 Let me know if this helps
Thanks @MuddyRumbles, and sorry for my slow reply. Looks like I somehow missed your ping.
I've also tried to save it as a secret, but it gets truncated. For now I'm doing something similar to what you do above: saving the .json file in the container and reading it directly. This isn't ideal in the longer term, though, due to the security risk.
The truncation is fixed
in runpod secrets
oh, what's the limit now?
Not sure, but my GCP service creds fit now though
we have increased the field; it was a MySQL thing and kept truncating the data
Amazing - thanks a lot!