Recommended way of connecting datadog-agent
I saw a thread that the template is broken. What's the recommended way to do it?
Project ID:
120b5ec5-59d8-4087-84ae-4e0b3d934aa7
have you tried the template yourself?
yes, started running it but trying to figure out how it would be able to ingest information from the server instance which is separate
How does networking work across different "Github Repos" that are running?
I'm asking a Railway question, not a Datadog question. If Datadog wants to pull data from a Docker application, I'm pretty sure it needs networking: https://docs.datadoghq.com/containers/docker/apm/?tab=linux#tracing-from-other-containers. From what I understand, Railway doesn't support docker-compose, but I don't know whether the services that are spun up in a single application are actually connected on a network, which is why I'm asking.
No, there is no internal networking at the moment.
Ideally you will want to run the datadog agent alongside your application so the agent can monitor the program inside the container, and that's why I sent the link to the datadog docs.
I personally don't see the point of running a separate service for the agent and then having your app call the agent's API when your app could call the Datadog API directly instead. That's why it makes more sense to me to run the agent in the same container as your app.
I was just worried about coupling of memory management, since the Datadog agent hogs some memory by default and it was unclear to me how much resource Railway allocates to the server pod.
But I will likely follow https://www.agiliq.com/blog/2021/07/django-apm-with-datadog/
So that the server instance just runs Datadog inside itself
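The linked blog's approach boils down to installing Datadog's Python tracer and wrapping the normal start command, so traces go to the agent running in the same instance. A rough sketch of those commands (service names invented, not from this thread; `ddtrace-run` and the `DD_*` variables are from Datadog's ddtrace docs):

```
# Hypothetical sketch of the ddtrace-based approach from the linked blog.
# Install the Python tracer:
pip install ddtrace

# Then wrap the usual server command so requests are traced automatically
# (service/env names here are invented for illustration):
DD_SERVICE=myapp DD_ENV=prod ddtrace-run gunicorn myproject.wsgi
```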
On the dev plan you get 8gb
I see I see
it's 8vcpu right?
Correct
What's the default nixpacks used for Django? I wanted to modify it but I didn't want to change anything with the existing things that it does
Not quite sure what you mean
I would want to run
Basically somewhere to customize how the container image is built. Based on the screenshot provided, it seemed like Railway detects the language and by default uses some template setup script (which seems to be a nixpacks plan). Is my understanding wrong?
Fyi - I haven't added a Dockerfile or a nixpacks file in the root of my project
Ah got it, to chain build / install scripts you do need to add a nixpacks.toml file
https://nixpacks.com/docs/configuration/file
Read the whole page and you'll be set
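For instance, a nixpacks.toml that chains an extra install step onto the defaults might look roughly like this. The `"..."` entry is nixpacks' documented way of keeping the default commands; the Datadog line is an assumption based on Datadog's install script, not something confirmed in this thread:

```toml
# Hypothetical sketch: extend the default install phase with a Datadog
# agent install, without replacing anything nixpacks does by default.
[phases.install]
cmds = [
  "...",  # keep nixpacks' default install command(s)
  "DD_INSTALL_ONLY=true bash -c \"$(curl -L https://install.datadoghq.com/scripts/install_script_agent7.sh)\""
]
```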
Is there a default nixpacks that gets used behind the scenes already for our app (Python + Django + psql + Redis)? Just so that I have something as a reference
You have the wrong idea, you don't need to define an entire nixpacks.toml file, just define anything extra you need
Read the page and you'll understand
Hmm ok on it
For example if you don't define any providers all the defaults will be used
ah I see.
How can I leverage env variables in the script?
Just regular shell script $<env_var_name>?
That's a good question
Let me look into it
Will take a bit, I'm on mobile so I can't do much
you can set DD_API_KEY and whatever else is needed in the service variables, then the variables will be available to that datadog install script, no need to define any variables in the nixpacks.toml file
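Mechanically this is just ordinary shell expansion; a minimal sketch (variable names from this thread, values invented):

```shell
# Sketch with invented values: service variables arrive as ordinary
# environment variables, so an install script reads them with plain
# $VAR expansion; no extra plumbing in nixpacks.toml is needed.
export DD_API_KEY="dummy-key"
export DD_AGENT_MAJOR_VERSION="7"
echo "installing agent v${DD_AGENT_MAJOR_VERSION} with key ${DD_API_KEY}"
```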
...
will run the default install command(s)
I ended up just doing it in the Procfile because I didn't want to experiment with nixpacks, but figured it should work too
For future reference, I just did:
1. Set DD_API_KEY and DD_AGENT_MAJOR_VERSION in the env
2. The following for Procfile
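For illustration only, a hypothetical Procfile along these lines (the app command and install URL are assumptions, not the author's actual file):

```
web: sh -c 'DD_INSTALL_ONLY=true bash -c "$(curl -L https://install.datadoghq.com/scripts/install_script_agent7.sh)" && datadog-agent run & exec gunicorn myproject.wsgi'
```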
you'd want to install the dd agent at build time, the Procfile runs after build
Yeah haha will change it
sounds good, glad you got it working though!
How do we run it inside? Should we create a Dockerfile and add the agent in the Dockerfile? Do you have any resource? The Datadog agent template is misleading
I'd recommend exactly what you are proposing: use a Dockerfile and run the agent alongside the app
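A hypothetical single-stage Dockerfile for that setup (image name, app command, and install details are assumptions, not from this thread; the Datadog install script targets glibc distros, hence a Debian-based image):

```dockerfile
# Hypothetical sketch: install the Datadog agent into the app image
# and start both processes from one container.
FROM node:16-bullseye
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
# DD_INSTALL_ONLY=true installs the agent without starting it at build
# time; the placeholder key is replaced by the real DD_API_KEY at runtime.
RUN apt-get update && apt-get install -y curl \
 && DD_INSTALL_ONLY=true DD_API_KEY=placeholder \
    bash -c "$(curl -L https://install.datadoghq.com/scripts/install_script_agent7.sh)"
# Run the agent in the background, then the app as the main process.
CMD ["sh", "-c", "datadog-agent run & exec node server.js"]
```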
Hi! Following @Rafal's suggestion I tried running a Datadog agent and an app from the same Dockerfile (two stages, since we have FROM datadog/agent:7 for the agent and FROM node:16.6.1-alpine3.14 for the app).
Locally it works perfectly fine; both stages work. But on Railway, only the second one is run: the app one. The Datadog agent stage is discarded. Any idea why?
you may be starting the agent in the build phase
How do you think I should proceed ?
if you don't mind, could you open your own help thread, and then show us your dockerfile?