I want to add X-Ray to my Fargate service. Everything works (synth/deploy), but in the logs I'm seeing the following error:
2022-02-07T13:38:22Z [Error] Sending segment batch failed with:
AccessDeniedException: 2022-02-07 14:38:22 status code: 403, request
id: cdc23f61-5c2e-4ede-8bda-5328e0c8ac8f
The user I'm using to deploy the application has the AWSXrayFullAccess policy attached.
Do I have to grant the task the permission manually? If so, how?
Here is a snippet of the application:
const cdk = require('@aws-cdk/core');
const ecs = require('@aws-cdk/aws-ecs');
const ecsPatterns = require('@aws-cdk/aws-ecs-patterns');
class API extends cdk.Stack {
  constructor(parent, id, props) {
    super(parent, id, props);

    this.apiXRayTaskDefinition = new ecs.FargateTaskDefinition(this, 'apixRay-definition', {
      cpu: 256,
      memoryLimitMiB: 512,
    });

    this.apiXRayTaskDefinition.addContainer('api', {
      image: ecs.ContainerImage.fromAsset('./api'),
      environment: {
        "QUEUE_URL": props.queue.queueUrl,
        "TABLE": props.table.tableName,
        "AWS_XRAY_DAEMON_ADDRESS": "0.0.0.0:2000"
      },
      logging: ecs.LogDriver.awsLogs({ streamPrefix: 'api' }),
    }).addPortMappings({
      containerPort: 80
    });

    this.apiXRayTaskDefinition.addContainer('xray', {
      image: ecs.ContainerImage.fromRegistry('public.ecr.aws/xray/aws-xray-daemon:latest'),
      logging: ecs.LogDriver.awsLogs({ streamPrefix: 'xray' }),
    }).addPortMappings({
      containerPort: 2000,
      protocol: ecs.Protocol.UDP,
    });

    // API
    this.api = new ecsPatterns.ApplicationLoadBalancedFargateService(this, 'api', {
      cluster: props.cluster,
      taskDefinition: this.apiXRayTaskDefinition,
      desiredCount: 2,
      cpu: 256,
      memory: 512,
      createLogs: true
    });

    props.queue.grantSendMessages(this.api.service.taskDefinition.taskRole);
    props.table.grantReadWriteData(this.api.service.taskDefinition.taskRole);
  }
}
The user I'm using to deploy the application has the AWSXrayFullAccess policy attached.
This is irrelevant; the task does not get the rights of the user who deploys the stack.
Yes, you need to add the required permissions to the task's role:

const iam = require('@aws-cdk/aws-iam');

this.apiXRayTaskDefinition.taskRole.addManagedPolicy(
  iam.ManagedPolicy.fromAwsManagedPolicyName('AWSXRayDaemonWriteAccess')
);
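If you would rather not attach the whole managed policy, a narrower inline statement should also work. This is a sketch; the action list follows what AWSXRayDaemonWriteAccess grants (see the reference below), so double-check it against the managed policy:

this.apiXRayTaskDefinition.taskRole.addToPolicy(new iam.PolicyStatement({
  actions: [
    'xray:PutTraceSegments',    // uploading segment batches (the call failing in the logs)
    'xray:PutTelemetryRecords', // daemon health/telemetry
    'xray:GetSamplingRules',
    'xray:GetSamplingTargets',
  ],
  resources: ['*'], // the managed policy also uses "*" here
}));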
References:
AWS managed policy with required access for the X-Ray daemon: https://docs.aws.amazon.com/xray/latest/devguide/security_iam_id-based-policy-examples.html#xray-permissions-managedpolicies
Import an AWS-managed policy: https://docs.aws.amazon.com/cdk/api/v1/docs/#aws-cdk_aws-iam.ManagedPolicy.html#static-fromwbrawswbrmanagedwbrpolicywbrnamemanagedpolicyname
Access the task role: https://docs.aws.amazon.com/cdk/api/v1/docs/#aws-cdk_aws-ecs.FargateTaskDefinition.html#taskrole-1
Add a policy: https://docs.aws.amazon.com/cdk/api/v1/docs/#aws-cdk_aws-iam.IRole.html#addwbrmanagedwbrpolicypolicy
I am trying to filter all incoming requests to the Spring Boot application based on a JWT token.
I have the OPA configuration below, but my Spring Boot application is still not getting the fields added in the OPA configuration.
I am expecting those fields in Spring Boot, and based on them I will take action.
Am I missing anything? Please help me to solve this.
If anyone has a working version of the OPA configuration, please do share.
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: opa-test
spec:
  httpPipeline:
    handlers:
      - name: opa-policy
        type: middleware.http.opa
---
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: opa-policy
spec:
  type: middleware.http.opa
  version: v1
  metadata:
    - name: includedHeaders
      value: "x-my-custom-header, x-jwt-header"
    - name: defaultStatus
      value: 403
    # `rego` is the Open Policy Agent policy to evaluate. Required.
    # The policy package must be http and the policy must set data.http.allow.
    - name: rego
      value: |
        package http

        default allow = true

        allow = {
          "status_code": 301,
          "additional_headers": {
            "location": "https://my.site/authorize"
          }
        } {
          not jwt.payload["my-claim"]
        }

        # You can also allow the request and add additional headers to it:
        allow = {
          "allow": true,
          "additional_headers": {
            "x-my-claim": my_claim
          }
        } {
          my_claim := jwt.payload["my-claim"]
        }

        jwt = { "payload": payload } {
          auth_header := input.request.headers["Authorization"]
          [_, jwt] := split(auth_header, " ")
          [_, payload, _] := io.jwt.decode(jwt)
        }
I created moodle and mariadb containers with Docker.
Moodle: 3.11.4
Mariadb: 10.3
I am trying to execute the following webservice:
client:
  wwwroot: 'http://localhost:8012',
  service: 'moodle_mobile_app',
  token: '8faf4879d2c654f11e404095032ae382',
  strictSSL: true
call:
curl "http://localhost:8012/webservice/rest/server.php?wstoken=8faf4879d2c654f11e404095032ae382&moodlewsrestformat=json&wsfunction=core_user_get_users_by_field&moodlewsrestformat=json&id=2"
but I am getting the following error:
{"exception":"invalid_parameter_exception","errorcode":"invalidparameter",
"message":"Invalid parameter value detected (Missing required key in single structure:field)",
"debuginfo":"Missing required key in single structure: field"
}
I tried the same with the moodle client for node:
... client.call({ wsfunction: "core_user_get_users_by_field", method: "POST", args: { id: 2 } }) ...
but received the same error.
I checked the API documentation, and id is a valid parameter for this
webservice.
Can you please help?
The issue is resolved. The function expects a field name plus a list of values rather than an id argument (hence the "Missing required key in single structure: field" error):
client.call({
  method: "POST",
  wsfunction: "core_user_get_users_by_field",
  args: {
    field: "id",
    values: ["2"]
  }
}).then(function(info) {
  var str = JSON.stringify(info, null, 4);
  console.log(str);
});
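For the raw REST call from the question, the same fix should presumably look like this (Moodle's REST format passes array parameters as indexed keys; token and port as in the question):

curl "http://localhost:8012/webservice/rest/server.php?wstoken=8faf4879d2c654f11e404095032ae382&moodlewsrestformat=json&wsfunction=core_user_get_users_by_field&field=id&values[0]=2"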
I have set up an elasticsearch/kibana docker configuration and I want to connect to elasticsearch from inside of a docker container using the @elastic/elasticsearch client for node. However, the connection is "timing out".
The project takes inspiration from Patrick Triest: https://blog.patricktriest.com/text-search-docker-elasticsearch/
However, I have made some modifications in order to connect Kibana, use a newer ES image, and use the new elasticsearch node client.
I am using the following docker-compose file:
version: "3"
services:
api:
container_name: mp-backend
build: .
ports:
- "3000:3000"
- "9229:9229"
environment:
- NODE_ENV=local
- ES_HOST=elasticsearch
- PORT=3000
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
container_name: elasticsearch
environment:
- node.name=elasticsearch
- cluster.name=es-docker-cluster
- discovery.type=single-node
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
- "http.cors.allow-origin=*"
- "http.cors.enabled=true"
- "http.cors.allow-headers=X-Requested-With,X-Auth-Token,Content-Type,Content-Length,Authorization"
- "http.cors.allow-credentials=true"
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- data01:/usr/share/elasticsearch/data
ports:
- 9200:9200
networks:
- elastic
kibana:
image: docker.elastic.co/kibana/kibana:7.5.1
ports:
- "5601:5601"
links:
- elasticsearch
networks:
- elastic
depends_on:
- elasticsearch
volumes:
data01:
driver: local
networks:
elastic:
driver: bridge
When building/bringing the containers up, I am able to get a response from ES with curl -XGET "localhost:9200" ("you know, for search"...), and Kibana is running and able to connect to the index.
I have the following file located in the backend container (connection.js):
const { Client } = require("@elastic/elasticsearch");
const client = new Client({ node: "http://localhost:9200" });
/* Check the elasticsearch connection */
async function health() {
  let connected = false;
  while (!connected) {
    console.log("Connecting to Elasticsearch");
    try {
      const health = await client.cluster.health({});
      connected = true;
      console.log(health.body);
      return health;
    } catch (err) {
      console.log("ES Connection Failed", err);
    }
  }
}
health();
If I run it outside of the container then I get the expected response:
node server/connection.js
Connecting to Elasticsearch
{
  cluster_name: 'es-docker-cluster',
  status: 'yellow',
  timed_out: false,
  number_of_nodes: 1,
  number_of_data_nodes: 1,
  active_primary_shards: 7,
  active_shards: 7,
  relocating_shards: 0,
  initializing_shards: 0,
  unassigned_shards: 3,
  delayed_unassigned_shards: 0,
  number_of_pending_tasks: 0,
  number_of_in_flight_fetch: 0,
  task_max_waiting_in_queue_millis: 0,
  active_shards_percent_as_number: 70
}
However, if I run it inside of the container:
docker exec mp-backend "node" "server/connection.js"
Then I get the following response:
Connecting to Elasticsearch
ES Connection Failed ConnectionError: connect ECONNREFUSED 127.0.0.1:9200
    at onResponse (/usr/src/app/node_modules/@elastic/elasticsearch/lib/Transport.js:214:13)
    at ClientRequest.<anonymous> (/usr/src/app/node_modules/@elastic/elasticsearch/lib/Connection.js:98:9)
    at ClientRequest.emit (events.js:223:5)
    at Socket.socketErrorListener (_http_client.js:415:9)
    at Socket.emit (events.js:223:5)
    at emitErrorNT (internal/streams/destroy.js:92:8)
    at emitErrorAndCloseNT (internal/streams/destroy.js:60:3)
    at processTicksAndRejections (internal/process/task_queues.js:81:21) {
  name: 'ConnectionError',
  meta: {
    body: null,
    statusCode: null,
    headers: null,
    warnings: null,
    meta: {
      context: null,
      request: [Object],
      name: 'elasticsearch-js',
      connection: [Object],
      attempts: 3,
      aborted: false
    }
  }
}
So, I tried changing the client connection to this (I read somewhere that it might help):
const client = new Client({ node: "http://172.24.0.1:9200" });
Then I am just "stuck" waiting for a response; I only get the single console.log of "Connecting to Elasticsearch".
I am using the following version:
"#elastic/elasticsearch": "7.5.1"
As you probably see, I do not have a full grasp of what is happening here... I have also tried to add:
links:
  - elasticsearch
networks:
  - elastic
to the api service, without any luck.
Does anyone know what I am doing wrong here? Thank you in advance :)
EDIT:
I did a "docker network inspect" on the network with *_elastic. There I see the following:
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.22.0.0/16",
"Gateway": "172.22.0.1"
}
]
},
Changing the client to connect to the "Gateway" IP:
const client = new Client({ node: "http://172.22.0.1:9200" });
Then it works! I am still wondering why, as this was just trial and error. Is there any way to obtain this IP without having to inspect the network?
In Docker, localhost (or the corresponding IPv4 address 127.0.0.1, or the corresponding IPv6 address ::1) generally means "this container"; you can't use that host name to access services running in another container.
In a Compose-based setup, the names of the services: blocks (api, elasticsearch, kibana) are usable as host names. The caveat is that all of the services have to be on the same Docker-internal network. Compose creates one for you and attaches containers to it by default. (In your example api is on the default network but the other two containers are on a separate elastic network.) Networking in Compose in the Docker documentation has some more details.
So to make this work, you need to tell your client code to honor the environment variable you're already setting that points at Elasticsearch:
const esHost = process.env.ES_HOST || 'localhost';
const esUrl = 'http://' + esHost + ':9200';
const client = new Client({ node: esUrl });
In your docker-compose.yml file, delete all of the networks: blocks to use the provided default network. (While you're there, links: is unnecessary, Compose provides a reasonable container_name: for you, and api can reasonably depends_on: [elasticsearch].)
Since we've provided a fallback for $ES_HOST, this also works in a host development environment: it will default to localhost, which outside of Docker means "the current host", so it will reach the published port of the Elasticsearch container.
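Put together, a trimmed-down version of the compose file from the question might look like the sketch below; everything here is taken from the question's file minus the parts mentioned above:

version: "3"
services:
  api:
    build: .
    ports:
      - "3000:3000"
      - "9229:9229"
    environment:
      - NODE_ENV=local
      - ES_HOST=elasticsearch
      - PORT=3000
    depends_on:
      - elasticsearch
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
    environment:
      - node.name=elasticsearch
      - cluster.name=es-docker-cluster
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
  kibana:
    image: docker.elastic.co/kibana/kibana:7.5.1
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
volumes:
  data01:
    driver: local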
An error occurred (AccessDenied) when calling the CreateStack operation: User: arn:aws:iam::812520856627:user/dimitris is not authorized to perform: cloudformation:CreateStack on resource: arn:aws:cloudformation:us-west-2:812520856627:stack/blog-stage/*
I tried to run this command:
aws cloudformation create-stack --stack-name blog-stage --template-body file://$PWD/stack.yml --profile demo --region us-west-2
Resources:
  AppNode:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t2.micro
      ImageId: ami-0c579621aaac8bade
      KeyName: jimapos
      SecurityGroups:
        - !Ref AppNodeSG
  AppNodeSG:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: for the app nodes that allow ssh, http and docker ports
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: '80'
          ToPort: '80'
          CidrIp: 0.0.0.0/0
        - IpProtocol: tcp
          FromPort: '22'
          ToPort: '22'
          CidrIp: 0.0.0.0/0
You are trying to create the stack as the user dimitris, who is not authorized to perform cloudformation:CreateStack.
To assign the permission, go to https://console.aws.amazon.com/iam/home#/home -> Users -> select the user -> Add permissions.
Try adding this policy to the user dimitris.
Example: a sample policy that grants create and view stack actions:
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": [
      "cloudformation:CreateStack",
      "cloudformation:DescribeStacks",
      "cloudformation:DescribeStackEvents",
      "cloudformation:DescribeStackResources",
      "cloudformation:GetTemplate",
      "cloudformation:ValidateTemplate"
    ],
    "Resource": "*"
  }]
}
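If you prefer the CLI to the console, the same inline policy could presumably be attached like this (policy.json is a hypothetical file containing the JSON above, and CfnCreateStack is an arbitrary policy name):

aws iam put-user-policy \
  --user-name dimitris \
  --policy-name CfnCreateStack \
  --policy-document file://policy.json

Note this has to be run with credentials that are themselves allowed to modify IAM.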
You can check this link to customize or restrict the policy to specific resources.
You can either create a custom policy or attach an existing managed one.
I need to get the DNS name (ILPIP DNS name) in a Node.JS application (an IoT gateway) that is running on a VM in Azure.
Background details:
I need this so the application can inform the web frontend where to open the socket.io connection when the web-based client wishes to communicate with the IoT gateway.
I have been looking in Microsoft's Azure modules for Node.js, but I haven't found anything that gives the ILPIP (assigned DNS name).
You can try to use the RoleEnvironment.getCurrentRoleInstance() function in the Azure SDK for Node.js. Run the following code snippet in a classic VM:
var azure = require('azure');

azure.RoleEnvironment.getCurrentRoleInstance(function (error, instance) {
  if (!error && instance['endpoints']) {
    // You can get information about "endpoint1", such as its address and port, via:
    console.log(instance);
  } else {
    console.log(error);
  }
});
You may get role-instance info similar to the following:
{ id: 'WorkerRole1_IN_0',
  roleName: 'WorkerRole1',
  faultDomain: '0',
  updateDomain: '0',
  endpoints:
   { 'Microsoft.WindowsAzure.Plugins.RemoteAccess.Rdp':
      { name: 'Microsoft.WindowsAzure.Plugins.RemoteAccess.Rdp',
        address: '100.104.92.19',
        port: '3389',
        publicPort: '0',
        protocol: 'tcp',
        roleInstanceId: 'WorkerRole1_IN_0' },
     'Microsoft.WindowsAzure.Plugins.RemoteForwarder.RdpInput':
      { name: 'Microsoft.WindowsAzure.Plugins.RemoteForwarder.RdpInput',
        address: '100.104.92.19',
        port: '20000',
        publicPort: '3389',
        protocol: 'tcp',
        roleInstanceId: 'WorkerRole1_IN_0' } } }
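Building on that output, here is a small sketch for pulling out a single endpoint's address and port; the endpoint key used below is one of the names from the sample output, so substitute the endpoint defined for your own role:

var azure = require('azure');

azure.RoleEnvironment.getCurrentRoleInstance(function (error, instance) {
  if (error || !instance['endpoints']) {
    return console.log(error);
  }
  // Pick one endpoint by its name, then read its address/port fields.
  var ep = instance['endpoints']['Microsoft.WindowsAzure.Plugins.RemoteAccess.Rdp'];
  if (ep) {
    console.log(ep.address + ':' + ep.port + ' (public port ' + ep.publicPort + ')');
  }
});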