I'm working with DynamoDB Local (the amazon/dynamodb-local image); in my docker-compose file I have this entry:
volumes:
  dynamo-db:
    driver: local

services:
  dynamodb-local:
    container_name: local-db
    image: amazon/dynamodb-local
    restart: always
    command: -jar DynamoDBLocal.jar -sharedDb -dbPath /home/dynamodblocal/
    volumes:
      - dynamo-db:/home/dynamodblocal
    ports:
      - '8000:8000'
    env_file:
      - ...
The error I receive during application startup is:
ResourceNotFoundException: Cannot do operations on a non-existent table
2023-02-06T23:13:33.829569752Z at Request.extractError (/srv/node_modules/aws-sdk/lib/protocol/json.js:52:27)
2023-02-06T23:13:33.829573169Z at Request.callListeners (/srv/node_modules/aws-sdk/lib/sequential_executor.js:106:20)
2023-02-06T23:13:33.829576085Z at Request.emit (/srv/node_modules/aws-sdk/lib/sequential_executor.js:78:10)
2023-02-06T23:13:33.829578835Z at Request.emit (/srv/node_modules/aws-sdk/lib/request.js:686:14)
2023-02-06T23:13:33.829581502Z at Request.transition (/srv/node_modules/aws-sdk/lib/request.js:22:10)
2023-02-06T23:13:33.829584252Z at AcceptorStateMachine.runTo (/srv/node_modules/aws-sdk/lib/state_machine.js:14:12)
2023-02-06T23:13:33.829587169Z at /srv/node_modules/aws-sdk/lib/state_machine.js:26:10
2023-02-06T23:13:33.829601794Z at Request.<anonymous> (/srv/node_modules/aws-sdk/lib/request.js:38:9)
2023-02-06T23:13:33.829605460Z at Request.<anonymous> (/srv/node_modules/aws-sdk/lib/request.js:688:12)
2023-02-06T23:13:33.829608252Z at Request.callListeners (/srv/node_modules/aws-sdk/lib/sequential_executor.js:116:18)
2023-02-06T23:13:33.829610960Z message: Cannot do operations on a non-existent table
2023-02-06T23:13:33.829613585Z code: ResourceNotFoundException
2023-02-06T23:13:33.829616169Z requestId: 40aaa8a1-575f-45be-b8b1-e64fb28c9cb4
2023-02-06T23:13:33.829618710Z statusCode: 400
I found articles saying that this can be caused by a missing -sharedDb flag, but I do set it, along with a specific -dbPath value. The app-level config is correct, because compared to other environments, where everything works properly, I change only the URL. Any suggestions or articles explaining this image's behaviour are more than welcome.
If it helps, I'm using Node 16 & Dynamoose.
If you do not use -sharedDb, you get a unique environment for every set of access keys provided. So if you spin up multiple environments with different keys, you will have to create the tables in each of those environments.
Either set -sharedDb or ensure you use the same access keys for all invocations.
Docs
The AWS SDKs for DynamoDB require that your application configuration specify an access key value and an AWS Region value. Unless you're using the -sharedDb or the -inMemory option, DynamoDB uses these values to name the local database file. These values don't have to be valid AWS values to run locally. However, you might find it convenient to use valid values so that you can run your code in the cloud later by changing the endpoint you're using.
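To make the docs' point concrete: without -sharedDb, DynamoDB Local keys the database file off the credentials and region of each request, so two clients with different keys see different (initially empty) databases, and one of them hits "Cannot do operations on a non-existent table". A rough sketch of the naming behaviour (the exact file-name format is an assumption based on the docs; check the contents of your dbPath to confirm):

```javascript
// Sketch of how DynamoDB Local picks its database file: one file per
// accessKeyId/region pair, or a single shared file when -sharedDb is set.
function localDbFileName(accessKeyId, region, sharedDb) {
  return sharedDb ? 'shared-local-instance.db' : `${accessKeyId}_${region}.db`;
}

// Two apps with different keys end up in two different files:
console.log(localDbFileName('appA', 'us-east-1', false)); // appA_us-east-1.db
console.log(localDbFileName('appB', 'us-east-1', false)); // appB_us-east-1.db
console.log(localDbFileName('appA', 'us-east-1', true));  // shared-local-instance.db
```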
I am sorry if this issue has already been resolved, but I could not find any related answers.
I am trying to set up a self-hosted gitlab instance through docker-compose, which I wish to connect to an LDAP server.
(I have connected other applications to the same LDAP server in the past without issues, and also the account I am trying to login to is that of a valid user.)
However, no matter what I've tried I keep receiving this error upon login: Could not authenticate you from Ldapmain because "Invalid filter syntax.".
My current docker-compose file is as follows:
version: '3.7'
services:
  web:
    image: 'gitlab/gitlab-ee:14.8.6-ee.0'
    restart: on-failure
    hostname: 'host.namespace.com'
    container_name: gitlab-ee
    environment:
      GITLAB_OMNIBUS_CONFIG: |
        external_url 'http://host.namespace.com'
        gitlab_rails['ldap_enabled'] = true
        gitlab_rails['ldap_host'] = 'ldap://something.something.com'
        gitlab_rails['ldap_port'] = 389
        gitlab_rails['ldap_base'] = 'ou=people,dc=namespace,dc=com'
        gitlab_rails['ldap_uid'] = 'uid'
    ports:
      - '80:80'
      - '443:443'
      - '22:22'
    volumes:
      - '/srv/gitlab/config:/etc/gitlab'
      - '/srv/gitlab/logs:/var/log/gitlab'
      - '/srv/gitlab/data:/var/opt/gitlab'
As you can see, in my current configuration I did not set ldap_user_filter at all, since it is not listed as required: https://docs.gitlab.com/ee/administration/auth/ldap/#basic-configuration-settings.
However, I have also tried setting gitlab_rails['ldap_user_filter'] = '' or gitlab_rails['ldap_user_filter'] = '(&(objectClass=zimbraAccount)(uid={login}))' without any luck. Setting gitlab_rails['bind_dn'] and other attributes did not help either. I keep receiving the same "Invalid filter syntax." error over and over again.
Could you please point me in the right direction?
Thank you in advance!
FIXED
gitlab_rails['ldap_host'] = 'ldap://something.something.com'
changed to:
gitlab_rails['ldap_host'] = 'something.something.com'
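In other words, ldap_host must be a bare hostname; the scheme is implied by the encryption setting rather than by an ldap:// prefix. For reference, recent GitLab versions express the same settings via the ldap_servers hash in the docs. A hedged sketch (untested; the values are just this question's placeholders):

```ruby
gitlab_rails['ldap_enabled'] = true
gitlab_rails['ldap_servers'] = {
  'main' => {
    'label' => 'LDAP',
    'host' => 'something.something.com',  # bare hostname, no ldap:// scheme
    'port' => 389,
    'uid' => 'uid',
    'encryption' => 'plain',              # the scheme comes from this setting
    'base' => 'ou=people,dc=namespace,dc=com'
  }
}
```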
I have been battling this for a while now.
Scenario
We are using the Cosmos SQL API and Cosmos Graph (Gremlin) in our project. For a long time we have been forced to use Azure resources when developing against the graph databases.
I wish to get rid of this for the local development environment and run the Azure Cosmos DB emulator in docker, using the mcr.microsoft.com/cosmosdb/windows/azure-cosmos-emulator:latest image. The reason for docker is that we have some weird policies forced upon us by corporate IT, which made it impossible to get the Azure Cosmos DB Emulator to work properly even for the SQL API. The SQL API works fine with the docker image, and I'm running docker compose.
After some investigation I found that the image does not actually look for environment variables like AZURE_COSMOS_EMULATOR_GREMLIN_ENDPOINT:true. This was after inspecting C:/CosmosDB.Emulator/Start.ps1 in the container. So I figured I could fix this by simply replacing Start.ps1 in the container, stopping it, and committing it as a new image.
Which worked! Then I created a script replicating the manual steps so my team would not have to repeat the procedure. And now it's not working: the SQL API and Azure Cosmos DB Explorer work perfectly, but I cannot connect with Gremlin over port 8901, which worked earlier, at least once.
I have confirmed that the Start-CosmosDbEmulator command executed during container start has the -EnableGremlin flag set. But no luck; I just get:
Unable to connect to the remote server ---> System.Net.Http.HttpRequestException: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. ---> System.Net.Sockets.SocketException: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond..
Has anyone got these two to work? I can't figure out what the issue is.
This is what I tried / have done:
Certificates are imported and the https://localhost:8081/_explorer/index.html is trusted.
The port setup in the docker-compose file is standard.
I have tried starting the container with docker run; no luck.
I am 100% sure that the start-up command runs with -EnableGremlin set, based on the transcript log file and on inspecting the Start.ps1 file in the container.
The computemachine.config in the container has "IsGremlinEndpointEnabled":true for the ContainerAdministrator user.
I connect over localhost:8901 with the standard key, and the database and container/collection are created.
Querying with the SQL API works fine.
Note: the single time I did get it to work, the Gremlin API functionality worked fine as well.
Docker Compose
version: '2.4' # Do not upgrade to 3.x yet, unless you plan to use swarm/docker stack: https://github.com/docker/compose/issues/4513

networks:
  default:
    external: false
    ipam:
      driver: default
      config:
        - subnet: "172.16.238.0/24"

services:
  cosmosdb:
    container_name: "azurecosmosemulator-sqlapi"
    hostname: "azurecosmosemulator-sqlapi"
    image: 'customimagename'
    platform: windows
    tty: true
    restart: always
    mem_limit: 3GB
    ports:
      - '8081:8081'
      - '8900:8900'
      - '8901:8901'
      - '8902:8902'
      - '10250:10250'
      - '10251:10251'
      - '10252:10252'
      - '10253:10253'
      - '10254:10254'
      - '10255:10255'
      - '10256:10256'
      - '10350:10350'
    environment:
      - AZURE_COSMOS_EMULATOR_PARTITION_COUNT:10
      - AZURE_COSMOS_EMULATOR_ENABLE_DATA_PERSISTENCE:true
      - AZURE_COSMOS_EMULATOR_GREMLIN_ENDPOINT:true
    networks:
      default:
        ipv4_address: 172.16.238.246
    volumes:
      - '${hostDirectoryCosmosDBSQLAPI}:C:\CosmosDB.Emulator\bind-mount'
I haven't removed the environment variables that do not seem to have any purpose. I have, however, tried removing them several times, in case they had any effect via code inside the container that I cannot, or have not yet, investigated.
The transcript log
**********************
Windows PowerShell transcript start
Start time: 20220204162955
Username: User Manager\ContainerAdministrator
RunAs User: User Manager\ContainerAdministrator
Machine: AZURECOSMOSEMUL (Microsoft Windows NT 10.0.14393.0)
Host Application: powershell.exe -NoExit -NoLogo -Command C:\CosmosDB.Emulator\Start.ps1
Process ID: 1868
PSVersion: 5.1.14393.4583
PSEdition: Desktop
PSCompatibleVersions: 1.0, 2.0, 3.0, 4.0, 5.0, 5.1.14393.4583
BuildVersion: 10.0.14393.4583
CLRVersion: 4.0.30319.42000
WSManStackVersion: 3.0
PSRemotingProtocolVersion: 2.3
SerializationVersion: 1.1.0.1
**********************
Transcript started, output file is C:\CosmosDB.Emulator\bind-mount\Diagnostics\Transcript.log
INFO: Stop-CosmosDbEmulator
INFO: Start-CosmosDbEmulator -AllowNetworkAccess -NoFirewall -NoUI -Key C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw== -Consistency Session -Timeout 300 -EnableGremlin
Directory: C:\CosmosDB.Emulator\bind-mount
Mode LastWriteTime Length Name
---- ------------- ------ ----
-a---- 2/4/2022 4:30 PM 804 CosmosDbEmulatorCert.cer
Key : GremlinEndpoint
Value : {http://azurecosmosemulator-sqlapi:8901/, http://172.16.238.246:8901/}
Name : GremlinEndpoint
Key : TableEndpoint
Value : {https://azurecosmosemulator-sqlapi:8902/, https://172.16.238.246:8902/}
Name : TableEndpoint
Key : Key
Value : C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==
Name : Key
Key : Version
Value : 2.14.5.0
Name : Version
Key : IPAddress
Value : 172.16.238.246
Name : IPAddress
Key : Emulator
Value : CosmosDB.Emulator
Name : Emulator
Key : CassandraEndpoint
Value : {tcp://azurecosmosemulator-sqlapi:10350/, tcp://172.16.238.246:10350/}
Name : CassandraEndpoint
Key : MongoDBEndpoint
Value : {mongodb://azurecosmosemulator-sqlapi:10255/, mongodb://172.16.238.246:10255/}
Name : MongoDBEndpoint
Key : Endpoint
Value : {https://azurecosmosemulator-sqlapi:8081/, https://172.16.238.246:8081/}
Name : Endpoint
I hope there is someone out there who can point me in the right direction here.
Maybe the solution is to skip this entirely and live with the fact that I have to create collections in Azure and work against those for the graph databases.
Grateful for any advice.
I'm currently working on a project where I have a virtual machine on Microsoft Azure, and I'm trying to make multiple Docker containers accessible through different routes with the help of a Traefik reverse proxy. Besides the reverse proxy, the first service I need is RabbitMQ, and I should be able to access its user interface on a /rmq route. Right now, I have the following docker-compose file to build both services:
version: "3.5"
services:
  rabbitmq:
    image: rabbitmq:3-alpine
    expose:
      - 5672
      - 15672
    volumes:
      - ./rabbit/enabled_plugins:/etc/rabbitmq/enabled_plugins
    labels:
      - traefik.enable=true
      - traefik.http.routers.rabbitmq.rule=Host(`HOST.com`) && PathPrefix(`/rmq`)
      # needed when you do not have a route "/rmq" inside your container (according to https://stackoverflow.com/questions/59054551/how-to-map-specific-port-inside-docker-container-when-using-traefik)
      - traefik.http.routers.rabbitmq.middlewares=strip-docs
      - traefik.http.middlewares.strip-docs.stripprefix.prefixes=/rmq
      - traefik.port=15672
    networks:
      - proxynet
  traefik:
    image: traefik:2.1
    command: --api=true # Enables the web UI
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./traefik/traefik.toml:/etc/traefik/traefik.toml:ro
    ports:
      - 80:80
      - 443:443
    labels:
      traefik.enable: true
      traefik.http.routers.traefik.rule: "Host(`HOST.com`)"
      traefik.http.routers.traefik.service: "api@internal"
    networks:
      - proxynet
And this is the content of my traefik.toml file:
logLevel = "DEBUG"
debug = true
[api]
dashboard = true
insecure = false
debug = true
[providers.docker]
endpoint = "unix:///var/run/docker.sock"
watch = true
[entryPoints]
[entryPoints.web]
address = ":80"
[entryPoints.web-secure]
address = ":443"
[log]
level = "DEBUG"
format = "json"
The enabled_plugins file specifies which RabbitMQ plugins should be activated. Here, I have the rabbitmq_management plugin (among others), which I believe is needed to access the RabbitMQ UI. I even checked the logs of the RabbitMQ container, and rabbitmq_management was apparently started properly:
rabbitmq_1 | 2021-01-30 15:50:30.538 [info] <0.730.0> Server startup complete; 7 plugins started.
rabbitmq_1 | * rabbitmq_stomp
rabbitmq_1 | * rabbitmq_federation_management
rabbitmq_1 | * rabbitmq_mqtt
rabbitmq_1 | * rabbitmq_federation
rabbitmq_1 | * rabbitmq_management
rabbitmq_1 | * rabbitmq_web_dispatch
rabbitmq_1 | * rabbitmq_management_agent
rabbitmq_1 | completed with 7 plugins.
rabbitmq_1 | 2021-01-30 15:50:30.539 [info] <0.730.0> Resetting node maintenance status
With these configurations, running docker-compose up and trying to access HOST.com/rmq gives me a 502 (Bad Gateway) error in my browser console. Initially, this was where I was stuck. However, after searching for help online, I found a different way to specify the traefik port in the RabbitMQ container labels (traefik.http.services.rabbitmq.loadbalancer.server.port=15672). With this modification, I no longer get the Bad Gateway error, but I get a lot of ERR_ABORTED 404 (Not Found) errors in my browser console (the list below does not contain all the errors):
rmq:7 GET http://HOST.com/js/ejs-1.0.min.js net::ERR_ABORTED 404 (Not Found)
rmq:18 GET http://HOST.com/js/charts.js net::ERR_ABORTED 404 (Not Found)
rmq:19 GET http://HOST.com/js/singular/singular.js net::ERR_ABORTED 404 (Not Found)
Refused to apply style from 'http://HOST.com/css/main.css' because its MIME type ('text/plain') is not a supported stylesheet MIME type, and strict MIME checking is enabled.
rmq:27 Uncaught ReferenceError: sync_get is not defined at rmq:27
I don't have much experience with this kind of project, and I don't know whether I'm doing something wrong or whether something is missing in these configurations or in the configuration of the virtual machine itself. Do you know what I should do to be able to access the RabbitMQ UI at HOST.com/rmq?
If I get this running properly, I think I would also be able to configure Traefik to only allow access to its UI via a route such as HOST.com/dashboard, instead of only at the bare URL.
Thanks in advance!
Solved it. I don't know why, but when I added the configuration traefik.http.services.rabbitmq.loadbalancer.server.port=15672, I had also swapped the order of the lines traefik.http.routers.rabbitmq.middlewares=strip-docs and traefik.http.middlewares.strip-docs.stripprefix.prefixes=/rmq, making the prefix definition appear before the middleware assignment. I changed that back, and now I can access the RabbitMQ UI at HOST.com/rmq. So my final docker-compose was this:
version: "3.5"
services:
  rabbitmq:
    image: rabbitmq:3-alpine
    expose:
      - 5672
      - 15672
    volumes:
      - ./rabbit/enabled_plugins:/etc/rabbitmq/enabled_plugins
    labels:
      - traefik.enable=true
      - traefik.http.routers.rabbitmq.rule=Host(`HOST.com`) && PathPrefix(`/rmq`)
      # needed when you do not have a route "/rmq" inside your container (according to https://stackoverflow.com/questions/59054551/how-to-map-specific-port-inside-docker-container-when-using-traefik)
      - traefik.http.routers.rabbitmq.middlewares=strip-docs
      - traefik.http.middlewares.strip-docs.stripprefix.prefixes=/rmq
      - traefik.http.services.rabbitmq.loadbalancer.server.port=15672
    networks:
      - proxynet
  traefik:
    image: traefik:2.1
    command: --api=true # Enables the web UI
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./traefik/traefik.toml:/etc/traefik/traefik.toml:ro
    ports:
      - 80:80
      - 443:443
    labels:
      traefik.enable: true
      traefik.http.routers.traefik.rule: "Host(`HOST.com`)"
      traefik.http.routers.traefik.service: "api@internal"
    networks:
      - proxynet
I'll mark this question as solved, but if you know why the order of these 2 lines matters, please explain for future reference.
Thanks!
Trace of how I determined an answer to suggest for this question, given that I haven't used the specific tools:
By searching rabbitmq admin url, I found the rabbitmq management docs page, which mentions, near the top, support for a path prefix setting. I searched the page for that and, under the relevant heading, found that you will likely need to set this in your rabbitmq config:
management.path_prefix = /rmq
So, to apply it to your docker config, I looked up the rabbitmq docker image, whose docs explain that configuration files need to be injected via a bind mount (or can be provided via an esoteric Erlang config mechanism which I'd personally not mess with). Therefore, the steps I'd follow from here would be:
look in the existing rabbitmq image to find out what the default config file at /etc/rabbitmq/rabbitmq.conf contains, e.g. by running docker-compose run rabbitmq cat /etc/rabbitmq/rabbitmq.conf, or an appropriate docker cp command if it turns out rabbitmq sets a docker ENTRYPOINT that prevents running shell commands on the image command line
add a volume just like the one you have for enabled_plugins, but one directory upward (mapping rabbit/ to /etc/rabbitmq/), and put the container's default config in rabbit/
add that line to the config file
With any luck that should at least get you closer. I'm curious to hear how it goes!
By the way, while looking at the rabbitmq docker image docs, I discovered that there are special tags for management interface support. You may find that you need to switch to one of those instead of plain 3-alpine for this to work, e.g. rabbitmq:3-management-alpine.
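Putting those steps together, a sketch of what the end state might look like (untested; the management image tag comes from the image docs, and the extra volume line plus the rabbitmq.conf file are the assumptions described above):

```yaml
# docker-compose.yml, rabbitmq service (abbreviated) -- sketch, not tested
  rabbitmq:
    image: rabbitmq:3-management-alpine   # management-enabled tag
    volumes:
      - ./rabbit/enabled_plugins:/etc/rabbitmq/enabled_plugins
      - ./rabbit/rabbitmq.conf:/etc/rabbitmq/rabbitmq.conf
# where rabbit/rabbitmq.conf is the container's default config plus:
#   management.path_prefix = /rmq
```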
I'm trying to setup AWS CLI login via OneLogin - but it doesn't seem to work.
I created the onelogin.sdk.properties file as follows:
onelogin.sdk.client_id=xxxxxxxxxxxxxxxxxxxxxxxxxxx
onelogin.sdk.client_secret=xxxxxxxxxxxxxxxxxxxxxxxxxxx
onelogin.sdk.region=us
onelogin.sdk.ip=
I'm running the below command from the same directory where the above properties file resides:
java -jar onelogin-aws-cli.jar --appid 123456 --subdomain mycompany --username myusername --region us-east-1 --profile onelogin
This prompts me for the password and after I enter it, I get the following error:
Exception in thread "main" OAuthProblemException{error='bad request', description='bad request', uri='null', state='400', scope='null', redirectUri='null', responseStatus=400, parameters={}}
at org.apache.oltu.oauth2.common.exception.OAuthProblemException.error(OAuthProblemException.java:59)
at org.apache.oltu.oauth2.client.validator.OAuthClientValidator.validateErrorResponse(OAuthClientValidator.java:63)
at org.apache.oltu.oauth2.client.validator.OAuthClientValidator.validate(OAuthClientValidator.java:48)
at org.apache.oltu.oauth2.client.response.OAuthClientResponse.validate(OAuthClientResponse.java:127)
at com.onelogin.sdk.conn.OneloginOAuthJSONResourceResponse.init(OneloginOAuthJSONResourceResponse.java:31)
at org.apache.oltu.oauth2.client.response.OAuthClientResponse.init(OAuthClientResponse.java:101)
at org.apache.oltu.oauth2.client.response.OAuthClientResponse.init(OAuthClientResponse.java:120)
at org.apache.oltu.oauth2.client.response.OAuthClientResponseFactory.createCustomResponse(OAuthClientResponseFactory.java:82)
at com.onelogin.sdk.conn.OneloginURLConnectionClient.execute(OneloginURLConnectionClient.java:75)
at org.apache.oltu.oauth2.client.OAuthClient.resource(OAuthClient.java:81)
at com.onelogin.sdk.conn.Client.getSAMLAssertion(Client.java:2238)
at com.onelogin.aws.assume.role.cli.OneloginAWSCLI.getSamlResponse(OneloginAWSCLI.java:437)
at com.onelogin.aws.assume.role.cli.OneloginAWSCLI.main(OneloginAWSCLI.java:256)
I know for a fact that my onelogin.sdk.properties is correct, because intentionally setting an incorrect client_id/client_secret, or changing the region to eu, makes the application fail with a different error (error='Unauthorized').
What might be the problem?
Is there a debug switch I can use to help me understand what's going on?
Thanks,
Yosi
The problem was that I was using the username in an incorrect format (it needed the domain suffix, e.g. myusername@mydomain.com).
I'm trying to use DynamoDB Local. It works perfectly fine using the AWS CLI, but when I try to use it with the AWS SDK in Node, I keep getting a "Method Not Allowed" error. The same code works perfectly fine with the real DynamoDB, so I know it's not an issue with the code.
This is how I've set up the SDK. My understanding is that the region is ignored, so it shouldn't matter.
const { DocumentClient } = require('aws-sdk/clients/dynamodb');

new DocumentClient({
  region: 'local',                   // presumably ignored with a custom endpoint
  endpoint: 'http://localhost:8000', // DynamoDB Local's default port
  sslEnabled: false,
})
Node just gives me:
UnknownError: Method Not Allowed
at Request.extractError (/.../node_modules/aws-sdk/lib/protocol/json.js:51:27)
at Request.callListeners (/.../node_modules/aws-sdk/lib/sequential_executor.js:106:20)
at Request.emit (/.../node_modules/aws-sdk/lib/sequential_executor.js:78:10)
at Request.emit (/.../node_modules/aws-sdk/lib/request.js:683:14)
at Request.transition (/.../node_modules/aws-sdk/lib/request.js:22:10)
at AcceptorStateMachine.runTo (/.../node_modules/aws-sdk/lib/state_machine.js:14:12)
at /.../node_modules/aws-sdk/lib/state_machine.js:26:10
at Request.<anonymous> (/.../node_modules/aws-sdk/lib/request.js:38:9)
at Request.<anonymous> (/.../node_modules/aws-sdk/lib/request.js:685:12)
at Request.callListeners (/.../node_modules/aws-sdk/lib/sequential_executor.js:116:18)
I'm running DynamoDB Local on macOS 10.14.6 with Java:
java version "1.8.0_60"
Java(TM) SE Runtime Environment (build 1.8.0_60-b27)
Java HotSpot(TM) 64-Bit Server VM (build 25.60-b23, mixed mode)
But I also tried Amazon's Docker image, and I still get the same error.
The port was in use by another application. And Java didn't bother to mention it when starting the DynamoDB Local server...
But that doesn't explain why the AWS CLI was working. Now I'm confused...
Put any valid region, like "us-east-1", instead of "local".