After deploying Nacos on Ubuntu, the cluster management node list on the Nacos console is empty, and the page reports the error "no such api: GET: /nacos/v1/ns/operator/cluster/states".
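One way to narrow this down is to call the endpoint named in the error directly against the server and see what it returns; if the running server does not expose it, the console and server builds may not match. A minimal diagnostic sketch in Python (the host and port are assumptions; 8848 is the Nacos default, and the path is copied from the error message):

import requests

# Diagnostic sketch: hit the endpoint from the error message directly.
# Host and port are assumptions; adjust them to your deployment.
url = "http://127.0.0.1:8848/nacos/v1/ns/operator/cluster/states"
resp = requests.get(url, timeout=5)
print(resp.status_code)   # a 404 here suggests this server build does not expose the API
print(resp.text[:500])    # first part of the body, for inspection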
My MERN app works fine locally and on Heroku but not on AWS ECS. I've deployed my app on AWS ECS and I get the following React Router error in the console in Google Chrome when I enter my AWS ELB (load balancer) domain name.
react-dom.production.min.js:189 Error
at y (router.ts:5:20)
at A (components.tsx:147:3)
at El (react-dom.production.min.js:167:137)
at Su (react-dom.production.min.js:290:337)
at bs (react-dom.production.min.js:280:389)
at gs (react-dom.production.min.js:280:320)
at vs (react-dom.production.min.js:280:180)
at os (react-dom.production.min.js:271:88)
at as (react-dom.production.min.js:268:429)
at k (scheduler.production.min.js:13:203)
di # react-dom.production.min.js:189
n.callback # react-dom.production.min.js:189
Ao # react-dom.production.min.js:144
wu # react-dom.production.min.js:262
bu # react-dom.production.min.js:260
yu # react-dom.production.min.js:259
(anonymous) # react-dom.production.min.js:283
xs # react-dom.production.min.js:281
as # react-dom.production.min.js:270
k # scheduler.production.min.js:13
O # scheduler.production.min.js:14
router.ts:5 Uncaught Error
at y (router.ts:5:20)
at A (components.tsx:147:3)
at El (react-dom.production.min.js:167:137)
at Su (react-dom.production.min.js:290:337)
at bs (react-dom.production.min.js:280:389)
at gs (react-dom.production.min.js:280:320)
at vs (react-dom.production.min.js:280:180)
at os (react-dom.production.min.js:271:88)
at as (react-dom.production.min.js:268:429)
at k (scheduler.production.min.js:13:203)
y # router.ts:5
A # components.tsx:147
El # react-dom.production.min.js:167
Su # react-dom.production.min.js:290
bs # react-dom.production.min.js:280
gs # react-dom.production.min.js:280
vs # react-dom.production.min.js:280
os # react-dom.production.min.js:271
as # react-dom.production.min.js:268
k # scheduler.production.min.js:13
O # scheduler.production.min.js:14
Also, when I enter my actual domain name (purchased on Porkbun), it says that no website has been registered on this domain, even though my domain name is an alias for my AWS ELB domain name. I'm guessing that's because the ELB domain name can't re-alias back to the actual domain name because of the errors above.
Can anyone properly diagnose my error(s)? What do I need to fix?
I figured out why. For some reason, my EC2 container is using old code in which a Route component in my App.js is not wrapped inside a Routes component.
I have created an AWS EC2 instance and a Python script to automate Lighthouse reports.
I have successfully run the Lighthouse Python code on the AWS server manually with the line below:
ubuntu@ip:~/Python_lighthouse$ python3 SEO_PYTHON_LIGHTHOUSE_lighthouse.py
However, I am struggling to run the same code via a cron job.
*/10 * * * * /usr/bin/python3 /home/ubuntu/Python_lighthouse/SEO_PYTHON_LIGHTHOUSE_lighthouse.py /home/ubuntu/Python_lighthouse/lighthouse.log 2>&1
(The cron job above fires on schedule, but I get the error: /bin/sh: lighthouse: not found.)
Versions and locations:
which npm
/home/ubuntu/.nvm/versions/node/v17.5.0/bin/npm
which node
/home/ubuntu/.nvm/versions/node/v17.5.0/bin/node
which lighthouse
/home/ubuntu/.nvm/versions/node/v17.5.0/bin/lighthouse
I have tried passing the full path to lighthouse in my cron job like this, but it didn't work for me:
*/10 * * * * /home/ubuntu/.nvm/versions/node/v17.5.0/bin/lighthouse && /usr/bin/python3 /home/ubuntu/Python_lighthouse/SEO_PYTHON_LIGHTHOUSE_lighthouse.py /home/ubuntu/Python_lighthouse/lighthouse.log 2>&1
Please correct me if I am doing something wrong. Thanks in advance!
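The "/bin/sh: lighthouse: not found" message usually means the command ran under cron's minimal environment, which does not load nvm, so the lighthouse binary is not on PATH when the Python script shells out to it. A minimal sketch of one way to work around this inside the script, assuming it calls the lighthouse CLI via subprocess (the function name, URL argument, and flags below are illustrative, not taken from the original script):

import os
import subprocess

# Cron does not load nvm, so add its bin directory (from `which lighthouse` above) to PATH.
NVM_BIN = "/home/ubuntu/.nvm/versions/node/v17.5.0/bin"
ENV = dict(os.environ, PATH=NVM_BIN + os.pathsep + os.environ.get("PATH", ""))

def run_lighthouse(url, out_path):
    # Illustrative invocation; the real flags depend on how the report is generated.
    subprocess.run(
        ["lighthouse", url, "--output=json", "--output-path", out_path,
         "--chrome-flags=--headless"],
        env=ENV, check=True)

Alternatively, a PATH=... line at the top of the crontab that includes /home/ubuntu/.nvm/versions/node/v17.5.0/bin achieves the same thing without touching the script.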
I am trying to set up a Celery application under Flask to accept API requests, with separate Celery workers performing the long-running tasks. My problem is that Flask and everything else in my environment use MongoDB, so I do not want to set up a separate SQL database just for the Celery results. I cannot find any good examples of how to properly configure Celery with a MongoDB cluster as the backend.
Here are the settings I have tried to make it accept:
CELERY_RESULT_BACKEND = "mongodb"
CELERY_MONGODB_BACKEND_SETTINGS = {"host": "mongodb://mongodev:27017",
"database": "celery",
"taskmeta_collection": "celery_taskmeta"}
No matter what I do, Celery seems to ignore the config settings and launches without any results backend. Does anyone have a working example using the latest version of Celery? The only other examples I can find are of v3 Celery setups, and those didn't work for me either, since I am using a Mongo replica cluster in production, which seems unsupported in that version.
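For comparison, a minimal standalone sketch of a Celery 4 worker app with a MongoDB result backend, using the lowercase setting names Celery 4 documents (the broker, host, database, and collection values simply mirror the ones above and are assumptions about the environment):

from celery import Celery

# Broker and backend URLs mirror the values from the question (assumptions).
app = Celery('celeryworker',
             broker='amqp://guest:guest@rabbit1:5672',
             backend='mongodb://mongodev:27017')

# Celery 4 lowercase settings.
app.conf.update(
    accept_content=['json'],
    result_serializer='json',
    mongodb_backend_settings={
        'database': 'celery',
        'taskmeta_collection': 'celery_taskmeta',
    },
)

@app.task
def add(x, y):
    return x + y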
[Edit] Adding more information on the rather complicated way I am setting the config to work with the rest of the application.
The config values are first passed as environment variables through a docker-compose file like this:
environment:
- PYTHONPATH=/usr/src/
- APP_SETTINGS=config.DevelopmentConfig
- FLASK_ENV=development
- CELERY_BROKER_URL=amqp://guest:guest@rabbit1:5672
- CELERY_BROKER_DEV=amqp://guest:guest@rabbit1:5672
- CELERY_RESULT_SERIALIZER=json
- CELERY_RESULT_BACKEND=mongodb
- CELERY_MONGODB_BACKEND_SETTINGS={"host":"mongodb://mongodev:27017","database":"celery","taskmeta_collection":"celery_taskmeta"}
Then, inside the config.py file they are loaded:
class DevelopmentConfig(BaseConfig):
    """Development configuration"""
    CELERY_BROKER_URL = os.getenv('CELERY_BROKER_DEV')
    CELERY_RESULT_SERIALIZER = os.getenv('CELERY_RESULT_SERIALIZER')
    CELERY_RESULT_BACKEND = os.getenv('CELERY_RESULT_BACKEND')
    CELERY_MONGODB_BACKEND_SETTINGS = ast.literal_eval(os.getenv('CELERY_MONGODB_BACKEND_SETTINGS'))
Then, when Celery is initiated, the config is loaded:
app = Celery('celeryworker', broker=os.getenv('CELERY_BROKER_URL'),
             include=['celeryworker.tasks'])
print('app initiated')
app.config_from_object(app_settings)
app.conf.update(accept_content=['json'])
print("CELERY_MONGODB_BACKEND_SETTINGS",
      os.getenv('CELERY_MONGODB_BACKEND_SETTINGS'))
print("celery config", app.conf)
When the application comes up, here is what I see with all my troubleshooting prints. I have redacted a lot of the config output just to show that the values pass through config.py into app.config but are being ignored by Celery. You can see the value makes it into the celery.py file, and I am sure Celery does something with it, because before I added the ast.literal_eval in config.py, Celery would throw an error saying that the MongoDB backend settings needed to be a dict rather than a string. Unfortunately, now that it is being passed as a proper dict, Celery ignores it.
app_settings SGSDevOps.config.DevelopmentConfig
app initiated
CELERY_MONGODB_BACKEND_SETTINGS {"host":"mongodb://mongodev:27017","database":"celery","taskmeta_collection":"celery_taskmeta"}
celery config Settings(Settings({'BROKER_URL': 'amqp://guest:guest@rabbit1:5672', 'CELERY_INCLUDE': ['celeryworker.tasks'], 'CELERY_ACCEPT_CONTENT': ['json']}, 'BROKER_URL': 'amqp://guest:guest@rabbit1:5672', 'CELERY_MONGODB_BACKEND_SETTINGS': None, 'CELERY_RESULT_BACKEND': None}))
APP_SETTINGS config.DevelopmentConfig
app.config <Config {'ENV': 'development', 'CELERY_BROKER_URL': 'amqp://guest:guest#rabbit1:5672', 'CELERY_MONGODB_BACKEND_SETTINGS': {'host': 'mongodb://mongodev:27017', 'database': 'celery', 'taskmeta_collection': 'celery_taskmeta'}, 'CELERY_RESULT_BACKEND': 'mongodb', 'CELERY_RESULT_SERIALIZER': 'json', }>
-------------- celery#a5ea76b91f77 v4.2.1 (windowlicker)
---- **** -----
--- * *** * -- Linux-4.9.93-linuxkit-aufs-x86_64-with-debian-9.4 2018-10-29 17:25:27
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app: celeryworker:0x7f28e828f668
- ** ---------- .> transport: amqp://guest:**@rabbit1:5672//
- ** ---------- .> results: mongodb://
- *** --- * --- .> concurrency: 2 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> celery exchange=celery(direct) key=celery
[tasks]
. celeryworker.tasks.longtime_add
I still do not know why the above config is not working, but I found a workaround: update the config after the app loads, using the new config value names:
app = Celery('celeryworker', broker=os.getenv('CELERY_BROKER_URL'),
             backend=os.getenv('CELERY_RESULT_BACKEND'),
             include=['SGSDevOps.celeryworker.tasks'])
print('app initiated')
app.config_from_object(app_settings)
app.conf.update(accept_content=['json'])
app.conf.update(mongodb_backend_settings=ast.literal_eval(os.getenv('CELERY_MONGODB_BACKEND_SETTINGS')))
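For what it's worth, mongodb_backend_settings is the Celery 4 lowercase name for the old CELERY_MONGODB_BACKEND_SETTINGS key, so this workaround effectively hands Celery the setting under the name it expects.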
I'm using awsebcli (pip install awsebcli --upgrade --user)
to build an NLB (Network Load Balancer) on Elastic Beanstalk (https://pypi.python.org/pypi/awsebcli/3.0.3).
But I have a problem now.
$ eb create
Enter Environment Name
(default is ko-dev-dev):
Enter DNS CNAME prefix
(default is ko-dev-dev):
and then
Select a load balancer type
1) classic
2) application
3) network
(default is 1): 3
**ERROR: AlreadyExistsError - Cannot exceed quota for PoliciesPerRole: 10**
I wonder what is causing this problem.
requirements.txt
awscli==1.14.31
awsebcli==3.12.1
blessed==1.14.2
botocore==1.8.35
cement==2.8.2
colorama==0.3.7
docker-py==1.7.2
dockerpty==0.4.1
docopt==0.6.2
docutils==0.14
jmespath==0.9.3
pathspec==0.5.0
pyasn1==0.4.2
python-dateutil==2.6.1
PyYAML==3.12
requests==2.9.1
rsa==3.4.2
s3transfer==0.1.12
semantic-version==2.5.0
six==1.11.0
tabulate==0.7.5
termcolor==1.1.0
wcwidth==0.1.7
websocket-client==0.46.0
There are limits (quotas) on many resources in AWS.
If you want to increase this limit, go to:
Service Quotas --> AWS services --> IAM --> raise a quota increase request
Note: to increase IAM quotas, you need to select the region US East (N. Virginia).
For more details: https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_iam-quotas.html
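Before requesting an increase, it can also help to see which role is actually at the limit; the PoliciesPerRole quota counts managed policies attached to a single IAM role (10 by default), and eb create attaches policies to the environment's roles. A minimal sketch using boto3 (the role name below is a guess at the default Elastic Beanstalk instance-profile role; substitute the role from your own account):

import boto3

iam = boto3.client("iam")

# Hypothetical role name; use whichever role eb create is attaching policies to.
role_name = "aws-elasticbeanstalk-ec2-role"

attached = iam.list_attached_role_policies(RoleName=role_name)["AttachedPolicies"]
inline = iam.list_role_policies(RoleName=role_name)["PolicyNames"]

print(len(attached), "managed policies attached:")
for policy in attached:
    print("  -", policy["PolicyName"])
print(len(inline), "inline policies:", inline)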
I am reviewing the Cloud Foundry project and trying to install it on a server.
I will use CouchDB as a database service.
My main question is: how do I use CouchDB in Cloud Foundry?
I installed a CF instance with: vcap_dev_setup -c devbox_all.yml -D mydomain.com
The devbox.yml contains:
install:
- all
In this install, couchdb_node and couchdb_gateway are present by default.
But it seems to be buggy in general.
When I delete an app, I get this error, for example:
$ vmc delete notes2
Provisioned service [mongodb-d216a] detected, would you like to delete it? [yN]: y
Provisioned service [redis-8fcdc] detected, would you like to delete it? [yN]: y
Deleting application [notes2]: OK
Deleting service [mongodb-d216a]: Error 503: Unexpected response from service gateway
So I tried to install a CF instance with this config.
(A standard single-node with redis, couch and mongo)
conf.yml:
jobs:
  install:
    - nats_server
    - router
    - stager
    - ccdb
    - cloud_controller:
        builtin_services:
          - redis
          - mongodb
          - couchdb
    - health_manager
    - dea
    - uaa
    - uaadb
    - redis_node:
        index: "0"
    - couchdb_node:
        index: "0"
    - mongodb_node:
        index: "0"
    - couchdb_gateway
    - redis_gateway
    - mongodb_gateway
First, this config doesn't work, because 'couchdb' is not a valid keyword in the builtin_services section.
So, what am I doing wrong?
Is CouchDB integration still a work in progress that just isn't finished yet?
Next, I did manage to install the CF instance without the couchdb builtin_services entry, but with a couchdb_node and a couchdb_gateway, and they start.
So I suppose the service is runnable.
But I can't use 'couchdb' in my app's manifest.yml or choose this service to bind to.
(That seems normal, because it isn't installed as a service.)
So it seems to be close to working, but it's not there yet.
I'm asking for ideas and advice on this subject here because I couldn't find anyone talking about it around the web.
Thanks for reading.
Lucas
I decided to try this myself and it appears to work OK. I created a new VCAP instance with vcap_dev_setup and the following configuration:
---
deployment:
  name: "cloudfoundry"
jobs:
  install:
    - nats_server
    - cloud_controller:
        builtin_services:
          - mysql
          - postgresql
          - couchdb
    - stager
    - router
    - health_manager
    - uaa
    - uaadb
    - ccdb
    - dea
    - couchdb_gateway
    - couchdb_node:
        index: "0"
    - postgresql_gateway
    - postgresql_node:
        index: "0"
    - mysql_gateway
    - mysql_node:
        index: "0"
I was able to bind instances of CouchDB to a Node app and read the service info from VCAP_SERVICES, as below:
'{"couchdb-1.2":[{"name":"couchdb-c7eb","label":"couchdb-1.2","plan":"free","tags":["key-value","cache","couchdb-1.2","couchdb"],"credentials":{"hostname":"127.0.0.1","host":"127.0.0.1","port":5984,"username":"7f3c0567-89cc-4240-b249-40d1f4586035","password":"8fef9e88-3df2-46a8-a22c-db02b2917251","name":"dde98c69f-01e9-4e97-b0d6-43bed946da95"}}]}'
I was also able to tunnel the service to a local port and connect to it.
What version of Ubuntu have you used to install VCAP?