I got this error even though I only have around 10 files in my directory.
I am using GitHub Actions to deploy my React app.
This is my current directory:
├── app.yaml
└── build
    ├── index.html
    ├── favicon.ico
    └── ...etc (around 10 files)
Here, I execute 'gcloud app deploy', but it returns an error saying "too many files".
Although I don't have 10000 files in the current directory, I do have more than 10000 files one level up in the hierarchy. Does app.yaml see up there? I mean:
.
├── node_modules ← 10000 files
└── current_directory
    ├── app.yaml
    └── build
        ├── index.html
        ├── favicon.ico
        └── ...etc (around 10 files)
This is the error:
ERROR: (gcloud.app.deploy) INVALID_ARGUMENT: This deployment has too many files. New versions are limited to 10000 files for this app.
- '@type': type.googleapis.com/google.rpc.BadRequest
  fieldViolations:
  - description: This deployment has too many files. New versions are limited to 10000
      files for this app.
    field: version.deployment.files[...]
This is my app.yaml:
runtime: nodejs12
api_version: 1
threadsafe: true
handlers:
- url: /
  static_files: build/index.html
  upload: build/index.html
- url: /
  static_dir: build
I'm not sure why it is trying to upload 10000 files.
Thanks
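One likely direction, sketched under the assumption that the deploy is somehow picking up the parent tree: gcloud app deploy honors a .gcloudignore file placed next to app.yaml (same syntax as .gitignore), so patterns along these lines should keep node_modules out of the upload:

# .gcloudignore (a minimal sketch; the exact patterns are assumptions)
.gcloudignore
.git
.gitignore
node_modules/
**/node_modules/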
I am trying to make a shared pipeline repository from which I run a Dangerfile. The cool thing here should be that each repo could simply include the shared-pipeline repo's Dangerfile and .gitlab-ci.yml file and then get all the cool stuff from it. My problem: from repo1 I am trying to use include to inherit the Dangerfile from the shared-pipeline repository. However, it is only possible to include YAML files. I can include .gitlab-ci.yml, but how do I include external files such as the Dangerfile?
├── shared-pipeline
│   ├── Dangerfile
│   └── .gitlab-ci.yml
├── repo1
│   └── .gitlab-ci.yml
├── repo2
│   └── .gitlab-ci.yml
└── repo3
    └── .gitlab-ci.yml
This is what I have so far:
include:
  - project: 'myproject/shared-pipeline'
    file: '.gitlab-ci.yml'
This is what I was trying:
include:
  - project: 'myproject/shared-pipeline'
    file: '.gitlab-ci.yml'
  - project: 'myproject/shared-pipeline'
    file: 'Dangerfile' # Syntax error here as this is not a yml file
You cannot include files other than YAML files. GitLab merges your included YAML files into one big file and executes it on the runners.
include only fetches the content of the file and merges it. If you do want to use a file from your shared pipeline, you have multiple options:
- put it in a shared Docker image that your job uses; this way everyone gets the same Dangerfile with the Docker image executed in the job.
- fetch the file via the API in a pre-step and make it available to your job with the artifacts directive (sketched below).
- you could also put the file in a separate Docker image and use the artifacts directive in an earlier step to hand it over.
- ... and most likely many more.
... but to be clear again: you cannot use the include directive to include any other kind of file into your pipeline.
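A minimal sketch of that fetch-via-API option, assuming a project ID of 1234 for shared-pipeline, a master default branch, and an $API_TOKEN CI/CD variable; the job names and the danger job itself are illustrative:

fetch-dangerfile:
  stage: .pre
  script:
    # download the shared Dangerfile through GitLab's raw-file API
    - 'curl --header "PRIVATE-TOKEN: $API_TOKEN" "$CI_API_V4_URL/projects/1234/repository/files/Dangerfile/raw?ref=master" --output Dangerfile'
  artifacts:
    paths:
      - Dangerfile

danger:
  stage: test
  script:
    # Danger finds the Dangerfile restored from the .pre artifact
    - bundle exec danger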
Just as an example, my git repository would look like so:
└── git_repo_example
    ├── modules
    │   ├── module_1
    │   │   ├── main.tf
    │   │   └── var.tf
    │   └── module_2
    │       ├── main.tf
    │       └── var.tf
    └── projects
        ├── project_1
        │   ├── main.tf
        │   ├── terraform.tfstate
        │   └── vars.tf
        └── project_2
            ├── main.tf
            ├── terraform.tfstate
            └── vars.tf
7 directories, 10 files
My team wants to make our Terraform state files GitLab-managed, so that the state files are locked when multiple people want to run or modify a single project at the same time.
All of the examples I can find for managing Terraform via GitLab seem to assume one tfstate file and one project, but my repository has multiple. Breaking this up into multiple repositories would be difficult to manage since they all reference the same modules, and placing everything into one folder seems to go against Terraform best practices.
How would one best go about managing one repository with multiple Terraform projects/state files via GitLab?
I have a similar-ish directory structure that works well with GitLab managed state per project directory.
I'd recommend replacing local TF development with GitLab CI/CD pipelines, using the provided GitLab container image as it supports the GitLab backend by default.
I use environments (representing each project directory) to manage the pipeline (CI/CD variables are managed per environment). The TF state file is named according to the TF_ADDRESS variable:
image: registry.gitlab.com/gitlab-org/terraform-images/stable:latest

variables:
  TF_ROOT: ${CI_PROJECT_DIR}/${ENVIRONMENT}
  TF_ADDRESS: ${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/terraform/state/${ENVIRONMENT}
Here a build job is defined to create the TF plan; it runs only when the development directory is modified and merged to the default branch. An identical job for the production directory is also defined. Each environment references a unique TF state file managed by GitLab:
.plan:
  stage: build
  environment:
    name: $ENVIRONMENT
  script:
    - gitlab-terraform plan
    - gitlab-terraform plan-json
  cache:
    policy: pull # don't update the cache
  artifacts:
    name: plan
    paths:
      - ${TF_ROOT}/plan.cache
    reports:
      terraform: ${TF_ROOT}/plan.json
Development Build Terraform Plan:
  extends: .plan
  variables:
    ENVIRONMENT: development
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
      changes:
        - "*"
        - "development/**/*"

Production Build Terraform Plan:
  extends: .plan
  variables:
    ENVIRONMENT: production
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
      changes:
        - "*"
        - "production/**/*"
I have exactly the same kind of Terraform scripts repository, with a "run" script at the top which, for each application, does a

cd modules/modulex
terraform init -backend-config=backend.tf -reconfigure
With backend.tf (here for Azure):

container_name      = "tfbackend"
key                 = "aStorageAccount/aFileShare/path/to/modulex.tfstate"
resource_group_name = "xxx"
That does create a modules/modulex/.terraform/terraform.tfstate.
However, this file is local and is neither versioned nor "locked".
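To get that state GitLab-managed and locked instead, one option is Terraform's http backend pointed at the GitLab state API; a minimal sketch, assuming a GitLab project ID of 1234, a state named modulex, and a $GITLAB_ACCESS_TOKEN with api scope:

# in the module's configuration
terraform {
  backend "http" {}
}

# then initialize against GitLab's state endpoints (locking included)
terraform init \
  -backend-config="address=https://gitlab.com/api/v4/projects/1234/terraform/state/modulex" \
  -backend-config="lock_address=https://gitlab.com/api/v4/projects/1234/terraform/state/modulex/lock" \
  -backend-config="unlock_address=https://gitlab.com/api/v4/projects/1234/terraform/state/modulex/lock" \
  -backend-config="username=myuser" \
  -backend-config="password=$GITLAB_ACCESS_TOKEN" \
  -backend-config="lock_method=POST" \
  -backend-config="unlock_method=DELETE"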
I have a project structure like:
.
├── app
│   ├── BUILD
│   ├── entry.py
│   ├── forms.py
│   ├── __init__.py
│   ├── jinja_custom_filter.py
│   ├── models.py
│   ├── __pycache__
│   ├── static
│   ├── templates
│   ├── utils.py
│   └── views.py
├── app.db
├── app.yaml
├── BUILD
├── cloudbuild.yaml
├── config.py
├── __init__.py
├── LICENSE
├── manage.py
├── requirements.txt
├── run.py
└── WORKSPACE
4 directories, 20 files
The project uses Flask and SQLAlchemy (see further below).
How does one deploy to App Engine using Google Cloud Build with the non-containerized deployment option, i.e. just an artifact?
Here is my cloudbuild.yaml:
# In this directory, run the following command to build this builder.
# $ gcloud builds submit
steps:
# Fetch source.
#- name: "docker.io/library/python:3.6.8"
#  args: ['pip', 'install', '-t', '/workspace/lib', '-r', '/workspace/requirements.txt']
#- name: 'gcr.io/cloud-builders/git'
#  args: ['clone', '--single-branch', '--branch=develop', 'https://github.com/codecakes/<myproject>_gae.git', '<myproject>_gae']
# Build the Bazel builder and output the version we built with.
#- name: 'gcr.io/cloud-builders/docker'
#  args: ['build', '--tag=gcr.io/$PROJECT_ID/deploy:latest', '.']
# Build the targets.
#- name: 'gcr.io/$PROJECT_ID/bazel'
- name: 'gcr.io/cloud-builders/bazel'
  args: ['build', '--spawn_strategy=standalone', '//app:entry', '--copt', '--force_python=PY3', '--color=yes', '--curses=yes', '--jobs=10', '--loading_phase_threads=HOST_CPUS', '--aspects=@bazel_tools//tools/python:srcs_version.bzl%find_requirements', '--output_groups=pyversioninfo', '--verbose_failures']
  dir: '.'
- name: 'gcr.io/cloud-builders/bazel'
#  args: ['run', '--spawn_strategy=standalone', '//:run', '--copt', '--verbose_failures=true', '--show_timestamps=true', '--python_version=PY3', '--build_python_zip', '--sandbox_debug', '--color=yes', '--curses=yes', '--jobs=10', '--loading_phase_threads=HOST_CPUS', '--aspects=@bazel_tools//tools/python:srcs_version.bzl%find_requirements', '--output_groups=pyversioninfo']
  args: ['build', '--spawn_strategy=standalone', ':run', '--copt', '--aspects=@bazel_tools//tools/python:srcs_version.bzl%find_requirements', '--verbose_failures=true', '--show_timestamps=true', '--python_version=PY3', '--build_python_zip', '--sandbox_debug', '--color=yes', '--curses=yes', '--jobs=10', '--loading_phase_threads=HOST_CPUS']
  dir: '.'

artifacts:
  objects:
    location: 'gs://<myproject>/'
    paths: ['cloudbuild.yaml']

#images: ['gcr.io/$PROJECT_ID/deploy:latest']
I then do
sudo gcloud builds submit --config cloudbuild.yaml ./
which gives me:
ID CREATE_TIME DURATION SOURCE IMAGES STATUS
d4dfd7dd-0f77-49d1-ac4c-4e3a1c84e3ea 2019-05-04T07:52:13+00:00 1M32S gs://<myproject>_cloudbuild/source/1556956326.46-3f4abd9a558440d8ba669b3d55248de6.tgz - SUCCESS
Then I do
sudo gcloud app deploy app.yaml --user-output-enabled --account=<myemail>
which gives me:
Updating service [default] (this may take several minutes)...failed.
ERROR: (gcloud.app.deploy) Error Response: [3] Docker image asia.gcr.io/<myproject>/appengine/default.20190502t044929:latest was either not found, or is not in Docker V2 format. Please visit https://cloud.google.com/container-registry/docs/ui
My WORKSPACE file is:
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
load("@bazel_tools//tools/build_defs/repo:git.bzl", "git_repository")

# Use the git repository
git_repository(
    name = "bazel_for_gcloud_python",
    remote = "https://github.com/weisi/bazel_for_gcloud_python.git",
    branch = "master",
)

# https://github.com/bazelbuild/rules_python
git_repository(
    name = "io_bazel_rules_python",
    remote = "https://github.com/bazelbuild/rules_python.git",
    # NOT VALID! Replace this with a Git commit SHA.
    branch = "master",
)

# Only needed for PIP support:
load("@io_bazel_rules_python//python:pip.bzl", "pip_repositories")

pip_repositories()

load("@io_bazel_rules_python//python:pip.bzl", "pip_import")

# This rule translates the specified requirements.txt into
# @my_deps//:requirements.bzl, which itself exposes a pip_install method.
pip_import(
    name = "my_deps",
    requirements = "//:requirements.txt",
)

# Load the pip_install symbol for my_deps, and create the dependencies'
# repositories.
load("@my_deps//:requirements.bzl", "pip_install")

pip_install()
And app.yaml:
runtime: custom
#vm: true
env: flex
entrypoint: gunicorn -b :$PORT run:app

runtime_config:
  # You can also specify 2 for Python 2.7
  python_version: 3

handlers:
- url: /$
  secure: always
  script: auto
- url: /.*$
  static_dir: app/static
  secure: always

# This sample incurs costs to run on the App Engine flexible environment.
# The settings below are to reduce costs during testing and are not appropriate
# for production use. For more information, see:
# https://cloud.google.com/appengine/docs/flexible/python/configuring-your-app-with-app-yaml
manual_scaling:
  instances: 1
resources:
  cpu: 1
  memory_gb: 0.5
  disk_size_gb: 10
And requirements.txt:
alembic==0.8.4
Babel==2.2.0
blinker==1.4
coverage==4.0.3
decorator==4.0.6
defusedxml==0.4.1
Flask==0.10.1
Flask-Babel==0.9
Flask-Login==0.3.2
Flask-Mail==0.9.1
Flask-Migrate==1.7.0
Flask-OpenID==1.2.5
flask-paginate==0.4.1
Flask-Script==2.0.5
Flask-SQLAlchemy==2.1
Flask-WhooshAlchemy==0.56
Flask-WTF==0.12
flipflop==1.0
future==0.15.2
guess-language==0.2
gunicorn==19.9.0
itsdangerous==0.24
Jinja2==2.8
Mako==1.0.3
MarkupSafe==0.23
pbr==1.8.1
pefile==2016.3.28
PyInstaller==3.2
python-editor==0.5
python3-openid==3.0.9
pytz==2015.7
six==1.10.0
speaklater==1.3
SQLAlchemy==1.0.11
sqlalchemy-migrate==0.10.0
sqlparse==0.1.18
Tempita==0.5.2
Werkzeug==0.11.3
Whoosh==2.7.0
WTForms==2.1
As I understand after talking to Google Cloud developers, you have to use a Dockerfile, because Cloud Build builds using containers when pushing your image to App Engine.
See my own workaround.
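For illustration, a minimal sketch of the kind of Dockerfile a runtime: custom flex app typically uses; the base image, Python version, and layout are assumptions, with the gunicorn entrypoint mirroring the app.yaml above:

FROM gcr.io/google-appengine/python

# create a virtualenv for dependencies (Python 3.6 assumed, matching the build flags)
RUN virtualenv /env -p python3.6
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH

# install dependencies first so they cache, then copy the application code
ADD requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
ADD . /app

# same entrypoint as in app.yaml
CMD gunicorn -b :$PORT run:app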
I'm using Serverless Framework 1.32.0 with AWS Lambdas and Python 3.6. I would like to deploy multiple lambdas separately, since at the moment I can only deploy them one by one for every lambda in my directory, which can get confusing with many lambdas in the near future.
This is my current project structure:
└── cat_service
    │
    ├── hello_cat
    │   ├── hello_cat-functions.yml
    │   └── service.py
    │
    ├── random_cat_fact
    │   ├── random_cat_fact-functions.yml
    │   └── service.py
    │
    └── serverless.yml
serverless.yml
service: cat-service

provider:
  name: aws
  runtime: python3.6
  stage: dev
  stackName: cat-service
  deploymentBucket:
    name: test-cat-bucket
  role: arn:aws:iam::#{AWS::AccountId}:role/lambda-cat-role
  cfnRole: arn:aws:iam::#{AWS::AccountId}:role/cloudformation-cat-role

functions:
  - ${file(lambdas/hello_cat/hello_cat-functions.yml)}

stepFunctions:
  stateMachines:
    catStateMachine:
      definition:
        Comment: "Get cat hello"
        StartAt: hello_cat
        States:
          hello_cat:
            Type: Task
            Resource: "arn:aws:lambda:#{AWS::Region}:#{AWS::AccountId}:function:${self:service}-${opt:stage}-hello_cat"
            End: true

plugins:
  - serverless-step-functions
  - serverless-pseudo-parameters
hello_cat-functions.yml
msc_cat_facts:
  handler: service.handler
  name: ${self:service}-${opt:stage}-msc_cat_facts
The problem is that when I deploy it with serverless deploy --stage dev, it zips the full project and does not separate the lambdas, so the actual Lambda in the AWS console shows up as hello_cat but includes the full project structure instead of only that lambda's files.
Is there a way to deploy separate lambdas in the same project with only one serverless.yml?
Thanks in advance.
You'll need to configure Serverless to package individually.
To do this, add the following to your serverless.yml:
package:
  individually: true
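If you also want each zip to contain only its own lambda's files, this can be combined with per-function include/exclude patterns (the Serverless v1 syntax matching the 1.32.0 version in the question); the exact patterns here are assumptions:

package:
  individually: true
  exclude:
    - "**"  # exclude everything by default

functions:
  msc_cat_facts:
    handler: hello_cat/service.handler
    package:
      include:
        - "hello_cat/**"  # ship only this lambda's directory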
Besides adding the following to serverless.yml, as proposed by @thomasmichaelwallace:

package:
  individually: true
try changing the path of your handler function in hello_cat-functions.yml from handler: service.handler to:
msc_cat_facts:
  handler: hello_cat/service.handler
  name: ${self:service}-${opt:stage}-msc_cat_facts
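With both changes in place, serverless deploy --stage dev should produce one zip per function, each containing only that function's directory, assuming per-function include patterns like those sketched above.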