Is it expected behavior that the GitLab provider in Terraform adds [DELETE] when the previous commit message is changed in your tf code?
For example, I had a tf file with
resource "gitlab_repository_file" "this" {
  project        = gitlab_project.foo.id
  file_path      = "meow.txt"
  branch         = "main"
  content        = base64encode("hello world")
  author_email   = "meow@catnip.com"
  author_name    = "Meow Meowington"
  commit_message = "feature: add meow file"
}
Then changed it to
commit_message = "[ci skip] terraform templating commit\n\nJob URL: ${local.gitlab_configuration_details.pipeline_job_url}"
After the change, my commit message on GitLab was [DELETE]: feature: add meow file
If this is the expected behavior, is there any way to prevent the provider from adding these annotations?
Because after the change I expected the commit to read in git as "[ci skip] terraform templating commit\n\nJob URL: https:url.com"
Thanks!
After further investigation, it turns out the [DELETE] prefix gets inserted when you are deleting files. If you just make code changes it will not edit your commit message; this only happens when files are deleted.
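For illustration, a minimal sketch of a change that triggers it, assuming the resource above (and assuming file_path forces a replacement, as it appears to in this provider): the delete half of the replacement is the commit that receives the [DELETE] prefix.
resource "gitlab_repository_file" "this" {
  project = gitlab_project.foo.id
  # Hypothetical rename: if file_path forces a new resource, Terraform
  # deletes the old file first, and that delete commit is the one the
  # provider prefixes with [DELETE].
  file_path      = "meow-renamed.txt"
  branch         = "main"
  content        = base64encode("hello world")
  author_email   = "meow@catnip.com"
  author_name    = "Meow Meowington"
  commit_message = "[ci skip] terraform templating commit"
}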
I have been working on Terraform using Azure DevOps; before that I developed the tf files using VS Code and everything worked fine. When I try to move the files from VS Code to Azure DevOps, I get an issue on the archive source file path: it is unable to find the directory. I have searched everywhere but have been unable to resolve this.
The path that was working fine in VS Code was "…/Folder name". I am using the same path in Azure DevOps, as I uploaded the complete folder that I built in VS Code, but it always fails when trying to archive the files because it is unable to find the directory.
My Terraform code in DevOps:
terraform {
  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
      # Root module should specify the maximum provider version
      # The ~> operator is a convenient shorthand for allowing only patch releases within a specific minor release.
      version = "~>2.11"
    }
  }
}

provider "azurerm" {
  features {}
  #skip_provider_registration = true
}

locals {
  location = "uksouth"
}

data "archive_file" "file_function_app" {
  type        = "zip"
  source_dir  = "../BlobToBlobTransferPackage"
  output_path = "blobtoblobtransfer-app.zip"
}

module "windows_consumption" {
  source       = "./modules/fa"
  archive_file = data.archive_file.file_function_app
}

output "windows_consumption_hostname" {
  value = module.windows_consumption.function_app_default_hostname
}
Image of VS Code where everything is working fine:
Image of DevOps where I am getting the Missing Directory error:
Folder structure that is working fine with VS Code:
It was due to the path, which is fixed now.
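For anyone hitting the same error, a sketch of the kind of fix this usually means (the exact relative location of BlobToBlobTransferPackage is an assumption based on the code above): anchor the paths to path.module so they no longer depend on the working directory the DevOps agent runs Terraform from.
data "archive_file" "file_function_app" {
  type = "zip"
  # path.module resolves to the directory containing this .tf file,
  # so the archive no longer depends on where terraform is invoked.
  source_dir  = "${path.module}/../BlobToBlobTransferPackage"
  output_path = "${path.module}/blobtoblobtransfer-app.zip"
}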
With Terraform I am trying to create a directory inside /Repos, with a repository in it.
resource "databricks_directory" "test_directory" {
path = "/Repos/test123"
}
resource "databricks_repo" "test_repo" {
url = "https://somegiturl.com"
path = databricks_directory.test_directory.path
# Other variations tried:
#2 path = "/Repos/test123"
#3 path = "${databricks_directory.test_directory.path}/"
#4 path = "/test123"
branch = "main"
}
The first resource successfully creates the test123 folder inside Repos.
The second resource fails with the following for path options 1, 2 and 3:
Error: Invalid repo path specified
Option 4:
Error: Repos can only be created in the /Repos folder
Apparently I am missing something... How can I successfully place the repository inside the test123 folder?
Okay, so apparently you need to put the repo as a folder inside the directory.
So it should be:
path = "/Repos/test123/MyRepo"
or
path = "${databricks_directory.test_directory.path}/MyRepo"
I am attempting to deploy a Lambda function using Terraform, where my source files are in a different directory adjacent to where I have my Terraform files. I want to have Terraform do the zipping of the source files for me and deploy them into the Lambda. Terraform doesn't seem to want to recognize that my files are there, though.
My directory structure:
project_root/
  deployment/
    terraform/
      my-terraform.tf
  function_source/
    function.py
I want it to package everything in the function_source directory (there is only one file there now, but there may be more later) and drop the zip into the deployment directory.
My Terraform:
data "archive_file" "lambda_zip" {
type = "zip"
output_path = "../function.zip"
source_dir = "../../function_source/"
}
resource "aws_lambda_function" "my_lambda" {
filename = "${data.archive_file.lambda_zip.output_path}"
function_name = "my-function"
role = "${aws_iam_role.lambda_role.arn}"
handler = "function.handler"
runtime = "python3.7"
}
When I run this, though, I get the error message data.archive_file.lambda_zip: data.archive_file.lambda_zip: error archiving directory: could not archive missing directory: ../../function_source/
I have tried using absolute paths without success (which wouldn't be a good solution anyway). I have also tried creating the .zip file manually and hardcoding its path directly in the Lambda declaration, but that only works if I put the .zip file in my terraform directory. It seems Terraform can only see files in its own directory or below, but I'd rather not co-mingle my source files there. Is there a way to do this?
I am using Terraform v0.12.4
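In case it helps anyone with the same layout, a sketch of the usual workaround (assuming the directory structure above): archive_file resolves relative paths against the working directory, so anchoring them to path.module keeps them stable regardless of where terraform is run from.
data "archive_file" "lambda_zip" {
  type = "zip"
  # path.module is the directory containing this .tf file
  # (deployment/terraform), so these paths work no matter which
  # directory terraform is invoked from.
  output_path = "${path.module}/../function.zip"
  source_dir  = "${path.module}/../../function_source"
}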
I have the following code:
import github
token = "my gitHub token"
g = github.Github(token)
new_repo = g.get_user().create_repo("NewMyTestRepo")
print("New repo: ", new_repo)
new_repo.create_file("new_file.txt", "init commit", "file_content ------ ")
I have run this code, and this is the result:
New repo: Repository(full_name="myname/NewMyTestRepo")
Traceback (most recent call last):
...
File "/home/serega/PycharmProjects/GitProj/myvenv/lib/python3.5/site-packages/github/Requester.py", line 180, in __check
raise self.__createException(status, responseHeaders, output)
github.GithubException.UnknownObjectException: 404 {'message': 'Not Found', 'documentation_url': 'https://developer.github.com/v3'}
I think there may be a problem with the scope of my token; it has the repo scope. Nevertheless, I have managed to create a repo, so it seems it should be allowed to commit a new file to that repo.
About scopes, I saw this link: https://developer.github.com/v3/oauth/#scopes
And it states:
repo
Grants read/write access to code, commit statuses, repository
invitations, collaborators, and deployment statuses for public and
private repositories and organizations.
I would really appreciate it if somebody could clarify the required token scope, and what the problem could be.
repo scope is enough to create files in a repository. It would seem from this question that the problem is that your file path must have a leading slash:
new_repo.create_file("/new_file.txt", "init commit", "file_content ------ ")
For some reason the following gitolite.conf does not add any repository to projects.list.
When I set 'R = gitweb' for each repository manually, they get added to projects.list.
[....]
repo aaa
repo bbb

repo @all
    RW+ = @admins
    R   = gitweb
[...]
Any hints for me? I'd really like to allow gitweb access to all repositories and then remove permissions for individual repositories via '- = gitweb' ...
I don't actually need gitweb rules or projects.list to be complete in my gitweb setup:
I only make sure I have a gitweb.conf.pl which:
- will be called by gitweb (through the gitweb_config.perl file, called if gitweb detects it exists)
- will call gitolite to see if access to a repo can be granted or should be denied.
I just ran into a similar problem, but the resolution was different:
In gitolite3, it seems that if you simply set a gitweb.* property, then your repository is gitweb-enabled:
repo foobar
    desc     = "Foobar repository"
    category = "foobar"
    RW+      = myself
Or if you prefer:
repo foobar
    config gitweb.description = "Foobar repository"
    config gitweb.category    = "foobar"
    RW+ = myself
I don't know if it works with an @all, like:
repo @all
    category = "uncategorized"
But since a description or (valid) category is not a bad thing to have, I'd say it works for me.
On the other hand, I also tried making an @almost-all group with all my repositories except gitolite-admin, though I don't know whether it works because of the gitweb.description/category config.