Pulumi file archive - no such file or directory - node.js

I feel like I'm making a silly mistake here, but Pulumi cannot seem to see the file I'm using to create an asset with. The files are right beside each other in the directory. Here is the file structure:
-- /resources
   -- function.js
   -- lambda.ts
Pretty simple. Here is the file where we try and use the Javascript file as an asset for the Lambda we are creating:
// lambda.ts
import * as pulumi from '@pulumi/pulumi';
import * as aws from '@pulumi/aws';

const file = new pulumi.asset.FileAsset('./function.js');
console.log('file', file);

export const exampleFunction = new aws.lambda.Function('exampleFunction', {
    role: role.arn, // IAM role defined elsewhere (not shown)
    runtime: 'nodejs14.x',
    code: file,
});
I've written a script to run my Pulumi commands. I also run pwd and ls on the resources directory to confirm that the file is there.
#!/bin/sh
# deploy.sh
ls resources
pulumi preview
Output/error:
ls resources:
function.js  lambda.ts

pulumi preview:
Error: failed to register new resource exampleFunction [aws:lambda/function:Function]: 2 UNKNOWN: failed to compute archive hash: couldn't read archive path './function.js': stat ./function.js: no such file or directory
Kinda lost on this one. The file is right there.
Edit:
I've realized that the file path must be relative to where the Pulumi.yaml file is located, so one directory above /resources in this case.
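For example, a minimal sketch under that assumption (Pulumi.yaml sitting one level above /resources, so the path is given from the project root):

// lambda.ts
// The asset path is resolved relative to the directory the Pulumi program
// runs from (here, the project root containing Pulumi.yaml), not relative
// to the file that declares the asset.
const file = new pulumi.asset.FileAsset('./resources/function.js');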

Where is deploy.sh, or more specifically pulumi preview, being run?
Is it being run in the "resources" folder, or someplace else, e.g. in the folder above?
If it's not being run from the resources folder, then that's likely the issue, and you need to provide the path to function.js relative to where pulumi is being run.

You should be using FileAsset and not FileArchive, as FileArchive is for tar.gz and related archive files: https://www.pulumi.com/docs/intro/concepts/assets-archives/
At least I think that's what's going on. If that's true then the error message is not great as it doesn't tell you what the actual problem is.
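For reference, a rough sketch of the distinction between the asset classes (the paths are illustrative and assume pulumi is run from the project root; function.zip is a hypothetical file):

// FileAsset points at a single file on disk.
const asset = new pulumi.asset.FileAsset('./resources/function.js');

// FileArchive points at an existing archive such as a .zip or .tar.gz.
const archive = new pulumi.asset.FileArchive('./function.zip');

// AssetArchive builds an archive out of individual assets, which is a common
// way to package loose files for a Lambda's `code` input.
const bundled = new pulumi.asset.AssetArchive({
    'index.js': new pulumi.asset.FileAsset('./resources/function.js'),
});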

Related

reading .env file from node - env file is not published

I am trying to read a .env file using the "dotenv" package, but process.env.DB_HOST returns undefined after publishing to Cloud Run. When I log all the files in the root directory, I see every file except the .env file. I do have a .env file in the root of my project, so I'm not sure why it isn't getting pushed to Cloud Run (or is it?). I do get a value for process.env.DB_HOST when I test locally.
I used this command to publish to Cloud Run:
gcloud builds submit --tag gcr.io/my-project/test-api:1.0.0 .
If you don't have a .gcloudignore file in your project, the gcloud CLI uses the .gitignore by default.
Create a .gcloudignore and list the files that you don't want to upload when you use a gcloud CLI command. So, don't put the .env in it!
EDIT 1
When you add a .gcloudignore, the gcloud CLI no longer reads the .gitignore file; it uses the .gcloudignore instead.
Therefore, you can define two different sets of rules:
.gitignore lists the files that you don't want to push to the repository. Put the .env file in it so it is NOT committed.
.gcloudignore lists the files that you don't want to send with the gcloud CLI. DON'T put the .env file in it, so that it IS included when you send your code with the gcloud CLI.
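As an illustration (hypothetical file contents, not taken from the question), the two files could look like this:

.gitignore (keeps secrets and local artifacts out of the repository):
.env
node_modules/

.gcloudignore (controls what the gcloud CLI uploads; .env is deliberately NOT listed):
.git/
node_modules/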

Terraform cloud failing when referencing module using relative local path

I have a repository with several separate configs which share some modules, and reference those modules using relative paths that look like ../../modules/rabbitmq (a sketch of such a reference follows the directory listing below). The directories are set up like this:
tf/
  configs/
    thing-1/
    thing-2/
  modules/
    rabbitmq/
    cluster/
    ...
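A module reference inside one of the configs might look like this (a hypothetical sketch; the actual inputs depend on the module):

module "rabbitmq" {
  source = "../../modules/rabbitmq"

  # ...module inputs...
}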
The configs are set up with a remote backend to use TF Cloud for runs and state:
terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "my-org"

    workspaces {
      prefix = "config-1-"
    }
  }
}
Running terraform init works fine. When I try to run terraform plan locally, it gives me an error saying:
Initializing modules...
- rabbitmq in
Error: Unreadable module directory
Unable to evaluate directory symlink: lstat ../../modules: no such file or
directory
...as if the modules directory isn't being uploaded to TF Cloud or something. What gives?
It turns out the problem was (surprise, surprise!) that the modules directory was not being uploaded to TF Cloud. This is because neither the config nor the TF Cloud workspace settings contained any indication that this config folder was part of a larger filesystem. The default is to upload just the directory from which you run terraform (and all of its contents).
To fix this, I had to visit the "Settings > General" page for the given workspace in Terraform Cloud, and change the Terraform Working Directory setting to specify the path of the config, relative to the relevant root directory - in this case: tf/configs/config-1
After that, running terraform plan displays a message indicating which parent directory it will upload in order to convey the entire context relevant to the workspace. 🎉
Updating @mlsy's answer with a screenshot. Using Terraform Cloud with a free account, resolving the module source using the local file system.
terraform version
Terraform v1.1.7
on linux_amd64
Here is what worked for me. I used required_version = ">= 0.11"
and then put all of the .tf files that have provider and module blocks in a subfolder, keeping the version.tf with the required providers at the root level. Somehow I used the same folder path where terraform.exe is present. Then I built the project instead of executing at the main.tf level or running without building. It downloaded all the providers and modules for me. I have yet to run it on GCP.
[Screenshot: folder path on Windows]
[Screenshot: IntelliJ project structure]
Use this: source = "mhmdio/rabbitmq/aws"
I faced this problem when I started. Go to the HashiCorp Terraform site and search for the module/provider block; they give the full source path, and the code snippets are written that way. Once you have the path, run terraform get -update and terraform init -upgrade,
which will download the modules and providers locally.
Note: on Terraform Cloud the modules are in the repo, but you still need to give the path if the repo path is not mapped by default.
I had a similar issue, which I think others might encounter.
In my project the application is hosted inside folder1/folder2. However, when I ran terraform plan inside folder2 there was an issue, because it tried to load every folder from the root of the repository.
% terraform plan
Running plan in the remote backend. Output will stream here. Pressing Ctrl-C
will stop streaming the logs, but will not stop the plan running remotely.
Preparing the remote plan...
The remote workspace is configured to work with configuration at
infrastructure/prod relative to the target repository.
Terraform will upload the contents of the following directory,
excluding files or directories as defined by a .terraformignore file
at /Users/yokulguy/Development/arepository/.terraformignore (if it is present),
in order to capture the filesystem context the remote workspace expects:
/Users/yokulguy/Development/arepository
╷
│ Error: Failed to upload configuration files: Failed to get symbolic link destination for "/Users/yokulguy/Development/arepository/docker/mysql/mysql.sock": lstat /private/var/run/mysqld: no such file or directory
│
│ The configured "remote" backend encountered an unexpected error. Sometimes this is caused by network connection problems, in which case you could retry the command. If the issue persists please open a support
│ ticket to get help resolving the problem.
╵
The solution: sometimes I just need to remove the "bad" folder, which here is docker/mysql, and then rerun terraform plan, and it works.
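An alternative to deleting the folder (a sketch based on the .terraformignore mechanism mentioned in the output above; adjust the path to your repository) is to exclude it from the upload:

# .terraformignore at the root of the uploaded directory (the repository root here)
docker/mysql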

How to generate jhipster application into a different directory?

When I run the following in jhipster-generator's /cli directory:
cd cli
node jhipster.js
This generates the application in the same directory (cli). How do I change this to a different directory, for example exporting all the generated files into a specific directory?
I believe the directory has something to do with this line of code in jhipster.js:
const localCLI = require.resolve(path.join(process.cwd(), 'node_modules', 'generator-jhipster', 'cli', 'cli.js'));
Note: I'm not running the application with the "jhipster" command.

Is that the 'path to war' which I am giving wrong? If yes, how do I do a rollback?

I have been trying to use the command to roll back the last deployment of the website, which was interrupted due to a network failure.
The generic command that I am using while inside the bin directory of the server's SDK (on Linux) is:
./appcfg.sh rollback /path_to_the_war_directory_that_has_appengine-web.xml
Is this the way we do a rollback? If not, please tell me the method.
(I was asked to make a directory named war in the project directory and place the WEB-INF folder in it, with appengine-web.xml inside. It may be wrong.)
I am fully convinced that I am making a mistake while giving the path to my app.
[Screenshot: where my .war file is located]
Now the command that I am using (while inside the bin directory of the server's SDK) is:
./appcfg.sh rollback /home/non-admin/NetbeansProjects/'Personal Site'/web/war
[Screenshot: the path to the war directory]
Where am I wrong? How should I run this command so that I am able to deploy my project once again?
On running the above command I get this message:
Unable to find the webapp directory /home/non-admin/NetbeansProjects/Personal Site/web/war
usage: AppCfg [options] <action> [<app-dir>] [<argument>]
NOTE: I have duplicated the WEB-INF folder. There is still a folder named WEB-INF inside the web directory that contains all the other xml files.
The error tells you that the folder /home/non-admin/NetbeansProjects/Personal Site/web/war does not exist. If you look carefully, the name of the folder is NetBeansProjects (the filesystem on Linux is case-sensitive).
So, you should run instead the command:
./appcfg.sh rollback /home/non-admin/NetBeansProjects/'Personal Site'/web/war
and, just to make sure that the directory exists, run first:
ls /home/non-admin/NetBeansProjects/'Personal Site'/web/war

Run executable from local storage using Azure Web Role

I'm trying to run a simple executable using an Azure Web Role.
The executable is stored in the Web Role's local storage.
The executable produces a log.txt file once it has been run.
This is the method I am using to run the executable:
public void RunExecutable(string path)
{
    Process.Start(path);
}
Where path is localStorage.RootPath + "Application.exe"
The problem I am facing is that when I open the local storage folder, the executable is there; however, there is no log.txt file.
I have tested the executable; it works when I run it manually, and it produces the log.txt file.
Can anyone see the problem?
Try setting an explicit WorkingDirectory for the process... I wonder if log.txt is being created, just not where you expect. (Or perhaps the app is trying to create log.txt but failing because of the permissions on the directory it's trying to create it in.)
If you remote desktop into the instance, can you find the file created in the E:\approot\ folder? As Steve said, setting a WorkingDirectory for the process will fix the issue.
You can use Environment.GetEnvironmentVariable("RoleRoot") to construct the URL to your application root
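Putting the two suggestions together, a rough sketch (assuming localStorage is the LocalResource already obtained via RoleEnvironment.GetLocalResource, and that the executable writes log.txt relative to its working directory):

public void RunExecutable(string path, string workingDirectory)
{
    // Launch the process with an explicit working directory so that
    // relative outputs such as log.txt end up in a known location.
    var startInfo = new ProcessStartInfo
    {
        FileName = path,
        WorkingDirectory = workingDirectory,
        UseShellExecute = false
    };

    using (var process = Process.Start(startInfo))
    {
        process.WaitForExit();
    }
}

// e.g. RunExecutable(localStorage.RootPath + "Application.exe", localStorage.RootPath);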
