I am running my e2e WebdriverIO tests on a GitLab pipeline. Now I am trying to run my e2e tests after the deployment on Azure. The tests run after deployment as expected, except that I get a strange error on Azure.
I have a test case to upload a file. Here is how I get the file:
const filePath = process.env.PWD + '/test/resources/files/logo.jpg'
console.log('file = ' + filePath);
When my tests run on the GitLab pipeline, the file is located as follows:
file = /builds/hw2yvbjx/0/xxx/xxx/xxx/xxx/e2e/test/resources/files/logo.jpg
But when my tests run on the Azure pipeline, the file path contains undefined, as follows:
file = undefined/test/resources/files/logo.jpg
and the full log is as follows:
Error: ENOENT: no such file or directory, open 'C:\azagent\xxx\xxx\xxx\xxx\_xxx\e2e\undefined\test\resources\files\logo.jpg'
The path is correct, except for the extra undefined inserted between e2e and test. Does anyone know why this undefined appears in the path, and how to fix it?
Thanks
process.env.PWD will log as undefined on Windows. The PWD environment variable is a 'Linux thing'.
That's why you're seeing this in your log statement on Azure (which runs on Windows, deduced from the path starting with C:\...) but not in your GitLab job, which runs on a Linux host.
To fix it, use process.cwd(), which is platform agnostic, instead of process.env.PWD.
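For example, a minimal cross-platform sketch (the file layout comes from the question; path.join handles the separators on both the Linux GitLab runner and the Windows Azure agent):

const path = require('path');

// process.cwd() returns the current working directory on every platform,
// unlike process.env.PWD, which is only set by POSIX shells.
const filePath = path.join(process.cwd(), 'test', 'resources', 'files', 'logo.jpg');
console.log('file = ' + filePath);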
I'm trying to start headless Chrome with Puppeteer in Azure Functions on Linux.
What do I do? I have a "Function App", and in it a function that launches Puppeteer.
I build this function remotely this way:
func azure functionapp publish {appname} --build remote
And this is what I get when I try to run a function:
Result: Failure
Exception: Failed to launch the browser process!
/home/site/wwwroot/node_modules/puppeteer/.local-chromium/linux-1011831/chrome-linux/chrome: error while loading shared libraries: libgobject-2.0.so.0: cannot open shared object file: No such file or directory
I've seen this topic already (Puppeteer throws launch exception when deployed on azure functions node on Linux), but they recommend doing a remote build, which I do, and it still doesn't help.
Maybe I'm using the wrong App Service Plan, but I checked and there was nothing related to a special Linux setup there.
The reason was that I was indeed using the wrong App Service Plan; I needed a "Function App" one. When I recreated the function with the right service plan, everything worked just fine.
I was able to deploy the Azure Function without using func azure functionapp publish {appname} --build remote. I did it using Visual Studio Code.
But before that, I installed Puppeteer inside the function folder using
npm install puppeteer
Then I added node_modules to the .funcignore file.
Then I added the following setting to settings.json in the .vscode folder:
"azureFunctions.scmDoBuildDuringDeployment": true
Then deploy the function normally through VS Code.
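For reference, a minimal sketch of such a function. The handler shape assumes the Node.js v3 programming model, and the --no-sandbox flags are a common workaround for Chromium in restricted Linux hosts; neither detail comes from the question itself.

const puppeteer = require('puppeteer');

module.exports = async function (context, req) {
    // Chromium often refuses to start in sandboxed hosting environments
    // without these flags.
    const browser = await puppeteer.launch({
        args: ['--no-sandbox', '--disable-setuid-sandbox']
    });
    try {
        const page = await browser.newPage();
        await page.goto('https://example.com');
        context.res = { body: await page.title() };
    } finally {
        await browser.close();
    }
};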
I am currently using Node.js deployed on Elastic Beanstalk on AWS. I have a function that writes a PDF and then emails it off, but it says the file path can't be found. I've verified that the project folder seems to be /var/app/current/, but changing the file path reference doesn't remove the error. Any idea how to go about fixing this?
/var/app/current/ does not exist initially. It's only created at the very last stage of your deployment.
The deployment happens in the /var/app/staging/ folder, and at the very end, once everything finishes, /var/app/staging/ is moved to /var/app/current/.
Thus, I would not recommend using absolute paths in your project or config files. It's better to use relative paths, or container_commands for config scripts:
The specified commands run as the root user, and are processed in alphabetical order by name. Container commands are run from the staging directory, where your source code is extracted prior to being deployed to the application server.
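For example, resolving the PDF location relative to the module instead of hard-coding /var/app/current/ (a minimal sketch; the output folder and file name are hypothetical):

const path = require('path');

// __dirname is this module's directory wherever the app is deployed,
// so the same code works in /var/app/staging/ and /var/app/current/.
const pdfPath = path.join(__dirname, 'output', 'report.pdf');
console.log('writing PDF to ' + pdfPath);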
Question - Is there a better/right way to get the application's launch path?
Setup -
I have a console application that runs in a Debian Linux Docker image. I am building the application with the --runtime linux-x64 command-line switch and have all the runtime identifiers set appropriately. I was expecting the application to behave the same whether launched via dotnet MyApplication.dll or ./MyApplication, but it does not.
Culprit Code -
I have deployed files in a folder below the application directory that I reference, so I do the following to get what I consider my launch path. I have read various articles saying this is the correct way to get what I want, and it works depending on how I launch it.
// MainModule.FileName is the path of the host executable: the dotnet host
// when launched via `dotnet MyApplication.dll`, the apphost when launched
// via ./MyApplication.
using var processModule = Process.GetCurrentProcess().MainModule;
var basePath = Path.GetDirectoryName(processModule?.FileName);
When launching this using the command dotnet MyApplication.dll, the above code's path is /usr/share/dotnet.
When launching this using the command ./MyApplication, the path is then /app.
I understand why using dotnet would be different, since it is the process that is running my code, but it was still unexpected.
Any help as to what I should use in this environment would be appreciated. Ultimately, I need the path the console application was started from, as determined by the application itself at startup.
Thanks for your help.
This code should work:
public static IConfiguration LoadConfiguration()
{
    // Assembly.Location points at the application's own assembly, so its
    // directory is the app folder no matter how the process was launched
    // (dotnet MyApplication.dll or ./MyApplication).
    var assemblyDirectory = Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location);
    .....
}
I'm running Cypress in one of my release stages and it gives me this output:
Finished processing: D:\a\r1\a\_ClientWeb-Build-CI\ShellArtifact\tests\integration\cypress\videos\onboarding.spec.js.mp4 (0 seconds)
I have 2 questions:
Is the path name relative to the app service? If I have an app service called randomname and run the Cypress stage on that randomname app service, should I be able to find the Cypress output on randomname.scm.azurewebsites.net?
If I go into the SCM debug console and do cd D:\a\, I get:
cd : Cannot find path 'D:\a\' because it does not exist.
So how do I actually access my Cypress test results?
I've also tried archiving the files into a zip file:
In the output of the task step I see:
Creating archive: d:\home\testing\somefile.zip
But when I try to access the D:/home/testing folder on appname.scm.azurewebsites.net, I get:
cd : Cannot find path 'D:\home\testing' because it does not exist.
The path D:\a\r1\a is inside the hosted agent that runs the release pipeline; it is not in your application.
The same goes for the zip file: when you specify d:/home/..., that path is on the agent.
After the release finishes, all the files are deleted, so you need to save the files somewhere else (maybe in Azure?) during the pipeline, for example with the "Azure File Copy" task.
I'm trying to run a simple executable using an Azure Web Role.
The executable is stored in the Web Role's local storage.
The executable produces a log.txt file once it has been run.
This is the method I am using to run the executable:
public void RunExecutable(string path)
{
Process.Start(path);
}
Where path is localStorage.RootPath + "Application.exe"
The problem I am facing is that when I open the local storage folder, the executable is there, but there is no log.txt file.
I have tested the executable; it works when I run it manually and produces the log.txt file.
Can anyone see the problem?
Try setting an explicit WorkingDirectory for the process... I wonder if log.txt is being created, just not where you expect. (Or perhaps the app is trying to create log.txt but failing because of the permissions on the directory it's trying to create it in.)
If you remote desktop into the instance, can't you find the file created in the E:\approot\ folder? As Steve said, setting a WorkingDirectory for the process will fix the issue.
You can use Environment.GetEnvironmentVariable("RoleRoot") to construct the path to your application root.