I build my v1.1.1 nodes from source and cannot find the logs when I run the executable. Logs in a user-friendly format are supposed to be generated in the $ROOT directory, according to the configuration settings here: https://docs.chain.link/chainlink-nodes/v1/configuration#logging. But whether $ROOT is set or not, no logs appear.
(They should show up in ~/.chainlink, which is auto-generated and contains the secret file when chainlink runs.)
I'm not sure why there are no logs.
I also tried setting $ROOT to another directory (~/chainlinlops) and restarting the server; still no logs, though the secret file was generated there.
In your .env, set LOG_TO_DISK=true.
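A minimal sketch of the relevant .env entries; LOG_TO_DISK comes from the answer above, and the ROOT line is an optional assumption if you want an explicit root directory:

# .env read by the Chainlink node on startup
LOG_TO_DISK=true
# Optional (assumption): set an explicit root directory for logs and secrets
ROOT=/home/me/.chainlink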
I am currently using Node.js deployed on Elastic Beanstalk on AWS. I have a function that writes a PDF and then emails it off, but it says the file path can't be found. I've verified that the project folder seems to be /var/app/current/, but changing the file path reference doesn't remove the error. Any idea how to go about fixing this?
/var/app/current/ does not exist initially. It's only created at the very last stage of your deployment.
The deployment happens in the /var/app/staging/ folder, and at the very end, once everything finishes, /var/app/staging/ is moved to /var/app/current/.
Thus, I would not recommend using absolute paths in your project or config files. It's better to use relative paths, or container_commands for config scripts:
The specified commands run as the root user, and are processed in alphabetical order by name. Container commands are run from the staging directory, where your source code is extracted prior to being deployed to the application server.
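For example, a minimal .ebextensions config (the file name and directory created here are hypothetical) whose command runs from the staging directory during deployment:

# .ebextensions/01-setup.config  (hypothetical file name)
container_commands:
  01_make_pdf_dir:
    command: "mkdir -p output/pdf"

In the application code itself, resolving the PDF path relative to the running script (for example with path.join(__dirname, ...) in Node.js) avoids depending on /var/app/current/ at all.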
I had the Composer site extension installed until now on an Azure PHP web app.
I need a custom deployment that can also run Grunt tasks, so I created .deployment and deploy.sh files in the project root. But that deploy.sh is not being picked up.
.deployment file contents:
[config]
command = bash deploy.sh
Looking at the deployment logs, I find this:
2017-05-04T06:21:03.9301086Z,Updating submodules.,8bc3029f-d77b-4c1e-860f-a3d439d7a354,0
2017-05-04T06:21:03.9926050Z,Preparing deployment for commit id 'e2b45fb52b'.,61c286b1-5c00-4c11-ae14-54e0711d6857,0
2017-05-04T06:21:04.2632947Z,Running custom deployment command...,e71c397e-bc63-4357-abc4-acd49bc2041d,0
2017-05-04T06:21:04.3101663Z,Running deployment command...,24db1c4f-8a51-463b-8c4a-ee040bc5dfd8,0
2017-05-04T06:21:04.3101663Z,Command: D:\home\SiteExtensions\ComposerExtension\Hooks\deploy.cmd,,0
2017-05-04T06:21:04.4039215Z,The system cannot find the path specified.,,1
2017-05-04T06:21:04.4195462Z,The system cannot find the path specified.\r\nD:\Program Files (x86)\SiteExtensions\Kudu\62.60430.2807\bin\Scripts\starter.cmd D:\home\SiteExtensions\ComposerExtension\Hooks\deploy.cmd,,2
It seems a trigger for the Composer site extension still remains somewhere and is being invoked during deployment.
How can I completely remove the Composer site extension and use my custom deployment script deploy.sh? Thanks in advance.
Found the problem. After uninstalling the Composer site extension, this environment variable was still present: APPSETTING_COMMAND = D:\home\SiteExtensions\ComposerExtension\Hooks\deploy.cmd. I deleted the environment variable using the Kudu console, and the deployment then succeeded.
After removing the Composer extension, APPSETTING_COMMAND remains as an environment variable.
Use the Kudu PowerShell command Remove-Item Env:\APPSETTING_COMMAND to remove the variable online.
Alternatively, restarting the App Service via the overview tab will refresh the environment variables, though this could be a little invasive.
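A quick way to check for the stale variable and clear it from the Kudu console (PowerShell), building on the command above:

# Verify the stale variable is still present
Get-Item Env:\APPSETTING_COMMAND
# Remove it for the running process
Remove-Item Env:\APPSETTING_COMMAND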
I'm trying to run a simple executable from an Azure Web Role.
The executable is stored in the Web Role's local storage.
The executable produces a log.txt file once it has been run.
This is the method I am using to run the executable:
public void RunExecutable(string path)
{
Process.Start(path);
}
Where path is localStorage.RootPath + "Application.exe"
The problem I am facing is that when I open the local storage folder, the executable is there, but there is no log.txt file.
I have tested the executable; if I run it manually, it works and produces the log.txt file.
Can anyone see the problem?
Try setting an explicit WorkingDirectory for the process... I wonder if log.txt is being created, just not where you expect. (Or perhaps the app is trying to create log.txt but failing because of the permissions on the directory it's trying to create it in.)
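A minimal sketch of that suggestion, assuming log.txt is written relative to the process's working directory:

public void RunExecutable(string path)
{
    var startInfo = new ProcessStartInfo(path)
    {
        // Make relative paths such as log.txt resolve next to the executable.
        WorkingDirectory = Path.GetDirectoryName(path),
        // WorkingDirectory only applies to the child process when
        // UseShellExecute is false.
        UseShellExecute = false
    };
    Process.Start(startInfo);
}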
If you remote desktop into the instance, can't you find the file created in the E:\approot\ folder? As Steve said, setting a WorkingDirectory for the process will fix the issue.
You can use Environment.GetEnvironmentVariable("RoleRoot") to construct the path to your application root.
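A short sketch of that, assuming the role's code is deployed under approot as mentioned above:

// RoleRoot typically resolves to a drive letter such as "E:".
string roleRoot = Environment.GetEnvironmentVariable("RoleRoot");
string appRoot = roleRoot + @"\approot\";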
I'm having trouble creating a workspace and downloading the files from a Team Foundation Server using the Team Explorer Everywhere command-line client (TEE-CLC-10.0.0). I've gotten as far as creating the workspace:
$ ../tfs/TEE-CLC-10.0.0/tf -login:secretUsername,secretPassword -server:http://secretHost:8080 workspace -new KOLOBI
Workspace 'KOLOBI2' created.
Then I want to download files from the server to my workspace:
$ ../tfs/TEE-CLC-10.0.0/tf -login:secretUsername,secretPassword -server:http://secretHost:8080 get -recursive -all -force .
An argument error occurred: Items must reside in a workspace that has been previously used on this computer.
I guess I'm missing one step, which is to add local directories to the workspace or something like that, but I can't figure out how to do it so that I can download the files.
You'll need to create working folder mappings between your local folder and the server items you want it to correspond to.
For example:
tf workfold -map -login:secretUsername,secretPassword -server:http://secretHost:8080 -workspace:KOLOBI '$/TeamProject/Project' '/home/me/project'
Then from the /home/me/project directory (or whatever you pick), you can just execute tf get .
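Putting it together with the flags from the question (and assuming the tf script is on your PATH), the sequence would look something like:

# Map the server path to a local folder
tf -login:secretUsername,secretPassword -server:http://secretHost:8080 workfold -map -workspace:KOLOBI '$/TeamProject/Project' '/home/me/project'
# Then fetch from inside the mapped folder
cd /home/me/project
tf -login:secretUsername,secretPassword -server:http://secretHost:8080 get -recursive -all .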
We have test and prod environments for a publishing portal.
What I want is to keep both environments in sync.
Currently, we make changes on the test server, publish the content, and check the modified pages; if everything is OK, we then make the same changes on the prod server.
Is there a shorter way, or a command, to update the prod server with the latest changes made on the test server, without doing the same things again and again?
Thanks.
On SharePoint 2010 it's pretty simple: you can run a PowerShell command to first export the content you need from the test environment and then import that content on the prod server:
# on the Test environment
Export-SPWeb webrooturl -Path "fullpathfile.cmp" -IncludeVersions LastMajor -ItemUrl Pages -Force
This command creates a .cmp file that contains the latest major versions of the items in the Pages library.
Then you have to copy that .cmp file to the target (prod) server and run:
# on the Prod environment
Import-SPWeb webrooturl -Path "fullpathfile.cmp"
I have only used it for the Pages library, and it works fine, but I think that by changing the -ItemUrl parameter it should be possible to export the contents of other libraries as well.
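For example, a hypothetical export of a different library (the library name here is an assumption, not from the original post):

# Hypothetical: export a documents library instead of Pages
Export-SPWeb webrooturl -Path "fullpathfile.cmp" -IncludeVersions LastMajor -ItemUrl "Shared Documents" -Force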