Issue when deploying Yesod 1.4 to Heroku - haskell

I'm having a problem when deploying a new Yesod application to Heroku. I am following the steps here:
https://github.com/yesodweb/yesod/wiki/Deploying-Yesod-Apps-to-Heroku
But since Yesod 1.4 was released, I am getting the following issue showing up in my logs (Papertrail):
Dec 21 04:42:51 fxtest heroku/web.1: Starting process with command ./dist/build/fxtest/fxtest production -p 37347
Dec 21 04:42:52 fxtest app/web.1: loadAppSettings: Could not parse file as YAML: production
Dec 21 04:42:52 fxtest app/web.1: fxtest: InvalidYaml (Just (YamlException "Yaml file not found: production"))
Dec 21 04:42:53 fxtest heroku/web.1: Process exited with status 1
Dec 21 04:42:53 fxtest heroku/web.1: State changed from starting to crashed
This appears to say that I am missing a YAML file called "production". If I try to work around this error by adding a dummy YAML file, I get a similar error telling me I am missing a file called "-p". This leads me to think that the issue is caused by my Procfile, which contains only one line:
web: ./dist/build/fxtest/fxtest production -p $PORT
Thanks in advance for any help.

If you're using the new scaffolding, it no longer requires the command-line parameter: settings such as the environment and the port are read from environment variables instead. Try dropping production from the command, and probably leave off the -p $PORT as well.
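If it helps, with the stock 1.4 scaffolding (which picks up the port from the PORT environment variable that Heroku sets) the Procfile should reduce to just the executable:
web: ./dist/build/fxtest/fxtest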

OPC Publisher module does not start on my Ubuntu VM as an edge module

The OPC Publisher marketplace image runs successfully as a standalone container (albeit with server connection problems), but I am not able to deploy it as an edge module, especially after changing the container create options.
Background: On my host laptop I was never able to get the module up, so I created an Ubuntu VM. When I deployed the edge module in the VM with the default container create options, the module did show up in the iotedge module list as "running". I then wanted to set the "--op" option to control the publishing rate, so I changed it in the create options using the portal's "Set modules" tab. Since there is no update button, I used the Create button to "recreate" the modules. After this, the OPC Publisher module no longer shows up on the edge VM. I am following the Microsoft tutorial.
Following is the command:
sudo docker run -v /iiotedge:/appdata mcr.microsoft.com/iotedge/opc-publisher:latest \
  --aa \
  --pf=/appdata/publishednodes.json \
  --c="HostName=<iot hub name>.azure-devices.net;DeviceId=iothubowner;SharedAccessKey=<hub primary key>" \
  --dc="HostName=<edge device id/name>.azure-devices.net;DeviceId=<edge device id/name>;SharedAccessKey=<edge primary key>" \
  --op=10000
Container create options:
{
  "Hostname": "opcpublisher",
  "Cmd": [
    "--pf=/appdata/publishednodes.json",
    "--aa",
    "--op=10000"
  ],
  "HostConfig": {
    "Binds": [
      "/iiotedge:/appdata"
    ]
  }
}
I have not specified the connection strings explicitly since the documentation from Microsoft assures that the runtime will pass them automatically.
The relevant iotedge journalctl logs are here.
Oct 06 19:36:05 shreesha-VirtualBox iotedged[9622]: 2021-10-06T14:06:05Z [INFO] - Pulling image mcr.microsoft.com/iotedge/opc-publisher:latest...
Oct 06 19:36:08 shreesha-VirtualBox iotedged[9622]: 2021-10-06T14:06:08Z [INFO] - Successfully pulled image mcr.microsoft.com/iotedge/opc-publisher:latest
Oct 06 19:36:08 shreesha-VirtualBox iotedged[9622]: 2021-10-06T14:06:08Z [INFO] - Creating module OPCPublisher...
Oct 06 19:36:08 shreesha-VirtualBox iotedged[9622]: 2021-10-06T14:06:08Z [INFO] - Starting new listener for module OPCPublisher
Oct 06 19:36:08 shreesha-VirtualBox iotedged[9622]: 2021-10-06T14:06:08Z [ERR!] - Internal server error: Could not create module OPCPublisher
Oct 06 19:36:08 shreesha-VirtualBox iotedged[9622]: caused by: Could not get module OPCPublisher
The logs from iotedge itself are not much use, but here they are anyway.
~$ iotedge logs OPCPublisher
A module runtime error occurred
I have also tried docker container prune just to be sure, but it did not help.
Strangely, in the Azure portal, when I try to restart the module from the troubleshoot page, it throws the error "module not found in the current environment".
Can someone please help me troubleshoot this problem? I will be glad to share more details if required.
I raised a support query in the Azure portal. After I sent support bundles and tried various suggestions (removing the DNS configuration, changing the bind path to a non-sudo location, etc.), the team zeroed in on an edge version mismatch.
After re-reading the documentation, I uninstalled the earlier iotedge package and installed aziot-edge instead, and the problem was solved!
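On Ubuntu the swap looks roughly like this (package names as of IoT Edge 1.2; check the current install docs for your distribution):
sudo apt-get remove iotedge
sudo apt-get update
sudo apt-get install aziot-edge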
The team has raised a GitHub issue for public tracking here:
https://github.com/Azure/Industrial-IoT/issues/1425
@asergaz also pointed in the right direction, but I did not notice since his answer came a bit later.

Submitting first job to pacemaker

I followed this guide:
https://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html/Clusters_from_Scratch/
I stayed with the Active/Passive DRBD file system sharing. I had to reboot my cluster and now I am getting the following error:
Current DC: rbx-1 (version 1.1.16-12.el7_4.4-94ff4df) - partition with quorum
Last updated: Tue Nov 28 17:01:14 2017
Last change: Tue Nov 28 16:40:09 2017 by root via cibadmin on rbx-1
2 nodes configured
5 resources configured
Node rbx-2: UNCLEAN (offline)
Online: [ rbx-1 ]
Full list of resources:
 ClusterIP (ocf::heartbeat:IPaddr2): Started rbx-1
 WebSite (ocf::heartbeat:apache): Stopped
 Master/Slave Set: WebDataClone [WebData]
     WebData (ocf::linbit:drbd): FAILED rbx-1 (blocked)
     Stopped: [ rbx-2 ]
 WebFS (ocf::heartbeat:Filesystem): Stopped
Failed Actions:
* WebData_stop_0 on rbx-1 'invalid parameter' (2): call=20, status=complete, exitreason='none',
    last-rc-change='Tue Nov 28 16:27:58 2017', queued=0ms, exec=3ms
Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled
Any ideas?
Also does anyone have any recommended guides for submitting jobs?
This post is relatively old at this point, but I'll leave this here for others who stumble upon the same issue.
This problem has to do with the DRBD integration script (resource agent) that Pacemaker uses. If it's broken, missing, has incorrect permissions, etc., you can get an error like this. On CentOS 7 that script is located at /usr/lib/ocf/resource.d/linbit/drbd.
Note: this is specific to the guide mentioned by the OP, but it may help you.
Section 7.1 has a big "IMPORTANT" block about replacing the Pacemaker integration script due to a bug. If you use the command it gives you there, you actually replace the script with a 404 error page, which obviously doesn't work and causes the error above. You can fix this by restoring the original script, either by reinstalling DRBD...
yum remove -y kmod-drbd84 drbd84-utils
yum install -y kmod-drbd84 drbd84-utils
...or by finding just the drbd script elsewhere and putting it back at /usr/lib/ocf/resource.d/linbit/drbd. Make sure its permissions are correct and that it is set as executable.
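A quick way to check whether you ended up with the HTML error page instead of the real agent, and to restore sane permissions (path as on CentOS 7):
file /usr/lib/ocf/resource.d/linbit/drbd       # should report a shell script, not HTML
head -n 1 /usr/lib/ocf/resource.d/linbit/drbd  # should be a #!/bin/sh shebang, not <html>
chmod 755 /usr/lib/ocf/resource.d/linbit/drbd  # make sure it is executable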
Hope that helps!

Node app on heroku fails due to read only filesystem

I have a very simple TypeScript app (built with Express.js) running on Node, which I deploy to Heroku. I use both the Node buildpack and zidizei/heroku-buildpack-tsc, the latter of which compiles the .ts files to JavaScript at deploy time.
This worked fine until today, when I tried to run a deploy. The deployment itself succeeds, but the app then crashes at the point where the launch script node ./dist/server.js is run. Logs here:
Sep 06 14:43:33 goodlord-bark app/web.1: $ node ./dist/server.js
Sep 06 14:43:33 goodlord-bark heroku/web.1: Process exited with status 1
Sep 06 14:43:33 goodlord-bark heroku/web.1: State changed from starting to crashed
Sep 06 14:43:33 goodlord-bark app/web.1: error An unexpected error occurred: "EROFS: read-only file system, access '/usr/local/bin'".
Sep 06 14:43:33 goodlord-bark app/web.1: info If you think this is a bug, please open a bug report with the information provided in "/app/yarn-error.log".
At no point does any script or my app attempt to write to /usr/local/bin, so I'm confused as to why this is happening. The error only appeared after a superficial change to the codebase and a new deploy, so it strikes me that something has changed on Heroku's end, but I can't get to the bottom of it.
Rolling back to a previous deploy has kept my app running for the time being, but I'm currently unable to deploy any updates.
There is also no /app/yarn-error.log to examine.
I just had a similar issue on Heroku all of a sudden. It turns out the problem was introduced in yarn 1.0.0, and Heroku uses the latest version by default. Here's a relevant heroku-buildpack-nodejs issue.
I fixed it by pinning an older yarn in the engines section of package.json:
"engines": {
"node": "^7.10.1",
"yarn": "0.27.5"
}

(bdutil) Unable to get hadoop/spark cluster working with a fresh install

I'm setting up a tiny cluster in GCE to play around with, but although the instances are created, some failures prevent it from working. I'm following the steps in https://cloud.google.com/hadoop/downloads
So far I'm using the (as of now) latest versions of gcloud (143.0.0) and bdutil (1.3.5), freshly installed.
./bdutil deploy -e extensions/spark/spark_env.sh
using debian-8 as the image (as bdutil still uses debian-7-backports).
At some point I got
Fri Feb 10 16:19:34 CET 2017: Command failed: wait ${SUBPROC} on line 326.
Fri Feb 10 16:19:34 CET 2017: Exit code of failed command: 1
The full debug output is in https://gist.github.com/jlorper/4299a816fc0b140575ed70fe0da1f272
(project id and bucket names changed)
The instances are created, but Spark is not even installed. Digging a bit, I've managed to run the Spark installation and start Hadoop commands on the master after SSHing in. But it fails badly when starting the spark-shell:
17/02/10 15:53:20 INFO gcs.GoogleHadoopFileSystemBase: GHFS version: 1.4.5-hadoop1
17/02/10 15:53:20 INFO gcsio.FileSystemBackedDirectoryListCache: Creating '/hadoop_gcs_connector_metadata_cache' with createDirectories()...
java.lang.RuntimeException: java.lang.RuntimeException: java.nio.file.AccessDeniedException: /hadoop_gcs_connector_metadata_cache
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:522)
and I am not able to import sparkSQL. From what I've read, everything should be started automatically.
Up to this point I'm a bit lost and don't know what else to do.
Am I missing any step? Are any of the commands faulty? Thanks in advance.
Update: solved
As pointed out in the accepted solution, I cloned the repo and the cluster was created without issues. When trying to start the spark-shell, though, it gave
java.lang.RuntimeException: java.io.IOException: GoogleHadoopFileSystem has been closed or not initialized.
That sounded to me like connectors were not initialized properly, so after running
./bdutil --env_var_files extensions/spark/spark_env.sh,bigquery_env.sh run_command_group install_connectors
it worked as expected.
The latest version of bdutil on https://cloud.google.com/hadoop/downloads is a bit stale, and I'd instead recommend using the version of bdutil at head on GitHub: https://github.com/GoogleCloudPlatform/bdutil.
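Concretely, that means cloning the repo and running the same deploy command from there:
git clone https://github.com/GoogleCloudPlatform/bdutil.git
cd bdutil
./bdutil deploy -e extensions/spark/spark_env.sh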

Getting NPM error: "Resolving deps of app\app.ts failed."

I'm trying to install and utilize this brunch.io skeleton. I keep running into this error:
20 Apr 19:40:21 - info: application started on http://localhost:3333/
20 Apr 19:40:24 - info: compiling
20 Apr 19:40:28 - error: Resolving deps of app\app.ts failed. Could not load module 'app\home' from 'C:\Users\tyler.WORKGROUP\Documents\GitHub\zenith-folio\app'. Possible solution: add 'app' to package.json and `npm install`.
20 Apr 19:40:28 - error: Resolving deps of app\about\index.ts failed. Could not load module 'app\about\about.tpl' from 'C:\Users\tyler.WORKGROUP\Documents\GitHub\zenith-folio\app\about'. Possible solution: add 'app' to package.json and `npm install`.
20 Apr 19:40:28 - info: compiling.
20 Apr 19:40:29 - info: compiled 477 files into 2 files, copied index.html in 8.4 sec
I'm trying my best to understand what's going on here, but I'm not sure. I can see that I need to add "app" to package.json, but I don't know how or which "app" it's specifying. Is it talking about:
the folder called "app"
a file called app.ts
Or is there something else I'm missing?
The message is pretty clear: Could not load module 'app\home'. It means you have an unresolved import in app.ts, which probably looks like import ... from 'app\home';
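For illustration (module names hypothetical; brunch prints Windows paths with backslashes, but import specifiers use forward slashes), the offending line in app.ts would look something like the commented-out import below, and pointing it at a path that actually resolves fixes the error:
// before (unresolved — brunch reports "Could not load module 'app\home'"):
// import home from 'app/home';

// after: a relative import of the real file resolves
import home from './home';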
