I am trying to initialize a catkin workspace: I created a folder, created src/ inside it, and then ran catkin init, but the workspace path is shown somewhere else:
ABC:~ xxl$ cd spoc_lite_ws/
ABC:spoc_lite_ws xxl$ catkin config
Profile: default
Extending: [explicit] /opt/ros/lunar
Workspace: /Users/xlei
Source Space: [missing] /Users/xxl/src
Log Space: [missing] /Users/xxl/logs
Build Space: [missing] /Users/xxl/build
Devel Space: [missing] /Users/xxl/devel
Install Space: [unused] /Users/xxl/install
DESTDIR: [unused] None
Devel Space Layout: linked
Install Space Layout: None
Additional CMake Args: None
Additional Make Args: None
Additional catkin Make Args: None
Internal Make Job Server: True
Cache Job Environments: False
Whitelisted Packages: None
Blacklisted Packages: None
WARNING: Source space /Users/xxl/src does not yet exist.
ABC:spoc_lite_ws xxl$ ls
src
How can I make spoc_lite_ws the active workspace?
Thanks and happy new year!
I ran into the same problem after moving the workspace to another directory. In my case, I solved it by removing the .catkin_tools/ directory; I suspect the stale .catkin_tools/ cache was still being picked up.
Reference: https://catkin-tools.readthedocs.io/en/latest/verbs/catkin_init.html
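Building on that, a rough sketch of the fix (an assumption based on the output above: it assumes the stray .catkin_tools metadata ended up in the home directory, which would explain why catkin config reports the home directory as the workspace):
cd ~                       # or wherever the unwanted workspace root is
rm -rf .catkin_tools       # remove the stale catkin_tools metadata
cd ~/spoc_lite_ws
catkin init                # re-initialize with spoc_lite_ws as the workspace root
catkin config              # Workspace should now point at ~/spoc_lite_ws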
I am trying to deploy a Rails 6 app on Platform.sh.
I get the error below when deploying my project, which uses Rails 6 with Webpacker 5.2.1.
I have done days of research on Google, without success.
I have NVM, Node, and npm installed locally, and everything is fine there: Webpacker compiles correctly. On the remote machine it does not, and this error makes the deployment fail.
W: Webpacker requires Node.js ">=10.17.0" and you are using v6.17.1
W: Please upgrade Node.js https://nodejs.org/en/download/
W: Exiting!
E: Error building project: Step failed with status code 1.
E: Error: Unable to build application, aborting.
The error occurs when I try to compile the assets with this line in the build hook:
RAILS_ENV=production bundle exec rails webpacker:install
Can you help me? I'll share my config below.
.platform.app.yaml
# The name of this app. Must be unique within a project.
name: app
type: 'ruby:2.7'

mounts:
    log:
        source: local
        source_path: log
    tmp:
        source: local
        source_path: tmp

relationships:
    postgresdatabase: 'dbpostgres:postgresql'

# The size of the persistent disk of the application (in MB).
disk: 1024

hooks:
    build: |
        bundle install --without development test
        RAILS_ENV=production bundle exec rails webpacker:install
    deploy: |
        RAILS_ENV=production bundle exec rake db:migrate

web:
    upstream:
        socket_family: "unix"
    commands:
        start: "unicorn -l $SOCKET -E production config.ru"
    locations:
        '/':
            root: "public"
            passthru: true
            expires: 1h
            allow: true
services.yaml
dbpostgres:
    # The type of your service (postgresql), which uses the format
    # 'type:version'. Be sure to consult the PostgreSQL documentation
    # (https://docs.platform.sh/configuration/services/postgresql.html#supported-versions)
    # when choosing a version. If you specify a version number which is not available,
    # the CLI will return an error.
    type: postgresql:13
    # The disk attribute is the size of the persistent disk (in MB) allocated to the service.
    disk: 9216
    configuration:
        extensions:
            - plpgsql
            - pgcrypto
            - uuid-ossp
routes.yaml
# Each route describes how an incoming URL is going to be processed by Platform.sh.
"https://www.{default}/":
    type: upstream
    upstream: "app:http"

"https://{default}/":
    type: redirect
    to: "https://www.{default}/"
I've been wrestling with this all day. I get the error below whenever I try to run my Puppeteer script on DigitalOcean Apps.
/workspace/node_modules/puppeteer/.local-chromium/linux-818858/chrome-linux/chrome: error while loading shared libraries: libnss3.so: cannot open shared object file: No such file or directory
I did some research and found this helpful link below that seems to be the accepted solution.
https://github.com/puppeteer/puppeteer/issues/5661
However, since I'm using DigitalOcean Apps instead of the standard DigitalOcean Droplets, it seems I can't use sudo commands, so I can't actually run those commands from the command line.
Does anyone know of any way around this?
I'm using Node.js, by the way.
Thank you!!!!
I've struggled with this issue myself. I needed a way to run puppeteer in order to run Scully (static site generator for Angular) during my build process.
My solution was to use a Dockerfile to build my DigitalOcean App Platform app. You can ignore the Scully stuff if you don't need it (I'm putting it here for others who struggle with this use case, like me) and have a regular yarn build:prod or similar, and use something like dist/myapp (instead of dist/scully) as output_dir in the app spec.
In your app's root folder, you should have these files:
Dockerfile
FROM node:14-alpine
RUN apk add --no-cache \
    chromium \
    ca-certificates
# This is to prevent the build from getting stuck on "Taking snapshot of full filesystem"
ENV PUPPETEER_SKIP_CHROMIUM_DOWNLOAD true
WORKDIR /usr/src/myapp
COPY package.json yarn.lock ./
RUN yarn
COPY . .
# Needed because we set PUPPETEER_SKIP_CHROMIUM_DOWNLOAD true
ENV SCULLY_PUPPETEER_EXECUTABLE_PATH /usr/bin/chromium-browser
# build_prod_scully is set in package.json to: "ng build --configuration=production && yarn scully --scanRoutes"
RUN yarn build_prod_scully
digitalocean.appspec.yml
domains:
- domain: myapp.com
  type: PRIMARY
- domain: www.myapp.com
  type: ALIAS
name: myapp
region: fra
static_sites:
- catchall_document: index.html
  github:
    branch: master
    deploy_on_push: true
    repo: myname/myapp
  name: myapp
  output_dir: /usr/src/myapp/dist/scully # <--- /usr/src/myapp refers to the Dockerfile's WORKDIR; dist/scully refers to the Scully config file's `outDir`
  routes:
  - path: /
  source_dir: /
  dockerfile_path: Dockerfile # <-- may be unneeded unless you already have an app set up.
Then upload the digitalocean.appspec.yml from your computer to your App Dashboard > Settings > App > AppSpec > edit > Upload File.
For those who use it with Scully, I've also used the following inside scully.config.ts
export const config: ScullyConfig = {
  ...
  puppeteerLaunchOptions: {
    args: ['--no-sandbox', '--disable-setuid-sandbox'],
  },
};
Hope it helps.
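If you want to sanity-check the image before pointing App Platform at it, a quick local run could look like this (the image tag is a placeholder; it assumes Docker is installed locally):
docker build -t myapp-scully .
docker run --rm myapp-scully ls /usr/src/myapp/dist/scully   # the prerendered output should be listed here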
Is there a standard or recommended .gitignore file to use with Node-RED projects? Or are there files or folders that should be ignored? For example, should files like .config.json or flow_cred.json be ignored?
At present I'm using the Node template generated by gitignore.io (see below), but this doesn't contain anything specific to Node-RED.
I found these github projects with .gitignore files:
https://github.com/dceejay/node-red-project-starter/blob/master/.gitignore
https://github.com/natcl/node-red-project-template/blob/master/.gitignore
https://github.com/natcl/electron-node-red/blob/master/.gitignore
But I'm unsure if these are generic to any Node-RED project.
The Node .gitignore file:
# Created by https://www.gitignore.io/api/node
# Edit at https://www.gitignore.io/?templates=node
### Node ###
# Logs
logs
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
lerna-debug.log*
# Diagnostic reports (https://nodejs.org/api/report.html)
report.[0-9]*.[0-9]*.[0-9]*.[0-9]*.json
# Runtime data
pids
*.pid
*.seed
*.pid.lock
# Directory for instrumented libs generated by jscoverage/JSCover
lib-cov
# Coverage directory used by tools like istanbul
coverage
*.lcov
# nyc test coverage
.nyc_output
# Grunt intermediate storage (https://gruntjs.com/creating-plugins#storing-task-files)
.grunt
# Bower dependency directory (https://bower.io/)
bower_components
# node-waf configuration
.lock-wscript
# Compiled binary addons (https://nodejs.org/api/addons.html)
build/Release
# Dependency directories
node_modules/
jspm_packages/
# TypeScript v1 declaration files
typings/
# TypeScript cache
*.tsbuildinfo
# Optional npm cache directory
.npm
# Optional eslint cache
.eslintcache
# Optional REPL history
.node_repl_history
# Output of 'npm pack'
*.tgz
# Yarn Integrity file
.yarn-integrity
# dotenv environment variables file
.env
.env.test
# parcel-bundler cache (https://parceljs.org/)
.cache
# next.js build output
.next
# nuxt.js build output
.nuxt
# react / gatsby
public/
# vuepress build output
.vuepress/dist
# Serverless directories
.serverless/
# FuseBox cache
.fusebox/
# DynamoDB Local files
.dynamodb/
# End of https://www.gitignore.io/api/node
Have you gone through the Node-RED projects feature? Here is a link to the video and here is the documentation on how to set it up.
It creates a folder with your flows, encrypts your credentials, adds a README, and adds a file with your package dependencies. It also lets you set up your git/GitHub account so you can push to your local and remote repositories safely from within Node-RED.
The .gitignore file inside your project folder automatically starts with *.backup.
This is because Node-RED creates a copy of your files with a .backup suffix every time you deploy the nodes.
This way, the only thing you need to back up separately is a file outside your project folder, config_projects.json, which stores the encryption key for your credentials.
I just set it up and I'm really happy with it.
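Putting the two answers together, a minimal project-level .gitignore for a Node-RED project could look like the sketch below (an assumption based on the notes above plus the Node template, not an official Node-RED file):
*.backup
node_modules/
.env
npm-debug.log*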
We have a private npm repository based on Sinopia.
What should I define in package.json so that some packages are installed from Sinopia rather than from the global npm registry?
If I install from the command line I can run: npm install <package_name> --registry http://<server:port>
P.S. I tried to google and looked in the official npm documentation, but found nothing.
One method I know of is via .npmrc.
You can also use a .npmrc file inside the project itself.
Set the configuration like this:
registry = http://10.197.142.28:8081/repository/npm-internal/
init.author.name = Himanshu sharma
init.author.email = rmail@email.com
init.author.url = http://blog.example.com
# an email is required to publish npm packages
email=youremail@email.com
always-auth=true
_auth=YWRtaW46YWRtaW4xMjM=
The _auth value can be generated from username:password like this:
echo -n 'admin:admin123' | openssl base64
Output: YWRtaW46YWRtaW4xMjM=
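If you only want some packages to come from Sinopia, a common variant (the scope name below is a placeholder, and this only works for scoped packages) is to route a single npm scope to the private registry in .npmrc and leave everything else on the public one:
@myscope:registry = http://10.197.142.28:8081/repository/npm-internal/
registry = https://registry.npmjs.org/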
The whole point of Sinopia is to be a private registry and a proxy at the same time. With uplinks you can install all your packages through one registry entry point: Sinopia routes to another registry whenever its local storage cannot resolve a dependency. By default, it points to npmjs.
So, if you set your configuration like this:
# a list of other known repositories we can talk to
uplinks:
  npmjs:
    url: https://registry.npmjs.org/

packages:
  '@*/*':
    # scoped packages
    access: $all
    publish: $authenticated
    proxy: npmjs

  '**':
    # allow all users (including non-authenticated users) to read and
    # publish all packages
    #
    # you can specify usernames/groupnames (depending on your auth plugin)
    # and three keywords: "$all", "$anonymous", "$authenticated"
    access: $all

    # allow all known users to publish packages
    # (anyone can register by default, remember?)
    publish: $authenticated

    # if package is not available locally, proxy requests to 'npmjs' registry
    proxy: npmjs
You should be able to resolve all your dependencies, regardless of where each of them comes from.
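With that configuration in place, the npm client only needs to be pointed at Sinopia once (the host and port below are placeholders; 4873 is Sinopia's default) and every install, local or proxied, goes through it:
npm set registry http://localhost:4873/
npm install <package_name>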
By the way: Sinopia is no longer maintained.
I have a simple Dockerfile
FROM haskell:8
WORKDIR "/root"
CMD ["/bin/bash"]
which I run with the current directory mounted to "/root". In my current folder I have a Haskell project that uses stack (funblog). I configured stack.yaml to use the "lts-7.20" resolver, which aims to install ghc-8.0.1.
Inside the container, after running "stack update", I ran "stack setup" but I am getting "Too many open files in system" during GHC compilation.
This is my stack.yaml
flags: {}
packages:
- '.'
- location:
    git: https://github.com/agrafix/Spock.git
    commit: 2c60a48b2c0be0768071cc1b3c7f14590ffcc7d6
  subdirs:
  - Spock
  - Spock-core
  - reroute
- location:
    git: https://github.com/agrafix/Spock-digestive.git
    commit: 4c85647427e21bbaefbf04c4bc315d4bdfabba0e
extra-deps:
- digestive-bootstrap-0.1.0.1
- blaze-bootstrap-0.1.0.1
- digestive-functors-blaze-0.6.0.6
resolver: lts-7.20
One important note: I don't want to use Docker to deploy the app, just to compile it, i.e. as part of my dev process.
Any ideas?
Should I use another image without ghc pre-installed to use with docker? Which one?
Update
Yes, I could use the GHC built into the container, and it is a good idea, but I wondered whether there is any issue building GHC within Docker.
Update 2
For anyone wishing to reproduce (on macOS, by the way): you can clone the repo https://github.com/carlosayam/funblog and grab commit 9446bc0e52574cc574a9eb5f2733f69e07b874ef
(I will probably move to using the container's GHC)
By default, Docker for macOS limits the number of file descriptors to avoid hitting macOS system-wide limits (the default limit is 900). To increase the limit, run the following commands:
$ cd ~/Library/Containers/com.docker.docker/Data/database/
$ git reset --hard
HEAD is now at 9410b78 last-start-time changed at 1480947038
$ cat com.docker.driver.amd64-linux/slirp/max-connections
900
$ echo 1200 > com.docker.driver.amd64-linux/slirp/max-connections
$ git add com.docker.driver.amd64-linux/slirp/max-connections
$ git commit -s -m 'Update the maximum number of connections'
[master 227a248] Update the maximum number of connections
1 file changed, 1 insertion(+), 1 deletion(-)
Then check the notice messages by:
$ syslog -k Sender Docker
<Notice>: updating connection limit to 1200
To check how many files you got open, run: sysctl kern.num_files.
To check what's your current limit, run: sysctl kern.maxfiles.
To increase it system-wide, run: sysctl -w kern.maxfiles=20480.
Source: Containers become unresponsive due to "too many connections".
See also: Docker: How to increase number of open files limit.
On Linux, you can also try to run Docker with --ulimit, e.g.
docker run --ulimit nofile=5000:5000 <image-tag>
Source: Docker error: too many open files
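Applied to the setup in the question, that could look like the following (the image tag is a placeholder and the limit is an arbitrary example):
docker build -t funblog-dev .
docker run --rm -it --ulimit nofile=10000:10000 -v "$PWD":/root funblog-dev
# then, inside the container:
stack setup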