Laravel cloned repository docker composer unable to create vendor - linux

I recently got a new work PC with Ubuntu 22.04 on it. I pulled my repo down from GitHub and decided to give running everything through Docker containers a try.
I found a script in the Laravel docs for running composer install through a container:
docker run --rm \
-u "$(id -u):$(id -g)" \
-v $(pwd):/var/www/html \
-w /var/www/html \
laravelsail/php81-composer:latest \
composer install --ignore-platform-reqs
Instantly I get an error saying:
In Filesystem.php line 254:
/var/www/html/vendor does not exist and could not be created.
I've read through the documentation but can't seem to find anything relevant. I've been googling for hours and keep finding suggestions about chown etc., but that just makes things even worse.
I tried installing a brand new application through
curl -s "https://laravel.build/example-app" | bash
This gives me Sail installed, and sail up works, but when I try to run composer install or npm install I get permission issues again.
I'm about to lose my mind here, so I hope someone will be able to help me out.

I had this exact problem today, and I was able to determine that Docker Desktop was the cause (mainly thanks to this GitHub issue). I resolved it by switching my Docker context back to Docker Engine:
$ docker context use default
If you are not using Docker Desktop, then I do not know what the solution might be. Setting the folder permissions to 777 did work for me, but like you I was not satisfied with that as an answer.
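For reference, you can list the available contexts first to see which one is currently active before switching (the active context is marked with an asterisk; the names vary depending on how Docker was installed):
docker context ls
docker context use default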

Related

GitHub Actions cannot run the config.sh file on Ubuntu 20.04

I am trying to set up a GitHub Actions runner on Ubuntu. I made a folder and installed the runner using my root account. This is what the permissions look like.
When I tried to run
sudo ./config.sh --url https://github.com/user/api --token supersecret
It gives me the error
Must not run with sudo
The solution most people suggest is to export RUNNER_ALLOW_RUNASROOT="1" and then run the command, but this widely accepted solution is not working for me for some reason.
Others say to create a non-root user and try again. I tried that too, and it ends up with more errors.
How do I fix this?
I got run.sh working by passing the variable through env:
sudo env RUNNER_ALLOW_RUNASROOT="1" ./run.sh
For config.sh, just keep the same pattern and put env at the beginning.
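Applying the same pattern to the config command from the question (same placeholder URL and token as above):
sudo env RUNNER_ALLOW_RUNASROOT="1" ./config.sh --url https://github.com/user/api --token supersecret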

Laradock - add custom npm package

This is kind of an abnormal thing to do, but it works as a temporary solution.
I have Laradock installed on a system, along with a Laravel app.
Everything I use from Laradock is started with the command below:
docker-compose up -d nginx mysql php-worker workspace redis
I need a Node package (https://www.npmjs.com/package/tiktok-scraper) installed globally in my Docker setup, so I can get results by executing PHP code like the line below:
exec('tiktok-scraper user username -n 3 -t json');
This needs to be available at both the php-fpm and php-worker level, as I need it in jobs and for endpoints that should invoke the scrape.
I know that I'm doing it wrong, but I have tried installing it within the workspace container using
docker-compose exec workspace bash
npm i -g tiktok-scraper
and after this it's available in my workspace (I can run tiktok-scraper --help, for instance, and it shows me the different options).
But this doesn't solve the issue, as I still get nothing from exec('tiktok-scraper user username -n 3 -t json'); in my Laravel app.
I'm not that familiar with Docker and I'm not sure in which Dockerfile I should put something like
RUN npm i -g tiktok-scraper
Any help will be appreciated. Thanks.
To execute the npm package from inside php-worker, you need to install it in the php-worker container. Installing it in the workspace container has no effect, because each service runs in its own container: PHP's exec() only sees binaries installed in the container where that PHP process runs, so the package has to be installed in php-worker (and in php-fpm, for the endpoints).
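A rough sketch of what that could look like, assuming a Laradock version whose php-worker image is Alpine-based (check your laradock/php-worker/Dockerfile first, since base images differ between Laradock versions):
# addition to laradock/php-worker/Dockerfile: install Node and the scraper globally
RUN apk add --no-cache nodejs npm && \
    npm install -g tiktok-scraper
Then rebuild and restart the container so the change takes effect:
docker-compose build php-worker
docker-compose up -d php-worker
An equivalent RUN line would go into the php-fpm Dockerfile as well if the HTTP endpoints need to call the scraper, using that image's own package manager if it is not Alpine-based.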

Creating a custom NodeJS Docker image on RHEL7

I am building some base Docker images for my organization, to be used by application teams when they deploy their applications in OpenShift. One of the images I have to make is a NodeJS image (we want our images to be internal rather than sourced from DockerHub). I am building on RedHat's RHEL7 Universal Base Image (UBI). However, I am having trouble configuring NodeJS to work in the container. Here is my Dockerfile:
FROM myimage_rhel7_base:1.0
USER root
RUN INSTALL_PKGS="rh-nodejs10 rh-nodejs10-npm rh-nodejs10-nodejs-nodemon nss_wrapper" && \
yum install -y --setopt=tsflags=nodocs $INSTALL_PKGS && \
rpm -V $INSTALL_PKGS && \
yum clean all
USER myuser
However, when I run the image there are no node or npm commands available unless I run scl enable rh-nodejs10 bash. This does not work in the Dockerfile, as it creates a subshell that will not be usable by a user accessing the container.
I have tried installing from source, but I have run into a different issue of needing to upgrade the gcc/g++ versions despite them not being available in my configured repos from my org. I also figure that if I can get NodeJS to work from the package manager it will help get security patches and such should the package be updated.
My question is, what are the recommended steps to create an image that can be used to build applications running on NodeJS?
Possibly this is a case where the best code is code you don't write at all. Take a look at https://github.com/sclorg/s2i-nodejs-container
It is a project that creates an image that has nodejs installed. This might be a perfect solution out of the box, or it could also serve as a great example of what you're trying to build.
Also, their readme attempts to describe how they get around the scl enable command.
Normally, SCL requires manual operation to enable the collection you want to use. This is burdensome and can be prone to error. The OpenShift S2I approach is to set Bash environment variables that serve to automatically enable the desired collection:
BASH_ENV: enables the collection for all non-interactive Bash sessions
ENV: enables the collection for all invocations of /bin/sh
PROMPT_COMMAND: enables the collection in interactive shell
Two examples:
* If you specify BASH_ENV, then all your #!/bin/bash scripts do not need to call scl enable.
* If you specify PROMPT_COMMAND, then on execution of the podman exec ... /bin/bash command, the collection will be automatically enabled.
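For the Dockerfile in the question, that approach would look roughly like the sketch below; it assumes the standard SCL layout, where the rh-nodejs10 collection ships an enable script at /opt/rh/rh-nodejs10/enable:
FROM myimage_rhel7_base:1.0
USER root
RUN INSTALL_PKGS="rh-nodejs10 rh-nodejs10-npm rh-nodejs10-nodejs-nodemon nss_wrapper" && \
    yum install -y --setopt=tsflags=nodocs $INSTALL_PKGS && \
    rpm -V $INSTALL_PKGS && \
    yum clean all
# Enable the rh-nodejs10 collection for non-interactive bash, /bin/sh and
# interactive shells, so node/npm are on PATH without a manual `scl enable`
ENV BASH_ENV=/opt/rh/rh-nodejs10/enable \
    ENV=/opt/rh/rh-nodejs10/enable \
    PROMPT_COMMAND=". /opt/rh/rh-nodejs10/enable"
USER myuser
With those variables set, scripts and shells inside the container pick up the collection automatically, per the readme quoted above.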
I decided in the end to install node using the binaries rather than our rpm server. Here is the implementation
FROM myimage_rhel7_base:1.0
USER root
# Get node distribution from nexus and install it
RUN wget -P /tmp http://myrepo.example.com/repository/node/node-v10.16.3-linux-x64.tar.xz && \
tar -C /usr/local --strip-components 1 -xf /tmp/node-v10.16.3-linux-x64.tar.xz && \
rm /tmp/node-v10.16.3-linux-x64.tar.xz
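To sanity-check the resulting image, something like the following should show node and npm on the default PATH for any user (the tag name is just an example):
docker build -t my-rhel7-node:test .
docker run --rm my-rhel7-node:test node --version
docker run --rm my-rhel7-node:test npm --version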

startFabric.sh fails with docker compose command not find

I am following the "Writing Your First Application" tutorial to build a sample Hyperledger Fabric application. I am using Ubuntu 16.04 and I have installed the prerequisites as well as the binaries and Docker images. When I move into fabric-samples/fabcar, after npm install, I run:
./startFabric.sh
I get the following error:
docker-compose -f docker-compose.yml down.
./start.sh: line 13: docker-compose: command not found.
I looked into ./startFabric.sh with nano. Line 13 is as follows:
starttime=$(date +%s)
LANGUAGE=${1:-"golang"}
It may be irrelevant, but I also have issues running ./byfn.sh -m up, as I have posted in a separate question about byfn. I am not sure if these two are related, but obviously I can neither start Fabric nor build a network.
I appreciate any help to solve the issue.
Thank you for your attention.
The missing command is docker-compose, so you should install Docker Compose. If you have already installed it, check that the folder containing the docker-compose binary is referenced in your PATH environment variable.
https://docs.docker.com/install/
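To check, and to install the standalone Compose v1 binary if it is missing (the version number below is only an example; take the current one from the Compose releases page):
which docker-compose || echo "docker-compose is not on PATH"
# example install of the standalone docker-compose binary
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
docker-compose --version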

Only some locally built Docker images fail to work on remote server (error: "No command specified")

I have a perplexing Docker problem. I am running Docker on my Mint laptop and on a Ubuntu VPS. I have been able to build images in the past locally and send them to the server and have them run there. However, for clarity, the ones that work were probably built when I was running Ubuntu locally (more on that later).
I have an example based on Alpine:
FROM alpine:3.5
# Do a system update
RUN apk update
ENTRYPOINT ["sleep", "3"]
I build like so, and send to the remote:
docker build -t alpine-sleep .
docker save alpine-sleep | gzip > alpine-sleep.tgz
rsync --progress alpine-sleep.tgz myserver.example.com:/path/to/images/
I then unpack/import on the remote, and run, thus:
docker import /path/to/images/alpine-sleep.tgz alpine-sleep
docker run -it alpine-sleep
I get this console reply:
docker: Error response from daemon: No command specified.
See 'docker run --help'.
However, if I copy the Dockerfile to the remote, then do this:
docker build -t alpine-sleep-localbuild .
docker run -it alpine-sleep-localbuild
then I get the sleep working fine.
My Docker and kernel versions locally:
jon@jvb ~/alpine_test $ uname -r
4.4.0-79-generic
jon@jvb ~/alpine_test $ docker -v
Docker version 1.12.6, build 78d1802
And remotely:
root@vps:~/alpine-sleep# uname -r
3.13.0-24-generic
root@vps:~/alpine-sleep# docker -v
Docker version 17.05.0-ce, build 89658be
I wonder, does the major difference in the kernel make a difference? I expect 3.13 to 4.4 is quite a big jump. I don't recall what kernel version I was using when I built things while running Ubuntu locally, but it would not surprise me if it was 3.x.
The other thing that strikes me as unexpected is the large gap in Docker version numbers. How do I have version 1.x locally and 17.x remotely? Has the project been through a version renumbering?
Update
I've just checked the kernel version when I was running Ubuntu locally, and that was:
4.4.0-75-generic
So, this makes me think that a major kernel discrepancy could not be to blame.
The issue is that Docker won't warn you when you use the wrong combination of save/load and export/import. You save/load an image, and you export/import a tar file from a container. Since you are doing a docker save to save your image, you need to do a docker load to restore it on the other host:
docker load < /path/to/images/alpine-sleep.tgz
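For reference, the two pairs and what they carry (the container name below is just a placeholder):
# save/load operate on images and preserve all metadata (ENTRYPOINT, CMD, layers, tags)
docker save alpine-sleep | gzip > alpine-sleep.tgz
docker load < alpine-sleep.tgz
# export/import operate on a container's filesystem and strip that metadata
docker export some-container > alpine-sleep-fs.tar
docker import alpine-sleep-fs.tar alpine-sleep-imported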
I have found this very old issue: https://github.com/moby/moby/issues/1826
An image imported via docker import won't know what command to run. Any image will lose all of its associated metadata on export, so the default command won't be available after importing it somewhere else.
So, if you stick with docker import, run it with an explicit entrypoint:
docker run --entrypoint sleep alpine-sleep 3
