GitLab suddenly shows a "No repository" message on several repos. I am running GitLab with docker-compose on Docker CE. On the host machine's file system, as well as inside the Docker container, I can see that the repositories still contain data in the @hashed folder. I can see my CI/CD pipelines and my images in the container registry, but I cannot see any of my source code in the repository view. I have tried clearing the cache and running the gitlab reconfigure command, but that did not solve the issue.

Does anyone know what could be the reason for this kind of behavior?
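For reference, with a typical docker-compose setup (assuming the compose service is named gitlab, which is an assumption about the compose file), those two steps look like:

docker-compose exec gitlab gitlab-rake cache:clear    # clear the GitLab Rails cache
docker-compose exec gitlab gitlab-ctl reconfigure     # rerun the reconfigure step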
I'm trying to open this repository using GitHub Codespaces. Note that this repository is correctly configured for local devcontainer development.

However, when I try to open it in Codespaces, it seems to build the container correctly, but fails with: Could not detect any language/platform in the source directory (full log here).

What am I missing?
It looks like you may have run into a regression that Codespaces had during the time specified in your log file.
Given your configuration, Oryx should no longer run, which means you should no longer run into this issue.
Would you mind retrying?
In the absence of a network connection on site, we can commit to a local Git repo, but we can't have gitlab-ci compile the project for early troubleshooting.

How can we have a localized gitlab-ci and gitlab-runner that can compile commits offline (or by alternate means)?
The gitlab-runner binary has an exec command which allows you to run the runner on your local machine with your local .gitlab-ci.yml configuration file.
This command allows you to run builds locally, trying to replicate the CI environment as much as possible. It doesn't need to connect to GitLab, instead it reads the local .gitlab-ci.yml and creates a new build environment in which all the build steps are executed.
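For example, assuming a job named build is defined in your .gitlab-ci.yml (the job name here is just an illustration), a local run with the shell executor looks like this. Note that exec runs a single named job, not the whole pipeline:

cd /path/to/your/repo       # the directory containing .gitlab-ci.yml
gitlab-runner exec shell build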
Though if local network trouble is frequent, you may consider installing GitLab on premises and connecting your own local gitlab-runner to it so the work is automated.
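As a rough sketch of that setup (the hostname and registration token below are placeholders, and the exact flags may vary by runner version):

docker run -d --hostname gitlab.local -p 80:80 -p 443:443 gitlab/gitlab-ce:latest
sudo gitlab-runner register --url http://gitlab.local --registration-token YOUR_TOKEN --executor shell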
I'm quite new to Docker, but I'm facing a problem I have no idea how to solve.

I have a Jenkins (Docker) image running and everything was fine. A few days ago I created a job so I can run my Node.js tests every time a pull request is made. One of the job's build steps is to run npm install, and the job is constantly failing with this error:
tar (child): bzip2: Cannot exec: No such file or directory
So, I know that I have to install bzip2 inside the Jenkins container, but how do I do that? I've already tried running docker run jenkins bash -c "sudo apt-get bzip2", but I got: bash: sudo: command not found.
With that said, how can I do that?
Thanks in advance.
The answer to this lies in the philosophy of Docker containers. Docker containers are/should be immutable. So, this is what you can try to fix this issue:
1. Treat your base image, i.e. jenkins, as the starting point.
2. Log in to this base image and install bzip2.
3. Commit these changes; this should result in a new image.
4. Now use the image from step 3 to install any other package, like npm.
5. Commit the above image.
Note: to execute commands in a more controlled way, I always prefer to use something like this:
docker exec -it jenkins bash
In a nutshell, the answer to both of your current issues lies in the fact that images are immutable, so the way to make a change that propagates is to commit it and use the newly created image to make further changes. I hope this helps.
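A minimal sketch of that workflow, assuming the running container is named jenkins (the image name jenkins-with-bzip2 is just an example):

docker exec -u root -it jenkins bash                # step 2: open a root shell in the running container
# inside the container: apt-get update && apt-get install -y bzip2, then exit
docker commit jenkins jenkins-with-bzip2            # step 3: save the changes as a new image
docker run -d --name jenkins2 jenkins-with-bzip2    # continue from the new image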
Lots of issues here, but the biggest one is that you need to build your images with the tools you need rather than installing inside of a running container. As techtrainer mentions, images are immutable and don't change (at least from your running container), and containers are disposable (so any changes you make inside them are lost when you restart them unless your data is stored outside the container in a volume).
I do disagree with techtrainer on making your changes in a container and committing them to an image with docker commit. This will work, but it's a hand-built method that is very error-prone and not easily reproduced. Instead, you should leverage a Dockerfile and use docker build. You can either modify the jenkins image you're using by directly modifying its Dockerfile, or you can create a child image that is FROM jenkins:latest.
When modifying this image, the Jenkins image is configured to run as the user "jenkins", so you'll need to switch to root to perform your application installs. The "sudo" app is not included in most images, but external to the container, you can run docker commands as any user. From the CLI, that's as easy as docker run -u root .... And inside your Dockerfile, you just need a USER root at the top and then a USER jenkins at the end.
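A minimal sketch of that child image, built from stdin so the example stays on the command line (the tag jenkins-bzip2 is just an example):

docker build -t jenkins-bzip2 - <<'EOF'
FROM jenkins:latest
USER root
RUN apt-get update && apt-get install -y bzip2 && rm -rf /var/lib/apt/lists/*
USER jenkins
EOF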
One last piece of advice is to not run your builds directly on the jenkins container, but rather run agents with your needed build tools that you can upgrade independently from the jenkins container. It's much more flexible, allows you to have multiple environments with only the tools needed for that environment, and if you scale this up, you can use a plugin to spin up agents on demand so you could have hundreds of possible agents to use and only be running a handful of them concurrently.
We recently started to use GitLab-CI on the gitlab.com free service.
At first everything went fine, but now it seems like we can't build our project anymore. The builds are shown as pending and don't do anything.
Here's what we have in our builds list:
And if we check the details of a build:
As you might notice, in the list, each build is assigned to a runner id, but in the details page, the runner section is blank.
At first, we thought it was just latency caused by the gitlab.com infrastructure, but it's really just stuck there...
EDIT
It was more than a year ago, but I keep getting notifications about this question. If I recall correctly, the problem was due to GitLab itself. Follow the GitLab docs, make sure your setup is valid, and hope for the best!
If you are working with a local gitlab-runner, such as on macOS or a custom runner that you have made yourself, you should start running jobs manually.

Based on this topic in the GitLab documentation, you should start it manually in user mode or system mode, depending on where you are executing this command.

Run in a terminal.

If you have not started gitlab-runner yet:

gitlab-runner start

System-mode execution:

sudo gitlab-runner run

User-mode execution:

gitlab-runner run
I was stuck on the same issue on my Windows machine. I went to Event Viewer to get the service logs and found the error "listen_address not defined".

I followed the steps below to fix it:

1. Go to the GitLab repository and edit the runner settings.
2. You will find a checkbox labeled "Indicates whether this runner can pick jobs without tags".
3. Make sure that option is checked.

It works for me now.
My problem was solved after doing the following steps:
Go to your project repository, click on CI/CD and then select Pipelines. Try removing the runner cache by clicking on Clear runner caches.

Verify, start, and run your local runners by doing the following steps on the server where you have registered your runners:
sudo gitlab-runner verify
sudo gitlab-runner start
sudo gitlab-runner run
GitLab maxed out their shared runners but they have just finished adding more of them. Now GitLab has 12 shared runners. Take a look at this issue: https://gitlab.com/gitlab-org/gitlab-foss/issues/5543#note_3130561
Update
GitLab has moved to auto scaling Runners. If you're still hitting any issues it might be due to a different cause.
Try clearing the Runner cache if you have set one up.

Go to CI/CD >> Pipelines >> at the top >> Clear Runner Caches
For me, this workaround worked: Pausing and unpausing the runner triggers the pending job to run.
Reference: https://gitlab.com/gitlab-org/gitlab/-/issues/23401
I had the same issue because there were no active runners.

Go to Settings > CI/CD > Enable shared runners for this project
We have a serious problem with Foresight Linux. As we know, Foresight has no support because the Conary package system is now shut down. However, our application's build fails because the online repo (rpath) is not reachable.
This is the error we get during build:
Error occurred opening repository http://foresight.rpath.org/conary/: Connection refused
So we found a way to get a dump of the Conary packages onto a local server (from a Git mirror of the Conary repo).

Now we are really not sure how and where to point Foresight Linux at the new repo path instead of foresight.rpath.org/conary.

The fact is that we would not expect any major upgrades or updates to the packages. This is just to let the build run without exiting over the online repo issue, so that we can plan and manage until the application is completely migrated.
You can edit the repo list yourself; the path is:

cat /etc/apt/sources.list