I had a working instance of CluedIn installed from the Home repo (https://github.com/CluedIn-io/Home).
For the last few days, I haven't been able to reach it. Say my organization name is foobar: when I open http://foobar.127.0.0.1.xip.io:9080/ (which worked just fine last week), my browser can't reach the page and shows DNS_PROBE_FINISHED_NXDOMAIN.
This issue happens when http://xip.io/ is down.
To solve the problem, use another wildcard DNS provider, for example https://nip.io/.
There is only one change you need to make in the Home repo: in the .env file, change this line:
CLUEDIN_DOMAIN=127.0.0.1.xip.io
To this:
CLUEDIN_DOMAIN=127.0.0.1.nip.io
And then run ./cluedin.ps1 up.
Your site will be available at http://foobar.127.0.0.1.nip.io:9080/
Should you need to script this, you can run the following:
./cluedin.ps1 env -set CLUEDIN_DOMAIN=127.0.0.1.nip.io # sets for the default environment
or
./cluedin.ps1 env dev -set CLUEDIN_DOMAIN=127.0.0.1.nip.io # sets for a custom environment called dev
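To confirm the new wildcard domain resolves before bringing the stack back up, a quick lookup (using the example organization name foobar from above) should answer with 127.0.0.1:
nslookup foobar.127.0.0.1.nip.io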
For the Helm chart, you have to update the dns section.
For chart v3.2.2:
global:
  dns:
    hostname: "127.0.0.1.nip.io"
Or in older versions of the chart (v3.2.1 and below):
dns:
  hostname: "127.0.0.1.nip.io"
Once deployed, it should restart affected pods, but the server, ui and gql deployments might need to be restarted manually to pick up the new configmap values.
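If they don't restart on their own, a manual restart sketch (assuming the deployments are literally named server, ui and gql; adjust the names and namespace to your release):
kubectl rollout restart deployment/server deployment/ui deployment/gql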
Related
I set up a Debian 10 server to host my containers running on Docker version 19.03.2.
It currently hosts 3 DNS containers (pi-hole => bind9 => dnscrypt-proxy) which means my Debian 10 server acts as a DNS server for my LAN.
I want to add a new container. However, I can't build it because it fails when it comes to RUN apt-get update. I checked the content of the /etc/resolv.conf of the container, and the content seems right (nameserver 1.1.1.1 and nameserver 9.9.9.9, that matches with what I wrote in /etc/docker/daemon.json).
If I'm correct, the build step uses the host's DNS by default, unless you specify DNS servers in /etc/default/docker or /etc/docker/daemon.json.
If the DNS servers in /etc/resolv.conf seem correct, and if the container has Internet access (I tried a RUN ping 8.8.8.8 -c1 and it works), shouldn't the build succeed?
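For reference, this is the kind of override I mean in /etc/docker/daemon.json (with the two resolvers mentioned above):
{
  "dns": ["1.1.1.1", "9.9.9.9"]
}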
I tried several things, like overwriting the content of /etc/resolv.conf with other DNS servers; I also rebooted the server, restarted Docker, pruned downloaded images, used the --no-cache option... I also reinstalled Docker. Nothing seems to work.
It must be somehow related to my DNS containers I guess.
Below is the content of the /etc/resolv.conf of the host (the first one is itself, as it redirects to Pi-hole).
Do you have any leads to solve this issue?
I can provide the docker-compose file of my DNS containers and the Dockerfile of my new container if you need them.
Thanks in advance.
I have found this fix:
RUN chmod o+r /etc/resolv.conf && apt-get [....]
It works when I change the permissions.
I do not really understand why it behaves like this; if you have any leads I would be glad to know more!
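For anyone hitting the same thing, a minimal Dockerfile sketch of where the fix goes (debian:buster and curl are just placeholders, not from my original build). My guess at the cause: apt drops privileges to the _apt user while fetching, and that user cannot read an /etc/resolv.conf that is not world-readable:
FROM debian:buster
# apt's fetch methods run as the unprivileged _apt user; if /etc/resolv.conf
# inside the build container is not world-readable, name resolution fails.
RUN chmod o+r /etc/resolv.conf && \
    apt-get update && \
    apt-get install -y curl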
So I have a need to do some new web development on a standalone box on a standalone network. This standalone network does not have any access to internet, but there are quite a few machines on it that operate in a Windows Server environment.
I have an internet-accessible machine with which I could download node and get the packages, but I need to be able to transfer the packages en masse over to the standalone machine.
What's the best way of doing that? I've read a few docs about replicating the registry on a local machine so it caches packages, but how would I take that cache and port it over via USB to this standalone network?
Are there other methods for handling this?
Previously on a different project, we established our own private npm repo using Verdaccio, and published our own npm modules to that repo. I could easily set that up and then port over tar or zip files of node modules and publish them that way. But again the question is, how do I get the bulk of node packages I need?
The main thing I need to know is how to take this locally cached npm registry and set it up on a standalone machine once all the modules are copied. I can do that all on the internet box, but how would I transfer and replicate all on the server?
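For illustration, the tarball route I'm picturing would be something like this (express is just an example package):
npm pack express
# writes express-<version>.tgz; carry the .tgz files over via USB, then on the standalone machine:
npm install ./express-<version>.tgz
The catch is that npm pack grabs only the package itself, not its dependency tree, which is why replicating a whole registry cache looks more attractive.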
I have the same problem.
I installed and used Verdaccio, and it resolved my problem.
Thanks to Juan Picado.
What you need is to properly cache all dependencies in your storage folder.
See here how to find it (e.g. in Windows 8.1: C:\Users\xxx\AppData\Roaming\npm-cache).
You should be able to see all the resolved dependencies in the cache.
Then set an environment variable named XDG_DATA_HOME, as follows:
Right-click on My Computer.
Click Properties.
From the left panel, click Advanced system settings.
On the Advanced tab, click the Environment Variables... button.
In the newly opened form, in the System variables group, click the New button.
Enter XDG_DATA_HOME as the variable name and the cache path as the variable value.
Click the OK button.
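If you prefer the command line over the GUI, the same variable can be set with setx (using the example cache path from above; it applies to new sessions, not the current one):
setx XDG_DATA_HOME "C:\Users\xxx\AppData\Roaming\npm-cache"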
Now go to config.yaml and comment out the proxy in the packages section, like this:
packages:
  '@*/*':
    access: $all
    publish: $authenticated
    # proxy: npmjs
  '**':
    access: $all
    publish: $authenticated
    # proxy: npmjs
Change the registry config URL:
npm config set registry http://localhost:4873/
Finally, restart Verdaccio.
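To get the populated storage onto the standalone machine from the original question, one way is to archive the storage folder and carry it across (the Linux default path is shown; on Windows use the XDG_DATA_HOME path set above):
tar czf verdaccio-storage.tar.gz -C ~/.local/share verdaccio
# copy the archive via USB, then on the offline machine:
tar xzf verdaccio-storage.tar.gz -C ~/.local/share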
I hope this is useful.
Verdaccio was not really meant for this use case; I would rather run the "npm-offline-registry" package in a container. If you go with Verdaccio, you may encounter some difficulties setting it up for an offline network.
OK, I have a test setup running on a local server that is running like a champ.
I would like to reimplement this on my VPS. The config file only differs with regards to the mail server section, as the VPS has this enabled, my local server does not.
The most apparent issue (there may be more) is that when I hit my domain:9080, it redirects to the login page but loses the port information. My local install does not.
For the life of me, I cannot figure out what I need to change to fix this issue.
To give you an idea of what I mean, if the above was unclear: shadow.schotty.com:9080 works perfectly (well, obviously not the new-user part, as the email isn't set up), while schotty.com:9080 has that redirection issue.
As for the obvious questions for me:
Here is the docker publish ports copied from my start script:
--publish 9443:443 --publish 9080:80 --publish 9022:22 \
No, I did not copy over any existing part of the install from the local host. I wanted to document what the hell I did, and since I am using a newer version, I wanted to avoid the potential issues that crop up with incompatible config files.
I did copy my startup script, and modified it appropriately for the volume directories.
The only modifications to any configuration files are the mail server section entries.
Thanks to anyone who can toss an idea my way.
Andrew.
OK, I figured a few things out here that should be of help to others.
First off, something had changed since I did the install on shadow. But now both behave the same, since both are on the exact same revision.
To fix the web port across the board, you will need to pick a port that the rest of the software suite does not use, and obviously one not used by other containers/daemons on the host. 8080 is indeed used, so I chose to stick with 9080.
There are two places where this matters, and each has to be done in a specific way. The first is in the config: you will need to set up the variable as follows:
external_url 'http://host.domain.tld:9080'
I am sure many tried stopping there and failed (I sure as heck did). The second spot is in the Docker container initialization. For some reason it used to work, but does not anymore. The simple fix is to map the external port 1:1 to the internal one. In my case I am using 9080, so the following publish must be used:
--publish 443:443 --publish 9080:9080 --publish 22:22 \
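Put together, the relevant part of my start script now looks roughly like this (the image tag and volume paths are illustrative, not my exact ones):
docker run --detach \
  --hostname host.domain.tld \
  --publish 443:443 --publish 9080:9080 --publish 22:22 \
  --volume /srv/gitlab/config:/etc/gitlab \
  --volume /srv/gitlab/logs:/var/log/gitlab \
  --volume /srv/gitlab/data:/var/opt/gitlab \
  gitlab/gitlab-ce:latest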
This fixes everything.
Now off to other issues :D
I have installed Gitlab Omnibus gitlab-7.4.3_omnibus.5.1.0.ci-1.el6.x86_64.rpm on CentOS 6.6. I have a few projects created and working fine but I would like to try using the continuous integration features. I don't know where to start and documentation/tutorials are thin on the ground.
I have found the following files that do not appear in an older Gitlab omnibus install I have:
/usr/bin/gitlab-ci-rake
/usr/bin/gitlab-ci-rails
I presume I need to do something with these? But do I need a configuration file first?
In my projects (Settings > Services > Gitlab CI) I can see there are options for Active, Token and Project Url but I do not know what to put in these fields.
Any help to get me started on CI would be appreciated. Cheers, jonny
We recently installed the omnibus GitLab 7.6.2 release which has GitLab CI 5.3 built in. I had the same question. Here's how we got it working.
We're using a single secured server over HTTPS, with a single IP for both the gitlab and gitlab-ci hosts.
We have DNS entries pointing both host names to the single IP (done with an alias for the CI server, I think). We have two SSL certificates, one for each hostname.
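In zone-file terms, that alias setup would look something like this (the IP is a placeholder):
gitlab      IN  A      203.0.113.10
gitlab-ci   IN  CNAME  gitlab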
We have the following lines at the top of the /etc/gitlab/gitlab.rb file (found by searching the gitlab site for rb file setup details):
external_url 'https://gitlab.example.edu'
nginx['redirect_http_to_https'] = true
ci_external_url 'https://gitlab-ci.example.edu'
ci_nginx['redirect_http_to_https'] = true
For http, leave out the nginx statements.
If the gitlab-ci URL displays the gitlab site contents, then the ci_nginx statement needs to be corrected.
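As usual with omnibus packages, changes to /etc/gitlab/gitlab.rb only take effect after a reconfigure:
sudo gitlab-ctl reconfigure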
Recently I had a problem starting the postgresql service with a custom PGDATA path. It looked in the default data directory (/var/lib/pgsql/9.3/data/), which was not initialized, and therefore triggered errors. It appears the problem is that the service manager on CentOS 7 strips all environment variables, including PGDATA.
Interesting thread on the issue
Is there a way to configure
service postgresql-9.3 start
to use custom environment variables? Are there configuration files for services where these variables have to be defined?
Thank you in advance!
Thanks for the above answer; we just ran into this change today. You can also keep the default settings and only override the PGDATA variable by putting the following in /etc/systemd/system/postgresql-9.3.service:
# Include the default config:
.include /lib/systemd/system/postgresql-9.3.service
[Service]
Environment=PGDATA=<your path here>/pgsql/9.3/data
This removes the need to reintegrate changes in /usr/lib/systemd/system/postgresql-9.3.service back to your local copy.
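After creating the override file, reload systemd and restart the service so it picks up the new PGDATA:
sudo systemctl daemon-reload
sudo systemctl restart postgresql-9.3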
OK, I got a solution that worked for me.
nano /etc/systemd/system/postgresql-9.3.service
with the contents copied over from /usr/lib/systemd/system/postgresql-9.3.service and the PGDATA variable changed. Then
systemctl daemon-reload
Then I started the service normally and it worked fine. The trick was making the changes in this copy of the service configuration file.