This morning I got emails for each of my Gitlab Pages that are hosted on custom domains, saying that the domain verification failed.
That's fine, because I don't think I ever verified them in the first place - good on Gitlab for getting this going.
When I head over to Settings > Pages > Domain details on each repo, I see instructions to create the following record:
_gitlab-pages-verification-code.blog.ollyfg.com TXT gitlab-pages-verification-code={32_digit_long_code}
On creating this record, and clicking the "Verify Ownership" button, I get the message "Failed to verify domain ownership".
I have ensured that the record is set, and calling
dig -t txt +short _gitlab-pages-verification-code.blog.ollyfg.com
Returns:
"gitlab-pages-verification-code={same_32_digit_long_code}"
Is this a bug in Gitlab? Am I doing something wrong?
Thanks!
The docs (and the verification page) were a little confusing for me. Here's what worked for me, on GoDaddy:
A Record:
Name: @
Value: 35.185.44.232
CNAME:
Name: example.com
Value: username.gitlab.io
TXT Record:
Name: @
Value: gitlab-pages-verification-code=00112233445566778899aabbccddeeff
Verified with Gitlab, and also:
dig -t txt +short example.com
Here is how to get subdomain.domain.com to point to namespace.gitlab.io/project-name with Gandi.
The CNAME and TXT records generated by GitLab when adding a new subdomain to a project via Settings > Pages > New Domain did not work in my case. The exact non-working records were mysubdomain.mydomain.com CNAME mynamespace.gitlab.io. and _gitlab-pages-verification-code.mysubdomain.mydomain.com TXT gitlab-pages-verification-code=00112233445566778899aabbccddeeff.
Modifications like mysubdomain CNAME mynamespace.gitlab.io. (with and without a dot at the end) did not work, either (ping mysubdomain.mydomain.com said unknown host).
Using an A record and a TXT record with only the subdomain in the record's name field does work in my case. Here are the exact working records:
mysubdomain 1800 IN A 35.185.44.232
mysubdomain 1800 IN TXT "gitlab-pages-verification-code=00112233445566778899aabbccddeeff"
Note that the namespace.gitlab.io IP address has changed from 52.167.214.135 to 35.185.44.232 in 2018.
Wait at least 30 minutes for the records to propagate.
In my case GitLab also verified the domain automatically, I did not need to click the Verify button.
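Before clicking Verify, it can also help to sanity-check the TXT value itself: in all the examples here, GitLab's code is `gitlab-pages-verification-code=` followed by 32 lowercase hex digits. A small sketch of that check (the function name is made up, and the assumption that the code is always lowercase hex is mine):

```shell
# Check that a TXT value looks like a GitLab Pages verification record:
# "gitlab-pages-verification-code=" followed by 32 hex digits.
is_gitlab_verification() {
  printf '%s\n' "$1" | grep -Eq '^gitlab-pages-verification-code=[0-9a-f]{32}$'
}

if is_gitlab_verification "gitlab-pages-verification-code=00112233445566778899aabbccddeeff"; then
  echo "looks valid"
fi
```

You can feed it the value returned by `dig -t txt +short` (with the surrounding quotes stripped) to catch copy/paste mistakes before blaming GitLab.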
Wait for some time; that worked for me. I initially had the same problem you mention.
Also, you may find this page useful: https://gitlab.com/help/user/project/pages/getting_started_part_three.md#dns-txt-record
It might be worth trying:
blog.ollyfg.com
instead of: _gitlab-pages-verification-code.blog.ollyfg.com
I had a really hard time getting this to work, but in the end the settings below worked for me.
GoDaddy
domain.com
A record
+-----------+---------------------+
| Host      | @                   |
+-----------+---------------------+
| Points To | 35.185.44.232       |
+-----------+---------------------+
To verify your domain, add a TXT record:
TXT record
+-----------+-----------------------------------------------------------------+
| Host      | @                                                               |
+-----------+-----------------------------------------------------------------+
| TXT Value | gitlab-pages-verification-code=00112233445566778899aabbccddeeff |
+-----------+-----------------------------------------------------------------+
subdomain.domain.com
CNAME record
+-----------+---------------------+
| Host      | subdomain           |
+-----------+---------------------+
| Points To | namespace.gitlab.io |
+-----------+---------------------+
To verify your domain, add a TXT record:
TXT record
+-----------+-----------------------------------------------------------------+
| Host      | _gitlab-pages-verification-code.subdomain                       |
+-----------+-----------------------------------------------------------------+
| TXT Value | gitlab-pages-verification-code=00112233445566778899aabbccddeeff |
+-----------+-----------------------------------------------------------------+
Note: the subdomain and verification code can be found under Settings > Pages (create/details).
Note: the GitLab Pages IP on GitLab.com changed from 52.167.214.135 to 35.185.44.232 in 2018.
For GoDaddy (April 2020), I had to do the following:
|Type |Name                           |Value                                      |
|-----|-------------------------------|-------------------------------------------|
|A    |example.com (or @)             |35.185.44.232                              |
|TXT  |_gitlab-pages-verification-code|gitlab-pages-verification-code=blahblahblah|
|A    |www                            |35.185.44.232                              |
|CNAME|www.example.com                |example.gitlab.io                          |
|TXT  |_gitlab-pages-verification-code|gitlab-pages-verification-code=blahblahblah|
|     |(or _gitlab-pages-verification-|                                           |
|     |code.www)                      |                                           |
While the documentation said to use _gitlab-pages-verification-code.example.com and _gitlab-pages-verification-code.www.example.com, those did not work for me. Feedback was quick, though: within seconds of changing a record and re-checking, my verification status flipped from unverified to verified, and vice versa.
It's 2021, and this issue still happens. I couldn't verify the domain with GitLab's suggested CNAME and TXT records; on GoDaddy I had to use:
subdomain A 35.185.44.232
subdomain TXT gitlab-pages-verification-code=####
Related
To test a mail server I need an MX record in DNS, but changes always lag because of DNS caching, and I need this to be faster. Is there a way to define an MX record locally, the way an A record can be set in the /etc/hosts file?
Thanks, this works.
echo "10.10.10.1 mail.example.com" | sudo tee --append /etc/hosts > /dev/null
echo "disable_dns_lookups = yes" | sudo tee --append /etc/postfix/main.cf > /dev/null
systemctl restart postfix
I'm using Postfix on a Proxmox server to deliver mail to a local mail server on a VM whose IP address differs from the DNS MX record. I found this article very useful:
http://www.readonlymaio.org/rom/2018/01/16/force-postfix-to-search-mx-records-in-etc-hosts-file/
It led me to disable DNS lookups in Postfix, forcing it to use the A record from the hosts file. It works.
In /etc/hosts:
X.X.X.X example.com
Edit the /etc/postfix/main.cf file and add this line:
disable_dns_lookups = yes
Then restart Postfix.
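The steps above can be scripted. Here's a minimal sketch that appends the hosts entry only if it isn't already present, so repeated runs don't duplicate lines (it targets a scratch file here rather than the real /etc/hosts; the names and IP are placeholders):

```shell
# Append "IP hostname" to a hosts file only if that exact entry is missing.
HOSTS=./hosts.test          # use /etc/hosts (with sudo) on a real system
: > "$HOSTS"                # start from an empty scratch file for the demo

add_host() {
  grep -qxF "$1 $2" "$HOSTS" || printf '%s %s\n' "$1" "$2" >> "$HOSTS"
}

add_host 10.10.10.1 mail.example.com
add_host 10.10.10.1 mail.example.com   # no-op the second time
```

The `grep -qxF` matches the whole line literally, which is what makes the second call a no-op.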
I was wondering if it's possible to log into name.com and update a specific domain's DNS records over SSH? The VPS hostname is the domain I want modified.
I basically want to update each DNS field with the following:
Log into https://www.name.com/account/domain/details/$(hostname)#dns
A Record "" $(hostname -i)
A Record "*" $(hostname -i)
TXT default._domainkey $(cat /etc/opendkim/keys/"$(hostname)"/default.txt)
TXT "" "v=spf1 mx a ip4:$(hostname -i) -all"
I know they have an API (have to request signup as a reseller) but that is not an option for me.
We currently have a dynamically provided IP address and are switching over to a static ip address. As such, I need to change the IP address on our 3 LAMP servers. These servers also run bind9 for DNS and postfix/dovecot for email. (MySQL is actually running as a Percona DB cluster which may be irrelevant.)
I think I have a good strategy, but want to check my logic with others who may have done this successfully before.
The concept is to stop all web, database, and mail services on each machine one at a time, pushing traffic to one of the two remaining servers. I then run the following script to replace the old IP address with the new one, reboot the server, attempt to push traffic back to it, and proceed to the next server in the cluster if all goes well.
I used grep -r to find instances of the old ip address in the system and need to make sure that I'm not missing anything important that needs to be considered.
find /etc/bind -type f -print0 | xargs -0 sed -i 's/old.ip.address/new.ip.address/g'
find /etc/postfix -type f -print0 | xargs -0 sed -i 's/old.ip.address/new.ip.address/g'
find /etc/apache2 -type f -print0 | xargs -0 sed -i 's/old.ip.address/new.ip.address/g'
find /etc/postfix -type f -print0 | xargs -0 sed -i 's/old-ip-address/new-ip-address/g'
find /etc/bind -type f -print0 | xargs -0 sed -i 's/rev.address.ip.old/rev.address.ip.new/g'
As a point of clarification, grep -r found the IP address references in the /etc/bind/zones tables, the /etc/postfix configuration files, and the /etc/apache2 config file. The IP address separated by hyphens was also found in the postfix config files. The reverse IP address was also found in a /etc/bind/named.conf.local file and will also need to be replaced.
Can anyone see if I may be missing something here? I'm doing this in a production environment, which is not the most ideal of circumstances, of course.
Sorry all. Looks like I let this get stale after finding the solution. For posterity's sake, here's what seems to be working at this point:
$ORIGIN example.com.
$TTL 12H
; @ symbol represents example.com.
@ 12H IN SOA ns1.example.com. hostmaster.example.com. (
2015062954 ;serial
30M ;refresh
2M ;retry
2W ;expire
1D ;minimum TTL
)
NS ns1.example.com.
NS ns2.example.com.
MX 10 mail.example.com.
IN A 99.101.XXX.XXX
IN TXT "v=spf1 a mx ip4:99.101.XXX.XXX ~all"
IN SPF "v=spf1 a mx ip4:99.101.XXX.XXX -all"
ns1 IN A 99.101.XXX.XXX
ns2 IN A 99.101.XXX.XXX
mail IN A 99.101.XXX.XXX
IN TXT "v=spf1 a mx ip4:99.101.XXX.XXX ~all"
IN SPF "v=spf1 a mx ip4:99.101.XXX.XXX -all"
www IN A 99.101.XXX.XXX
dev IN A 99.101.XXX.XXX
demo IN A 99.101.XXX.XXX
webconf IN A 99.101.XXX.XXX
stats IN A 99.101.XXX.XXX
While the idea of using find piped to xargs sounds reasonable, my 15 years of experience tell me it's a bad idea. I would propose:
identify those services running on the boxes that are important (your find command works great here)
identify those files important to each of those services where the address is defined
back up those files (cp to .orig works nicely)
create new files that contain your new addresses
This way you have a fast transition with:
cp somefile.new somefile
and a fast backout with:
cp somefile.orig somefile
Additionally, I would expect that the zones files contain actual DNS entries, so changing them is fine, but you'll probably need to reload named for those changes to take effect. Same goes for postfix, you'll want to postfix reload those as well.
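Here is a runnable sketch of that backup/transition/backout flow on a scratch file (the file name, setting, and addresses are placeholders, not real config):

```shell
# Back up the original, build the new file, then swap it in.
f=main.cf.test
printf 'mynetworks = 192.0.2.10\n' > "$f"          # stand-in config file

cp "$f" "$f.orig"                                   # backout point
sed 's/192\.0\.2\.10/198.51.100.20/g' "$f.orig" > "$f.new"

cp "$f.new" "$f"                                    # fast transition
# fast backout, if needed:  cp "$f.orig" "$f"
```

On real files, follow each swap with the reloads mentioned above (reloading named, and `postfix reload` for Postfix).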
EDIT (I haven't taken the time to actually load this zone, but it looks reasonably correct):
$ORIGIN example.com.
$TTL 12H
@ IN SOA ns1.example.com. hostmaster.example.com. (
2015062660 ;
30M ;refresh
2M ;retry
2W ;expire
1D ;minimum TTL
)
IN NS ns1.example.com.
IN NS ns2.example.com.
IN A 99.101.XXX.X
example.com. IN MX 10 mail.example.com.
mail IN A 99.101.XXX.X
IN TXT "v=spf1 a mx ip4:99.101.XXX.X ~all"
ns1 IN A 99.101.XXX.X
ns2 IN A 99.101.XXX.X
www IN CNAME example.com.
dev IN CNAME example.com.
demo IN CNAME example.com.
webconf IN CNAME example.com.
stats IN CNAME example.com.
EDIT:
glue records (ns1/ns2 live inside the zone, so the registrar needs glue records for them)
My ISP provides dynamic IP addresses. I have forwarded a port to a Raspberry Pi, which I access over SSH and also use as a web server. The problem is that the IP changes every 3-4 days. Is there any way or script so that I can be informed of, or updated with, the new IP address?
Thank You.
You can write a script like:
============
#!/bin/bash
OUT=$(wget http://checkip.dyndns.org/ -O - -o /dev/null | cut -d: -f 2 | cut -d\< -f 1)
echo "$OUT" > /root/ipfile
============
Set a cron job to execute this every 3 hours or so, and configure your MTA to send the file /root/ipfile to your email address (you can use cron for that too). mutt can be a useful tool to attach the file and handle the email delivery.
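A variant that only reports when the address actually changes, so the cron runs stay quiet otherwise. The lookup is stubbed here with a made-up address so the sketch runs offline; on a real box, replace current_ip with the wget pipeline above:

```shell
#!/bin/bash
IPFILE=./ipfile.test        # /root/ipfile on a real system

current_ip() {              # stub; replace with the checkip.dyndns.org lookup
  echo 203.0.113.7
}

NEW=$(current_ip)
OLD=$(cat "$IPFILE" 2>/dev/null)
if [ "$NEW" != "$OLD" ]; then
  echo "$NEW" > "$IPFILE"
  echo "IP changed to $NEW"   # hook your mail command in here
fi
```

Because the last-seen address is stored in the file, the mail step fires only on an actual change rather than every 3 hours.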
So I am trying to parse FTP logs and see if a certain user is logging in securely. So far I have this to pull the few lines before the user logs in:
cat proftpd.log.2 | grep -B 3 "USER $sillyvariable"
and this is a sample output it creates
::ffff:127.0.0.0 UNKNOWN ftp [04/Jan/2013:11:03:06 -0800] "AUTH TLS" 234 -
::ffff:127.0.0.0 UNKNOWN ftp [04/Jan/2013:11:03:06 -0800] "USER $sillyvariable" 331 -
Now this is a perfect example of what I want: it displays the AUTH TLS message and the IPs match. However, this is not always the case, as many users are constantly logging in and out, and most of the time the output is jumbled.
Is there a way I can grep for the USER $sillyvariable and find his/her matched IP containing the "AUTH TLS" in the preceding line so I can know they logged in securely? I guess you can say I want to grep the user and then grep backwards to see if the connection they originated from (matching IPs) was secure. I'm kind of stuck on this and could really use some help.
Thanks!
$ grep -B3 'USER $sillyvariable' proftpd.log.2 |
tac | awk 'NR==1 {IP=$1} $1==IP {print}' | tac
::ffff:127.0.0.0 UNKNOWN ftp [04/Jan/2013:11:03:06 -0800] "AUTH TLS" 234 -
::ffff:127.0.0.0 UNKNOWN ftp [04/Jan/2013:11:03:06 -0800] "USER $sillyvariable" 331 -
This uses tac to reverse the lines in the grep result. It then looks for all lines where the IP addresses match the one in the USER line. Finally it runs tac again to put the lines back in the original order.
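To see the filter in action on a tiny synthetic log (the IPs and lines below are made up, standing in for the grep -B3 output):

```shell
# Three log lines: AUTH and USER from one IP, an unrelated line between them.
log='::ffff:10.0.0.1 UNKNOWN ftp [date] "AUTH TLS" 234 -
::ffff:10.0.0.2 UNKNOWN ftp [date] "PASV" 227 -
::ffff:10.0.0.1 UNKNOWN ftp [date] "USER alice" 331 -'

# Keep only the lines whose IP matches the IP on the final USER line.
matched=$(printf '%s\n' "$log" | tac | awk 'NR==1 {IP=$1} $1==IP {print}' | tac)
printf '%s\n' "$matched"
```

The unrelated 10.0.0.2 line is dropped, leaving just the AUTH TLS and USER lines for the matching IP.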
I realize I am very late to this party, but the comment I made about the AUTH statement possibly being more than 3 lines earlier left me wondering.
I took a slightly different approach, in which I make minimal assumptions (based on limited knowledge of the contents of your log file):
There is one user per IP address (may not be true if they are behind a firewall)
For every AUTH entry there should be exactly one "good" USER entry from the same IP address
A sorted list of IP addresses which have entries in the log file will show more "USER" than "AUTH" requests for any IP address from which a "bad" request was made
If those assumptions are reasonable / true, then a simple bash script does quite a nice job of giving you exactly what you want (which is a list of the users that didn't log in properly - which is not exactly what you were asking for):
#!/bin/bash
# first, find all the "correct" IP addresses that did the login "right", and sort by IP address:
grep -F "AUTH TLS" $1 | awk '{print $1}' | sort > goodLogins
# now find all the lines in the log file with USER and sort by IP address
grep USER $1 | awk '{print $1}' | sort > userLogins
# now see if there were user logins that didn't correspond to a "good" login:
echo The following lines in the log file did not have a corresponding AUTH statement:
echo
sdiff goodLogins userLogins | grep "[<>]" | awk '{print $2 ".*USER"}' > badUsers
grep -f badUsers $1
echo -----
Note that this leaves you with three temporary files (goodLogins, userLogins, badUsers) which you might want to remove. I assume you know how to create a text file with the above code, set it to be executable ( chmod u+x scrubLog ), and run it with the name of the log file as parameter ( ./scrubLog proftpd.log.2 ).
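A tiny demo of the sdiff step on made-up data: an IP with a USER entry but no matching AUTH ends up on the right-hand side only, which the grep/awk pair picks out:

```shell
# One "good" IP, and one IP that logged in without AUTH TLS.
printf '203.0.113.1\n' > goodLogins.demo
printf '203.0.113.1\n203.0.113.9\n' > userLogins.demo

# Lines present on only one side carry a < or > gutter marker;
# for ">" lines the IP is the second awk field.
missing=$(sdiff goodLogins.demo userLogins.demo | grep '[<>]' | awk '{print $2}')
echo "$missing"

rm goodLogins.demo userLogins.demo
```

Here `missing` ends up holding 203.0.113.9, the IP whose USER request had no corresponding AUTH.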
Enjoy!
PS - I am not sure what you mean by "logging in correctly", but there are other ways to enforce good behaviors. For example, you could block port 21 so only sftp (port 22) requests come through, you could block anonymous ftp, ... But that's not what you were asking about.