How to move all zones to a new BIND DNS server - Linux

Existing server = BIND version 9.1.3
New server = BIND version 9.3.4
How can I move all zone records to the new server? I tried moving them by copying the files and changing the configuration file on the new server to match, but the zones did not resolve.
Is there a smooth way to just transfer all zones to this new server?

Copying the files and updating the configuration is the right way to do this.
Using AXFR (zone transfer) is no good:
You still have to create a config file on the new server listing all of the zones; there's no way to say "transfer every zone".
The files you get out will be in a different order from the records in the originals, with any comments etc. missing. If you normally hand-craft your zone files, this would be pretty annoying.
Please expand on "it didn't work" so that we can establish why not.

What about setting up a zone transfer by making your new server a slave of the old one? That way the new server will pick up the data from the old one, after which you can break the master-slave relationship.
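For example (a minimal sketch - the zone name and IP addresses here are placeholders for your own), the old server allows the transfer and the new server pulls the zone as a slave:
// named.conf on the old (master) server - allow the new server (192.0.2.2) to transfer
zone "example.com" {
    type master;
    file "example.com.db";
    allow-transfer { 192.0.2.2; };
};
// named.conf on the new (slave) server - pull the zone from the old server (192.0.2.1)
zone "example.com" {
    type slave;
    masters { 192.0.2.1; };
    file "slaves/example.com.db";
};
Once the transferred zone files have been written on the new server, each zone can be switched back to type master and the masters line removed.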

Related

WordPress Migration from Windows to Linux, Unable to Login to Staging Sites

I am working to migrate a client's WP site from Windows to Linux, and I am having an issue logging into their DEV and PROD sites after successfully migrating UAT to the new servers.
A little background to uncomplicate this: they have three sites (DEV, UAT, and PROD) on a Windows server that are currently still viewable to the public.
The new Linux servers, since the migrations have not yet been completed, are only accessible by the IP address. As of now, I have been able to successfully migrate the UAT site from the old servers to the new servers, using WP All in One Migration. I am able to log into the UAT instance, make changes, updates, etc. However, when I try to log into the DEV and PROD instances on the new servers, I am getting an error that the WP username/email does not exist, so I am unable to complete the migration for the remaining two instances. I am still able to log into the old Windows servers and make changes, but not the new servers.
Any help would be greatly appreciated!
You likely did not get a usable full copy of the database. When you migrate WordPress around, the files on the server are only a small portion of the data you need. You MUST get copies of everything in the database as well or the site will not function! Do a dump of the database and then restore it on your DEV/UAT database node. You may also need to jump through some hoops to change the URL that is embedded in the database for the sites to function properly. It's a PITA, but once you get the hang of it, you can script it and take the human out of the loop. I used to do this with a script that ran the DB search and replace statements.
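A minimal sketch of that dump/restore and URL fix (the database name wordpress_db, the credentials, and the new URL are placeholders; the default wp_ table prefix is assumed):
On the old server: mysqldump -u root -p wordpress_db > wordpress_db.sql
On the new server: mysql -u root -p wordpress_db < wordpress_db.sql
Then point the restored site at its new URL:
UPDATE wp_options SET option_value = 'https://dev.example.com' WHERE option_name IN ('siteurl', 'home');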
As an alternative, you can add a user (even an administrator) to WordPress using only the database. Can be handy in some situations ...
INSERT INTO databasename.wp_users (ID, user_login, user_pass, user_nicename, user_email, user_status, display_name) VALUES ('1000', 'usernamefornewuser', MD5('plaintextnewuserpassword'), 'usernameforyournewuser', 'newuseremail@somenewuseremaildomain.com', '0', 'The New User I just created');
INSERT INTO databasename.wp_usermeta (umeta_id, user_id, meta_key, meta_value) VALUES (NULL, '1000', 'wp_capabilities', 'a:1:{s:13:"administrator";b:1;}');
INSERT INTO databasename.wp_usermeta (umeta_id, user_id, meta_key, meta_value) VALUES (NULL, '1000', 'wp_user_level', '10');
There are some values you will have to change; however, I have personally used these MySQL snippets with WordPress for many, many moons (years!).

Retrieving to-be-pushed entries in IMobileServiceSyncTable while offline

Our mobile client app uses IMobileServiceSyncTable for data storage and handling syncing between the client and the server.
A behavior we've seen is that, by default, you can't retrieve an entry added to the table while the client is offline. Only when the client table is synced with the server (we do an explicit PushAsync then a PullAsync) can those entries be retrieved.
Does anyone know of a way to change this behavior so that the mobile client can retrieve the entries added while offline?
Our current solution:
Check if the new entry was pushed to the server
If not, save the entry to a separate local table
When showing the list for the table, we pull from both tables: sync table and regular local table.
Compare the entries from the regular local table to the entries from the sync table for duplicates.
Remove duplicates
Join the lists, order, and show to the user.
Thanks!
This should definitely not be happening (and it isn't in my simple tests). I suspect there is a problem with the Id field - perhaps you are generating it and there are conflicts?
If you can open a GitHub Issue on https://github.com/azure/azure-mobile-apps-net-client/issues and share some of your code (via a test repository), we can perhaps debug further.
One idea - rather than let the server generate an Id, generate an Id using Guid.NewGuid().ToString(). The server will then accept this as a new Id.
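A rough sketch of that idea (TodoItem and todoTable are placeholders for your own model and IMobileServiceSyncTable<T>):
var item = new TodoItem
{
    Id = Guid.NewGuid().ToString(),   // client-generated Id, so the row is usable before the next push
    Text = "Created while offline"
};
await todoTable.InsertAsync(item);    // the entry should now come back from local queries against the sync table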

DbLookup not working

temp:=@DbLookup("Notes":"NoCache";"ARRoW/SSS":"sss/sssProj.nsf";"(Lookup for Community)";"State of Maine";2);
temp1:=@DbLookup("Notes":"NoCache";"ARRoW/SSS":"sss/sssProj.nsf";"(Lookup for Community)";"State of Maine";3);
temp2:=@DbLookup("Notes":"NoCache";"ARRoW/SSS":"sss/sssProj.nsf";"(Lookup for Community POC)";"State of Maine";4);
@If(@IsError(temp)|@IsError(temp1)|@IsError(temp2);"Error";temp + " " + temp1 + " " + temp2)
Hi, this works in the Lotus Notes client but doesn't work on the web. Any help is welcome, thanks in advance!
There are typically three types of root causes for something like this.
One type of problem is server trust. This only applies if there are two servers involved, i.e., the web server is ServerX/SSS and the code is trying to access ARRoW/SSS. You need to review ARRoW/SSS's server document and check whether "ServerX/SSS" is listed in the field for "Trusted servers". (Also note that if this is a really, really old version of Domino - before version 6 if I recall correctly - then the trusted servers feature is not there and you cannot make cross-server calls to @DbLookup in web code.)
The second type of problem is that the server where the code is running cannot resolve the name of the server where the database lives. The code is accessing server ARRoW/SSS, but you haven't said whether ARRoW/SSS is the actual web server, so let's look at both cases.
Assuming that it is all happening on one server, there can still be a name resolution problem because of the way the formula is coded. Try specifying "":"sss/sssProj.nsf" instead of "ARRoW/SSS":"sss/sssProj.nsf". If that fixes your problem, great! But it means that you still have a problem either in your server document or with the DNS configuration on your Domino server, and you should address that. You should probably continue with the troubleshooting that I give below. Just bear in mind that everything I say there is true even if ServerX/SSS is really the same as ARRoW/SSS.
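For reference, the first lookup with an empty server name would look like this:
temp:=@DbLookup("Notes":"NoCache";"":"sss/sssProj.nsf";"(Lookup for Community)";"State of Maine";2);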
If the code is running on web server ServerX/SSS, then you need to make sure that ServerX can connect to ARRoW/SSS. The easiest way to do this is to bring up the console for ServerX and enter the command 'trace ARRoW/SSS'. If it fails, check the server documents and/or connection documents for correct IP addresses or host names, and open a command window on the server and try a ping using the exact information in the server documents. If that fails, you have a networking issue. One of the underlying causes I've seen for a problem like this is that there is no connection document (because the servers are in the same named network), but neither the IP address nor the fully-qualified host name is entered in the networks table in the server document, so Domino just asks DNS to resolve the common name 'ARRoW' - and the DNS configuration on the web server does not include a default search path, so the name is not resolved. But you need to check everything until you can get a 'trace' command to succeed.
The third type of problem is Access Control. This is a broad category that comes down to the fact that the identity that the code is running under either does not have access to the server ARRoW/SSS, the database sss/sssProj.nsf, the view "(Lookup for Community)", or the document(s) with the key "State of Maine". There are a lot of things to check. If the code is running in a field formula, the identity is that of the user, and if the same user does not get the error through the Notes client then you need to look at the database properties for sss/sssProj.nsf and check the maximum web access level. If the code is running as an agent, you need to check the agent properties to determine what identity the agent is running under, and then review everything: the security settings in the server document, the database ACL, restrictions on the view, and reader names fields in the documents.

Chef server migration: How to update the client.pem in nodes?

I am attempting to migrate from one Chef server to another using knife-backup. However, knife-backup does not seem to update the nodes: all my nodes are still pointing to the old server in their respective client.rb files, and their validation.pem and client.pem are still paired with the old server.
Consequently, I updated all the client.rb and validation.pem files manually.
However, I still need to update client.pem. Obviously, one way to do so would be to bootstrap the node again against the new server, but I do not want to do that because deploying to these nodes could cause a loss of data.
Is there any way to update client.pem in the nodes without having to bootstrap or run chef-client? One way would be to get the private key and do it manually, but I am not sure how to do that.
Thanks!
PS: Please feel free to suggest any other ideas for migration as well!
It's the Chef server "client" entities that contain the public keys matching the private key ("client.pem") files on each client server. The knife-backup plugin reportedly restores Chef clients. Have you tried just editing the Chef server URL (in client.rb) and re-running chef-client?
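As a rough sketch, the relevant lines in /etc/chef/client.rb would look something like this (the server URL and node name are placeholders for your own):
chef_server_url  "https://new-chef.example.com"   # on Chef 12 and later, include the /organizations/<org> path
node_name        "node01.example.com"
client_key       "/etc/chef/client.pem"           # existing key; its public half must exist as a client on the new server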
Additional note:
You can discard the "validation.pem" files. These are used during bootstrap to create new client registrations. Additionally, your new Chef server most likely has a different validation key.

Web (anchor) link to a Notes database

Is there a way to create a regular web (or anchor) link that will open a Notes client and display a pre-determined database from the workplace?
Try this for a local database: notes:///[drive]:/[path_to_notes_data]/[database.nsf]
This will open the database in the Notes client.
Regards
Thorsten
Thorsten's answer should work for databases on servers as well:
notes://[server name]/[path-to-database-on-server]
...or
notes://[server name]/__[replica-id].nsf
The Notes client seems to replace "/" with "#" in the server name to create a server name without slashes - but the host name or IP of the server should work.
I believe that leaving the server name out (or specifying 127.0.0.1) will use the currently active replica on the desktop - possibly with priority to local replicas.
notes:///__[replica-id].nsf
Some details can be found in the footnote here: Specifying valid notesurl entries
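So a plain anchor on a web page could look like this (the server name and file path are placeholders):
<a href="notes://myserver.example.com/apps/database.nsf">Open the database in the Notes client</a>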
