Chef server migration: How to update the client.pem in nodes? - knife

I am attempting to migrate from one Chef server to another using knife-backup. However, knife-backup does not seem to update the nodes: all my nodes still point to the old server in their respective client.rb files, and their validation.pem and client.pem are still paired with the old server.
Consequently, I updated all the client.rb and validation.pem files manually.
However, I still need to update client.pem. One obvious way would be to bootstrap each node against the new server again, but I do not want to do that, because deploying to these nodes could cause a loss of data.
Is there any way to update client.pem on the nodes without having to bootstrap or run chef-client? One option would be to get the private key and update it manually, but I am not sure how to do that.
Thanks!
PS: Please feel free to suggest any other ideas for migration as well!

It's the Chef server "client" entities that contain the public keys matching the private key ("client.pem") files on each client machine. The knife-backup plugin reportedly restores Chef clients, so have you tried just editing the Chef server URL (in "client.rb") and re-running chef-client?
Additional note:
You can discard the "validation.pem" files; they are only used during bootstrap to create new client registrations. Besides, your new Chef server most likely has a different validation key.
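If a plain re-run fails to authenticate, a rough sketch of a full re-registration on one node follows; the paths assume the default /etc/chef layout, and the server URL and organization are placeholders:

# /etc/chef/client.rb -- point the node at the new server
chef_server_url  "https://new-chef.example.com/organizations/myorg"
validation_key   "/etc/chef/validation.pem"    # the NEW server's validator

$ sudo rm /etc/chef/client.pem    # drop the key paired with the old server
$ sudo chef-client                # registers via validation.pem and writes a fresh client.pem

Note that this does run chef-client once, so check the node's run list on the new server first if you are worried about it converging anything destructive.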

Related

WordPress Migration from Windows to Linux, Unable to Login to Staging Sites

I am working to migrate a client's WP site from Windows to Linux, and I am having an issue logging into their DEV and PROD sites after successfully migrating UAT to the new servers.
A little background to simplify this: they have three sites (DEV, UAT, and PROD) on a Windows server that are currently still viewable to the public.
The new Linux servers, since the migrations have not yet been completed, are only accessible by the IP address. As of now, I have been able to successfully migrate the UAT site from the old servers to the new servers, using WP All in One Migration. I am able to log into the UAT instance, make changes, updates, etc. However, when I try to log into the DEV and PROD instances on the new servers, I am getting an error that the WP username/email does not exist, so I am unable to complete the migration for the remaining two instances. I am still able to log into the old Windows servers and make changes, but not the new servers.
Any help would be greatly appreciated!
You likely did not get a usable full copy of the database. When you migrate WordPress around, the files on the server are only a small portion of the data you need. You MUST get copies of everything in the database as well or the site will not function! Do a dump of the database and then restore it on your DEV/UAT database node. You may also need to jump through some hoops to change the URL that is embedded in the database for the sites to function properly. It's a PITA, but once you get the hang of it, you can script it and take the human out of the loop. I used to do this with a script that ran the DB search and replace statements.
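As an illustration, the heart of such a script is usually a few statements like these (the database name, table prefix, and URLs are placeholders; note that a plain REPLACE can corrupt serialized PHP data elsewhere in wp_options, which is why serialization-aware search-replace tools exist):

UPDATE databasename.wp_options SET option_value = 'https://new.example.com' WHERE option_name IN ('siteurl', 'home');
UPDATE databasename.wp_posts SET post_content = REPLACE(post_content, 'https://old.example.com', 'https://new.example.com');
UPDATE databasename.wp_posts SET guid = REPLACE(guid, 'https://old.example.com', 'https://new.example.com');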
As an alternative, you can add a user (even an administrator) to WordPress using only the database. Can be handy in some situations ...
INSERT INTO databasename.wp_users (ID, user_login, user_pass, user_nicename, user_email, user_status, display_name) VALUES ('1000', 'usernamefornewuser', MD5('plaintextnewuserpassword'), 'usernameforyournewuser', 'newuseremail@somenewuseremaildomain.com', '0', 'The New User I just created');
INSERT INTO databasename.wp_usermeta (umeta_id, user_id, meta_key, meta_value) VALUES (NULL, '1000', 'wp_capabilities', 'a:1:{s:13:"administrator";b:1;}');
INSERT INTO databasename.wp_usermeta (umeta_id, user_id, meta_key, meta_value) VALUES (NULL, '1000', 'wp_user_level', '10');
There are some values you will have to change; however, I have personally used these MySQL snippets with WordPress for many, many moons (years!).

how to configure graphql url in prefect server 0.13.5

After upgrading from 0.12.2 to 0.13.5, a connectivity issue came up with the graphql component. Prefect server is running on a different machine, but the graphql url remains http://localhost:4200/graphql. server.ui.graphql_url worked fine with version 0.12.2, but now I can't find any way to configure the graphql url properly.
Below you will find the config.toml:
$ cat ~/.prefect/config.toml
[logging]
level = "INFO"
[api]
url = "http://192.168.40.180:4200"
[server.database]
host_port = "6543"
[context.secrets]
SLACK_WEBHOOK_URL = 'https://hooks.slack.com/services/xx/XX/Xx'
[server.ui]
graphql_url = "http://192.168.40.180:4200/graphql"
I'm a little bit confused about the old and the new way to configure the Prefect server. Do you have any idea about this issue?
EDIT: The ticket I mentioned below has been closed. When 0.13.9 is released, it will contain a new runtime config, apollo_url (a more accurate name, since the Apollo container is what we're looking for anyway). The value is inserted into a static settings file in the UI build, which is fetched when the application starts. This should hit all the points mentioned below.
This is a change from Prefect Server ^0.13.0, which removed the graphql_url variable as a configurable environment variable.
The previous version of Server used a find-replace on the UI code, which is compiled and minified at image build time. The reason for this is that it moves the burden of installing the required Node modules and building the application away from client-side installations and onto Prefect at release-time instead, since these can take a long time (10+ minutes each) in containerized environments. However, the downside is that modifying environment variables, which are injected at build time, requires a less than desirable lookup of the previously-injected variables, which means modifying these requires pulling a new image.
We chose to ship the new version with an in-app input, which allows changing the Server endpoint at browser run-time instead. This gives the flexibility of a single UI instance to connect to any accessible Server installation, leveraging local storage to persist this setting between browser sessions.
That said, we have a ticket open to re-expose the default configuration in a better way than in the previous version. You can follow that ticket here.
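Once 0.13.9 is available, the new key should presumably slot into the same [server.ui] block of ~/.prefect/config.toml in place of graphql_url; a hedged sketch, with the key name taken from the edit above and the host from the question:

[server.ui]
apollo_url = "http://192.168.40.180:4200/graphql"    # point the UI at the Apollo/GraphQL endpoint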

couchdb, after replicating to clean version of the database, users get not authorized

After replicating the database to remove tombstones, it started throwing "you are not authorized to access this db".
(Screenshots: Restlet error; PouchDB error.)
What I had to do was manually add a new user, then remove them again, and that made it happy.
I guess that means, like the indexes, something needs to be reset or rescanned. Is there any way I can do this operation through a script? My script handling this is currently in Node: it uses PouchDB to replicate with a filter that removes all tombstones, then shuts down the couchdb service, swaps the dbname.couch file and the .dbname_design folder with the clean versions, then starts the service up again.
Thanks
Edit: I have narrowed it down a bit. It looks like creating a new database adds a new _admin role, and removing that role fixes the permissions. Is there a way to prevent this role from being added, or alternatively to remove it through a script (curl, Node, etc.)? I am only finding documentation on removing users, not roles.
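For what it's worth, those roles live in the database's _security object, which can be read and overwritten over plain HTTP; a minimal sketch, assuming an admin account admin:password on localhost:5984 and a database named dbname (keep any names or roles you still need in the lists):

$ # inspect the current security object
$ curl -X GET http://admin:password@localhost:5984/dbname/_security

$ # overwrite it with the unwanted roles stripped out
$ curl -X PUT http://admin:password@localhost:5984/dbname/_security \
    -H "Content-Type: application/json" \
    -d '{"admins": {"names": [], "roles": []}, "members": {"names": [], "roles": []}}'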

Retrieving to-be-pushed entries in IMobileServiceSyncTable while offline

Our mobile client app uses IMobileServiceSyncTable for data storage and handling syncing between the client and the server.
A behavior we've seen is that, by default, you can't retrieve an entry that was added to the table while the client was offline. Only once the client table is synced with the server (we do an explicit PushAsync, then a PullAsync) can those entries be retrieved.
Anyone knows of a way to change this behavior so that the mobile client can retrieve the entries added while offline?
Our current solution:
Check if the new entry was pushed to the server
If not, save the entry to a separate local table
When showing the list for the table, we pull from both tables: sync table and regular local table.
Compare the entries from the regular local table to the entries from the sync table for duplicates.
Remove duplicates
Join the lists, order, and show to the user.
Thanks!
This should definitely not be happening (and it isn't in my simple tests). I suspect there is a problem with the Id field - perhaps you are generating it and there are conflicts?
If you can open a GitHub Issue on https://github.com/azure/azure-mobile-apps-net-client/issues and share some of your code (via a test repository), we can perhaps debug further.
One idea - rather than let the server generate an Id, generate an Id using Guid.NewGuid().ToString(). The server will then accept this as a new Id.
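For example, here is a minimal sketch of that idea, assuming a hypothetical TodoItem model and an IMobileServiceSyncTable<TodoItem> called todoTable:

var item = new TodoItem
{
    Id = Guid.NewGuid().ToString(),    // client-generated Id instead of waiting for the server to assign one
    Text = "Created while offline"
};
await todoTable.InsertAsync(item);     // lands in the local store immediately

// The entry should now be readable from the sync table before any push/pull:
var stored = await todoTable.LookupAsync(item.Id);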

How to move all zones to a new Bind DNS Server

Existing server = BIND version 9.1.3
New server = BIND version 9.3.4
How can I move all zone records to the new server? I tried copying the files and changing the configuration file on the new server to match, but the zones did not resolve.
Is there a smooth way to just transfer all zones to this new server?
Copying the files and updating the configuration is the right way to do this.
Using AXFR (zone transfer) is no good:
You still have to create a config file at the new server listing all of the zones, there's no way to say "transfer every zone".
The files you get out will be in a different order to the records in the originals, with any comments etc missing. If you normally hand-craft your zone files this would be pretty annoying.
Please expand on "it didn't work" so that we can establish why not.
What about setting up zone transfer by making your new server a slave of the old one? In that way the new server will pick up the data from the old one, after which you could probably break the master-slave relationship.
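A minimal sketch of what that looks like in the new server's named.conf, assuming example.com as the zone and 192.0.2.1 as the old server's address (you would still need one such stanza per zone, and allow-transfer on the old server must permit the new one):

zone "example.com" {
    type slave;
    masters { 192.0.2.1; };          // the old (master) server
    file "slaves/example.com.db";    // where the transferred zone data is written
};

Once the transfers complete and the data checks out, each stanza can be switched to type master and the masters line dropped.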
