I am doing a side-by-side upgrade from 2k8 to 2k12. The original machine hosts a named instance and the new machine hosts a default instance. The goal is to avoid changing connection strings in the applications.
If I were going from a default instance to a default instance I could change the DNS A record and have them connect. However, the applications store the connection string as OLDSERVER\OLDINSTANCE. Changing the A record only takes care of the OLDSERVER name; the connections will still look for \OLDINSTANCE.
Can SQL Server 2012 be configured to have the default instance accept connections when they are made to an instance that does not exist?
Thanks,
Chris
A default instance, as far as the client connecting to it is concerned, is the one listening on 1433.
If you install a named instance with the same name on the new server, and then change its port to 1433, a client would be able to connect using both NEWSERVER\OLDINSTANCE and NEWSERVER, as long as the SQL Server Browser service is running.
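For illustration (the database name and security settings here are placeholders), once the new named instance has its TCP port set to 1433 (SQL Server Configuration Manager > SQL Server Network Configuration > Protocols > TCP/IP > IP Addresses > IPAll), both of these connection strings should reach the same instance, the first resolved by the Browser service and the second by the default port:
Server=NEWSERVER\OLDINSTANCE;Database=AppDb;Integrated Security=SSPI;
Server=NEWSERVER;Database=AppDb;Integrated Security=SSPI;
If the DNS record for OLDSERVER is then pointed at the new machine, the existing OLDSERVER\OLDINSTANCE strings should keep resolving the same way.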
I went through this exact process a few weeks ago, and what I found out sadly is that it does not appear to be possible without uninstalling the old instance (which I wanted nothing to do with). Here were some of the many links that I found on the subject and which I imagine you have probably been through as well.
StackOverflow
SQLServerCentral
MSDN
I tried a number of different things on my local machine and not one of them worked for me. In my case, if I could have upgraded the server in place (downtime would have happened) I would have done it, but we were going from SQL 2K8 Enterprise to SQL 2K12 Standard, so that was not possible either.
I will be interested in seeing if anyone comes up with a way that this can be done outside of an uninstall, but I could not come up with one and everything that I read seemed to back that up. Maybe someday we will get an sp_dropserver/sp_addserver that will change instance names.
I'm frequently receiving the error "There was a problem connecting to your instance" when connecting via the AWS console. I have a database instance running there, and the application that relies on this server for its DB service goes down!
This is neither a new instance with a network config error, nor has it been purposefully isolated. Over the course of a day I can connect to this instance for about 10 hours; for the rest, I get this error! So this is definitely not a configuration error.
Any advice or suggestions on how to debug this further for a permanent fix? TIA.
I've faced this issue multiple times. Each time, AWS recommends either upgrading the plan for support or, failing that, creating a new instance, moving the contents over, and terminating the old one! So I'm actively looking for a solution as well.
PS: Posting this as an answer since I don't have the reputation to make comments.
Posting this so it might help someone some day! It is not a solution to the question that I asked, but a workaround. I cloned the existing instance to a new instance and monitored both for a couple of days. It turns out the new instance worked perfectly while the old one still had the issue.
Probably something wrong with the instance itself.
While I'm not at all sure, AWS largely runs on FPGA hardware, and there might have been some intermittent issue within it!
I finally terminated the old one! Not a solution, but this workaround works!
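In case it helps anyone scripting the same workaround: assuming this is a plain EC2 instance, the clone boils down to imaging the old instance and launching a replacement from that image. A rough AWS CLI sketch (all IDs, names and the instance type are placeholders):
aws ec2 create-image --instance-id i-0123456789abcdef0 --name "db-clone" --no-reboot
aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type t2.medium --key-name my-key
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0
Only terminate the old instance after the new one has been monitored for a few days, as described above.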
I am not sure if this is the correct place to ask my question, but really I am out of ideas, and my clock is ticking.
In short, I got a new machine that I need to make development ready.
This project is based on rather old program versions; updating them is a task of its own.
In short, I have set up Vagrant (1.8.1) with VirtualBox (5.0.14). Chef (0.10.0) created all dependencies successfully, and I can SSH into the machine and see that all is fine; all services are running as set in the Vagrantfile.
The Vagrant box is the latest ubuntu/trusty64. My host machine is macOS High Sierra (10.13.3).
Now, I open for example a MySQL editor (MySQL Workbench) and it connects to the box; I can see the DB and manipulate it.
My problem is with Node.js (I think). When I run my tests, it simply refuses to connect to the box. More precisely, it attempts to connect to 127.0.0.1:3306 (MySQL) and it errors, while MySQL Workbench makes the same connection without problems.
It seems the port forwarding in Vagrant works fine, as MySQL Workbench is being forwarded to the box. Node.js is not being forwarded, or something.
Is it Node doing this? Is there something else that I need to allow?
I have tried many different things, I have lost count. And always the same issue.
Is there something that I can do to Node so it behaves like MySQL Workbench? Any idea is appreciated.
This identical setup used to work before, but not now.
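For reference, a stripped-down version of the connection the tests make looks roughly like this (shown with the mysql2 driver purely as an example; the real code may use a different client, and the credentials and database name below are placeholders):
// test-connection.js - minimal check that the forwarded port is reachable from Node
const mysql = require('mysql2');

const connection = mysql.createConnection({
  host: '127.0.0.1', // forwarded by Vagrant to the box's MySQL
  port: 3306,
  user: 'dev',       // placeholder credentials
  password: 'dev',
  database: 'app_dev'
});

connection.connect(err => {
  if (err) {
    console.error('Connection failed:', err.code, err.message); // e.g. ECONNREFUSED
    return;
  }
  console.log('Connected OK');
  connection.end();
});
If a minimal script like this connects, the problem is somewhere in my test setup; if it is refused too, then something about the forwarding really does behave differently for Node.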
I am quite new to Perforce and I am facing an issue concerning the P4HOST value.
Here's the situation: I have one, let's say, classic setup with a connection, a workspace name, etc., and a host set to the local machine name. Everything works perfectly.
I have another connection that is much the same, but the host must not be the local machine name in order to connect to the correct server. If I set the host to my local machine I end up pointing at the wrong server. If I set the host in P4V I get this error: "Client can only be used from host." and it breaks everything for this setup.
To fix this I tried to set the host value manually with this command: p4 set P4HOST=myhost. It works, except that I then can't access my other repositories because, I think, it's a global value, and since the other configurations do not use a specific host they fail.
Anyway, given my configurations, what can I do? Is it possible to manually set P4HOST for a specific setup without affecting everything? Is there another way?
Thank you very much!
Edit: I don't know if this is useful, but the classic host I am using is something like myname-PC and the one that is failing is something like apath/toanotherpath.
P4HOST's job is to keep you from using the same workspace from different machines. If you use the same workspace from different machines, you're going to have a bad time. (Why exactly is its own topic -- for purposes of this answer, just take my word for it that you do not want to use one workspace from different client machines. Dead rising from their graves, cats and dogs living together, that kind of thing. Bad time.)
When you create a workspace, its Host: value is set to your current P4HOST value (which defaults to the client machine hostname). If you try to use that workspace with a DIFFERENT host value, it's a strong clue to the server that you're trying to use it from more than one machine (which, as established, is a Bad Time), and so the server gives you that error (to try to stop you before you have a BAD TIME).
So it sounds like this workspace that you're trying to use was created on a different client host machine -- which means that using that workspace is probably going to lead to a bad time. Create a new workspace for the client machine that you're on.
Alternatively (and only if you're really sure it's the right thing to do), you can change the Host in that workspace to match your current machine. Note that if you find yourself having to do this more than once, you're probably in the process of generating a bad time for yourself.
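If it really does come down to scoping settings per tree, a sketch of the two usual mechanics (the workspace name and server address below are placeholders; the host value is the one from the question):
p4 client my_workspace       # opens the workspace spec so you can edit (or clear) the Host: field
p4 set P4CONFIG=.p4config    # tell Perforce to look for a per-directory config file
Then a .p4config file placed in the root of the one workspace that needs the override, containing something like:
P4PORT=perforce.example.com:1666
P4CLIENT=my_workspace
P4HOST=apath/toanotherpath
With P4CONFIG in place, those values only apply while you are working under that directory, so your other connections keep their defaults.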
Our company has an old Linux server that runs a few Tomcat web applications. One of those applications connects to PostgreSQL. While I'm a C#/.NET/Windows coder, I need to connect to this database from my computer using pgAdmin III (or any suggested equivalent). When attempting the connection, pgAdmin says "Server doesn't listen".
Without knowing much about Linux, I'm using WinSCP to connect to the file structure. I have ZERO documentation on the old apps, any data sources, or their data connections. I've been able to determine the following, assuming the location of the web app is actually legit and not some non-running copy.
PostgreSQL
In one app's connection information:
jdbc:postgresql://localhost:5432/somename
After some digging, I found the following possible instances of postgresql on the server file structure.
/etc/postgresql/8.3/main
/etc/postgresql/8.4/main
There's also /etc/postgresql-common, with very different types of files in there.
If there are other instances or related folders, I am unaware of them and wouldn't know where to look. It's a labyrinthine beast.
I ensured in the config file (postgresql.conf) for both that listen_addresses = '*', which was supposed to be one of the two fixes. It was already set to *, so assuming one of these is the right instance, I should be good there.
I know that at least one instance of PostgreSQL is running, because the old app is up and fetching data, so that covers the other of the two fixes.
pgAdmin
I heard in a separate thread here that reinstalling pgAdmin might solve the problem, but it did not. I tried with and without ssl.
Here is how I'm trying to set up the connection in pgAdmin III:
Name: SomeName
Host: I've tried a few combinations here. //servername/somename, or just //servername
Port: 5432 (matches what was expected, also the port from the connection)
Service: Blank
Maintenance DB: I tried the default in pgAdmin (postgres) and the actual DB I'm trying to connect to.
Username & Password: the credentials from the connection info in the old app.
I'm getting the "Server doesn't listen" error, suggesting either that it's not running (well... some data source is up and working, and the config in WEB-INF suggests it's PostgreSQL), or that it's not accepting TCP/IP connections, which it is according to the instances of PostgreSQL I was able to find.
Long Story Short
At this point I'm assuming that one of the following is the problem...
The connection information I'm entering into pgAdmin is not correct, but I don't know what I'm doing wrong.
The source of the connection information (the web application) is bad/old/not from a running instance (and in that case I wouldn't know how to tell, not on Linux).
The instances of PostgreSQL I found are not the ones it's actually using, and I have no idea how to find the right one.
Something is fishy network-wise, but since both my computer and the Linux server are on the same network, that doesn't seem too likely.
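To narrow down the possibilities above, these are the checks I think I could run from a shell on the server (my best guess at the commands for this Debian-style layout, since the exact tool names may differ):
pg_lsclusters                      # lists each cluster with its version, port and whether it is online
ss -lnt | grep 5432                # or: netstat -lnt | grep 5432   is anything listening on 5432, and on which address?
grep -E '^(listen_addresses|port)' /etc/postgresql/8.4/main/postgresql.conf
If the running cluster only listens on localhost, or listens on a non-default port, that alone would explain the "server doesn't listen" message when connecting remotely.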
Also, everyone, please document your stuff for the poor souls of the future. I greatly appreciate any assistance you are able to offer me.
You may want to use a tunnel:
ssh -L 5432:localhost:5432 user@server
After you log into the remote server, you'll have mapped port 5432 on your computer to the remote one. Then you can use pgAdmin to connect to your localhost on port 5432. Make sure you don't have anything running on this port on your computer.
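If your own machine already has something on 5432 (a local PostgreSQL install, for example), pick a different local port for the tunnel and point pgAdmin at that instead; the user and server names here are placeholders:
ssh -L 15432:localhost:5432 user@server
Then in pgAdmin use Host: localhost, Port: 15432, and the database username/password from the app's connection info.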
Edit: Look at these examples on how to set up tunnels using PuTTY.
I've just installed Neo4j 1.8.2 onto Azure by following this step-by-step process...
http://de.slideshare.net/neo4j/neo4j-on-azure-step-by-step-22598695
Unfortunately, when I browse to http://:7474/webadmin Fiddler says Error 10061 - No connection could be made because the target machine actively refused it.
I've followed the instructions exactly and haven't received any errors.
Any help much appreciated.
So, I think I got to the bottom of this. I think it was due to the size of the compute/VM instance I was creating. It looks like the problem occurs when running on Extra Small instances. I created a new installation using a Small instance and everything now works :).
Try setting the server to accept connections from all hosts, and maybe use a newer Neo4j, say 1.9.4.
http://docs.neo4j.org/chunked/stable/security-server.html#_secure_the_port_and_remote_client_connection_accepts
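For the 1.8/1.9 series that usually means editing conf/neo4j-server.properties and un-commenting the webserver address line so the server binds to all interfaces rather than just localhost (a sketch of the relevant line; check the linked docs for your exact version):
org.neo4j.server.webserver.address=0.0.0.0
and then restarting the Neo4j service.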
The way the VM Depot image is set up, it's pre-configured to allow all hosts to connect, and the Neo4j server will auto-start. The only thing you need to take care of, when constructing your VM, is to open an Input Endpoint, with any public port you want (preferably 7474 to stay true to Neo4j) and internal port 7474.
Note that the UI changed a bit since the how-to was published: you can specify the endpoint as the last step before creating your virtual machine. Other than that, the instructions should be the same. And... once the VM is up and running (it'll take about 5-10 minutes), you just visit http://yourservicename.cloudapp.net:7474 and you should see the web admin. Note: this is not the same as your VM name. If you named your VM something like 'neo' then you do not want http://neo:7474 or http://neo.cloudapp.net:7474. You need to use your cloud service name (you had to create a name for the service when you deployed the VM).
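If you'd rather script the endpoint than click through the portal, the classic Azure PowerShell module of that era supported a cmdlet chain along these lines (service, VM and endpoint names are placeholders; treat it as a sketch, the portal steps above are the path I actually tested):
Get-AzureVM -ServiceName "yourservicename" -Name "yourvmname" | Add-AzureEndpoint -Name "Neo4j" -Protocol tcp -LocalPort 7474 -PublicPort 7474 | Update-AzureVM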
I've deployed that image several times in demos, and just tried again right now to make sure nothing wonky happened. Worked perfectly.