I really should know this, but would someone tell me how to change the default database on Linux?
For example:
I have a database test1 on server1 with ORACLE_SID=test1. So, to connect to test1 I can use:
sqlplus myuser/password
Connects to the default database, test1
I would now like the default sqlplus connection to go to database test2 on server server2.
So, I've updated tnsnames.ora so that the old test1 entry now points to test2@server2. I've also added a separate entry for test2 that points to the same place. However, the default connection still seems to go to test1@server1.
The following both work fine and go to database test2 on server2:
sqlplus myuser/password@test1
sqlplus myuser/password@test2
But the default connection, sqlplus myuser/password, goes to test1@server1.
Any ideas?
Thanks.
To expand on kerchingo's answer: Oracle has multiple ways to identify a database.
The best way -- the one that you should always use -- is USER/PASSWORD@SERVER. This will use the Oracle naming lookup (tnsnames.ora) to find the actual server, which might be on a different physical host every time you connect to it. You can also specify an Oracle connection string as SERVER, but pretend you can't.
There are also two ways to specify a default server via environment variables. The first is TWO_TASK, which uses the naming lookup, and the second is ORACLE_SID, which assumes that the server is running on the current machine. If both are set, TWO_TASK takes precedence over ORACLE_SID.
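As a concrete sketch (values assumed from the question, and sqlplus must be on the PATH), the two variables behave like this:

```shell
# Resolved through tnsnames.ora, equivalent to myuser/password@test2:
export TWO_TASK=test2
sqlplus myuser/password        # goes to test2 via the listener

# Local connection to an instance on this machine, no name lookup:
unset TWO_TASK
export ORACLE_SID=test1
sqlplus myuser/password        # goes to the local test1 instance
```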
The reason that you should always use an explicit connect string is that you have no idea whether the user has set TWO_TASK, ORACLE_SID, both, or neither; nor do you know what they might be set to. Setting both to different values is a particularly painful problem to diagnose, particularly over the phone with a person who doesn't really understand how Oracle works (been there, done that).
Assuming you're logged into server1, you'll need to connect to test2 using
sqlplus myuser/password@test2
because you have to go through a listener to get to server2. The string test2 identifies an entry in your tnsnames.ora file that specifies how to connect to test2. You won't be able to connect to a different server using the first form of your sqlplus command.
If both instances (test1, test2) were on server1, then you could, as @kerchingo states, set the ORACLE_SID environment variable to point at another instance.
Define an environment variable LOCAL with the TNS alias of your database:
> set LOCAL=test1
> sqlplus myuser/password
> ... connected to test1
> set LOCAL=test2
> sqlplus myuser/password
> ... connected to test2
This works on a Windows client; I'm not sure about other operating systems.
The correct question is 'How do I change the default service?'. Oracle offers two types of connection request: explicit and implicit. In an explicit request you supply three operands, as in sqlplus username/password@service. In an implicit request you omit the third operand.
An implicit connection applies only when the client host and the server host are the same; consequently, the listener is on the same host.
The listener is what initially responds to a connection request. When handling an implicit connection request from the same host, it checks whether the instance name has been set, i.e. the value of the shell variable ORACLE_SID.
If it is set, the implicit connection request can be handled. Otherwise it cannot, and you must make an explicit connection request, supplying the third operand.
The listener configuration file, listener.ora, associates instances with services.
To change the default service you connect to, change the default instance.
That is, change the default value of the shell variable ORACLE_SID. You do this in your OS user configuration file, such as .profile or a similar file.
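A minimal sketch of that change, assuming a Bourne-style login shell and the aliases from the question:

```shell
# Make test2 the default instance for future logins:
echo 'export ORACLE_SID=test2' >> ~/.profile
. ~/.profile                    # re-read it in the current shell
sqlplus myuser/password         # the implicit connection now targets test2
```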
Hope this helps.
I think it is set in your environment; can you echo $ORACLE_SID?
Assuming that we have the end-point to a Roxie server of interest, I was wondering if it is possible to make a remote call to it from a bwr script on Thor, and get the number of nodes that Roxie server has.
The code would probably look like the following:
RoxieServerIP := 'roxie-end-point';
numNodesRoxie := someBuiltInFunctionToGetNodes(RoxieServerIP);
OUTPUT(numNodesRoxie, NAMED('numNodesRoxie'));
I looked into some of the built-in functions to get the number of nodes of a cluster that you are running a process on such as:
OUTPUT(thorlib.wuid());
OUTPUT(thorlib.nodes());
but I haven't seen anything where we can call out to a different server (e.g. Roxie) and get its number of nodes.
Any help would be appreciated!
Thanks
I chatted with the development team today, and the best way to approach what you need to do is to deploy a query to the remote ROXIE that returns how many nodes it has. In other words, build a "diagnostic" ROXIE query that embeds the nodes() function, and then call it from your other remote location.
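A rough sketch of such a diagnostic query, assuming the standard Std.System.Thorlib module (the NAMED label is made up):

```ecl
IMPORT Std;
// Publish this on the target ROXIE, then call it remotely
// (e.g. via SOAPCALL or the WsEcl interface) from your Thor BWR script.
OUTPUT(Std.System.Thorlib.Nodes(), NAMED('numNodesRoxie'));
```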
Hope this helps!
Bob
I would like to know if anyone has a connection string model with the DB2 database?
The database is located on a Linux Centos 7 server.
I tried something like this:
db2 Database=SI;Hostname=VMCENTDB2;Protocol=TCPIP;Port=3700;Uid=db2inst1;Pwd=password;
But it didn't work and the following message returned:
DB21034E The command was processed as an SQL statement because it was
not a valid Command Line Processor command. During SQL processing it
returned: SQL1024N A database connection does not exist.
SQLSTATE=08003
Thanks in advance.
Your command is not valid for the db2 command, because it does not accept connection strings. Other tools do.
If you want to connect to a Db2-database, from the shell command line you have different options provided by different tools:
use the db2 command (requires previous catalog actions to be completed)
use the CLPPlus command (accepts a connection string)
use an ODBC interface such as isql
use the db2cli tool (more suitable for experts; it also requires pre-configuration of db2dsdriver.cfg and/or db2cli.ini)
Different options are suitable for different purposes, and different skill sets etc.
You can use the Db2 command line processor (the db2 command) via db2 connect to $DATABASENAME user $USER using $PASSWD (supply your own values for the variables). This does not accept connection strings. But before that connect command can succeed from a remote client, you must catalog the node on which the database lives (using the db2 catalog tcpip node ... remote ... server ... command), and then catalog the database on that node using the db2 catalog database $DBNAME as $DBALIAS at node $NODENAME command. Refer to the online Db2 Knowledge Center for details of these commands. This is the oldest form of shell interface to Db2 from MS-Windows, Linux, or Unix, and it is very script friendly for cmd.exe, bash, ksh, etc. But many people do not like the catalog actions that are prerequisites for remote working, although they are easily scriptable.
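For example, a sketch using the values from the earlier question (the node alias CENTDB2 is made up):

```shell
# One-time catalog actions on the client:
db2 catalog tcpip node CENTDB2 remote VMCENTDB2 server 3700
db2 catalog database SI as SI at node CENTDB2
db2 terminate                    # refresh the directory cache

# After that, the classic CLP connect works:
db2 connect to SI user db2inst1 using password
```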
Note that if you ssh to the CentOS server and get a shell, then you do not need to catalog local databases; you can connect to them with the db2 command as long as your login shell sources the correct db2profile file.
You cannot use a connection string with the Db2 CLP (command line processor), but you can use one with the Java-based CLPPlus tool and thereby avoid the need to catalog. CLPPlus is useful for people who are familiar with Oracle SQL*Plus syntax and does not need any catalog actions.
The CLPPlus command comes with the Db2-server, and with the Db2 runtime client and with the Db2 data server client, but it does not come with the tiny footprint Db2 clidriver. Refer to the documentation for usage details.
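As a sketch with the values from the question (host, port, and database name are assumptions about your setup):

```shell
# CLPPlus takes an SQL*Plus-style connection string, so no catalog steps:
clpplus db2inst1@VMCENTDB2:3700/SI
# You will be prompted for the password; it can also be given inline:
# clpplus db2inst1/password@VMCENTDB2:3700/SI
```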
I want to connect to different MongoDB databases from a single codebase, based on the sub-domain of the request URL.
eg.
if www.xyz.example.com then mongo DB is xyz
if www.abc.example.com then mongo DB is abc
if www.efg.example.com then mongo DB is efg
If someone hits the www.xyz.example.com URL, the xyz DB should connect automatically; if someone hits www.abc.example.com, the abc DB should connect automatically.
But the xyz DB connection should not be disconnected; it should remain open, because there is a single codebase/project.
Please suggest a solution.
I'm not quite sure about your application's use case, so I cannot promise the best solution.
One feasible solution is to run 3 Node.js processes on 3 different ports, each connected to a specific DB instance. You can do this by running 3 different Node.js processes with different environment variables, then forwarding the requests for each domain to the corresponding port.
This approach has some advantages:
Ease of configuration: you only need to care about deployment settings, without if/else hacking in the source code.
System availability: if 1 of the 3 DBs is down, only 1 domain is affected; the others still work well.
NOTE: This approach only works well with a small number of sub-domains. If you have 30 sub-domains, or dynamic domains, then please reconsider your deployment architecture :). You may need some more advanced techniques to deal with it. A quick (but not ideal) way is to maintain a list of mongoose instances inside the application at runtime, each responsible for one sub-domain, then use req.get('host') to check the sub-domain and use the corresponding mongoose instance to process the DB operations.
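That mongoose-instance map can be sketched like this (all names are made up; the injected factory stands in for a real mongoose.createConnection(uri) call, so the caching logic can be shown without a live MongoDB):

```javascript
// Cache of per-tenant connections; entries are created lazily and never closed,
// so a tenant's connection stays open once established.
const connections = new Map();

// Derive the database name from the request host,
// e.g. "www.xyz.example.com" -> "xyz".
function dbNameFromHost(host) {
  return host.replace(/^www\./, '').split('.')[0];
}

// Return the cached connection for a host, creating it on first use.
// In an Express handler you would call this with req.get('host').
function getConnection(host, factory) {
  const dbName = dbNameFromHost(host);
  if (!connections.has(dbName)) {
    connections.set(dbName, factory(dbName));
  }
  return connections.get(dbName);
}
```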
If I have two Node apps (one running on port 1000 and one on 3000) and two DynamoDB instances (one on port 2000 and one on 4000), I want the app on port 1000 to talk only to the instance on port 2000, and the app on port 3000 to talk only to the one on port 4000. I tried to set this up, but the data is the same for both: a change in one reflects in the other. Is it meant to work like this, or is this a mistake in my setup? I want to resolve a concurrency problem in Node.js without needing a session token (just a quick solution, to be honest), and spinning up a new instance seemed like an easy fix.
Tips?
*A different database, or instance of the database. I just don't want concurrency issues; I don't want test A to update the database and test B to fail because it expected something else.
I can suggest two alternatives:
Start both DynamoDB Local instances with separate -dbPath values. I am assuming that you aren't doing this right now, which is why both your instances are using the same data file.
If you do not specify this option, the file will be written to the current directory.
Use the -inMemory option, because of which:
DynamoDB will run in memory, instead of using a database file.
Look at more documentation here.
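A sketch of the two isolated setups (jar location, data paths, and the ports from the question are assumptions about your layout):

```shell
# Two instances, each with its own data directory:
java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar \
  -port 2000 -dbPath ./data-app1 &
java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar \
  -port 4000 -dbPath ./data-app2 &

# Or keep each instance entirely in memory (nothing shared or persisted):
# java -jar DynamoDBLocal.jar -port 2000 -inMemory
```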
Every time we create a new server, I have a bash script that asks the end user a set of questions to help Chef configure the custom server. Their answers need to be injected into Chef so that I can use the responses within my Chef script (to set the server "hostname" = "server1.stack.com", for instance). I've read about a JSON attributes option when running chef-client that may be helpful, but I'm not sure how that would work in our environment.
Note: We run chef-client on all of our systems every 15 minutes via cronjob to get updates.
Pseudocode:
echo -n "What is the server name?"
read hostname
chef-client -j {'hostname' => ENV['$hostname']}
Two issues: first, -j takes a filename, not raw JSON; and second, using -j will entirely override the node data coming from the server, which also includes the run list and environment. If this is being done at system provisioning time, you can definitely do stuff like this; see my AMI bootstrap script for an example. If this is done after initial provisioning, you are probably best off writing those responses to a file and then reading that in from your Chef recipe code.
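For the after-provisioning case, a sketch of the file approach (the path /etc/chef/answers.json is an assumption; the recipe would read it back with something like JSON.parse(File.read(...))):

```shell
# Ask once, and persist the answer where the 15-minute chef-client runs can read it:
read -p "What is the server name? " hostname
mkdir -p /etc/chef
printf '{"hostname": "%s"}\n' "$hostname" > /etc/chef/answers.json
```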
Passing raw json into chef-client is possible, but requires a little creativity. You simply do something like this:
echo "{\"hostname\": \"$hostname\"}" | chef-client -j /dev/stdin
The values in your json will be deep merged with the "normal" attributes stored in the chef-server. You can also include a run_list in your json, which will replace (not be merged) the run_list on the chef server.
You can see the run_list replacing the server run list here:
https://github.com/opscode/chef/blob/cbb9ae97e2d3d90b28764fbb23cb8eab4dda4ec8/lib/chef/node.rb#L327-L338
And you can see the deep merge of attributes here:
https://github.com/opscode/chef/blob/cbb9ae97e2d3d90b28764fbb23cb8eab4dda4ec8/lib/chef/node.rb#L305-L311
Also, any attributes you declare in your json will override the attributes already stored on the chef-server.