I would like to know if anyone has an example connection string for a DB2 database.
The database is located on a Linux CentOS 7 server.
I tried something like this:
db2 Database=SI;Hostname=VMCENTDB2;Protocol=TCPIP;Port=3700;Uid=db2inst1;Pwd=password;
But it didn't work, and the following message was returned:
DB21034E The command was processed as an SQL statement because it was
not a valid Command Line Processor command. During SQL processing it
returned: SQL1024N A database connection does not exist.
SQLSTATE=08003
Thanks in advance.
Your command is not valid because the db2 command does not accept connection strings. Other tools do.
If you want to connect to a Db2-database, from the shell command line you have different options provided by different tools:
use the db2 command (requires the catalog actions to be completed first)
use the CLPPlus command (accepts a connection string)
use an ODBC interface such as isql
use the db2cli tool (more suitable for experts; also requires pre-configuration of db2dsdriver.cfg and/or db2cli.ini)
Different options are suitable for different purposes, and different skill sets etc.
You can use the Db2 command line processor (the db2 command) via db2 connect to $DATABASENAME user $USER using $PASSWD (substitute your own values for the variables). This does not accept connection strings. But before that connect command can succeed from a remote machine, you must catalog the node on which the database lives (using the db2 catalog tcpip node ... remote ... server ... command), and then catalog the database on that node using the db2 catalog database $DBNAME as $DBALIAS at node $NODENAME command. Refer to the online Db2 Knowledge Center for details of these commands. This is the oldest form of shell interface to Db2 from MS-Windows, Linux, or Unix, and it is very script friendly for cmd.exe, bash, ksh, etc. Many people dislike the catalog actions that are prerequisites for remote working, although they are easily scriptable.
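As a sketch of those catalog steps, reusing the hostname and port from the question (the node and alias names below are hypothetical placeholders, not values from the question):

```shell
# Run on a machine with a Db2 client installed (i.e. where the db2 command exists).
# MYNODE and SIALIAS are hypothetical names; adjust host/port to your setup.
db2 catalog tcpip node MYNODE remote VMCENTDB2 server 3700
db2 catalog database SI as SIALIAS at node MYNODE
db2 terminate    # refresh the directory cache so the new entries are visible
db2 connect to SIALIAS user db2inst1 using password
```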
Note that if you ssh to the CentOS server and get a shell, you do not need to catalog local databases; you can connect to them with the db2 command as long as your login shell sources the correct db2profile file.
You cannot use a connection string with the Db2 CLP (command line processor), but you can use one with the Java-based CLPPlus tool and thereby avoid the need to catalog. CLPPlus is useful for people who are familiar with Oracle SQL*Plus syntax and does not need any catalog actions.
The CLPPlus command comes with the Db2 server, the Db2 runtime client, and the Db2 data server client, but it does not come with the tiny-footprint Db2 clidriver. Refer to the documentation for usage details.
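For comparison, a hypothetical CLPPlus invocation with an inline connection string (no catalog step; host and port reused from the question) might look like:

```shell
# CLPPlus accepts user@host:port/database directly; it prompts for
# the password if one is not supplied on the command line.
clpplus db2inst1@VMCENTDB2:3700/SI
```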
I have created a Cassandra database in DataStax Astra and am trying to load a CSV file using DSBulk on Windows. However, when I run the dsbulk load command, the operation never completes and never fails. I receive no error message at all, and I have to terminate the operation manually after several minutes. I have tried to wait it out and have let the operation run for 30 minutes or more with no success.
I know that a free tier of Astra might run slower, but wouldn't I see at least some indication that it is attempting to load data, even if slowly?
When I run the command, this is the output that is displayed and nothing further:
C:\Users\JT\Desktop\dsbulk-1.8.0\bin>dsbulk load -url test1.csv -k my_keyspace -t test_table -b "secure-connect-path.zip" -u my_user -p my_password -header true
Username and password provided but auth provider not specified, inferring PlainTextAuthProvider
A cloud secure connect bundle was provided: ignoring all explicit contact points.
A cloud secure connect bundle was provided and selected operation performs writes: changing default consistency level to LOCAL_QUORUM.
Operation directory: C:\Users\JT\Desktop\dsbulk-1.8.0\bin\logs\LOAD_20210407-143635-875000
I know that DataStax recently changed Astra so that you need credentials from a generated Token to connect DSBulk, but I have a classic DB instance that won't accept those token credentials when entered in the dsbulk load command. So, I use my regular user/password.
When I check the DSBulk logs, the only text is the same output displayed in the console, which I have shown in the code block above.
If it means anything, I have the exact same issue when trying to run the dsbulk count operation.
I have the most recent JDK and have set both the JAVA_HOME and PATH variables.
I have also tried adding dsbulk/bin directory to my PATH variable and had no success with that either.
Do I need to adjust any settings in my Astra instance?
Lastly, is it possible that my basic laptop is simply not powerful enough for this operation, or is it just running the operation extremely slowly?
Any ideas or help is much appreciated!
Is there any PostgreSQL command or Linux command that can be run as a cron job to report the number of active connections at any point in time? I have a Flask application running that receives GPS logs every 15 minutes from a mobile app built with the Ionic framework.
Query pg_stat_activity:
psql -c "select count(*) from pg_stat_activity" -t
Add any necessary connection params (-h, -U, etc.). Auth is a little trickier: for non-interactive use, a .pgpass file can supply the password.
Other notes -
The -t flag limits output to the tuples themselves (no headers or row-count footer)
You can get some other useful info with different groupings or predicates. For example: select datname, count(*) from pg_stat_activity group by datname
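To run this from cron as the question asks, a crontab fragment like the following could sample the count every 15 minutes (the log path is hypothetical; add -h/-U and authentication, e.g. via .pgpass, to suit your setup):

```shell
# m h dom mon dow   command  -- sample active connection count every 15 minutes
*/15 * * * * psql -t -c "select now(), count(*) from pg_stat_activity" >> /var/log/pg_active_conns.log 2>&1
```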
Hope that helps.
I want to use Ansible to automate my deployment process. Let me say few words about it. Deployment process in my case consists of two steps:
update DB (SQL Script)
copy predefined set of files to various network folders (on different machines)
For this purpose I use a special self-written program called Installer.exe. If I run it myself, it performs operations with my credentials, so it has all my rights, e.g. access to the network folders and the SQL database.
I want to use Ansible as a wrapper for my program (Installer.exe), not instead of it. My target scenario: Ansible prepares configuration files and runs my installer on a remote Windows machine. I've faced a problem: when my program is run by Ansible, it doesn't have my full rights. It can successfully access SQL Database 1 on the same machine, but it can't access SQL Database 2 on a remote machine or access a network folder. I always get "access denied" on network access, and SQL Database 2 says something about NT AUTHORITY\ANONYMOUS LOGON. It looks like the double-hop problem, but as far as I understand it, not exactly: double hop is about service accounts, whereas I am trying to access the remote server with my own personal account.
UPD 1:
My variables for that group are:
ansible_user: qtros@ABC.RU
ansible_port: 5986
ansible_connection: winrm
ansible_winrm_server_cert_validation: ignore
ansible_winrm_operation_timeout_sec: 120
ansible_winrm_read_timeout_sec: 150
ansible_winrm_transport: kerberos
ansible_winrm_kerberos_delegation: yes
Before any actions with Ansible I run the following command:
$> kinit qtros@ABC.RU
and enter my password. Later, if I run klist, I can see some valid tickets. I intended to use a domain account, not the local system account. Am I doing it right?
UPD 2: if I add such command in playbook:
...
raw: "klist"
...
I get something like:
fatal: [targetserver.abc.ru]: FAILED! => {"changed": true, "failed": true, "rc": 1, "stderr": "", "stdout": "\r\nCurrent LogonId is 0:0x20265db4\r\nError calling API LsaCallAuthenticationPackage (ShowTickets substatus): 1312\r\n\r\nklist failed with 0xc000005f/-1073741729: A specified logon session does not exist. It may already have been terminated.\r\n\r\n\r\n", "stdout_lines": ["", "Current LogonId is 0:0x20265db4", "Error calling API LsaCallAuthenticationPackage (ShowTickets substatus): 1312", "", "klist failed with 0xc000005f/-1073741729: A specified logon session does not exist. It may already have been terminated.", "", ""]}
Based on your problem statement, it sounds like the Windows machine is running installer.exe under the Local System account, which has no rights outside of the Windows machine itself and will always fail when trying to run any procedure on SQL Database 2. This wouldn't be a Kerberos double-hop scenario; for one thing, there's only one hop between the Windows machine in the middle of the diagram running installer.exe and SQL Database 2. Since your Ansible playbook wraps installer.exe, then unless I'm missing something, run it on the Windows machine with AD domain credentials that have the appropriate rights to SQL Database 2.
EDIT: Since the focus of your question was resolving the SQL Database 2 message regarding NT AUTHORITY\ANONYMOUS LOGON, and whether or not this is a Kerberos double-hop problem (it doesn't look like it), that's what I answered. Note you have ansible_user defined but not ansible_ssh_pass. There's an apparent bug in the documentation (http://docs.ansible.com/ansible/intro_windows.html), so use ansible_ssh_pass instead of ansible_ssh_password.
When using the SQL Server Data Tools data comparison tool, a few of us here are unable to run comparisons when the source is an Azure database.
The error we get is below:
---------------------------
Microsoft Visual Studio
---------------------------
Data information could not be retrieved because of the following error:
Value cannot be null.
Parameter name: conn
Value cannot be null.
Parameter name: conn
The connection test works fine, and I've tried creating a new connection. As a side note, if I do a data compare with a non-Azure source, things work fine.
The SQL Server Data Tools version is 12.0.50512.0.
We can access the server using SSMS without any problems.
It turned out to be a permissions issue but I was able to diagnose it using the details available at https://social.msdn.microsoft.com/Forums/sqlserver/en-US/740e3ed8-bb05-48f7-8ea6-721eca071198/publish-to-azure-db-v12-failing-value-cannot-be-null-parameter-name-conn?forum=ssdt
Gathering an Event Log for SSDT
Open a new command prompt as Administrator.
Run the following commands:
logman create trace -n DacFxDebug -p "Microsoft-SQLServerDataTools" 0x800 -o "%LOCALAPPDATA%\DacFxDebug.etl" -ets
logman create trace -n SSDTDebug -p "Microsoft-SQLServerDataToolsVS" 0x800 -o "%LOCALAPPDATA%\SSDTDebug.etl" -ets
Reproduce the target/issue scenario in SSDT, then go back to the command prompt and run the following commands:
logman stop DacFxDebug -ets
logman stop SSDTDebug -ets
The resulting ETL files will be located at %LOCALAPPDATA%\SSDTDebug.etl & %LOCALAPPDATA%\DacFxDebug.etl and can be navigated to using Windows Explorer.
There is no such limitation. Ref - https://msdn.microsoft.com/en-us/hh272693(v=vs.103).aspx
Check whether the Firewall Rule is open for this connection. If not, then add the current client IP to allowed IP addresses of that SQL Azure DB
I find that if I have already compared a local DB beforehand (in the same session) and then try to compare an Azure DB, some strange lock prevents login on the Azure SQL DB.
Shut down Visual Studio and reopen it, and it should connect OK.
I really should know this, but would someone tell me how to change the default database on Linux?
For example:
I have a database test1 on server1 with ORACLE_SID=test1. So, to connect to test1 I can use:
sqlplus myuser/password
which connects to the default database, test1.
I would now like the default sqlplus connection to go to database test2 on server server2.
So, I've updated tnsnames.ora so that the old test1 entry now points to test2 on server2. I've also added a separate test2 entry that points to the same place. However, the default connection still seems to go to test1 on server1.
The following both work fine and go to database test2 on server2:
sqlplus myuser/password@test1
sqlplus myuser/password@test2
But the default connection, sqlplus myuser/password, goes to test1 on server1.
Any ideas?
Thanks.
To expand on kerchingo's answer: Oracle has multiple ways to identify a database.
The best way -- the one that you should always use -- is USER/PASSWORD@SERVER. This will use the Oracle naming lookup (tnsnames.ora) to find the actual server, which might be on a different physical host every time you connect to it. You can also specify an Oracle connection string as SERVER, but pretend you can't.
There are also two ways to specify a default server via environment variables. The first is TWO_TASK, which uses the naming lookup, and the second is ORACLE_SID, which assumes that the database is running on the current machine. If both are set, TWO_TASK takes precedence over ORACLE_SID.
The reason that you should always use an explicit connect string is that you have no idea whether the user has set TWO_TASK, ORACLE_SID, both, or neither; nor do you know what they might be set to. Setting both to different values is a particularly painful problem to diagnose, particularly over the phone with a person who doesn't really understand how Oracle works (been there, done that).
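When diagnosing a session, it helps to print both variables and see what is actually set; a minimal shell sketch (the instance name is a hypothetical example):

```shell
# Show which default-database variables this session has set.
export ORACLE_SID=test1    # hypothetical value for illustration
echo "ORACLE_SID=${ORACLE_SID:-<unset>}"
echo "TWO_TASK=${TWO_TASK:-<unset>}"
```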
Assuming you're logged into server1, you'll need to connect to test2 using
sqlplus myuser/password@test2
because you have to go through a listener to get to server2. The string test2 identifies an entry in your tnsnames.ora file that specifies how to connect to test2. You won't be able to connect to a different server using the first form of your sqlplus command.
If both instances (test1, test2) were on server1, then you could, as @kerchingo states, set the ORACLE_SID environment variable to point at another instance.
Define an environment variable LOCAL with the TNS alias of your database:
> set LOCAL=test1
> sqlplus myuser/password
> ... connected to test1
> set LOCAL=test2
> sqlplus myuser/password
> ... connected to test2
This works on a Windows client; I'm not sure about other OSes.
The correct question is 'How do I change the default service?'. The Oracle DBMS offers two types of connection request: explicit and implicit. In an explicit request, you supply three operands, as in sqlplus username/password@service. In an implicit request, you omit the third operand.
An implicit connection applies only when the client host and the server host are the same; consequently, the listener is on the same host.
The listener is what initially responds to a connection request. In handling an implicit connection request from the same host, it checks whether the instance name has been set, i.e. the value of the shell variable ORACLE_SID.
If it is set, the implicit connection request can be handled. Otherwise it cannot, and you must make an explicit connection request, supplying the third operand.
The listener configuration file, listener.ora, associates instances with services.
To change the default service you connect to, change the default instance: that is, change the default value of the shell variable ORACLE_SID. You do this in an OS user config file such as .profile or a similar file.
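For example, the default could be set once in the login profile like this (the instance name is a hypothetical example, and the oraenv path may differ on your system):

```shell
# In ~/.profile or ~/.bash_profile: pick the default local instance.
export ORACLE_SID=test2
# Optionally re-derive ORACLE_HOME etc. from the new SID, e.g.:
# ORAENV_ASK=NO . /usr/local/bin/oraenv
```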
Hope this helps.
I think it is set in your environment; can you echo $ORACLE_SID?