Unable to connect to IBM MQ using pymqi: FAILED: MQRC_ENVIRONMENT_ERROR - python-3.x

I am getting the error below while connecting to IBM MQ using the pymqi library.
It's a clustered MQ channel.
Traceback (most recent call last):
File "postToQueue.py", line 432, in <module>
qmgr = pymqi.connect(queue_manager, channel, conn_info)
File "C:\Python\lib\site-packages\pymqi\__init__.py", line 2608, in connect
qmgr.connect_tcp_client(queue_manager or '', CD(), channel, conn_info, user, password)
File "C:\Python\lib\site-packages\pymqi\__init__.py", line 1441, in connect_tcp_client
self.connect_with_options(name, cd, user=user, password=password)
File "C:\Python\lib\site-packages\pymqi\__init__.py", line 1423, in connect_with_options
raise MQMIError(rv[1], rv[2])
pymqi.MQMIError: MQI Error. Comp: 2, Reason 2012: FAILED: MQRC_ENVIRONMENT_ERROR
Please see my code below.
import pymqi
queue_manager = 'quename here'
channel = 'channel name here'
host = 'host-name here'
port = '2333'
queue_name = 'queue name here'
message = 'my message here'
conn_info = '%s(%s)' % (host, port)
print(conn_info)
qmgr = pymqi.connect(queue_manager, channel, conn_info)
queue = pymqi.Queue(qmgr, queue_name)
queue.put(message)
print("message sent")
queue.close()
qmgr.disconnect()
I am getting the error at the line below:
qmgr = pymqi.connect(queue_manager, channel, conn_info)
I added the IBM MQ client to the Scripts folder as well. I am using Windows 10, Python 3.8.1, and the IBM MQ 9.1 Windows client installation image. Below is the FFST header:
+-----------------------------------------------------------------------------+
| |
| WebSphere MQ First Failure Symptom Report |
| ========================================= |
| |
| Date/Time :- Tue January 28 2020 16:27:51 Eastern Standard Time |
| UTC Time :- 1580246871.853000 |
| UTC Time Offset :- -300 (Eastern Standard Time) |
| Host Name :- CA-LDLD0SQ2 |
| Operating System :- Windows 10 Enterprise x64 Edition, Build 17763 |
| PIDS :- 5724H7251 |
| LVLS :- 8.0.0.11 |
| Product Long Name :- IBM MQ for Windows (x64 platform) |
| Vendor :- IBM |
| O/S Registered :- 0 |
| Data Path :- C:\Python\Scripts\IBM |
| Installation Path :- C:\Python |
| Installation Name :- MQNI08000011 (126) |
| License Type :- Unknown |
| Probe Id :- XC207013 |
| Application Name :- MQM |
| Component :- xxxInitialize |
| SCCS Info :- F:\build\slot1\p800_P\src\lib\cs\amqxeida.c, |
| Line Number :- 5085 |
| Build Date :- Dec 12 2018 |
| Build Level :- p800-011-181212.1 |
| Build Type :- IKAP - (Production) |
| UserID :- alekhya.machiraju |
| Process Name :- C:\Python\python.exe |
| Arguments :- |
| Addressing mode :- 32-bit |
| Process :- 00010908 |
| Thread :- 00000001 |
| Session :- 00000001 |
| UserApp :- TRUE |
| Last HQC :- 0.0.0-0 |
| Last HSHMEMB :- 0.0.0-0 |
| Last ObjectName :- |
| Major Errorcode :- xecF_E_UNEXPECTED_SYSTEM_RC |
| Minor Errorcode :- OK |
| Probe Type :- INCORROUT |
| Probe Severity :- 2 |
| Probe Description :- AMQ6090: MQM could not display the text for error |
| 536895781. |
| FDCSequenceNumber :- 0 |
| Comment1 :- WinNT error 1082155270 from Open ccsid.tbl. |
| |
+-----------------------------------------------------------------------------+
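For what it's worth, the FFST header above reports "Addressing mode :- 32-bit" for the C:\Python\python.exe process, so it is worth confirming that the Python interpreter and the IBM MQ client libraries have the same bitness. A minimal sketch (not from the original post) that prints the interpreter's architecture for comparison:
import platform
import struct

# Bitness of the running interpreter; compare this against the MQ client
# installation (the FFST above shows the process running in 32-bit mode).
print(platform.architecture()[0])   # e.g. '64bit' or '32bit'
print(struct.calcsize("P") * 8)     # pointer size in bits: 64 or 32
print(platform.python_version())    # e.g. '3.8.1'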


Bitbucket Pipelines export into variable using jq and xq causes error

When running a pipeline in Bitbucket, I want to export into a variable using:
export APEX_CLASSES=$(xq . < package/package.xml | jq '.Package.types | [.] | flatten | map(select(.name=="ApexClass")) | .[] | .members | [.] | flatten | map(select(. | index("*") | not)) | unique | join(",")' -r)
but I get this error in the pipeline:
parse error: Invalid numeric literal at line 1, column 5
I tried to identify the error, but I always get the same one. :(
When I add an escape \ before each " I get this error:
jq: error: syntax error, unexpected INVALID_CHARACTER (Unix shell quoting issues?) at <top-level>, line 1:
.Package.types | [.] | flatten | map(select(.name==\"ApexClass\")) | .[] | .members | [.] | flatten | map(select(. | index(\"*\") | not)) | unique | join(\",\")
jq: 1 compile error
This is package.xml
<?xml version="1.0" encoding="UTF-8"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
<types>
<members>AccountHelper</members>
<members>BoatHelper</members>
<members>CaseHelper</members>
<name>ApexClass</name>
</types>
<version>57.0</version>
</Package>
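Given that package.xml, the jq filter is intended to produce the string AccountHelper,BoatHelper,CaseHelper. As a cross-check of that expected output (a hypothetical sketch, not part of the pipeline; it assumes the file sits at package/package.xml as in the command above), the same extraction in Python:
import xml.etree.ElementTree as ET

# Hypothetical cross-check of what the xq/jq filter should output.
NS = {"sf": "http://soap.sforce.com/2006/04/metadata"}
tree = ET.parse("package/package.xml")  # path assumed from the pipeline command

members = []
for types in tree.getroot().findall("sf:types", NS):
    name = types.find("sf:name", NS)
    if name is not None and name.text == "ApexClass":
        members.extend(
            m.text for m in types.findall("sf:members", NS)
            if m.text and "*" not in m.text  # mirrors: select(. | index("*") | not)
        )

print(",".join(sorted(set(members))))  # AccountHelper,BoatHelper,CaseHelper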

Can’t enable encryption in YugabyteDB cluster using yugabyted cli

[Question posted by a user on YugabyteDB Community Slack]
I'm running a YB cluster of 3 logical nodes on 1 VM and I am trying to set it up with SSL mode enabled. Below are the commands and the config file I am using to start the cluster with SSL mode on:
./bin/yugabyted start --config /data/ybd1/config_1.config
./bin/yugabyted start --base_dir=/data/ybd2 --listen=127.0.0.2 --join=192.168.56.12
./bin/yugabyted start --base_dir=/data/ybd3 --listen=127.0.0.3 --join=192.168.56.12
my config file:
{
"base_dir": "/data/ybd1",
"listen": "192.168.56.12",
"certs_dir": "/root/192.168.56.12/",
"allow_insecure_connections": "false",
"use_node_to_node_encryption": "true"
"use_client_to_server_encryption": "true"
}
I am able to connect using:
bin/ysqlsh -h 127.0.0.3 -U yugabyte -d yugabyte
ysqlsh (11.2-YB-2.11.1.0-b0)
Type "help" for help.
yugabyte=# \l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-----------------+----------+----------+---------+-------------+-----------------------
postgres | postgres | UTF8 | C | en_US.UTF-8 |
system_platform | postgres | UTF8 | C | en_US.UTF-8 |
template0 | postgres | UTF8 | C | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | C | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
yugabyte | postgres | UTF8 | C | en_US.UTF-8 |
But when I try to connect to my YB cluster from the psql client, I get the errors below.
psql -h 192.168.56.12 -p 5433
psql: error: connection to server at "192.168.56.12", port 5433 failed: FATAL: Timed out: OpenTable RPC (request call id 2) to 192.168.56.12:9100 timed out after 120.000s
postgres@acff2570dfbc:~$
And in the yb-tserver logs I am getting the errors below:
I0228 05:00:21.248733 21631 async_initializer.cc:90] Successfully built ybclient
2022-02-28 05:02:21.248 UTC [21624] FATAL: Timed out: OpenTable RPC (request call id 2) to 192.168.56.12:9100 timed out after 120.000s
I0228 05:02:21.251086 21627 poller.cc:66] Poll stopped: Service unavailable (yb/rpc/scheduler.cc:80): Scheduler is shutting down (system error 108)
2022-02-28 05:54:20.987 UTC [23729] LOG: invalid length of startup packet
Any help in this regard is really appreciated.
You’re setting your config the wrong way when using the yugabyted tool. You want to use --master_flags and --tserver_flags as explained in the docs: https://docs.yugabyte.com/latest/reference/configuration/yugabyted/#flags.
An example:
bin/yugabyted start --base_dir=/data/ybd1 --listen=192.168.56.12 --tserver_flags=use_client_to_server_encryption=true,ysql_enable_auth=true,use_cassandra_authentication=true,certs_for_client_dir=/root/192.168.56.12/
Sending the parameters this way should work on your cluster.
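Once the nodes are restarted with those flags, one way to confirm that client-to-server encryption is really in effect is to open a YSQL connection that insists on TLS. A minimal sketch (assuming psycopg2 is available and the default yugabyte/yugabyte credentials; not from the original thread):
import psycopg2

# YSQL speaks the PostgreSQL wire protocol on port 5433, so a TLS-only
# connection attempt shows whether client-to-server encryption is active.
# Host and credentials are assumptions based on the cluster described above.
conn = psycopg2.connect(
    host="192.168.56.12",
    port=5433,
    dbname="yugabyte",
    user="yugabyte",
    password="yugabyte",
    sslmode="require",  # refuse to fall back to an unencrypted connection
)
cur = conn.cursor()
cur.execute("SELECT version();")
print(cur.fetchone()[0])
conn.close()
If the server rejects unencrypted connections, the same call with sslmode="disable" should fail.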

How do I query PostgreSQL within GNAT CE 2019

I am trying to query a PostgreSQL database using GNAT CE 2019. I have two tables in my database, car and person:
mydb1-# \dt
List of relations
Schema | Name | Type | Owner
--------+--------+-------+----------
public | car | table | postgres
public | person | table | postgres
(2 rows)
I would like to perform a simple SELECT statement. When I run this using psql in my terminal, this is what's returned:
mydb1=# SELECT * FROM Person;
person_uid | first_name | last_name | gender | email | date_of_birth | country_of_birth | car_uid
--------------------------------------+------------+------------+--------+------------------------------+---------------+------------------+--------------------------------------
75f5e55d-12b2-463e-93ff-1c921e44c3e1 | Audrie | Vasyukov | Female | avasyukovd6#domainmarket.com | 1988-11-24 | Guatemala |
9e3f7f90-6e9a-4f2d-ae4e-c852d819ed33 | Nefen | Philippard | Male | nphilippardd7#economist.com | 2006-11-08 | Russia |
ffad6113-2321-47c3-8e1d-b8bbe1f7ffa1 | Leonore | Garthland | Female | lgarthlandd8#furl.net | 1991-03-23 | Canada |
268c1977-a5cc-4794-9cdd-e0e4af7890b9 | Yank | Turfitt | Male | yturfittd9#exblog.jp | 1990-02-07 | China |
3c815fa3-74b9-493a-9466-f32010806b16 | Benn | Pawley | Male | bpawleyda#indiegogo.com | 2006-10-28 | Russia |
690fc9e9-309e-4d70-8dab-167653b99763 | Tod | Easen | Male | teasend4#php.net | 1990-08-24 | China | 5fa7490e-ba42-4a96-b806-097bbb16e30e
However, I would like to perform this from within GNAT CE 2019. This is how my main.adb file currently looks:
with GNATCOLL.SQL.Postgres; use GNATCOLL.SQL.Postgres;
with GNATCOLL.SQL.Exec; use GNATCOLL.SQL.Exec;
with GNATCOLL.VFS; use GNATCOLL.VFS;
with GNATCOLL.SQL.Inspect; use GNATCOLL.SQL.Inspect;
with GNATCOLL.SQL; use GNATCOLL.SQL;
procedure Main is
-- Database Description
--
DB_Descr : GNATCOLL.SQL.Exec.Database_Description;
-- Database Connection
--
DB : GNATCOLL.SQL.Exec.Database_Connection;
-- Used to Query the data
--
Q : SQL_Query;
begin
-- Database Description
--
DB_Descr := GNATCOLL.SQL.Postgres.Setup ("mydb1", "parallels", "localhost", "MyPassword", 5432);
-- Database Connection
--
DB := DB_Descr.Build_Connection;
-- Query the data
--
Q := SQL_Select
(Fields => Person.first_name,
From => Person);
Free (DB); -- for all connections you have opened
Free (DB_Descr);
end Main;
When attempting to make a connection to the database, the process terminates successfully.
I'm unsure of the syntax that should be used for the Select statement. If anyone would be able to tell me how to perform a simple SELECT * FROM Person; statement from within GNAT CE 2019, it would be greatly appreciated.
Thank you,
Lloyd
Added 13/05/20
[parallels@localhost gnatcoll_db2ada]$ ls
dborm.py gnatcoll-db2ada-main-generate.adb
dbschema.txt gnatcoll_postgres2ada.adb
gnatcoll_all2ada.adb gnatcoll_postgres2ada.gpr
gnatcoll_all2ada.gpr gnatcoll_sqlite2ada.adb
gnatcoll_db2ada.adb gnatcoll_sqlite2ada.gpr
gnatcoll-db2ada.ads Makefile
gnatcoll_db2ada.gpr makefile.setup
[parallels@localhost gnatcoll_db2ada]$ gnatcoll_postgres2ada -dbmodel dschema.txt/home/parallels/Desktop/dschema.txt
bash: gnatcoll_postgres2ada: command not found...
[parallels@localhost gnatcoll_db2ada]$
Added 15/05/20
gprbuild -d -P/home/parallels/Documents/Ada Projects/TEST/default.gpr -XGNATCOLL_OS=unix -XBUILD=PROD -XGNATCOLL_HASPQPREPARE=yes -XGPR_BUILD=static -XLIBRARY_TYPE=static -XXMLADA_BUILD=static -XGNATCOLL_CORE_BUILD=static -XGNATCOLL_BUILD=static /home/parallels/Documents/Ada Projects/TEST/src/main.adb
Compile
[Ada] main.adb
Bind
[gprbind] main.bexch
[Ada] main.ali
Link
[link] main.adb
/home/parallels/opt/GNAT/2019/bin/../libexec/gcc/x86_64-pc-linux-gnu/8.3.1/ld: cannot find -lpq
collect2: error: ld returned 1 exit status
gprbuild: link of main.adb failed
gprbuild: failed command was: /home/parallels/opt/GNAT/2019/bin/gcc main.o b__main.o /home/parallels/Documents/Ada Projects/TEST/obj/database_names.o /home/parallels/Documents/Ada Projects/TEST/obj/database.o /home/parallels/gnatcoll-db/postgres/lib/static/libgnatcoll_postgres.a /home/parallels/gnatcoll-db/sql/lib/static/libgnatcoll_sql.a /home/parallels/opt/GNAT/2019/lib/gnatcoll.static/libgnatcoll.a /home/parallels/opt/GNAT/2019/lib/gpr/static/gpr/libgpr.a /home/parallels/opt/GNAT/2019/lib/xmla
da/xmlada_schema.static/libxmlada_schema.a /home/parallels/opt/GNAT/2019/lib/xmlada/xmlada_dom.static/libxmlada_dom.a /home/parallels/opt/GNAT/2019/lib/xmlada/xmlada_sax.static/libxmlada_sax.a /home/parallels/opt/GNAT/2019/lib/xmlada/xmlada_input.static/libxmlada_input_sources.a /home/parallels/opt/GNAT/2019/lib/xmlada/xmlada_unicode.static/libxmlada_unicode.a -lpq -L/home/parallels/Documents/Ada Projects/TEST/obj/ -L/home/parallels/Documents/Ada Projects/TEST/obj/ -L/home/parallels/gnatcoll-db/p
ostgres/lib/static/ -L/home/parallels/opt/GNAT/2019/lib/gnatcoll.static/ -L/home/parallels/opt/GNAT/2019/lib/xmlada/xmlada_dom.static/ -L/home/parallels/opt/GNAT/2019/lib/xmlada/xmlada_sax.static/ -L/home/parallels/opt/GNAT/2019/lib/xmlada/xmlada_unicode.static/ -L/home/parallels/opt/GNAT/2019/lib/xmlada/xmlada_input.static/ -L/home/parallels/opt/GNAT/2019/lib/xmlada/xmlada_schema.static/ -L/home/parallels/opt/GNAT/2019/lib/gpr/static/gpr/ -L/home/parallels/gnatcoll-db/sql/lib/static/ -L/home/par
allels/opt/GNAT/2019/lib/gcc/x86_64-pc-linux-gnu/8.3.1/adalib/ -static-libgcc /home/parallels/opt/GNAT/2019/lib/gcc/x86_64-pc-linux-gnu/8.3.1/adalib/libgnarl.a /home/parallels/opt/GNAT/2019/lib/gcc/x86_64-pc-linux-gnu/8.3.1/adalib/libgnat.a -lrt -lpthread -ldl -Wl,-rpath-link,/home/parallels/opt/GNAT/2019/lib/gcc/x86_64-pc-linux-gnu/8.3.1//adalib -Wl,-z,origin,-rpath,$ORIGIN/:$ORIGIN/../../../..//gnatcoll-db/postgres/lib/static:$ORIGIN/../../../..//opt/GNAT/2019/lib/gnatcoll.static:$ORIGIN/../../
../..//opt/GNAT/2019/lib/xmlada/xmlada_dom.static:$ORIGIN/../../../..//opt/GNAT/2019/lib/xmlada/xmlada_sax.static:$ORIGIN/../../../..//opt/GNAT/2019/lib/xmlada/xmlada_unicode.static:$ORIGIN/../../../..//opt/GNAT/2019/lib/xmlada/xmlada_input.static:$ORIGIN/../../../..//opt/GNAT/2019/lib/xmlada/xmlada_schema.static:$ORIGIN/../../../..//opt/GNAT/2019/lib/gpr/static/gpr:$ORIGIN/../../../..//gnatcoll-db/sql/lib/static:$ORIGIN/../../../..//opt/GNAT/2019/lib/gcc/x86_64-pc-linux-gnu/8.3.1/adalib -o main
[2020-05-15 14:56:50] process exited with status 4, 100% (30/30), elapsed time: 02.34s
Added 17/05/20
[SQL.ERROR] FATAL: Ident authentication failed for user "postgres"
_SQL.ERROR_ FATAL: Ident authentication failed for user "postgres"
_SQL.ERROR_ params="dbname='mydb1' user='postgres' host='localhost' sslmode=allow" ()
[SQL.ERROR] Failed to execute SELECT persons.person_uid, persons.first_name, persons.last_name, persons.gender, persons.email, persons.date_of_birth, persons.country_of_birth FROM persons error=No connection to database
FAILED
[2020-05-17 19:30:04] process terminated successfully, elapsed time: 00.19s
Added 29/05/20
/home/parallels/Documents/Ada Projects/Connect to a DB/obj/main
[SQL.ERROR] select failed: SELECT persons.person_uid, persons.first_name, persons.last_name, persons.gender, persons.email, persons.date_of_birth, persons.country_of_birth FROM persons PGRES_FATAL_ERROR ERROR: relation "persons" does not exist
_SQL.ERROR_ LINE 1: ...persons.date_of_birth, persons.country_of_birth FROM persons
_SQL.ERROR_ ^
FAILED
[2020-05-29 22:40:11] process terminated successfully, elapsed time: 00.19s
postgres=# select * from persons;
person_uid | first_name | last_name | gender | email | date_of_birth | country_of_birth
--------------------------------------+------------+------------+--------+------------------------------+---------------+------------------
afbf64be-e7d5-45a1-b8fb-1a9fd66e2765 | Audrie | Vasyukov | Female | avasyukovd6#domainmarket.com | 1988-11-24 | Guatemala
ffda7264-1e54-428b-ae68-9ddb7b97702a | Nefen | Philippard | Male | nphilippardd7#economist.com | 2006-11-08 | Russia
169d5fb9-8d40-451e-8b02-cd1c3639fbed | Leonore | Garthland | Female | lgarthlandd8#furl.net | 1991-03-23 | Canada
e34f4f21-524c-4134-81d6-7fd1fd9a7537 | Yank | Turfitt | Male | yturfittd9#exblog.jp | 1990-02-07 | China
57336d0e-34f2-4166-a100-7b468e96a521 | Benn | Pawley | Male | bpawleyda#indiegogo.com | 2006-10-28 | Russia
b54a8411-3674-4432-b896-aaf35a10919b | Tod | Easen | Male | teasend4#php.net | 1990-08-24 | China
(6 rows)
The user manual states that you must first generate Ada data types that represent the entities (tables, fields, etc.) of the database with which you interact in order to use functions like SQL_Select. This can be done using the gnatcoll_db2ada utility (see here); either by some sort of reflection on the database or by providing a hand written schema (example in the user manual). Here are my own steps for creating an example.
Install dependencies:
$ sudo apt-get install postgresql libpq-dev
Create a database user (here: deedee):
$ sudo -u postgres bash
postgres@debian: $ createuser --pwprompt deedee
Clone gnatcoll-db:
$ git clone https://github.com/AdaCore/gnatcoll-db.git
Build and install gnatcoll-sql:
gnatcoll-db/sql $ make setup
gnatcoll-db/sql $ make
gnatcoll-db/sql $ sudo bash -c "PATH=$PATH:/opt/GNAT/2019/bin make install"
Build and install gnatcoll-postgres:
gnatcoll-db/postgres $ make setup
gnatcoll-db/postgres $ make
gnatcoll-db/postgres $ sudo bash -c "PATH=$PATH:/opt/GNAT/2019/bin make install"
Build and install gnatcoll_db2ada:
gnatcoll-db/gnatcoll_db2ada $ make setup DB_BACKEND=postgres
gnatcoll-db/gnatcoll_db2ada $ make
gnatcoll-db/gnatcoll_db2ada $ sudo bash -c "PATH=$PATH:/opt/GNAT/2019/bin make install"
The utility will be installed next to all other GNAT programs:
$ which gnatcoll_postgres2ada
/opt/GNAT/2019/bin/gnatcoll_postgres2ada
To use the utility, I first created a small database using the data you provided. I recreated the table persons using the commands:
$ sudo -u postgres bash
postgres@debian: $ createdb mydb1
postgres@debian: $ psql mydb1 < persons.sql
with
persons.sql
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
CREATE TYPE gender AS ENUM ('Male', 'Female');
CREATE TABLE persons (
person_uid uuid DEFAULT uuid_generate_v4 () PRIMARY KEY,
first_name VARCHAR (20) NOT NULL,
last_name VARCHAR (20) NOT NULL,
gender gender NOT NULL,
email VARCHAR (50) NOT NULL,
date_of_birth DATE NOT NULL,
country_of_birth VARCHAR (20) NOT NULL
);
INSERT INTO persons (
first_name,
last_name,
gender,
email,
date_of_birth,
country_of_birth
)
VALUES
('Audrie' , 'Vasyukov' , 'Female', 'avasyukovd6#domainmarket.com', '1988-11-24', 'Guatemala'),
('Nefen' , 'Philippard', 'Male' , 'nphilippardd7#economist.com' , '2006-11-08', 'Russia' ),
('Leonore', 'Garthland' , 'Female', 'lgarthlandd8#furl.net' , '1991-03-23', 'Canada' ),
('Yank' , 'Turfitt' , 'Male' , 'yturfittd9#exblog.jp' , '1990-02-07', 'China' ),
('Benn' , 'Pawley' , 'Male' , 'bpawleyda#indiegogo.com' , '2006-10-28', 'Russia' ),
('Tod' , 'Easen' , 'Male' , 'teasend4#php.net' , '1990-08-24', 'China' );
GRANT SELECT ON persons TO deedee;
I then created a database schema (see user manual) and generated the Ada code:
$ gnatcoll_postgres2ada -dbmodel dschema.txt
$ ls
database.adb database.ads database_names.ads dschema.txt
with
dschema.txt
| TABLE | persons | || The contents of person |
| person_uid | TEXT | PK || Auto-generated id |
| first_name | TEXT | NOT NULL || First name |
| last_name | TEXT | NOT NULL || Last name |
| gender | TEXT | NOT NULL || Gender |
| email | TEXT | NOT NULL || E-mail address |
| date_of_birth | DATE | NOT NULL || Date of birth |
| country_of_birth | TEXT | NOT NULL || Country of birth |
I finally added a trace configuration file (based on the example in the GNATcoll user manual), adapted your example code and made it work with my own table:
.gnatdebug (write trace info to standard error; indicated by >&2)
>&2
SQL.yes
SQL.SELECT=yes
SQL.LITE=yes
default.gpr
with "gnatcoll_postgres.gpr";
project Default is
for Source_Dirs use ("src");
for Object_Dir use "obj";
for Main use ("main.adb");
end Default;
main.adb
with Ada.Text_IO; use Ada.Text_IO;
with GNATCOLL.Traces; use GNATCOLL.Traces;
with GNATCOLL.SQL.Postgres; use GNATCOLL.SQL.Postgres;
with GNATCOLL.SQL.Exec; use GNATCOLL.SQL.Exec;
with GNATCOLL.VFS; use GNATCOLL.VFS;
with GNATCOLL.SQL.Inspect; use GNATCOLL.SQL.Inspect;
with GNATCOLL.SQL; use GNATCOLL.SQL;
with Database;
procedure Main is
-- Database Description
DB_Descr : GNATCOLL.SQL.Exec.Database_Description;
-- Database Connection
DB : GNATCOLL.SQL.Exec.Database_Connection;
-- Used to Query the data
Q : SQL_Query;
begin
-- Enable tracing, so we can see if something goes wrong.
GNATCOLL.Traces.Parse_Config_File (".gnatdebug");
-- Database description.
DB_Descr := GNATCOLL.SQL.Postgres.Setup
(Database => "mydb1",
User => "deedee",
Host => "localhost",
Password => "xxxx");
-- Database connection.
DB := DB_Descr.Build_Connection;
-- Define the query.
Q := SQL_Select
(Fields =>
Database.Persons.Person_Uid & -- 0
Database.Persons.First_Name & -- 1
Database.Persons.Last_Name & -- 2
Database.Persons.Gender & -- 3
Database.Persons.Email & -- 4
Database.Persons.Date_Of_Birth & -- 5
Database.Persons.Country_Of_Birth, -- 6
From => Database.Persons);
declare
R : Forward_Cursor;
begin
-- Perform the actual query, show results if OK.
R.Fetch (DB, Q);
if Success (DB) then
while Has_Row (R) loop
Put_Line ("UUID . . : " & Value (R, 0));
Put_Line ("Name . . : " & Value (R, 1) & " " & Value (R, 2));
Put_Line ("Gender . : " & Value (R, 3));
Put_Line ("E-Mail . : " & Value (R, 4));
Put_Line ("Birth. . : " & Value (R, 5) & ", " & Value (R, 6));
New_Line;
Next (R);
end loop;
else
Put_Line ("FAILED");
end if;
end;
Free (DB); -- for all connections you have opened
Free (DB_Descr);
GNATCOLL.Traces.Finalize;
end Main;
build & output
$ gprbuild -P default.gpr
[...]
$ ./obj/main
[SQL.SELECT] SELECT persons.person_uid, persons.first_name, persons.last_name, persons.gender, persons.email, persons.date_of_birth, persons.country_of_birth FROM persons (6 tuples) PGRES_TUPLES_OK
UUID . . : 564b6d18-5f99-4d6b-8098-d5ce79910107
Name . . : Audrie Vasyukov
Gender . : Female
E-Mail . : avasyukovd6#domainmarket.com
Birth. . : 1988-11-24, Guatemala
UUID . . : a4c2743a-40ad-409e-9af7-f21e7b91da05
Name . . : Nefen Philippard
Gender . : Male
E-Mail . : nphilippardd7#economist.com
Birth. . : 2006-11-08, Russia
UUID . . : 04aa6123-b317-4be3-a1af-8a01d43ee60f
Name . . : Leonore Garthland
Gender . : Female
E-Mail . : lgarthlandd8#furl.net
Birth. . : 1991-03-23, Canada
UUID . . : e43ec5ef-3cd2-4d3c-8f3e-106b7c2f2ccf
Name . . : Yank Turfitt
Gender . : Male
E-Mail . : yturfittd9#exblog.jp
Birth. . : 1990-02-07, China
UUID . . : 3e1d6431-62d5-4d4b-8333-a92a66575f9f
Name . . : Benn Pawley
Gender . : Male
E-Mail . : bpawleyda#indiegogo.com
Birth. . : 2006-10-28, Russia
UUID . . : ae60db02-298a-43d3-81a2-59fcf610c045
Name . . : Tod Easen
Gender . : Male
E-Mail . : teasend4#php.net
Birth. . : 1990-08-24, China
APPENDIX
I used the default (non-customized) host-based authentication (hba) file.
pg_hba.conf
# Database administrative login by Unix domain socket
local all postgres peer
# TYPE DATABASE USER ADDRESS METHOD
# "local" is for Unix domain socket connections only
local all all peer
# IPv4 local connections:
host all all 127.0.0.1/32 md5
# IPv6 local connections:
host all all ::1/128 md5
# Allow replication connections from localhost, by a user with the
# replication privilege.
local replication all peer
host replication all 127.0.0.1/32 md5
host replication all ::1/128 md5

How to enable BLOB-logging for a Node.js Api App on Azure?

I have a Node.js API app on Azure. I use bunyan to log every request to stdout. How can I save and read the log files? I enabled BLOB logging. The only thing that shows up in my storage is a bunch of CSV files. Here is an example:
| date | level | applicationName | instanceId | eventId | pid | tid | message
_______________________________________________________________________________________________________________________________________________________________
| 2017-05-17T14:21:15 | Verbose | myApp | tae9d6 | 636306276755847146 | 13192 | -1 | SnapshotHelper::RestoreSnapshotInternal SUCCESS - File.Copy
| 2017-05-17T14:21:15 | Verbose | myApp | tae9d6 | 636306276756784690 | 13192 | -1 | SnapshotHelper::RestoreSnapshotInternal SUCCESS - process
Where are my logs that I printed to stdout?
1) Create a file iisnode.yml in your root folder (D:\home\site\wwwroot) if it does not exist.
2) Add the following lines to it.
loggingEnabled: true
logDirectory: iisnode
After that is done, you can find the logs in D:\home\site\wwwroot\iisnode.
For more info, please refer to https://learn.microsoft.com/en-us/azure/app-service-web/web-sites-nodejs-debug#enable-logging.
After the above settings in iisnode.yml, the logs you see in D:\home\site\wwwroot\iisnode are from BLOB storage or the file system.

How to include modules for code coverage for unit testing?

My assumption is that any module tested using Intern will automatically be covered by Istanbul's code coverage. For reasons unknown to me, my module is not being included.
I am:
running Intern 1.6.2 (installed with npm locally)
testing NodeJS code
using callbacks, not promises
using CommonJS modules, not AMD modules
Directory Structure (only showing relevant files):
plister
|
|--libraries
| |--file-type-support.js
|
|--tests
| |--intern.js
| |--unit
| |--file-type-support.js
|
|--node_modules
|--intern
plister/tests/intern.js
define({
useLoader: {
'host-node': 'dojo/dojo'
},
loader: {
packages: [
{name: 'libraries', location: 'libraries'}
]
},
reporters: ['console'],
suites: ['tests/unit/file-type-support'],
functionalSuites: [],
excludeInstrumentation: /^(tests|node_modules)\//
});
plister/tests/unit/file-type-support.js
define([
'intern!bdd',
'intern/chai!expect',
'intern/dojo/node!fs',
'intern/dojo/node!path',
'intern/dojo/node!stream-equal',
'intern/dojo/node!../../libraries/file-type-support'
], function (bdd, expect, fs, path, streamEqual, fileTypeSupport) {
'use strict';
bdd.describe('file-type-support', function doTest() {
bdd.it('should show that the example output.plist matches the ' +
'temp.plist generated by the module', function () {
var deferred = this.async(),
input = path.normalize('tests/resources/input.plist'),
output = path.normalize('tests/resources/output.plist'),
temporary = path.normalize('tests/resources/temp.plist');
// Test deactivate function by checking output produced by
// function against test output.
fileTypeSupport.deactivate(fs.createReadStream(input),
fs.createWriteStream(temporary),
deferred.rejectOnError(function onFinish() {
streamEqual(fs.createReadStream(output),
fs.createReadStream(temporary),
deferred.callback(function checkEqual(error, equal) {
expect(equal).to.be.true;
}));
}));
});
});
});
Output:
PASS: main - file-type-support - should show that the example output.plist matches the temp.plist generated by the module (29ms)
1/1 tests passed
1/1 tests passed
Output (on failure):
FAIL: main - file-type-support - should show that the example output.plist matches the temp.plist generated by the module (30ms)
AssertionError: expected true to be false
AssertionError: expected true to be false
0/1 tests passed
0/1 tests passed
npm ERR! Test failed. See above for more details.
npm ERR! not ok code 0
Output (after removing excludeInstrumentation):
PASS: main - file-type-support - should show that the example output.plist matches the temp.plist generated by the module (25ms)
1/1 tests passed
1/1 tests passed
------------------------------------------+-----------+-----------+-----------+-----------+
File | % Stmts |% Branches | % Funcs | % Lines |
------------------------------------------+-----------+-----------+-----------+-----------+
node_modules/intern/ | 70 | 50 | 100 | 70 |
chai.js | 70 | 50 | 100 | 70 |
node_modules/intern/lib/ | 79.71 | 42.86 | 72.22 | 79.71 |
Test.js | 79.71 | 42.86 | 72.22 | 79.71 |
node_modules/intern/lib/interfaces/ | 80 | 50 | 63.64 | 80 |
bdd.js | 100 | 100 | 100 | 100 |
tdd.js | 76.19 | 50 | 55.56 | 76.19 |
node_modules/intern/lib/reporters/ | 56.52 | 35 | 57.14 | 56.52 |
console.js | 56.52 | 35 | 57.14 | 56.52 |
node_modules/intern/node_modules/chai/ | 37.9 | 8.73 | 26.38 | 39.34 |
chai.js | 37.9 | 8.73 | 26.38 | 39.34 |
tests/unit/ | 100 | 100 | 100 | 100 |
file-type-support.js | 100 | 100 | 100 | 100 |
------------------------------------------+-----------+-----------+-----------+-----------+
All files | 42.14 | 11.35 | 33.45 | 43.63 |
------------------------------------------+-----------+-----------+-----------+-----------+
My module passes the test and I can make it fail too. It just will not show up in the code coverage. I have done the tutorial hosted on GitHub without any problems.
I tried dissecting the Istanbul and Intern dependencies. I placed a console.log where the files to be covered seem to pass through, but my module doesn't get passed in. I have tried every variation of deferred.callback and deferred.rejectOnError with no difference to the code coverage.
Also, any feedback on my use of deferred.callback and deferred.rejectOnError will be greatly appreciated. I am still a little uncertain on their usage.
Thanks!
As of Intern 1.6, only require('vm').runInThisContext is hooked to add code coverage data, not require. Instrumentation of require was added in Intern 2.0.
The use of callback/rejectOnError in the above code is correct.
