When I hit the URL, I get this:
This XML file does not appear to have any style information associated with it. The document tree is shown below.
<EnumerationResults ServiceEndpoint="https://bmiplstorageacc.blob.core.windows.net/" ContainerName="bmiplcontainer">
    <Blobs>
        <Blob>
            <Name>APTITUDE & REASONING TOPICS.PNG</Name>
            <Properties>
                <Creation-Time>Wed, 16 Nov 2022 12:39:56 GMT</Creation-Time>
                <Last-Modified>Wed, 16 Nov 2022 12:39:56 GMT</Last-Modified>
                <Etag>0x8DAC7CFAAA27D93</Etag>
                <ResourceType>file</ResourceType>
                <Content-Length>40678</Content-Length>
                <Content-Type>image/png</Content-Type>
                <Content-Encoding/>
                <Content-Language/>
                <Content-CRC64/>
                <Content-MD5>IKfsZIstgzA+I0WBgJ+2aw==</Content-MD5>
                <Cache-Control/>
                <Content-Disposition/>
                <BlobType>BlockBlob</BlobType>
                <AccessTier>Hot</AccessTier>
                <AccessTierInferred>true</AccessTierInferred>
                <LeaseStatus>unlocked</LeaseStatus>
                <LeaseState>available</LeaseState>
            </Properties>
            <OrMetadata/>
        </Blob>
    </Blobs>
    <NextMarker/>
</EnumerationResults>
I can access individual blobs directly, but I am unable to access the container itself.
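For reference, the XML above is a List Blobs response, and anonymous access to a single blob versus a container listing is governed by different public access levels, which matches one working and not the other. A sketch of the two requests, reusing the account, container, and blob names from the XML (and assuming no SAS token is involved):

# fetch a single blob directly (works with the "blob" or "container" public access level)
curl "https://bmiplstorageacc.blob.core.windows.net/bmiplcontainer/APTITUDE%20%26%20REASONING%20TOPICS.PNG"

# list the container's blobs (anonymous listing needs the "container" public access level)
curl "https://bmiplstorageacc.blob.core.windows.net/bmiplcontainer?restype=container&comp=list"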
The OPC Publisher marketplace image runs successfully as a standalone container (albeit with server connection problems), but I am not able to deploy it as an IoT Edge module, especially after changing the container create options.
Background: On my host laptop I was never able to get the module up, so I created an Ubuntu VM. When I deployed the edge module in the VM with the default container create options, the module did show up in the iotedge module list as "running". I wanted to set the "--op" option to control the publishing rate, so I changed it in the create options using the portal's "Set modules" tab. Since there is no update button, I used the create button to "recreate" the modules. After this the module did not show up.
Since then, the OPC Publisher module has not shown up on the edge VM at all. I am following the Microsoft tutorial.
Following is the command:
sudo docker run -v /iiotedge:/appdata mcr.microsoft.com/iotedge/opc-publisher:latest \
    --aa \
    --pf=/appdata/publishednodes.json \
    --c="HostName=<iot hub name>.azure-devices.net;DeviceId=iothubowner;SharedAccessKey=<hub primary key>" \
    --dc="HostName=<edge device id/name>.azure-devices.net;DeviceId=<edge device id/name>;SharedAccessKey=<edge primary key>" \
    --op=10000
Container create options:
{
    "Hostname": "opcpublisher",
    "Cmd": [
        "--pf=/appdata/publishednodes.json",
        "--aa",
        "--op=10000"
    ],
    "HostConfig": {
        "Binds": [
            "/iiotedge:/appdata"
        ]
    }
}
I have not specified the connection strings explicitly, since Microsoft's documentation assures that the runtime will pass them to the module automatically.
The relevant iotedge journalctl logs are below:
Oct 06 19:36:05 shreesha-VirtualBox iotedged[9622]: 2021-10-06T14:06:05Z [INFO] - Pulling image mcr.microsoft.com/iotedge/opc-publisher:latest...
Oct 06 19:36:08 shreesha-VirtualBox iotedged[9622]: 2021-10-06T14:06:08Z [INFO] - Successfully pulled image mcr.microsoft.com/iotedge/opc-publisher:latest
Oct 06 19:36:08 shreesha-VirtualBox iotedged[9622]: 2021-10-06T14:06:08Z [INFO] - Creating module OPCPublisher...
Oct 06 19:36:08 shreesha-VirtualBox iotedged[9622]: 2021-10-06T14:06:08Z [INFO] - Starting new listener for module OPCPublisher
Oct 06 19:36:08 shreesha-VirtualBox iotedged[9622]: 2021-10-06T14:06:08Z [ERR!] - Internal server error: Could not create module OPCPublisher
Oct 06 19:36:08 shreesha-VirtualBox iotedged[9622]: caused by: Could not get module OPCPublisher
The logs from iotedge itself are not very useful, but here they are anyway:
~$ iotedge logs OPCPublisher
A module runtime error occurred
I have also tried docker container prune just to be sure, but it did not help.
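For completeness, the runtime's built-in diagnostic is also worth capturing here; a minimal sketch of the checks I mean (output will vary by setup):

sudo iotedge check              # built-in configuration and connectivity checks
sudo journalctl -u iotedge -f   # follow the daemon logs while redeploying the module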
Also, strangely, when I try to restart the module from the troubleshoot page in the Azure portal, it throws the error "module not found in the current environment".
Can someone please help me troubleshoot this problem? I will gladly share more details if required.
I raised a support query in the Azure portal. After I sent support bundles and tried various suggestions (removing the DNS configuration, changing the bind path to a non-sudo location, etc.), the team zeroed in on an edge runtime version mismatch.
After re-reading the documentation, I uninstalled the earlier iotedge package and installed aziot-edge instead, and the problem was solved!
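For anyone else hitting this, the fix follows the documented migration from the 1.1 iotedge package to the 1.2 aziot-edge package; roughly, on Ubuntu (a sketch based on Microsoft's upgrade steps, package versions may differ):

sudo apt-get update
sudo apt-get remove iotedge         # remove the old 1.1 runtime
sudo apt-get install aziot-edge     # install the 1.2 runtime
sudo iotedge config import          # convert the old config.yaml to the new format
sudo iotedge config apply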
The team has raised a GitHub issue for public tracking here:
https://github.com/Azure/Industrial-IoT/issues/1425
@asergaz also pointed in the right direction, but I did not notice it since that answer came a bit later.
I'm trying to generate a graph of SQL Server tables. For some reason -command=graph and -outputformat=pdf are not working.
I was able to generate an HTML report when I run with -command=schema, but -command=graph is not working at all.
_schemacrawler>schemacrawler.cmd -server=sqlserver -host=sqlserver -port=1433 ^
    -database=test -schemas=test\..* -user=test -password=******** ^
    -command=graph -noinfo -infolevel=standard -routines= -infolevel=maximum ^
    -tabletypes=TABLE -grepcolumns=.*\.TestID -only-matching ^
    -outputformat=pdf -outputfile=graph.pdf -loglevel=CONFIG
Output:
Jan 16, 2019 3:12:09 PM schemacrawler.tools.executable.SchemaCrawlerExecutable execute
INFO: Executing command <graph> using <schemacrawler.tools.executable.CommandDaisyChain>
Jan 16, 2019 3:12:09 PM schemacrawler.tools.text.operation.OperationCommand execute
INFO: Output format <pdf> not supported for command <graph>
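For what it's worth, later SchemaCrawler releases removed the separate graph command and folded diagram output into the schema command, with Graphviz required on the system. A sketch under that assumption:

_schemacrawler>schemacrawler.cmd -server=sqlserver -host=sqlserver -port=1433 ^
    -database=test -schemas=test\..* -user=test -password=******** ^
    -infolevel=maximum -command=schema ^
    -outputformat=pdf -outputfile=graph.pdf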
I have stood up a Cloudera Quickstart VM on a PC, with all services running and 14 GB of RAM allocated. On the desktop that the VM is running on (not in the VM), I installed RapidMiner in order to test out Radoop before it goes onto a production server. I used "Import from Cluster Manager" in RapidMiner, which retrieved the correct configuration from CDH. When I run the full test, I run into an Access denied error when RapidMiner tests whether it can create a table etc. on Hive.
Logs:
May 18, 2018 3:45:29 PM FINE: Hive query: SHOW TABLES
May 18, 2018 3:45:29 PM FINE: Hive query: set -v
May 18, 2018 3:45:32 PM INFO: Getting radoop_hive-v4.jar file from plugin jar...
May 18, 2018 3:45:32 PM INFO: Remote radoop_hive-v4.jar is up to date.
May 18, 2018 3:45:32 PM INFO: Getting radoop_hive-v4.jar file from plugin jar...
May 18, 2018 3:45:32 PM INFO: Remote radoop_hive-v4.jar is up to date.
May 18, 2018 3:45:32 PM FINE: Hive query: SHOW FUNCTIONS
May 18, 2018 3:45:33 PM INFO: Remote radoop-mr-8.2.1.jar is up to date.
May 18, 2018 3:45:33 PM FINE: Hive query: CREATE TABLE radoop__tmp_cloudera_1526672733223_qznjpj8 (a1 DOUBLE , a2 DOUBLE , a3 DOUBLE , a4 DOUBLE , id0 STRING COMMENT 'role:"id" ', label0 STRING COMMENT 'role:"label" ') ROW FORMAT DELIMITED FIELDS TERMINATED BY ';' STORED AS TEXTFILE
May 18, 2018 3:45:33 PM FINE: Hive query: LOAD DATA INPATH '/tmp/radoop/cloudera/tmp_1526672733088_x0ldwew/' OVERWRITE INTO TABLE radoop__tmp_cloudera_1526672733223_qznjpj8
May 18, 2018 3:45:33 PM FINE: Hive query failed: LOAD DATA INPATH '/tmp/radoop/cloudera/tmp_1526672733088_x0ldwew/' OVERWRITE INTO TABLE radoop__tmp_cloudera_1526672733223_qznjpj8
May 18, 2018 3:45:33 PM FINE: Error: java.sql.SQLException: Error while processing statement: FAILED: Execution Error, return code 20009 from org.apache.hadoop.hive.ql.exec.MoveTask. Access denied: Unable to move source hdfs://quickstart.cloudera:8020/tmp/radoop/cloudera/tmp_1526672733088_x0ldwew to destination hdfs://quickstart.cloudera:8020/user/hive/warehouse/radoop__tmp_cloudera_1526672733223_qznjpj8: Permission denied by sticky bit: user=hive, path="/tmp/radoop/cloudera/tmp_1526672733088_x0ldwew":cloudera:supergroup:drwxrwxrwx, parent="/tmp/radoop/cloudera":cloudera:supergroup:drwxrwxrwt
May 18, 2018 3:45:33 PM FINER: Connecting to Hive. JDBC url: radoop_hive_0.13.0jdbc:hive2://192.168.100.113:10000/default
May 18, 2018 3:45:33 PM FINER: Connecting to Hive took 108 ms.
May 18, 2018 3:45:33 PM FINE: Hive query failed again, error: java.sql.SQLException: Error while processing statement: FAILED: Execution Error, return code 20009 from org.apache.hadoop.hive.ql.exec.MoveTask. Access denied: Unable to move source hdfs://quickstart.cloudera:8020/tmp/radoop/cloudera/tmp_1526672733088_x0ldwew to destination hdfs://quickstart.cloudera:8020/user/hive/warehouse/radoop__tmp_cloudera_1526672733223_qznjpj8: Permission denied by sticky bit: user=hive, path="/tmp/radoop/cloudera/tmp_1526672733088_x0ldwew":cloudera:supergroup:drwxrwxrwx, parent="/tmp/radoop/cloudera":cloudera:supergroup:drwxrwxrwt
May 18, 2018 3:45:33 PM FINE: Error while processing statement: FAILED: Execution Error, return code 20009 from org.apache.hadoop.hive.ql.exec.MoveTask. Access denied: Unable to move source hdfs://quickstart.cloudera:8020/tmp/radoop/cloudera/tmp_1526672733088_x0ldwew to destination hdfs://quickstart.cloudera:8020/user/hive/warehouse/radoop__tmp_cloudera_1526672733223_qznjpj8: Permission denied by sticky bit: user=hive, path="/tmp/radoop/cloudera/tmp_1526672733088_x0ldwew":cloudera:supergroup:drwxrwxrwx, parent="/tmp/radoop/cloudera":cloudera:supergroup:drwxrwxrwt
May 18, 2018 3:45:33 PM FINER: Connecting to Hive. JDBC url: radoop_hive_0.13.0jdbc:hive2://192.168.100.113:10000/default
Maybe this is just a configuration change I can make in CDH, like modifying the Hive config, or some other way to allow RapidMiner to read/write.
Long story short: on the Cloudera Quickstart 5.13 VM you should use the same username for "Hadoop username" on the Global tab and "Hive username" on the Hive tab.
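This makes sense of the error itself: Radoop staged the data under /tmp/radoop/cloudera as user cloudera, but HiveServer2 (running as user hive) performs the LOAD DATA move, and the sticky bit on the staging directory prevents one user from moving another user's files. A quick way to confirm the ownership and sticky bit, using the paths from the log above (a sketch; the sample output line mirrors the permissions reported in the error):

# inspect the staging directory itself, not its contents
hdfs dfs -ls -d /tmp/radoop/cloudera
# drwxrwxrwt   - cloudera supergroup          0 ... /tmp/radoop/cloudera
#          ^ trailing 't' is the sticky bit: 'hive' may not move 'cloudera'-owned entries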
I tried to set up custom Log4j appenders in TIBCO BW/Designer.
I added to <tibco_folder>/bw/5.11/lib/log4j.xml the following appender:
<appender name="TestFile" class="org.apache.log4j.FileAppender">
    <param name="file" value="d:/temp/tibco-test.log"/>
    <param name="Threshold" value="DEBUG"/>
    <param name="append" value="true"/>
    <layout class="org.apache.log4j.PatternLayout">
        <param name="ConversionPattern" value="%d{yyyy MMM dd HH:mm:ss:SSS 'GMT'} %X{offset} %X{engine} %X{role} [%X{msgCategory}] %X{msgcode} %m %n"/>
    </layout>
</appender>
and then I added <appender-ref ref="TestFile"/> to each logger, including bw.logger, alongside the tibco_bw_log appender. Messages are sent to tibco_bw_log, but not to my appender.
My appender only receives some logs, like the ones below, which do not appear in the tibco_bw_log appender (c:\Users\<me>\.TIBCO\logs\<app_name>.log):
2017 Feb 21 17:05:16:693 GMT [] no system property set, defaulting to entrust61 since got class com.entrust.toolkit.security.provider.Initializer
2017 Feb 21 17:05:16:698 GMT [] getVendor()=entrust61
2017 Feb 21 17:05:16:719 GMT [] Initializing Entrust crypto provider in NON FIPS 140-2 mode; insert provider as normal
2017 Feb 21 17:05:17:302 GMT [] using X9_31usingDESede
2017 Feb 21 17:05:18:021 GMT [] getVendor()=entrust61
2017 Feb 21 17:05:18:023 GMT [] Initialized crypto vendor entrust61
java.lang.Exception: FOR TRACING ONLY -- NOT AN ERROR
at com.tibco.security.impl.new.F.init(CryptoVendor.java:69)
...
Even if I remove the tibco_bw_log appender from bw.logger, the logs still go there and not to my appender. I renamed my appender to tibco_bw_log and removed the original appender, but then I got the error "org.apache.log4j.FileAppender cannot be cast to com.tibco.share.util.BWLogFileAppender".
Now I don't even get that error, but my appender does not receive any logs.
Every time I changed the log4j.xml file, I restarted Designer. I also applied the same changes to log4j.properties, and even removed it entirely; log4j.xml seems to take priority anyway.
I also tried specifying the full path of log4j.xml for bw.log4j.configuration in bwengine.xml, and adding the two properties below (as shown here) - with no effect.
<property>
    <name>bw.engine.showInput</name>
    <option>bw.log4j.configuration</option>
    <default>true</default>
    <description>Log4j Configuration file path</description>
</property>
<property>
    <name>bw.engine.showOutput</name>
    <option>bw.log4j.configuration</option>
    <default>true</default>
    <description>Log4j Configuration file path</description>
</property>
I'm using BW 5.11 and Designer 5.8.
What am I missing?
Unfortunately, this is not possible in TIBCO BW: custom loggers can only be used from Java activities.
I'm running a Linux LDAP environment with multiple servers on the domain. As we have added and removed users from our environment, I have started getting these error messages:
Nov 9 05:07:25 ops1 nslcd[1377]: [35895e] lookup of user cn=Deleted1 User,ou=People,dc=company,dc=net failed: No such object
Nov 9 05:07:25 ops1 nslcd[1377]: [35895e] ldap_result() failed: No such object
Nov 9 05:07:25 ops1 nslcd[1377]: [35895e] lookup of user cn=Deleted2 User,ou=People,dc=company,dc=net failed: No such object
Nov 9 05:07:25 ops1 nslcd[1377]: [35895e] ldap_result() failed: No such object
The only users showing up are ones I have deleted, and not just recently, either.
I found the answer in this bug report:
http://tracker.clearfoundation.com/view.php?id=1752
Basically, I had a group that was still referencing those users as members even though I had deleted them. In my case, they were listed in the group's uniqueMember attribute.
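If you need to hunt these down, a search along these lines lists any group entries still referencing a deleted user's DN (a sketch; the base DN and user DN come from the log above, and bind options will vary):

# find group entries whose uniqueMember still points at the deleted user
ldapsearch -x -b "dc=company,dc=net" \
    "(uniqueMember=cn=Deleted1 User,ou=People,dc=company,dc=net)" dn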