HTML::TagFilter command line vs Apache - Linux

I installed HTML::TagFilter from CPAN on a Fedora machine.
This snippet works just fine on the command line:
my $tf = new HTML::TagFilter;
$tf->deny_tags( { TABLE => {style => ["BORDER-BOTTOM"]} });
$tf->deny_tags( { TABLE => {prevstyle => ['any']} });
$str = $tf->filter($str);
But when the same code is run under Apache, I get this error:
[Fri Dec 14 16:11:48 2012] [error] Can't locate object method "new" via
package "HTML::TagFilter" at
/usr/local/lib/perl5/site_perl/5.10.0/HTML/TagFilter.pm line 320.
What could be the source of this error?
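A likely source: the perl that runs under Apache (mod_perl, or a different interpreter with its own @INC) resolves HTML/TagFilter.pm to a different file or version than the shell's perl does; the fact that the "new" failure is reported from inside TagFilter.pm itself points at a module mismatch. A hedged diagnostic sketch, to be run under Apache and compared against the command line:

use HTML::TagFilter;
# Show which TagFilter.pm this interpreter actually loaded, and where
# it searched, so the two environments can be compared.
print STDERR "Loaded: $INC{'HTML/TagFilter.pm'}\n";
print STDERR "\@INC: @INC\n";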

Related

PYODBC Cannot find driver for MSSQL

I am attempting to connect to SQL Server using a simple Python script that looks like the following.
import pyodbc

details = {
    'server': '<hostname>',
    'database': '<database>',
    'username': '<username>',
    'password': '<password>'
}
connect_string = 'DRIVER={{ODBC Driver 17 for SQL Server}};SERVER={server};PORT=1443;DATABASE={database};UID={username};PWD={password}'.format(**details)
connection = pyodbc.connect(connect_string)
print(connection)
The only difference is that I have removed the config values for obvious reasons. However, when I run this script I get the following error:
Traceback (most recent call last):
  File "connect.py", line 12, in <module>
    connection = pyodbc.connect(connect_string)
pyodbc.Error: ('01000', "[01000] [unixODBC][Driver Manager]Can't open lib 'ODBC Driver 17 for SQL Server' : file not found (0) (SQLDriverConnect)")
For reference, the output of running odbcinst -j gives me:
unixODBC 2.3.4
DRIVERS............: /etc/odbcinst.ini
SYSTEM DATA SOURCES: /etc/odbc.ini
FILE DATA SOURCES..: /etc/ODBCDataSources
USER DATA SOURCES..: /home/bipvanwinkle/.odbc.ini
SQLULEN Size.......: 8
SQLLEN Size........: 8
SQLSETPOSIROW Size.: 8
cat /etc/odbcinst.ini:
[SQLServer]
Description = ODBC Driver 17 for SQL Server
Driver = /usr/lib/x86_64-linux-gnu/libodbc.so
Setup = /usr/lib/x86_64-linux-gnu/libodbc.so.1
UsageCount = 1
FileUsage = 1
cat /etc/odbc.ini:
[SQLServer]
Description = ODBC Driver 17 for SQL Server
Driver = /usr/lib/x86_64-linux-gnu/libodbc.so
Servername =
Database =
UID =
Port = 1433
ls /usr/lib/x86_64-linux-gnu/libodbc.so:
/usr/lib/x86_64-linux-gnu/libodbc.so
ldd /usr/lib/x86_64-linux-gnu/libodbc.so:
linux-vdso.so.1 (0x00007ffc86bec000)
libltdl.so.7 => /usr/lib/x86_64-linux-gnu/libltdl.so.7 (0x00007f9841306000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f98410e7000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f9840cf6000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f9840af2000)
/lib64/ld-linux-x86-64.so.2 (0x00007f984177d000)
If it helps, both /etc/odbcinst.ini and /etc/odbc.ini were initially empty. I used a template I found online to fill them out. I definitely could have made errors.
Any ideas where I've gone wrong?
P.S. I'm running this on Ubuntu 17.10
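Two things in these configs stand out: unixODBC resolves DRIVER={ODBC Driver 17 for SQL Server} against the section names in odbcinst.ini, and the only section here is [SQLServer]; moreover, Driver = .../libodbc.so points at unixODBC's own driver-manager library rather than at Microsoft's driver. (PORT=1443 in the connect string also looks like a typo for 1433, the default that the odbc.ini above uses.) A hedged sketch of an odbcinst.ini entry that would match the connect string, assuming the msodbcsql17 package put its library under the usual /opt/microsoft path (the X parts are placeholders; the exact filename varies by driver version):

[ODBC Driver 17 for SQL Server]
Description = Microsoft ODBC Driver 17 for SQL Server
Driver = /opt/microsoft/msodbcsql17/lib64/libmsodbcsql-17.X.so.X.X
UsageCount = 1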

logstash Error: com.mariadb.jdbc.Driver not loaded

I'm trying to sync some tables from MariaDB to Elasticsearch using Logstash.
I'm on a Debian Buster (10) server.
$ java -version
openjdk version "11.0.4" 2019-07-16
OpenJDK Runtime Environment (build 11.0.4+11-post-Debian-1deb10u1)
OpenJDK 64-Bit Server VM (build 11.0.4+11-post-Debian-1deb10u1, mixed mode, sharing)
$ mariadb --version
mariadb Ver 15.1 Distrib 10.3.15-MariaDB, for debian-linux-gnu (x86_64) using readline 5.2
$ /usr/share/logstash/bin/logstash --version
logstash 7.2.0
I tried different connectors:
$ ls -l /usr/share/java/
mariadb-java-client.jar
$ ls -l /etc/logstash/connectors/
mariadb-java-client-2.1.2.jar
mariadb-java-client-2.2.6.jar
mariadb-java-client-2.3.0.jar
mariadb-java-client-2.4.2.jar
mysql-connector-java-8.0.17.jar
Using "org.mariadb.jdbc.Driver" for mariadb connectors and "com.mysql.cj.jdbc.Driver" for mysql connector
$ cat /etc/logstash/conf.d/db-fr-bank.conf
input {
  jdbc {
    jdbc_connection_string => "jdbc:mariadb://localhost:3306/db_fr"
    jdbc_user => "logstash"
    jdbc_password => "<password>"
    jdbc_driver_library => "/usr/share/java/mariadb-java-client.jar"
    jdbc_driver_class => "org.mariadb.jdbc.Driver"
    statement => "SELECT * FROM bank"
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "fr-bank"
  }
}
But, instead of syncing, I keep getting:
$ /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/db-fr-bank.conf
...
[ERROR] 2019-07-29 02:08:17.563 [[main]<jdbc] jdbc - Failed to load /usr/share/java/mariadb-java-client.jar {:exception=>#<TypeError: failed to coerce jdk.internal.loader.ClassLoaders$AppClassLoader to java.net.URLClassLoader>}
[ERROR] 2019-07-29 02:08:17.598 [[main]<jdbc] javapipeline - A plugin had an unrecoverable error. Will restart this plugin.
Pipeline_id:main
Plugin: <LogStash::Inputs::Jdbc jdbc_user=>"logstash", jdbc_password=><password>, statement=>"SELECT * FROM bank", jdbc_driver_library=>"/usr/share/java/mariadb-java-client.jar", jdbc_connection_string=>"jdbc:mariadb://localhost:3306/db_fr", id=>"38a6d112755a5e87278761cf5f41b7e509212d1d02837a03672df2face00943a", jdbc_driver_class=>"org.mariadb.jdbc.Driver", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_f3094292-7482-4b73-95c4-7f78da4da911", enable_metric=>true, charset=>"UTF-8">, jdbc_paging_enabled=>false, jdbc_page_size=>100000, jdbc_validate_connection=>false, jdbc_validation_timeout=>3600, jdbc_pool_timeout=>5, sql_log_level=>"info", connection_retry_attempts=>1, connection_retry_attempts_wait_time=>0.5, parameters=>{"sql_last_value"=>1970-01-01 00:00:00 UTC}, last_run_metadata_path=>"/root/.logstash_jdbc_last_run", use_column_value=>false, tracking_column_type=>"numeric", clean_run=>false, record_last_run=>true, lowercase_column_names=>true>
Error: org.mariadb.jdbc.Driver not loaded. Are you sure you've included the correct jdbc driver in :jdbc_driver_library?
Exception: LogStash::ConfigurationError
Stack: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-input-jdbc-4.3.13/lib/logstash/plugin_mixins/jdbc/jdbc.rb:163:in `open_jdbc_connection'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-input-jdbc-4.3.13/lib/logstash/plugin_mixins/jdbc/jdbc.rb:221:in `execute_statement'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-input-jdbc-4.3.13/lib/logstash/inputs/jdbc.rb:277:in `execute_query'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-input-jdbc-4.3.13/lib/logstash/inputs/jdbc.rb:263:in `run'
/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:309:in `inputworker'
/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:302:in `block in start_input'
...
Same issue here.
I used the workaround from here, with this, and it works.
I.e.:
Copy the driver file to the {logstash install dir}/logstash-core/lib/jars/ directory. These jars get added to the correct JDK classpath as Logstash is started via Java.
And
Change the jdbc_driver_library value in the Logstash conf to "" (i.e. jdbc_driver_library => "") as well; otherwise the code still tries to load the jar separately.
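A hedged sketch of those two steps for the package layout shown in the question (on this setup the Logstash install dir is /usr/share/logstash; the chosen jar version is just one of those listed above):

# 1. Put the driver where Logstash's JVM classpath already picks it up:
sudo cp /etc/logstash/connectors/mariadb-java-client-2.4.2.jar \
    /usr/share/logstash/logstash-core/lib/jars/
# 2. In /etc/logstash/conf.d/db-fr-bank.conf, stop loading it separately:
#    jdbc_driver_library => ""

For background, the "failed to coerce jdk.internal.loader.ClassLoaders$AppClassLoader to java.net.URLClassLoader" error is a known Java 9+ incompatibility: the plugin's jar loading assumed the application class loader is a java.net.URLClassLoader, which stopped being true after Java 8, so placing the jar on the startup classpath sidesteps the plugin's own loader.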

Puppet: exec[]: wget returned 8 instead of 0

I am completely new to Puppet and this is my first time writing Puppet code. I want to download a tar.gz file and then untar it to create the folder.
Here is my code:
file{ "${::filename}.tar.gz":
ensure => 'file',
mode => '0644',
notify => Exec['untar-file'],
}
exec{ 'download-file' :
command => "wget URL_FOR_TAR_GZ",
cwd => "PATH_WHERE_TO_STORE",
user => "my_name",
group => "our company name",
}
exec { 'untar-file':
command => "/bin/tar -xzvf tar_file_name",
cwd => "file_path",
creates => "foldername_to_be_createdc",
user => "my_name",
group => "our company name",
require => Exec['download-file']
}
As soon as I run this I get the errors:
"wget returned 8 instead of one of [0]" and "/Exec[download-file]/returns: change from notrun to 0 failed"
Where am I going wrong?
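In wget's documented exit codes, 8 means "server issued an error response" (a 404, for example), so the first thing to check is that URL_FOR_TAR_GZ is actually fetchable as that user from that machine. Independently of that, without a creates (or unless/onlyif) guard the download exec will re-run on every Puppet run. A hedged sketch, keeping the question's placeholder names:

exec { 'download-file':
  # Fully qualified command (or keep a path => [...] parameter).
  command => '/usr/bin/wget URL_FOR_TAR_GZ',
  cwd     => 'PATH_WHERE_TO_STORE',
  # Skip the download when the archive is already present.
  creates => 'PATH_WHERE_TO_STORE/tar_file_name',
  user    => 'my_name',
  group   => 'our company name',
}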

Could not evaluate: [/dev/null]: is an invalid url

I am automating an instance with Puppet on Google Compute Engine. I installed the necessary gcloud tooling and run the manifest with "puppet apply new-ins.pp", but it does not execute successfully; I get this error:
Could not evaluate: [/dev/null]: is an invalid url
Could not evaluate: Invalid line 3: url[/dev/null]:
What exactly do I need to put in device.conf?
File new-ins.pp:
gce_instance { 'puppet-test':
  ensure                 => present,
  description            => 'A Puppet test',
  machine_type           => 'n1-standard-1',
  zone                   => 'us-central1-a',
  network                => 'default',
  image                  => 'projects/centos-cloud/global/images/centos-6-v20131120',
  tags                   => ['puppet', 'pp-master'],
  startupscript          => 'puppet-enterprise.sh',
  metadata               => {
    'pe_role'         => 'master',
    'pe_version'      => '3.3.1',
    'pe_consoleadmin' => 'arunp7080@gmail.com',
    'pe_consolepwd'   => 'puppetize',
  },
  service_account_scopes => ['compute-ro'],
}
That's the output I get:
Error: /Stage[main]/Main/Gce_instance[puppet-test]: Could not evaluate: Invalid line 3: url[/dev/null]:
/usr/lib/ruby/site_ruby/1.8/puppet/util/network_device/config.rb:65:in `parse'
/usr/lib/ruby/site_ruby/1.8/puppet/util/network_device/config.rb:44:in `each'
/usr/lib/ruby/site_ruby/1.8/puppet/util/network_device/config.rb:44:in `parse'
/usr/lib/ruby/site_ruby/1.8/puppet/util/network_device/config.rb:42:in `open'
/usr/lib/ruby/site_ruby/1.8/puppet/util/network_device/config.rb:42:in `parse'
/usr/lib/ruby/site_ruby/1.8/puppet/util/network_device/config.rb:33:in `read'
/usr/lib/ruby/site_ruby/1.8/puppet/util/network_device/config.rb:26:in `initialize'
I also ran into this issue myself. It seems there is additional URI validation starting from Puppet 3.7.5:
https://github.com/puppetlabs/puppet/blob/3.7.5/lib/puppet/util/network_device/config.rb#L86
To work around this, I've temporarily commented out the validation rule in my local copy...
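For reference, a hedged sketch of the device.conf entry that the gce_compute module's documentation describes (the project name is a placeholder); it is exactly this [/dev/null] pseudo-URL that the stricter validation rejects:

[my-project-id]
type gce
url [/dev/null]:my-project-id

So besides patching config.rb locally, staying on a Puppet release before 3.7.5 should also avoid the error.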

How to start Druid from a Puppet script

I am trying to run Druid on a local Vagrant machine. I use Puppet to fetch the archives, extract them, etc. However, I run into a problem when trying to start the historical and overlord nodes.
I use the following code to start the servers:
file_line { "configure_historical_server":
path => '/usr/share/druid-services-0.6.160/config/historical/runtime.properties',
line => 'druid.extensions.coordinates=["io.druid.extensions:druid-s3- extensions:0.6.147","io.druid.extensions:druid-hdfs-storage:0.6.147"]',
match => '^druid.extensions.coordinates*',
require => [ Exec["run_coordinator"] ],
}
exec { "run_historical":
cwd => "/usr/share/druid-services-0.6.160/",
command => "nohup java -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 -classpath lib/*:/usr/lib/hadoop-hdfs/*:/usr/lib/hadoop/*:/usr/lib/hadoop/client/*:config/historical io.druid.cli.Main server historical&",
path => ["/bin", "/usr/bin"],
require => [ File_Line["configure_historical_server"] ],
}
file_line { "configure_overlord_server":
path => '/usr/share/druid-services-0.6.160/config/overlord/runtime.properties',
line => 'druid.extensions.coordinates=["io.druid.extensions:druid-kafka-seven:0.6.147","io.druid.extensions:druid-hdfs-storage:0.6.147"]',
match => '^druid.extensions.coordinates*',
require => [ Exec["run_historical"] ],
}
exec { "run_overlord":
cwd => "/usr/share/druid-services-0.6.160/",
command => "nohup java -Xmx2g -Duser.timezone=UTC -Dfile.encoding=UTF-8 -classpath lib/*:/usr/lib/hadoop-hdfs/*:/usr/lib/hadoop/*:/usr/lib/hadoop/client/*:config/overlord io.druid.cli.Main server overlord&",
path => ["/bin", "/usr/bin"],
require => [ File_Line["configure_overlord_server"] ],
}
but both the overlord and the historical server fail with the following error:
Caused by: java.io.FileNotFoundException: /home/vagrant/.m2/repository/org/apache/hadoop/hadoop-hdfs/2.3.0/aether-e687f19b-733b-4348-a06f-e67797a26748-hadoop-hdfs-2.3.0.jar-in-progress (No such file or directory)
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.<init>(FileInputStream.java:138)
at org.eclipse.aether.internal.impl.DefaultFileProcessor.copy(DefaultFileProcessor.java:151)
at org.eclipse.aether.internal.impl.DefaultFileProcessor.copy(DefaultFileProcessor.java:139)
at org.eclipse.aether.internal.impl.DefaultFileProcessor.move(DefaultFileProcessor.java:214)
at io.tesla.aether.connector.AetherRepositoryConnector$GetTask.rename(AetherRepositoryConnector.java:624)
at io.tesla.aether.connector.AetherRepositoryConnector$GetTask.run(AetherRepositoryConnector.java:404)
at io.tesla.aether.connector.AetherRepositoryConnector.get(AetherRepositoryConnector.java:232)
... 8 more
Any idea how to fix this? When I start those servers from the command line one after another (I wait until the historical node is started, then I start the overlord), everything works fine.
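The trace shows Aether (which Druid uses to resolve druid.extensions.coordinates) losing a race in the shared local repository: because both commands are backgrounded with nohup ... &, each exec returns immediately, so the overlord JVM starts while the historical JVM is still downloading the same artifacts into /home/vagrant/.m2, and one process moves the other's *-in-progress file out from under it. Starting the servers by hand one after another works because the repository is already populated before the second one starts. A hedged sketch of a crude workaround, delaying the second startup (the 120-second figure is an assumption; size it to however long the first extension download takes, and provider => shell makes a shell interpret the && and trailing &):

exec { "run_overlord":
  cwd      => "/usr/share/druid-services-0.6.160/",
  provider => shell,
  # Hypothetical delay so the two JVMs don't race on the shared ~/.m2.
  command  => "sleep 120 && nohup java -Xmx2g -Duser.timezone=UTC -Dfile.encoding=UTF-8 -classpath lib/*:/usr/lib/hadoop-hdfs/*:/usr/lib/hadoop/*:/usr/lib/hadoop/client/*:config/overlord io.druid.cli.Main server overlord &",
  path     => ["/bin", "/usr/bin"],
  require  => [ File_line["configure_overlord_server"] ],
}

A cleaner fix would be a one-off foreground run (or a pre-populated extension repository) so the jars already exist before either server starts.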
