I configured secure HBase-1.1.2 with Hadoop-2.7.1 on Windows. When I enable authorization following "Configuring HBase Authorization", I get the error: ERROR: DISABLED: Security features are not available.
I have set the authorization configurations as below,
Configuration
<property>
  <name>hbase.security.authorization</name>
  <value>true</value>
</property>
<property>
  <name>hbase.coprocessor.master.classes</name>
  <value>org.apache.hadoop.hbase.security.access.AccessController</value>
</property>
<property>
  <name>hbase.coprocessor.region.classes</name>
  <value>org.apache.hadoop.hbase.security.token.TokenProvider,org.apache.hadoop.hbase.security.access.AccessController</value>
</property>
But HBase authorization works fine when I try the same with HBase-0.98.13. Can someone help me enable HBase authorization the correct way?
I encountered the same problem: I was not able to grant privileges to any other users. Mine was a Kerberized Hadoop cluster, and I made the following changes to get it working:
hbase.security.authentication=kerberos
hbase.security.authorization=true
Then I re-deployed the configurations, and it worked fine.
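In hbase-site.xml form, the two settings above are:

```xml
<property>
  <name>hbase.security.authentication</name>
  <value>kerberos</value>
</property>
<property>
  <name>hbase.security.authorization</name>
  <value>true</value>
</property>
```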
I encountered the same problem and was not able to grant privileges to any other users. Mine was a Kerberized Hadoop cluster, and in addition my ZooKeeper was Kerberized, so I did the following.
First, stop HBase.
Then add the following to ${ZOOKEEPER_CONF_DIR}/jaas.conf:
Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/var/local/hadoop/zookeeper-3.4.8/conf/keytabs/hbase.keytab"
  storeKey=true
  useTicketCache=true
  principal="hbase/zte1.zdh.com@ZDH.COM";
};
(My HBase principal is hbase/zte1.zdh.com@ZDH.COM; the username must be the same.)
Then, using the zkCli.sh command line, run rmr /hbase to remove the HBase znode. After that, start your HBase service; this solves the problem.
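Note that the jaas.conf above only takes effect if the ZooKeeper JVM is told where to find it, typically via the java.security.auth.login.config system property. A sketch, assuming the conf directory used above (set it, for example, in ZooKeeper's conf/java.env):

```
SERVER_JVMFLAGS="-Djava.security.auth.login.config=/var/local/hadoop/zookeeper-3.4.8/conf/jaas.conf"
```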
I'm using Apache-Spark 3.2.3.
To connect to Hive over JDBC, HiveServer2 is configured in http transport mode.
hive-site.xml:
<property>
  <name>hive.server2.transport.mode</name>
  <value>http</value>
  <description>
    Expects one of [binary, http].
    Transport mode of HiveServer2.
  </description>
</property>
<property>
  <name>hive.server2.thrift.port</name>
  <value>10000</value>
  <description>Port number of HiveServer2 Thrift interface when hive.server2.transport.mode is 'binary'.</description>
</property>
<property>
  <name>hive.server2.thrift.http.port</name>
  <value>10001</value>
  <description>Port number of HiveServer2 Thrift interface when hive.server2.transport.mode is 'http'.</description>
</property>
<property>
  <name>hive.server2.thrift.http.path</name>
  <value>cliservice</value>
  <description>Path component of URL endpoint when in HTTP mode.</description>
</property>
With Amazon QuickSight, the "http" transport mode fails; I tried ports 10000 and 10001.
If I change the transport mode to binary, QuickSight works on port 10000, but then the Hive JDBC connection fails.
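For reference, the Hive JDBC URL differs between the two transport modes; with the ports and http path from the hive-site.xml above, the URLs would look like this (host is a placeholder):

```
# binary mode
jdbc:hive2://<host>:10000/default
# http mode
jdbc:hive2://<host>:10001/default;transportMode=http;httpPath=cliservice
```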
I'm not using Cloudera, but this topic gives a good idea:
https://community.cloudera.com/t5/Support-Questions/hive-Enable-HTTP-Binary-transport-modes-in-HiveServer2/td-p/94401
Is it possible to manually configure a Hive "config group" to allow multiple HiveServer2 instances in hive-site.xml? Or is there any other way to configure Thrift in Apache Spark so that the binary and http transport modes work at the same time?
I am running parsechecker for the URL https://www.modernfamilydental.net/ and get this output:
Fetch failed with protocol status: exception(16), lastModified=0: Http code=403, url=https://www.modernfamilydental.net/
What is the issue and how can I solve it? I tried changing the agent name, but it did not work.
nutch-site.xml
<property>
  <name>http.agent.name</name>
  <value>crawlbot</value>
</property>
<property>
  <name>plugin.includes</name>
  <value>protocol-httpclient|urlfilter-regex|query-(basic|site|url|lang)|indexer-csv|nutch-extensionpoints|protocol-httpclient|urlfilter-regex|summary-basic|scoring-opic|urlnormalizer-(pass|regex|basic)|protocol-http|urlfilter-regex|parse-(html|tika|metatags|text|js|feed)|index-(basic|anchor|more|metadata)</value>
</property>
<property>
  <name>db.ignore.external.links</name>
  <value>true</value>
</property>
<property>
  <name>db.ignore.external.links.mode</name>
  <value>byDomain</value>
</property>
<property>
  <name>fetcher.server.delay</name>
  <value>2</value>
</property>
<property>
  <name>fetcher.server.min.delay</name>
  <value>0.5</value>
</property>
<property>
  <name>fetcher.threads.fetch</name>
  <value>400</value>
</property>
<property>
  <name>fetcher.max.crawl.delay</name>
  <value>10</value>
  <description>If the Crawl-Delay in robots.txt is set to greater than this value (in seconds) then the fetcher will skip this page, generating an error report. If set to -1 the fetcher will never skip such pages and will wait the amount of time retrieved from robots.txt Crawl-Delay, however long that might be.</description>
</property>
As requested in the comments:
How do you integrate a proxy setup with Nutch?
There are many free proxies (like https://www.sslproxies.org/) and paid ones (you can find many online) that you can easily integrate with Nutch.
Nutch (1.16) provides a number of configuration properties related to proxy server integration:
<property>
  <name>http.proxy.host</name>
  <value>ip-address</value>
  <description>The proxy hostname. If empty, no proxy is used.</description>
</property>
<property>
  <name>http.proxy.port</name>
  <value>proxy port</value>
  <description>The proxy port.</description>
</property>
<property>
  <name>http.proxy.username</name>
  <value>blahblah</value>
  <description>Username for proxy. This will be used by
    'protocol-httpclient', if the proxy server requests basic, digest
    and/or NTLM authentication. To use this, 'protocol-httpclient' must
    be present in the value of 'plugin.includes' property.
    NOTE: For NTLM authentication, do not prefix the username with the
    domain, i.e. 'susam' is correct whereas 'DOMAIN\susam' is incorrect.
  </description>
</property>
<property>
  <name>http.proxy.password</name>
  <value>blahblah</value>
  <description>Password for proxy. This will be used by
    'protocol-httpclient', if the proxy server requests basic, digest
    and/or NTLM authentication. To use this, 'protocol-httpclient' must
    be present in the value of 'plugin.includes' property.
  </description>
</property>
<property>
  <name>http.proxy.realm</name>
  <value></value>
  <description>Authentication realm for proxy. Do not define a value
    if realm is not required or authentication should take place for any
    realm. NTLM does not use the notion of realms. Specify the domain name
    of NTLM authentication as the value for this property. To use this,
    'protocol-httpclient' must be present in the value of
    'plugin.includes' property.
  </description>
</property>
<property>
  <name>http.proxy.type</name>
  <value>HTTP</value>
  <description>
    Proxy type: HTTP or SOCKS (cf. java.net.Proxy.Type).
    Note: supported by protocol-okhttp.
  </description>
</property>
<property>
  <name>http.proxy.exception.list</name>
  <value>nutch.org,abc.com</value>
  <description>A comma separated list of hosts that don't use the proxy
    (e.g. intranets). Example: www.apache.org</description>
</property>
If you look at the Nutch lib-http plugin code, which is the base for all HTTP protocol plugins (protocol-http, protocol-httpclient, protocol-okhttp, etc.), in org.apache.nutch.protocol.http.api.HttpBase:
public void setConf(Configuration conf) {
  this.conf = conf;
  this.proxyHost = conf.get("http.proxy.host");
  this.proxyPort = conf.getInt("http.proxy.port", 8080);
  this.proxyType = Proxy.Type.valueOf(conf.get("http.proxy.type", "HTTP"));
  this.proxyException = arrayToMap(conf.getStrings("http.proxy.exception.list"));
  this.useProxy = (proxyHost != null && proxyHost.length() > 0);
  this.timeout = conf.getInt("http.timeout", 10000);
  // ...
}
As you can see from the above code, Nutch reads those configuration properties when initializing the HTTP client object.
Looking at your plugin.includes setting, you are using protocol-httpclient.
If you look at the code of org.apache.nutch.protocol.httpclient.Http, inside the configureClient method, this specific code wires the proxy server into httpclient:
// HTTP proxy server details
if (useProxy) {
  hostConf.setProxy(proxyHost, proxyPort);
  if (proxyUsername.length() > 0) {
    AuthScope proxyAuthScope = getAuthScope(this.proxyHost, this.proxyPort,
        this.proxyRealm);
    NTCredentials proxyCredentials = new NTCredentials(this.proxyUsername,
        this.proxyPassword, Http.agentHost, this.proxyRealm);
    client.getState().setProxyCredentials(proxyAuthScope, proxyCredentials);
  }
}
Nutch sets up the proxy object so that every request you make through httpclient goes through the proxy server.
I would also suggest increasing fetcher.server.min.delay to 2 seconds; this makes sure the other end does not get abused.
For testing purposes, you can use this tutorial.
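In nutch-site.xml, that suggestion would look like the following (per the property's documentation, fetcher.server.min.delay only applies when fetcher.threads.per.queue is greater than 1):

```xml
<property>
  <name>fetcher.server.min.delay</name>
  <value>2.0</value>
</property>
```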
It was an issue with http.agent.version: they block that agent version, and changing it solved the issue.
I am using Linux Web Apps for Azure together with a SQL Azure Database.
I can save my SQL Azure Database password (for argument's sake, let us say it is pass123) in META-INF/context.xml, and this works successfully:
<Context>
  <Resource name="jdbc/myDB" type="javax.sql.DataSource" auth="Container"
    factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
    driverClassName="com.microsoft.sqlserver.jdbc.SQLServerDriver" initialSize="30"
    maxActive="100" validationQuery="SELECT 1"
    validationQueryTimeout="1000"
    testOnBorrow="true"
    url="jdbc:sqlserver://exampleserver.database.windows.net:1433;database=exampledb;encrypt=true;hostNameInCertificate=westeurope1-a.control.database.windows.net;loginTimeout=10;user=myuser;password=pass123"
  />
</Context>
I however would like the password to be encrypted and not stored in plaintext in the context.xml file. So I did the following - based on https://learn.microsoft.com/en-us/azure/app-service/containers/configure-language-java#data-sources:
I added two Web App/Configuration/Application Settings
CATALINA_OPTS and THEDBPASSWORD
I set CATALINA_OPTS to be "$CATALINA_OPTS -Ddbpassword={THEDBPASSWORD}"
I set THEDBPASSWORD to be the password of the user
I then changed my context.xml to be password={dbpassword} instead of password=pass123
I then however get the following error in the log (and the application fails to start)
Error: Could not find or load main class "$CATALINA_OPTS -Ddbpassword=pass123" -Ddbpassword=pass123
Any ideas?
I solved it by doing the following:
I had forgotten the $ in context.xml, so context.xml needed to be password=${dbpassword}.
I also changed CATALINA_OPTS to be just -Ddbpassword=pass123 and dispensed with the THEDBPASSWORD variable.
I am sure this is not the only way or the best way of doing this, but at least it works.
I was partly inspired by Tomcat 8 - context.xml use Environment Variable in Datasource
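Putting the steps together, the url attribute of the Resource from the question then ends like this (the other connection parameters and Resource attributes stay unchanged and are omitted here); Tomcat substitutes the dbpassword system property set via CATALINA_OPTS at startup:

```xml
url="jdbc:sqlserver://exampleserver.database.windows.net:1433;database=exampledb;encrypt=true;loginTimeout=10;user=myuser;password=${dbpassword}"
```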
I have a Spring integration flow which bridges from ActiveMQ to OracleAQ.
See example project under GitHub - https://github.com/cknzl2014/springio-ora-xa/tree/atomikos.
When I run it without XA, it is blazingly fast.
With XA, it processes only 1 to 2 messages per second.
When profiling the application, I see that for every message a new physical connection is established, and with this, the metadata query is issued on the oracle db.
But I don't understand why it does this, and how I can prevent this from happening.
Does anyone of you guys have experience with OracleAQ and XA?
Could this be a problem with the XA transaction manager (I use Atomikos)?
Thanks for your help,
Chris
We found a solution to the problem.
It consists of four steps.
Step 1: Use the latest Oracle client libraries
The first step is to use the latest Oracle 12c client libraries.
There were significant improvements in ojdbc8.jar; e.g. it now uses stored procedures to get the metadata.
This increased the throughput to about 10 msgs/s.
Step 2: Set up connection pooling correctly
The second step was improving the connection pooling according to this article: http://thinkfunctional.blogspot.ch/2012/05/atomikos-and-oracle-aq-pooling-problem.html
<bean id="oraXaDataSource" primary="true"
      class="oracle.jdbc.xa.client.OracleXADataSource" destroy-method="close">
  <property name="URL" value="${oracle.url}" />
  <property name="user" value="${oracle.username}" />
  <property name="password" value="${oracle.password}" />
</bean>
<bean id="atomikosOraclaDataSource"
      class="org.springframework.boot.jta.atomikos.AtomikosDataSourceBean">
  <property name="uniqueResourceName" value="xaOracleAQ" />
  <property name="xaDataSource" ref="oraXaDataSource" />
  <property name="poolSize" value="5" />
</bean>
<bean id="OracleAQConnectionFactory" class="oracle.jms.AQjmsFactory"
      factory-method="getConnectionFactory">
  <constructor-arg ref="atomikosOraclaDataSource" />
</bean>
This configuration alone resulted in exceptions because of 'auto-commit' on the Oracle connection.
Step 3: Set autoCommit to false
The third step was to set the following java system property (see https://docs.oracle.com/database/121/JAJDB/oracle/jdbc/OracleConnection.html#CONNECTION_PROPERTY_AUTOCOMMIT):
-DautoCommit=false
But then the throughput went down to 1 to 2 msg/s again.
Step 4: Set oracle.jdbc.autoCommitSpecCompliant to false
The last step was to set the following java system property (see https://docs.oracle.com/database/121/JAJDB/oracle/jdbc/OracleConnection.html#CONNECTION_PROPERTY_AUTO_COMMIT_SPEC_COMPLIANT):
-Doracle.jdbc.autoCommitSpecCompliant=false
Now we get a throughput of 80 msgs/s.
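For reference, the two system properties from steps 3 and 4 together on the JVM command line (app.jar is a placeholder for your application):

```
java -DautoCommit=false -Doracle.jdbc.autoCommitSpecCompliant=false -jar app.jar
```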
Conclusion
The setting of oracle.jdbc.autoCommitSpecCompliant to false is not elegant, but solved the problem.
We have to investigate further to see how we can get around this problem without setting oracle.jdbc.autoCommitSpecCompliant to false.
Many thanks to Dani Steinmann (stonie) for the help!
P.S.: I updated the sample project under GitHub - https://github.com/cknzl2014/springio-ora-xa/tree/atomikos.
First of all, you should make sure you use a pool for the JDBC connections.
You may also consider using a ChainedTransactionManager instead of XA for the two target transaction managers, JMS and JDBC.
Also see some information in the Spring JDBC extensions project.
There is also some Oracle AQ API in that project as well.
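A minimal sketch of such a ChainedTransactionManager definition, assuming existing jmsTransactionManager and jdbcTransactionManager beans (the bean names here are placeholders):

```xml
<bean id="chainedTransactionManager"
      class="org.springframework.data.transaction.ChainedTransactionManager">
  <constructor-arg>
    <list>
      <ref bean="jmsTransactionManager"/>
      <ref bean="jdbcTransactionManager"/>
    </list>
  </constructor-arg>
</bean>
```

Note that this only synchronizes the transactions rather than making them truly atomic: the managers commit one after another, so a crash between the commits can still leave the two resources inconsistent.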
We can access the HMC in the JUnit tenant by hitting the URL https://localhost:9002/hmc_junit/hybris, which is defined in tenant_junit.properties as hmc.webroot=/hmc_junit.
But I haven't seen a URL anywhere to access the Backoffice in the JUnit tenant.
Can anybody help me access the Backoffice in the JUnit tenant?
I was looking for it everywhere as well and couldn't find any documentation in the wiki. It doesn't seem to be officially supported, but here is what I found.
Under Hybris 6.3 there is no junit context path for the backoffice application. Here is how you can add one:
Create a file named local_tenant_junit.properties under your configuration folder; it should contain:
backoffice.webroot=/backoffice_junit
Create a customization file inside your config folder: customize/ext-backoffice/backoffice/web/webroot/WEB-INF/backoffice-spring-filter.xml. Copy the content of the original file and update the backofficeFilterChain bean: we want to use dynamicTenantActivationFilter instead of tenantActivationFilter.
<bean id="backofficeFilterChain" class="de.hybris.platform.servicelayer.web.PlatformFilterChain">
  <constructor-arg>
    <list>
      <ref bean="log4jFilter"/>
      <ref bean="dynamicTenantActivationFilter"/>
      <ref bean="backofficeRedirectFilter"/>
      <ref bean="sessionFilter"/>
      <ref bean="backofficeDataSourceSwitchingFilter"/>
      <ref bean="backofficeCatalogVersionActivationFilter"/>
      <ref bean="backofficeContextClassloaderFilter"/>
      <ref bean="backofficeSecureMediaFilter"/>
    </list>
  </constructor-arg>
</bean>
Execute ant clean all customize
Check that in bin/platform/tomcat/conf/server.xml you now have a new context backoffice_junit.
Start your server; you can now access the backoffice application for the master and junit tenants.
For Hybris 6.7, to me the following steps were sufficient:
in config/local_tenant_junit.properties , add
backoffice.webroot=/backoffice_junit
ant server
This puts the endpoint into ${tomcat.webapps} in the server.xml template, resulting in:
<Context path="/backoffice_junit"...
being added to your bin/platform/tomcat/conf/server.xml.
Then when you open https://localhost:9002/backoffice_junit, the DataSourceSwitchingFilter gets the current tenant from a ThreadLocal and activates its dataSource.
Config values inside the junit local properties that worked for me:
backoffice.library.home=${data.home}/junit
backoffice.webroot=/junit_backoffice
For more info see:
https://help.sap.com/viewer/5c9ea0c629214e42b727bf08800d8dfa/1811/en-US/c7e1bf2832414c8ea15c001d5cf1defd.html