How to stream a log file from Windows 7 to HDFS on Linux?
Flume on Windows is giving an error.
I have installed 'flume-node-0.9.3' on Windows 7 (Node 1). The 'flumenode' service is running and localhost:35862 is accessible.
On Windows, the log file is located at 'C:/logs/Weblogic.log'.
The Flume agent on CentOS Linux (Node 2) is also running.
On the Windows machine, the JAVA_HOME variable is set to "C:\Program Files\Java\jre7".
java.exe is located at "C:\Program Files\Java\jre7\bin\java.exe".
The Flume node is installed at "C:\Program Files\Cloudera\Flume 0.9.3".
Here is the flume-src.conf file placed inside the 'conf' folder of Flume on Windows 7 (Node 1):
source_agent.sources = weblogic_server
source_agent.sources.weblogic_server.type = exec
source_agent.sources.weblogic_server.command = tail -f C:/logs/Weblogic.log
source_agent.sources.weblogic_server.batchSize = 1
source_agent.sources.weblogic_server.channels = memoryChannel
source_agent.sources.weblogic_server.interceptors = itime ihost itype
source_agent.sources.weblogic_server.interceptors.itime.type = timestamp
source_agent.sources.weblogic_server.interceptors.ihost.type = host
source_agent.sources.weblogic_server.interceptors.ihost.useIP = false
source_agent.sources.weblogic_server.interceptors.ihost.hostHeader = host
source_agent.sources.weblogic_server.interceptors.itype.type = static
source_agent.sources.weblogic_server.interceptors.itype.key = log_type
source_agent.sources.weblogic_server.interceptors.itype.value = apache_access_combined
source_agent.channels = memoryChannel
source_agent.channels.memoryChannel.type = memory
source_agent.channels.memoryChannel.capacity = 100
source_agent.sinks = avro_sink
source_agent.sinks.avro_sink.type = avro
source_agent.sinks.avro_sink.channel = memoryChannel
source_agent.sinks.avro_sink.hostname = 10.10.201.40
source_agent.sinks.avro_sink.port = 41414
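(Note: tail is not a native Windows command, so this exec source assumes a tail.exe, e.g. from GnuWin32 or Cygwin, is on the PATH. An untested alternative sketch using PowerShell's Get-Content to follow the file would be:
source_agent.sources.weblogic_server.command = powershell.exe -NoProfile -Command "Get-Content 'C:/logs/Weblogic.log' -Wait -Tail 1"
)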
I tried to run the above configuration by executing the following command inside the Flume folder:
C:\Program Files\Cloudera\Flume 0.9.3>"C:\Program Files\Java\jre7\bin\java.exe"
-Xmx20m -Dlog4j.configuration=file:///%CD%\conf\log4j.properties -cp "C:\Program Files\Cloudera\Flume 0.9.3\lib\*" org.apache.flume.node.Application
-f C:\Program Files\Cloudera\Flume 0.9.3\conf\flume-src.conf -n source_agent
But it gives the following message:
Error: Could not find or load main class Files\Cloudera\Flume
Here is the trg-node.conf file running on CentOS (Node 2). The CentOS node is working fine:
collector.sources = AvroIn
collector.sources.AvroIn.type = avro
collector.sources.AvroIn.bind = 0.0.0.0
collector.sources.AvroIn.port = 41414
collector.sources.AvroIn.channels = mc1 mc2
collector.channels = mc1 mc2
collector.channels.mc1.type = memory
collector.channels.mc1.capacity = 100
collector.channels.mc2.type = memory
collector.channels.mc2.capacity = 100
collector.sinks = HadoopOut
collector.sinks.HadoopOut.type = hdfs
collector.sinks.HadoopOut.channel = mc2
collector.sinks.HadoopOut.hdfs.path = /user/root
collector.sinks.HadoopOut.hdfs.callTimeout = 150000
collector.sinks.HadoopOut.hdfs.fileType = DataStream
collector.sinks.HadoopOut.hdfs.writeFormat = Text
collector.sinks.HadoopOut.hdfs.rollSize = 0
collector.sinks.HadoopOut.hdfs.rollCount = 10000
collector.sinks.HadoopOut.hdfs.rollInterval = 600
The problem is due to the white space between "Program" and "Files" in this path:
C:\Program Files\Cloudera\Flume 0.9.3
Because %CD% expands to this path and is not quoted, the -Dlog4j.configuration argument splits at the space, and Java treats the next token, "Files\Cloudera\Flume", as the main class; that is exactly the error you see. Consider installing Flume in a path without whitespace; it will work like a charm.
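If reinstalling is not an option, quoting every argument that contains the path should also avoid the split; for example (a sketch using the same paths as above, untested):
cd /d "C:\Program Files\Cloudera\Flume 0.9.3"
"C:\Program Files\Java\jre7\bin\java.exe" -Xmx20m ^
 "-Dlog4j.configuration=file:///%CD%\conf\log4j.properties" ^
 -cp "C:\Program Files\Cloudera\Flume 0.9.3\lib\*" ^
 org.apache.flume.node.Application ^
 -f "C:\Program Files\Cloudera\Flume 0.9.3\conf\flume-src.conf" -n source_agent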
Related
We have a big-data Hadoop cluster based on Hortonworks HDP version 2.6.4 and Ambari version 2.6.1.
All machines run RHEL 7.2.
Our cluster has more than 540 machines, and every machine runs an ambari-agent that communicates with the Ambari server (the Ambari server is installed on only one machine, while ambari-agent is installed on all machines).
Before we used Ansible, everything was fine when we did an ambari-agent upgrade and ambari-agent restart.
But recently we started using Ansible (ansible-playbook) in order to automate the installation,
and Ansible runs on all machines.
Now, when a task does the ambari-agent restart, we immediately notice that the Ansible execution is stopped and killed.
After some investigation we saw that ambari-agent uses the following ports:
url_port = 8440
secured_url_port = 8441
ping_port = 8670
But we don't see any Ansible process using the above ports, so we don't think it's related.
The basic issue, however, is clear:
when an Ansible task performs an ambari-agent restart on a remote machine, it causes Ansible to be interrupted and killed.
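One thing we plan to test (an assumption on our side: the restart tears down the process group of the SSH session that Ansible is holding) is to fire the restart in the background and not keep the connection open, e.g.:
- name: restart ambari-agent detached from the Ansible session
  command: ambari-agent restart
  async: 120   # allow up to 2 minutes in the background
  poll: 0      # fire and forget; do not hold the connection open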
The ambari-agent configuration looks like this:
[server]
hostname = datanode02.gtfactory.com
url_port = 8440
secured_url_port = 8441
connect_retry_delay = 10
max_reconnect_retry_delay = 30
[agent]
logdir = /var/log/ambari-agent
piddir = /var/run/ambari-agent
prefix = /var/lib/ambari-agent/data
loglevel = INFO
data_cleanup_interval = 86400
data_cleanup_max_age = 2592000
data_cleanup_max_size_mb = 100
ping_port = 8670
cache_dir = /var/lib/ambari-agent/cache
tolerate_download_failures = true
run_as_user = root
parallel_execution = 0
alert_grace_period = 5
status_command_timeout = 5
alert_kinit_timeout = 14400000
system_resource_overrides = /etc/resource_overrides
[security]
keysdir = /var/lib/ambari-agent/keys
server_crt = ca.crt
passphrase_env_var_name = AMBARI_PASSPHRASE
ssl_verify_cert = 0
credential_lib_dir = /var/lib/ambari-agent/cred/lib
credential_conf_dir = /var/lib/ambari-agent/cred/conf
credential_shell_cmd = org.apache.hadoop.security.alias.CredentialShell
[network]
use_system_proxy_settings = true
[services]
pidlookuppath = /var/run/
[heartbeat]
state_interval_seconds = 60
dirs = /etc/hadoop,/etc/hadoop/conf,/etc/hbase,/etc/hcatalog,/etc/hive,/etc/oozie,/etc/sqoop,/var/run/hadoop,/var/run/zookeeper,/var/run/hbase,/var/run/templeton,/var/run/oozie,/var/log/hadoop,/var/log/zookeeper,/var/log/hbase,/var/run/templeton,/var/log/hive
log_lines_count = 300
idle_interval_min = 1
idle_interval_max = 10
[logging]
syslog_enabled = 0
For now we are thinking about the following:
maybe Ansible crashes because TLSv1 (Transport Layer Security) is restricted; by default ambari-agent connects with TLSv1.
So we are considering setting force_https_protocol=PROTOCOL_TLSv1_2 in the ambari-agent configuration, but this is only an assumption.
Is our suggestion, with the new conf below, something that might help?
[security]
force_https_protocol=PROTOCOL_TLSv1_2 <------ the new update
keysdir = /var/lib/ambari-agent/keys
server_crt = ca.crt
passphrase_env_var_name = AMBARI_PASSPHRASE
ssl_verify_cert = 0
credential_lib_dir = /var/lib/ambari-agent/cred/lib
credential_conf_dir = /var/lib/ambari-agent/cred/conf
credential_shell_cmd = org.apache.hadoop.security.alias.CredentialShell
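Before (or in addition to) changing the config, it may be worth probing directly which TLS versions the server side actually accepts on the agent ports; a quick check from an agent host against the hostname from the [server] section (assuming the openssl CLI is available):
openssl s_client -connect datanode02.gtfactory.com:8440 -tls1_2 < /dev/null
openssl s_client -connect datanode02.gtfactory.com:8440 -tls1 < /dev/null
If the -tls1 probe fails while -tls1_2 succeeds, that would support the theory above.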
I have been trying to download an .sln from Azure DevOps, but I get a message that wants me to remap the local path. The currently mapped local path is correct, where I want it on my desktop, but this pop-up comes up and then I can't open the project file.
GlobalSection(TeamFoundationVersionControl) = preSolution
SccNumberOfProjects = 3
SccEnterpriseProvider = {4CA58AB2-18FA-4F8D-95D4-32DDF27D184C}
SccTeamFoundationServer = https://nccn.visualstudio.com/defaultcollection
SccProjectUniqueName0 = ..\\NCCN\u0020Libraries\\GuidelineDataLayer\\GuidelineDataLayer\\GuidelineDataLayer.csproj
SccProjectName0 = ../../NCCN\u0020Libraries/GuidelineDataLayer/GuidelineDataLayer
SccLocalPath0 = ..\\NCCN\u0020Libraries\\GuidelineDataLayer\\GuidelineDataLayer
SccLocalPath1 = .
SccProjectUniqueName2 = NccnWebApi\\Nccn.WebService.GAT.csproj
SccProjectName2 = NccnWebApi
SccLocalPath2 = NccnWebApi
EndGlobalSection
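In case it is useful (an assumption, not something confirmed for this setup): if the goal is just to open the solution without the old TFVC binding, a common workaround is to edit the .sln in a text editor and delete the whole source-control section shown above, i.e. everything from
GlobalSection(TeamFoundationVersionControl) = preSolution
down to its matching
EndGlobalSection
then reopen the solution and let Visual Studio re-create the bindings if needed.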
I am having a strange error when setting up a unixODBC connection to an Oracle 11g R1 database. After everything was set up, I wanted to test the connection using isql. It keeps returning the error
[08004][unixODBC][Oracle][ODBC][Ora]ORA-12154: TNS:could not resolve the connect identifier specified
What is confusing to me is that I can connect via sqlplus using the same environment and TNS notation just fine:
sqlplus dbuser/password@DBOPBAC9
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Release 11.2.0.3.0 - 64bit Production
SQL>
I have been working on this problem for two days now and can't find a solution. ORA-12154 is a common error for which I have found a lot of possible solutions, but none of them worked for me. It is frustrating.
Here is what I have tried.
The environment variables mentioned below are all set before starting isql:
ORACLE_SID=DBOPBAC9
ORACLE_BASE=/CSGPBAC9/DBA/oracle
ORACLE_INSTANT_CLIENT_64=/CSGPBAC9/opt/myuser/tools/instantclient_11_2_x64
ORACLE_HOME=/CSGPBAC9/DBA/oracle/product/11.2.0
TNS_ADMIN=/CSGPBAC9/DBA/oracle/product/11.2.0/network/admin
This is the tnsnames.ora found in the $TNS_ADMIN directory:
DBOPBAC9 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = host IP)(PORT = 1480))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = DBOPBAC9)
)
)
This is the sqlnet.ora
TRACE_LEVEL_CLIENT = OFF
SQLNET.EXPIRE_TIME = 10
NAMES.DIRECTORY_PATH = (TNSNAMES)
DIAG_ADR_ENABLED=off
This is my unixODBC setup. I have installed unixODBC into the directory /opt/unixODBC and set the environment variables accordingly. The odbc.ini is in the directory /opt/myuser/tools/unixODBC, and the variables are also set.
odbc.ini
[OracleODBC-11g]
Application Attributes = T
Attributes = W
BatchAutocommitMode = IfAllSuccessful
BindAsFLOAT = F
CloseCursor = F
DisableDPM = F
DisableMTS = T
Driver = Oracle 11g ODBC driver
DSN = OracleODBC-11g
EXECSchemaOpt =
EXECSyntax = T
Failover = T
FailoverDelay = 10
FailoverRetryCount = 10
FetchBufferSize = 64000
ForceWCHAR = F
Lobs = T
Longs = T
MaxLargeData = 0
MetadataIdDefault = F
QueryTimeout = T
ResultSets = T
ServerName = //host.ip/DBOPBAC9
SQLGetData extensions = F
Translation DLL =
Translation Option = 0
DisableRULEHint = T
UserID =
StatementCache=F
CacheBufferSize=20
UseOCIDescribeAny=F
odbcinst.ini
[Oracle 11g ODBC driver]
Description = Oracle ODBC driver for Oracle 11g
Driver =
Driver64 = /CSGPBAC9/opt/myuser/tools/instantclient_11_2_x64/libsqora.so.11.1
Setup =
FileUsage =
CPTimeout =
CPReuse =
I have created strace output to check for errors, but unfortunately I can't find anything. To me it looks like it is able to find the tnsnames.ora file and read it.
You need to edit odbc.ini: set ServerName to the TNS alias defined in tnsnames.ora (here, DBOPBAC9) rather than the //host.ip/DBOPBAC9 form:
ServerName = DBOPBAC9
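After that change, the connection can be retested with isql's verbose flag (assuming the same credentials that work in sqlplus):
isql -v OracleODBC-11g dbuser password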
How can I use dcmprscp to receive a DICOM file from a Print SCU and save it? I'm using DCMTK 3.6, and I'm having some trouble using it with the default help. This is what I'm doing in CMD:
dcmprscp.exe --config dcmpstat.cfg --printer PRINT2FILE
Each time I receive this message, but 'database\index.dat' doesn't exist on Windows:
W: $dcmtk: dcmprscp v3.6.0 2011-01-06 $
W: 2016-02-21 00:08:09
W: started
E: database\index.dat: No such file or directory
F: Unable to access database 'database'
I tried to follow some tips, but got the same result:
http://www.programmershare.com/2468333/
http://www.programmershare.com/3020601/
And this is my printer's PRINT2FILE config:
[PRINT2FILE]
hostname = localhost
type = LOCALPRINTER
description = PRINT2FILE
port = 20006
aetitle = PRINT2FILE
DisableNewVRs = true
FilmDestination = MAGAZINE\PROCESSOR\BIN_1\BIN_2
SupportsPresentationLUT = true
PresentationLUTinFilmSession = true
PresentationLUTMatchRequired = true
PresentationLUTPreferSCPRendering = false
SupportsImageSize = true
SmoothingType = 0\1\2\3\4\5\6\7\8\9\10\11\12\13\14\15
BorderDensity = BLACK\WHITE\150
EmptyImageDensity = BLACK\WHITE\150
MaxDensity = 320\310\300\290\280\270
MinDensity = 20\25\30\35\40\45\50
Annotation = 2\ANNOTATION
Configuration_1 = PERCEPTION_LUT=OEM001
Configuration_2 = PERCEPTION_LUT=KANAMORI
Configuration_3 = ANNOTATION1=FILE1
Configuration_4 = ANNOTATION1=PATID
Configuration_5 = WINDOW_WIDTH=256\WINDOW_CENTER=128
Supports12Bit = true
SupportsDecimateCrop = false
SupportsTrim = true
DisplayFormat=1,1\2,1\1,2\2,2\3,2\2,3\3,3\4,3\5,3\3,4\4,4\5,4\6,4\3,5\4,5\5,5\6,5\4,6\5,6
FilmSizeID = 8INX10IN\11INX14IN\14INX14IN\14INX17IN
MediumType = PAPER\CLEAR FILM\BLUE FILM
MagnificationType = REPLICATE\BILINEAR\CUBIC
The documentation of the "dcmprscp" tool says:
The dcmprscp utility implements the DICOM Basic Grayscale Print
Management Service Class as SCP. It also supports the optional
Presentation LUT SOP Class. The utility is intended for use within the
DICOMscope viewer.
That means it is usually not run from the command line (as most of the other DCMTK tools are) but is started automatically in the background by DICOMscope.
Anyway, I think the error message is clear:
E: database\index.dat: No such file or directory
F: Unable to access database 'database'
Did you check whether there is a subdirectory "database" and whether the "index.dat" file exists in this directory? If you wonder why there is a need for a "database" at all, please read the next paragraph of the documentation:
The dcmprscp utility accepts print jobs from a remote Print SCU.
It does not create real hardcopies but stores print jobs in the local
DICOMscope database as a set of Stored Print objects (one per page)
and Hardcopy Grayscale images (one per film box N-SET)
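So if you want to run dcmprscp manually anyway, a minimal attempt (under the assumption that dcmprscp creates index.dat itself once the folder is accessible) would be to create the database directory in the working directory before starting the SCP:
mkdir database
dcmprscp.exe --config dcmpstat.cfg --printer PRINT2FILE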
I have installed ABAP Development Tools on Eclipse 4.2 and Kubuntu 12.04 64 bit and everything went fine.
When I try to create a new ABAP project and search for configured SAP connections in the SAP GUI (I have SAP GUI for Java 7.30 rev 3), Eclipse shows the following error:
"Configuration not found in settings file '/home/dfabbri/.SAPGUI/settings', with include 'null', and message server 'null'"
I verified that file '/home/dfabbri/.SAPGUI/settings' is present and not empty; here is the content:
############################################################
#
# file : /home/dfabbri/.SAPGUI/settings
# created : 08.05.2012 12:42:08 CEST
# encoding: UTF-8
#
############################################################
#logonFrameY = "83"
#logonFrameX = "137"
#GLF_showDetailCol = "1"
#GLF_ColumnState = "0 / 75"
#logonFrame_2_X = "970"
#logonFrame_2_Y = "241"
#frameWidth = "778"
#frameHeight = "900"
#logonFrame_2_Width = "348"
#logonFrame_2_Height = "451"
#lookAndFeelDefault = "Tradeshow"
#propFont = "Roboto Cn"
#fixedFont = "Ubuntu Mono"
#labelFont = "Roboto"
#genFont = "Roboto Cn"
#forceLongWindowTitle = "true"
#showListboxKeyAlways = "true"
#listboxSortByKey = "true"
#overwrite = "false"
Does anyone have any suggestion about this problem?
I tried on a Windows Virtual Machine and everything went fine.
I realize this is a really old question, but just in case you still have this issue (I have it on my Mac all the time), add the following two entries to the bottom of that settings file:
#INCLUDE = "file:///home/dfabbri/.SAPGUI/connections"
#MESSAGESERVER = "file:///home/dfabbri/.SAPGUI/message_servers"
Then create the two files above in that path (connections should already be there; message_servers can be empty). Hope this helps.
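For example, using the paths from the entries above:
touch /home/dfabbri/.SAPGUI/message_servers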