Just started using LogParser. An existing system uses Log Parser to read the IIS log files and update the DB to calculate hits, etc.
I am trying to understand the flow, and I need to extract two more new fields from the IIS log and update the DB.
On my local desktop I have a sample log file and Log Parser. I tried this query: LogParser.exe “Select top 10 * from c:\LogParser*.log” and got Error: detected extra argument "top" after query. Why can't I read a log file that exists on my local machine?
I also have the batch file that runs in production. I changed the path to access my desktop files and scheduled a Windows task. It is also not working. The code is:
logparser file:Extract.sql?inputfile=c:\LogParser*.log -o:SQL -database:dbname -server:test1 -username:username -password:password -createtable:OFF -maxStrFieldLen:2048 -clearTable:OFF
I just need to simulate the existing system to update the database, and then add the new fields.
Please help me to go further. I am really stuck.
I am not sure if this will solve your problem, but you can try your hand at Log Parser Studio - it gives an IDE to the traditional Log Parser.
It is definitely easier to catch common mistakes there, and you have help/documentation at your disposal. You can get more info and download it from here.
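One thing I noticed: the quotes around your query are curly “smart” quotes, and the Windows shell does not treat those as quote characters, so Log Parser sees top as a separate argument - that is a common cause of the "detected extra argument" error. A minimal sketch with plain straight quotes, keeping your path exactly as you posted it and assuming the IIS W3C input format:
LogParser.exe -i:IISW3C "SELECT TOP 10 * FROM c:\LogParser*.log"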
Hope it helps!
I want to schedule a job in SAS DI Studio. I tried the process using SAS Management Console, but an error pops up saying the scheduling server was not found, and the scheduling server is an extra package that has to be purchased.
Because of that, I need to run it from crontab on Linux.
Do I need to export my job from SAS Data Integration Studio into .sas format, or can I execute it using the .spk file format?
If I need the .sas format, please tell me how to generate it, since I can't convert it. Thank you.
A *.spk file contains much more information than your SAS code. It also contains information on the transformations and options you used, as well as the graphical layout of your job.
A simple way to grab your SAS code is to open the properties of your job and select the tab with the code. However, that is not the way you should do it if you want to schedule your job.
What you should do is create a "Deployed Job" object from your job. This has the advantage that your successor, or you yourself when you come back to the project in a year, can find out where you deployed the code today.
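Once you have the deployed .sas file on the Linux box, the crontab entry just needs to invoke the SAS executable on it. A minimal sketch, where the SAS installation path and the job/log locations are assumptions you will need to adapt:
# run the deployed job every night at 02:00 (paths are examples)
0 2 * * * /opt/sas/SASFoundation/9.4/sas /sas/deployed_jobs/my_job.sas -log /sas/logs/my_job.log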
My goal is to collect some custom logs in Azure Monitor from an external VM running Linux. To that end, I've installed the Log Analytics agent according to the official MS documentation, and I ran the wizard to set up a custom log - that includes a sample file, a row delimiter, and a location from which to collect the logs. However, I'm getting a warning message saying:
Two successive configuration applications from OMS Settings failed – please report issue to github.com/Microsoft/PowerShell-DSC-for-Linux/issues (1)
I tried to follow the proposed link, which points to GitHub, where I wasn't able to find any solution (nor via any other link), and that's why I decided to give it a chance and ask the community here.
It is weird, though, that the machine's heartbeat and manual syslog messages are being collected - just not the custom logs.
Has anyone encountered this and managed to get past it? Thanks
Apparently, according to the MS answer, it is normal for the above warning message to be displayed. However, the reason the logs were not being collected was something else: you need to keep appending new entries to the target file that the OMS agent processes, because that is what triggers the agent, which compares and checks whether the file has new entries since its last check.
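For example, to verify the pipeline end to end, you can append a fresh line to the monitored file by hand (the path here is a made-up example; use the location you configured in the wizard):
echo "$(date '+%Y-%m-%d %H:%M:%S') test entry" >> /var/log/myapp/custom.log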
Hope this will help someone!
After going through a Windows 10 re-installation because a Windows update crashed my laptop, I was left with re-installing many applications, one of them being Node.js. When I tried to install it through the Windows installer, I kept getting a 'setup wizard ended prematurely because of an error' message. I am not sure what the problem is. I used the x64 version, which matches my OS, and there is no nodejs folder in Program Files. When I logged the installation, the message 'has no eligible binary patches' popped up in a lot of the lines. Before the 'no eligible' lines there were error logs such as:
'WixSchedInternetShortcuts: Error 0x8007000d: failed to add temporary row, dberr: 1, err: Directory_'
'WixSchedInternetShortcuts: Folder 'ApplicationProgramsFolder' already exists in the CreateFolder table; the above error is harmless'
If that is not enough information, please advise me on how to send the full logs without spamming huge text in the thread. Thank you.
The MSI log file:
https://gist.github.com/luki2000/ab00476127d54aaf610d8bda84d40a64
Maybe try searching the log for "value 3", as explained by Rob Mensching in his blog. Doing so will find the locations in the log file that describe errors of significance.
Many people use Dropbox, Google Drive, or similar to post logs. Some put it on GitHub (just a sample log for the OP, left in for reference). Check that last link - is that perhaps the same problem you see? (Search for "value 3" as explained above - without the quotes, of course.) It looks like there is an error creating an Internet shortcut. Perhaps that is a Windows 10 problem? I will take a quick look.
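If you prefer searching from the command line, findstr can do it - assuming you saved the verbose log locally as, say, %TEMP%\node_install.log:
findstr /n /i /c:"value 3" "%TEMP%\node_install.log"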
I am betting Bob Arnson knows what this problem is outright. He will probably give us the real answer; see below for my workaround.
The correct thing to do overall would probably be to communicate the problem back to the Node.js guys so they can fix it once and for all.
UPDATE: Maybe see if this answer helps you: node.js installer failing with 'CAQuietExec Failed' and 1603 error code on Windows 7. Essentially, un-check Event tracing (ETW) in the setup's feature dialog - or you can try to launch the MSI from an elevated command prompt.
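To launch it from an elevated prompt with verbose logging you can inspect afterwards (the log file name is just an example):
msiexec.exe /i node-v8.11.2-x64.msi /l*v "%TEMP%\node_install.log"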
UPDATE: There seem to be two Internet shortcuts configured for this MSI in the WixInternetShortcut table. I would just create a transform to remove these two shortcuts and try a reinstall. If you feel bold and fearless and like to break the law, you can delete the two rows from the table and save directly to the MSI itself. This is never the right thing to do if you are a deployment specialist - the original MSI is sacred - but if this is for your own system and you need to get something done, it will work. Then you just install the modified MSI directly afterwards. Otherwise you can install with the transform after creating it, using a simple command line:
msiexec.exe /i node-v8.11.2-x64.msi TRANSFORMS="C:\MyTransform"
You can create the transform using Orca, InstEd, SuperOrca, or any commercial tool that supports creating transforms.
In case you don't know, transforms are little database fragments that are applied to the original MSI (which is also a database under the hood). After the transform is applied, the in-memory version of the MSI is the MSI plus the changes from the transform.
I'm using Liferay Portal Community Edition 6.1.
I'm trying to export a portal in order to move the content to another instance of Liferay. However, when trying to export the portal content, the export fails with the message 'Your request failed to complete.'
When looking at the log files, there is no sign of anything going wrong.
Could someone please explain to me what could possibly be wrong, and where I should get the information about what is failing on the server?
Marko
I've been digging into similar issues with the LAR import/export system myself. There isn't much logging in that code, unfortunately. I ended up enabling remote debugging in the server's JVM and stepping through the problematic tasks with the Eclipse debugger.
One of the main issues I've had with Liferay is that it swallows detailed exceptions, doesn't log them, and waters them down into very generic error messages by the time they're displayed to you in the UI. Using the debugger really helps because you can often see the original exception without having to dig too deeply into the code.
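For reference, remote debugging is enabled with the standard JDWP agent flag. A sketch for the Tomcat bundle - add this to tomcat/bin/setenv.sh, where port 8000 is an arbitrary choice - then attach Eclipse to that port:
CATALINA_OPTS="$CATALINA_OPTS -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8000"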
How's your free disk space looking on the server? I believe Liferay saves the LAR to a temp file as it works on it (at least, it does as part of the publishing process). On our test site, with only ten pages and almost no content, that temp file is still 3.5 MB. If your site is really big, you might just be running out of space.
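A quick way to check on Linux, assuming the temp file goes to the default temp directory (adjust if your app server's java.io.tmpdir points elsewhere):
df -h /tmp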
I am trying to create an instance using the Configuration Manager of WCS 7. I am working on a Win 7 x64 machine with the DB2 9.5 64-bit version.
I am stuck with this massloading error when the instance creation happens:
In the createInstanceANT.log file:
[Massload] Massloading C:\IBM\WebSphere\CommerceServer\schema\xml\wcs.keys.xml
Error in MassLoading, please check logs for details.
The error log shows the following error:
[jcc][10165][10044][4.3.111] Invalid database URL syntax: jdbc:db2://:0/WCSDEMO. ERRORCODE=-4461, SQLSTATE=42815
C:\IBM\WEBSPH~1\COMMER~2\config\DEPLOY~1\xml\createBaseSchema.xml:185: Error in massloading
WCSDEMO is the database name. The massloader is not able to get the URL and port to connect; you can see the empty host and port 0 in the JDBC URL above. It is supposedly getting them from the createInstance.properties file, but that is not working, even though createInstance.properties has all the details needed to connect to the DB.
What could be the reason for this error, and how do I resolve it? Is there a configuration change that I am missing?
Can you provide some more details?
Look inside the messages.txt file located in WC_install_dir/instances/instance_name/logs
and confirm what the exact issue is. If it is related to the JDBC driver being wrong, I may be able to help you.
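For example, from a command prompt, something like this would show the JDBC-related lines (the instance name demo is a placeholder):
findstr /i "jdbc" C:\IBM\WebSphere\CommerceServer\instances\demo\logs\messages.txt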
I've been running into massloading problems with external systems, e.g. databases not on the same machine as the WAS installation.
In those cases I look for the loaderDBName setting in the scripts. As you can see, setting loaderDBName to just the name of the database makes it look on the local machine. But change this statement so you load with the syntax
loaderDBName=[DATABASE_SERVER_NAME]:[PORT]/[DATABASE_NAME]
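For example, with a hypothetical host dbserver01 and DB2's default port 50000, that would look like:
loaderDBName=dbserver01:50000/WCSDEMO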
With that change you'll be able to massload using the standard commerce scripts. These changes need to be made in many scripts, both for applying fixpacks and for enabling features. If you run the database updates without these changes, the update will crash at first while having already made schema changes to the database, and you then need to comment those out before trying again.
IBM Software Support is your friend. They'll help you fix it.