Jest test coverage for a node_modules package

I am writing test cases for my node_modules package by importing the package directly. For example:
import { xyz } from 'abc';

test('xyz returns its input unchanged', () => {
  expect(xyz(1)).toEqual(1);
});
When I run the tests with coverage, the report shows:
Statements : Unknown% ( 0/0 )
Branches : Unknown% ( 0/0 )
Functions : Unknown% ( 0/0 )
Lines : Unknown% ( 0/0 )
How do I get the coverage file listing?
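For what it's worth, a 0/0 report usually means Jest found no files to collect coverage from; by default Jest's coveragePathIgnorePatterns excludes everything under /node_modules/. A minimal jest.config.js sketch that pulls the installed package back into coverage (the package name abc and its .js file layout are assumptions):

// jest.config.js -- sketch only; assumes the code under test lives in node_modules/abc
module.exports = {
  collectCoverage: true,
  // Ask Jest to collect coverage from the package's shipped sources.
  collectCoverageFrom: ['node_modules/abc/**/*.js'],
  // Default is ['/node_modules/']; narrow it so 'abc' is no longer ignored.
  coveragePathIgnorePatterns: ['/node_modules/(?!abc/)'],
};

If the package ships code that Jest's transform ignores, transformIgnorePatterns may need a similar carve-out. Running npx jest --coverage should then print the per-file listing.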

Related

AWS CDK / NodejsFunction: beforeInstall, beforeBundling, afterBundling

I'm working on a project with AWS infrastructure. I use aws-cdk-lib for IaC to simplify the whole process.
The flow is the following:
A user uploads a photo/video to an S3 bucket.
A lambda is triggered to compress the file with sharp and store it in another S3 bucket with a short TTL.
An S3 notification event triggers another lambda, which uploads the S3 object to a Storj bucket.
So, I need to use uplink-nodejs library to store images and videos on the Storj DCS service.
The issue is that, in order to install uplink-nodejs, I need the make command installed, and I need to copy ./node_modules/uplink-nodejs/* to / so that the uplink command is available to the system.
But I can't find a way to install all the required dependencies and run the required commands.
I've already tried multiple solutions:
Use the NodejsFunction bundling options (beforeInstall, beforeBundling, afterBundling).
Create a .dockerfile in my lambda folder.
The first solution throws an error for the su and sudo commands, which prevents me from installing make.
The second solution doesn't seem specifically relevant to my situation.
Does someone have an idea of what's happening?
Here's the AWS CDK NestedStack for the ContentService with the lambdas:
import * as cdk from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as dynamodb from 'aws-cdk-lib/aws-dynamodb';
import * as s3 from 'aws-cdk-lib/aws-s3';
import * as path from 'path';
import { Construct } from 'constructs';
import { NodejsFunction } from 'aws-cdk-lib/aws-lambda-nodejs';
import { DockerImage } from 'aws-cdk-lib';
import { LambdaDestination } from 'aws-cdk-lib/aws-s3-notifications';

interface ContentServiceProps extends cdk.NestedStackProps {
  main_table: dynamodb.Table;
  user_content_bucket: s3.Bucket;
}

export class ContentService extends cdk.NestedStack {
  public readonly content_service_handler: NodejsFunction;
  public readonly storj_image_lambda: NodejsFunction;

  constructor(scope: Construct, id: string, props: ContentServiceProps) {
    super(scope, id, props);
    this.storj_image_lambda = new NodejsFunction(this, "storj_image_lambda", {
      entry: path.join(__dirname, '../../lambda-fns/src/functions/storj_upload/index.ts'),
      bundling: {
        nodeModules: ["uplink-nodejs", "node-gyp"],
        forceDockerBundling: true,
        commandHooks: {
          beforeInstall(_inputDir, outputDir) {
            return [
              'export PATH=$PATH:$GOPATH/bin',
              'sudo yum -y install make'
            ];
          },
          beforeBundling(inputDir, outputDir) {
            return [''];
          },
          afterBundling(inputDir, outputDir) {
            return [`cp -r ./node_modules/uplink-nodejs /usr`];
          }
        }
      }
    });
    this.content_service_handler = new NodejsFunction(this, "content_service_handler", {
      runtime: lambda.Runtime.NODEJS_16_X,
      handler: 'handler',
      entry: path.join(__dirname, "../../lambda-fns/src/functions/content/index.ts"),
      environment: {
        TABLE_NAME: props.main_table.tableName,
        USER_CONTENT: props.user_content_bucket.bucketName,
        STORJ_LAMBDA: this.storj_image_lambda.functionName
      }
    });
    props.main_table.grantFullAccess(this.content_service_handler);
    props.user_content_bucket.grantRead(this.content_service_handler);
    props.user_content_bucket.addEventNotification(s3.EventType.OBJECT_CREATED, new LambdaDestination(this.storj_image_lambda));
  }
}
And here are the logs I get when I try to deploy my stack:
Bundling asset MainStack/content_service/storj_image_lambda/Code/Stage...
asset-output/index.js 1.5kb
⚡ Done in 21ms
bash: sudo: command not found
/Users/thomgeenen/Git/nude_safer_cdk/node_modules/aws-cdk-lib/core/lib/asset-staging.js:2
[... several KB of minified aws-cdk-lib source elided ...]
Error: Failed to bundle asset MainStack/content_service/storj_image_lambda/Code/Stage, bundle output is located at /Users/thomgeenen/Git/nude_safer_cdk/cdk.out/bundling-temp-4e95a752d2aca74dd84aa402102b31591eb7ee9ec0a94b306360353a61e924f0-error: Error: docker exited with status 127
at AssetStaging.bundle (/Users/thomgeenen/Git/nude_safer_cdk/node_modules/aws-cdk-lib/core/lib/asset-staging.js:2:614)
at AssetStaging.stageByBundling (/Users/thomgeenen/Git/nude_safer_cdk/node_modules/aws-cdk-lib/core/lib/asset-staging.js:1:4506)
at stageThisAsset (/Users/thomgeenen/Git/nude_safer_cdk/node_modules/aws-cdk-lib/core/lib/asset-staging.js:1:1867)
at Cache.obtain (/Users/thomgeenen/Git/nude_safer_cdk/node_modules/aws-cdk-lib/core/lib/private/cache.js:1:242)
at new AssetStaging (/Users/thomgeenen/Git/nude_safer_cdk/node_modules/aws-cdk-lib/core/lib/asset-staging.js:1:2262)
at new Asset (/Users/thomgeenen/Git/nude_safer_cdk/node_modules/aws-cdk-lib/aws-s3-assets/lib/asset.js:1:736)
at AssetCode.bind (/Users/thomgeenen/Git/nude_safer_cdk/node_modules/aws-cdk-lib/aws-lambda/lib/code.js:1:4628)
at new Function (/Users/thomgeenen/Git/nude_safer_cdk/node_modules/aws-cdk-lib/aws-lambda/lib/function.js:1:2803)
at new NodejsFunction (/Users/thomgeenen/Git/nude_safer_cdk/node_modules/aws-cdk-lib/aws-lambda-nodejs/lib/function.js:1:1171)
at new ContentService (/Users/thomgeenen/Git/nude_safer_cdk/lib/services/content/index.ts:24:35)
Detected file changes during deployment. Invoking 'cdk deploy' again
[+] Building 0.5s (13/13) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 37B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for public.ecr.aws/sam/build-nodejs14.x:latest 0.4s
=> [1/9] FROM public.ecr.aws/sam/build-nodejs14.x@sha256:cfe32c14b97a6d5d7128f4e110c6d11b80bf89c0cca8a3493879f8c587627655 0.0s
=> CACHED [2/9] RUN npm install --global yarn@1.22.5 0.0s
=> CACHED [3/9] RUN npm install --global pnpm 0.0s
=> CACHED [4/9] RUN npm install --global typescript 0.0s
=> CACHED [5/9] RUN npm install --global --unsafe-perm=true esbuild@0 0.0s
=> CACHED [6/9] RUN mkdir /tmp/npm-cache && chmod -R 777 /tmp/npm-cache && npm config --global set cache /tmp/npm-cache 0.0s
=> CACHED [7/9] RUN mkdir /tmp/yarn-cache && chmod -R 777 /tmp/yarn-cache && yarn config set cache-folder /tmp/yarn-cache 0.0s
=> CACHED [8/9] RUN npm config --global set update-notifier false 0.0s
=> CACHED [9/9] RUN /sbin/useradd -u 1000 user && chmod 711 / 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:c83389551577052395325ed8a0fb51c59639344b8c92baa1656956f23c765d18 0.0s
=> => naming to docker.io/library/cdk-7ef608663d730a301f1ab98604e57fd96273751c63db25a1a75f1390f462c655 0.0s
Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them
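One approach worth sketching (an assumption on my part, not a verified fix): the commandHooks run inside the bundling container as a non-root user, which is why sudo and su fail, but a custom bundling image built from your own Dockerfile executes its RUN steps as root, so make can be baked in at image-build time. A hypothetical Dockerfile (the base image tag is an assumption):

# Dockerfile -- hypothetical bundling image with make preinstalled
FROM public.ecr.aws/sam/build-nodejs16.x
RUN yum -y install make

And pointing the NodejsFunction at it via the dockerImage bundling option (the bundling-image folder name is hypothetical):

// Sketch: reuse the custom image so commandHooks no longer need sudo.
this.storj_image_lambda = new NodejsFunction(this, "storj_image_lambda", {
  entry: path.join(__dirname, '../../lambda-fns/src/functions/storj_upload/index.ts'),
  bundling: {
    nodeModules: ["uplink-nodejs", "node-gyp"],
    forceDockerBundling: true,
    // DockerImage.fromBuild builds the image from a local Dockerfile directory.
    dockerImage: DockerImage.fromBuild(path.join(__dirname, 'bundling-image')),
  },
});

Note, as a caveat rather than a certainty: files copied outside the asset output directory during bundling (such as cp -r ... /usr) live only in the throwaway build container, so they would not be present on the deployed Lambda's filesystem.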

Failing to index CSV-based data in Open Distro Elasticsearch

I am trying to index sample CSV data into Open Distro Elasticsearch but failing to create the index. Could you please tell me what I am missing here?
CSV file to index:
[admin@fedser32 logstashoss-docker]$ cat /tmp/student.csv
"aaa","bbb",27,"Day Street"
"xxx","yyy",33,"Web Street"
"sss","mmm",29,"Adam Street"
logstash.conf
[admin@fedser32 logstashoss-docker]$ cat logstash.conf
input {
  file {
    path => "/tmp/student.csv"
    start_position => "beginning"
  }
}
filter {
  csv {
    columns => ["firstname", "lastname", "age", "address"]
  }
}
output {
  elasticsearch {
    hosts => ["https://fedser32.stack.com:9200"]
    index => "sampledata"
    ssl => true
    ssl_certificate_verification => false
    user => "admin"
    password => "admin#1234"
  }
}
My Opendistro cluster is listening on 9200 as shown below.
[admin@fedser32 logstashoss-docker]$ curl -X GET -u admin:admin#1234 -k https://fedser32.stack.com:9200
{
  "name" : "odfe-node1",
  "cluster_name" : "odfe-cluster",
  "cluster_uuid" : "5GOEtg12S6qM5eaBkmzUXg",
  "version" : {
    "number" : "7.10.0",
    "build_flavor" : "oss",
    "build_type" : "tar",
    "build_hash" : "51e9d6f22758d0374a0f3f5c6e8f3a7997850f96",
    "build_date" : "2020-11-09T21:30:33.964949Z",
    "build_snapshot" : false,
    "lucene_version" : "8.7.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
As per the logs, Logstash does indicate that it is able to find the CSV file, as shown below.
logstash_1 | [2022-03-03T12:11:44,716][INFO ][logstash.outputs.elasticsearch][main] Index Lifecycle Management is set to 'auto', but will be disabled - Index Lifecycle management is not installed on your Elasticsearch cluster
logstash_1 | [2022-03-03T12:11:44,716][INFO ][logstash.outputs.elasticsearch][main] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
logstash_1 | [2022-03-03T12:11:44,725][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>125, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf"], :thread=>"#<Thread:0x5c537d14 run>"}
logstash_1 | [2022-03-03T12:11:45,439][INFO ][logstash.javapipeline ][main] Pipeline Java execution initialization time {"seconds"=>0.71}
logstash_1 | [2022-03-03T12:11:45,676][INFO ][logstash.inputs.file ][main] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/usr/share/logstash/data/plugins/inputs/file/.sincedb_20d37e3ca625c7debb90eb1c70f994d6", :path=>["/tmp/student.csv"]}
logstash_1 | [2022-03-03T12:11:45,697][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
logstash_1 | [2022-03-03T12:11:45,738][INFO ][filewatch.observingtail ][main][2f140d63e9cab8ddc711daddee17a77865645a8de3d2be55737aa0da8790511c] START, creating Discoverer, Watch with file and sincedb collections
logstash_1 | [2022-03-03T12:11:45,761][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
logstash_1 | [2022-03-03T12:11:45,921][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
Could you check the access rights on the /tmp/student.csv file? It must be readable by the logstash user.
Check with this command:
# ls -l /tmp
Otherwise, if you have already indexed the file path, you have to clean up the sincedb.
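For testing, a common sketch (generic Logstash advice, not specific to this setup) is to point the file input's sincedb at /dev/null so read positions are forgotten and the file is re-read on every restart:

input {
  file {
    path => "/tmp/student.csv"
    start_position => "beginning"
    # Testing only: discard read positions so the file is re-read on restart.
    sincedb_path => "/dev/null"
  }
}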
The thing that I was missing is that I had to volume-mount my CSV file into the Logstash container, as shown below, after which I was able to index my CSV data.
[admin@fedser opensearch-logstash-docker]$ cat docker-compose.yml
version: '2.1'
services:
  logstash:
    image: opensearchproject/logstash-oss-with-opensearch-output-plugin:7.16.2
    ports:
      - "5044:5044"
    volumes:
      - $PWD/logstash.conf:/usr/share/logstash/pipeline/logstash.conf
      - $PWD/student.csv:/tmp/student.csv
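Once the mount is in place, a quick sanity check (a sketch reusing the host and credentials shown above; _count is a standard Elasticsearch endpoint) confirms documents actually landed in the index:

[admin@fedser opensearch-logstash-docker]$ curl -k -u admin:admin#1234 "https://fedser32.stack.com:9200/sampledata/_count?pretty"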

Trouble running ORC JIT llvm-hs examples

Dear Stack Overflow Haskellers:
I tried asking this same question on reddit's r/haskell but sadly got no answers at all. I hope it does better here.
Maybe the answer is that llvm-hs isn't used that much. If you have experience using LLVM with Haskell but don't use llvm-hs, I'd love to see how you use it.
I'm trying to run basic llvm-hs ORC JIT functionality, which has led me to try to run the orc example from llvm-hs-examples. So far I've been unsuccessful in multiple ways (out of the four examples, only the first one, basic, runs):
Trying to run examples with the shell.nix in https://github.com/llvm-hs/llvm-hs:
$ git clone https://github.com/llvm-hs/llvm-hs.git
$ git clone https://github.com/llvm-hs/llvm-hs-examples.git
$ cd llvm-hs
$ nix-shell shell.nix
$ cd ../llvm-hs-examples
$ cabal new-build
$ cabal run orc
which produces:
$ cabal run orc
Up to date
; ModuleID = 'basic'
source_filename = "<string>"
define i32 @add() {
entry:
ret i32 42
}
JITSymbolError ""
Eager JIT Result:
()
Writing a default.nix à la haskell.nix and running with nix-build:
llvm-hs-examples/default.nix:
let
  examplesOverlays = [ (self: super: {
    llvm-config = self.llvm_9;
  }) ];
in
{ # Fetch the latest haskell.nix and import its default.nix
  haskellNix ? import (builtins.fetchTarball "https://github.com/input-output-hk/haskell.nix/archive/ef6ca0f431fe3830c25cb2d185367245c1cce894.tar.gz") {}
  # haskellNix ? import (builtins.fetchTarball "https://github.com/input-output-hk/haskell.nix/archive/c88f9eccc975b21ae1e6a6b8057a712b91e374f2.tar.gz") {}
  # haskellNix ? import (builtins.fetchTarball "https://github.com/input-output-hk/haskell.nix/archive/master.tar.gz") {}
  # haskell.nix provides access to the nixpkgs pins which are used by our CI,
  # hence you will be more likely to get cache hits when using these.
  # But you can also just use your own, e.g. '<nixpkgs>'.
, nixpkgsSrc ? haskellNix.sources.nixpkgs-2003
  # haskell.nix provides some arguments to be passed to nixpkgs, including some
  # patches and also the haskell.nix functionality itself as an overlay.
, nixpkgsArgs ? haskellNix.nixpkgsArgs
  # import nixpkgs with overlays
, pkgs ? (import nixpkgsSrc (nixpkgsArgs // { overlays = nixpkgsArgs.overlays ++ examplesOverlays; }))
  # , pkgs ? import nixpkgsSrc nixpkgsArgs
}: pkgs.haskell-nix.project {
  # 'cleanGit' cleans a source directory based on the files known by git
  src = pkgs.haskell-nix.haskellLib.cleanGit {
    name = "examples";
    src = ./.;
  };
  # For `cabal.project` based projects specify the GHC version to use.
  # compiler-nix-name = "ghc884"; # Not used for `stack.yaml` based projects.
}
In the llvm-hs-examples directory I ran:
$ nix-build -A examples.components.exes.orc
$ ./result/bin/orc
; ModuleID = 'basic'
source_filename = "<string>"
define i32 @add() {
entry:
ret i32 42
}
JITSymbolError ""
Eager JIT Result:
()
which is the same output as before.
I believe the second (2) approach uses stack to build, but I also tried manually with stack:
a) first failed approach:
In a nix-shell with llvm-config (I used the one in llvm-hs and the one I created with haskell.nix, but same result):
$ llvm-config --version
9.0.1
$ stack build examples:orc
No packages found in snapshot which provide a "llvm-config" executable, which is a build-tool dependency of llvm-hs
llvm-hs > configure
llvm-hs > [1 of 2] Compiling Main ( /run/user/1000/stack-2e6b46f4d38b8260/llvm-hs-9.0.1/Setup.hs, /run/user/1000/stack-2e6b46f4d38b8260/llvm-hs-9.0.1/.stack-work/dist/x86_64-linux-nix/Cabal-2.4.0.1/setup/Main.o )
llvm-hs > [2 of 2] Compiling StackSetupShim ( /home/hhefesto/.stack/setup-exe-src/setup-shim-mPHDZzAJ.hs, /run/user/1000/stack-2e6b46f4d38b8260/llvm-hs-9.0.1/.stack-work/dist/x86_64-linux-nix/Cabal-2.4.0.1/setup/StackSetupShim.o )
llvm-hs > Linking /run/user/1000/stack-2e6b46f4d38b8260/llvm-hs-9.0.1/.stack-work/dist/x86_64-linux-nix/Cabal-2.4.0.1/setup/setup ...
llvm-hs > setup: The program 'llvm-config' version ==9.0.* is required but it could not
llvm-hs > be found.
llvm-hs >
b) A stack approach that builds successfully: I also tried stack's nix integration, explicitly specifying llvm-config as part of buildInputs:
stack.yaml:
# resolver: nightly-2020-01-30
resolver: lts-14.0
packages:
  - '.'
extra-deps:
  - llvm-hs-9.0.1
  - llvm-hs-pure-9.0.0
  - llvm-hs-pretty-0.9.0.0
flags:
  llvm-hs:
    shared-llvm: true
nix:
  enable: true
  shell-file: stackShell.nix
stackShell.nix:
let
  examplesOverlays = [ (self: super: {
    llvm-config = self.llvm_9;
  }) ];
in
{ haskellNix ? import (builtins.fetchTarball "https://github.com/input-output-hk/haskell.nix/archive/ef6ca0f431fe3830c25cb2d185367245c1cce894.tar.gz") {}
, nixpkgsSrc ? haskellNix.sources.nixpkgs-2003
, nixpkgsArgs ? haskellNix.nixpkgsArgs
, pkgs ? (import nixpkgsSrc (nixpkgsArgs // { overlays = nixpkgsArgs.overlays ++ examplesOverlays; }))
}:
with pkgs;
haskell.lib.buildStackProject {
  name = "llvm-hs";
  buildInputs = [ llvm-config ];
  inherit ghc;
}
This built successfully, but gave the same error:
$ stack build examples:orc ghc-shell-for-examples
examples> configure (exe)
Configuring examples-1.0.0.0...
examples> build (exe)
Preprocessing executable 'orc' for examples-1.0.0.0..
Building executable 'orc' for examples-1.0.0.0..
examples> copy/register
Installing executable orc in /home/hhefesto/src/llvm-hs-examples/.stack-work/install/x86_64-linux-nix/2bab12248a943811dbc5aa23a88b887ce7aef1939551d21b69d96e3f64bfcbd7/8.6.5/bin
Installing executable basic in /home/hhefesto/src/llvm-hs-examples/.stack-work/install/x86_64-linux-nix/2bab12248a943811dbc5aa23a88b887ce7aef1939551d21b69d96e3f64bfcbd7/8.6.5/bin
Installing executable arith in /home/hhefesto/src/llvm-hs-examples/.stack-work/install/x86_64-linux-nix/2bab12248a943811dbc5aa23a88b887ce7aef1939551d21b69d96e3f64bfcbd7/8.6.5/bin
Installing executable irbuilder in /home/hhefesto/src/llvm-hs-examples/.stack-work/install/x86_64-linux-nix/2bab12248a943811dbc5aa23a88b887ce7aef1939551d21b69d96e3f64bfcbd7/8.6.5/bin
$ /home/hhefesto/src/llvm-hs-examples/.stack-work/install/x86_64-linux-nix/2bab12248a943811dbc5aa23a88b887ce7aef1939551d21b69d96e3f64bfcbd7/8.6.5/bin/orc ghc-shell-for-examples
; ModuleID = 'basic'
source_filename = "<string>"
define i32 @add() {
entry:
ret i32 42
}
JITSymbolError ""
Eager JIT Result:
()
A weird thing worth noting is that stack was unable to find what it had just built when I ran stack exec examples:orc:
$ stack exec examples:orc
Executable named examples:orc not found on path: ["/home/hhefesto/src/llvm-hs-examples/.stack-work/install/x86_64-linux-nix/2bab12248a943811dbc5aa23a88b887ce7aef1939551d21b69d96e3f64bfcbd7/8.6.5/bin","/home/hhefesto/.stack/snapshots/x86_64-linux-nix/2bab12248a943811dbc5aa23a88b887ce7aef1939551d21b69d96e3f64bfcbd7/8.6.5/bin","/home/hhefesto/.stack/compiler-tools/x86_64-linux-nix/ghc-8.6.5/bin","/nix/store/9wvsbqr57k9n6d8vv6b10d04j51f9ims-ghc-8.6.5/bin","/nix/store/4xb9z8vvk3fk2ciwqh53hzp72d0hx1da-bash-interactive-4.4-p23/bin","/nix/store/9wvsbqr57k9n6d8vv6b10d04j51f9ims-ghc-8.6.5/bin","/nix/store/m6h7zh8w6s52clnyskffj5lbkakqgywn-gcc-wrapper-9.2.0/bin","/nix/store/b3zsk4ihlpiimv3vff86bb5bxghgdzb9-gcc-9.2.0/bin","/nix/store/0k65d30z9xsixil10yw3bwajbdk4yskv-glibc-2.30-bin/bin","/nix/store/x0jla3hpxrwz76hy9yckg1iyc9hns81k-coreutils-8.31/bin","/nix/store/n48b8n251dwwb04q7f3fwxdmirsakllz-binutils-wrapper-2.31.1/bin","/nix/store/hrkc2sf2883l16d5yq3zg0y339kfw4xv-binutils-2.31.1/bin","/nix/store/0k65d30z9xsixil10yw3bwajbdk4yskv-glibc-2.30-bin/bin","/nix/store/x0jla3hpxrwz76hy9yckg1iyc9hns81k-coreutils-8.31/bin","/nix/store/6dacwd7ldb2jazc218d11v2w2g55hba8-pkg-config-0.29.2/bin","/nix/store/lb61dshvvqy1rgjhhlzaiiv2fv157lr5-stack-2.1.3.1/bin","/nix/store/71n1xcigc00w3z7yc836jqcx9cb2dys8-patchelf-0.9/bin","/nix/store/xhhkr936b9q5sz88jp4l29wljbbcg39k-ncurses-6.1-20190112/bin","/nix/store/khqyxflp8wbq038wdyv5sr8sjsfwlr72-llvm-9.0.1/bin","/nix/store/84g84bg47xxg01ba3nv0h418v5v3969n-ncurses-6.1-20190112-dev/bin","/nix/store/xhhkr936b9q5sz88jp4l29wljbbcg39k-ncurses-6.1-20190112/bin","/nix/store/x0jla3hpxrwz76hy9yckg1iyc9hns81k-coreutils-8.31/bin","/nix/store/97vambzyvpvrd9wgrrw7i7svi0s8vny5-findutils-4.7.0/bin","/nix/store/dqq1bvpi3g0h4v05111b3i0ymqj4v5x1-diffutils-3.7/bin","/nix/store/p34p7ysy84579lndk7rbrz6zsfr03y71-gnused-4.8/bin","/nix/store/b0vjq4r4sp9z4l2gbkc5dyyw5qfgyi3r-gnugrep-3.4/bin","/nix/store/c8balm59sxfkw9ik1fqbkadsvjqhmbx4-gawk-5.0.1/bin","/nix/store/g7dr83wnkx4gxa5ykcljc5jg04416z60-gnutar-1.32/bin","/nix/store/kkvgr3avpp7yd5hzmc4syh43jqj03sgb-gzip-1.10/bin","/nix/store/rw96psqzgyqrcd12qr6ivk9yiskjm3ab-bzip2-1.0.6.0.1-bin/bin","/nix/store/dp6y0n9cba79wwc54n1brg7xbjsq5hka-gnumake-4.2.1/bin","/nix/store/hrpvwkjz04s9i4nmli843hyw9z4pwhww-bash-4.4-p23/bin","/nix/store/xac1zfclx1xxgcd84vqb6hy3apl171n8-patch-2.7.6/bin","/nix/store/mm0w8jc58rn01c4kz2n9jvwd6bibcihs-xz-5.2.4-bin/bin"]
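(As an aside, and assuming standard stack CLI semantics: the package:exe target syntax is for stack build, while stack exec takes the name of an executable found on the PATH, so the invocation below would be the expected one.)

$ stack exec orc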
Other things worth noting: I tried everything stack-related with two resolvers: lts-14.0, which was already there, and nightly-2020-01-30, which was recently added in a commit to llvm-hs, but got the same result.
I also tried changing the llvm-hs version constraints in llvm-hs-examples' cabal file to include the latest 9.0.1, but same result.
I'm on NixOS channel 20.03.
If you want me to share or try something else, please let me know, and thank you for your help.
Lastly, thank you very much for your time!

Grokparsefailure and type problems in logstash configuration file

I have several problems with my configuration file. My goal is to parse three types of logs (for the moment). Here they are:
[29/05/2020 07:41:51.354] - ih912865 - 10.107.119.121 - 93 - Transaction 7635 COMPLETED 318 ms wait time 3183 ms
[29/05/2020 10:30:01.318] - Process status database sync - us1salx08167.corpnet2.com:8400(#52279) (load 0 grace period 5 minutes) : current date 2020/02/02 21:30:01 update date 2020/02/02 21:29:58 old state OK new state OK
31730 31626 464 10980020 52:25 /plw/modules/bin/Lx86_64/opx2-intranet.exe -I /plw/modules/bin/Lx86_64/opx2-intranet.dxl -H /plw/modules/bin/Lx86_64 -L /plw/PLW_PROD/modules/preload-intranet.ini -- plw-sysconsole -port 8400 -logdir /plw/PLW_PROD/httpdocs/admin/log/ -slaves 2
Two of these logs can appear in slave files named intranet-2020-06-25-8401.log or intranet-2020-06-25-8400.log; the last one appears in a master file named intranet-2020-06-25-8402.log.
For my tests I simplified the architecture of my log files, so I have a Log-test folder in which I put a slave file and a master file.
In these files I only put the corresponding logs and a different log to be able to see how to manage this case.
Here is the content of a "slave" :
[29/05/2020 07:41:51.354] - ih912865 - 10.107.199.125 - 93 - Transaction 7635 COMPLETED 318 ms wait time 3183 ms
[29/05/2020 10:30:01.318] - Process status database sync - us1salx08167.corpnet2.com:8400(#52279) (load 0 grace period 5 minutes) : current date 2020/02/02 21:30:01 update date 2020/02/02 21:29:58 old state OK new state OK
[29/05/2020 13:49:20.635] - Main process - Transaction SYSTEM 105238-12 SQL done 1 ms
Here is the content of a "master" :
31730 31626 464 10980020 52:25 /plw/modules/bin/Lx86_64/opx2-intranet.exe -I /plw/modules/bin/Lx86_64/opx2-intranet.dxl -H /plw/modules/bin/Lx86_64 -L /plw/PLW_PROD/modules/preload-intranet.ini -- plw-sysconsole -port 8400 -logdir /plw/PLW_PROD/httpdocs/admin/log/ -slaves 2
[26/06/2020 21:38:01.386] - Main process - Starting HTTP service on port 8402 (socket #<MULTIVALENT stream socket waiting for connection at */8402 # #x1022d2ddbb2>)
Now that you have a better understanding of my environment and my purpose, here's the problem. When I run my Logstash configuration, I do retrieve my data in Kibana. But Kibana shows that every log has been treated as coming from a slave file, even though I also have a log coming from a master file, which should not get the same processing.
For a better understanding, here is my configuration file:
input {
  file {
    path => "/home/mathis/Documents/**/intranet*.log"
    exclude => "*8402.log"
    sincedb_path => '/dev/null'
    start_position => beginning
    type => "slave"
  }
  file {
    path => "/home/mathis/Documents/**/intranet*8402.log"
    sincedb_path => '/dev/null'
    type => "master"
  }
}
filter {
  if [type] == "slave" {
    grok {
      match => { "message" => ["\[%{DATESTAMP:eventtime}\] \- %{USERNAME:user} \- %{IPV4:clientip} \- %{NUMBER} \- %{WORD} %{NUMBER:exectime} %{WORD} %{NUMBER:time} %{GREEDYDATA:data} %{NUMBER:waittime}","\[%{DATESTAMP:eventtime}\] \- Process status database sync \- %{WORD}\.%{WORD}\.%{WORD}\:%{NUMBER:slavenumb}\(\#%{NUMBER}\) \(load %{NUMBER:nbutilisateur} grace period 5 minutes\) %{GREEDYDATA}"] }
      remove_field => "message"
    }
    date {
      match => [ "eventtime", "dd/MM/YYYY HH:mm:ss.SSS" ]
      target => "@timestamp"
    }
  }
  if [type] == "master" {
    grok {
      match => {"message" => ["%{NUMBER}%{SPACE}%{NUMBER}%{SPACE}%{NUMBER}%{SPACE}%{NUMBER}%{SPACE}(?<starttime>((?!<[0-9])%{HOUR}:)?%{MINUTE}(?::%{SECOND})(?![0-9]))"]}
      remove_field => "message"
    }
    date {
      match => [ "starttime", "HH:mm:ss","mm:ss" ]
    }
  }
}
output {
  elasticsearch {
    hosts => "127.0.0.1:9200"
    index => "logstash-local3-%{+YYYY.MM.dd}"
  }
}
And now this is what Kibana shows me (screenshot not reproduced here):
As you can see, the type field is slave for all logs, but we can also observe that the logs from the slave file "intranet-2020-06-25-8401.log" are correctly parsed, and that the extra log line that does not interest me has the _grokparsefailure tag (the middle line in the picture).
The other problem is that, according to Kibana, the other logs (the first two lines in the image) come from a slave file, which is not true; so I guess they are processed by my first grok, which would explain why they also have the _grokparsefailure tag.
So I guess there are several errors in my input and filter sections. I've been searching for a long time and doing a lot of testing; could you help me fix my config file, please?
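While debugging pipelines like this, one generic technique (a sketch, not a diagnosis of the root cause) is to temporarily add a stdout output with the rubydebug codec, so each event's type and tags can be inspected before it reaches Elasticsearch:

output {
  # Temporary, for debugging: print every event, including its type and tags.
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => "127.0.0.1:9200"
    index => "logstash-local3-%{+YYYY.MM.dd}"
  }
}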

DB2 Connection (Linux) CLI 10.5 - Hangs

We are having an issue where the DB2 connection is successful yet hangs: no errors, and the code never continues to the next line of execution. We broke this down to just running the CLI to perform a basic validate, and it shows the same symptom.
dsadmin@usc1:/opt/ibm/clidriver/bin$ db2cli writecfg add -database DBNAME -host 10.0.0.444 -port 9470 -parameter "SecurityTransportMode=SSL"
dsadmin@usc1:/opt/ibm/clidriver/bin$ db2cli validate -dsn MAX_DEV -connect -user UID -passwd PWD
===============================================================================
Client information for the current copy:
===============================================================================
Client Package Type : IBM Data Server Driver For ODBC and CLI
Client Version (level/bit): DB2 v10.5.0.5 (special_33523/64-bit)
Client Platform : Linux/X8664
Install/Instance Path : /opt/ibm/clidriver
DB2DSDRIVER_CFG_PATH value: <not-set>
db2dsdriver.cfg Path : /opt/ibm/clidriver/cfg/db2dsdriver.cfg
DB2CLIINIPATH value : <not-set>
db2cli.ini Path : /opt/ibm/clidriver/cfg/db2cli.ini
db2diag.log Path : /opt/ibm/clidriver/db2dump/db2diag.log
===============================================================================
db2dsdriver.cfg schema validation for the entire file:
===============================================================================
Success: The schema validation completed successfully without any errors.
===============================================================================
The output of the validate command ends with the "Success" message but never returns to the command line; it just hangs.
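For reference, the trace below was presumably captured with the CLI trace facility; a minimal db2cli.ini sketch to enable it (keyword names from IBM's CLI trace configuration; the trace path is an assumption):

[COMMON]
Trace=1
TracePathName=/tmp/clitrace
TraceFlush=1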
===========================DB2 CLI TRACE==================================
dsadmin@usc1:/opt/ibm/clidriver/bin$ vi 'clitrace;'
[ db2cli.ini Location: /opt/ibm/clidriver/cfg/db2cli.ini ]
[ db2dsdriver.cfg Location: /opt/ibm/clidriver/cfg/db2dsdriver.cfg ]
[ CLI Driver Type: IBM Data Server Driver For ODBC and CLI ]
SQLAllocEnv( phEnv=<NULL pointer> )
---> Time elapsed - 0 seconds
SQLAllocEnv( )
<--- SQL_ERROR Time elapsed - +1.100000E-005 seconds
SQLAllocEnv( phEnv=&00007ffd94147934 )
---> Time elapsed - +3.130975E+001 seconds
SQLAllocEnv( phEnv=0:1 )
<--- SQL_SUCCESS Time elapsed - +1.780000E-004 seconds
SQLFreeEnv( hEnv=0:1 )
---> Time elapsed - +2.500000E-005 seconds
SQLFreeEnv( )
<--- SQL_SUCCESS Time elapsed - +2.100000E-005 seconds
================END OF FILE==================================
